From ee271f464ce40d94272d3464151a06a31aa6dc39 Mon Sep 17 00:00:00 2001 From: JessamyT Date: Tue, 7 Jan 2025 16:06:56 -0800 Subject: [PATCH] batch 4 --- docs/dev/reference/apis/services/vision.md | 2 +- docs/dev/reference/try-viam/reserve-a-rover.md | 8 ++++---- .../rover-resources/rover-tutorial-fragments.md | 10 +++++----- docs/dev/reference/try-viam/try-viam-tutorial.md | 6 +++--- docs/manage/_index.md | 1 + .../reference/services/slam/cloudslam/_index.md | 2 +- .../reference/services/vision/detector_3d_segmenter.md | 2 +- docs/operate/reference/services/vision/mlmodel.md | 2 +- .../reference/services/vision/obstacles_depth.md | 2 +- .../reference/services/vision/obstacles_distance.md | 2 +- .../reference/services/vision/obstacles_pointcloud.md | 2 +- docs/tutorials/configure/pet-photographer.md | 2 +- docs/tutorials/projects/verification-system.md | 2 +- docs/tutorials/services/color-detection-scuttle.md | 2 +- docs/tutorials/services/webcam-line-follower-robot.md | 4 ++-- static/include/services/apis/generated/vision.md | 10 +++++----- .../apis/overrides/protos/vision.GetDetections.md | 2 +- .../overrides/protos/vision.GetDetectionsFromCamera.md | 2 +- 18 files changed, 32 insertions(+), 31 deletions(-) diff --git a/docs/dev/reference/apis/services/vision.md b/docs/dev/reference/apis/services/vision.md index 692c7e6609..0d45fcb942 100644 --- a/docs/dev/reference/apis/services/vision.md +++ b/docs/dev/reference/apis/services/vision.md @@ -102,7 +102,7 @@ To get started using Viam's SDKs to connect to and control your machine, go to y When executed, this sample code creates a connection to your machine as a client. -The following examples assume that you have a machine configured with a [camera](/operate/reference/components/camera/) and a vision service [detector](/operate/reference/services/vision/#detections), [classifier](/operate/reference/services/vision/#classifications) or [segmenter](/operate/reference/services/vision/#segmentations). +The following examples assume that you have a machine configured with a [camera](/operate/reference/components/camera/) and a vision service [detector](/dev/reference/apis/services/vision/#detections), [classifier](/operate/reference/services/vision/#classifications) or [segmenter](/operate/reference/services/vision/#segmentations). {{< tabs >}} {{% tab name="Python" %}} diff --git a/docs/dev/reference/try-viam/reserve-a-rover.md b/docs/dev/reference/try-viam/reserve-a-rover.md index fcac6cade8..b37b2f67e3 100644 --- a/docs/dev/reference/try-viam/reserve-a-rover.md +++ b/docs/dev/reference/try-viam/reserve-a-rover.md @@ -18,7 +18,7 @@ date: "2022-01-01" _Try Viam_ is a way to try out the Viam platform without setting up any hardware yourself. You can take over a Viam Rover in our robotics lab to play around! -Watch this tutorial video for a walkthrough of Try Viam, including [how to reserve a Viam Rover](#using-the-reservation-system), [navigate the Viam platform](/fleet/), and [drive the rover](/operate/reference/components/base/wheeled/#test-the-base): +Watch this tutorial video for a walkthrough of Try Viam, including [how to reserve a Viam Rover](#using-the-reservation-system), [navigate the Viam platform](/operate/), and [drive the rover](/operate/reference/components/base/wheeled/#test-the-base): {{}} @@ -67,7 +67,7 @@ You can take over and play around with a Viam Rover in our robotics lab from any 1. Please notify Viam support on [our Community Discord](https://discord.gg/viam). 2. 
Use the **Add Viam Support** button on your machine's Location page to give Viam Support access to your _location_. - Refer to [Managing Locations and sub-locations](/cloud/locations/). + Refer to [Grant access](/manage/manage/access/#grant-access). ### Can I extend my time? @@ -115,7 +115,7 @@ Your machine belongs to the [organization](/cloud/organizations/) you were in wh ### Can I share this Location with a friend to work on the machine together? -Sure, you can [invite other users to your organization](/cloud/locations/) to collaborate on your machine. +Sure, you can [invite other users to your organization](/manage/manage/access/#grant-access) to collaborate on your machine. As members of your organization, those users have full control of your machine. Another collaboration option is to use screen sharing in a Zoom or Webex session. @@ -123,7 +123,7 @@ Another collaboration option is to use screen sharing in a Zoom or Webex session You can only borrow one rover at a time. You cannot join the queue for another reservation while you have an active rental session. -If you would like to, you can [extend your reservation](/appendix/try-viam/reserve-a-rover/#can-i-extend-my-time). +If there is no one waiting after you, you can extend your reservation with the **Extend** button. ### I loved my experience - can I play around more? diff --git a/docs/dev/reference/try-viam/rover-resources/rover-tutorial-fragments.md b/docs/dev/reference/try-viam/rover-resources/rover-tutorial-fragments.md index 475313a19f..18b1a63191 100644 --- a/docs/dev/reference/try-viam/rover-resources/rover-tutorial-fragments.md +++ b/docs/dev/reference/try-viam/rover-resources/rover-tutorial-fragments.md @@ -43,7 +43,7 @@ Click **Save** in the upper right corner of the page to save your new configurat The fragment adds the following components to your machine's JSON configuration: -- A [board component](/components/board/) named `local` representing the Raspberry Pi. +- A [board component](/operate/reference/components/board/) named `local` representing the Raspberry Pi. - Two [motors](/operate/reference/components/motor/gpio/) (`right` and `left`) - The configured pin numbers correspond to where the motor drivers are connected to the board. - Two [encoders](/operate/reference/components/encoder/single/), one for each motor @@ -74,7 +74,7 @@ Click **Save** in the upper right corner of the page to save your new configurat The fragment adds the following components to your machine's JSON configuration: -- A [board component](/components/board/) named `local` representing the Raspberry Pi. +- A [board component](/operate/reference/components/board/) named `local` representing the Raspberry Pi. - Two [motors](/operate/reference/components/motor/gpio/) (`right` and `left`) - The configured pin numbers correspond to where the motor drivers are connected to the board. - Two [encoders](/operate/reference/components/encoder/single/), one for each motor @@ -105,7 +105,7 @@ Click **Save** in the upper right corner of the page to save your configuration. The fragment adds the following components to your machine's JSON configuration: -- A [board component](/components/board/) named `local` representing the Raspberry Pi +- A [board component](/operate/reference/components/board/) named `local` representing the Raspberry Pi - An I2C bus for connection to the accelerometer. 
- Two [motors](/operate/reference/components/motor/gpio/) (`right` and `left`) - The configured pin numbers correspond to where the motor drivers are connected to the board. @@ -143,7 +143,7 @@ Click **Save** in the upper right corner of the page to save your new configurat The fragment adds the following components to your machine's JSON configuration: -- A [board component](/components/board/) named `local` representing the Jetson. +- A [board component](/operate/reference/components/board/) named `local` representing the Jetson. - Two [motors](/operate/reference/components/motor/gpio/) (`right` and `left`) - The configured pin numbers correspond to where the motor drivers are connected to the board. - Two [encoders](/operate/reference/components/encoder/single/), one for each motor @@ -174,7 +174,7 @@ Click **Save** in the upper right corner of the page to save your new configurat The fragment adds the following components to your machine's JSON configuration: -- A [board component](/components/board/) named `local` representing the Jetson. +- A [board component](/operate/reference/components/board/) named `local` representing the Jetson. - Two [motors](/operate/reference/components/motor/gpio/) (`right` and `left`) - The configured pin numbers correspond to where the motor drivers are connected to the board. - Two [encoders](/operate/reference/components/encoder/single/), one for each motor diff --git a/docs/dev/reference/try-viam/try-viam-tutorial.md b/docs/dev/reference/try-viam/try-viam-tutorial.md index 7b6176f856..1fb71b49b9 100644 --- a/docs/dev/reference/try-viam/try-viam-tutorial.md +++ b/docs/dev/reference/try-viam/try-viam-tutorial.md @@ -22,7 +22,7 @@ You can take over a Viam Rover in our robotics lab to play around! The rental rover is made up of a chassis with a Raspberry Pi 4B single-board computer, two motors, encoders, and a camera. The Try Viam area also has an overhead camera to provide a view of the rental rover, allowing you to view its movements in real time. -Watch this tutorial video for a walkthrough of Try Viam, including [how to reserve a Viam Rover](/appendix/try-viam/reserve-a-rover/#using-the-reservation-system), [navigate the Viam platform](/fleet/), and [drive the rover](#control-tab): +Watch this tutorial video for a walkthrough of Try Viam, including [how to reserve a Viam Rover](/appendix/try-viam/reserve-a-rover/#using-the-reservation-system), [navigate the Viam platform](/operate/), and [drive the rover](#control-tab): {{}} @@ -127,7 +127,7 @@ You can also see their current positions (based on encoder readings) in real tim #### Board control -The [board component](/components/board/) is the signal wire hub of a machine which allows you to control the states of individual GPIO pins on the board. +The [board component](/operate/reference/components/board/) is the signal wire hub of a machine which allows you to control the states of individual GPIO pins on the board. For the Viam Rover, the board component is named `local` and controls a Raspberry Pi on the Viam Rover. With it, you can control the states of individual GPIO pins on the board. @@ -147,7 +147,7 @@ There you can view the configuration for each component in the machine: attribut ### Board configuration -The [board component](/components/board/) is the signal wire hub of a machine. +The [board component](/operate/reference/components/board/) is the signal wire hub of a machine. 
 Configuring a board component allows you to control the states of individual GPIO pins to command the electrical signals sent through and received by the board.
 
 For the Viam Rover, the board component is a Raspberry Pi with **Name** `local`, **Type** `board`, and **Model** `viam:raspberry-pi:rpi`.
diff --git a/docs/manage/_index.md b/docs/manage/_index.md
index 47bdd69712..d92aa91910 100644
--- a/docs/manage/_index.md
+++ b/docs/manage/_index.md
@@ -10,6 +10,7 @@ overview: true
 description: "Remotely deploy and manage software on any fleet of devices. You can monitor all connected devices and troubleshoot any issues - from anywhere."
 aliases:
 - /cloud/
+ - /fleet/
 ---
 
 Viam's fleet management tooling allows you to remotely deploy and manage software on any fleet of devices. You can monitor all connected devices and troubleshoot any issues - from anywhere.
diff --git a/docs/operate/reference/services/slam/cloudslam/_index.md b/docs/operate/reference/services/slam/cloudslam/_index.md
index 0b4df6779d..5247f0acb6 100644
--- a/docs/operate/reference/services/slam/cloudslam/_index.md
+++ b/docs/operate/reference/services/slam/cloudslam/_index.md
@@ -431,7 +431,7 @@ The following attributes are available for `viam:cloudslam-wrapper:cloudslam`
 | `api_key` | string | **Required** | An [API key](/manage/manage/access/) with location owner or higher permission. |
 | `api_key_id` | string | **Required** | The associated API key ID with the API key. |
 | `organization_id` | string | **Required** | The organization ID of your [organization](/dev/reference/glossary/#organization). |
-| `location_id` | string | **Required** | The location ID of your [location](/dev/reference/glossary/location/). |
+| `location_id` | string | **Required** | The location ID of your [location](/dev/reference/glossary/#term-location). |
 | `machine_id` | string | **Required** | The machine ID of your [machine](/dev/reference/apis/fleet/#find-machine-id). |
 | `machine_part_id` | string | Optional | The machine part ID of your [machine part](/dev/reference/apis/fleet/#find-machine-id). Used for local package creation and updating mode. |
 | `viam_version` | string | Optional | The version of viam-server to use with CloudSLAM. Defaults to `stable`. |
diff --git a/docs/operate/reference/services/vision/detector_3d_segmenter.md b/docs/operate/reference/services/vision/detector_3d_segmenter.md
index b72d9fb514..64a7d10e00 100644
--- a/docs/operate/reference/services/vision/detector_3d_segmenter.md
+++ b/docs/operate/reference/services/vision/detector_3d_segmenter.md
@@ -14,7 +14,7 @@ aliases:
 # SMEs: Bijan, Khari
 ---
 
-_Changed in [RDK v0.2.36 and API v0.1.118](/appendix/changelog/#vision-service)_
+_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_
 
 The `detector_3d_segmenter` vision service model takes 2D bounding boxes from an [object detector](../#detections), and, using the intrinsic parameters of the chosen camera, projects the pixels in the bounding box to points in 3D space.
 If the chosen camera is not equipped to do projections from 2D to 3D, then this vision model will fail.
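The following minimal Python sketch illustrates how a client might consume this segmenter's output. It is not part of the patched docs above: `my_segmenter` (a vision service configured with this model), `my_camera`, and the connection credentials are all placeholder assumptions.

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Connect to the machine; replace the placeholders with your machine's
    # address and API key credentials.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>"
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # "my_segmenter" is a placeholder name for a vision service configured
    # with the detector_3d_segmenter model.
    segmenter = VisionClient.from_robot(machine, "my_segmenter")

    # Project the detector's 2D bounding boxes into 3D point clouds using
    # the intrinsic parameters of "my_camera" (placeholder camera name).
    objects = await segmenter.get_object_point_clouds("my_camera")
    print(f"Found {len(objects)} segmented objects")

    await machine.close()


if __name__ == "__main__":
    asyncio.run(main())
```

If `my_camera` is not equipped to project from 2D to 3D, this call fails, as noted above.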
diff --git a/docs/operate/reference/services/vision/mlmodel.md b/docs/operate/reference/services/vision/mlmodel.md index 469fabcaed..ef7e9fcd0b 100644 --- a/docs/operate/reference/services/vision/mlmodel.md +++ b/docs/operate/reference/services/vision/mlmodel.md @@ -16,7 +16,7 @@ aliases: # SMEs: Bijan, Khari --- -_Changed in [RDK v0.2.36 and API v0.1.118](/appendix/changelog/#vision-service)_ +_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_ The `mlmodel` {{< glossary_tooltip term_id="model" text="model" >}} of the Viam vision service supports machine learning detectors and classifiers that draw bounding boxes or return class labels based on a deployed TensorFlow Lite, TensorFlow, PyTorch, or ONNX ML model. diff --git a/docs/operate/reference/services/vision/obstacles_depth.md b/docs/operate/reference/services/vision/obstacles_depth.md index 73d60c610e..98c5e77918 100644 --- a/docs/operate/reference/services/vision/obstacles_depth.md +++ b/docs/operate/reference/services/vision/obstacles_depth.md @@ -13,7 +13,7 @@ aliases: # SMEs: Bijan, Khari --- -_Changed in [RDK v0.2.36 and API v0.1.118](/appendix/changelog/#vision-service)_ +_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_ The `obstacles_depth` vision service model is for depth cameras, and is best for motion planning with transient obstacles. Use this segmenter to identify well separated objects above a flat plane. diff --git a/docs/operate/reference/services/vision/obstacles_distance.md b/docs/operate/reference/services/vision/obstacles_distance.md index c1532b8a77..3aea94a133 100644 --- a/docs/operate/reference/services/vision/obstacles_distance.md +++ b/docs/operate/reference/services/vision/obstacles_distance.md @@ -13,7 +13,7 @@ aliases: # SMEs: Bijan, Khari --- -_Changed in [RDK v0.2.36 and API v0.1.118](/appendix/changelog/#vision-service)_ +_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_ `obstacles_distance` is a segmenter that takes point clouds from a camera input and returns the average single closest point to the camera as a perceived obstacle. It is best for transient obstacle avoidance. diff --git a/docs/operate/reference/services/vision/obstacles_pointcloud.md b/docs/operate/reference/services/vision/obstacles_pointcloud.md index 9823a8d07f..c8314402b8 100644 --- a/docs/operate/reference/services/vision/obstacles_pointcloud.md +++ b/docs/operate/reference/services/vision/obstacles_pointcloud.md @@ -13,7 +13,7 @@ aliases: # SMEs: Bijan, Khari --- -_Changed in [RDK v0.2.36 and API v0.1.118](/appendix/changelog/#vision-service)_ +_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_ `obstacles_pointcloud` is a segmenter that identifies well separated objects above a flat plane. It first identifies the biggest plane in the scene, eliminates that plane, and clusters the remaining points into objects. diff --git a/docs/tutorials/configure/pet-photographer.md b/docs/tutorials/configure/pet-photographer.md index a9e8720814..a8d619981c 100644 --- a/docs/tutorials/configure/pet-photographer.md +++ b/docs/tutorials/configure/pet-photographer.md @@ -828,7 +828,7 @@ Whether you've downloaded the `colorfilter` module, or written your own color fi Next, add the following services to your smart machine to support the color filter module: - The [data management service](/data-ai/capture-data/capture-sync/) enables your smart machine to capture data and sync it to the cloud. 
-- The [vision service](/operate/reference/services/vision/#detections) enables your smart machine to perform color detection on objects in a camera stream. +- The [vision service](/dev/reference/apis/services/vision/#detections) enables your smart machine to perform color detection on objects in a camera stream. ### Add the data management service diff --git a/docs/tutorials/projects/verification-system.md b/docs/tutorials/projects/verification-system.md index 8c280b3de6..56ea2cd701 100644 --- a/docs/tutorials/projects/verification-system.md +++ b/docs/tutorials/projects/verification-system.md @@ -340,7 +340,7 @@ For example: - Write a program using one of the [Viam SDK](/dev/reference/sdks/) to poll the `facial-verification` module for its current state, and take action when a particular state is reached. For example, you could use [`GetClassificationsFromCamera()`](/dev/reference/apis/services/vision/#getclassificationsfromcamera) to capture when a transition into the `ALARM` state occurs, and then send you an email with the captured image of the trespasser! -- Try changing the type of [detectors](/operate/reference/services/vision/#detections), using different detectors for the `TRIGGER_1` and `TRIGGER_2` states. +- Try changing the type of [detectors](/dev/reference/apis/services/vision/#detections), using different detectors for the `TRIGGER_1` and `TRIGGER_2` states. - Add the [filtered camera module](/data-ai/capture-data/filter-before-sync/) to your machine, and use it as the source camera in your verification system in order to save images to the Viam Cloud only when the system enters into specific states. This way, you could limit the images captured and synced to only those you are interested in reviewing later, for example. - If you don't want the `ALARM` capabilities, and would like to just use it as a notification system when a detector gets triggered, set `disable_alarm: true` in the config, which prevents `TRIGGER_2` from entering into the `COUNTDOWN` state, meaning the system will only cycle between the states of `TRIGGER_1` and `TRIGGER_2`. diff --git a/docs/tutorials/services/color-detection-scuttle.md b/docs/tutorials/services/color-detection-scuttle.md index cd2ed08990..b4a7421966 100644 --- a/docs/tutorials/services/color-detection-scuttle.md +++ b/docs/tutorials/services/color-detection-scuttle.md @@ -68,7 +68,7 @@ Turn on the power to the rover. This tutorial uses the color `#a13b4c` or `rgb(161,59,76)` (a reddish color). -To create a [color detector vision service](/operate/reference/services/vision/#detections): +To create a [color detector vision service](/dev/reference/apis/services/vision/#detections): {{< tabs >}} {{% tab name="Builder" %}} diff --git a/docs/tutorials/services/webcam-line-follower-robot.md b/docs/tutorials/services/webcam-line-follower-robot.md index d06cab2947..5ce9cb2fa6 100644 --- a/docs/tutorials/services/webcam-line-follower-robot.md +++ b/docs/tutorials/services/webcam-line-follower-robot.md @@ -33,7 +33,7 @@ toc_hide: true Many line-following robots rely on a dedicated array of infrared sensors to follow a dark line on a light background or a light line on a dark background. This tutorial uses a standard webcam in place of these sensors, and allows a robot to follow a line of any color that is at least somewhat different from the background. -**Goal**: To make a wheeled robot follow a colored line along the floor using a webcam and the Viam vision service color detector. 
+**Goal**: To make a wheeled robot follow a colored line along the floor using a webcam and the Viam vision service color detector. **What you will learn**: @@ -225,7 +225,7 @@ Next, navigate to the **CONFIGURE** tab of your machine's page in the [Viam app] 1. **Add a vision service.** -Next, add a vision service [detector](/operate/reference/services/vision/#detections): +Next, add a vision service [detector](/dev/reference/apis/services/vision/#detections): Click the **+** (Create) icon next to your machine part in the left-hand menu and select **Service**. Select type `vision` and model `color detector`. diff --git a/static/include/services/apis/generated/vision.md b/static/include/services/apis/generated/vision.md index 70e3bff1fb..6c916291b8 100644 --- a/static/include/services/apis/generated/vision.md +++ b/static/include/services/apis/generated/vision.md @@ -1,6 +1,6 @@ ### GetDetectionsFromCamera -Get a list of detections from the next image from a specified camera using a configured [detector](/operate/reference/services/vision/#detections). +Get a list of detections from the next image from a specified camera using a configured [detector](/dev/reference/apis/services/vision/#detections). {{< tabs >}} {{% tab name="Python" %}} @@ -91,7 +91,7 @@ For more information, see the [Flutter SDK Docs](https://flutter.viam.dev/viam_s ### GetDetections -Get a list of detections from a given image using a configured [detector](/operate/reference/services/vision/#detections). +Get a list of detections from a given image using a configured [detector](/dev/reference/apis/services/vision/#detections). {{< tabs >}} {{% tab name="Python" %}} @@ -440,7 +440,7 @@ For more information, see the [Python SDK Docs](https://python.viam.dev/autoapi/ **Returns:** -- [([]*viz.Object)](https://pkg.go.dev/go.viam.com/rdk/vision#Object): A list of point clouds and associated metadata like the center coordinates of each point cloud. +- [([]\*viz.Object)](https://pkg.go.dev/go.viam.com/rdk/vision#Object): A list of point clouds and associated metadata like the center coordinates of each point cloud. - [(error)](https://pkg.go.dev/builtin#error): An error, if one occurred. **Example:** @@ -507,7 +507,7 @@ Used for visualization. **Returns:** -- ([viam.services.vision.vision.CaptureAllResult](https://python.viam.dev/autoapi/viam/services/vision/vision/index.html#viam.services.vision.vision.CaptureAllResult)): A class that stores all potential returns from the vision service. It can return the image from the camera along with its associated detections, classifications, and objects, as well as any extra info the model may provide. +- ([viam.services.vision.vision.CaptureAllResult](https://python.viam.dev/autoapi/viam/services/vision/vision/index.html#viam.services.vision.vision.CaptureAllResult)): A class that stores all potential returns from the vision service. It can return the image from the camera along with its associated detections, classifications, and objects, as well as any extra info the model may provide. **Example:** @@ -745,7 +745,7 @@ For more information, see the [Python SDK Docs](https://python.viam.dev/autoapi/ **Returns:** -- [(*Properties)](https://pkg.go.dev/go.viam.com/rdk/services/vision#Properties) +- [(\*Properties)](https://pkg.go.dev/go.viam.com/rdk/services/vision#Properties) - [(error)](https://pkg.go.dev/builtin#error): An error, if one occurred. For more information, see the [Go SDK Docs](https://pkg.go.dev/go.viam.com/rdk/services/vision#Service). 
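As a concrete illustration of the `GetProperties` call documented above, here is a minimal Python sketch that checks a vision service's capabilities before requesting results. It assumes an existing `machine` connection as in the earlier connection examples; `my_detector` and `my_camera` are placeholder names.

```python
from viam.services.vision import VisionClient

# "my_detector" is a placeholder name for a configured vision service.
my_detector = VisionClient.from_robot(machine, "my_detector")

# GetProperties reports which kinds of results this vision service can
# return, so a client can branch before calling a method.
properties = await my_detector.get_properties()

if properties.detections_supported:
    # "my_camera" is a placeholder camera name.
    detections = await my_detector.get_detections_from_camera("my_camera")
elif properties.classifications_supported:
    classifications = await my_detector.get_classifications_from_camera(
        "my_camera", 3
    )
```

Branching on the reported properties avoids calling a method the configured model does not support.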
diff --git a/static/include/services/apis/overrides/protos/vision.GetDetections.md b/static/include/services/apis/overrides/protos/vision.GetDetections.md index f230c686ee..63d3e29b42 100644 --- a/static/include/services/apis/overrides/protos/vision.GetDetections.md +++ b/static/include/services/apis/overrides/protos/vision.GetDetections.md @@ -1 +1 @@ -Get a list of detections from a given image using a configured [detector](/operate/reference/services/vision/#detections). +Get a list of detections from a given image using a configured [detector](/dev/reference/apis/services/vision/#detections). diff --git a/static/include/services/apis/overrides/protos/vision.GetDetectionsFromCamera.md b/static/include/services/apis/overrides/protos/vision.GetDetectionsFromCamera.md index a17aff9b8e..4fcbb7aa05 100644 --- a/static/include/services/apis/overrides/protos/vision.GetDetectionsFromCamera.md +++ b/static/include/services/apis/overrides/protos/vision.GetDetectionsFromCamera.md @@ -1 +1 @@ -Get a list of detections from the next image from a specified camera using a configured [detector](/operate/reference/services/vision/#detections). +Get a list of detections from the next image from a specified camera using a configured [detector](/dev/reference/apis/services/vision/#detections).
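To round out the `GetDetectionsFromCamera` description above, a short Python sketch of consuming its results follows. It assumes an existing `machine` connection; `my_detector`, `my_camera`, and the `0.8` threshold are illustrative placeholders.

```python
from viam.services.vision import VisionClient

# "my_detector" is a placeholder name for a configured vision service
# detector.
my_detector = VisionClient.from_robot(machine, "my_detector")

# Each detection carries a class label, a confidence score, and the pixel
# coordinates of its bounding box.
detections = await my_detector.get_detections_from_camera("my_camera")

for d in detections:
    if d.confidence > 0.8:  # keep only high-confidence detections
        print(f"{d.class_name}: ({d.x_min}, {d.y_min}) to ({d.x_max}, {d.y_max})")
```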