diff --git a/docs/docs/answer-modes/1-binary-detectors.md b/docs/docs/answer-modes/1-binary-detectors.md
new file mode 100644
index 00000000..2d7c7f38
--- /dev/null
+++ b/docs/docs/answer-modes/1-binary-detectors.md
@@ -0,0 +1,55 @@
+# Binary Classification Detectors
+
+Binary classification detectors answer yes/no questions about images. Most of Groundlight's documentation examples use binary classification detectors, as they are the simplest type of detector.
+
+To create a binary classification detector, provide a query that asks a yes/no question, such as "Is there an eagle visible?" or "Is the door fully closed?".
+
+```python notest
+from groundlight import Groundlight
+gl = Groundlight()
+
+# highlight-start
+detector = gl.create_detector(
+    name="eagle-detector",
+    query="Is there an eagle visible?",
+    confidence_threshold=0.9,
+)
+# highlight-end
+```
+
+## Submit an Image Query to a Binary Classification Detector
+
+Now that you have created a binary classification detector, you can submit an image query to it.
+
+```python notest
+from groundlight import Groundlight
+gl = Groundlight()
+
+detector = gl.get_detector_by_name("eagle-detector")
+
+# highlight-start
+# Check if an eagle is visible in an image
+image_query = gl.submit_image_query(detector, "path/to/image.jpg")
+# highlight-end
+
+print(f"Result: {image_query.result.label}")
+print(f"Confidence: {image_query.result.confidence}")
+```
+
+Binary classification detectors return a `label` attribute in the result object, which will be either `"YES"` or `"NO"`. If a query is ambiguous, it is also possible for the detector to return an `"UNCLEAR"` label.
+
+The `confidence` attribute represents the confidence level in the predicted label, which (for a binary classification detector) is a value between 0.5 and 1. A higher confidence score indicates that the model is more certain about its prediction.
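+
+For example, here's a minimal sketch of acting on the result, assuming the `detector` and `image_query` from the examples above:
+
+```python notest
+# A minimal sketch of acting on a binary result.
+# Assumes `detector` and `image_query` from the examples above.
+label = image_query.result.label
+confidence = image_query.result.confidence
+
+if label == "YES" and confidence >= detector.confidence_threshold:
+    print("Eagle spotted!")
+elif label == "UNCLEAR":
+    print("The image was too ambiguous to answer the question.")
+else:
+    print(f"No confident eagle sighting (label={label}, confidence={confidence:.2f})")
+```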
+
+## Add a Label to a Binary Classification Detector
+
+To provide ground truth labels for binary classification detectors, specify the label as `"YES"`, `"NO"`, or `"UNCLEAR"`. This helps improve the accuracy of your detector over time.
+
+```python notest
+from groundlight import Groundlight
+gl = Groundlight()
+
+# highlight-start
+# Add a binary label to the image query from the previous example
+gl.add_label(image_query, label="YES")
+# highlight-end
+```
diff --git a/docs/docs/answer-modes/2-multi-choice-detectors.md b/docs/docs/answer-modes/2-multi-choice-detectors.md
new file mode 100644
index 00000000..2318302b
--- /dev/null
+++ b/docs/docs/answer-modes/2-multi-choice-detectors.md
@@ -0,0 +1,60 @@
+# Multiple Choice (Choose One) Detectors
+
+If you want to classify images into multiple categories, you can create a multi-class detector.
+
+```python notest
+from groundlight import ExperimentalApi
+gl_exp = ExperimentalApi()
+
+# highlight-start
+class_names = ["Golden Retriever", "Labrador Retriever", "German Shepherd", "Other"]
+detector = gl_exp.create_multiclass_detector(
+    name="dog-breed-detector",
+    query="What kind of dog is this?",
+    class_names=class_names,
+)
+# highlight-end
+```
+
+:::tip
+We recommend adding an "Other" class to your multi-class detector to handle cases where the image does not belong to any of the pre-defined classes.
+:::
+
+:::note
+Multi-Class Detectors are available on [Business and Enterprise plans](https://www.groundlight.ai/pricing).
+:::
+
+## Submit an Image Query to a Multi-Class Detector
+
+Now that you have created a multi-class detector, you can submit an image query to it.
+
+```python notest
+from groundlight import ExperimentalApi
+gl_exp = ExperimentalApi()
+
+detector = gl_exp.get_detector_by_name("dog-breed-detector")
+
+# highlight-start
+# Classify the breed of a dog in an image
+image_query = gl_exp.submit_image_query(detector, "path/to/image.jpg")
+# highlight-end
+
+print(f"Result: {image_query.result.label}")
+print(f"Confidence: {image_query.result.confidence}")
+```
+
+Multi-class detectors return a `label` attribute in the result object, which contains the predicted class label. The `label` attribute will be one of the class names provided when creating the detector. The `confidence` attribute represents the confidence level in the predicted class, which is a value between `1/len(class_names)` and 1.
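+
+As a minimal sketch, you might branch on the predicted class like this (assuming the `image_query` from the example above):
+
+```python notest
+# A minimal sketch of acting on a multi-class result.
+# Assumes `image_query` from the example above.
+predicted_breed = image_query.result.label
+confidence = image_query.result.confidence
+
+if predicted_breed == "Other":
+    print("This dog doesn't match any of the expected breeds.")
+else:
+    print(f"Predicted breed: {predicted_breed} (confidence {confidence:.2f})")
+```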
+
+## Add a Label to a Multi-Class Detector
+
+To provide ground truth labels for multi-class detectors, you can specify the label of the correct class.
+
+```python notest
+from groundlight import ExperimentalApi
+gl_exp = ExperimentalApi()
+
+# highlight-start
+# Add a multi-class label to the image query from the previous example
+gl_exp.add_label(image_query, label="German Shepherd")
+# highlight-end
+```
diff --git a/docs/docs/answer-modes/3-counting-detectors.md b/docs/docs/answer-modes/3-counting-detectors.md
new file mode 100644
index 00000000..e245c3b7
--- /dev/null
+++ b/docs/docs/answer-modes/3-counting-detectors.md
@@ -0,0 +1,115 @@
+# Count Detectors
+
+Counting detectors are used to count the number of objects in an image. Groundlight's counting detectors also return bounding boxes around the objects they count.
+
+```python notest
+from groundlight import ExperimentalApi
+gl_exp = ExperimentalApi()
+
+# highlight-start
+detector = gl_exp.create_counting_detector(
+    name="car-counter",
+    query="How many cars are in the parking lot?",
+    class_name="car",
+    max_count=20,
+    confidence_threshold=0.2,
+)
+# highlight-end
+```
+
+Counting detectors should be provided with a query that asks "how many" objects are in the image.
+
+A maximum count (of 25 or fewer) must be specified when creating a counting detector. This is the maximum number of objects that the detector will count in an image. Groundlight's ML models are optimized for counting up to 20 objects, but you can increase the maximum count to 25 if needed. If you have an application that requires counting more than 25 objects, please [contact us](mailto:support@groundlight.ai).
+
+The `confidence_threshold` parameter sets the minimum confidence level required for the ML model's predictions. If the model's confidence falls below this threshold, the query will be sent for human review. Counting detectors can have a `confidence_threshold` set to any value between `1/(max_count + 2)` and 1.
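+
+As a quick illustration of that lower bound:
+
+```python notest
+# The lowest confidence_threshold a counting detector will accept,
+# per the 1/(max_count + 2) rule described above.
+max_count = 20
+min_confidence_threshold = 1 / (max_count + 2)  # ~0.045
+print(f"confidence_threshold must be between {min_confidence_threshold:.3f} and 1")
+```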
+
+:::note
+Counting Detectors are available on [Business and Enterprise plans](https://www.groundlight.ai/pricing).
+:::
+
+## Submit an Image Query to a Counting Detector
+
+Now that you have created a counting detector, you can submit an image query to it.
+
+```python notest
+from groundlight import ExperimentalApi
+gl_exp = ExperimentalApi()
+
+detector = gl_exp.get_detector_by_name("car-counter")
+
+# highlight-start
+# Count the number of cars in an image
+image_query = gl_exp.submit_image_query(detector, "path/to/image.jpg")
+# highlight-end
+
+print(f"Counted {image_query.result.count} cars")
+print(f"Confidence: {image_query.result.confidence}")
+print(f"Bounding Boxes: {image_query.rois}")
+```
+
+In the case of counting detectors, the `count` attribute of the result object will contain the number of objects counted in the image. The `confidence` attribute represents the confidence level in the specific count. Note that this implies that confidences may be lower (on average) for counting detectors with a higher maximum count.
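+
+As a minimal sketch, you might only act on counts that meet the detector's confidence threshold (assuming the `detector` and `image_query` from the example above):
+
+```python notest
+# A minimal sketch: trust the count only when the model's confidence
+# meets the detector's confidence threshold.
+# Assumes `detector` and `image_query` from the example above.
+result = image_query.result
+
+if result.confidence >= detector.confidence_threshold:
+    print(f"Confident count: {result.count} cars")
+else:
+    print(f"Tentative count of {result.count} cars "
+          f"(confidence {result.confidence:.2f} is below the threshold)")
+```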
+
+<!-- TODO: display an example image with bounding boxes -->
+
+:::tip Drawing Bounding Boxes
+You can visualize the bounding boxes returned by counting detectors using a library like OpenCV. Here's an example of how to draw bounding boxes on an image:
+
+```python notest
+import cv2
+
+def draw_bounding_boxes(image_path, rois):
+    """
+    Draw bounding boxes on an image based on ROIs returned from a counting detector.
+
+    Args:
+        image_path: Path to the image file
+        rois: List of ROI objects returned from image_query.rois
+    """
+    image = cv2.imread(image_path)
+    if image is None:
+        raise ValueError(f"Could not read image from {image_path}")
+    height, width = image.shape[:2]
+
+    # Draw bounding boxes
+    for roi in rois:
+        x1 = int(roi.geometry.left * width)
+        y1 = int(roi.geometry.top * height)
+        x2 = int(roi.geometry.right * width)
+        y2 = int(roi.geometry.bottom * height)
+        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
+        label_text = f"{roi.label}: {roi.score:.2f}"
+        cv2.putText(image, label_text, (x1, y1-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
+
+    # Display the image
+    cv2.imshow("Image with Bounding Boxes", image)
+    cv2.waitKey(0)
+    cv2.destroyAllWindows()
+
+# Example usage:
+# image_query = gl.submit_image_query(detector, "path/to/image.jpg")
+# draw_bounding_boxes("path/to/image.jpg", image_query.rois)
+```
+:::
+
+## Add a Label to a Counting Detector
+
+The Groundlight API allows you to add labels to image queries, including Region of Interest (ROI) data.
+When adding a label to a counting detector, if you include ROIs, the number of ROIs should match
+the count you are labeling.
+
+```python notest
+from groundlight import ExperimentalApi
+gl_exp = ExperimentalApi()
+
+# highlight-start
+# Add a count label with corresponding ROIs to the image query from the previous example.
+#   ROIs are specified as (left, top) and (right, bottom) coordinates, with values
+#   between 0 and 1 representing the fraction of the image width and height.
+roi1 = gl_exp.create_roi("car", (0.1, 0.2), (0.2, 0.3))
+roi2 = gl_exp.create_roi("car", (0.4, 0.4), (0.5, 0.6))
+roi3 = gl_exp.create_roi("car", (0.6, 0.5), (0.8, 0.9))
+rois = [roi1, roi2, roi3]
+gl_exp.add_label(image_query, label=len(rois), rois=rois)
+# highlight-end
+```
\ No newline at end of file
diff --git a/docs/docs/answer-modes/_category_.json b/docs/docs/answer-modes/_category_.json
new file mode 100644
index 00000000..d1e461da
--- /dev/null
+++ b/docs/docs/answer-modes/_category_.json
@@ -0,0 +1,4 @@
+{
+    "label": "Answer Modes",
+    "position": 3
+}
diff --git a/docs/docs/answer-modes/answer-modes.md b/docs/docs/answer-modes/answer-modes.md
new file mode 100644
index 00000000..f817bda1
--- /dev/null
+++ b/docs/docs/answer-modes/answer-modes.md
@@ -0,0 +1,10 @@
+# Detector Answer Modes
+
+Groundlight offers several detector modalities to suit different computer vision tasks. While previous examples have focused on binary classification, this guide also walks you through multi-class and counting detectors, and shows how each answer mode can be used via the Groundlight SDK.
+
+- **[Binary Detectors](1-binary-detectors.md)**: Learn how to create detectors that answer yes/no questions about images.
+- **[Multiple Choice (Choose One) Detectors](2-multi-choice-detectors.md)**: Create detectors that select one answer from a predefined list of options.
+- **[Count Detectors](3-counting-detectors.md)**: Use detectors to count the number of objects present in an image and return bounding boxes around the counted objects.
+<!-- 4. [Text Recognition Detectors](4-text-recognition-detectors.md) -->
+
+<!-- TODO: object detection modes -->
\ No newline at end of file
diff --git a/docs/docs/guide/3-working-with-detectors.md b/docs/docs/guide/3-working-with-detectors.md
index 30628959..1ad2cddd 100644
--- a/docs/docs/guide/3-working-with-detectors.md
+++ b/docs/docs/guide/3-working-with-detectors.md
@@ -1,6 +1,9 @@
 # Working with Detectors
 
-### Explicitly create a new detector
+This guide will walk you through creating, retrieving, and managing detectors in Groundlight. Groundlight supports several detector modalities to suit different computer vision tasks; for more information on these modes, see the [Detector Answer Modes](../answer-modes/answer-modes.md) guide.
+
+
+## Explicitly create a new detector
 
 Typically you'll use the `get_or_create_detector(name: str, query: str)` method to find an existing detector you've already created with the same name, or create a new one if it doesn't exist. But if you'd like to force creating a new detector, you can also use the `create_detector(name: str, query: str)` method.
 
@@ -16,7 +19,7 @@ detector = gl.create_detector(name="your_detector_name", query="is there a hummi
 # highlight-end
 ```
 
-### Retrieve an existing detector
+## Retrieve an existing detector
 To work with a detector that you've previously created, you need to retrieve it using its unique identifier. This is typical in Groundlight applications where you want to continue to use a detector you've already created.
 
 <!-- Don't test because the ID can't be faked -->
@@ -43,7 +46,7 @@ detector = gl.get_detector_by_name(name="your_detector_name")
 # highlight-end
 ```
 
-### List your detectors
+## List your detectors
 To manage and interact with your detectors, you might need to list them. Groundlight provides a straightforward way to retrieve a list of detectors you've created. By default, the list is paginated to show 10 results per page, but you can customize this to suit your needs.
 
 ```python
@@ -59,38 +62,3 @@ detectors = gl.list_detectors()
 detectors = gl.list_detectors(page=1, page_size=5)
 # highlight-end
 ```
-
-### [BETA] Create a Counting Detector
-So far, all of the detectors we've created have been binary classification detectors. But what if you want to count the number of objects in an image? You can create a counting detector to do just that. Counting detectors also return bounding boxes around the objects they count.
-
-:::note
-
-Counting Detectors are available on [Pro, Business, and Enterprise plans](https://www.groundlight.ai/pricing).
-
-:::
-
-```python notest
-from groundlight import ExperimentalApi
-
-gl_experimental = ExperimentalApi()
-
-# highlight-start
-detector = gl_experimental.create_counting_detector(name="your_detector_name", query="How many cars are in the parking lot?", max_count=20)
-# highlight-end
-```
-
-### [BETA] Create a Multi-Class Detector
-If you want to classify images into multiple categories, you can create a multi-class detector.
-
-```python notest
-from groundlight import ExperimentalApi
-
-gl_experimental = ExperimentalApi()
-
-# highlight-start
-class_names = ["Golden Retriever", "Labrador Retriever", "German Shepherd"]
-detector = gl_experimental.create_multiclass_detector(
-    name, query="What kind of dog is this?", class_names=class_names
-)
-# highlight-end
-```
\ No newline at end of file
diff --git a/docs/docs/guide/_category_.json b/docs/docs/guide/_category_.json
index 2e266ee8..f4658fde 100644
--- a/docs/docs/guide/_category_.json
+++ b/docs/docs/guide/_category_.json
@@ -1,4 +1,4 @@
 {
   "label": "Guide",
-  "position": 3
+  "position": 4
 }
diff --git a/docs/docs/guide/guide.md b/docs/docs/guide/guide.md
index bbb77ea6..bb8c9b99 100644
--- a/docs/docs/guide/guide.md
+++ b/docs/docs/guide/guide.md
@@ -11,6 +11,6 @@ On the following pages, we'll guide you through the process of building applicat
 - **[Asynchronous queries](7-async-queries.md)**: Groundlight makes it easy to submit asynchronous queries. Learn how to submit queries asynchronously and retrieve the results later.
 - **[Using Groundlight on the edge](8-edge.md)**: Discover how to deploy Groundlight in edge computing environments for improved performance and reduced latency.
 - **[Alerts](9-alerts.md)**: Learn how to set up alerts to notify you via text (SMS) or email when specific conditions are met in your visual applications.
-- **[Industrial applications](../sample-applications/4-industrial.md)**: Learn how to apply modern natural-language-based computer vision to your industrial and manufacturing applications.
+
 
 By exploring these resources and sample applications, you'll be well on your way to building powerful visual applications using Groundlight's computer vision and natural language capabilities.
diff --git a/docs/docs/other-ways-to-use/_category_.json b/docs/docs/other-ways-to-use/_category_.json
index dc7ff00c..16f37a0d 100644
--- a/docs/docs/other-ways-to-use/_category_.json
+++ b/docs/docs/other-ways-to-use/_category_.json
@@ -1,5 +1,5 @@
 {
     "label": "Alternative Deployment Options",
-    "position": 5,
+    "position": 6,
     "collapsed": false
 }
\ No newline at end of file
diff --git a/docs/docs/sample-applications/_category_.json b/docs/docs/sample-applications/_category_.json
index f1985446..138a71ea 100644
--- a/docs/docs/sample-applications/_category_.json
+++ b/docs/docs/sample-applications/_category_.json
@@ -1,4 +1,4 @@
 {
     "label": "Sample Applications",
-    "position": 4
+    "position": 5
   }