Commit 0ffe6c1

Merge pull request #6 from groundlight/wordsmithing-1
A few wording changes.
2 parents f19a92d + 833f512 commit 0ffe6c1

File tree

1 file changed: +11 -3 lines changed

README.md

@@ -1,7 +1,9 @@
 # Model Evaluation Tool
-A simple tool for evaluating the performance of your Groundlight Binary ML model.
+A simple tool for manually evaluating the performance of your Groundlight Binary ML model.

-This script provides a simple way for users to do an independent evaluation of the ML's performance. Note that this is not the recommended way of using our service, as this only evaluates ML performance and not the combined performance of our ML + escalation system. However, the balanced accuracy results from `evaluate.py` should fall within the bounds of Projected ML Accuracy shown on our website, if the train and evaluation dataset that the user provided are well randomized.
+This script provides a simple way for you to do an independent evaluation of your Groundlight model's ML performance. Note that this is not the recommended way of using our service, as this only evaluates ML performance and not the combined performance of our ML + escalation system. However, the balanced accuracy results from `evaluate.py` should fall within the bounds of Projected ML Accuracy shown on our website, if the train and evaluation dataset that the user provided are well randomized.
+
+Note this tool only works for **binary detectors**.

 ## Installation

@@ -72,6 +74,8 @@ Optionally, set the `--delay` argument to prevent going over the throttling limit

 ### Evaluate the Detector

+Before evaluating the ML model, you should wait a few minutes for the model to be fully trained. Small models generally train very quickly, but to be sure your model is fully trained, you should wait 10 or 15 minutes after submitting the training images.
+
 To evaluate the ML model performance for a detector, simply run the script `evaluate.py` with the following arguments:

 ```bash
@@ -88,4 +92,8 @@ Average Confidence
 Balanced Accuracy
 Precision
 Recall
-```
+```
+
+## Questions?
+
+If you have any questions or feedback about this tool, feel free to reach out to your Groundlight contact, over email at [email protected], in your dedicated Slack channel, or using the chat widget in the bottom-right corner of the dashboard.
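
The diff above names the metrics that `evaluate.py` reports (Average Confidence, Balanced Accuracy, Precision, Recall), but the actual invocation and implementation are not shown in these hunks. As an illustration only, the sketch below computes those metrics for a binary (YES/NO) detector using their standard scikit-learn definitions; the function name `summarize_binary_results` and the label format are hypothetical and not taken from the repository.

```python
# Hypothetical sketch of the metrics named in the diff above; this is NOT the
# actual code from evaluate.py, just the standard definitions for a binary detector.
from sklearn.metrics import balanced_accuracy_score, precision_score, recall_score

def summarize_binary_results(labels, predictions, confidences):
    """Compute summary metrics for a binary (YES/NO) detector evaluation.

    labels       -- ground-truth answers, e.g. ["YES", "NO", ...]
    predictions  -- the model's answers, same format as labels
    confidences  -- the model's confidence per prediction, floats in [0, 1]
    """
    return {
        # Mean confidence over all evaluated images.
        "Average Confidence": sum(confidences) / len(confidences),
        # Mean of per-class recall, so an imbalanced dataset does not inflate the score.
        "Balanced Accuracy": balanced_accuracy_score(labels, predictions),
        # Of the images predicted YES, the fraction that were truly YES.
        "Precision": precision_score(labels, predictions, pos_label="YES"),
        # Of the truly YES images, the fraction the model predicted YES.
        "Recall": recall_score(labels, predictions, pos_label="YES"),
    }

if __name__ == "__main__":
    labels = ["YES", "YES", "NO", "NO", "NO"]
    predictions = ["YES", "NO", "NO", "NO", "YES"]
    confidences = [0.95, 0.60, 0.88, 0.91, 0.55]
    for name, value in summarize_binary_results(labels, predictions, confidences).items():
        print(f"{name}: {value:.3f}")
```

Balanced accuracy is the value the README compares against the Projected ML Accuracy shown on the Groundlight dashboard, which is why it is reported alongside precision and recall.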
