Added support for batch size and multi processing to evaluate_detections #5376
base: develop
Conversation
Force-pushed from e49f1ea to 7ba6a45
How did you test this, and on what platform? PyMongo is not fork-safe, and since FiftyOne uses a single global Mongo client, iterating over samples could result in deadlock. https://www.mongodb.com/docs/languages/python/pymongo-driver/current/faq/#can-i-use-pymongo-with-multiprocessing-
Also, we are pausing this for now, as discussed the other day 😊
@kaixi-wang: I am running this locally on my Mac against both a local and a remote DB. The samples are fetched on the main thread and only the IoU computation is parallelized, so I don't think there is a danger to PyMongo.
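For illustration, here is a minimal sketch of that pattern (not the PR's actual code; the dataset name and the `pairwise_ious` helper are made up, and the `predictions`/`ground_truth` fields are assumed). Samples are iterated on the main process and only plain box lists are shipped to worker processes, so the global PyMongo client never crosses a fork boundary:

```python
from concurrent.futures import ProcessPoolExecutor

import fiftyone as fo

def iou(a, b):
    # Boxes are [x, y, w, h] in relative coordinates (FiftyOne's format)
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def pairwise_ious(pred_boxes, gt_boxes):
    # Pure computation over plain lists; no database access here, so
    # this is safe to run in a worker process
    return [[iou(p, g) for g in gt_boxes] for p in pred_boxes]

if __name__ == "__main__":
    dataset = fo.load_dataset("my-dataset")  # hypothetical dataset name

    futures = []
    with ProcessPoolExecutor(max_workers=4) as executor:
        # Iterate samples on the main process, where the global Mongo
        # client lives; only plain Python lists reach the workers
        for sample in dataset.iter_samples():
            pred = [d.bounding_box for d in sample["predictions"].detections]
            gt = [d.bounding_box for d in sample["ground_truth"].detections]
            futures.append((executor.submit(pairwise_ious, pred, gt), sample))

        # Collect results back on the main process
        for future, sample in futures:
            ious = future.result()
```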
```python
futures.append((future, sample))

# Collect results
for future, sample in futures:
```
We should probably define a constant somewhere so that we can collect the results earlier without having to maintain a large list
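One possible reading of this suggestion, as a hedged sketch (the constant name and the helper functions are invented, not from the PR): cap the pending list at a fixed size and drain it whenever the cap is hit, so memory stays bounded by the constant rather than growing with the dataset:

```python
from concurrent.futures import ProcessPoolExecutor

MAX_PENDING_FUTURES = 1000  # hypothetical module-level constant

def task_fn(x):
    # Stand-in for the per-sample IoU work
    return x * x

def handle_result(result):
    # Stand-in for recording an evaluation result
    pass

if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        futures = []
        for item in range(10_000):  # stand-in for iterating samples
            futures.append(executor.submit(task_fn, item))

            # Drain once the list hits the cap, so at most
            # MAX_PENDING_FUTURES results are ever held in memory
            if len(futures) >= MAX_PENDING_FUTURES:
                for future in futures:
                    handle_result(future.result())
                futures.clear()

        # Collect whatever remains after the loop
        for future in futures:
            handle_result(future.result())
```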
fiftyone/utils/eval/detection.py (outdated)
```diff
@@ -135,6 +139,9 @@ def evaluate_detections(
         progress (None): whether to render a progress bar (True/False), use the
             default value ``fiftyone.config.show_progress_bars`` (None), or a
             progress callback function to invoke instead
+        batch_size (None): the batch size at which to process samples. By
+            default, all samples are processed in a single (1) batch
```
It's not true that the default is a batch size of 1. `iter_samples(autosave=True)` uses a batching strategy by default:
fiftyone/fiftyone/core/view.py, lines 500 to 516 in 67bdf7e:
```
batch_size (None): the batch size to use when autosaving samples.
    If a ``batching_strategy`` is provided, this parameter
    configures the strategy as described below. If no
    ``batching_strategy`` is provided, this can either be an
    integer specifying the number of samples to save in a batch
    (in which case ``batching_strategy`` is implicitly set to
    ``"static"``) or a float number of seconds between batched
    saves (in which case ``batching_strategy`` is implicitly set to
    ``"latency"``)
batching_strategy (None): the batching strategy to use for each
    save operation when autosaving samples. Supported values are:

    -   ``"static"``: a fixed sample batch size for each save
    -   ``"size"``: a target batch size, in bytes, for each save
    -   ``"latency"``: a target latency, in seconds, between saves

    By default, ``fo.config.default_batcher`` is used
```
The default batcher sends requests every 0.2 seconds:
fiftyone/fiftyone/core/config.py, lines 159 to 182 in 67bdf7e:
```python
self.default_batcher = self.parse_string(
    d,
    "default_batcher",
    env_var="FIFTYONE_DEFAULT_BATCHER",
    default="latency",
)
self.batcher_static_size = self.parse_int(
    d,
    "batcher_static_size",
    env_var="FIFTYONE_BATCHER_STATIC_SIZE",
    default=100,
)
self.batcher_target_size_bytes = self.parse_int(
    d,
    "batcher_target_size_bytes",
    env_var="FIFTYONE_BATCHER_TARGET_SIZE_BYTES",
    default=2**20,
)
self.batcher_target_latency = self.parse_number(
    d,
    "batcher_target_latency",
    env_var="FIFTYONE_BATCHER_TARGET_LATENCY",
    default=0.2,
)
```
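For reference, these defaults can be overridden globally; a sketch using the attribute names from the snippet above (the values are arbitrary):

```python
import fiftyone as fo

# Switch from the default "latency" batcher to fixed-size batches
# (equivalently, set FIFTYONE_DEFAULT_BATCHER=static in the environment)
fo.config.default_batcher = "static"
fo.config.batcher_static_size = 500  # arbitrary example value

# Or keep the latency batcher but save less frequently
# fo.config.default_batcher = "latency"
# fo.config.batcher_target_latency = 1.0
```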
Ah, so if `batch_size` is not passed, we use the latency strategy by default 👍
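For concreteness, here is how the per-call mapping from `batch_size` to strategy plays out in `iter_samples(autosave=True)`, per the docstring quoted above (a sketch; the dataset name and the `reviewed` field are made up):

```python
import fiftyone as fo

dataset = fo.load_dataset("my-dataset")  # hypothetical dataset name

# int batch_size -> implicit "static" strategy: save every 50 samples
for sample in dataset.iter_samples(autosave=True, batch_size=50):
    sample["reviewed"] = True

# float batch_size -> implicit "latency" strategy: save roughly every 0.5s
for sample in dataset.iter_samples(autosave=True, batch_size=0.5):
    sample["reviewed"] = True

# Explicit strategy: target ~1 MB of pending updates per save
for sample in dataset.iter_samples(
    autosave=True, batch_size=2**20, batching_strategy="size"
):
    sample["reviewed"] = True
```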
What changes are proposed in this pull request?
This PR introduces parallel processing to the detection evaluation functionality in FiftyOne. Key changes include:

- a new `batch_size` parameter for `evaluate_detections()` that controls how many samples are processed per batch
- multiprocessing support that parallelizes the per-sample IoU computation across workers

The parallelization significantly improves performance for large datasets while maintaining the same accuracy and output format as the original implementation. A hypothetical call is sketched below.
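Assuming the `batch_size` parameter shown in the diff above (the multiprocessing parameter's name isn't visible in this excerpt, so it is omitted), a call might look like:

```python
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

# batch_size is the new parameter from this PR; the value is illustrative
results = dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
    batch_size=64,
)

results.print_report()
```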
How is this patch tested? If it is not, please explain why.
Release Notes
Is this a user-facing change that should be mentioned in the release notes?
Yes. Give a description of this change to be included in the release notes for FiftyOne users.
(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What areas of FiftyOne does this PR affect?
- `fiftyone`: Python library changes