Implement pre-fetching in map() and gen() #521
base: main
Conversation
Codecov Report
Attention: Patch coverage is

@@            Coverage Diff             @@
##             main     #521      +/-   ##
==========================================
+ Coverage   87.43%   87.51%   +0.07%
==========================================
  Files          97       97
  Lines       10069    10099      +30
  Branches     1374     1382       +8
==========================================
+ Hits         8804     8838      +34
+ Misses        908      905       -3
+ Partials      357      356       -1

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
results = self.dataset_rows_select(paginated_query.offset(offset))
# Ensure we're using a thread-local connection
with self.clone() as wh:
# Cursor results are not thread-safe, so we convert them to a list
Q: why do the cursor results have to be thread-safe, given that we now run the producer in a separate thread in the async mapper?
Q: are there any implications for memory usage here?
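The buffering being discussed can be sketched generically: a paginated query helper that materializes each page into a plain list before yielding it, so no live DB cursor ever crosses a thread boundary. This is only an illustrative sketch using sqlite3 and a hypothetical `items` table, not the actual `dataset_select_paginated()` implementation.

```python
import sqlite3
from collections.abc import Iterator


def paginated_rows(db_path: str, page_size: int = 100) -> Iterator[list[tuple]]:
    """Yield query results one page at a time, with each page fully
    materialized as a list so no live cursor escapes this function."""
    offset = 0
    # The connection stays local to the thread running this generator.
    with sqlite3.connect(db_path) as conn:
        while True:
            cursor = conn.execute(
                "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
                (page_size, offset),
            )
            page = cursor.fetchall()  # consume the cursor eagerly
            if not page:
                break
            yield page  # a plain list: safe to hand to another thread
            offset += page_size
```

The memory cost is bounded by `page_size` rows per page, which answers the usage question: only one page is held in memory at a time, not the whole result set.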
@@ -66,3 +74,5 @@ def add(self, settings: "Settings"):
self.parallel = settings.parallel or self.parallel
self._workers = settings._workers or self._workers
self.min_task_size = settings.min_task_size or self.min_task_size
if settings.prefetch is not None:
Is there a reason to have a mix of styles here: some protected vars, some not, some merged like `self._cache = settings._cache or self._cache`, and some like `if settings.prefetch is not None:`?
@@ -325,6 +325,7 @@ def settings(
parallel=None,
workers=None,
min_task_size=None,
prefetch: Optional[int] = None,
q: why `int`? Let's update the docs here. (Do we have any CI to detect such discrepancies, i.e. missing docs, btw? cc @skshetry)
@@ -111,6 +112,37 @@ async def process(row):
    list(mapper.iterate(timeout=4))

@pytest.mark.parametrize("create_mapper", [AsyncMapper, OrderedMapper])
def test_mapper_deadlock(create_mapper):
Will it deadlock if we don't wrap the producer in a thread? Is that what you were trying to test (i.e. making sure the producer is wrapped)?
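The failure mode this kind of test guards against can be illustrated generically: with a bounded queue, a producer running on the consumer's own thread blocks forever once the queue fills, while a producer in its own thread keeps the pipeline moving. This is a minimal threading sketch, not the mapper's actual asyncio implementation:

```python
import queue
import threading


def run_pipeline(items, maxsize=2):
    """Producer in a separate thread feeds a bounded queue while the
    consumer drains it concurrently, so a full queue blocks only the
    producer thread, never the consumer."""
    q: queue.Queue = queue.Queue(maxsize=maxsize)
    done = object()  # sentinel marking end of stream

    def produce():
        for item in items:
            q.put(item)  # blocks only this thread when the queue is full
        q.put(done)

    threading.Thread(target=produce, daemon=True).start()

    results = []
    while (item := q.get()) is not done:
        results.append(item * 2)  # stand-in for the UDF
    return results
```

If `produce()` were instead called inline before draining the queue, the first `q.put()` after the queue filled would block with no consumer running: the deadlock the test is meant to catch.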
Looks great. A few questions.
One more general question: does this implementation mean that we now won't start the UDF (at least for the very first row) until the file is fetched? Before, it was done on demand, I guess, i.e. when the file is needed. I wonder how big of an issue this can be in certain scenarios, especially if we decide to do prefetch for batches (agg, batch mapper).
This adds a `prefetch` setting which enables async downloading of objects to the cache before running a generator or mapper UDF (see #40). The default is to use 2 workers, but it can be disabled using `.settings(prefetch=0)`. Note that it has no effect if caching isn't enabled (caching is disabled by default).

In order for this to work, `AbstractWarehouse.dataset_select_paginated()` is now required to be thread-safe, so query result pages are now buffered as a list in that function.
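The behavior described above can be sketched generically: an iterator that keeps up to `prefetch` downloads in flight ahead of the consumer, falling back to plain on-demand fetching when `prefetch=0`. The `download` callable and `iter_with_prefetch` name are illustrative assumptions, not datachain's actual API:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor


def iter_with_prefetch(urls, download, prefetch=2):
    """Yield download(url) results in input order, keeping up to
    `prefetch` downloads in flight ahead of the consumer.
    prefetch=0 disables lookahead entirely."""
    if prefetch <= 0:
        for url in urls:
            yield download(url)  # on-demand: fetch only when needed
        return
    with ThreadPoolExecutor(max_workers=prefetch) as pool:
        it = iter(urls)
        pending = deque()
        # Fill the initial lookahead window.
        for _ in range(prefetch):
            try:
                pending.append(pool.submit(download, next(it)))
            except StopIteration:
                break
        while pending:
            result = pending.popleft().result()
            # Top up the window before yielding, so downloads overlap
            # with the consumer's processing of `result`.
            try:
                pending.append(pool.submit(download, next(it)))
            except StopIteration:
                pass
            yield result
```

This also makes the reviewer's general question concrete: with lookahead enabled, the first `yield` waits on `pending.popleft().result()`, so the first row is not handed to the UDF until its download completes, whereas the `prefetch=0` branch fetches strictly on demand.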