Currently the renci / nq backup asset runs every time, but the actual work of sending the data (or doing any computation) is skipped unless every partition in the DAG is materialized.
That said, I'm unclear on what happens once the entire graph has been crawled and all partitions are materialized. If a single source then gets recrawled, the other partitions are still marked materialized, so the export could run again after just that one source is recrawled following a full export.
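For context, the skip-unless-everything-is-materialized check described above can be sketched as a pure function. The `get_dynamic_partitions` / `get_materialized_partitions` lookups mentioned in the docstring mirror Dagster's `DagsterInstance` methods, but the wiring is an assumption here, not the actual asset code:

```python
from typing import Iterable


def should_run_export(all_partition_keys: Iterable[str],
                      materialized_keys: Iterable[str]) -> bool:
    """Return True only when every dynamic partition has been materialized.

    In the real asset, the two inputs would come from something like
    context.instance.get_dynamic_partitions("sources") and
    context.instance.get_materialized_partitions(asset_key) -- that wiring
    is an assumption; this sketch only captures the skip logic itself.
    """
    all_keys = set(all_partition_keys)
    return bool(all_keys) and all_keys <= set(materialized_keys)


# The stale-partition problem from the issue: after a full export, every
# partition stays "materialized", so recrawling a single source makes this
# check pass again and the export re-runs.
```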
In general, dynamic partitions are poorly documented and awkward to work with.
It's possible the proper way to do this is to track the start time of each partition's materialization and write a simple algorithm to check which ones were materialized together.
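That tracking idea could look roughly like this; the gap threshold and the shape of the timestamp map are assumptions, purely to illustrate grouping materializations whose start times are close together:

```python
from typing import Dict, List


def group_by_start_time(start_times: Dict[str, float],
                        max_gap_seconds: float = 3600.0) -> List[List[str]]:
    """Group partition keys whose materialization start times are close.

    Partitions are sorted by start time; a new group begins whenever the
    gap to the previous start exceeds max_gap_seconds. A full crawl then
    shows up as a single group containing every partition key, while a
    lone recrawled source lands in its own group.
    """
    ordered = sorted(start_times.items(), key=lambda kv: kv[1])
    groups: List[List[str]] = []
    prev_time = None
    for key, t in ordered:
        if prev_time is None or t - prev_time > max_gap_seconds:
            groups.append([])  # gap too large: start a new group
        groups[-1].append(key)
        prev_time = t
    return groups
```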
It might also just be easier to have an asset at the end of the pipeline that resets all dynamic partitions. That's equivalent to grouping the assets together on the next materialization.
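A sketch of that reset step, with the deletion call injected so the loop is self-contained. Dagster's `DagsterInstance` does expose `delete_dynamic_partition`, but binding it from a terminal asset as shown in the docstring is an assumption about the wiring:

```python
from typing import Callable, Iterable


def reset_dynamic_partitions(partition_keys: Iterable[str],
                             delete_partition: Callable[[str], None]) -> int:
    """Delete every dynamic partition key so the next crawl starts clean.

    In a terminal Dagster asset, delete_partition could be bound to
    something like
        lambda k: context.instance.delete_dynamic_partition("sources", k)
    (an assumed wiring). Once the keys are gone, no partition counts as
    materialized any more, which is equivalent to regrouping the next
    crawl's materializations from scratch.
    """
    deleted = 0
    for key in list(partition_keys):  # copy, in case the source mutates
        delete_partition(key)
        deleted += 1
    return deleted
```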