Currently, when deploying the scheduler for the live Geoconnex sitemap, we run into failures while crawling sitemaps that contain around 50,000 entries. Dagster's monitoring automatically cancels any run in which a step exceeds 180 seconds, so the crawl is interrupted partway through, making it difficult to reliably crawl large sitemaps.
Expected Outcome:
We need to raise the timeout to a value that lets crawls of large sitemaps (up to ~50,000 entries) run to completion without triggering premature cancellations. The goal is smooth, reliable crawling of these large datasets without interference from the current Dagster monitoring and cancellation system.
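A minimal sketch of one way to do this, assuming the crawl runs as a Dagster job: Dagster supports a per-run timeout via the `dagster/max_runtime` tag (in seconds), enforced by run monitoring. The op and job names below (`crawl_geoconnex_sitemap`, `harvest_sitemap_job`) and the one-hour value are illustrative assumptions, not the actual names in this repo:

```python
from dagster import job, op


@op
def crawl_geoconnex_sitemap():
    """Hypothetical stand-in for the step that crawls the sitemap."""
    ...


# The dagster/max_runtime tag caps a run's wall-clock time in seconds.
# Raising it well above the 180s default stops large crawls from being
# cancelled mid-run (run monitoring must be enabled on the instance).
@job(tags={"dagster/max_runtime": 3600})  # assumed 1-hour budget
def harvest_sitemap_job():
    crawl_geoconnex_sitemap()
```

Alternatively, if every deployment-wide run should get the longer budget, the limit can be set once in the instance's `dagster.yaml` under `run_monitoring.max_runtime_seconds` rather than per job; the right value depends on how long a 50,000-entry crawl actually takes.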