Since Zimit gives us the opportunity to run a "monitoring" process, and since the current situation as well as past ones have shown that the crawler and/or warc2zim regularly get stuck, should we implement a solution that automatically monitors them and stops them when such a situation appears?
We can monitor both the disk size (which is expected to grow continuously) and the progress JSON file. Should both show no sign of progress for a given amount of time (say, 5 minutes), then we stop the processes (kill -9) and fail the crawl.
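Rough sketch of what such a watchdog could look like (the paths, PIDs, thresholds and function names below are placeholders for illustration, not what Zimit actually uses):

```python
import os
import signal
import time
from pathlib import Path

STALL_TIMEOUT = 5 * 60  # seconds without any progress before giving up
CHECK_INTERVAL = 30     # polling period


def dir_size(path: Path) -> int:
    """Total size in bytes of all files under path."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())


def watch(output_dir: Path, stats_file: Path, pids: list[int]) -> None:
    """Kill the watched processes if neither disk usage nor the
    progress JSON file changes for STALL_TIMEOUT seconds."""
    last_size = dir_size(output_dir)
    last_stats = stats_file.read_bytes() if stats_file.exists() else b""
    last_change = time.monotonic()

    while True:
        time.sleep(CHECK_INTERVAL)
        size = dir_size(output_dir)
        stats = stats_file.read_bytes() if stats_file.exists() else b""
        if size != last_size or stats != last_stats:
            # something moved: reset the stall timer
            last_size, last_stats = size, stats
            last_change = time.monotonic()
        elif time.monotonic() - last_change > STALL_TIMEOUT:
            for pid in pids:
                try:
                    os.kill(pid, signal.SIGKILL)  # equivalent of kill -9
                except ProcessLookupError:
                    pass
            raise SystemExit("no progress for 5 minutes, failing the crawl")
```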
I still need to confirm that this covers the current issue(s) with the crawler by monitoring production manually (see kiwix/operations#367).
WDYT?