Is there a narrative for containers that would like to load a big dataset ahead of time and then operate in turbo mode, persisting that data across job invocations?
In other words, amortizing a long startup time over multiple job invocations?
This applies to applications that need to load a model, a graph, or a database in order to execute the job and want to keep it in memory or on disk between runs.
Options off the top of my head:
Expose watchbot as a library so it becomes trivial to implement your own job invocation on top of the SQS polling (e.g. treat this as a very special ECS service)
Support "persistent turbo" mode jobs as an option that does not clean out data directories (or kill a background process) in between job runs
Adding a use case here (cc @vsmart): loading large, self-contained machine learning models once and using ecs-watchbot to scale out across CPU workers over a large number of images. The models are read-only and always the same. At the moment we simply use a large batch size per worker to amortize the model download on each worker, but this limits scale-out.
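One way to approximate this today, assuming the model lives at a known S3 location and the data directory is not wiped between runs, is a small download-once guard at the top of the worker. The bucket, key, and local path below are made up for illustration.

```python
# Sketch of a download-once guard: fetch the model artifact only if it is not
# already on local disk, so repeated job runs on the same container skip the
# expensive download. Bucket, key, and path are hypothetical.
import os

import boto3

MODEL_BUCKET = "my-models-bucket"           # assumed location of the artifact
MODEL_KEY = "segmentation/model-v1.tar.gz"  # assumed key
LOCAL_PATH = "/mnt/data/model-v1.tar.gz"    # must survive between job runs


def ensure_model():
    """Download the model artifact once; later invocations reuse the cached copy."""
    if not os.path.exists(LOCAL_PATH):
        os.makedirs(os.path.dirname(LOCAL_PATH), exist_ok=True)
        boto3.client("s3").download_file(MODEL_BUCKET, MODEL_KEY, LOCAL_PATH)
    return LOCAL_PATH
```

Of course, this only helps if watchbot does not clean out that directory between runs, which is exactly the "persistent turbo" option above.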
I second that use case.
I've hit that use case before too, with NLP models that take a few minutes to download 👍
/cc @rclark @jakepruitt