The `worker` package is where work related to querying, computing, aggregating, etc. gets split out. Essentially, all of the cron jobs that currently exist in `backend/core` are to be moved into the `worker` package.
BullMQ is used mainly as a message queue to facilitate communication between services. There are jobs (things to be done), queues (lists of jobs), and workers (processes that pick a job off a queue and execute it). BullMQ uses Redis underneath, so #2042 is needed first.
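The three concepts can be sketched with a minimal in-memory model. This is purely illustrative and is not the BullMQ API: in BullMQ the queue lives in Redis and workers poll it over the network, and the names below (`SimpleWorker`, `take`, `drain`) are made up for the sketch.

```typescript
// Illustrative in-memory model of jobs, queues, and workers.
// NOT the BullMQ API: in BullMQ the queue is persisted in Redis.

interface Job {
  name: string;   // what kind of work this is
  data: unknown;  // payload for the processor
}

class Queue {
  private jobs: Job[] = [];
  add(name: string, data: unknown): void {
    this.jobs.push({ name, data }); // enqueue a job
  }
  take(): Job | undefined {
    return this.jobs.shift();       // dequeue, or undefined when empty
  }
}

class SimpleWorker {
  constructor(
    private queue: Queue,
    private process: (job: Job) => void,
  ) {}
  // Pull jobs off the queue one at a time until it is empty.
  drain(): number {
    let handled = 0;
    for (let job = this.queue.take(); job; job = this.queue.take()) {
      this.process(job); // execute the job
      handled++;
    }
    return handled;
  }
}
```

In this model, a cron job that currently lives in `backend/core` would become a `queue.add(...)` call, with the actual work happening in the worker's processor function.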
Containerized workers are meant to run in parallel and scale as needed. Each worker needs a MongoDB connection and a Redis connection. A worker listens for jobs on the queue and picks off only one at a time, so if multiple workers are running, a given job is run only once, by only one worker. The backend can therefore be scaled further by spinning up more worker pods as necessary.
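The "each job runs exactly once" property comes from workers competing for jobs on a single shared queue: once one worker takes a job, it is gone for the others (BullMQ achieves this with atomic Redis operations). A toy demonstration of that semantics, where a plain array shift stands in for the atomic pop and `runWorkers` is a hypothetical helper:

```typescript
// Toy demonstration: several workers share one queue; a job popped by
// one worker is gone for the others, so every job runs exactly once.
// Array.prototype.shift() stands in for BullMQ's atomic Redis pop.

type Handler = (jobId: number) => void;

function runWorkers(
  jobIds: number[],
  workerCount: number,
  handle: Handler,
): number[][] {
  const queue = [...jobIds]; // the shared job queue
  const perWorker: number[][] = Array.from({ length: workerCount }, () => []);
  let turn = 0;
  // Interleave the workers round-robin; each take removes the job for everyone.
  while (queue.length > 0) {
    const job = queue.shift()!;           // only this worker gets the job
    perWorker[turn % workerCount].push(job);
    handle(job);
    turn++;
  }
  return perWorker; // which worker ran which job
}
```

With, say, 10 jobs and 3 worker pods, the union of the per-worker lists is all 10 jobs with no duplicates; adding pods just shrinks each pod's share, which is what makes horizontal scaling work.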
An `otv-worker` Docker repo will need to be created; the build and publish CI pipelines for it already exist.
In terms of deployment, charts and a helmfile will need to be created. Considerations:
- Horizontal Pod Autoscaling: perhaps new pods are created above a certain CPU usage threshold
- workers will need MongoDB and Redis connections
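The autoscaling consideration above could look roughly like the following in the chart. This is a sketch only: the resource names and the 70% CPU target are illustrative placeholders, not decided values.

```yaml
# Sketch of a HorizontalPodAutoscaler for the worker deployment.
# Names and thresholds are placeholders, not final values.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: otv-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: otv-worker
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up above ~70% average CPU
```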
Similar to #2040 (read that first).