
Poor performance when enqueuing many throttled jobs because of unindexed queries #1603

Open · Intrepidd opened this issue Feb 13, 2025 · 2 comments

@Intrepidd (Contributor) commented Feb 13, 2025:

When the concurrency extension looks up existing queued jobs within the throttle period, it performs the following query:

```ruby
enqueued_within_period = GoodJob::Job.where(concurrency_key: key)
  .where(GoodJob::Job.arel_table[:created_at].gt(throttle_period.ago))
  .count
```

In my case, this query takes 200–300 ms per enqueue attempt, which adds up quickly when enqueuing hundreds of jobs.

The index on concurrency_key is a partial index with a `WHERE finished_at IS NULL` condition, so the planner can't use it for this query, which doesn't filter on finished_at.
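For reference, a sketch of how that partial index looks in a `schema.rb` dump (the index name follows GoodJob's naming convention but should be verified against the actual schema):

```ruby
# Sketch of the existing partial index as it would appear in schema.rb.
# The WHERE clause makes it a partial index: the planner can only use it
# for queries that also filter on finished_at IS NULL, which the
# throttle-count query above does not.
t.index [:concurrency_key],
        name: "index_good_jobs_on_concurrency_key_when_unfinished",
        where: "finished_at IS NULL"
```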

Would it make sense to have an index solely on concurrency_key?

Thanks

@bensheldon (Owner) commented Feb 14, 2025:

Good catch on that. I think you're correct that the current index is insufficient for the throttle query 😓

I think the correct index here would be `[:concurrency_key, :created_at]`. That would also cover the execution throttling, since queries filtering on concurrency_key alone can use just the first column of the compound index:

https://github.com/bensheldon/good_job/blob/b557525a9003a1fc1a434742dae58053b947e708/lib/good_job/active_job_extensions/concurrency.rb#L162C1-L164C165
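A minimal migration sketch for that compound index (the class name, migration version, and index name here are illustrative, not from the source; `algorithm: :concurrently` assumes PostgreSQL):

```ruby
# Illustrative migration adding the compound index suggested above.
# disable_ddl_transaction! is required for algorithm: :concurrently,
# which builds the index without locking writes on the table.
class AddGoodJobsConcurrencyKeyCreatedAtIndex < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def change
    add_index :good_jobs, [:concurrency_key, :created_at],
              name: :index_good_jobs_on_concurrency_key_and_created_at,
              algorithm: :concurrently
  end
end
```

Because concurrency_key is the leading column, the throttle-count query can use both columns, while lookups filtering on concurrency_key alone can still use the index prefix.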

@Intrepidd (Contributor, Author) commented:

Thanks! I'll make a PR shortly if that's OK.
