refactor check_job_status #1507
Comments
That's a great point about the job itself being interrupted: a server reboot, power outage, etc. In that case we probably do need to keep some record of whether the job itself succeeded, so we know where to pick it back up. Another factor to consider is tracking strictly at the message-send level, along with the retries configured there should a send fail or get blocked for whatever reason. With both status checks in play, we need to make sure they work in tandem rather than clobbering or competing with each other, which could also overwhelm the system.
I think the current code is fine, provided we keep running with 800 numbers, which AWS is going to throttle at roughly 10,000 messages per hour anyway. There's nothing to do here until we introduce short codes or whatever the other sender type is that users can theoretically send with. The current code is correct for 800 numbers.
Okay, sounds good, thanks @terrazoon! If our testing with the other current fixes and improvements shows that we're still in good shape, if not even better shape, then we can close this one out for now.
Originally, check_job_status declared any job that started 30 minutes ago and had not finished to be 'incomplete' and kicked off an attempt to finish it. For long-running jobs, this meant a new task would start running and try to insert rows into the database that were already there, triggering an avalanche of IntegrityErrors that would last for hours and bog down the whole system.
As an expedient, we changed the timing to allow 4 hours for a job to finish before it is declared incomplete. This works because we currently only support sends of 25k rows, and we know those go out in less than 2 hours.
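
To make the tradeoff concrete, here is a minimal, self-contained sketch of the time-window approach. The `Job` dataclass and its field names are illustrative stand-ins, not the actual Notify models:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

INCOMPLETE_JOB_WINDOW = timedelta(hours=4)  # raised from the original 30 minutes

@dataclass
class Job:
    id: str
    processing_started: Optional[datetime]
    job_status: str  # e.g. "in progress" or "finished"

def find_incomplete_jobs(jobs: List[Job], now: datetime) -> List[Job]:
    """Flag jobs that started before the cutoff and still are not finished.

    If the window is shorter than the longest legitimate job, a healthy
    job gets flagged, re-run, and starts re-inserting rows that already
    exist -- the IntegrityError avalanche described above.
    """
    cutoff = now - INCOMPLETE_JOB_WINDOW
    return [
        job for job in jobs
        if job.processing_started is not None
        and job.processing_started <= cutoff
        and job.job_status == "in progress"
    ]
```

The fragility is all in `INCOMPLETE_JOB_WINDOW`: the constant has to be tuned to the slowest legitimate job, which is exactly why the 30-minute value broke.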
But ideally, we should be using job status and not time ranges. What would cause a job to just stop running? Is it possible? Could a random reboot in the middle of a task do it? If it has to be time based, what should the time be? Someone needs to take a look and rethink this.
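
For reference, here is a hedged sketch of what a status-based resume could look like, continuing the illustrative `Job` above. `load_rows`, `count_rows_inserted`, and `process_row` are hypothetical helpers standing in for the real persistence layer:

```python
from typing import Callable, List

def resume_job(
    job: Job,
    load_rows: Callable[[str], List[dict]],
    count_rows_inserted: Callable[[str], int],
    process_row: Callable[[str, dict], None],
) -> None:
    """Resume an interrupted job from the last row actually written.

    Skipping rows already in the database means a re-run after a reboot
    cannot trigger duplicate-insert IntegrityErrors, and no time-based
    guess about "how long is too long" is needed.
    """
    already_done = count_rows_inserted(job.id)
    for row in load_rows(job.id)[already_done:]:
        process_row(job.id, row)
    job.job_status = "finished"
```

Because it resumes from the last row actually written instead of restarting from row zero, this approach stays correct regardless of how long a job legitimately takes.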