This repository has been archived by the owner on Feb 8, 2018. It is now read-only.
If 40 threads were too many, I would expect CPU load to reflect that. The spiking response times make me think we hit some slow queries that backed up our threads. What other parameters are we missing? Network I/O on the box?
I think we should strive to fail faster (200s is way too long). I don't think that's possible with Python threads, though: I don't know of any way to cancel a running thread from the outside after a certain time :(. Logging requests that take too long would make sense here, but that info is in some form already available in the Papertrail logs.
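To illustrate the cancellation problem: a minimal sketch using the standard `concurrent.futures` module (not Aspen's actual pool, just an assumed stand-in). The caller can stop *waiting* after a timeout, but the worker thread itself keeps running to completion and cannot be cancelled once started.

```python
import concurrent.futures
import time

def slow_handler():
    # Stands in for a request handler stuck on a slow query.
    time.sleep(2)
    return "done"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = pool.submit(slow_handler)

timed_out = False
try:
    # The caller gives up waiting after 0.5s...
    future.result(timeout=0.5)
except concurrent.futures.TimeoutError:
    timed_out = True

# ...but the thread is still executing: cancel() returns False
# for a future whose callable has already started running.
assert future.cancel() is False

# The pool cannot be drained until the slow handler finishes.
pool.shutdown(wait=True)
```

So a timeout on the waiting side protects the caller, but each slow request still ties up a pool thread for its full duration, which is exactly how the pool drains.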
Right after https://www.youtube.com/watch?v=p1E-svVd9Xc we crashed. Manifested as a drained Aspen thread pool:
CPU load isn't bad, response times spike.
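One cheap way to find the slow requests backing up the pool: wrap the app in a timing layer that logs anything over a threshold. A minimal sketch as generic WSGI middleware (Aspen speaks WSGI; the names `timing_middleware` and `threshold` here are illustrative, not part of Aspen's API).

```python
import time

def timing_middleware(app, threshold=1.0, log=print):
    """Wrap a WSGI app; log any request slower than `threshold` seconds."""
    def wrapped(environ, start_response):
        start = time.monotonic()
        try:
            return app(environ, start_response)
        finally:
            elapsed = time.monotonic() - start
            if elapsed >= threshold:
                log("slow request %s %s: %.2fs" % (
                    environ.get("REQUEST_METHOD"),
                    environ.get("PATH_INFO"),
                    elapsed,
                ))
    return wrapped
```

Note this only times the call into the app, not iteration of a streaming response body, but it would be enough to correlate the response-time spikes with specific endpoints.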