Find redis performance regression trigger #550
Probably the simplest way to debug this would be to downgrade Sentry, let it run for a bit, and see if the performance issue goes away.
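For that experiment, it could help to confirm which SDK version is actually live in the environment being measured, before and after the change. A minimal sketch, assuming the Python sentry-sdk is what got bumped:

```python
from importlib.metadata import version

# Print the installed sentry-sdk version in the running environment, so a
# stale install or a partial rollout doesn't muddy the before/after comparison.
print("sentry-sdk:", version("sentry-sdk"))
```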
Looking in Sentry's issue tracker, I don't see anything since May 1 related to redis that could be part of this. Hm.
Hm, another regression report tonight from Sentry: Again, looks mostly like redis, weirdly enough.
Let's just check in on this and see if it's still bad. If so, let's think some more about it. If not, let's close and hope it doesn't happen again.
Sentry is reporting that our redis performance got slower starting on June 1:
And looking in AWS, we see that indeed our CPU bumped up by several percentage points that day:
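Before digging further into the SDK, it might be worth confirming whether redis round trips actually got slower, or only the spans Sentry records did. A minimal sketch using redis-py (the connection details are placeholders, not this project's actual config):

```python
import time

import redis  # redis-py; assumed available since the app already talks to redis

# Measure raw round-trip latency and dump the server-side slowlog, to see
# whether redis itself got slower or only the instrumented timings did.
r = redis.Redis(host="localhost", port=6379)

samples = []
for _ in range(100):
    start = time.perf_counter()
    r.ping()
    samples.append((time.perf_counter() - start) * 1000.0)
samples.sort()
print(f"PING p50={samples[49]:.2f} ms  p99={samples[98]:.2f} ms")

# Commands that exceeded the server's slowlog-log-slower-than threshold.
for entry in r.slowlog_get(10):
    print(entry["id"], entry["duration"], entry["command"])
```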
Not much happened that day in our code base, but we did merge three PRs to update dependencies:
Of the three, the Sentry upgrade is the only one I can imagine affecting performance, though it'd be pretty weird for Sentry to have anything to do with redis. That said, a few of the new features in that Sentry release relate to Celery queue length, which could be relevant (note: we don't use Celery in this project).
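If the sentry-sdk bump is the suspect, one lower-effort test than a full downgrade might be to switch off the SDK's auto-enabled integrations (which is how the redis and Celery instrumentation gets installed) in a canary and compare timings. A minimal sketch, assuming the Python SDK; the DSN is a placeholder:

```python
import sentry_sdk

# Initialize the SDK with auto-enabled integrations (redis, celery, framework
# hooks, ...) switched off. If the extra redis latency disappears under this
# config, the regression points at the SDK's instrumentation rather than at
# redis itself.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    auto_enabling_integrations=False,
)
```

Note that this also drops the other auto-enabled integrations (e.g. the web framework one), so it's only suitable for a short canary run, not a permanent config change.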
The actual performance impact is pretty small (a few hundred ms), but Sentry is generous with us, so maybe we should take a closer look and see whether there's a regression worth filing upstream.