
[RFI] How does this differentiate between a dependent service getting slower and simply too much concurrency impacting latency? #171

Open
huggsboson opened this issue Jul 27, 2021 · 0 comments

Comments

@huggsboson

I can absolutely see that, as this thing tunes connection limits up and down, watching average latency is a great signal. But what happens when you're at steady state and one of your dependencies (say, a database) becomes unresponsive and your latencies spike? It's hard to prescribe a perfect one-size-fits-all response to that situation, but in some scenarios I'd want it to add more concurrency to counteract the threads blocked waiting on a DB connection timeout.

Ideally it would incorporate a CPU or blocked-threads metric into the equation to help determine whether it's a dependency or the service itself that is impacting latency.
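For illustration, a minimal sketch of the idea (not this library's actual API; every class, method, and threshold name below is hypothetical): a latency-driven limiter that samples the fraction of BLOCKED/WAITING threads before deciding whether a latency spike should shrink the limit (self-inflicted concurrency pressure) or hold/raise it (threads parked on a slow dependency).

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch: a latency-based limiter that also consults the
 * blocked-thread ratio before reacting to a latency spike.
 */
public class BlockedAwareLimiter {

    private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    private final AtomicInteger limit = new AtomicInteger(20);

    private final double latencyToleranceRatio = 2.0;   // spike = 2x baseline latency
    private final double blockedThreadThreshold = 0.5;  // >50% of threads parked

    /** Called after each sampling window with observed vs. baseline latency. */
    public void onSample(double observedLatencyMs, double baselineLatencyMs) {
        boolean latencySpike = observedLatencyMs > baselineLatencyMs * latencyToleranceRatio;
        if (!latencySpike) {
            return; // steady state: leave the limit alone (or probe upward)
        }
        if (blockedRatio() > blockedThreadThreshold) {
            // Most threads are parked waiting on a dependency (e.g. a DB
            // connection timeout); shrinking the limit would only queue more
            // work behind them, so hold or slightly raise the limit instead.
            limit.incrementAndGet();
        } else {
            // Threads are runnable but slow: the latency is likely caused by
            // too much concurrency, so back the limit off multiplicatively.
            limit.updateAndGet(l -> Math.max(1, l / 2));
        }
    }

    /** Fraction of live threads currently BLOCKED or WAITING. */
    private double blockedRatio() {
        ThreadInfo[] infos = threads.dumpAllThreads(false, false);
        long parked = 0;
        for (ThreadInfo info : infos) {
            Thread.State s = info.getThreadState();
            if (s == Thread.State.BLOCKED
                    || s == Thread.State.WAITING
                    || s == Thread.State.TIMED_WAITING) {
                parked++;
            }
        }
        return infos.length == 0 ? 0.0 : (double) parked / infos.length;
    }

    public int currentLimit() {
        return limit.get();
    }
}
```

The same signal could just as well come from CPU utilization or from the executor's own queue/active-thread counts; the point is only that latency alone can't distinguish the two failure modes.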
