Replies: 3 comments 1 reply
-
I see this issue suggests a few options: #145. I'll experiment with disabling backpressure, though I doubt that's the cause, since the default threshold is 10,000 in-flight commands before it kicks in and we shouldn't exceed more than roughly 1,000 at a time.
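For anyone following along, here's a minimal sketch of what "disabling backpressure" looks like in fred's config. This is an illustrative fragment, not the code from this thread; the exact type and field names (`PerformanceConfig`, `BackpressureConfig`, `disable_auto_backpressure`) have shifted between fred versions, so check the docs for the version you're on.

```rust
use fred::types::{BackpressureConfig, PerformanceConfig};

// Sketch: turn off fred's automatic backpressure, which by default
// engages once ~10,000 commands are in flight on a single client.
let perf = PerformanceConfig {
    backpressure: BackpressureConfig {
        disable_auto_backpressure: true,
        ..Default::default()
    },
    ..Default::default()
};
```

The `PerformanceConfig` is then passed to the client or pool when it's built.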
-
Interesting. Yeah, it could be backpressure or […]. Otherwise I would use […].
To your other question about concurrency: yeah, the interface and actor-model-based routing layer here is really designed to offer better performance at higher concurrency, so I would expect to see comparatively better QPS/throughput vs deadpool the more tasks share a client (assuming […]).
For what it's worth, or in case it helps with testing/debugging, there's a benchmarking tool (https://github.com/aembke/fred.rs/tree/main/bin/benchmark) that might help. It also shows how to set up tracing. tokio-console might also be helpful. This is the line that spawns the task that acts as the root of the supervision tree for all the connections.
-
Also, can you show some of the code that uses the pool and makes requests to the server?
-
I'm experimenting with using Fred vs deadpool-redis. See the attached graph. I upgraded the image in prod to one using fred at about 17:09, and then reverted around 17:23.
You can see that with Fred, latency is worse overall: a higher baseline and spikier spikes.
I'm building my pool like this:
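(The original snippet didn't survive here. For context, a hedged sketch of a typical fred pool setup follows; the URL, pool size, and builder method names are illustrative and may differ across fred versions, and this is not the poster's actual code.)

```rust
use fred::prelude::*;

// Sketch of building a fred connection pool against a standalone Redis.
// The URL and pool size below are placeholders, not values from the thread.
async fn build_pool() -> Result<RedisPool, RedisError> {
    let config = RedisConfig::from_url("redis://localhost:6379")?;
    let pool = Builder::from_config(config).build_pool(8)?;
    // Establish connections before handing the pool to callers.
    pool.init().await?;
    Ok(pool)
}
```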
In this case I'm not using Redis Cluster or replicas, this is just with a boring old normal Redis.
Is there something I can do with my config to get performance equivalent to what I was getting with deadpool-redis? I'd love to keep using fred for its better configurability, cleaner API (e.g. one client type for both standalone and cluster Redis), active development, etc.