I’m getting very slow performance sending messages to SQS from my Rust Lambda when I test with a KafkaEvent containing a realistic number of records. With one or two records, each message sends individually in 30–50 ms; with a full sample payload of about 100 records, each message takes roughly 400 ms to 1.3 s. I suspect it has something to do with how we use the client, because I saw a similar issue when requesting a secret until I realized I only needed to request it once. Perhaps we need to tamp down the batch size from the Kafka process, since it’s just using the default, but that seems like a big slowdown regardless. I was hoping someone could share best practices on this. The AWS chat representative pointed me to an article on horizontal scaling that suggested using more clients, or clients with more threads. I tried instantiating a new client for every message and saw no improvement, and I don’t see a way to increase the number of threads a client has available.
The Rust SDK clients are intended to be reused (even across threads), and client creation is expensive, so you definitely don't want to create a new client for each message. You should create the client once on Lambda start-up so that it becomes part of the cold start time. Some questions:
Hi,
You should initialize the client inside the main function;
then you can pass it to the handler and use it from there.
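A minimal sketch of that pattern, assuming the `lambda_runtime`, `tokio`, `aws-config`, and `aws-sdk-sqs` crates (the `QUEUE_URL` environment variable name here is just an illustration, not something from this thread):

```rust
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde_json::Value;

// The handler borrows the shared client; nothing expensive happens
// per invocation.
async fn handler(
    client: &aws_sdk_sqs::Client,
    event: LambdaEvent<Value>,
) -> Result<Value, Error> {
    // Hypothetical env var for illustration only.
    let queue_url = std::env::var("QUEUE_URL")?;
    client
        .send_message()
        .queue_url(queue_url)
        .message_body(event.payload.to_string())
        .send()
        .await?;
    Ok(Value::Null)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // One-time setup: config loading and client construction happen
    // once, during cold start, not once per message.
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_sqs::Client::new(&config);
    let client_ref = &client;

    // Each invocation closure captures a reference to the same client.
    run(service_fn(move |event: LambdaEvent<Value>| async move {
        handler(client_ref, event).await
    }))
    .await
}
```

Because the client is `Send + Sync` and cheap to clone or borrow, the same instance can also be shared across concurrent sends within one invocation.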
About SQS:
About Latency:
you can set up …