StackOverflowError when chaining multiple Unis #1107
-
Hi, there seems to be a limit to the number of Unis I can chain. See this code:

```java
AtomicLong counter = new AtomicLong();
Uni<Void> uni = Uni.createFrom().voidItem();
for (int i = 0; i < 1000; i++) {
    uni = uni.chain(() -> Uni.createFrom().item(42).invoke(counter::addAndGet).replaceWithVoid());
}
uni.await().indefinitely();
System.out.println(counter.get());
```

Chaining about 500 Unis works, but above 600 it throws a StackOverflowError. I found this error when using Quarkus 2.13.
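A rough stdlib-only analogue of what happens (no Mutiny needed): each `chain` wraps the previous stage, and when the pipeline finally runs, the whole chain unwinds as nested calls, so a long enough chain exhausts the thread stack. The class and the nesting count below are illustrative, not taken from the Quarkus code:

```java
public class DeepNesting {
    public static void main(String[] args) {
        // Build a chain of 1,000,000 nested Runnables, each delegating
        // to the previous one -- analogous to a long chain of Unis.
        Runnable r = () -> {};
        for (int i = 0; i < 1_000_000; i++) {
            Runnable prev = r;
            r = () -> prev.run();
        }
        try {
            // Running the outermost Runnable recurses through every
            // wrapper, one stack frame (at least) per level.
            r.run();
            System.out.println("completed");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError");
        }
    }
}
```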
Replies: 9 comments
-
/cc @cescoffier
-
There's indeed a point where calls can stack up, pretty much like in plain imperative code (but worse, because of the design of reactive streams). Chaining 1000 Unis this way builds a very deep call chain. I don't know the Redis client well, but isn't there a way to batch operations at the client level rather than doing so at the Mutiny level, sending properly batched requests instead of many individual ones? Ultimately you might look at JVM tuning.
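For reference, the JVM tuning in question would be raising the per-thread stack size with the standard `-Xss` flag (the jar name below is a placeholder):

```shell
# Raise the per-thread stack size; the default is platform-dependent,
# typically in the 512 KB - 1 MB range.
java -Xss8m -jar my-app.jar
```

This only moves the limit, though; restructuring the pipeline so it doesn't nest is the real fix.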
-
We don't have "good" pipelining support at the moment (for Redis).
-
Yes, bulk insert (https://redis.io/docs/manual/pipelining/), but for thousands of commands. @cescoffier I think you answered me when I asked the same here: https://stackoverflow.com/questions/73694703/how-to-use-redis-pipelining. The solution is to use:

```java
for (int i = 0; i < 1000; i++) {
    redis.set("mykey", i, res -> {});
}
```

This does NOT work with the Quarkus client; it uses all the connections available in the pool. I think it should be mentioned in the documentation that there is currently no good support for pipelining, since it is an essential feature; I really assumed it was available. By the way, there is this config available:

I think this is misleading; the client only does pipelining when using
-
I meant that we don't have good pipelining support in the Redis data source. The underlying client provides a batch operation:

```java
@Inject Redis redis;
// ...
List<Request> requests = ...
redis.batch(requests);
// ...
```

The main issue with pipelining is collecting the deserializers. We do that for transactions, but for batching thousands of requests the cost would be too high.
-
Just tested that client (io.vertx.mutiny.redis.client.Redis) and it works well. The documentation only shows these clients, though:

```java
@ApplicationScoped
public class RedisExample {
    @Inject ReactiveRedisDataSource reactiveDataSource;
    @Inject RedisDataSource redisDataSource;
    @Inject RedisAPI redisAPI;
    // ...
}
```

I didn't know I could use this even-lower-level client 😕

UPDATE: So I am using the Redis client with

Once an error is returned from Redis, the error is thrown in the Uni and the other responses are lost. So there is no way to access the other responses? Redis responds with all responses, even if there was an error for one of the commands. For example, here I am passing an incorrect command at the end, but good commands for which I get responses:
-
This API is documented; see https://quarkus.io/guides/redis-reference#apis. About the failure, this is how the
-
BTW, not really pipelining, but this works:

```java
// Collect the Unis first, then join them so they share one connection.
List<Uni<Void>> unis = new ArrayList<>();
rds.withConnection(redis -> {
    for (int i = 0; i < 5000; i++) {
        unis.add(redis.value(Integer.class).set("key-" + i, i));
    }
    return Uni.join().all(unis).andCollectFailures()
        .replaceWithVoid();
}).await().indefinitely();
```
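The key property of `Uni.join().all(...).andCollectFailures()` is that every operation runs to completion and failures are gathered, rather than the first error discarding the rest. A stdlib-only sketch of the same join-and-collect-failures pattern with `CompletableFuture` (class and values are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class JoinAll {
    public static void main(String[] args) {
        // Start all operations eagerly; one of them (n == 3) fails.
        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            int n = i;
            futures.add(CompletableFuture.supplyAsync(() -> {
                if (n == 3) throw new IllegalStateException("boom-" + n);
                return n;
            }));
        }
        // Wait for every future, collecting failures instead of
        // aborting on the first one -- so the good results survive.
        List<Integer> results = new ArrayList<>();
        List<Throwable> failures = new ArrayList<>();
        for (CompletableFuture<Integer> f : futures) {
            try {
                results.add(f.join());
            } catch (Exception e) {
                failures.add(e.getCause());
            }
        }
        System.out.println("results=" + results + " failures=" + failures.size());
    }
}
```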
-
This really solves it, thanks!