Switch to Flux.flatMapSequential(…) to prevent backpressure shaping #4550

Closed
wants to merge 2 commits into main from issue/4543

Conversation

mp911de (Member) commented Nov 6, 2023

We now use Flux.flatMapSequential(…) instead of concatMap, as concatMap reduces the request size to 1. The reduced backpressure/request size lowers parallelism and impacts the batch size, fetching 2 documents instead of honoring the actual backpressure.

flatMapSequential does not tamper with the requested amount while retaining the sequence order.

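For context, here is a minimal sketch of what such a swap looks like. The class and method names are hypothetical and this is not the actual Spring Data MongoDB code; it only illustrates that both operators preserve emission order, so the replacement differs mainly in how demand is propagated upstream.

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class OrderedMappingSketch {

    // Before: concatMap preserves the source order but shapes the request it
    // sends upstream; per this PR, that reduced the effective cursor batch size.
    Flux<String> mapWithConcatMap(Flux<String> documents) {
        return documents.concatMap(this::convert);
    }

    // After: flatMapSequential also emits in source order, but does not reduce
    // the requested amount, so the actual downstream demand reaches the source.
    Flux<String> mapWithFlatMapSequential(Flux<String> documents) {
        return documents.flatMapSequential(this::convert);
    }

    // Hypothetical per-document conversion step returning a Publisher.
    Mono<String> convert(String document) {
        return Mono.just(document.toUpperCase());
    }
}
```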
@mp911de mp911de added the type: task A general task label Nov 6, 2023
@mp911de mp911de linked an issue Nov 6, 2023 that may be closed by this pull request
christophstrobl pushed a commit that referenced this pull request Nov 7, 2023

We now use Flux.flatMapSequential(…) instead of concatMap, as concatMap reduces the request size to 1. The reduced backpressure/request size lowers parallelism and impacts the batch size, fetching 2 documents instead of honoring the actual backpressure.

flatMapSequential does not tamper with the requested amount while retaining the sequence order.

Closes: #4543
Original Pull Request: #4550
christophstrobl pushed a commit that referenced this pull request Nov 7, 2023
christophstrobl pushed a commit that referenced this pull request Nov 7, 2023
christophstrobl (Member) commented

Merged to main and backported to 4.0.x and 4.1.x.

@mp911de mp911de deleted the issue/4543 branch November 7, 2023 13:18
Labels
type: task A general task
Development

Successfully merging this pull request may close these issues.

Reactive request is not properly propagated as cursor batch size
2 participants