
Set the max bytes to be read from a connection #118

Open · wants to merge 2 commits into base: main
Conversation

@edwellbrook commented Apr 16, 2021

Add a NIOTSChannelOption to configure the maximum number of bytes to be read from a connection.

Motivation:

Currently the max number of bytes is hard-coded to 8KB. In some scenarios API consumers may want to change this to better suit their project. See also: #92

Modifications:

  • Added a new struct to NIOTSChannelOptions.Types to represent the new option
  • Added a new property to ConnectionChannelOptions to store the default or configured value
  • Use the stored value when reading from the connection

Result:

  • Nothing will change for existing setups
  • A new option will be available to API consumers to configure the max bytes to be read from a connection
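With the changes described above, configuring the option might look like the following. This is a hedged sketch: the option name `maxReceiveBytes` comes from this PR's diff, but the surrounding bootstrap usage is assumed standard `NIOTSConnectionBootstrap` API, not something the PR itself shows.

```swift
// Sketch only: assumes NIOTransportServices with this PR applied.
import NIOTransportServices

let group = NIOTSEventLoopGroup()
let bootstrap = NIOTSConnectionBootstrap(group: group)
    // Raise the per-read cap from the hard-coded 8 KiB default to 64 KiB.
    .channelOption(NIOTSChannelOptions.maxReceiveBytes, value: 64 * 1024)
```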

@swift-server-bot

Can one of the admins verify this patch?

@Lukasa (Collaborator) commented Apr 16, 2021

Thanks for this patch!

@weissi In your original issue you suggested using RecvByteBufferAllocator. I kinda think that would be the better choice here as it'd be more compatible with other NIO channels: what do you think?

@weissi weissi requested a review from Lukasa April 16, 2021 17:40
@@ -28,6 +28,8 @@ public struct NIOTSChannelOptions {

public static let currentPath = NIOTSChannelOptions.Types.NIOTSCurrentPathOption()

public static let maxReceiveBytes = NIOTSChannelOptions.Types.NIOMaxReceiveBytesOption()
@weissi (Member) commented on the diff:

@Lukasa I think a separate option (as suggested in this PR) is actually better than matching the socket channel with a RecvAllocator. We can't sensibly control the strategy that NW.fw will use, so I'm actually happy with this as is. Wdyt?

@weissi (Member) commented Apr 16, 2021

> Thanks for this patch!
>
> @weissi In your original issue you suggested using RecvByteBufferAllocator. I kinda think that would be the better choice here as it'd be more compatible with other NIO channels: what do you think?

Oh sorry, I missed this. So yeah, initially I thought a RecvByteBufferAllocator would be better (because it's more in line with NIO). However, I've now changed my mind, because we may pointlessly over-allocate. Let's say the user installs the AdaptiveRecvByteBufferAllocator, that one chooses 64k, but then NW.fw only hands us, say, 1k. Then we pointlessly over-allocated (we'd copy just 1k into the 64k buffer). In NIO on sockets that makes sense, because making a better-fitting buffer would mean an extra alloc+copy, but with NW.fw we need the alloc+copy anyway, so I'd think picking the buffer that fits best makes more sense?
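To make the over-allocation concrete, here is the arithmetic from the 64k/1k example above as a standalone Swift snippet (pure arithmetic, no NIO involved):

```swift
// Worked example of the waste described above: an adaptive allocator
// sizes the read buffer at 64 KiB, but NW.fw delivers only 1 KiB.
let allocated = 64 * 1024          // buffer size the allocator picked
let delivered = 1 * 1024           // bytes NW.fw actually handed over
let wasted = allocated - delivered // bytes allocated but never filled
let wastedFraction = Double(wasted) / Double(allocated)  // 0.984375
```

So over 98% of the buffer would go unused, while the alloc+copy happens regardless.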

@weissi (Member) commented Apr 16, 2021

@Lukasa If we could design RecvByteBufferAllocator again, then we could make it have a method that returns the next size (instead of just func buffer(allocator: ByteBufferAllocator) -> ByteBuffer) and use that returned size for NW.fw.

What we could obviously do is to add an optional requirement func nextAllocationSize() -> Int? and use that for NIOTS?
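The "optional requirement" idea could be sketched in isolation like this. Note these are standalone mimic types: `RecvByteBufferAllocatorLike` is a stand-in for NIO's actual `RecvByteBufferAllocator` protocol, used only to illustrate the defaulted-extension trick.

```swift
// A Swift "optional" protocol requirement via a defaulted extension:
// allocators that can predict their next size return it; legacy
// conformances inherit the nil default unchanged.
protocol RecvByteBufferAllocatorLike {
    /// The size the next read buffer would have, if predictable.
    func nextAllocationSize() -> Int?
}

extension RecvByteBufferAllocatorLike {
    // Default implementation keeps the requirement effectively optional.
    func nextAllocationSize() -> Int? { nil }
}

struct FixedSizeAllocator: RecvByteBufferAllocatorLike {
    let capacity: Int
    func nextAllocationSize() -> Int? { capacity }
}

// An existing conformance that predates the new requirement still compiles.
struct LegacyAllocator: RecvByteBufferAllocatorLike {}
```

NIOTS could then ask `nextAllocationSize()` and fall back to its current hard-coded default whenever the answer is `nil`.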

@Lukasa (Collaborator) commented Apr 19, 2021

Hmm. We kinda need a new allocator protocol for io_uring anyway, don't we? Or am I misunderstanding that?

@weissi (Member) commented Apr 19, 2021

> Hmm. We kinda need a new allocator protocol for io_uring anyway, don't we? Or am I misunderstanding that?

I see, yes, that is a good point!

What about this PR then: would we wait until we have a new allocator for io_uring (one that would support not copying the (Dispatch)Data at all), or should we move forward with either .maxReceiveBytes or adding a .nextAllocSize() to the RecvAllocator?

@Lukasa (Collaborator) commented Apr 19, 2021

I think the new allocator protocol we need for io_uring is potentially very similar to the thing we want here. Might be good to address both tasks in one go? You could always compose the old protocol from the new one, right, so pre-existing channels can kinda trivially support the new API.

I don't think adding a nextAllocSize is the way to go: I think a new type is probably better.
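One way the composition described here could look is the sketch below: a new size-first protocol, with the old buffer-returning shape derived from it. All names are illustrative (not NIO API), and `[UInt8]` stands in for `ByteBuffer` to keep the sketch self-contained.

```swift
// New size-first protocol: NW.fw / io_uring backends ask for a size and
// report what actually arrived; no buffer is allocated up front.
protocol NextReadSizeAllocator {
    mutating func nextReadSize() -> Int
    mutating func record(actualReadBytes: Int)
}

// The old buffer-returning shape composes on top of the new one:
// socket-based channels simply allocate exactly nextReadSize() bytes,
// so pre-existing channels support the new API trivially.
extension NextReadSizeAllocator {
    mutating func buffer(allocator: (Int) -> [UInt8]) -> [UInt8] {
        allocator(nextReadSize())
    }
}

struct FixedAllocator: NextReadSizeAllocator {
    let size: Int
    mutating func nextReadSize() -> Int { size }
    mutating func record(actualReadBytes: Int) {}
}
```

An adaptive strategy would use `record(actualReadBytes:)` to grow or shrink the size it reports next time, which maps cleanly onto both NW.fw's callback delivery and io_uring's provided-buffer model.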

@weissi (Member) commented Apr 19, 2021

> I think the new allocator protocol we need for io_uring is potentially very similar to the thing we want here.

Yes, agreed.

> Might be good to address both tasks in one go?

Fine by me. I think we should take the action to start designing that relatively soon though.

> I don't think adding a nextAllocSize is the way to go: I think a new type is probably better.

Ok, sounds good to me. That said, an (optional) nextAllocSize doesn't look like too big of a wrinkle to me. But then, I do support your idea of designing a new allocator that supports this and io_uring in a really good way!
