feat: not return UnexpectedEof when server drops without close_notify #847


Merged: 2 commits, Jun 30, 2025

Conversation

jmwample (Contributor)

This PR extends the change made in #743 to also cover client connections that are closed by servers that drop the connection without sending close_notify while the client has nothing to send.

This is motivated by server H2 implementations that send GOAWAY frames and then close the connection without a close_notify in longer-lived applications (specifically DoH).

I originally reported this as an issue against hickory-dns (hickory-dns/hickory-dns#3064), but after discussion with @djc the real underlying issue appears to be how this kind of intentional connection close is handled within the H2 client.
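The intended behavior can be sketched as follows. This is a hypothetical, stdlib-only illustration of the decision the PR describes, not the actual h2 internals: the function name, parameters, and structure are assumptions for clarity. The idea is that an EOF from the transport is tolerated as a clean shutdown only when the server has already announced closure via GOAWAY(NO_ERROR) and the client has nothing left to send.

```rust
use std::io::{Error, ErrorKind};

// Hypothetical sketch (not real h2 code): decide whether an I/O error
// from the transport should be surfaced to the caller or treated as a
// graceful end of the connection.
fn handle_io_error(
    err: Error,
    goaway_no_error_received: bool, // server already sent GOAWAY(NO_ERROR)
    has_pending_sends: bool,        // client still has frames to write
) -> Result<(), Error> {
    if err.kind() == ErrorKind::UnexpectedEof
        && goaway_no_error_received
        && !has_pending_sends
    {
        // The server announced it was done; the missing TLS close_notify
        // is tolerated and the connection ends cleanly.
        return Ok(());
    }
    // Any other error, or an EOF without a prior GOAWAY (e.g. a server
    // crash), is still reported to the client.
    Err(err)
}

fn main() {
    let eof = || Error::new(ErrorKind::UnexpectedEof, "eof");
    // EOF after GOAWAY(NO_ERROR) with nothing to send: clean close.
    assert!(handle_io_error(eof(), true, false).is_ok());
    // EOF without a prior GOAWAY (sudden crash): still an error.
    assert!(handle_io_error(eof(), false, false).is_err());
    println!("ok");
}
```

This also captures the concern raised later in the thread: an abrupt EOF with no preceding GOAWAY (for example, a crashed server process) should still surface as an error.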

@jmwample (Contributor, Author)

Example logs:

2025-06-23T16:53:42.856860Z TRACE Connection{peer=Client}:poll: h2::proto::connection: connection.state=Open
2025-06-23T16:53:42.856908Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:53:42.856958Z TRACE Connection{peer=Client}:poll: h2::proto::streams::prioritize: poll_complete
2025-06-23T16:53:42.856972Z TRACE Connection{peer=Client}:poll: h2::proto::streams::prioritize: schedule_pending_open
2025-06-23T16:53:42.857003Z TRACE Connection{peer=Client}:poll:FramedWrite::flush: h2::codec::framed_write: flushing buffer
2025-06-23T16:53:42.857051Z TRACE Connection{peer=Client}:poll: h2::proto::connection: connection.state=Open
2025-06-23T16:53:42.857081Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:53:42.857114Z TRACE Connection{peer=Client}:poll: h2::proto::streams::prioritize: poll_complete
2025-06-23T16:53:42.857131Z TRACE Connection{peer=Client}:poll: h2::proto::streams::prioritize: schedule_pending_open
2025-06-23T16:53:42.857161Z TRACE Connection{peer=Client}:poll:pop_frame:popped{stream.id=StreamId(3) stream.state=State { inner: Closed(ScheduledLibraryReset(CANCEL)) }}: h2::proto::streams::prioritize: is_pending_reset=true
2025-06-23T16:53:42.857185Z TRACE Connection{peer=Client}:poll:pop_frame:popped{stream.id=StreamId(3) stream.state=State { inner: Closed(ScheduledLibraryReset(CANCEL)) }}: h2::proto::streams::prioritize: pop_frame; frame=Reset { stream_id: StreamId(3), error_code: CANCEL }
2025-06-23T16:53:42.857205Z TRACE Connection{peer=Client}:poll:pop_frame:popped{stream.id=StreamId(3) stream.state=State { inner: Closed(ScheduledLibraryReset(CANCEL)) }}: h2::proto::streams::counts: transition_after; stream=StreamId(3); state=State { inner: Closed(Error(Reset(StreamId(3), CANCEL, Library)))}; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=1
2025-06-23T16:53:42.857233Z TRACE Connection{peer=Client}:poll:pop_frame:popped{stream.id=StreamId(3) stream.state=State { inner: Closed(ScheduledLibraryReset(CANCEL)) }}: h2::proto::streams::counts: dec_num_streams; stream=StreamId(3)
2025-06-23T16:53:42.857261Z TRACE Connection{peer=Client}:poll: h2::proto::streams::prioritize: writing frame=Reset { stream_id: StreamId(3), error_code: CANCEL }
2025-06-23T16:53:42.857285Z DEBUG Connection{peer=Client}:poll:FramedWrite::buffer{frame=Reset { stream_id: StreamId(3), error_code: CANCEL }}: h2::codec::framed_write: send frame=Reset { stream_id: StreamId(3), error_code: CANCEL }
2025-06-23T16:53:42.857305Z TRACE Connection{peer=Client}:poll:FramedWrite::buffer{frame=Reset { stream_id: StreamId(3), error_code: CANCEL }}: h2::frame::reset: encoding RESET; id=StreamId(3) code=CANCEL
2025-06-23T16:53:42.857325Z TRACE Connection{peer=Client}:poll:FramedWrite::buffer{frame=Reset { stream_id: StreamId(3), error_code: CANCEL }}: h2::codec::framed_write: encoded reset rem=13
2025-06-23T16:53:42.857355Z TRACE Connection{peer=Client}:poll: h2::proto::streams::prioritize: schedule_pending_open
2025-06-23T16:53:42.857384Z TRACE Connection{peer=Client}:poll:FramedWrite::flush: h2::codec::framed_write: queued_data_frame=false
2025-06-23T16:53:42.857471Z TRACE Connection{peer=Client}:poll:FramedWrite::flush: h2::codec::framed_write: flushing buffer
2025-06-23T16:54:12.880542Z TRACE Connection{peer=Client}:poll: h2::proto::connection: connection.state=Open
2025-06-23T16:54:12.880654Z TRACE Connection{peer=Client}:poll: h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(Error(Reset(StreamId(1), CANCEL, Library))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=0
2025-06-23T16:54:12.880709Z TRACE Connection{peer=Client}:poll: h2::proto::streams::counts: transition_after; stream=StreamId(3); state=State { inner: Closed(Error(Reset(StreamId(3), CANCEL, Library))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=0
2025-06-23T16:54:12.880772Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:54:12.880917Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=29
2025-06-23T16:54:12.880953Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: decoding frame from 29B
2025-06-23T16:54:12.880978Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: frame.kind=GoAway
2025-06-23T16:54:12.881018Z DEBUG Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.881060Z TRACE Connection{peer=Client}:poll: h2::proto::connection: recv GOAWAY frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.881106Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:54:12.881126Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=29
2025-06-23T16:54:12.881154Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: decoding frame from 29B
2025-06-23T16:54:12.881177Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: frame.kind=GoAway
2025-06-23T16:54:12.881207Z DEBUG Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.881234Z TRACE Connection{peer=Client}:poll: h2::proto::connection: recv GOAWAY frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.881273Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:54:12.881320Z DEBUG Connection{peer=Client}:poll: h2::proto::connection: Connection::poll; IO error error=UnexpectedEof
2025-06-23T16:54:12.881358Z TRACE h2::proto::streams::streams: Streams::recv_eof
2025-06-23T16:54:12.881510Z  WARN hickory_proto::h2::h2_client_stream: h2 connection failed: peer closed connection without sending TLS close_notify: https://docs.rs/rustls/latest/rustls/manual/_03_howto/index.html#unexpected-eof
2025-06-23T16:54:12.881563Z TRACE Connection{peer=Client}:poll: h2::proto::connection: connection.state=Open
2025-06-23T16:54:12.881616Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:54:12.881669Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=29
2025-06-23T16:54:12.881699Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: decoding frame from 29B
2025-06-23T16:54:12.881724Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: frame.kind=GoAway
2025-06-23T16:54:12.881763Z DEBUG Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.881792Z TRACE Connection{peer=Client}:poll: h2::proto::connection: recv GOAWAY frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.881832Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:54:12.881853Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=29
2025-06-23T16:54:12.881882Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: decoding frame from 29B
2025-06-23T16:54:12.881906Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=29}: h2::codec::framed_read: frame.kind=GoAway
2025-06-23T16:54:12.881937Z DEBUG Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.881966Z TRACE Connection{peer=Client}:poll: h2::proto::connection: recv GOAWAY frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(3), debug_data: b"idle timeout" }
2025-06-23T16:54:12.882004Z TRACE Connection{peer=Client}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2025-06-23T16:54:12.882038Z DEBUG Connection{peer=Client}:poll: h2::proto::connection: Connection::poll; IO error error=UnexpectedEof
2025-06-23T16:54:12.882067Z TRACE h2::proto::streams::streams: Streams::recv_eof
2025-06-23T16:54:12.882139Z  WARN hickory_proto::h2::h2_client_stream: h2 connection failed: peer closed connection without sending TLS close_notify: https://docs.rs/rustls/latest/rustls/manual/_03_howto/index.html#unexpected-eof

@seanmonstar (Member)

This probably makes sense (at least, nothing wrong comes immediately to mind!). Looks like CI didn't like a thing 🤓

@jmwample jmwample force-pushed the jmwample/client-eof-close branch from 5a3477a to bbae3d3 Compare June 23, 2025 21:30
@jmwample jmwample force-pushed the jmwample/client-eof-close branch from bbae3d3 to 76ee7bb Compare June 24, 2025 13:23
@jmwample (Contributor, Author)

CI issues addressed 👍

@seanmonstar (Member)

Just looked at this again, and I think this is mostly right. One concern I thought of: if we got this error not from a TLS shutdown, but from a server process that crashed suddenly without even sending a GOAWAY frame, shouldn't we report that error to the client?

@jmwample (Contributor, Author)

I added a change so that on clients the EOF is propagated unless the server has sent a GoAway<NoReason>. Based on the trace logs, the test that I added properly handles a GOAWAY frame followed by an EOF sent by the server.

If I don't include the tokio::time::sleep, the mock tunnel closes before the client is able to read the GOAWAY frame. I am not sure whether there is a better way to force the client to process the frames before the mock tunnel is closed with close_without_notify.
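One way to avoid the timing dependency is to make the transport itself deterministic: a mock that serves its buffered frame bytes and only then reports EOF guarantees the client observes the GOAWAY before the close, with no sleep needed. The following is a stdlib-only sketch of that idea; the type name and setup are assumptions, not the actual h2 test harness.

```rust
use std::io::{Cursor, Read};

// Hypothetical mock transport: yields pre-buffered bytes (e.g. an
// encoded GOAWAY frame), then returns Ok(0), which models the server
// dropping the connection without a TLS close_notify.
struct MockTransport {
    data: Cursor<Vec<u8>>,
}

impl Read for MockTransport {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        // Cursor drains the buffered bytes first; once exhausted it
        // returns Ok(0), i.e. EOF, only after everything was readable.
        self.data.read(&mut buf[..])
    }
}

fn main() {
    // Stand-in payload for the buffered frame bytes.
    let mut transport = MockTransport {
        data: Cursor::new(vec![0x01, 0x02, 0x03]),
    };
    let mut buf = [0u8; 8];
    // All buffered bytes are delivered before EOF is ever reported.
    let n = transport.read(&mut buf).unwrap();
    assert_eq!(n, 3);
    assert_eq!(transport.read(&mut buf).unwrap(), 0); // EOF only after drain
    println!("ok");
}
```

Because EOF is sequenced strictly after the buffered data, the ordering the test needs holds by construction rather than by a sleep winning a race.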

@seanmonstar seanmonstar merged commit 97704b5 into hyperium:master Jun 30, 2025
6 checks passed
@jmwample jmwample deleted the jmwample/client-eof-close branch June 30, 2025 18:05