Describe the bug
On rare occasions, NettyConnectionPool will crash the fiber when attempting to refresh a connection pulled from the pool.
The underlying failure is java.util.NoSuchElementException: READ_TIMEOUT_HANDLER (see stacktrace below).
My hypothesis is that a Channel that is in the process of being closed and released is pulled from the pool after the internal Netty logic has already removed all handlers from its pipeline. That would explain why ChannelPipeline.replace fails with a NoSuchElementException.
My line of thinking is based on this Stack Overflow comment. It is the only scenario I could think of that would cause a handler we know should be on a Channel's pipeline to disappear.
I will have a PR out for this soon that wraps ChannelPipeline.replace in ZIO.attempt so that the failure no longer crashes the fiber and is correctly retried by the subsequent NettyConnectionPool logic if the replace fails.
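For illustration, here is a minimal sketch of what I have in mind, reusing the handler name and method name that appear in the stacktrace below (READ_TIMEOUT_HANDLER, refreshIdleTimeoutHandler); the actual signature in NettyConnectionPool may differ:

```scala
import io.netty.channel.Channel
import io.netty.handler.timeout.ReadTimeoutHandler
import zio._

import java.util.concurrent.TimeUnit

object RefreshSketch {
  private val ReadTimeoutHandlerName = "READ_TIMEOUT_HANDLER"

  // Wrapping the pipeline mutation in ZIO.attempt turns the
  // NoSuchElementException into a recoverable failure instead of a defect
  // that crashes the fiber, so the pool logic can retry with another Channel.
  def refreshIdleTimeoutHandler(channel: Channel, timeout: Duration): Task[Unit] =
    ZIO.attempt {
      channel.pipeline().replace(
        ReadTimeoutHandlerName,
        ReadTimeoutHandlerName,
        new ReadTimeoutHandler(timeout.toMillis, TimeUnit.MILLISECONDS)
      )
    }.unit
}
```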
I am also refactoring the logic so that it always calls Channel.isOpen to determine whether the Channel in question is usable. A rough sketch of that check follows.
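This is only a sketch of the isOpen check, with acquireChannel and invalidate as hypothetical stand-ins for the pool internals in ZioNettyConnectionPool.get:

```scala
import io.netty.channel.Channel
import zio._

object ChannelCheckSketch {
  // If the pooled Channel is already closed, invalidate it and retry so the
  // caller always ends up with an open Channel (or a plain failure, never a defect).
  def openChannel(
    acquireChannel: Task[Channel],    // hypothetical: pulls a Channel from the pool
    invalidate: Channel => UIO[Unit]  // hypothetical: evicts a dead Channel from the pool
  ): Task[Channel] =
    acquireChannel.flatMap { ch =>
      if (ch.isOpen) ZIO.succeed(ch)
      else invalidate(ch) *> ZIO.fail(new IllegalStateException("pooled Channel was closed"))
    }.retry(Schedule.recurs(3))
}
```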
To Reproduce
Difficult to reproduce, as it is most likely a race condition that only shows up rarely or under heavy load.
Expected behaviour
The fiber should never crash; if the Channel is closed, a fresh, open Channel should be returned instead.
Stacktrace
Exception in thread "zio-fiber-1467137482,1469468003" java.util.NoSuchElementException: READ_TIMEOUT_HANDLER
at io.netty.channel.DefaultChannelPipeline.getContextOrDie(DefaultChannelPipeline.java:1022)
at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:464)
at zio.http.netty.client.NettyConnectionPool$.zio$http$netty$client$NettyConnectionPool$$$refreshIdleTimeoutHandler(NettyConnectionPool.scala:159)
at zio.http.netty.client.NettyConnectionPool$ZioNettyConnectionPool.get$$anonfun$1$$anonfun$2$$anonfun$2(NettyConnectionPool.scala:260)
at scala.Option.fold(Option.scala:263)
at zio.http.netty.client.NettyConnectionPool$ZioNettyConnectionPool.get$$anonfun$1$$anonfun$2(NettyConnectionPool.scala:260)
(removed internal stack info -scott)
at zio.http.HandlerAspect.applyHandler(HandlerAspect.scala:133)
at zio.http.HandlerAspect.applyHandler(HandlerAspect.scala:136)
at zio.http.HandlerAspect.applyHandler(HandlerAspect.scala:133)
at zio.http.HandlerAspect.applyHandler(HandlerAspect.scala:136)
Additional context