Impact
All servers running blaze-core <= 0.14.14, including blaze-http and http4s-blaze-server users, are affected.
Blaze accepts connections unconditionally on a dedicated thread pool. This has the net effect of amplifying degradation in services that are unable to handle their current request load, since incoming connections are still accepted and added to an unbounded queue. Each connection allocates a socket handle, which drains a scarce OS resource. This can also confound higher-level circuit breakers that work by detecting failed connections.
The vast majority of affected users are using it as part of http4s-blaze-server <= 0.21.16. http4s provides a mechanism for limiting open connections, but it is enforced inside the Blaze accept loop, after the connection has been accepted and the socket opened. Thus, the limit only bounds the number of connections that can be processed simultaneously, not the number of connections that can be held open.
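The distinction matters because a limit applied after accept() still allocates a socket handle for every client. The following simplified sketch (illustrative names only, not Blaze's or http4s's actual code) contrasts a limit enforced after accepting with one enforced before accepting.

```scala
import java.net.{ServerSocket, Socket}
import java.util.concurrent.{ConcurrentLinkedQueue, Semaphore}

// Simplified illustration only -- not Blaze's or http4s's actual implementation.
object AcceptLoopSketch {

  def handle(socket: Socket): Unit = () // application logic would go here

  // Hand the connection to a worker thread; release the permit when done.
  def process(socket: Socket, permits: Semaphore): Unit =
    new Thread(() => try handle(socket) finally { socket.close(); permits.release() }).start()

  // Vulnerable shape: every incoming connection is accepted, so every connection costs
  // a socket handle. The permit only bounds how many are processed concurrently;
  // the rest pile up in an unbounded queue, held open.
  def acceptThenLimit(server: ServerSocket, inFlight: Semaphore): Unit = {
    val pending = new ConcurrentLinkedQueue[Socket]()
    while (true) {
      pending.add(server.accept())                // socket handle is allocated here
      if (inFlight.tryAcquire()) process(pending.poll(), inFlight)
    }
  }

  // Bounded shape (the spirit of the 0.14.15 fix): stop admitting connections once
  // maxConnections sockets are open, so excess clients never consume an OS resource.
  def limitThenAccept(server: ServerSocket, maxConnections: Int): Unit = {
    val open = new Semaphore(maxConnections)
    while (true) {
      open.acquire()
      process(server.accept(), open)
    }
  }
}
```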
Patches
The issue is fixed in version 0.14.15 for NIO1SocketServerGroup. A maxConnections parameter is added, with a default value of 512. Concurrent connections beyond this limit are rejected. To run unbounded, which is not recommended, set maxConnections to a negative number.
The NIO2SocketServerGroup has no such setting and is now deprecated.
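For most affected users the bound is picked up through http4s once it depends on the patched blaze. Below is a minimal sketch assuming an http4s-blaze-server 0.21.x release that ships blaze >= 0.14.15; the withMaxConnections method name, constructor shape, and import paths are assumptions and may differ between http4s versions.

```scala
import cats.effect.{ExitCode, IO, IOApp}
import org.http4s.HttpRoutes
import org.http4s.dsl.io._
import org.http4s.implicits._
import org.http4s.server.blaze.BlazeServerBuilder

import scala.concurrent.ExecutionContext.global

object BoundedBlazeServer extends IOApp {

  private val routes = HttpRoutes.of[IO] {
    case GET -> Root / "ping" => Ok("pong")
  }

  def run(args: List[String]): IO[ExitCode] =
    BlazeServerBuilder[IO](global)        // 0.21.x-style constructor
      .bindHttp(8080, "0.0.0.0")
      .withMaxConnections(512)            // assumed builder knob exposing the new bound
      .withHttpApp(routes.orNotFound)
      .serve.compile.drain
      .as(ExitCode.Success)
}
```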
Workarounds
- An Nginx sidecar acting as a reverse proxy for the local http4s-blaze-server instance can apply a connection-limiting semantic before the sockets reach blaze-core; Nginx's connection bounding is both asynchronous and properly respects backpressure. A minimal configuration sketch follows this list.
- A similar sidecar strategy is viable for other non-HTTP services running on blaze-core.
- http4s-ember-server is an alternative to http4s-blaze-server, but does not yet have HTTP/2 or web socket support. Its performance in terms of requests per second is appreciably behind Blaze's, and, as the newest backend, it has substantially less industrial uptake.
- http4s-jetty is an alternative to http4s-blaze-server, but does not yet have web socket support. Its performance in terms of requests per second is somewhat behind Blaze’s, and despite Jetty's industrial adoption, the http4s integration has substantially less industrial uptake.
- http4s-tomcat is an alternative to http4s-blaze-server, but does not yet have HTTP/2 or web socket support. Its performance in terms of requests per second is somewhat behind Blaze's, and despite Tomcat's industrial adoption, the http4s integration has substantially less industrial uptake.
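A minimal sketch of the Nginx sidecar workaround mentioned above; the ports, zone name, and 512 cap are illustrative placeholders, and the snippet is assumed to be included from the http context (e.g. a conf.d file).

```nginx
# Illustrative sidecar config; ports, paths, and limits are placeholders.
# Shared-memory zone keyed by server name, i.e. a global cap for this virtual server.
limit_conn_zone $server_name zone=blaze_total:10m;

server {
    listen 8080;

    # Reject connections beyond the cap before they ever reach blaze-core.
    limit_conn blaze_total 512;

    location / {
        proxy_pass http://127.0.0.1:8081;   # local http4s-blaze-server instance
        proxy_http_version 1.1;
    }
}
```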
For more information
If you have any questions or comments about this advisory:
References