* Added a new global `set_connection_limit!` function for controlling the global connection limit that is applied to all requests. This is one way to resolve #1033. Passing `connect_limit` to individual requests now emits a deprecation warning; instead, call `HTTP.set_connection_limit!`, which updates the global value each time it is called (a usage sketch follows this list).
* Added a try-finally in `keepalive!` around our global IO lock usage, just for good housekeeping (see the lock-handling sketch below).
* Refactored `try_with_timeout` to use a `Channel` instead of the non-threaded `@async`; it's much simpler, seems cleaner, and lets us avoid `@async` when it isn't needed (a Channel-based sketch follows this list). Note that this includes a change in StreamRequest.jl that wraps all the actual write/read IO operations in a `fetch(@async dostuff())`, because that currently prevents code in the task from migrating across threads, which is important for OpenSSL usage where error handling is done per-thread. I don't love the solution, but it seems OK for now (see the sticky-task sketch below).
* Refactored a few of the stream IO functions so that we always know the number of bytes downloaded, whether held in memory or written to an IO, so we can log them and use them in verbose logging for bit-rate calculations (a small example follows this list).
* The big one: rewrote the internal implementation of the `ConnectionPool.ConnectionPools.Pod` `acquire`/`release` functions. Under really heavy workloads, there was a ton of contention on the Pod lock. I also observed at least one "hang" where GDB backtraces seemed to indicate that a task somehow failed/died/hung while trying to make a new connection _while holding the Pod lock_, which meant no other request could ever make progress. The new implementation includes a lock-free "fastpath" in which reusing an existing connection takes no lock at all; it uses a lock-free concurrent Stack implementation copied from JuliaConcurrent/ConcurrentCollections.jl (it doesn't seem actively maintained, and it's not much code, so it was simply copied). The rest of the `acquire`/`release` code is now modeled after `Base.Event`: releasing always acquires the lock, and slow-path acquires also take the lock, to ensure fairness and avoid deadlocks (a simplified sketch follows this list). I've included some benchmark results on a variety of heavy workloads [here](https://everlasting-mahogany-a5f.notion.site/Issue-heavy-load-perf-degradation-1cd275c75037481a9cd6378b8303cfb3) that show some great improvements, the bulk of which are attributable to reduced contention when acquiring/releasing connections during requests. The other key change in this rewrite is that we _do not_ hold any locks while _making new connections_, both to avoid the lock ever getting "stuck" and because it isn't necessary: the pod is only in charge of keeping counts and doesn't need to care whether a connection has actually been made yet (if making one fails, the slot is immediately released back and retried). Overall, the code is also _much_ simpler, which I think is a huge win, because the old code was always pretty scary to dig into.
* Added a new `logerrors::Bool=false` keyword argument that emits `@error` logs for errors that might otherwise be "swallowed" during retries; it can be helpful to at least see what kinds of errors are happening (usage sketch below).
* Added lots of metrics around the time spent in the various layers, read vs. write durations, etc.
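A minimal usage sketch for the first item; the limit value and URL are illustrative:

```julia
using HTTP

# Set the global connection limit once; it applies to all subsequent requests.
HTTP.set_connection_limit!(50)

# The old per-request keyword still works but now warns:
# HTTP.get("https://example.com"; connect_limit=50)  # deprecation warning

resp = HTTP.get("https://example.com")
```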
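For the `keepalive!` change, a minimal sketch of the try-finally pattern; `iolock` here is a stand-in name for the global IO lock, and the option-setting body is elided:

```julia
const iolock = ReentrantLock()  # stand-in for the global IO lock

function keepalive!(io)
    lock(iolock)
    try
        # ... set TCP keepalive options on the underlying socket ...
    finally
        unlock(iolock)  # always released, even if an error is thrown
    end
end
```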
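For the `try_with_timeout` refactor, a simplified sketch of the Channel-based idea (not HTTP.jl's exact code): the result, any exception, and a timeout all race to fill a single-slot `Channel`, and the caller just `take!`s whichever arrives first:

```julia
struct TimeoutError <: Exception
    timeout::Float64
end

function try_with_timeout(f, timeout)
    ch = Channel{Any}(1)
    timer = Timer(timeout) do _
        isready(ch) || put!(ch, TimeoutError(timeout))
    end
    Threads.@spawn begin
        try
            put!(ch, f())
        catch e
            isready(ch) || put!(ch, e)
        finally
            close(timer)
        end
    end
    x = take!(ch)
    x isa Exception && throw(x)
    return x
end

try_with_timeout(() -> (sleep(0.1); 42), 5.0)  # returns 42
# try_with_timeout(() -> sleep(10), 0.5)       # throws TimeoutError(0.5)
```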
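The StreamRequest.jl workaround leans on the fact that plain `@async` tasks are sticky (they stay on the thread where they started) while `Threads.@spawn` tasks may migrate; a bare illustration, with the IO work elided:

```julia
# Run the write/read IO work in a sticky task pinned to the current thread,
# keeping per-thread OpenSSL error state coherent; `fetch` waits for the
# task and propagates any failure.
result = fetch(@async begin
    # ... write the request body, then read the response ... (elided)
    "response"
end)
```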
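With the byte count always available, verbose logging can report a bit rate; an illustrative calculation:

```julia
nbytes  = 2_500_000                     # bytes downloaded (in memory or written to an IO)
elapsed = 1.25                          # seconds spent reading
rate_mbps = 8 * nbytes / elapsed / 1e6  # = 16.0 Mbit/s
@info "download complete" nbytes elapsed rate_mbps
```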
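For the Pod rewrite, a heavily simplified sketch of the scheme with illustrative names (`Pod`, `trypop!`, the `make` field, and all the bookkeeping here are stand-ins, not HTTP.jl's actual code): a Treiber-style lock-free stack serves the fastpath, the lock covers only slow-path counting, and no lock is held while a connection is actually made:

```julia
# Treiber-style lock-free stack, in the spirit of the code copied from
# JuliaConcurrent/ConcurrentCollections.jl.
mutable struct Node{T}
    value::T
    next::Union{Node{T},Nothing}
end

mutable struct ConcurrentStack{T}
    @atomic top::Union{Node{T},Nothing}
    ConcurrentStack{T}() where {T} = new{T}(nothing)
end

function Base.push!(s::ConcurrentStack{T}, v::T) where {T}
    node = Node{T}(v, nothing)
    while true
        node.next = @atomic s.top
        (@atomicreplace s.top node.next => node).success && return s
    end
end

function trypop!(s::ConcurrentStack)
    while true
        old = @atomic s.top
        old === nothing && return nothing
        (@atomicreplace s.top old => old.next).success && return old.value
    end
end

# Illustrative pod: tracks a count against a limit; `make` opens a new
# connection and is never called while the lock is held.
mutable struct Pod{T}
    limit::Int
    count::Int
    idle::ConcurrentStack{T}
    lock::ReentrantLock
    cond::Threads.Condition
    make::Function
    function Pod{T}(limit, make) where {T}
        lk = ReentrantLock()
        new{T}(limit, 0, ConcurrentStack{T}(), lk, Threads.Condition(lk), make)
    end
end

function acquire(pod::Pod)
    while true
        # Fastpath: reuse an idle connection without taking any lock.
        while (conn = trypop!(pod.idle)) !== nothing
            isopen(conn) && return conn   # stale connections fall through
        end
        lock(pod.lock)                    # slowpath: bookkeeping only
        try
            if pod.count < pod.limit
                pod.count += 1
                break                     # slot owned; connect below
            end
            wait(pod.cond)                # at the limit: wait for a release
        finally
            unlock(pod.lock)
        end
    end
    # No lock is held while the connection is made; if `make` throws, the
    # caller releases the slot immediately and retries.
    return pod.make()
end

function release(pod::Pod, conn; reuse::Bool=true)
    keep = reuse && isopen(conn)
    keep && push!(pod.idle, conn)         # back onto the fastpath stack
    lock(pod.lock)                        # releasing always takes the lock
    try
        keep || (pod.count -= 1)          # a dead connection frees a slot
        notify(pod.cond)                  # wake slow-path waiters
    finally
        unlock(pod.lock)
    end
end
```

The key design point mirrors the description above: the pod only tracks a count against the limit, so a slow or failing connect can never wedge the lock for everyone else.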
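Finally, a usage sketch for `logerrors` (URL and retry count illustrative):

```julia
# With logerrors=true, each failed attempt is logged via @error even when a
# later retry succeeds, instead of being silently swallowed.
resp = HTTP.get("https://example.com/flaky"; retries=4, logerrors=true)
```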