ruff update, docs
joelb123 committed Feb 2, 2024
1 parent e68c43f commit f16e394
Showing 5 changed files with 59 additions and 55 deletions.
42 changes: 14 additions & 28 deletions README.md
@@ -223,8 +223,8 @@ among four different operating regimes:
to at least one server has occurred but not enough files
have been transferred so that all statistics can be calculated,
- **Updated**, where a sufficient number of transfers has
occurred to a server that file transfers may be
fully characterized.
occurred that file transfers may be characterized, either
for the collection of servers or for an individual server.

The optimistic rate at which _flardl_ launches requests for
a given server $j$ is given by the expectation rates for
@@ -265,37 +265,23 @@ given by the applicable value for $k_j$, testing is done
against four limits calculated by the methods in the [theory]
section:

- $D_{{\rm max}_j}$ the maximum per-server queue depth
which is an input parameter, revised downward if any
queue requests are rejected (default 100),
- $D_{\rm sat}$ the total queue depth at which the download
bit rate saturates or exceeds the maximum bit rate,
- $D_{{\rm crit}_j}$ the critical per-server queue depth,
calculated each session when updated information is available,
- $B_{\rm max}$ the maximum bandwidth allowed.
- The per-server queue depth must be less than the maximum
$D_{{\rm max}_j}$, an input parameter (default 100), revised
downward and stored for future use if any queue requests are
rejected,
- In the updated state with per-server stats available, the
per-server queue depth must be less than the calculated critical
per-server queue depth $D_{{\rm crit}_j}$,
- In the updated state, the total queue depth must be less than
the saturation queue depth, $D_{\rm sat}$, at which the
current download bit rate $B_{\rm cur}$ saturates,
- The current download bit rate must be less than $B_{\rm max}$,
the maximum bandwidth allowed.

If any of these limits is exceeded, a stochastic wait period
with mean equal to the inverse of the current per-server rate
$k_j$ is imposed until no limit is exceeded, as sketched below.
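A minimal sketch of that throttling logic follows. All names here
are hypothetical stand-ins, not actual _flardl_ API: `server` and
`session` are assumed objects carrying the quantities defined above.

```python
import asyncio
import random


async def hold_until_launchable(server, session):
    """Delay a request until all queue-depth and bandwidth limits clear.

    Illustrative only: attribute names are assumptions, not flardl's API.
    """

    def limit_exceeded() -> bool:
        return (
            server.queue_depth >= server.d_max  # per-server maximum depth
            or server.queue_depth >= server.d_crit  # critical depth (updated state)
            or session.total_queue_depth >= session.d_sat  # saturation depth
            or session.bit_rate >= session.b_max  # bandwidth cap
        )

    while limit_exceeded():
        # Stochastic wait with mean 1/k_j, the inverse of the
        # current per-server launch rate.
        await asyncio.sleep(random.expovariate(server.k_j))
```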

After enough files have come back from a server or set of
servers (a configurable parameter $N_{\rm min}$), _flardl_
fits the curve of observed network bandwidth versus queue
depth to obtain the effective download bit rate at saturation
$B_{\rm eff}$ and the total queue depth at saturation
$D_{\rm sat}$. Then, per server, _flardl_ fits the curves
of service times versus file sizes to the Equation of Time
to estimate server latencies $L_j$ and, if the server queue
depth $D_j$ is run up high enough, the critical queue depths
$D_{{\rm crit}_j}$. These estimates reflect local
network conditions, server policy, and overall server
load at time of request, so they are both adaptive and elastic.
These values form the basis for launching the remaining requests.
Servers with higher modal service rates (i.e., rates of serving
crappies) will spend less time waiting and thus stand a better
chance at nabbing an open queue slot, without penalizing servers
that happen to draw big downloads (whales).

### If File Sizes are Known

The adaptilastic algorithm assumes that file sizes are randomly ordered
18 changes: 18 additions & 0 deletions THEORY.md
@@ -97,3 +97,21 @@ latency $L_j$ and slope governed by an expression whose only
unknown is a near-constant related to acknowledgements. As queue
depth increases, transfer times are dominated by $H_{ij}$, the
time spent waiting to get to the head of the queue.
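Schematically (an illustrative restatement of the description above,
with assumed notation rather than the exact Equation of Time), the
service time for file $i$ on server $j$ behaves as

$$t_{ij} \approx H_{ij} + L_j + m(c_{\rm ack})\, s_i,$$

where $s_i$ is the file size, $L_j$ the latency intercept,
$m(c_{\rm ack})$ the slope whose only unknown is the near-constant
acknowledgement term $c_{\rm ack}$, and $H_{ij}$ the head-of-queue
wait that dominates at high queue depth.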

After enough files have come back from a server or set of
servers (a configurable parameter $N_{\rm min}$), _flardl_
fits the curve of observed network bandwidth versus queue
depth to obtain the effective download bit rate at saturation
$B_{\rm eff}$ and the total queue depth at saturation
$D_{\rm sat}$. Then, per server, _flardl_ fits the curves
of service times versus file sizes to the Equation of Time
to estimate server latencies $L_j$ and, if the server queue
depth $D_j$ is run up high enough, the critical queue depths
$D_{{\rm crit}_j}$. These estimates reflect local
network conditions, server policy, and overall server
load at time of request, so they are both adaptive and elastic.
These values form the basis for launching the remaining requests.
Servers with higher modal service rates (i.e., rates of serving
crappies) will spend less time waiting and thus stand a better
chance at nabbing an open queue slot, without penalizing servers
that happen to draw big downloads (whales).
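As an illustration of the first fit, one might use a simple saturating
form; the functional form, data values, and variable names below are
assumptions for the sketch, not _flardl_'s actual model.

```python
import numpy as np
from scipy.optimize import curve_fit


def saturating_bandwidth(d, b_eff, d_sat):
    """Bit rate rises with total queue depth d and saturates toward b_eff."""
    return b_eff * d / (d + d_sat)


# Made-up observations of (total queue depth, download bit rate in Mbit/s)
# standing in for the statistics collected after N_min transfers.
depths = np.array([1, 2, 4, 8, 16, 32])
rates = np.array([11.0, 20.0, 33.0, 50.0, 66.0, 79.0])

(b_eff, d_sat), _ = curve_fit(saturating_bandwidth, depths, rates, p0=(100.0, 10.0))
print(f"B_eff ~ {b_eff:.0f} Mbit/s at saturation depth D_sat ~ {d_sat:.1f}")
```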
36 changes: 18 additions & 18 deletions pdm.lock

Some generated files are not rendered by default.

12 changes: 7 additions & 5 deletions pyproject.toml
@@ -115,6 +115,9 @@ addopts = ["-x"]
[tool.ruff]
src = ['src', 'tests']
line-length = 88
target-version = 'py39'

[tool.ruff.lint]
select = [
'A',
'ARG',
@@ -143,16 +146,15 @@ select = [
'UP',
'W',
]
target-version = 'py39'

[tool.ruff.isort]
[tool.ruff.lint.isort]
force-single-line = true
lines-after-imports = 2

[tool.ruff.mccabe]
[tool.ruff.lint.mccabe]
max-complexity = 10

[tool.ruff.per-file-ignores]
[tool.ruff.lint.per-file-ignores]
"__init__.py" = ['F401']
"tests/*" = [
'D104',
@@ -164,7 +166,7 @@ max-complexity = 10
'S101'
]

[tool.ruff.pydocstyle]
[tool.ruff.lint.pydocstyle]
convention = 'google'

[build-system]
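Assembled from the added lines above, the ruff configuration after this
change reads roughly as follows, consistent with ruff 0.2's move of lint
settings under `[tool.ruff.lint]`. Lists are abbreviated where the diff
is truncated.

```toml
[tool.ruff]
src = ['src', 'tests']
line-length = 88
target-version = 'py39'

[tool.ruff.lint]
select = ['A', 'ARG', 'UP', 'W']  # abbreviated; full list truncated in the diff

[tool.ruff.lint.isort]
force-single-line = true
lines-after-imports = 2

[tool.ruff.lint.mccabe]
max-complexity = 10

[tool.ruff.lint.per-file-ignores]
"__init__.py" = ['F401']
"tests/*" = ['D104', 'S101']  # abbreviated

[tool.ruff.lint.pydocstyle]
convention = 'google'
```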
6 changes: 2 additions & 4 deletions src/flardl/dict_to_indexed_list.py
@@ -39,16 +39,14 @@ def zip_dict_to_indexed_list(
"""Zip on the longest non-string iterables, adding an index."""
ret_list = []
iterable_args = [k for k in arg_dict if isinstance(arg_dict[k], NonStringIterable)]
idx = 0
for iter_tuple in zip_longest(
for idx, iter_tuple in enumerate(zip_longest(
*[cast(Iterable, arg_dict[k]) for k in iterable_args]
):
)):
args: dict[str, SIMPLE_TYPES] = {INDEX_KEY: idx}
for key in arg_dict:
if key in iterable_args:
args[key] = iter_tuple[iterable_args.index(key)]
else:
args[key] = cast(SIMPLE_TYPES, arg_dict[key])
idx += 1
ret_list.append(args)
return ret_list
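A hypothetical usage example of the function above (assuming
`INDEX_KEY` is `"idx"`; check the module's constants for the real
value):

```python
# Non-string iterables are zipped together; scalars (and strings) are
# broadcast to every entry; each entry gains an index under INDEX_KEY.
arg_dict = {"path": ["a.fa", "b.fa", "c.fa"], "server": "myserver"}
print(zip_dict_to_indexed_list(arg_dict))
# [{'idx': 0, 'path': 'a.fa', 'server': 'myserver'},
#  {'idx': 1, 'path': 'b.fa', 'server': 'myserver'},
#  {'idx': 2, 'path': 'c.fa', 'server': 'myserver'}]
```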
