feat: Implement a memoization-based cache #30
Merged
This is a performance optimization for CAR mirror.
It helps especially when DAGs are huge, blockstores are locally slow (e.g. reading from disk), and latency is no longer the main bottleneck.
It improves the benchmarks quite a bit.
What this PR does is implement one version of #28: a cache for block references, `Cid -> Vec<Cid>`. This way, when there's a cache hit, we don't even need to fetch the block, parse it, and find its links. Instead, we get the further references directly.
In some cases, this means running the same operation twice won't fetch any block from the blockstore at all.
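To illustrate the idea, here is a minimal sketch of such a memoized lookup. All names here are hypothetical (this is not the PR's actual API), and `load_and_parse` stands in for the real blockstore read plus codec-aware link extraction:

```rust
use std::collections::HashMap;

use anyhow::Result;
use cid::Cid;

/// Minimal sketch of the memoization idea: a `Cid -> Vec<Cid>` map
/// short-circuits block fetching and link parsing on a hit.
fn references_for(
    cid: Cid,
    memo: &mut HashMap<Cid, Vec<Cid>>,
    load_and_parse: impl FnOnce(Cid) -> Result<Vec<Cid>>,
) -> Result<Vec<Cid>> {
    if let Some(refs) = memo.get(&cid) {
        // Cache hit: no blockstore read, no block parsing.
        return Ok(refs.clone());
    }
    // Cache miss: fetch the block and extract its links.
    let refs = load_and_parse(cid)?;
    memo.insert(cid, refs.clone());
    Ok(refs)
}
```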
Since different environments will want to implement different types of caches (e.g. the `quick-cache`-based cache won't work in Wasm), the cache implementation is abstracted behind a trait `Cache`. I've bundled two implementations: `NoCache`, which simply always re-computes the value (no memoization), and `InMemoryCache`, which uses the `quick_cache` library.

This PR depends on #29.