Disable DataNucleus L2 cache globally #576
Labels: blocked, component/api-server, enhancement (New feature or request), p1 (Critical bugs that prevent DT from being used, or features that must be implemented ASAP), size/S (Small effort)

Comments
nscuro added the enhancement, p1, size/S, and component/api-server labels on May 24, 2023
PR raised for Alpine: stevespringett/Alpine#494
PR is merged, but this issue is blocked until the next Alpine version is released.
Fixed in DependencyTrack/hyades-apiserver#327
nscuro added a commit to nscuro/dependency-track that referenced this issue on Oct 25, 2024
Currently, DataNucleus will put all objects into the L2 cache. Given the volume of objects being processed by DT, this behavior quickly adds up to enormous cache sizes. Users continue to be bamboozled by DT's memory requirements, which in large part are driven by the wasteful L2 caching.

While working on DependencyTrack#4305, it became obvious that the hit rates of the cache are absolutely dwarfed by the high rate of misses. Storing such large volumes of objects in RAM is simply not justified if hit rates are that low.

Disabling the L2 cache solves a lot of recurring issues we and users are facing. If we want to introduce caching again in the future, we should do it in targeted areas, and preferably not directly in the persistence layer.

We disabled the L2 cache in Hyades a long time ago, and it has worked out very well for us. It was a precondition to making the API server horizontally scalable.

Some more context:
* DependencyTrack/hyades#375 (comment)
* DependencyTrack/hyades#576

Supersedes DependencyTrack#4305

Signed-off-by: nscuro <[email protected]>
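For readers unfamiliar with what "disabling the L2 cache" amounts to in practice: DataNucleus exposes the level-2 cache as a persistence property, and setting it to none turns it off globally. Below is a minimal sketch assuming a plain JDO bootstrap; the connection settings and class name are placeholders for illustration, not DT's actual persistence setup.

```java
import java.util.Properties;

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

public class L2CacheDisabledExample {

    public static void main(String[] args) {
        // Standard JDO bootstrap properties; connection details are placeholders.
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
        props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:h2:mem:dtrack");
        props.setProperty("javax.jdo.option.ConnectionDriverName", "org.h2.Driver");

        // Turn the level-2 cache off entirely. With "none", DataNucleus no longer
        // retains objects across PersistenceManager instances, so memory usage is
        // bounded by what each transaction actually touches.
        props.setProperty("datanucleus.cache.level2.type", "none");

        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        PersistenceManager pm = pmf.getPersistenceManager();
        // ... use pm as usual; objects are no longer added to an L2 cache.
        pm.close();
        pmf.close();
    }
}
```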
See #375 (comment)

We have disabled the L2 cache for all functions controlled by Dependency-Track, but there are others, controlled by the underlying Alpine framework (mostly around access control), that still use the cache. Those cannot be disabled from within Dependency-Track.

Must be implemented in Alpine: stevespringett/Alpine#493

Effort depends on whether we need to make the cache configurable in general, or whether just adding a flag to disable it is sufficient.
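As a rough illustration of the "just a flag" option, the sketch below gates the DataNucleus off switch behind a boolean property. The flag name alpine.datanucleus.cache.level2.enabled is purely an assumption for illustration; the actual property introduced in Alpine may differ.

```java
import java.util.Properties;

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

public class ConfigurableL2Cache {

    public static PersistenceManagerFactory createPmf() {
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");

        // Hypothetical flag name -- not the actual Alpine property. With the
        // default of "false", the L2 cache is switched off globally.
        boolean l2CacheEnabled = Boolean.parseBoolean(
                System.getProperty("alpine.datanucleus.cache.level2.enabled", "false"));
        if (!l2CacheEnabled) {
            props.setProperty("datanucleus.cache.level2.type", "none");
        }

        return JDOHelper.getPersistenceManagerFactory(props);
    }
}
```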