feat: LRU cache for flows in packetparser
#590
Closed
Description
This PR optimizes the `packetparser` plugin by introducing an LRU cache to reduce memory and CPU usage under heavy traffic. Instead of generating a new `Flow` object for every packet event, the cache stores `Flow` objects keyed by source/destination IPs, ports, protocol, and direction. Packet events with matching values reuse the existing `Flow` object, preventing excessive `ToFlow` invocations and potential OOM kills for the agent. (A rough sketch of this approach is included under Additional Notes below.)

Related Issue
If this pull request is related to any issue, please mention it here. Additionally, make sure that the issue is assigned to you before submitting this pull request.
Checklist
I have signed and signed-off the commits (`git commit -S -s ...`). See this documentation on signing commits.

Screenshots (if applicable) or Testing Completed
Over a similar period of time under the same workload, the total memory allocation for `ToFlow` without the cache vs. with the cache.
Please add any relevant screenshots or GIFs to showcase the changes made.
Additional Notes
Add any additional notes or context about the pull request here.
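As a rough illustration of the caching approach described above (not the exact implementation), the sketch below keys a fixed-size LRU by the flow 5-tuple plus direction and reuses a cached `Flow` on a hit, only paying the `ToFlow`-style allocation cost on a miss. It assumes the `github.com/hashicorp/golang-lru/v2` package; the `flowKey`/`Flow` types, field names, and cache size are illustrative and may differ from the actual `packetparser` code.

```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru/v2"
)

// flowKey identifies a flow by source/destination IP, port, protocol, and
// direction. The fields here are illustrative, not the plugin's actual key.
type flowKey struct {
	srcIP, dstIP     string
	srcPort, dstPort uint16
	proto            uint8
	isReply          bool
}

// Flow stands in for the flow object that ToFlow would otherwise build
// for every packet event.
type Flow struct {
	Key flowKey
}

// flowCache wraps a fixed-size LRU so packets belonging to the same flow
// share one Flow object instead of allocating a new one per packet.
type flowCache struct {
	cache *lru.Cache[flowKey, *Flow]
}

func newFlowCache(size int) (*flowCache, error) {
	c, err := lru.New[flowKey, *Flow](size)
	if err != nil {
		return nil, err
	}
	return &flowCache{cache: c}, nil
}

// getOrCreate returns the cached Flow for key, building and caching one
// (the expensive path) only on a miss.
func (fc *flowCache) getOrCreate(key flowKey) *Flow {
	if f, ok := fc.cache.Get(key); ok {
		return f // hit: reuse the existing Flow
	}
	f := &Flow{Key: key} // miss: allocate once for this flow
	fc.cache.Add(key, f)
	return f
}

func main() {
	fc, err := newFlowCache(1024)
	if err != nil {
		panic(err)
	}
	k := flowKey{srcIP: "10.0.0.1", dstIP: "10.0.0.2", srcPort: 443, dstPort: 55000, proto: 6}
	first := fc.getOrCreate(k)
	second := fc.getOrCreate(k)
	fmt.Println(first == second) // true: the second packet reused the cached Flow
}
```

A bounded LRU keeps memory flat under heavy traffic, at the cost of occasionally rebuilding a `Flow` after eviction.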
Please refer to the CONTRIBUTING.md file for more information on how to contribute to this project.