Sink of type 'vector' leaves open file descriptors of logs read by kubernetes_logs source - so the original log files still take disk space #19679
Comments
Hi @gadisn! The expected behavior is that Vector will hold the file handle open until it reaches EOF. If it still has the file handle open, that indicates it hasn't finished reading that file.
This is mentioned over here for the …
Thanks @jszwedko. Is it possible that the …
Lines should be marked as "read" as soon as the source reads them. One guess I'd have is that the …
Thanks, will review.
Closing since this seems to be a case of back-pressure causing the …
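If back-pressure from the extra sink is indeed the cause, one common way to decouple a slow sink from the source is a sink buffer. The following is a minimal, assumption-laden sketch rather than a confirmed fix: the component names and sizes are hypothetical, and the trade-off (blocking vs. dropping events) depends on your durability requirements.

```toml
# Hypothetical mitigation sketch: buffer the `vector` sink so slow delivery
# does not back-pressure the kubernetes_logs source into holding deleted
# log files open. Component names and sizes are illustrative.
[sinks.vector_out]
type = "vector"
inputs = ["k8s"]
address = "aggregator.example.com:6000"

[sinks.vector_out.buffer]
type = "disk"              # spill to disk instead of stalling upstream
max_size = 536870912       # bytes; Vector enforces a generous minimum for disk buffers
when_full = "drop_newest"  # shed load rather than propagate back-pressure
```

A disk buffer trades bounded local disk (capped by `max_size`) for steady reads, and `when_full = "drop_newest"` trades completeness for the same; either way the source can keep reading to EOF and release the deleted files.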
Problem
Topology 1 of Vector as ‘agent’:
kubernetes_logs (source) -> 2 transforms -> splunk_hec_logs (sink)
Topology 2 of Vector as ‘agent’:
Same as topology 1 above, plus a sink of type 'vector' that consumes the same kubernetes_logs source (see the config sketch below).
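To make the two topologies concrete, here is a minimal sketch of topology 2 (topology 1 is the same minus the final `vector` sink). The component names, transform bodies, and endpoints are hypothetical; only the component types come from the report.

```toml
[sources.k8s]
type = "kubernetes_logs"

# First of the two transforms (logic elided; `remap` is an assumption).
[transforms.transform_1]
type = "remap"
inputs = ["k8s"]
source = '.note = "transform logic elided"'   # placeholder VRL program

# Second transform, chained after the first.
[transforms.transform_2]
type = "remap"
inputs = ["transform_1"]
source = '.note = "transform logic elided"'   # placeholder VRL program

[sinks.splunk]
type = "splunk_hec_logs"
inputs = ["transform_2"]
endpoint = "https://splunk.example.com:8088"  # hypothetical endpoint
default_token = "${SPLUNK_HEC_TOKEN}"
encoding.codec = "json"

# Present only in topology 2: a second sink fed directly by the same source.
[sinks.vector_out]
type = "vector"
inputs = ["k8s"]
address = "aggregator.example.com:6000"       # hypothetical address
```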
We run a load of 2000 requests per second against the system, spread over 3 k8s pods.
Each request produces a line in the container log file, so overall we get a lot of logs.
In topology 1, disk consumption stays flat.
In topology 2, disk consumption grows constantly.
After investigating, it turned out that in topology 2 the original log files get deleted but their space is never reclaimed, since the files are still referenced by an open file descriptor.
I would expect the original log files to be de-referenced so that their disk space is actually freed.
Configuration
Version
0.34.1-distroless-libc
Debug Output
No response
Example Data
No response
Additional Context
No response
References
No response