If there are more than 50 LogStreams per LogGroup the processing never finishes and keeps iterating over the last batch of streams #20
I think this is probably due to my fix that limited the number of streams per group, with the new parameter max_log_streams_per_group. There should be a way to avoid that, but I'd say it's harmless.
Hi,
first of all, I see the last batch of streams being ingested over and over again, so the loop never finishes. Unfortunately this also means the state is never written to disk.
The reason for the bug is this line:
https://github.com/sampointer/fluent-plugin-cloudwatch-ingest/blob/master/lib/fluent/plugin/in_cloudwatch_ingest.rb#L233
I added a '|| response.next_token.nil?' to fix that, as a nil token marks the last batch of streams (see the sketch after this message).
But even after that I still see entries being duplicated and ingested multiple times, and the loop never finishes ...
I didn't have the time to debug any further ... also I am a Python guy rather than a Ruby guy, so it takes me a while ;-)
Cheers
--
Stefan Reimer <https://startux.de>
On 2017-12-05 10:43, Francisco de Freitas wrote:
I think this is probably due to my fix that limited the number of streams per group, with the new parameter max_log_streams_per_group, but it should actually cope with that whenever there's a new token to get the next "50". What are you seeing?
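For readers following along, the loop in question pages through a log group's streams with DescribeLogStreams. A minimal sketch of that pagination pattern using the aws-sdk-cloudwatchlogs gem (illustrative only, not the plugin's actual code; the helper name is hypothetical) looks roughly like this:

```ruby
require 'aws-sdk-cloudwatchlogs'

# Illustrative sketch only, not the plugin's code: page through a group's
# log streams and stop once CloudWatch returns no next_token, i.e. after
# the last batch has been delivered. The helper name is hypothetical.
def each_log_stream(client, log_group_name)
  next_token = nil
  loop do
    params = { log_group_name: log_group_name }
    params[:next_token] = next_token if next_token

    response = client.describe_log_streams(params)
    response.log_streams.each { |stream| yield stream }

    # A nil token marks the final batch; without this check the loop
    # keeps re-requesting (and re-ingesting) the last page forever.
    break if response.next_token.nil?
    next_token = response.next_token
  end
end

client = Aws::CloudWatchLogs::Client.new(region: 'us-east-1')
each_log_stream(client, '/my/log/group') { |s| puts s.log_stream_name }
```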
I'm also not a Ruby coder myself; I did that mostly out of necessity 😄 The idea was that, whenever there's a next token, it should fetch the next batch. I'll actually have to check on that and see if I also get it.
Hi, I made a pull request ... I found this issue just now.
Good spot. Merged and published as ...
Thanks for this! Going to check it on my end. Cheers
This is still a bug for me even with ...
@sampointer - it seems 1.7.0.rc3 is missing from the repo...
Maybe it's a mirror sync issue, as I notice it hasn't been long since you posted. I'll try again after 24 hours.
For some reason RubyGems thinks the two tags I've tried have already been pushed, despite them not being present.
I've fixed the CI issues.
@sampointer thanks Sam. I was able to pull down rc4. Unfortunately, my state file is still not written to. If you need me to do anything - debug logs etc. - let me know.
I'm afraid I no longer have an active development or production environment in which to develop and test this plugin. You may have some luck posting your configuration and logging here for others to view.
Hello, I believe I may have found a fix to this issue. It turns out that when you retrieve the log events from CloudWatch and reach the end of the stream, the next token stays the same, and there was no check for whether the next token is the same as the current token. So it was a pretty simple fix. The plugin finally writes to the state file and I haven't seen any duplicated logs since. Check out the pull request: #27
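For background, GetLogEvents signals the end of a stream by returning the same forward token that was passed in, so a pagination loop has to compare tokens to know when to stop. A minimal sketch of that check with the aws-sdk-cloudwatchlogs gem (illustrative only, not the plugin's actual implementation; the helper name is hypothetical) might look like:

```ruby
require 'aws-sdk-cloudwatchlogs'

# Illustrative sketch only, not the plugin's implementation; the helper
# name is hypothetical. Read a stream from the beginning and stop once
# GetLogEvents returns the same forward token we just sent, which is how
# CloudWatch signals that no further events are available.
def each_event(client, log_group_name, log_stream_name)
  token = nil
  loop do
    params = {
      log_group_name: log_group_name,
      log_stream_name: log_stream_name,
      start_from_head: true
    }
    params[:next_token] = token if token

    response = client.get_log_events(params)
    response.events.each { |event| yield event }

    # End of stream: the returned token equals the token we sent in.
    break if response.next_forward_token == token
    token = response.next_forward_token
  end
end

client = Aws::CloudWatchLogs::Client.new(region: 'us-east-1')
each_event(client, '/my/log/group', 'my-stream') { |e| puts e.message }
```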
Happy to merge this, although I have no ability to test this in a live infrastructure. Pushed ...
I ran the tests with rake spec, but I didn't see any additional tests other than checking the version number and checking if false == false. I have had it running on a live infrastructure, monitoring for any duplicate logs being sent to Elasticsearch. So far no duplicates have been found, and I know for sure that I have log groups with more than 50 log streams and a ton of log events in some of those streams. I have also checked that I'm still getting the latest log events, and it looks like I am. Lastly, it's able to store the state of each log stream to the state file. My fluentd config:
I've pushed ...