
no implicit conversion of nil into String #193

Open
tkhalane opened this issue Apr 21, 2022 · 7 comments

@tkhalane

tkhalane commented Apr 21, 2022

Hi

Please help.

Our pipeline (Postgres > Logstash > OpenSearch) has been working fine in the integration environment, but in Prod (same configuration) we sometimes get the following error message. It consistently happens once a day.

           "[ERROR][logstash.outputs.amazonelasticsearch] An unknown error occurred sending a bulk request to Elasticsearch. 
           We will retry indefinitely {:error_message=>\"no implicit conversion of nil into String\", 
           :error_class=>\"TypeError\", :backtrace=>[\"org/jruby/RubyString.java:1183:in `+'\","

Logstash Version = 6.8.22

The pipeline runs smoothly when there are events to process, but sometimes, when there is nothing new, it prints that message. This is intermittent.

More Detail

[2022-04-14T03:00:01,149][WARN ][logstash.outputs.amazonelasticsearch] UNEXPECTED POOL ERROR {:e=>#<TypeError: no implicit conversion of nil into String>}
[2022-04-14T03:00:01,153][ERROR][logstash.outputs.amazonelasticsearch] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"no implicit conversion of nil into String", :error_class=>"TypeError", :backtrace=>[
    "org/jruby/RubyString.java:1183:in `+'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/aws-sdk-core-2.11.632/lib/aws-sdk-core/signers/v4.rb:117:in `signature'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/aws-sdk-core-2.11.632/lib/aws-sdk-core/signers/v4.rb:107:in `authorization'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/aws-sdk-core-2.11.632/lib/aws-sdk-core/signers/v4.rb:59:in `sign'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client/manticore_adapter.rb:111:in `perform_request'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:291:in `perform_request_to_url'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:278:in `block in perform_request'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:373:in `with_connection'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:277:in `perform_request'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:285:in `block in Pool'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client.rb:133:in `bulk_send'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/http_client.rb:118:in `bulk'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/common.rb:275:in `safe_bulk'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/common.rb:180:in `submit'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/common.rb:148:in `retrying_submit'",
    "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-amazon_es-7.0.1-java/lib/logstash/outputs/amazon_es/common.rb:38:in `multi_receive'",
    "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'",
    "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'",
    "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:390:in `block in output_batch'",
    "org/jruby/RubyHash.java:1419:in `each'",
    "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:389:in `output_batch'",
    "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:341:in `worker_loop'",
    "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:304:in `block in start_workers'"]}
[2022-04-14T04:00:00,160][INFO ][logstash.inputs.jdbc ] (0.000622s) SELECT CAST(current_setting('server_version_num') AS integer) AS v

@cmanning09

Hi @tkhalane,

Thanks for raising the issue. After a deep dive into the stack trace and code, it appears the issue lies within aws-sdk-ruby while generating the signature for the request, particularly when retrieving the secret_access_key from the credentials.

I know you mentioned your Logstash configuration is the same. Are your environment and credentials TTL the same as well? How are you managing your credentials? I have been attempting to reproduce the issue but have had no luck so far. Any additional information you can provide would help.
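
For what it's worth, the error class is exactly what Ruby's String#+ raises when it is handed nil. A minimal sketch (not the plugin's or the SDK's actual code) of how a nil secret_access_key would surface this message during SigV4 signing:

    # SigV4 key derivation prefixes the secret key with "AWS4" before HMAC'ing it.
    # If the resolved credentials have a nil secret_access_key, the string
    # concatenation itself raises the TypeError seen in the backtrace
    # (org/jruby/RubyString.java:1183:in `+').
    secret_access_key = nil # e.g. credentials that failed to resolve or refresh

    begin
      signing_seed = "AWS4" + secret_access_key
    rescue TypeError => e
      puts e.class   # => TypeError
      puts e.message # => no implicit conversion of nil into String
    end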

@cmanning09

Hi @tkhalane,

One last thing I want to point out: this plugin is in maintenance mode. We still provide bug fixes and security patches for it, but we highly recommend migrating to logstash-output-opensearch, which ships events from Logstash to OpenSearch 1.x and Elasticsearch 7.x clusters and also supports SigV4 signing.

@tkhalane
Author

Hi

Thank you for your response. We don't use long-term credentials; we have attached an IAM role to the instance running Logstash. Could this be happening when the STS token has expired? Is that a thing?

@tkhalane
Author

Hi @cmanning09

Thank you for making me aware of this new plugin; we will have a look. Maybe it will help.

Please see my comment above regarding credentials.

Also, as additional information: the infrastructure is automated via Terraform, so we are quite confident that the configuration is the same between Prod and the lower environments, where this issue doesn't occur.

@sbayer55
Contributor

sbayer55 commented May 2, 2022

Hi @tkhalane,

Thank you for the additional troubleshooting information. An EC2 instance with an assigned role automatically rotates its credentials before they expire, so it sounds like logstash/logstash-output-amazon_es is not picking up the refreshed credentials from the instance metadata. In this scenario it's difficult to say whether the issue is in the Logstash authentication configuration or in some other part of the environment.
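
To illustrate the rotation path (a minimal sketch only, assuming the aws-sdk-core v2 Aws::InstanceProfileCredentials provider that shows up in your backtrace; this is not the plugin's own code):

    # The SDK's refreshing credential provider re-reads the instance metadata
    # shortly before the temporary credentials expire. If that refresh silently
    # fails, a later signing call can end up with a nil secret_access_key.
    require 'aws-sdk-core'

    provider = Aws::InstanceProfileCredentials.new
    creds    = provider.credentials

    puts "secret key present: #{!creds.secret_access_key.to_s.empty?}"
    puts "session token set:  #{!creds.session_token.to_s.empty?}"
    puts "expires at:         #{provider.expiration}"

    provider.refresh! # force a re-read of the instance metadata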

I would still double-check how roles, secrets, and expirations are configured. I agree that an inconsistency between your environments is unlikely if both are 100% controlled by Terraform.
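
One way to double-check on the affected instance itself is to read the credentials the instance profile is currently serving (a hedged sketch; it assumes IMDSv1 is reachable, and with IMDSv2 enforced you would need to fetch a session token first):

    # Print the expiry of the temporary credentials served by the instance
    # profile. If "Expiration" is in the past around 03:00, rotation itself is
    # the problem; if it is in the future, the plugin is more likely holding
    # on to a stale credential set.
    require 'net/http'
    require 'json'

    base = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
    role = Net::HTTP.get(URI(base)).strip              # name of the attached IAM role
    doc  = JSON.parse(Net::HTTP.get(URI(base + role))) # current temporary credentials

    puts "Expiration:          #{doc['Expiration']}"
    puts "SecretAccessKey set: #{!doc['SecretAccessKey'].to_s.empty?}"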

@tkhalane
Author

tkhalane commented May 3, 2022

Hi @sbayer55

Thank you for the reply. One last question: is there a risk of data loss due to this issue? Are we losing data?

We are using persistent queues but we don't have a dead letter queue.

@sbayer55
Contributor

sbayer55 commented May 3, 2022

Hi @tkhalane,

Based on the Logstash persistent queue documentation, there should be no data loss. That said, I'm hesitant to say with 100% certainty that there is no data loss without in-depth knowledge of your environment.
