This repository has been archived by the owner on Jun 5, 2020. It is now read-only.

Request limit exceeded #144

Closed
stepanstipl opened this issue Apr 7, 2015 · 7 comments

Comments

@stepanstipl

Hi, when trying to use this module to create a single EC2 instance, I mostly (about 9 in 10 tries) end up with a Request limit exceeded. error (in the remaining tries I get /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/delegate.rb:295: [BUG] Segmentation fault ruby 2.0.0p481 (2014-05-08 revision 45883) [universal.x86_64-darwin14]). The AWS environment has over 1k instances, so it's not exactly small, but I've been using 'aws-sdk-core' directly in this environment without any issues. It looks like the module is making too many API calls and hitting the EC2 API request limit. Let me know what would help you debug this issue... I'm looking into it myself.
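As an aside, the usual client-side mitigation for throttling errors like this is exponential backoff. A minimal plain-Ruby sketch (with_backoff is a hypothetical helper, not part of puppetlabs-aws or aws-sdk-core; the SDK has its own retry handling):

```ruby
# Retry a throttled call with exponential backoff.
# with_backoff is a hypothetical helper for illustration only.
def with_backoff(max_attempts: 5, base_delay: 0.01)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue RuntimeError => e
    # Only retry the throttling error, and only up to max_attempts.
    raise unless e.message == "Request limit exceeded." && attempts < max_attempts
    sleep(base_delay * (2**attempts))  # back off longer on each retry
    retry
  end
end

calls = 0
result = with_backoff do
  calls += 1
  raise "Request limit exceeded." if calls < 3  # fail twice, then succeed
  :ok
end

puts "succeeded after #{calls} attempts"  # succeeded after 3 attempts
```

Backoff only hides the symptom, though; the real fix discussed below is to make fewer API calls in the first place.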

I'm running puppet like this:

puppet apply init.pp --hiera_config ./hiera.yaml --modulepath ./modules --debug

Versions used:

  • aws-sdk-core: 2.0.28
  • puppet: 3.7.4
  • puppetlabs-aws: 1.0.0
  • ruby: 2.0.0p481

Thanks, Stepan

...
Info: Applying configuration version '1428425803'
Debug: Prefetching v2 resources for ec2_instance
Debug: Storing state
Debug: Stored state in 0.00 seconds
Debug: Using settings: adding file resource 'rrddir': 'File[/Users/stepan/.puppet/var/rrd]{:path=>"/Users/stepan/.puppet/var/rrd", :mode=>"750", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'
Debug: Finishing transaction 70179195852760
Debug: Received report to process from stepan-mb.local
Debug: Processing report from stepan-mb.local with processor Puppet::Reports::Store
Error: Could not run: Puppet detected a problem with the information returned from AWS
when looking up ec2_instance in eu-west-1. The specific error was:

Request limit exceeded.

Rather than report on ec2_instance resources in an inconsistent state we have exited.
This could be because some other process is modifying AWS at the same time.
@stepanstipl
Author

So it seems the problem is in the provider's instance_to_hash function, which is called for every instance during puppet's prefetch. Since we have many instances, this goes on for a while and eventually ends with Request limit exceeded. A better approach would be to prefetch all subnets in one call and reuse that result.
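The difference between the two patterns can be sketched in plain Ruby, using a counter in place of real aws-sdk-core calls (all names here are illustrative, not the module's actual code):

```ruby
# Simulated subnet lookup: each call to describe_subnets stands in for
# one AWS API request.
$api_calls = 0

SUBNETS = { "subnet-1" => "web", "subnet-2" => "db" }.freeze

def describe_subnets(ids = nil)
  $api_calls += 1
  ids ? SUBNETS.select { |k, _| ids.include?(k) } : SUBNETS.dup
end

instances = [
  { id: "i-1", subnet_id: "subnet-1" },
  { id: "i-2", subnet_id: "subnet-2" },
  { id: "i-3", subnet_id: "subnet-1" },
]

# N+1 pattern: one subnet lookup per instance, as described above.
instances.each { |i| describe_subnets([i[:subnet_id]]) }
n_plus_one = $api_calls

# Batched pattern: fetch all subnets once, then resolve from the hash.
$api_calls = 0
subnet_names = describe_subnets
hashes = instances.map { |i| { name: i[:id], subnet: subnet_names[i[:subnet_id]] } }
batched = $api_calls

puts "per-instance lookups: #{n_plus_one} API calls"  # 3
puts "batched lookup:       #{batched} API call"      # 1
```

With 1k+ instances the per-instance pattern issues over a thousand requests per prefetch, which is exactly the kind of burst that trips EC2's request limit.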

@garethr
Contributor

garethr commented Apr 8, 2015

Hi @stepanstipl, thanks for the details.

There are some performance improvements we're working on in #102 which help reduce the number of queries quite drastically for larger installs. I also have a number of other improvements in mind once we validate that PR.

As well as the above and future general improvements, you can also limit the calls to a single region, which cuts down the queries as well. You do this using the AWS_REGION environment variable. There is some additional specification work around that going on in #117 and #132. Once that's done, we should publish an example showing how to use it for optimisations like this.
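For example, a run restricted to the eu-west-1 region from the log above might look like this (invocation only; paths are taken from the original command):

```shell
# Limit the provider's lookups to a single region for this run
AWS_REGION=eu-west-1 puppet apply init.pp --hiera_config ./hiera.yaml --modulepath ./modules --debug
```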

I'll keep this issue open and report back as we make improvements here. Cheers.

@stepanstipl
Author

Hi Gareth,
I was looking at #102 as well as #117 and #132, but I believe none of those will help in my case, and I'm already limiting my calls to one region. I've made a code change that solves the issue for me: getting info about all the subnets in one call in self.instances and then passing it to instance_to_hash as a variable. I'll fork the repo and push my change to GitHub so you can have a look.

This seems to speed things up massively: the request limit error used to hit after a couple of minutes, and everything now completes in about 10 seconds.

I've come across a different problem in the meantime: the Name tag seems to be used as the primary id for subnets, and we don't use the Name tag, so that will cause us issues. But that's a separate thing.

Thanks, Stepan

@garethr
Contributor

garethr commented Apr 8, 2015

@stepanstipl ah, excellent. I'd been meaning to look at doing that too. Happy to see a PR as an example; once we have a pattern I'm happy to convert everything else.

The use of named resources is a design decision. I don't think subnets offer another way for a user to provide a unique name? Puppet requires that the name of things can be set upfront.

@stepanstipl
Author

@garethr so I have created a pull request so you can see what change I made: #146. As I mentioned, this meant a massive speed improvement in my case (and got rid of that error).

Although I wasn't able to update the tests properly (yet?), as mentioned in the PR - I don't have an AWS account where I could run them at the moment, and I'm not sure about the VCR tests: 3 of them are failing (1 used to fail for me even before my change), which I guess is expected since the calls will be different - so these need to be updated as well. I'll try to have a look at that.

Also, as you mentioned, it would be worth going through the code to look for other places where a similar pattern occurs - I haven't really done anything with this module yet, as this was the first thing I hit ;).

And yes, I guess there's no way around named resources, as you won't know the subnet id before creation, right? Might it be worth thinking about a fallback to subnet_ids when no "Name" tag is present? That would partially work for existing resources, I guess, though not for newly created ones... not sure about the implications.
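The fallback idea above could be sketched like this, assuming subnets are represented as hashes of the shape aws-sdk-core returns (puppet_name_for is a hypothetical helper, not existing module code):

```ruby
# Hypothetical helper: prefer the Name tag, fall back to the subnet id
# when no Name tag is present. Works only for resources that already
# exist, since new subnets have no id until AWS creates them.
def puppet_name_for(subnet)
  name_tag = (subnet[:tags] || []).find { |t| t[:key] == "Name" }
  name_tag ? name_tag[:value] : subnet[:subnet_id]
end

tagged   = { subnet_id: "subnet-1", tags: [{ key: "Name", value: "web" }] }
untagged = { subnet_id: "subnet-2", tags: [] }

puts puppet_name_for(tagged)    # web
puts puppet_name_for(untagged)  # subnet-2
```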

@garethr
Contributor

garethr commented May 14, 2015

Just to note here as well for posterity. #102 has a number of improvements to the performance of the module which should address this issue. I'll leave this issue open for the moment until that's merged to master.

@garethr
Contributor

garethr commented Dec 8, 2015

#102 was merged a while back and I forgot to close this issue.

@garethr garethr closed this as completed Dec 8, 2015