fix: update ci caching logic #207
Conversation
Kudos, SonarCloud Quality Gate passed!
@edgarrmondragon I'm wondering if the GitHub API token used for CI here is also used somewhere else. The changes in this PR seem effective: the last run took 35 minutes, 28 of which were spent waiting for new quota, yet the whole allowance was consumed within about 10 minutes. By my calculation (running the tests locally), a full CI run across 5 Python versions should use about 2,500 REST requests, and that didn't happen here; far fewer requests were actually made (only 1 Python version ran on 1 PR, with caching on the other). Are you able to check whether the token's rate allowance is being eaten up by something else? CI runs have taken 3-4 hours over the past few days, which makes it very hard to work on the repo.
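One quick way to check where the allowance stands is GitHub's documented `/rate_limit` endpoint, which itself does not count against the core limit. A minimal sketch, assuming `requests` is installed and the token is in a `GITHUB_TOKEN` environment variable (the actual secret name used by this repo's CI may differ):

```python
import os

import requests

# Query the documented /rate_limit endpoint; this request is free,
# i.e. it does not consume core rate-limit quota itself.
token = os.environ["GITHUB_TOKEN"]  # assumed env var name; substitute the CI secret
resp = requests.get(
    "https://api.github.com/rate_limit",
    headers={"Authorization": f"token {token}"},
)
resp.raise_for_status()

core = resp.json()["resources"]["core"]
print(f"limit={core['limit']} remaining={core['remaining']} resets_at={core['reset']}")
```

Polling this before and after a CI run (or from another machine while CI is idle) would show whether something else is draining the same token's quota.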
@laurentS It's probably easiest to regenerate the token and document the permissions. Do you know the minimum set of permissions required for the tap to run CI?
@edgarrmondragon Yes, let's try this.
That seems to be enough to get all tests to pass locally, so I suppose it should be the same in CI. It'd be great if you're able to rotate the token to understand what is eating the quota so quickly.
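For documenting the minimum permissions, GitHub echoes a classic token's granted scopes in the `X-OAuth-Scopes` header of any authenticated API response. A small sketch, again assuming `requests` and a `GITHUB_TOKEN` environment variable (hypothetical name here):

```python
import os

import requests

# Any authenticated REST call returns the token's granted scopes
# in the X-OAuth-Scopes response header (classic tokens only).
token = os.environ["GITHUB_TOKEN"]  # assumed env var name
resp = requests.get(
    "https://api.github.com/user",
    headers={"Authorization": f"token {token}"},
)
resp.raise_for_status()

print("granted scopes:", resp.headers.get("X-OAuth-Scopes"))
```

Running this against the rotated token would confirm it carries only the scopes the tap's tests actually need.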
I've updated the token.
I was going to say the rate limits are per account, not per token, but according to the docs it's actually worse than that.
I think this explains why we're hitting limits so often. My (very rough) calculation above was that we use ~500 REST calls per Python version (before this PR). If caching worked perfectly, we'd hit the limit just once per day, but given that different branches don't share a cache, we likely hit problems more frequently. I'll see if I can modify the tests a bit to use fewer requests per run.
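One way to cut requests per run, not necessarily what this PR implements, is to cache HTTP responses across the test session with the requests-cache library and persist the cache file between CI runs. A sketch under those assumptions; the cache name and expiry are illustrative:

```python
import requests_cache

# Transparently cache all requests-based HTTP calls in a local SQLite file.
# Persisting this file between CI runs (e.g. via actions/cache) lets repeated
# test runs replay recorded responses instead of spending API quota.
requests_cache.install_cache(
    "tap_github_tests",      # illustrative name -> tap_github_tests.sqlite
    backend="sqlite",
    expire_after=24 * 3600,  # refresh once a day, matching the once-per-day reasoning above
)

import requests

# The first call hits the API; identical calls within the expiry window
# are served from the cache and cost zero rate-limit quota.
resp = requests.get("https://api.github.com/repos/MeltanoLabs/tap-github")
print("served from cache:", getattr(resp, "from_cache", False))
```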
Fixes (hopefully) #119.
See details in the issue. This PR's own CI runs should hopefully already see the benefit of the cache.