
CircleCI: add layer2 cache of newly added coreboot git forks to speedup builds from cache #1555

Conversation

@tlaurion (Collaborator) commented Dec 16, 2023

Master currently doesn't cache or reuse coreboot git forks' build dirs (layer 2); they are only reused through layer 3, when no module config has changed.

So, cache the coreboot git dirs so that the crossgcc toolchain can be built once and reused when cache layer 3 is invalidated by other modules having changed while coreboot hasn't. This speeds up builds from cache: the coreboot build dir is reused and only the modules that changed are recompiled.

As of today, cache layer 3 (reused when only scripts have changed) encompasses all the build dirs and is what gets reused.

Reminder (a hedged config sketch follows this list):

  • cache layer 1: always reuses musl-cross-make, measuring the modules/musl file hash for reuse
  • cache layer 2: coreboot build cache + musl-cross-make, measuring modules/coreboot + modules/musl for reuse
  • cache layer 3: all build dirs, measuring the Makefile and module files for reuse
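
To make the layering concrete, here is a minimal sketch of how such keys can be expressed in a CircleCI config. This is an illustration, not the actual Heads .circleci/config.yml: the key names, Docker image, and cached paths are assumptions; only the which-files-key-which-layer idea comes from the list above.

```yaml
# Hedged sketch of layered caching for .circleci/config.yml.
# Key names and cached paths are hypothetical; only the layering
# idea (which files key which layer) comes from the list above.
version: 2.1
jobs:
  build-board:
    docker:
      - image: debian:bookworm
    steps:
      - checkout
      # Layer 1: musl-cross-make, keyed on the modules/musl file hash.
      - restore_cache:
          keys:
            - layer1-musl-{{ checksum "modules/musl" }}
      # Layer 2: coreboot build dirs + musl-cross-make.
      - restore_cache:
          keys:
            - layer2-coreboot-{{ checksum "modules/coreboot" }}-{{ checksum "modules/musl" }}
      # Layer 3: all build dirs, keyed on the Makefile (and module files).
      - restore_cache:
          keys:
            - layer3-all-{{ checksum "Makefile" }}
      - run: make BOARD=x230-hotp-maximized
      # Saving layer 2 lets the crossgcc toolchain survive layer 3 invalidation.
      - save_cache:
          key: layer2-coreboot-{{ checksum "modules/coreboot" }}-{{ checksum "modules/musl" }}
          paths:
            - build/x86/coreboot-4.19
```

With keys like these, a change to an unrelated module invalidates only the layer 3 key; the layer 2 restore still brings back the coreboot build dir and crossgcc toolchain, so only the changed modules are recompiled.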

@tlaurion marked this pull request as draft December 16, 2023 20:02
@tlaurion (Collaborator, Author) commented:

Not sure about the matrix bridge config for GitHub. As of now, someone needs to mark the PR as draft and back to ready-for-review to get a message from the integration posted into the channel without spamming it (pull request creation is not publicized). But that means only reviews and ready-for-review→draft→ready-for-review transitions are posted. Testing.

@tlaurion marked this pull request as ready for review December 16, 2023 20:05
@JonathonHall-Purism (Collaborator) commented:

@tlaurion I thought last time we tried this, the problem was that we were going over a size limit, or saving the cache itself took too long, or something like that. (But maybe I recall incorrectly.) If this does save time and doesn't hit any limits, I'm all for it.

Currently, though, I can't see from the available pipelines how long saving/restoring takes or what the cache sizes are.

Could you get a pipeline through that saves the caches, and then one that restores them?

@tlaurion (Collaborator, Author) commented:

Changing cache name

@tlaurion (Collaborator, Author) commented:

Starting a clean build

@JonathonHall-Purism (Collaborator) commented:

It only got coreboot-4.19 and coreboot-nitrokey:

Warning: could not archive /root/project/build/x86/coreboot-4.11 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.13 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.14 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.15 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.17 - Not found
Warning: could not archive /root/project/build/x86/coreboot-dasharo-kgpe-d16 - Not found
Warning: could not archive /root/project/build/x86/coreboot-purism - Not found

The cache job does not appear to be downstream of any Librem boards using coreboot-purism. KGPE-D16 isn't built in CI at all currently. (I don't mind leaving the other keys in the hope of fixing that; they don't really harm anything. But coreboot-purism was intended to be cached by this PR.)

Layer 2 cache went from 3.3 GB to 5.0 GB. Layer 3 is already 7.8 GB so that itself is fine, but moving the cache job downstream of coreboot-purism will probably increase layer 3 by 1-2 GB as well.

@tlaurion Do you want it to cache coreboot-purism or leave it as-is?

Given that layer 3 is already 7.8 GB, the increase in layer 2 is probably fine (not sure it's a good use of scarce time to trigger another job just to see how long it takes to download). But if layer 3 increases to ~10 GB, we might want to see how that impacts build time.
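
For reference, moving the cache-saving job downstream of a board that builds coreboot-purism would look roughly like the following workflow fragment. This is a sketch using job names from this thread; the requires edges are assumptions, not the PR's actual graph.

```yaml
# Hypothetical workflow ordering: the save_cache job can only archive
# build dirs produced by jobs upstream of it in the workflow graph.
workflows:
  build-and-cache:
    jobs:
      - x230-hotp-maximized          # builds coreboot-4.19
      - librem_14:                   # builds the coreboot-purism fork
          requires:
            - x230-hotp-maximized
      - save_cache:                  # archives whatever its upstream jobs built
          requires:
            - librem_14              # so /root/project/build/x86/coreboot-purism exists
```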

For reference, latest build: https://app.circleci.com/pipelines/github/tlaurion/heads/2174/workflows/03401bf0-276f-4f51-9a05-8880daab7d0c/jobs/38209
Last time master generated layer 2 cache: https://app.circleci.com/pipelines/github/linuxboot/heads/711/workflows/5685c1d7-cb4e-4da7-ba6a-e2c5ee682633/jobs/14195

@tlaurion (Collaborator, Author) commented Dec 19, 2023

> @tlaurion I thought last time we tried this, the problem was that we were going over a size limit, or saving the cache itself took too long, or something like that. (But maybe I recall incorrectly.) If this does save time and doesn't hit any limits, I'm all for it.

> It only got coreboot-4.19 and coreboot-nitrokey:
>
> Warning: could not archive /root/project/build/x86/coreboot-4.11 - Not found
> Warning: could not archive /root/project/build/x86/coreboot-4.13 - Not found
> Warning: could not archive /root/project/build/x86/coreboot-4.14 - Not found
> Warning: could not archive /root/project/build/x86/coreboot-4.15 - Not found
> Warning: could not archive /root/project/build/x86/coreboot-4.17 - Not found
> Warning: could not archive /root/project/build/x86/coreboot-dasharo-kgpe-d16 - Not found
> Warning: could not archive /root/project/build/x86/coreboot-purism - Not found

https://app.circleci.com/pipelines/github/tlaurion/heads/2174/workflows/03401bf0-276f-4f51-9a05-8880daab7d0c
(screenshot: 2023-12-19-105846)
So, as of now, consider the picture plus the OP explanation for the three layers of workspace cache on a clean build. Workspace caches can overwrite a previous layer's content when passed to the next layer, and save_cache cannot combine overlapping cache content without failing (the content of a combined cache needs to be exclusive, which is why only caches for different architectures (/build/x86, /build/ppc64) can be combined); see the workspace sketch after this list:

  • x230-hotp-maximized is the coreboot-4.19 workspace cache (layer 2 for coreboot); talos-2 is the coreboot Dasharo git fork workspace cache (layer 2 for coreboot)
    • nitropad-nv41 is the coreboot Dasharo NovaCustom git fork workspace cache (layer 2 for coreboot)
      • save_cache combines the previous workspace caches and saves the layer 1, 2 and 3 caches
    • librem_14 is the coreboot Purism git fork workspace cache (layer 2 for coreboot) and reuses the built modules of x230-hotp-maximized
      • this is not part of save_cache. Consequently, it is not restored at prep_step, and the coreboot toolchain is rebuilt on each build, clean or not, while the other modules' caches are reused.
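
The workspace passing described above maps onto CircleCI's persist_to_workspace / attach_workspace steps; a minimal sketch, with root and paths assumed for illustration:

```yaml
# In an upstream board job (e.g. x230-hotp-maximized): persist its build dir.
# A later job persisting the same paths overwrites that content for jobs
# further downstream -- the layer-overwrite behavior noted above.
- persist_to_workspace:
    root: .
    paths:
      - build/x86
# In a downstream job (e.g. nitropad-nv41, or save_cache itself): attach it.
- attach_workspace:
    at: .
```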

Previous discussions, if my memory serves, considered changing the layer dependencies so that:

  • x230-hotp-maximized and talos-2 are layer 2 workspace caches and are passed to the other layers
  • librem_14 and nitropad-nv41 are swapped, letting the librem_14 cache be part of the save_cache layer 1-2-3 restored at prep_step

That way, all Purism boards should benefit from the cache save/restore (six other Purism Librem boards reusing the cache, vs. one other Nitrokey board reusing it as currently). A sketch of the proposed swap follows.
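
In workflow terms, the proposed swap might look like this. Job names come from this thread; the requires edges are illustrative assumptions, not the actual Heads config:

```yaml
# Before: nitropad-nv41 feeds save_cache and librem_14 is a leaf,
# so coreboot-purism is never archived.
# After (proposed): librem_14 feeds save_cache instead.
workflows:
  build:
    jobs:
      - x230-hotp-maximized                # layer 2: coreboot-4.19
      - talos-2                            # layer 2: coreboot dasharo (ppc64)
      - nitropad-nv41:                     # leaf after the swap
          requires: [x230-hotp-maximized]
      - librem_14:                         # feeds save_cache after the swap
          requires: [x230-hotp-maximized]
      - save_cache:
          requires: [librem_14, talos-2]   # was: nitropad-nv41, talos-2
```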


@JonathonHall-Purism I'll launch a rebuild with cache now, then change CACHE_VERSION to today's date and rebuild clean, after having implemented the hierarchy change above in the CircleCI config.

There must be something better to do with the caches, but even after digging into other GitHub projects using CircleCI, there don't seem to be many projects on the free tier doing massive caches like Heads does, so nobody has complained or seems to have done something better. Wish I had a CircleCI specialist on hand here.

…lpful to build 6 boards (librems) not 1 (ns50)

Signed-off-by: Thierry Laurion <[email protected]>
@tlaurion (Collaborator, Author) commented:

Finished bringing #1604 to par with this PR under its 7fe2f9d.

@tlaurion closed this Mar 25, 2024