Use nix for reproducible build env #1269
Conversation
@tlaurion here's my branch... finally! I ended up going with good old
You can give it a go by running
Force-pushed from 5bd03a2 to d8b5671
Ok, made a little more progress: got past the zlib.h issue and added a few more packages after trying from scratch with
Next step is to try with an older gcc instead; currently it's at 11.3.
Force-pushed from c65ae3a to 02f1fa6
@mmlb: Issues are opened upstream (nixos) for GCC and Ada. The general idea (fixed for latest Ada) seems to be to always bootstrap Ada from an older version in order to build newer versions. I haven't checked the progression of those issues, but this is important as part of the coreboot builder project as well. Edit: So I guess the target would be gnat6, to obtain both an older GCC and Ada to build musl-cross-make and each coreboot version's dependent buildstack?
Yep, I'm trying that now locally. Looks like it's going to build gcc/gnat for the nixpkgs revision I pinned to. Now I remember why the initial flake had the old gnat: I had a hunch that newer gcc/gnat would fail to build the old one, went back, and found out I was right :D. I'll see if gnat builds now and also check around to see if there's a revision that has gnat built by hydra, so users won't have to rebuild gnat/gcc.
Ok, back to nix flakes. The nice thing about flakes is that it's easy to manage the pinned versions, since flakes lock the inputs and know how to update them. It also makes it really easy to add new inputs of nixpkgs at specific commits, for example. So I've switched back over to the flake so that I can grab the last "known good" version of gnat6 (from an old nixpkgs commit); latest nixpkgs gnat6 still doesn't build for me. Unfortunately the old gnat6 + buildFHSUserEnv don't seem to play well, because I'm now getting:
I'm going to go back to nixpkgs and see what's up with gnat6 on master, and whether there's anything to be done about vdso. --- edit Also, the coreboot patch is so coreboot will accept gnatbind over just gnat. Not sure if that's alright or not; will have to see once the vdso issue is figured out.
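As context for the pinning discussion above, a flake with two nixpkgs inputs (one current, one old, to keep a "known good" gnat6) might look roughly like this. This is a minimal sketch only: the commit hash is a placeholder and the package selection is illustrative, not the actual contents of this branch.

```nix
{
  description = "Heads build environment (illustrative sketch)";

  inputs = {
    # Current nixpkgs for most of the toolchain.
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    # Older nixpkgs revision pinned for a "known good" gnat6.
    # <pinned-commit> is a placeholder, not a real pin.
    nixpkgs-gnat6.url = "github:NixOS/nixpkgs/<pinned-commit>";
  };

  outputs = { self, nixpkgs, nixpkgs-gnat6 }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
      oldPkgs = nixpkgs-gnat6.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        # Packages from both pins can be mixed in one shell;
        # `nix flake update` / the lock file manage the revisions.
        packages = [ pkgs.gnumake oldPkgs.gnat6 ];
      };
    };
}
```

The flake.lock generated from this records the exact revisions of both inputs, which is what makes the environment reproducible across checkouts.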
Force-pushed from 02f1fa6 to 53e869f
@osresearch @mmlb Have you come across https://git.sr.ht/~amjoseph/ownerboot ?
@tlaurion I had not seen that, no, thanks. All of the successful things I've seen so far are building coreboot with nix and nix-built toolchains, which is not the route I thought we wanted to go initially. I think we might want to reconsider that; maybe, as an initial stab, we build coreboot with make but with nix's gcc?
Well, depending on how incremental we want changes to Heads to happen (Makefile)
So to go the route of using the coreboot buildstack under Heads, nix would either have to also support older versions of coreboot (not the case right now; I think that 4.16+ are supported under nix). Otherwise, we could switch things around, get rid of musl-cross-make, and use coreboot's buildstack, but I'm worried that would increase maintainership cost. What is interesting about ownerboot is that they are also building the initrd tools to be packed directly from the nix buildstack. This is neat, but we would need something stable as a buildstack there. That would be better than using a per-coreboot-version buildstack to build everything inside of Heads, which would also increase the cache differences we would need to keep, since they wouldn't be shareable between boards depending on different coreboot versions. So my thoughts on your previous comment are that:
I don't think I follow here. Are you suggesting to use coreboot's build of the gcc stack for the rest of Heads, instead of just for coreboot? That's not exactly what I was suggesting earlier. I was suggesting we use the same gcc setup for both coreboot and the rest of Heads. (I wasn't aware of the usage of musl-cross-make; I assumed Heads used the same compilers coreboot builds/uses.) I'm not sure what it would take to replace musl-cross-make if wanted.
Yep this is what I was suggesting too. You mention:
What do you mean when you say "stable"? Why isn't nix's gcc+ada considered stable for this purpose?
So I took a look at fixing gnat6 in latest nixpkgs instead of relying on an old one, which should get past the gettimeofday vdso issue, I think. This isn't currently working due to bootstrapping issues. nixpkgs uses binary builds of gnat from https://github.com/alire-project/GNAT-FSF-builds to build nixpkgs' gcc+ada, and both nixpkgs and alire-project only actually go back to v11 :(. So this needs to be figured out somehow 🤔.
As of now:
So following your comment:
I thought the first step would be to have nix provide an earlier version of the toolstack (mostly for gnat), to be able to compile the coreboot 4.11 and later buildstacks AND musl-cross-make. If we can also use Nix's musl-cross-make builds: awesome. There will just need to be some adaptation to the Heads makefile, since the first thing built here is musl-cross-make. @mmlb makes sense?
@mmlb any news? :/
@tlaurion nope, sorry, I got pulled off onto other things at work and at home. I was actually thinking about this earlier today. I tried out https://github.com/alire-project/GNAT-FSF-builds on my own but got nowhere. I think it's way past time I solicit the help of others. I'll ask around in nix's matrix to see how we can get gnat6.
🎉 Now to open up a PR to nixpkgs.
@mmlb Fan-Tas-Tic! |
In parallel, I'm trying to make boards not depend on Ada by removing libgfxinit support on all boards, explicitly relying on each board config's kernel configs to provide a framebuffer through DRM. That will work either by abusing kernel fb exposition (no compression of the fb, fb-address-exposing config options that taint Heads on 5.x kernels for the i915 driver), by hacking kexec to pass the exposed flat fb address in the kexec call (the whitelisting approach under the kexec patch in master), or by requiring that the next kexec'ed kernel provide DRM+GPU support in its initrd so a fb is available when asking for the LUKS decryption passphrase. First results show that most non-dGPU boards did not really need coreboot's libgfxinit. If tests with dGPU vbios-provided blobs also confirm that libgfxinit is not required (they shouldn't; the only doubt is for the eDP variant of the x230, which provides a modified VBT that seems to rely on libgfxinit, at least per the docs, but a quick test by board owners should validate/invalidate this), then the Ada requirement for libgfxinit will be gone from Heads. Long story short, we might not need Ada altogether. History will tell soon enough. Crossposting here from NixOS/nixpkgs#225191 (comment)
… configs without libgfxinit linuxboot#1269. Modified, with limited understanding, absolute local builds to reuse the Nix cache of already-confirmed reproducible stuff. Removed gnat6/ada; added swtpm, texinfo and rsync, for which there were otherwise local failures.
@mmlb well, some progress, but as of 6423000 on CircleCI, builds end with
Force-pushed from be1393c to ea3e7a2
Hello there, I follow this PR and saw that some work has been done.
right?
Yep, that's more or less what I do. Ping me on the matrix channel so we can avoid a tangent here.
I'm confused how the boards that want the [tw]530 ROMs are passing in CI, since I skipped the neutering done in prep_env.
[tw]230 ME blobs: boards are dealing with blob downloads directly now.
We add it under CircleCI to download once, on clean builds only.
Ex:
https://github.com/linuxboot/heads/blob/master/boards%2Fw530-maximized%2Fw530-maximized.config#L71-L72
includes
https://github.com/linuxboot/heads/blob/master/targets%2Fxx30_me_blobs.mk
@mmlb I saw CircleCI empty steps for vbios download. Well, the boards that were depending on vbios have been moved to unmaintained directories, since nobody having the hardware tested them in the last PRs. First they were moved to untested-but-still-built, then moved to unmaintained_boards. There is no way I can maintain boards that are not actively used by anybody. When somebody is interested in at least testing the ROMs, with an external reprogrammer to revert back to a working state in case of bricking, those boards will be brought back to being built by CircleCI.
Discussion continued over matrix in the thread https://matrix.to/#/!pAlHOfxQNPXOgFGTmo:matrix.org/$4OPI4He9fhXYWtGUrpJyAMWjvOPHZ9iD0S7WQyRQBhw?via=matrix.org&via=nitro.chat&via=fairydust.space
The current state is that I pushed the docker counterpart of this PR's current state over https://github.com/tlaurion/heads/tree/nix_with_develop_created_container and https://github.com/tlaurion/heads/tree/nix_with_develop_created_container-updated_flake_lock, but the currently produced docker image doesn't expose an env with a proper PATH, which I think is the source of the builds failing when using the pushed dockerhub images. You can test in a branch of yours; if you are registered on CircleCI and GitHub has your ssh public key, you will be able to relaunch builds and get ssh access. When you land in the docker, you will see that env doesn't show the same as in locally launched docker environments.
You can also see at https://matrix.to/#/!pAlHOfxQNPXOgFGTmo:matrix.org/$mGsDZ0Bzvt0SyE6i_GPGFBVLxnvtQwfvn8BSIPZ2QmQ?via=matrix.org&via=nitro.chat&via=fairydust.space the commands I used to build the docker image and run it locally. That also fails locally, not for coreboot crossgcc, but when it comes time to build tpm2-tools, where PATH doesn't include sbin, where addgroup/groupadd is found. That will need fixing somehow as well.
Signed-off-by: Manuel Mendez <[email protected]>
Signed-off-by: Manuel Mendez <[email protected]>
Until nix PR is merged to not interfere with master/other pr caches Signed-off-by: Manuel Mendez <[email protected]>
#1630 should be ready for merge. Rebase/merge this and retest as needed.
@mmlb I spent the day on https://github.com/tlaurion/heads/tree/wip-nix-for-build and made it work, rebased/fixed from #1630. #1630 cannot be used as is for whatever reason; sys-root still needs to be there under the tpm2* modules. I couldn't get rid of that one and haven't figured out why yet. CircleCI builds can be observed and dived into under https://app.circleci.com/pipelines/github/tlaurion/heads?branch=wip-nix-for-build I was able to build locally without nix, with the nix layer, and from CircleCI. Notes: Don't forget to change the CircleCI project's variable CACHE_VERSION to something new if you want to skip reusing the CircleCI cache layers defined under .circleci, or change the prefix inside of CircleCI. The docker is next to be tested. Let me know what's next.
handover holiday
Use nix for the build environment. This gives us a reproducible build environment. Use it by entering a "pure" nix shell, so that only what is specified in shell.nix is available, by running nix-shell --pure
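The shell.nix entry point described above can be sketched roughly as follows. This is an illustrative minimal example, not the actual file from this PR: the placeholder <pinned-commit> and the package list are assumptions.

```nix
# shell.nix -- a minimal sketch of a pure, pinned build shell.
# <pinned-commit> is a placeholder; the package list is illustrative.
{ pkgs ? import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<pinned-commit>.tar.gz";
  }) {} }:

pkgs.mkShell {
  # Only these tools (plus a minimal base) are visible
  # inside `nix-shell --pure`.
  packages = with pkgs; [
    gnumake
    gcc
    bison
    flex
  ];
}
```

Running nix-shell --pure drops the caller's environment, so the build sees only what the pinned nixpkgs revision provides, which is what makes the environment reproducible across machines.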