
CI: Use cross compilation for arm64 #4106

Draft
wants to merge 3 commits into master

Conversation

SamantazFox
Member

No description provided.

@SamantazFox SamantazFox force-pushed the cross-compilation branch 4 times, most recently from bdabf89 to 058618d Compare September 16, 2023 22:26
Member

@unixfox unixfox Sep 16, 2023


Could we use the official Crystal Docker image here?

This would require compiling the binary outside the Dockerfile, then copying the file into the correct location for each architecture.

This is actually a technique used by a lot of Golang projects.

My idea is to finally have a single Docker image for all the architectures, instead of separate Docker images.
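A minimal sketch of the pattern being described, in the style used by many Golang projects (the image tags, paths, and build flags below are illustrative assumptions, not taken from this PR):

```dockerfile
# Stage 1: build a static binary using the official Crystal image.
# (Tag is hypothetical; note that crystallang/crystal only publishes
# amd64 images, which is the limitation discussed below.)
FROM crystallang/crystal:1.9-alpine AS builder
WORKDIR /src
COPY . .
RUN shards install --production
RUN crystal build --static --release src/invidious.cr -o /invidious

# Stage 2: minimal runtime image; only the compiled binary is copied in.
FROM alpine:3.18
COPY --from=builder /invidious /usr/local/bin/invidious
ENTRYPOINT ["/usr/local/bin/invidious"]
```

With this split, the heavy Crystal toolchain never ends up in the final image, and the runtime stage is the same for every architecture.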

Member Author


They don't provide official ARM64 images:
https://hub.docker.com/r/crystallang/crystal/tags

Member

@unixfox unixfox Sep 17, 2023


Nah, you didn't understand what I meant. You use the official Crystal Docker image just for cross-compiling the binary, then you create a separate Docker image based on plain Alpine, into which you copy the binary for each architecture.

Now that I think of it, we could also compile the binary on the host and not inside Docker. But that would make it much more difficult for anyone to compile Invidious using only Docker.

I'm on my phone so I can't easily find an example, but most Golang projects do it like this.

Member Author


Ah, I see. The problem with that is that we'd need two Dockerfiles (one for each arch), and AFAIK, you can't COPY --from between different build contexts.


@SamantazFox SamantazFox force-pushed the cross-compilation branch 4 times, most recently from c9689a9 to a56eeda Compare September 17, 2023 11:18
@SamantazFox
Member Author

This is a perfect example of "works on my machine" ...
I'll see if I can force a specific version of PCRE.

@SamantazFox
Member Author

Ahhh, finally :D

@SamantazFox SamantazFox marked this pull request as ready for review September 17, 2023 12:12
@SamantazFox SamantazFox requested a review from a team as a code owner September 17, 2023 12:12
@SamantazFox SamantazFox requested review from syeopite and removed request for a team September 17, 2023 12:12
@unixfox
Member

unixfox commented Sep 17, 2023

It's great, but my issue is that I don't want to have many Dockerfiles for building Invidious; this may confuse our users.

If this new Dockerfile, docker/Dockerfile.arm64-musl-cross, mostly only works with GitHub Actions (it requires previous steps), then it shouldn't be placed in the docker/ folder.

Ideally, we should still keep a Dockerfile that works out of the box, or create a make command that does all of that.

I know it's just for testing at the moment, but let's not forget about that.

I'm fine with this becoming the main way to build Invidious in the future, but we have to keep the ability to easily build Invidious with a single command (or two).

@SamantazFox
Member Author

I should have added all the necessary targets into the Makefile.

For regular users, any of the following will build Invidious:

  • docker-compose up
  • docker build -f docker/Dockerfile .
  • make

Then, for users wanting to cross-compile (e.g. if the Crystal compiler is not available on the target, or the target is not very powerful, like an RPi-class SBC), Invidious can be compiled using this process:

  1. run make invidious-cross-{arch}
  2. copy invidious-{arch}.o over to the target
  3. install dev dependencies (libxml, openssl, libyaml, xz, pcre, gcc) on the target
  4. run make invidious-{arch} to link the .o with this platform's libraries
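As a concrete sketch of those four steps, using arm64 as the example arch (the host address and the Alpine package names are my assumptions, not from this PR):

```shell
# 1. On the build host: cross-compile to an object file
make invidious-cross-arm64

# 2. Copy the object file over to the target machine
scp invidious-arm64.o user@target:~/invidious/

# 3. On the target: install the dev dependencies
#    (Alpine package names assumed for libxml, openssl, libyaml, xz, pcre)
apk add libxml2-dev openssl-dev yaml-dev xz-dev pcre-dev gcc

# 4. On the target: link the .o against this platform's libraries
make invidious-arm64
```

The expensive Crystal compilation happens once on the powerful host; the target only needs a C toolchain and the runtime libraries to do the final link.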

Member

@unixfox unixfox left a comment


I'd prefer to implement the solution described in #4106 (comment).

So I won't accept this merge request for now, sorry.

@SamantazFox
Copy link
Member Author

Realistically, cross-compilation with Docker is only required for CI/CD, right?
In that case, what is the difference between building the .o + fetching VideoJS dependencies in the host runner, vs in docker?

@unixfox
Copy link
Member

unixfox commented Sep 17, 2023

Realistically, cross-compilation with Docker is only required for CI/CD, right?
In that case, what is the difference between building the .o + fetching VideoJS dependencies in the host runner, vs in docker?

Not being required to install Crystal locally.

My idea was to have the cross-compilation Dockerfile used as the default Dockerfile. It works great for Intel x86 processors, and this would allow anyone to build custom images for other architectures out of the box with docker buildx.

For people using ARM processors, we can keep Dockerfile.arm64 for now, until Crystal releases an official ARM image.

This way, we can keep just two Dockerfiles in the repository.
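For reference, the buildx flow being described would look something like this from a user's machine (the image tag is illustrative):

```shell
# Build a multi-arch image from a single Dockerfile; buildx runs one
# build per requested platform and merges them into one manifest list.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -f docker/Dockerfile \
  -t invidious/invidious:latest \
  --push .
```

Inside the Dockerfile, buildx exposes build arguments such as TARGETARCH, which is what lets one Dockerfile serve several architectures.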

@SamantazFox
Member Author

Not being required to install Crystal locally.

For people using ARM processors, we can keep Dockerfile.arm64 for now, until Crystal releases an official ARM image.

This way, we can keep just two Dockerfiles in the repository.

Yeah, but given that most people will be building for the same target as their host, can't we simplify and only have one Dockerfile (i.e. adapt the "arm" one to be generic, and stop using the upstream one)?

Also, given that the work is almost done, I was thinking of adding support for armhf (ARM v7 hard-float EABI).

My idea was to have the cross-compilation Dockerfile used as the default Dockerfile. It works great for Intel x86 processors, and this would allow anyone to build custom images for other architectures out of the box with docker buildx.

Can it be done so that a single Dockerfile works all ways? (either arm64 or amd64 host -> either arm64 or amd64 target)

@unixfox
Member

unixfox commented Sep 17, 2023

Not being required to install Crystal locally.
For people using ARM processors, we can keep Dockerfile.arm64 for now, until Crystal releases an official ARM image.
This way, we can keep just two Dockerfiles in the repository.

Yeah, but given that most people will be building for the same target as their host, can't we simplify and only have one Dockerfile (i.e. adapt the "arm" one to be generic, and stop using the upstream one)?

That would be very difficult to do, and maintaining a whole Crystal Docker image would be a lot more work for us.

My idea was to have the cross-compilation Dockerfile used as the default Dockerfile. It works great for Intel x86 processors, and this would allow anyone to build custom images for other architectures out of the box with docker buildx.

Can it be done so that a single Dockerfile works all ways? (either arm64 or amd64 host -> either arm64 or amd64 target)

We would have to use the Crystal package from Alpine Linux for that, but the issue is that we can't easily control which Crystal version we use: Alpine doesn't offer the ability to pin a specific Crystal version, so we have to use whatever version ships with the Alpine release we build on.

So if, for example, we one day have to stay on Crystal 10.10.1 due to breaking changes in a newer version, but only 10.10.2 is available on Alpine Linux, we're screwed.
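To illustrate the constraint: on Alpine, the Crystal version is tied to the distro release, so the closest thing to "pinning" is selecting a repository branch (the branch shown is illustrative):

```shell
# Installs whatever Crystal version the chosen Alpine branch ships;
# there is no way to request an arbitrary older Crystal release.
apk add crystal shards \
  --repository=https://dl-cdn.alpinelinux.org/alpine/v3.17/community
```

If the needed Crystal version was never packaged in any Alpine branch, this approach simply cannot provide it.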

@SamantazFox SamantazFox marked this pull request as draft October 8, 2023 18:41
@SamantazFox SamantazFox added the exempt-stale Exempt the issue from staling label Oct 8, 2023
@SamantazFox
Copy link
Member Author

Note: I'm putting this PR aside until I have the time/willingness to address unixfox's remark. For whoever finds this in the meantime: the code is fully functional, feel free to reuse it.

@github-actions github-actions bot added the stale label Jan 7, 2024
@iv-org iv-org deleted a comment from github-actions bot Jan 27, 2024
@SamantazFox SamantazFox removed stale exempt-stale Exempt the issue from staling labels Jan 27, 2024
@SamantazFox SamantazFox added unfinished More work is needed on this PR, or on something this PR uses. exempt-stale Exempt the issue from staling labels Jan 27, 2024
PCRE2 support was added in Crystal v1.7.0, and is used by default
as of Crystal v1.8.0.

As we don't want to have to guess which version of PCRE was
used on the build host, force the use of the legacy version until
we drop support for older versions of Crystal.
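Forcing the legacy engine corresponds to Crystal's `-Duse_pcre` compile-time flag, which selects PCRE even on compiler versions that default to PCRE2. A sketch of a direct build invocation (Invidious is normally built via make, so the exact command line here is illustrative):

```shell
# Force the legacy PCRE engine (Crystal >= 1.8 defaults to PCRE2)
crystal build src/invidious.cr -Duse_pcre --release
```

This keeps the linked regex library predictable regardless of which Crystal version the build host happens to have.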