Dockerfile build approach doesn't ensure consistent openssl lib at runtime #178

Open
j-m-harris opened this issue Oct 3, 2024 · 6 comments · May be fixed by #180
Comments

@j-m-harris
Contributor

This problem was discovered while investigating:

The build approach at https://github.com/cloudamqp/amqproxy/blob/main/Dockerfile produces a dynamically linked amqproxy binary, but does not ensure that the same library versions are present at runtime as at compile time.

They might happen to match, since https://github.com/84codes/crystal-container-images/blob/main/alpine/Dockerfile also defaults to alpine:latest, but that depends on when each image was built.

Not using the same version at runtime can lead to illegal instruction errors, as noted in issue #174, when using TLS for an upstream host.

For example, in our build environment right now:

  • Builder image:
    • libcrypto3-3.3.1-r3 x86_64 {openssl} (Apache-2.0) [installed]
    • libssl3-3.3.1-r3 x86_64 {openssl} (Apache-2.0) [installed]
  • Runtime image:
    • libcrypto3-3.3.2-r0 x86_64 {openssl} (Apache-2.0) [installed]
    • libssl3-3.3.2-r0 x86_64 {openssl} (Apache-2.0) [installed]

Workarounds

  • Build amqproxy statically with shards build --static (sketched below).
  • Copy libssl/libcrypto from the builder image into the runtime image.
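
A minimal sketch of the static-build workaround (stage layout and paths are illustrative, not the project's actual Dockerfile; it assumes the builder image ships static variants of the libraries):

FROM 84codes/crystal:latest-alpine AS builder
WORKDIR /usr/src/amqproxy
COPY shard.yml shard.lock ./
RUN shards install --production
COPY src/ src/
# --static is delegated to crystal build, so the resulting binary has no
# runtime dependency on the image's libssl/libcrypto
RUN shards build --release --production --static

FROM alpine:latest
COPY --from=builder /usr/src/amqproxy/bin/amqproxy /usr/bin/amqproxy
USER nobody
ENTRYPOINT ["/usr/bin/amqproxy"]
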
@mrmason

mrmason commented Oct 7, 2024

@dentarg - Do you have a preferred way of solving this? Typically the build and the run image are the same image rather than different ones, but if we do that here the run image ends up at almost 400MB because of all the build tools. If we simply switch to a static build we get a 32MB image, which seems acceptable.

REPOSITORY                      TAG      IMAGE ID       CREATED              SIZE
run-using-cystal                <none>   ca4896a7c404   6 seconds ago        377MB
run-using-alpine                <none>   8d798901d8db   About a minute ago   13.1MB
run-using-alpine-build-static   <none>   2fbf17928e34   About a minute ago   31.8MB

The other options I can see are:

  1. You lock your upstream images, e.g. base 84codes/crystal:1.13.2-alpine on an image that isn't latest, e.g. alpine:3.20.3 (sketched below).
  2. You also ship a runtime image alongside the build image, e.g. crystal-runtime:1.13.2-alpine.
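
A sketch of option 1 (tags are examples only; the point is that both stages track the same Alpine release):

FROM 84codes/crystal:1.13.2-alpine AS builder
WORKDIR /usr/src/amqproxy
COPY . .
RUN shards build --release --production

# Runtime pinned to the Alpine release the builder is based on,
# instead of alpine:latest
FROM alpine:3.20.3
RUN apk add --no-cache libssl3 libcrypto3
COPY --from=builder /usr/src/amqproxy/bin/amqproxy /usr/bin/amqproxy
ENTRYPOINT ["/usr/bin/amqproxy"]

Note this only narrows the window: both stages pull from the same v3.20 package feed, but since the builder image is built at a different time than the final image, patch-level drift is still possible.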

mrmason linked a pull request Oct 7, 2024 that will close this issue
@dentarg
Member

dentarg commented Oct 7, 2024

32MB sounds acceptable to me too, but I will let other people move this forward.

@carlhoerberg
Member

22MB when a scratch image is used: #183
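
For reference, a minimal sketch of the scratch approach, assuming a fully static binary (paths are illustrative; a CA bundle has to be copied in for upstream TLS, since scratch contains nothing):

FROM 84codes/crystal:latest-alpine AS builder
WORKDIR /usr/src/amqproxy
COPY . .
RUN shards build --release --production --static

# Final image contains only the binary and the CA certificates
FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /usr/src/amqproxy/bin/amqproxy /amqproxy
ENTRYPOINT ["/amqproxy"]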

@carlhoerberg
Member

But what we really should do is build a new 84codes/crystal image every day or so, so that the build image always has the latest libraries.

@carlhoerberg
Member

Thank you for narrowing down the issue!

carlhoerberg added a commit to 84codes/crystal-container-images that referenced this issue Oct 8, 2024
Incompatibilities between the library versions in the build image and
the runtime image can otherwise cause segfaults and other problems.

See: cloudamqp/amqproxy#178