
podman machine os apply command should abort if arch is different #19851

Closed · benoitf opened this issue Sep 4, 2023 · 34 comments

Labels: kind/bug (Categorizes issue or PR as related to a bug.), locked - please file new issue/PR, podman-desktop, stale-issue

@benoitf (Contributor) commented Sep 4, 2023

Issue Description

There is an FCOS image including podman available at https://quay.io/repository/podman/fcos?tab=tags

but this image is an x64 image.

If I run the command to apply this podman version, it succeeds, but restarting the machine afterwards fails.

It's because my system is an arm64 system, and the apply switches packages to their x64 counterparts.

The os apply command should detect the mismatch and abort the sequence if the platform arch and the image arch do not match.
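
For reference, a minimal pre-flight check along these lines (a sketch, assuming skopeo and jq are installed on the host; the tag is the one used in the reproduction below):

# uname prints arm64 (macOS) or aarch64 (Linux); image configs say arm64/amd64
HOST_ARCH=$(uname -m)
case "$HOST_ARCH" in aarch64) HOST_ARCH=arm64 ;; x86_64) HOST_ARCH=amd64 ;; esac
# Architecture recorded in the image config that skopeo resolves for this tag
IMG_ARCH=$(skopeo inspect docker://quay.io/podman/fcos:7cde6ab | jq -r .Architecture)
if [ "$HOST_ARCH" != "$IMG_ARCH" ]; then
    echo "arch mismatch: host=$HOST_ARCH image=$IMG_ARCH, aborting" >&2
    exit 1
fi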

Steps to reproduce the issue

podman machine os apply quay.io/podman/fcos:7cde6ab
Pulling manifest: ostree-unverified-image:docker://quay.io/podman/fcos:7cde6ab
Importing: ostree-unverified-image:docker://quay.io/podman/fcos:7cde6ab (digest: sha256:8bf5a428bbd000ec69d06d9d45439109275a76046401475aca6a496be8941df9)
ostree chunk layers needed: 51 (735.3 MB)
custom layers needed: 3 (33.7 MB)
Checking out tree 4c91b67...done
Inactive base removals:
  moby-engine
Staging deployment...done
...

But then when I restart the podman machine:

podman machine stop
podman machine start

it never finishes booting:

podman machine start
Starting machine "podman-machine-default"
Waiting for VM ...
Error: EOF

The qemu process sits at 99-100% CPU.

I also tried on a fresh machine (no containers, etc.).

Describe the results you received

An error; the machine never boots (see above).

Describe the results you expected

The podman machine starting and running the recent podman version.

podman info output

Not provided.

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

None

Additional information

The issue happens when applying an x64 image on an arm64 host.

benoitf added the kind/bug label Sep 4, 2023
@vrothberg (Member)

@ashley-cui can you take a look?

lsm5 linked a pull request Sep 4, 2023 that will close this issue
lsm5 self-assigned this Sep 4, 2023
@lsm5 (Member) commented Sep 4, 2023

@ashley-cui can you take a look?

This one's for me :)

EDIT: For context, @benoitf is trying the new fcos image built using packages from podman-next, which we'll be publishing to quay.io/podman/fcos:podman-next for bleeding-edge testing on macOS. Initial work was done in #19477, but there are pending issues with GitHub Actions that #19830 will resolve.

@baude (Member) commented Sep 5, 2023

@benoitf does it work with an image you have built?

@benoitf (Contributor, Author) commented Sep 5, 2023

@baude let me check

@benoitf (Contributor, Author) commented Sep 5, 2023

So I built an image quay.io/fbenoit/coreos-podman:2023-09-05 using https://github.com/coreos/layering-examples/blob/main/podman-next/Containerfile (removing the conmon package from the list, since otherwise the image can't be built), and it worked:

podman machine os apply quay.io/fbenoit/coreos-podman:2023-09-05 foo
Pulling from containers-storage: quay.io/fbenoit/coreos-podman:2023-09-05
Copying blob sha256:a3b80797e4fcbab57db054061f4280ea55901de6580c9df04a6d78f72ff6b307
Copying blob sha256:69ba596dfedd5ef06bebc36d69dc1c008499bb98e07a34502a70d7c2d278d226
Copying blob sha256:e0e2e95c0d01c12cdff033450593aed3250007cd746c85a1973c6251d3fa0698
Copying blob sha256:4e1c90f89decef5734ac1bfb03d1567174a3aadf4fd81d8fbd008a02e66670de
Copying blob sha256:4652c09428421200c04cb4359d3c63e429da1252014d431170333f0df4b0b454
Copying blob sha256:5fa86cdec434f23ef68c2b029c9c5f1a2660fbd2d9e91bb23d1aa19c44472cc0
Copying blob sha256:789b78d36a66158d4cbdca3cae4bbd285a76e380c737d881aebb012e6e771177
Copying blob sha256:93ed258cc569e4b1673ba72f5acde886caed0cb2fed4e5dfae9f4da15c0d87bd
Copying blob sha256:554caf97ebccc91de4e204c680720c6fdf05e71f8a19957204878085dfc5cbd7
Copying blob sha256:eb204ec45b800289266b27c69f8e8fd423094cca1f25edf98acc547eeadcb897
Copying blob sha256:5362130a9a9e012eaeffd4a96811c8aad1efb2d510e476aa8a1cc5258b45849d
Copying blob sha256:a6a962287674dabb97bc8c8f7d77345fbec0a71e1d184a37816b5454c22a13e5
Copying blob sha256:32f5c1bd1d14d98a9966625b5a4b595caf7a7a5ee6168a15b44c1f0b83e96c47
Copying blob sha256:fef399adbce5fc7cb5589b8181bc5380f7ff178498e87938c2f41e074a1c9522
Copying blob sha256:ee60f7a64772e5e98ae1dd020b127d92bf0ab25242ce489dff58210ec7b702dd
Copying blob sha256:0148c6b45ddb6ae54c3c580aa3d991d3b9b698befdd8e44a90947d46bff594fc
Copying blob sha256:13bf414ce6cadbe578268fea88c225bb69c5f1166d3e76c58a552fa45c06a258
Copying blob sha256:66dcd7a602393b45f5c1e5ec4a06c84e24c05244060280056311d70953dcfe98
Copying blob sha256:f6b5d53785d0bbd61054b94b850d3a05bee200ed792d0c18533bc10d234f4751
Copying blob sha256:de75f47bcc619659e516e900d8080e445a74fee65edd0b9b2f54070b44765e28
Copying blob sha256:1c182e88e5a5ec6cba6a5c59ba76f9ebb8b5aefbb12ddd9c4d6e610702d4275e
Copying blob sha256:801572bac51f67f7b52d523163661e125aff3c730074cef5cae9ea4283f451de
Copying blob sha256:d8a4624764ff8ac6844b47513a5d5720cd38ca09a4c4781af6868ac041baf5e4
Copying blob sha256:a5a69343c9f4705fd52f0486134b0849c56b95d4ca3d17bc1ab86b9c34525cfb
Copying blob sha256:a1f807766f2ae3c314f6ed54e7f2cc7cd9fcfbb3615fcfdc72bb2918246ffc4d
Copying blob sha256:1abad1e8de0133ae5392a09ca5310110093e86029e06b7e16686cc8de93e84a2
Copying blob sha256:074b6be293f61de22a247fd32a35aa36b01cb1ecf45563cb1d4ce03816437d68
Copying blob sha256:85b43661494074aff8707655d056fd36a1596396405dae2079d08fa6f937a0be
Copying blob sha256:09fe590b07d8f70c078bd3a4d200a3ef3221289ed0dc2729d11d048ddab2c709
Copying blob sha256:42b1aac55bf256ec7bfc44c2a937881c5c2b6fe6a1ec744419636092cdbc27ef
Copying blob sha256:db7a3e5a22f3732c96213b5bff80b407c6de660b01b4995cfe9760ce6354d516
Copying blob sha256:49db824bc7247573b0cb664aa9b439a11210bef47fb950b0b6e7fc1245e3712a
Copying blob sha256:a5e37dd03c94b6daed4241a438793e4d91248c66984de4e3ab57b3b9ba0bd7e6
Copying blob sha256:aeade8f96e184327d385da915cd03e1bcbb61f65b63f85ae07f2004d2725b5c4
Copying blob sha256:e91ea756d69aaab84044af51fdab3ebba5cfc6af31390126bb2934e43900b509
Copying blob sha256:ee38dbc84e149b5a61c2d1c110481b2cf59b485c0f06af4f9dd4ef71fcac412f
Copying blob sha256:5a6df4cf0da9df73023d35b6c44086ef9421210afda72b29bb9f7c401271d253
Copying blob sha256:d57ea6db0979b6af6d364bc613a5c353ceaaa9a824ee3d72566c895255e63124
Copying blob sha256:c8279b48d5d79b8db43a2269b23e3078ce61f7bdd385f1e3a4024d438b04e03a
Copying blob sha256:de29eabff67a7ea4e05198bf8a80ca85fc8f3343851188eaaceff092988d305b
Copying blob sha256:3b1e599f1ae9113f4540478388e54973b84d226e40bf84db97426648a112f2b8
Copying blob sha256:bb8ac1edd3fa0135d723c64dbaff7774af9ad217c18a73766ab0ab16ff4bd8ae
Copying blob sha256:d186db234758f9ccc7745f1687ef3684d33ffbea9b957e4d3182b603f04e0796
Copying blob sha256:317bd932ff60d856ff909a335f6a2ee1c8ce1164699cfa3c27590f996d485bd1
Copying blob sha256:81e0ff6d2fe83a8a5ab46899d149a2d9a45931afe21c78712e0b83f0d417a772
Copying blob sha256:a41f323e313959ea18da294a198eb70bba5a7c51cee7c9b7e7c9ed9487ddb49d
Copying blob sha256:dcd58e4ba811b15764bf1e76a18cc0d4aa3cbcdad5d371c6153bb9ebc4bca401
Copying blob sha256:a618d81beeffaf91186a0ef01bae9aa473bb42f2bdac5bcb62ed35026491b0b1
Copying blob sha256:fbb295d1a168a4de6851cbed871be682d3d7098077757066c98cbf5365583723
Copying blob sha256:30248c216c5484afd7fbc575d3be1f271e64bcab498fbc16903e01718d6a5d1f
Copying blob sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1
Copying blob sha256:74d04f7c58efa26fd0c36d550bb610bbc28730408de7197acbe907944a190782
Copying blob sha256:5ef1e52ec680ff0e3ce5a33c39eb28638d491e207f038cfc28e8e4bf1326ce2e
Copying blob sha256:8695f5e66d27651fa8e70e537ac80d4449496f307206f60de50cde65f30b83e3
Copying config sha256:f04a52a8aee82d07c3f8796fa73e51d0e1685c800e97ef91aecf5f793a49e23f
Writing manifest to image destination
Pulling manifest: ostree-unverified-image:oci:/tmp/quayiofbenoitcoreospodman2023090585450636
Importing: ostree-unverified-image:oci:/tmp/quayiofbenoitcoreospodman2023090585450636 (digest: sha256:443a74d2017714b080aaac4f1247d5ba982589196047ce2fa1547e7d087d74e8)
ostree chunk layers already present: 51
custom layers already present: 3
Checking out tree 778c243...done
Inactive base removals:
  moby-engine
Staging deployment...done
Freed: 18.3 kB (pkgcache branches: 0)
Upgraded:
  aardvark-dns 1.7.0-1.fc38 -> 102:1.7.0-1.20230830211559328417.main.50.gc9cbf7d
  containers-common 4:1-89.fc38 -> 4:1-95.fc38
  containers-common-extra 4:1-89.fc38 -> 4:1-95.fc38
  crun 1.8.6-1.fc38 -> 102:1.8.7-1.20230905082613939026.main.30.gfd0bae5
  netavark 1.7.0-1.fc38 -> 102:1.7.0-1.20230902140856039369.main.95.gbf9f117
  podman 5:4.6.1-1.fc38 -> 102:4.7.0~dev-1.20230905032641640272.main.1652.0e3b492fa.fc38
Downgraded:
  kernel 6.4.11-200.fc38 -> 6.4.7-200.fc38
  kernel-core 6.4.11-200.fc38 -> 6.4.7-200.fc38
  kernel-modules 6.4.11-200.fc38 -> 6.4.7-200.fc38
  kernel-modules-core 6.4.11-200.fc38 -> 6.4.7-200.fc38
Removed:
  containerd-1.6.19-1.fc38.aarch64
  moby-engine-20.10.23-1.fc38.aarch64
  runc-2:1.1.7-1.fc38.aarch64
Changes queued for next boot. Run "systemctl reboot" to start a reboot
$ podman machine stop foo
Waiting for VM to exit...
Machine "foo" stopped successfully
 $ podman machine start foo
Starting machine "foo"
Waiting for VM ...
Mounting volume... /Users:/Users
Mounting volume... /private:/private
Mounting volume... /var/folders:/var/folders

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:

	podman machine set --rootful foo

API forwarding listening on: /Users/benoitf/.local/share/containers/podman/machine/qemu/podman.sock

Another process was listening on the default Docker API socket address.
You can still connect Docker API clients by setting DOCKER_HOST using the
following command in your terminal session:

	export DOCKER_HOST='unix:///Users/benoitf/.local/share/containers/podman/machine/qemu/podman.sock'

Machine "foo" started successfully

@lsm5 (Member) commented Sep 5, 2023

@benoitf did you have dependency issues with conmon? I thought I got rid of all the problem builds from the copr. Could you please post any relevant logs?

@benoitf (Contributor, Author) commented Sep 5, 2023

@lsm5 I applied the following patch:

diff --git a/podman-next/Containerfile b/podman-next/Containerfile
index f284c76..988ca58 100644
--- a/podman-next/Containerfile
+++ b/podman-next/Containerfile
@@ -11,6 +11,6 @@ COPY rhcontainerbot-podman-next-fedora.gpg /etc/pki/rpm-gpg/
 # Note: Currently does not result in a size reduction for the container image
 RUN rpm-ostree override replace --experimental --freeze \
     --from repo="copr:copr.fedorainfracloud.org:rhcontainerbot:podman-next" \
-    aardvark-dns conmon crun netavark podman containers-common containers-common-extra && \
+    aardvark-dns crun netavark podman containers-common containers-common-extra && \
     rpm-ostree override remove moby-engine containerd runc && \
     ostree container commit

otherwise I get:

STEP 4/4: RUN rpm-ostree override replace --experimental --freeze     --from repo="copr:copr.fedorainfracloud.org:rhcontainerbot:podman-next"     aardvark-dns conmon crun netavark podman containers-common containers-common-extra &&     rpm-ostree override remove moby-engine containerd runc &&     ostree container commit
Enabled rpm-md repositories: copr:copr.fedorainfracloud.org:rhcontainerbot:podman-next fedora updates-modular updates fedora-cisco-openh264 fedora-modular updates-archive
Updating metadata for 'copr:copr.fedorainfracloud.org:rhcontainerbot:podman-next'...done
Updating metadata for 'fedora'...done
Updating metadata for 'updates-modular'...done
Updating metadata for 'updates'...done
Updating metadata for 'fedora-cisco-openh264'...done
Updating metadata for 'fedora-modular'...done
Updating metadata for 'updates-archive'...done
Importing rpm-md...done
rpm-md repo 'copr:copr.fedorainfracloud.org:rhcontainerbot:podman-next'; generated: 2023-09-05T14:00:29Z solvables: 1293
rpm-md repo 'fedora'; generated: 2023-04-13T20:36:48Z solvables: 59720
rpm-md repo 'updates-modular'; generated: 2023-08-19T01:34:25Z solvables: 1074
rpm-md repo 'updates'; generated: 2023-09-05T00:32:09Z solvables: 17225
rpm-md repo 'fedora-cisco-openh264'; generated: 2023-03-14T10:56:46Z solvables: 4
rpm-md repo 'fedora-modular'; generated: 2023-04-13T20:30:28Z solvables: 1068
rpm-md repo 'updates-archive'; generated: 2023-09-05T00:46:24Z solvables: 34320
error: No matches for "conmon" in repo 'copr:copr.fedorainfracloud.org:rhcontainerbot:podman-next'
Error: building at STEP "RUN rpm-ostree override replace --experimental --freeze     --from repo="copr:copr.fedorainfracloud.org:rhcontainerbot:podman-next"     aardvark-dns conmon crun netavark podman containers-common containers-common-extra &&     rpm-ostree override remove moby-engine containerd runc &&     ostree container commit": while running runtime: exit status 1

Let me now try with podman's copy of this Containerfile at https://github.com/containers/podman/tree/main/contrib/podman-next/fcos-podmanimage

@lsm5 (Member) commented Sep 5, 2023

ack. BTW, there are still some issues in the GitHub Actions workflow for building the fcos image that I'm working through right now. I'll update here once I have quay.io/podman/fcos:podman-next ready for testing.

@benoitf (Contributor, Author) commented Sep 5, 2023

So, I was able to successfully build https://github.com/containers/podman/blob/main/contrib/podman-next/fcos-podmanimage/Containerfile instead of https://github.com/coreos/layering-examples/blob/main/podman-next/Containerfile

I built it, pushed it to quay.io/fbenoit/coreos-podman:2023-09-05b, and then it worked using the os apply command.

I was able to stop and start the machine,

but it always fails to restart after podman machine os apply quay.io/podman/fcos:7cde6ab

I also tried on a fresh machine (so this time the image was not already on the podman machine):

podman machine os apply quay.io/fbenoit/coreos-podman:2023-09-05b
Pulling manifest: ostree-unverified-image:docker://quay.io/fbenoit/coreos-podman:2023-09-05b
Importing: ostree-unverified-image:docker://quay.io/fbenoit/coreos-podman:2023-09-05b (digest: sha256:9877fde9324279a7620dce8b22091a47cf932a12e606ec31901177a87b5d0418)
ostree chunk layers needed: 51 (650.1 MB)
custom layers needed: 3 (33.6 MB)
Checking out tree cf9c453...done
Inactive base removals:
  moby-engine
Staging deployment...done
Upgraded:
  aardvark-dns 1.7.0-1.fc38 -> 102:1.7.0-1.20230830211559328417.main.50.gc9cbf7d
  containers-common 4:1-89.fc38 -> 4:1-95.fc38
  containers-common-extra 4:1-89.fc38 -> 4:1-95.fc38
  crun 1.8.6-1.fc38 -> 102:1.8.7-1.20230905082613939026.main.30.gfd0bae5
  netavark 1.7.0-1.fc38 -> 102:1.7.0-1.20230902140856039369.main.95.gbf9f117
  podman 5:4.6.1-1.fc38 -> 102:4.7.0~dev-1.20230905032641640272.main.1652.0e3b492fa.fc38
Downgraded:
  kernel 6.4.11-200.fc38 -> 6.4.7-200.fc38
  kernel-core 6.4.11-200.fc38 -> 6.4.7-200.fc38
  kernel-modules 6.4.11-200.fc38 -> 6.4.7-200.fc38
  kernel-modules-core 6.4.11-200.fc38 -> 6.4.7-200.fc38
Removed:
  containerd-1.6.19-1.fc38.aarch64
  moby-engine-20.10.23-1.fc38.aarch64
  runc-2:1.1.7-1.fc38.aarch64
Changes queued for next boot. Run "systemctl reboot" to start a reboot
 podman machine stop
podman machine start
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users:/Users
Mounting volume... /private:/private
Mounting volume... /var/folders:/var/folders

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:

	podman machine set --rootful

API forwarding listening on: /Users/benoitf/.local/share/containers/podman/machine/qemu/podman.sock

Another process was listening on the default Docker API socket address.
You can still connect Docker API clients by setting DOCKER_HOST using the
following command in your terminal session:

	export DOCKER_HOST='unix:///Users/benoitf/.local/share/containers/podman/machine/qemu/podman.sock'

Machine "podman-machine-default" started successfully

It also works.

So on my side, the failure seems specific to the published image.

@benoitf (Contributor, Author) commented Sep 5, 2023

Looking at the difference:

Removed:
  grub2-efi-aa64-1:2.06-95.fc38.aarch64
  libatomic-13.2.1-1.fc38.aarch64
  qemu-user-static-x86-2:7.2.4-2.fc38.aarch64
  shim-aa64-15.6-2.aarch64
Added:
  grub2-efi-x64-1:2.06-95.fc38.x86_64
  grub2-pc-1:2.06-95.fc38.x86_64
  grub2-pc-modules-1:2.06-95.fc38.noarch
  microcode_ctl-2:2.1-55.fc38.x86_64
  shim-x64-15.6-2.x86_64

I think the issue is that the image published by the CI is an x64 image, while my image is an arm64 image (matching my system).
This probably explains why it's failing; I guess the system doesn't like being switched from arm64 to x64 :-)
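
One way to confirm this from the host without pulling anything (a sketch, assuming skopeo and jq are installed):

# A multi-arch tag returns a manifest list / image index media type
# (application/vnd.docker.distribution.manifest.list.v2+json or
# application/vnd.oci.image.index.v1+json); a single-arch tag returns
# a plain image manifest instead.
skopeo inspect --raw docker://quay.io/podman/fcos:7cde6ab | jq -r .mediaType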

@benoitf (Contributor, Author) commented Sep 5, 2023

So quay.io/podman/fcos:podman-next should be a multi-arch image (including both x64 and arm64 variants).
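
For what it's worth, a sketch of how such a multi-arch image could be built and pushed with podman itself (assuming qemu-user-static or native builders are available for the non-host architecture):

# Build both variants into a single manifest list, then push all entries
podman build --platform linux/amd64,linux/arm64 \
    --manifest quay.io/podman/fcos:podman-next .
podman manifest push --all quay.io/podman/fcos:podman-next \
    docker://quay.io/podman/fcos:podman-next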

@benoitf (Contributor, Author) commented Sep 5, 2023

And the machine os apply command could probably verify the architecture: if it is fetching an x64 image while the system is arm64, it should report an error and abort.

@lsm5 (Member) commented Sep 5, 2023

@cevich remind me, is there any special handling involved in building a multi-arch image? Also, are the multi-arch image issues fixed or still pending? I don't remember if that was a Cirrus or Quay thing.

benoitf changed the title from "podman machine os apply works but when restarting the machine it fails" to "podman machine os apply command should abort if arch is different" Sep 5, 2023
@benoitf (Contributor, Author) commented Sep 5, 2023

I've updated the title of the issue and the description

@ashley-cui (Member)

Hmm, is this issue better fixed in rpm-ostree?

@benoitf (Contributor, Author) commented Sep 5, 2023

@ashley-cui I don't know who is responsible for what, but yes: in the end the user thinks it might work when it doesn't, so the process should fail up front.

@lsm5 (Member) commented Sep 5, 2023

For now, the issue is that the image build in the new GitHub Actions addition didn't work as expected, which the linked PR should fix once complete.

@baude (Member) commented Sep 6, 2023

@rhatdan @vrothberg here we have yet another example of podman just doing whatever and not helping our users. When I am back from PTO (and @vrothberg is as well), I'll huddle us once and for all on this topic.

thanks @benoitf

@vrothberg (Member)

We cannot enforce platform checks by default (see #12682 for details). We tried it a number of times and immediately broke users. The biggest problem is that some images claim to be of a different architecture. There were bugs in practically all tools causing "wrong platforms" (e.g., Podman/Buildah, Docker, buildkit). We've tried enforcing in c/image, in libimage, etc., and every time we enforced platform checks, things broke in the wild immediately.

I think it boils down to adding an --enforce-arch flag when pulling images, so that users who know what they're doing, and certain commands (e.g., machine os apply), can have strict enforcement. It could even be a field in containers.conf, but it must be configurable for the sake of portability. As mentioned above, I cannot stress enough how many images get their platform wrong.
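
As a sketch of what that could look like from the CLI (note: the --enforce-arch flag is hypothetical and does not exist today), the closest approximation with existing flags is to request the host architecture explicitly and verify after the pull:

# Hypothetical strict mode (not an existing option):
#   podman pull --enforce-arch quay.io/podman/fcos:podman-next
# Approximation with existing flags: request a specific arch, then verify;
# keep in mind that the image's own metadata can be wrong.
podman pull --arch arm64 quay.io/podman/fcos:podman-next
podman image inspect --format '{{.Os}}/{{.Architecture}}' quay.io/podman/fcos:podman-next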

@ashley-cui (Member)

In this instance, since the image comes from a remote registry, we're letting rpm-ostree take care of the pulling as well as the rebase. Podman isn't directly doing the pulls, so we would have to make a separate call to check the arch before letting rpm-ostree do the pull. I think the better option is to have this implemented in rpm-ostree, but I can see letting podman check the arch as a temporary workaround.

@lsm5 (Member) commented Sep 6, 2023

Can we all please hold off on any further discussion until we get the linked PR in?

I'm working on building a multi-arch image in GitHub Actions: https://github.com/containers/automation_sandbox/actions/runs/6097256491/job/16544521868. Once that's done, we should be good to go.

@benoitf (Contributor, Author) commented Sep 6, 2023

@lsm5 users could reference other images that they're building, etc. The issue is still there (it will just work for the referenced image). And when it happens, the podman machine is stuck/broken and we can't recover.

@lsm5 (Member) commented Sep 6, 2023

@lsm5 users could reference other images that they're building, etc. The issue is still there (it will just work for the referenced image). And when it happens, the podman machine is stuck/broken and we can't recover.

ohh, then I gotta remove the linked PR and let @baude @vrothberg @ashley-cui handle that separately. My apologies.

@ashley-cui (Member)

All good, thanks @lsm5 for updating the podman-next image :)

lsm5 removed their assignment Sep 6, 2023
@benoitf (Contributor, Author) commented Sep 6, 2023

But anyway, yes @lsm5, thanks for fixing the quay.io/podman/fcos image, so it'll work smoothly with this one.

@lsm5 (Member) commented Sep 6, 2023

BTW, final update from my side here, and then I'll stop making noise.

An amd64/arm64 multi-arch image is available at: https://quay.io/repository/podman/fcos?tab=tags&tag=latest. The expiration doesn't look correct; I'll fix that soon. But the image should hopefully work.

EDIT: Continuing my GHA fix in #19877
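
To double-check the published tag from the host, one can list the architectures recorded in the manifest list (a sketch, assuming jq is installed and using the podman-next tag mentioned earlier):

# Should print both amd64 and arm64 for a proper multi-arch tag
podman manifest inspect quay.io/podman/fcos:podman-next \
    | jq -r '.manifests[].platform.architecture'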

@benoitf (Contributor, Author) commented Sep 7, 2023

@lsm5 FYI, the current image works like a charm:

$ podman machine ssh


Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
Fedora CoreOS 38.20230819.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

[core@localhost ~]$ podman --version
podman version 4.7.0-dev

@benoitf (Contributor, Author) commented Sep 7, 2023

Could the version contain the git sha? e.g.:

4.7.0-dev-<git-short-sha>

@lsm5 (Member) commented Sep 7, 2023

Could the version contain the git sha? e.g.:

4.7.0-dev-<git-short-sha>

Is rpm -q available in this environment?

@lsm5 (Member) commented Sep 7, 2023

Could the version contain the git sha? e.g.:

4.7.0-dev-<git-short-sha>

BTW, I think I can make this happen; I just want to confirm that API Version will also say 4.7.0-dev-<git-short-sha>. It may not be a big deal, since this case isn't really production use.

@benoitf (Contributor, Author) commented Sep 7, 2023

@lsm5 yes, it's available on the podman machine:

rpm -q podman
podman-4.7.0~dev-1.20230906141109912503.main.1672.2806378c1.fc38.aarch64

But basically we don't connect to the machine over ssh; we run the CLI on the host/macOS computer.

For example, if I run the podman version command on macOS, it displays:

$ podman version
Client:       Podman Engine
Version:      4.6.2
API Version:  4.6.2
Go Version:   go1.21.0
Git Commit:   5db42e86862ef42c59304c38aa583732fd80f178
Built:        Mon Aug 28 16:55:03 2023
OS/Arch:      darwin/arm64

Server:       Podman Engine
Version:      4.7.0-dev
API Version:  4.7.0-dev
Go Version:   go1.20.7
Built:        Wed Sep  6 16:34:27 2023
OS/Arch:      linux/arm64

So I would be fine if there were a Git Commit field for the server part as well.
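
As a side note, the full package NVR (including the git sha) can already be read from the host without an interactive session, since podman machine ssh accepts a command to run (machine name assumed to be the default here):

podman machine ssh podman-machine-default rpm -q podman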

@lsm5 (Member) commented Sep 7, 2023

@benoitf ack, I'll include the change in #19830

@github-actions (bot) commented Oct 8, 2023

A friendly reminder that this issue had no activity for 30 days.

@benoitf (Contributor, Author) commented Apr 3, 2024

Closing.

benoitf closed this as not planned Apr 3, 2024
stale-locking-app bot added the locked - please file new issue/PR label Jul 3, 2024
stale-locking-app bot locked as resolved and limited conversation to collaborators Jul 3, 2024