Issue Description
Installing krun with
sudo dnf install crun-krun
and running Podman with the krun runtime causes odd behavior: the container doesn't start.
podman run --runtime=krun --rm -it alpine
What's odd is that if I keep entering that same command over and over, it eventually works. It is bizarre indeed.
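The intermittent behavior above can be sketched as a retry loop that counts successes and failures. This is an illustrative sketch, not part of the original report; REPRO_CMD defaults to a harmless stand-in so the loop itself runs anywhere — substitute the failing podman invocation to measure the flakiness:

```shell
# Run the same command several times and tally outcomes.
# Replace the stand-in, e.g.:
#   REPRO_CMD="podman run --runtime=krun --rm alpine true"
REPRO_CMD=${REPRO_CMD:-true}
ok=0; fail=0
for _ in 1 2 3 4 5; do
    if sh -c "$REPRO_CMD" >/dev/null 2>&1; then
        ok=$((ok + 1))
    else
        fail=$((fail + 1))
    fi
done
echo "ok=$ok fail=$fail"
```

With the real command, a mix of successes and failures in the tally would confirm the race-like behavior rather than a hard failure.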
Running with sudo, however, it works properly! I'm not sure what permission needs to be changed for this to work.
The preinstalled crun binary in Fedora 40 has libkrun capabilities, and using
--annotation=run.oci.handler=krun
I can invoke libkrun, but it still hits the same issue.
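One guess — an assumption on my part, not a confirmed root cause — is that the rootless user lacks access to /dev/kvm, which libkrun needs to boot its microVM; that would explain why sudo works. A quick diagnostic sketch:

```shell
# Check whether /dev/kvm exists, its permissions, and whether the
# current user is in the kvm group (one common way rootless access
# is granted). This is a diagnostic guess, not a confirmed fix.
if [ -e /dev/kvm ]; then
    kvm_status="present: $(ls -l /dev/kvm)"
    if id -nG | tr ' ' '\n' | grep -qx kvm; then
        kvm_status="$kvm_status (user in kvm group)"
    else
        kvm_status="$kvm_status (user NOT in kvm group)"
    fi
else
    kvm_status="missing"
fi
echo "/dev/kvm $kvm_status"
```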
Steps to reproduce the issue
Install Fedora 40 Workstation (ideally with an AMD CPU)
Install crun-krun using dnf
Run podman run --runtime=krun --rm -it alpine
Describe the results you received
podman run --runtime=krun --rm -it alpine Error: krun: failed configuring mounts for handler at phase: HANDLER_CONFIGURE_AFTER_MOUNTS: No such file or directory: OCI runtime attempted to invoke a command that was not found
Here is the debug log output, which isn't very helpful since there are no logs from conmon:
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug --runtime=krun --rm -it alpine)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/brian/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/brian/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/brian/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/krun"
INFO[0000] Setting parallel job count to 37
DEBU[0000] Pulling image alpine (policy: missing)
DEBU[0000] Looking up image "alpine" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/brian/.local/share/containers/storage+/run/user/1000/containers]@63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage
DEBU[0000] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage ([overlay@/home/brian/.local/share/containers/storage+/run/user/1000/containers]@63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636)
DEBU[0000] exporting opaque data as blob "sha256:63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Looking up image "docker.io/library/alpine:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/brian/.local/share/containers/storage+/run/user/1000/containers]@63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Found image "docker.io/library/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage
DEBU[0000] Found image "docker.io/library/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage ([overlay@/home/brian/.local/share/containers/storage+/run/user/1000/containers]@63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636)
DEBU[0000] exporting opaque data as blob "sha256:63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Looking up image "alpine" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/brian/.local/share/containers/storage+/run/user/1000/containers]@63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage
DEBU[0000] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage ([overlay@/home/brian/.local/share/containers/storage+/run/user/1000/containers]@63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636)
DEBU[0000] exporting opaque data as blob "sha256:63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Inspecting image 63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636
DEBU[0000] exporting opaque data as blob "sha256:63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Inspecting image 63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636
DEBU[0000] Inspecting image 63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636
DEBU[0000] Inspecting image 63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Allocated lock 1 for container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] exporting opaque data as blob "sha256:63b790fccc9078ab8bb913d94a5d869e19fca9b77712b315da3fa45bb8f14636"
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported
DEBU[0000] Check for idmapped mounts support
DEBU[0000] Created container "53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206"
DEBU[0000] Container "53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206" has work directory "/home/brian/.local/share/containers/storage/overlay-containers/53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206/userdata"
DEBU[0000] Container "53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206" has run directory "/run/user/1000/containers/overlay-containers/53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206/userdata"
DEBU[0000] Handling terminal attach
INFO[0000] Received shutdown.Stop(), terminating! PID=4980
DEBU[0000] Enabling signal proxying
DEBU[0000] Cached value indicated that volatile is being used
DEBU[0000] overlay: mount_data=lowerdir=/home/brian/.local/share/containers/storage/overlay/l/SSQTFI5KWW4MEEK2N26TKPMV4P,upperdir=/home/brian/.local/share/containers/storage/overlay/8f87a8245201aff8d49a0cc6b428b08d1a881658dbc4666c0da96dc5fd8abbf6/diff,workdir=/home/brian/.local/share/containers/storage/overlay/8f87a8245201aff8d49a0cc6b428b08d1a881658dbc4666c0da96dc5fd8abbf6/work,userxattr,volatile,context="system_u:object_r:container_file_t:s0:c420,c677"
DEBU[0000] Made network namespace at /run/user/1000/netns/netns-b10e3ff6-f156-d368-3346-b30fcbf9a4ca for container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] pasta arguments: --config-net --dns-forward 169.254.1.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/netns/netns-b10e3ff6-f156-d368-3346-b30fcbf9a4ca --map-guest-addr 169.254.1.2
DEBU[0000] Mounted container "53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206" at "/home/brian/.local/share/containers/storage/overlay/8f87a8245201aff8d49a0cc6b428b08d1a881658dbc4666c0da96dc5fd8abbf6/merged"
DEBU[0000] Created root filesystem for container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 at /home/brian/.local/share/containers/storage/overlay/8f87a8245201aff8d49a0cc6b428b08d1a881658dbc4666c0da96dc5fd8abbf6/merged
INFO[0000] pasta logged warnings: "Couldn't get any nameserver address"
DEBU[0000] /proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts
DEBU[0000] Setting Cgroups for container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 to user.slice:libpod:53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/" resolved to host path "/home/brian/.local/share/containers/storage/overlay/8f87a8245201aff8d49a0cc6b428b08d1a881658dbc4666c0da96dc5fd8abbf6/merged"
DEBU[0000] Created OCI spec for container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 at /home/brian/.local/share/containers/storage/overlay-containers/53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 -u 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 -r /usr/bin/krun -b /home/brian/.local/share/containers/storage/overlay-containers/53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206/userdata -p /run/user/1000/containers/overlay-containers/53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206/userdata/pidfile -n relaxed_jang --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 --full-attach -s -l journald --log-level debug --syslog -t --conmon-pidfile /run/user/1000/containers/overlay-containers/53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/brian/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/brian/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg krun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg --rm --exit-command-arg 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206.scope
DEBU[0000] Cleaning up container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] Tearing down network namespace at /run/user/1000/netns/netns-b10e3ff6-f156-d368-3346-b30fcbf9a4ca for container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] Unmounted container "53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206"
DEBU[0000] Removing container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] Cleaning up container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 storage is already unmounted, skipping...
DEBU[0000] Removing all exec sessions for container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206
DEBU[0000] Container 53e2d7048d9787fa6f45e8d7eb8b660b28d4dfebd2f22551b960279b1a72e206 storage is already unmounted, skipping...
DEBU[0000] ExitCode msg: "container create failed (no logs from conmon): conmon bytes \"\": readobjectstart: expect { or n, but found \x00, error found in #0 byte of ...||..., bigger context ...||..."
Error: container create failed (no logs from conmon): conmon bytes "": readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...
DEBU[0000] Shutting down engines
brian@fedora:~$
Describe the results you expected
I expect Podman to use the krun runtime. On my Windows machine, using Podman Desktop's podman machine with the same Fedora 40 container, it works as expected. Specifically, the kernel inside the container should differ from the host's, which showcases krun in action.
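That expected result can be sketched as a kernel-version comparison. The guest value here is hard-coded from the sudo run pasted below (since the rootless run fails); with a working setup you would query the guest directly:

```shell
# Compare host kernel with the kernel seen inside the krun microVM.
host_kernel=$(uname -r)
# With working krun you would query the guest directly, e.g.:
#   guest_kernel=$(podman run --runtime=krun --rm alpine uname -r)
guest_kernel="6.6.52"   # value observed in the sudo run in this report
if [ "$host_kernel" != "$guest_kernel" ]; then
    echo "kernels differ: host=$host_kernel guest=$guest_kernel"
fi
```

Differing kernels indicate krun booted its own microVM kernel; identical kernels would mean the container ran on the host kernel like a normal crun container.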
brian@fedora:~$ uname -a
Linux fedora 6.11.8-200.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 14 20:38:18 UTC 2024 x86_64 GNU/Linux
brian@fedora:~$ sudo podman run --runtime=krun --rm -it alpine
/ # uname -a
Linux f47b38a03895 6.6.52 #1 SMP PREEMPT_DYNAMIC Tue Oct 8 13:02:33 CEST 2024 x86_64 Linux
/ #
bmahabirbu changed the title from "Using krun with podman on Fedora 40 Workstation causes a race condition" to "Using Krun with Podman on Fedora 40 Workstation Causes a Race Condition" on Nov 20, 2024
podman info output
host:
  arch: amd64
  buildahVersion: 1.38.0
  cgroupControllers:
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-2.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 98.2
    systemPercent: 0.49
    userPercent: 1.31
  cpus: 12
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: workstation
    version: "40"
  eventLogger: journald
  freeLocks: 2047
  hostname: fedora
  idMappings:
    gidmap:
      - host_id: 1000
        size: 1
      - host_id: 524288
        size: 65536
    uidmap:
      - host_id: 1000
        size: 1
      - host_id: 524288
        size: 65536
  kernel: 6.11.8-200.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 27559034880
  memTotal: 32673906688
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.12.2-2.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.2
    package: netavark-1.12.2-1.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.2
  ociRuntime:
    name: crun
    package: crun-1.17-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.17
      commit: 000fa0d4eeed8938301f3bcf8206405315bc1017
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20241030.gee7d0b6-1.fc40.x86_64
    version: |
      pasta 0^20241030.gee7d0b6-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
      https://www.gnu.org/licenses/old-licenses/gpl-2.0.html
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 0h 17m 58.00s
  variant: ""
plugins:
  authorization: null
  log:
  network:
  volume:
registries:
  search:
store:
  configFile: /home/brian/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/brian/.local/share/containers/storage
  graphRootAllocated: 498403901440
  graphRootUsed: 16475881472
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 11
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/brian/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.0
  Built: 1731456000
  BuiltTime: Tue Nov 12 19:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.7
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.0
If you are unable to run podman info for any reason, please provide the podman version, operating system and its version and the architecture you are running.
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
Yes
Additional environment details
I have an AMD 7600 CPU and a 7800 XT graphics card, with SVM virtualization enabled.