I'm trying to run nabla containers on Kubernetes with cri-containerd and flannel CNI (also tried Calico), but my pod keeps crashing with "CrashLoopBackOff" #83
Comments
Hi, are you using a /32-based network? If so, you may run into something related to #40.
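(For reference: flannel writes the per-node pod subnet it hands to the CNI plugin into `/run/flannel/subnet.env` as `FLANNEL_SUBNET=...`, so the prefix length in use can be confirmed there; that path is flannel's default and may differ per setup.)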
Thank you for your response. It's a /23-based network.
Hmm... is it possible to share with us what
Thank you, a little delayed response is totally understandable. I made a fresh setup on bare metal but I'm still getting the same error. I only use one physical node for this cluster. A bridge (cni0) is used to connect the Kubernetes pods.
Could you run `ip addr` inside the container and paste the output? Thanks.
…On Fri, Sep 20, 2019, 7:53 AM Im wrote:
Thank you, a little delayed response is totally understandable. I made a fresh setup on bare metal but I'm still getting the same error. I use only one physical node for this cluster. A bridge (cni0) is used to connect the Kubernetes pods.
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 9c:5c:8e:c2:1c:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.11/24 brd 192.168.1.255 scope global dynamic enp3s0
       valid_lft 1813516sec preferred_lft 1813516sec
    inet6 fe80::9e5c:8eff:fec2:1cfc/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:58:0a:7e:89 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:ac:50:59:de:9d brd ff:ff:ff:ff:ff:ff
    inet 10.30.0.1/16 brd 10.30.255.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::98ac:50ff:fe59:de9d/64 scope link
       valid_lft forever preferred_lft forever
5: ***@***.***: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
    link/ether 02:87:c8:5c:43:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::87:c8ff:fe5c:43c4/64 scope link
       valid_lft forever preferred_lft forever
6: ***@***.***: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
    link/ether 7e:80:95:17:cf:5a brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::7c80:95ff:fe17:cf5a/64 scope link
       valid_lft forever preferred_lft forever
ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 9c:5c:8e:c2:1c:fc brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:58:0a:7e:89 brd ff:ff:ff:ff:ff:ff
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:ac:50:59:de:9d brd ff:ff:ff:ff:ff:ff
5: ***@***.***: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 02:87:c8:5c:43:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: ***@***.***: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 7e:80:95:17:cf:5a brd ff:ff:ff:ff:ff:ff link-netnsid 1
Containers in the nabla pod exit immediately, at the very moment they are created. I can show the logs and inspect the container, but I can't run any command in it.
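(Since the container exits before any exec can attach, one option, not mentioned in the thread itself, is to inspect the pod's network namespace from the host via the sandbox "pause" process: find its PID with `crictl inspectp <pod-id>` and then run `nsenter --net=/proc/<pid>/ns/net ip addr`.)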
Sorry, I think I was a bit vague in my description. I meant a non-nabla, regular container; I want to see what its network interfaces are configured with. The reasoning: runnc works by nsentering into the namespace it is given, in this case `"path": "/proc/5397/ns/net"`. The error message looks like it can't find the interface; my guess is that it's named something different, or that something changed in the sequence of network operations. Right now the code does something really simple: it just looks for `eth0` and does the network plumbing based on the settings of that interface... I suspect the handling there could be done better, which is what's causing the error.
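As an illustration only, here is a minimal Go sketch of that kind of lookup, assuming the github.com/vishvananda/netlink and github.com/vishvananda/netns packages; this is not runnc's actual code, just the shape of the logic described above:

package main

import (
    "fmt"
    "log"
    "runtime"

    "github.com/vishvananda/netlink"
    "github.com/vishvananda/netns"
)

func main() {
    // The namespace path handed to the runtime, as in the runtimeSpec below.
    const nsPath = "/proc/5397/ns/net"

    // Namespace changes apply to the current OS thread, so pin it first.
    runtime.LockOSThread()
    defer runtime.UnlockOSThread()

    ns, err := netns.GetFromPath(nsPath)
    if err != nil {
        log.Fatalf("open netns: %v", err)
    }
    defer ns.Close()
    if err := netns.Set(ns); err != nil {
        log.Fatalf("enter netns: %v", err)
    }

    // The hard-coded name: if the CNI plugin called the interface
    // anything other than "eth0", this is where things go wrong.
    link, err := netlink.LinkByName("eth0")
    if err != nil {
        log.Fatalf("eth0 not found in namespace: %v", err)
    }

    // Without an IPv4 address here there is nothing to base the plumbing on,
    // which is the situation the "master should have an IP" error reports.
    addrs, err := netlink.AddrList(link, netlink.FAMILY_V4)
    if err != nil || len(addrs) == 0 {
        log.Fatalf("eth0 has no IPv4 address: %v", err)
    }
    fmt.Printf("eth0: %v\n", addrs[0].IPNet)
}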
…On Fri, Sep 20, 2019 at 5:07 PM Im wrote:
Containers in the nabla pod exit immediately, at the very moment they are created.
crictl ps -a
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
a8c36b0452bfd de4bf8bf3a939 3 minutes ago Exited nabla 6 1e5c8431c141d
9ab8637065a77 de4bf8bf3a939 6 minutes ago Exited nabla 5 1e5c8431c141d
f8ef7a5cf5fee de4bf8bf3a939 7 minutes ago Exited nabla 4 1e5c8431c141d
9a2b7ee3316a5 de4bf8bf3a939 8 minutes ago Exited nabla 3 1e5c8431c141d
39a7b5cefd5cd de4bf8bf3a939 9 minutes ago Exited nabla 2 1e5c8431c141d
913c412432b08 de4bf8bf3a939 9 minutes ago Exited nabla 1 1e5c8431c141d
460d2580455f9 de4bf8bf3a939 9 minutes ago Exited nabla 0 1e5c8431c141d
cbc84505053b4 bf261d1579144 28 minutes ago Running coredns 0 8e2dd40c0cb94
587ad129f7b95 bf261d1579144 28 minutes ago Running coredns 0 fce4c11d35059
75ad6b37ae39a c21b0c7400f98 28 minutes ago Running kube-proxy 0 d25eff4272e8e
1af0f6b5f033e 06a629a7e51cd 28 minutes ago Running kube-controller-manager 1 d72416d88eae8
ae7a6048fe2e5 b2756210eeabf 28 minutes ago Running etcd 1 b166c515178b5
12a64289a565b b305571ca60a5 28 minutes ago Running kube-apiserver 1 2d322ee55ad94
3873e055548ef 301ddc62b80b1 28 minutes ago Running kube-scheduler 1 00a6cce669b37
I can show the logs and inspect the container, but I can't run any command in it.
crictl exec -i -t a8c36b0452bfd ip addr
FATA[0000] execing command in container failed: rpc error: code = Unknown desc = container is in CONTAINER_EXITED state
crictl logs a8c36b0452bfd
Error running llc Network handler: Unable to configure network runtime: master should have an IP
ERR: Error running llc Network handler: Unable to configure network runtime: master should have an IP
ERR: Error running llc Network handler: Unable to configure network runtime: master should have an IP
crictl inspect a8c36b0452bfd
{
"status": {
"id": "a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e",
"metadata": {
"attempt": 6,
"name": "nabla"
},
"state": "CONTAINER_EXITED",
"createdAt": "2019-09-20T20:33:50.447605166Z",
"startedAt": "1970-01-01T00:00:00Z",
"finishedAt": "2019-09-20T20:33:50.568652413Z",
"exitCode": 128,
"image": {
"image": "docker.io/nablact/node-express-nabla:v0.2"
},
"imageRef": ***@***.***:35c204f1937eac0851ee6f55c784a7074fb82a59535daaf9bdb2f00fb8b1fd69",
"reason": "StartError",
"message": "failed to start containerd task \"a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e\": cannot start a stopped process: unknown",
"labels": {
"io.kubernetes.container.name": "nabla",
"io.kubernetes.pod.name": "nabla-76f9c565bc-kq8kv",
"io.kubernetes.pod.namespace": "default",
"io.kubernetes.pod.uid": "dbfc0be8-45cf-430f-b228-15eaf44aaf3e"
},
"annotations": {
"io.kubernetes.container.hash": "2c84bea9",
"io.kubernetes.container.ports": "[{\"containerPort\":8080,\"protocol\":\"TCP\"}]",
"io.kubernetes.container.restartCount": "6",
"io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
"io.kubernetes.container.terminationMessagePolicy": "File",
"io.kubernetes.pod.terminationGracePeriod": "30"
},
"mounts": [
{
"containerPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"hostPath": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/volumes/kubernetes.io~secret/default-token-jd8c8",
"propagation": "PROPAGATION_PRIVATE",
"readonly": true,
"selinuxRelabel": false
},
{
"containerPath": "/etc/hosts",
"hostPath": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/etc-hosts",
"propagation": "PROPAGATION_PRIVATE",
"readonly": false,
"selinuxRelabel": false
},
{
"containerPath": "/dev/termination-log",
"hostPath": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/containers/nabla/409a54a9",
"propagation": "PROPAGATION_PRIVATE",
"readonly": false,
"selinuxRelabel": false
}
],
"logPath": "/var/log/pods/default_nabla-76f9c565bc-kq8kv_dbfc0be8-45cf-430f-b228-15eaf44aaf3e/nabla/6.log"
},
"info": {
"sandboxID": "1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4",
"pid": 0,
"removing": false,
"snapshotKey": "a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e",
"snapshotter": "overlayfs",
"runtimeType": "io.containerd.runtime.v1.linux",
"runtimeOptions": {
"runtime": "/usr/local/bin/runnc"
},
"config": {
"metadata": {
"name": "nabla",
"attempt": 6
},
"image": {
"image": "sha256:de4bf8bf3a9399dc6978aeb35a9b7ba4b26d59f36e10852c20817d5d58577261"
},
"envs": [
{
"key": "KUBERNETES_PORT_443_TCP_PROTO",
"value": "tcp"
},
{
"key": "KUBERNETES_PORT_443_TCP_PORT",
"value": "443"
},
{
"key": "KUBERNETES_PORT_443_TCP_ADDR",
"value": "10.96.0.1"
},
{
"key": "KUBERNETES_SERVICE_HOST",
"value": "10.96.0.1"
},
{
"key": "KUBERNETES_SERVICE_PORT",
"value": "443"
},
{
"key": "KUBERNETES_SERVICE_PORT_HTTPS",
"value": "443"
},
{
"key": "KUBERNETES_PORT",
"value": "tcp://10.96.0.1:443"
},
{
"key": "KUBERNETES_PORT_443_TCP",
"value": "tcp://10.96.0.1:443"
}
],
"mounts": [
{
"container_path": "/var/run/secrets/kubernetes.io/serviceaccount",
"host_path": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/volumes/kubernetes.io~secret/default-token-jd8c8",
"readonly": true
},
{
"container_path": "/etc/hosts",
"host_path": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/etc-hosts"
},
{
"container_path": "/dev/termination-log",
"host_path": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/containers/nabla/409a54a9"
}
],
"labels": {
"io.kubernetes.container.name": "nabla",
"io.kubernetes.pod.name": "nabla-76f9c565bc-kq8kv",
"io.kubernetes.pod.namespace": "default",
"io.kubernetes.pod.uid": "dbfc0be8-45cf-430f-b228-15eaf44aaf3e"
},
"annotations": {
"io.kubernetes.container.hash": "2c84bea9",
"io.kubernetes.container.ports": "[{\"containerPort\":8080,\"protocol\":\"TCP\"}]",
"io.kubernetes.container.restartCount": "6",
"io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
"io.kubernetes.container.terminationMessagePolicy": "File",
"io.kubernetes.pod.terminationGracePeriod": "30"
},
"log_path": "nabla/6.log",
"linux": {
"resources": {
"cpu_period": 100000,
"cpu_shares": 2,
"oom_score_adj": 1000
},
"security_context": {
"namespace_options": {
"pid": 1
},
"run_as_user": {},
"masked_paths": [
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"readonly_paths": [
"/proc/asound",
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
}
}
},
"runtimeSpec": {
"ociVersion": "1.0.1-dev",
"process": {
"user": {
"uid": 0,
"gid": 0
},
"args": [
"/node.nabla",
"/home/node/app/app.js"
],
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"HOSTNAME=nabla-76f9c565bc-kq8kv",
"KUBERNETES_PORT_443_TCP_PROTO=tcp",
"KUBERNETES_PORT_443_TCP_PORT=443",
"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
"KUBERNETES_SERVICE_HOST=10.96.0.1",
"KUBERNETES_SERVICE_PORT=443",
"KUBERNETES_SERVICE_PORT_HTTPS=443",
"KUBERNETES_PORT=tcp://10.96.0.1:443",
"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443"
],
"cwd": "/",
"capabilities": {
"bounding": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FSETID",
"CAP_FOWNER",
"CAP_MKNOD",
"CAP_NET_RAW",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETFCAP",
"CAP_SETPCAP",
"CAP_NET_BIND_SERVICE",
"CAP_SYS_CHROOT",
"CAP_KILL",
"CAP_AUDIT_WRITE"
],
"effective": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FSETID",
"CAP_FOWNER",
"CAP_MKNOD",
"CAP_NET_RAW",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETFCAP",
"CAP_SETPCAP",
"CAP_NET_BIND_SERVICE",
"CAP_SYS_CHROOT",
"CAP_KILL",
"CAP_AUDIT_WRITE"
],
"inheritable": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FSETID",
"CAP_FOWNER",
"CAP_MKNOD",
"CAP_NET_RAW",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETFCAP",
"CAP_SETPCAP",
"CAP_NET_BIND_SERVICE",
"CAP_SYS_CHROOT",
"CAP_KILL",
"CAP_AUDIT_WRITE"
],
"permitted": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FSETID",
"CAP_FOWNER",
"CAP_MKNOD",
"CAP_NET_RAW",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETFCAP",
"CAP_SETPCAP",
"CAP_NET_BIND_SERVICE",
"CAP_SYS_CHROOT",
"CAP_KILL",
"CAP_AUDIT_WRITE"
]
},
"apparmorProfile": "cri-containerd.apparmor.d",
"oomScoreAdj": 1000
},
"root": {
"path": "rootfs"
},
"mounts": [
{
"destination": "/proc",
"type": "proc",
"source": "proc",
"options": [
"nosuid",
"noexec",
"nodev"
]
},
{
"destination": "/dev",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"strictatime",
"mode=755",
"size=65536k"
]
},
{
"destination": "/dev/pts",
"type": "devpts",
"source": "devpts",
"options": [
"nosuid",
"noexec",
"newinstance",
"ptmxmode=0666",
"mode=0620",
"gid=5"
]
},
{
"destination": "/dev/mqueue",
"type": "mqueue",
"source": "mqueue",
"options": [
"nosuid",
"noexec",
"nodev"
]
},
{
"destination": "/sys",
"type": "sysfs",
"source": "sysfs",
"options": [
"nosuid",
"noexec",
"nodev",
"ro"
]
},
{
"destination": "/sys/fs/cgroup",
"type": "cgroup",
"source": "cgroup",
"options": [
"nosuid",
"noexec",
"nodev",
"relatime",
"ro"
]
},
{
"destination": "/etc/hosts",
"type": "bind",
"source": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/etc-hosts",
"options": [
"rbind",
"rprivate",
"rw"
]
},
{
"destination": "/dev/termination-log",
"type": "bind",
"source": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/containers/nabla/409a54a9",
"options": [
"rbind",
"rprivate",
"rw"
]
},
{
"destination": "/etc/hostname",
"type": "bind",
"source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4/hostname",
"options": [
"rbind",
"rprivate",
"rw"
]
},
{
"destination": "/etc/resolv.conf",
"type": "bind",
"source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4/resolv.conf",
"options": [
"rbind",
"rprivate",
"rw"
]
},
{
"destination": "/dev/shm",
"type": "bind",
"source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4/shm",
"options": [
"rbind",
"rprivate",
"rw"
]
},
{
"destination": "/var/run/secrets/kubernetes.io/serviceaccount",
"type": "bind",
"source": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/volumes/kubernetes.io~secret/default-token-jd8c8",
"options": [
"rbind",
"rprivate",
"ro"
]
}
],
"annotations": {
"io.kubernetes.cri.container-type": "container",
"io.kubernetes.cri.sandbox-id": "1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4"
},
"linux": {
"resources": {
"devices": [
{
"allow": false,
"access": "rwm"
}
],
"memory": {
"limit": 0
},
"cpu": {
"shares": 2,
"quota": 0,
"period": 100000
}
},
"cgroupsPath": "/kubepods/besteffort/poddbfc0be8-45cf-430f-b228-15eaf44aaf3e/a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e",
"namespaces": [
{
"type": "pid"
},
{
"type": "ipc",
"path": "/proc/5397/ns/ipc"
},
{
"type": "uts",
"path": "/proc/5397/ns/uts"
},
{
"type": "mount"
},
{
"type": "network",
"path": "/proc/5397/ns/net"
}
],
"maskedPaths": [
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"readonlyPaths": [
"/proc/asound",
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
}
}
}
}
Thank you for the informative answer. For a non-nabla container I see:
And for the nabla container I see:
Is there some solution available? I ran into the exact same problem.
I'm trying to run nabla containers on Kubernetes with cri-containerd and flannel CNI (I also tried Calico), but my pod keeps crashing with "CrashLoopBackOff". I'm running a single-machine Kubernetes cluster that can run other containers without a problem.
Versions:
The containerd config file ->
All pods are up and running
When I run the following deployment, which I got from the nabla website, I only get errors and the container keeps restarting.
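(The deployment file itself wasn't captured in this thread. For orientation only, a deployment of this kind typically looks roughly like the sketch below; the RuntimeClass name and handler value are assumptions rather than the user's actual configuration, while the image and port match the inspect output above:)

apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: nabla                  # assumed name
handler: nabla                 # must match the runtime handler configured in containerd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nabla
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nabla
  template:
    metadata:
      labels:
        app: nabla
    spec:
      runtimeClassName: nabla  # routes the pod to the nabla runtime (runnc)
      containers:
      - name: nabla
        image: docker.io/nablact/node-express-nabla:v0.2
        ports:
        - containerPort: 8080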
Extra info for this pod
and the logs ->