This repository has been archived by the owner on May 2, 2023. It is now read-only.

I'm trying to run nabla containers on Kubernetes with cri-containerd and flannel CNI (also tried calico), but my pod keeps crashing with "CrashLoopBackOff" #83

Open
I-m2310 opened this issue Sep 5, 2019 · 9 comments · May be fixed by #86

Comments

@I-m2310

I-m2310 commented Sep 5, 2019

I'm trying to run nabla containers on Kubernetes with cri-containerd and flannel CNI (also tried calico), but my pod keeps crashing with "CrashLoopBackOff". I'm running a single-machine Kubernetes cluster that runs other containers without a problem.

Versions:

  • runnc : 1.0.1
  • kubernetes : 1.15.3-00
  • containerd : 1.2.6
  • kernel : 4.4.0-109-generic
  • OS : Ubuntu 16.04.3 LTS (I also tried on Ubuntu 18.04.3 LTS, kernel 4.15 but I got the same error)
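
(For reference, outside of Kubernetes the runtime can be exercised directly; a sketch, assuming runnc is registered as a Docker runtime named "runnc" in /etc/docker/daemon.json:)

docker run --rm --runtime=runnc nablact/node-express-nabla:v0.3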

The containerd config file:

cat /etc/containerd/config.toml
subreaper = true
oom_score = -999
[debug]
        level = "debug"
[metrics]
        address = "127.0.0.1:1338"
[plugins.linux]
        runtime = "runc"
        shim_debug = true
[plugins]
  [plugins.cri.containerd]
  [plugins.cri.containerd.untrusted_workload_runtime]
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = "/usr/local/bin/runnc"
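
(A rough sanity check that containerd picked this up after a restart; the exact key names in the crictl info output may differ between versions:)

sudo systemctl restart containerd
crictl info | grep -i -A 3 untrusted    # runtimeEngine should point at /usr/local/bin/runnc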

All pods are up and running:

kubectl get pods --all-namespaces -o wide -w
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-5c98db65d4-7d7hw             1/1     Running   0          5m9s    10.244.0.2       node-1       <none>           <none>
kube-system   coredns-5c98db65d4-lp9q5             1/1     Running   0          5m9s    10.244.0.3       node-1       <none>           <none>
kube-system   etcd-node-1                          1/1     Running   0          4m16s   xx.212.xx.1      node-1       <none>           <none>
kube-system   kube-apiserver-node-1                1/1     Running   0          4m36s   xx.212.xx.1      node-1       <none>           <none>
kube-system   kube-controller-manager-node-1       1/1     Running   0          4m41s   xx.212.xx.1      node-1       <none>           <none>
kube-system   kube-flannel-ds-amd64-lvx7f          1/1     Running   0          86s     xx.212.xx.1      node-1       <none>           <none>
kube-system   kube-proxy-kdzsd                     1/1     Running   0          5m9s    xx.212.xx.1      node-1       <none>           <none>
kube-system   kube-scheduler-node-1                1/1     Running   0          4m15s   xx.212.xx.1      node-1       <none>           <none>
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
node-1   Ready    master   22m   v1.15.3

When I run the following deployment, which I got from the nabla website, I only get errors and the container keeps restarting.

cat nabla.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: nabla
  name: nabla
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nabla
      name: nabla
      annotations:
        io.kubernetes.cri.untrusted-workload: "true"
    spec:
      containers:
        - name: nabla
          image: nablact/node-express-nabla:v0.3
          imagePullPolicy: Always
          ports:
          - containerPort: 8080
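
I create the deployment the usual way (exact command from memory):

kubectl apply -f nabla.yaml
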
kubectl get pods --all-namespaces -o wide -w
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-5c98db65d4-7d7hw             1/1     Running             0          10m     10.244.0.2       node-1   <none>           <none>
kube-system   coredns-5c98db65d4-lp9q5             1/1     Running             0          10m     10.244.0.3       node-1   <none>           <none>
kube-system   etcd-node-1                          1/1     Running             0          9m53s   xx.212.xx.1      node-1   <none>           <none>
kube-system   kube-apiserver-node-1                1/1     Running             0          10m     xx.212.xx.1      node-1   <none>           <none>
kube-system   kube-controller-manager-node-1       1/1     Running             0          10m     xx.212.xx.1      node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-lvx7f          1/1     Running             0          7m3s    xx.212.xx.1      node-1   <none>           <none>
kube-system   kube-proxy-kdzsd                     1/1     Running             0          10m     xx.212.xx.1      node-1   <none>           <none>
kube-system   kube-scheduler-node-1                1/1     Running             0          9m52s   xx.212.xx.1      node-1   <none>           <none>
default       nabla-777c857776-4dv2w               0/1     RunContainerError   1          5s      10.244.0.5       node-1   <none>           <none>
default       nabla-777c857776-4dv2w               0/1     CrashLoopBackOff    1          6s      10.244.0.5       node-1   <none>           <none>

Extra info for this pod:

kubectl describe pod nabla-777c857776-4dv2w
Name:           nabla-777c857776-4dv2w
Namespace:      default
Priority:       0
Node:           node-1/xx.212.xx.1
Start Time:     Thu, 05 Sep 2019 17:17:39 +0300
Labels:         app=nabla
                pod-template-hash=777c857776
Annotations:    io.kubernetes.cri.untrusted-workload: true
Status:         Running
IP:             10.244.0.5
Controlled By:  ReplicaSet/nabla-777c857776
Containers:
  nabla:
    Container ID:   containerd://13213cb66b31c7786f1327a13c1a9641124e664be2e6282071ebb0494af1f50f
    Image:          nablact/node-express-nabla:v0.3
    Image ID:       docker.io/nablact/node-express-nabla@sha256:d8b9c8e16cbc8d77056ddf0981abf0f75018c898f078dded854d63f3b59356eb
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to start containerd task "13213cb66b31c7786f1327a13c1a9641124e664be2e6282071ebb0494af1f50f": cannot start a stopped process: unknown
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 02:00:00 +0200
      Finished:     Thu, 05 Sep 2019 17:17:59 +0300
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h594g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-h594g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h594g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                 Message
  ----     ------     ----               ----                 -------
  Normal   Scheduled  37s                default-scheduler    Successfully assigned default/nabla-777c857776-4dv2w to node-1
  Normal   Pulling    19s (x3 over 36s)  kubelet, node-1  Pulling image "nablact/node-express-nabla:v0.3"
  Normal   Pulled     18s (x3 over 34s)  kubelet, node-1  Successfully pulled image "nablact/node-express-nabla:v0.3"
  Normal   Created    18s (x3 over 34s)  kubelet, node-1  Created container nabla
  Warning  Failed     17s (x3 over 34s)  kubelet, node-1  Error: failed to start containerd task "nabla": cannot start a stopped process: unknown
  Warning  BackOff    6s (x4 over 32s)   kubelet, node-1  Back-off restarting failed container

And the logs:

kubectl logs nabla-777c857776-4dv2w
Error running llc Network handler: Unable to configure network runtime: master should have an IP
ERR: Error running llc Network handler: Unable to configure network runtime: master should have an IP
ERR: Error running llc Network handler: Unable to configure network runtime: master should have an IP
@lumjjb
Member

lumjjb commented Sep 6, 2019

Hi, are you using a /32 based network? If so, you may run into something related to #40.
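
(One quick way to check the prefix length your CNI hands out, assuming the default flannel paths:)

cat /run/flannel/subnet.env              # FLANNEL_SUBNET is the per-node pod subnet, e.g. 10.244.0.1/24
cat /etc/cni/net.d/10-flannel.conflist   # the CNI config flannel writes for the bridge plugin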

@I-m2310
Author

I-m2310 commented Sep 6, 2019

Thank you for your response. It's a /23 based network.

@lumjjb
Member

lumjjb commented Sep 17, 2019

Hmm... is it possible to share with us what ip addr and ip link look like on a regular node? Our team is currently tied up with some other projects right now, so responses will be a little slow.

@I-m2310
Author

I-m2310 commented Sep 20, 2019

Thank you; a slightly delayed response is totally understandable.

I made a fresh setup on bare metal, but I'm still getting the same error. I only use one physical node for this cluster. A bridge (cni0) connects the Kubernetes pods.

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 9c:5c:8e:c2:1c:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.11/24 brd 192.168.1.255 scope global dynamic enp3s0
       valid_lft 1813516sec preferred_lft 1813516sec
    inet6 fe80::9e5c:8eff:fec2:1cfc/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:58:0a:7e:89 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:ac:50:59:de:9d brd ff:ff:ff:ff:ff:ff
    inet 10.30.0.1/16 brd 10.30.255.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::98ac:50ff:fe59:de9d/64 scope link 
       valid_lft forever preferred_lft forever
5: veth0916170d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default 
    link/ether 02:87:c8:5c:43:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::87:c8ff:fe5c:43c4/64 scope link 
       valid_lft forever preferred_lft forever
6: veth57e6fa2b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default 
    link/ether 7e:80:95:17:cf:5a brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::7c80:95ff:fe17:cf5a/64 scope link 
       valid_lft forever preferred_lft forever
ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 9c:5c:8e:c2:1c:fc brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:58:0a:7e:89 brd ff:ff:ff:ff:ff:ff
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:ac:50:59:de:9d brd ff:ff:ff:ff:ff:ff
5: veth0916170d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP mode DEFAULT group default 
    link/ether 02:87:c8:5c:43:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: veth57e6fa2b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP mode DEFAULT group default 
    link/ether 7e:80:95:17:cf:5a brd ff:ff:ff:ff:ff:ff link-netnsid 1

@lumjjb
Member

lumjjb commented Sep 20, 2019 via email

@I-m2310
Author

I-m2310 commented Sep 20, 2019

Containers in the nabla pod exit immediately, at the very moment they are created.

crictl ps -a
CONTAINER ID        IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
a8c36b0452bfd       de4bf8bf3a939       3 minutes ago       Exited              nabla                     6                   1e5c8431c141d
9ab8637065a77       de4bf8bf3a939       6 minutes ago       Exited              nabla                     5                   1e5c8431c141d
f8ef7a5cf5fee       de4bf8bf3a939       7 minutes ago       Exited              nabla                     4                   1e5c8431c141d
9a2b7ee3316a5       de4bf8bf3a939       8 minutes ago       Exited              nabla                     3                   1e5c8431c141d
39a7b5cefd5cd       de4bf8bf3a939       9 minutes ago       Exited              nabla                     2                   1e5c8431c141d
913c412432b08       de4bf8bf3a939       9 minutes ago       Exited              nabla                     1                   1e5c8431c141d
460d2580455f9       de4bf8bf3a939       9 minutes ago       Exited              nabla                     0                   1e5c8431c141d
cbc84505053b4       bf261d1579144       28 minutes ago      Running             coredns                   0                   8e2dd40c0cb94
587ad129f7b95       bf261d1579144       28 minutes ago      Running             coredns                   0                   fce4c11d35059
75ad6b37ae39a       c21b0c7400f98       28 minutes ago      Running             kube-proxy                0                   d25eff4272e8e
1af0f6b5f033e       06a629a7e51cd       28 minutes ago      Running             kube-controller-manager   1                   d72416d88eae8
ae7a6048fe2e5       b2756210eeabf       28 minutes ago      Running             etcd                      1                   b166c515178b5
12a64289a565b       b305571ca60a5       28 minutes ago      Running             kube-apiserver            1                   2d322ee55ad94
3873e055548ef       301ddc62b80b1       28 minutes ago      Running             kube-scheduler            1                   00a6cce669b37


I can show the logs and inspect the container, but I can't run any commands in it.

crictl exec -i -t a8c36b0452bfd ip addr
FATA[0000] execing command in container failed: rpc error: code = Unknown desc = container is in CONTAINER_EXITED state
crictl logs a8c36b0452bfd
Error running llc Network handler: Unable to configure network runtime: master should have an IP
ERR: Error running llc Network handler: Unable to configure network runtime: master should have an IP
ERR: Error running llc Network handler: Unable to configure network runtime: master should have an IP
crictl inspect a8c36b0452bfd
{
  "status": {
    "id": "a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e",
    "metadata": {
      "attempt": 6,
      "name": "nabla"
    },
    "state": "CONTAINER_EXITED",
    "createdAt": "2019-09-20T20:33:50.447605166Z",
    "startedAt": "1970-01-01T00:00:00Z",
    "finishedAt": "2019-09-20T20:33:50.568652413Z",
    "exitCode": 128,
    "image": {
      "image": "docker.io/nablact/node-express-nabla:v0.2"
    },
    "imageRef": "docker.io/nablact/node-express-nabla@sha256:35c204f1937eac0851ee6f55c784a7074fb82a59535daaf9bdb2f00fb8b1fd69",
    "reason": "StartError",
    "message": "failed to start containerd task \"a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e\": cannot start a stopped process: unknown",
    "labels": {
      "io.kubernetes.container.name": "nabla",
      "io.kubernetes.pod.name": "nabla-76f9c565bc-kq8kv",
      "io.kubernetes.pod.namespace": "default",
      "io.kubernetes.pod.uid": "dbfc0be8-45cf-430f-b228-15eaf44aaf3e"
    },
    "annotations": {
      "io.kubernetes.container.hash": "2c84bea9",
      "io.kubernetes.container.ports": "[{\"containerPort\":8080,\"protocol\":\"TCP\"}]",
      "io.kubernetes.container.restartCount": "6",
      "io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
      "io.kubernetes.container.terminationMessagePolicy": "File",
      "io.kubernetes.pod.terminationGracePeriod": "30"
    },
    "mounts": [
      {
        "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount",
        "hostPath": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/volumes/kubernetes.io~secret/default-token-jd8c8",
        "propagation": "PROPAGATION_PRIVATE",
        "readonly": true,
        "selinuxRelabel": false
      },
      {
        "containerPath": "/etc/hosts",
        "hostPath": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/etc-hosts",
        "propagation": "PROPAGATION_PRIVATE",
        "readonly": false,
        "selinuxRelabel": false
      },
      {
        "containerPath": "/dev/termination-log",
        "hostPath": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/containers/nabla/409a54a9",
        "propagation": "PROPAGATION_PRIVATE",
        "readonly": false,
        "selinuxRelabel": false
      }
    ],
    "logPath": "/var/log/pods/default_nabla-76f9c565bc-kq8kv_dbfc0be8-45cf-430f-b228-15eaf44aaf3e/nabla/6.log"
  },
  "info": {
    "sandboxID": "1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4",
    "pid": 0,
    "removing": false,
    "snapshotKey": "a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e",
    "snapshotter": "overlayfs",
    "runtimeType": "io.containerd.runtime.v1.linux",
    "runtimeOptions": {
      "runtime": "/usr/local/bin/runnc"
    },
    "config": {
      "metadata": {
        "name": "nabla",
        "attempt": 6
      },
      "image": {
        "image": "sha256:de4bf8bf3a9399dc6978aeb35a9b7ba4b26d59f36e10852c20817d5d58577261"
      },
      "envs": [
        {
          "key": "KUBERNETES_PORT_443_TCP_PROTO",
          "value": "tcp"
        },
        {
          "key": "KUBERNETES_PORT_443_TCP_PORT",
          "value": "443"
        },
        {
          "key": "KUBERNETES_PORT_443_TCP_ADDR",
          "value": "10.96.0.1"
        },
        {
          "key": "KUBERNETES_SERVICE_HOST",
          "value": "10.96.0.1"
        },
        {
          "key": "KUBERNETES_SERVICE_PORT",
          "value": "443"
        },
        {
          "key": "KUBERNETES_SERVICE_PORT_HTTPS",
          "value": "443"
        },
        {
          "key": "KUBERNETES_PORT",
          "value": "tcp://10.96.0.1:443"
        },
        {
          "key": "KUBERNETES_PORT_443_TCP",
          "value": "tcp://10.96.0.1:443"
        }
      ],
      "mounts": [
        {
          "container_path": "/var/run/secrets/kubernetes.io/serviceaccount",
          "host_path": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/volumes/kubernetes.io~secret/default-token-jd8c8",
          "readonly": true
        },
        {
          "container_path": "/etc/hosts",
          "host_path": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/etc-hosts"
        },
        {
          "container_path": "/dev/termination-log",
          "host_path": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/containers/nabla/409a54a9"
        }
      ],
      "labels": {
        "io.kubernetes.container.name": "nabla",
        "io.kubernetes.pod.name": "nabla-76f9c565bc-kq8kv",
        "io.kubernetes.pod.namespace": "default",
        "io.kubernetes.pod.uid": "dbfc0be8-45cf-430f-b228-15eaf44aaf3e"
      },
      "annotations": {
        "io.kubernetes.container.hash": "2c84bea9",
        "io.kubernetes.container.ports": "[{\"containerPort\":8080,\"protocol\":\"TCP\"}]",
        "io.kubernetes.container.restartCount": "6",
        "io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
        "io.kubernetes.container.terminationMessagePolicy": "File",
        "io.kubernetes.pod.terminationGracePeriod": "30"
      },
      "log_path": "nabla/6.log",
      "linux": {
        "resources": {
          "cpu_period": 100000,
          "cpu_shares": 2,
          "oom_score_adj": 1000
        },
        "security_context": {
          "namespace_options": {
            "pid": 1
          },
          "run_as_user": {},
          "masked_paths": [
            "/proc/acpi",
            "/proc/kcore",
            "/proc/keys",
            "/proc/latency_stats",
            "/proc/timer_list",
            "/proc/timer_stats",
            "/proc/sched_debug",
            "/proc/scsi",
            "/sys/firmware"
          ],
          "readonly_paths": [
            "/proc/asound",
            "/proc/bus",
            "/proc/fs",
            "/proc/irq",
            "/proc/sys",
            "/proc/sysrq-trigger"
          ]
        }
      }
    },
    "runtimeSpec": {
      "ociVersion": "1.0.1-dev",
      "process": {
        "user": {
          "uid": 0,
          "gid": 0
        },
        "args": [
          "/node.nabla",
          "/home/node/app/app.js"
        ],
        "env": [
          "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
          "HOSTNAME=nabla-76f9c565bc-kq8kv",
          "KUBERNETES_PORT_443_TCP_PROTO=tcp",
          "KUBERNETES_PORT_443_TCP_PORT=443",
          "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
          "KUBERNETES_SERVICE_HOST=10.96.0.1",
          "KUBERNETES_SERVICE_PORT=443",
          "KUBERNETES_SERVICE_PORT_HTTPS=443",
          "KUBERNETES_PORT=tcp://10.96.0.1:443",
          "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443"
        ],
        "cwd": "/",
        "capabilities": {
          "bounding": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FSETID",
            "CAP_FOWNER",
            "CAP_MKNOD",
            "CAP_NET_RAW",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETFCAP",
            "CAP_SETPCAP",
            "CAP_NET_BIND_SERVICE",
            "CAP_SYS_CHROOT",
            "CAP_KILL",
            "CAP_AUDIT_WRITE"
          ],
          "effective": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FSETID",
            "CAP_FOWNER",
            "CAP_MKNOD",
            "CAP_NET_RAW",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETFCAP",
            "CAP_SETPCAP",
            "CAP_NET_BIND_SERVICE",
            "CAP_SYS_CHROOT",
            "CAP_KILL",
            "CAP_AUDIT_WRITE"
          ],
          "inheritable": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FSETID",
            "CAP_FOWNER",
            "CAP_MKNOD",
            "CAP_NET_RAW",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETFCAP",
            "CAP_SETPCAP",
            "CAP_NET_BIND_SERVICE",
            "CAP_SYS_CHROOT",
            "CAP_KILL",
            "CAP_AUDIT_WRITE"
          ],
          "permitted": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FSETID",
            "CAP_FOWNER",
            "CAP_MKNOD",
            "CAP_NET_RAW",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETFCAP",
            "CAP_SETPCAP",
            "CAP_NET_BIND_SERVICE",
            "CAP_SYS_CHROOT",
            "CAP_KILL",
            "CAP_AUDIT_WRITE"
          ]
        },
        "apparmorProfile": "cri-containerd.apparmor.d",
        "oomScoreAdj": 1000
      },
      "root": {
        "path": "rootfs"
      },
      "mounts": [
        {
          "destination": "/proc",
          "type": "proc",
          "source": "proc",
          "options": [
            "nosuid",
            "noexec",
            "nodev"
          ]
        },
        {
          "destination": "/dev",
          "type": "tmpfs",
          "source": "tmpfs",
          "options": [
            "nosuid",
            "strictatime",
            "mode=755",
            "size=65536k"
          ]
        },
        {
          "destination": "/dev/pts",
          "type": "devpts",
          "source": "devpts",
          "options": [
            "nosuid",
            "noexec",
            "newinstance",
            "ptmxmode=0666",
            "mode=0620",
            "gid=5"
          ]
        },
        {
          "destination": "/dev/mqueue",
          "type": "mqueue",
          "source": "mqueue",
          "options": [
            "nosuid",
            "noexec",
            "nodev"
          ]
        },
        {
          "destination": "/sys",
          "type": "sysfs",
          "source": "sysfs",
          "options": [
            "nosuid",
            "noexec",
            "nodev",
            "ro"
          ]
        },
        {
          "destination": "/sys/fs/cgroup",
          "type": "cgroup",
          "source": "cgroup",
          "options": [
            "nosuid",
            "noexec",
            "nodev",
            "relatime",
            "ro"
          ]
        },
        {
          "destination": "/etc/hosts",
          "type": "bind",
          "source": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/etc-hosts",
          "options": [
            "rbind",
            "rprivate",
            "rw"
          ]
        },
        {
          "destination": "/dev/termination-log",
          "type": "bind",
          "source": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/containers/nabla/409a54a9",
          "options": [
            "rbind",
            "rprivate",
            "rw"
          ]
        },
        {
          "destination": "/etc/hostname",
          "type": "bind",
          "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4/hostname",
          "options": [
            "rbind",
            "rprivate",
            "rw"
          ]
        },
        {
          "destination": "/etc/resolv.conf",
          "type": "bind",
          "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4/resolv.conf",
          "options": [
            "rbind",
            "rprivate",
            "rw"
          ]
        },
        {
          "destination": "/dev/shm",
          "type": "bind",
          "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4/shm",
          "options": [
            "rbind",
            "rprivate",
            "rw"
          ]
        },
        {
          "destination": "/var/run/secrets/kubernetes.io/serviceaccount",
          "type": "bind",
          "source": "/var/lib/kubelet/pods/dbfc0be8-45cf-430f-b228-15eaf44aaf3e/volumes/kubernetes.io~secret/default-token-jd8c8",
          "options": [
            "rbind",
            "rprivate",
            "ro"
          ]
        }
      ],
      "annotations": {
        "io.kubernetes.cri.container-type": "container",
        "io.kubernetes.cri.sandbox-id": "1e5c8431c141d3305db77c42140327a3e403126d1b3797be7654b7faaa982ae4"
      },
      "linux": {
        "resources": {
          "devices": [
            {
              "allow": false,
              "access": "rwm"
            }
          ],
          "memory": {
            "limit": 0
          },
          "cpu": {
            "shares": 2,
            "quota": 0,
            "period": 100000
          }
        },
        "cgroupsPath": "/kubepods/besteffort/poddbfc0be8-45cf-430f-b228-15eaf44aaf3e/a8c36b0452bfdb9d9cde3f796653ea27cae606202d077bd9fe9ead0d3e8d0d3e",
        "namespaces": [
          {
            "type": "pid"
          },
          {
            "type": "ipc",
            "path": "/proc/5397/ns/ipc"
          },
          {
            "type": "uts",
            "path": "/proc/5397/ns/uts"
          },
          {
            "type": "mount"
          },
          {
            "type": "network",
            "path": "/proc/5397/ns/net"
          }
        ],
        "maskedPaths": [
          "/proc/acpi",
          "/proc/kcore",
          "/proc/keys",
          "/proc/latency_stats",
          "/proc/timer_list",
          "/proc/timer_stats",
          "/proc/sched_debug",
          "/proc/scsi",
          "/sys/firmware"
        ],
        "readonlyPaths": [
          "/proc/asound",
          "/proc/bus",
          "/proc/fs",
          "/proc/irq",
          "/proc/sys",
          "/proc/sysrq-trigger"
        ]
      }
    }
  }
}
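
(For reference, even though the container has exited, the pod's network namespace can still be entered through its sandbox process; a sketch, with the pid taken from crictl inspectp and a crude grep:)

POD_ID=1e5c8431c141d                      # sandbox ID from crictl ps -a above
PID=$(crictl inspectp "$POD_ID" | grep -m1 '"pid"' | tr -dc '0-9')
nsenter -t "$PID" -n ip addr show         # interfaces as seen from inside the pod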

@lumjjb
Member

lumjjb commented Sep 21, 2019 via email

@I-m2310
Author

I-m2310 commented Sep 22, 2019

Thank you for the informative answer.

For a non-nabla container I see:

nsenter -t xxxxx -n ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether b6:7c:bb:09:bc:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.30.0.6/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b47c:bbff:fe09:bcc5/64 scope link 
       valid_lft forever preferred_lft forever

And for the nabla container I see:

nsenter -t xxxxx -n ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default 
    link/ether aa:aa:aa:aa:bb:cc brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::8e9:2aff:fe20:fe6e/64 scope link 
       valid_lft forever preferred_lft forever
4: tapac51e6a47d28: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master br0 state DOWN group default qlen 1000
    link/ether ae:8a:a0:58:f0:09 brd ff:ff:ff:ff:ff:ff
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:aa:aa:aa:bb:cc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a8aa:aaff:feaa:bbcc/64 scope link 
       valid_lft forever preferred_lft forever
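
(For a compact view: with -4, ip only lists interfaces that actually carry an IPv4 address, so in the nabla pod's namespace only lo shows up.)

nsenter -t xxxxx -n ip -4 addr show       # only lo appears; eth0, br0 and the tap carry no IPv4 address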

@lestich

lestich commented Jul 13, 2020

Is there some solution available? I ran into the exact same problem.

@lestich lestich linked a pull request Jul 28, 2020 that will close this issue