On minishift, admission controller webhook is not working #111

Closed · sandrich opened this issue Nov 23, 2018 · 8 comments
Labels
Priority: High · Project: Platform Support (GKE/OpenShift/etc.; K8s config variations)

Comments

@sandrich commented Nov 23, 2018

Hi

I'm running minishift:

oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth

Server https://192.168.99.101:8443
kubernetes v1.11.0+d4cacc0

I get the following errors in the kubedirector pod log when I try to apply a CDH cluster:

ime="2018-11-23T14:59:28Z" level=info msg="Go Version: go1.11.1"
time="2018-11-23T14:59:28Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-11-23T14:59:28Z" level=info msg="operator-sdk Version: 0.0.6"
time="2018-11-23T14:59:28Z" level=info msg="Metrics service kubedirector created"
time="2018-11-23T14:59:28Z" level=info msg="Watching kubedirector.bluedata.io/v1alpha1, KubeDirectorCluster, bigdata, 30000000000"
time="2018-11-23T14:59:28Z" level=info msg="Starting admission validation server"
time="2018-11-23T14:59:29Z" level=warning msg="cluster{bigdata/cdh5142cm-instance}: unknown with incoming gen uid 68df492b-0e0c-4e2f-9e4a-9381ac613767"
ERROR: logging before flag.Parse: E1123 14:59:29.013381       8 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/asm_amd64.s:522
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:513
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:82
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/signal_unix.go:390
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:141
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:40
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/cluster.go:141
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/handler.go:43
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:88
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:52
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:36
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/asm_amd64.s:1333
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xf73eda]
goroutine 113 [running]:
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x108
panic(0x10babc0, 0x1d78870)
	/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:513 +0x1b9
github.com/bluek8s/kubedirector/pkg/reconciler.initRoleInfo(0xc00036eb00, 0xc000369c26, 0x9, 0xc00013e3c0, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:141 +0x17a
github.com/bluek8s/kubedirector/pkg/reconciler.syncRoles(0xc00036eb00, 0x0, 0x0, 0x0, 0xc0005ef4a0, 0x40b94f, 0xc00055e680)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:40 +0x40
github.com/bluek8s/kubedirector/pkg/reconciler.syncCluster(0x1348380, 0xc00036eb00, 0x409b00, 0xc00036eb00, 0xc0005ef4a0, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/cluster.go:141 +0x19a
github.com/bluek8s/kubedirector/pkg/reconciler.(*Handler).Handle(0xc0005ef4a0, 0x135db80, 0xc000040040, 0x1348380, 0xc00036eb00, 0x42a500, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/handler.go:43 +0x86
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).sync(0xc0000da000, 0xc0003ea120, 0x1a, 0x105b800, 0xc0002a2060)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:88 +0x12d
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).processNextItem(0xc0000da000, 0xc000235800)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:52 +0xd3
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).runWorker(0xc0000da000)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:36 +0x2b
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).runWorker-fm()
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98 +0x2a
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000583160)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000583160, 0x3b9aca00, 0x0, 0x1, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbe
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000583160, 0x3b9aca00, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).Run
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98 +0x20d

This is the custom config

kind: "KubeDirectorCluster"
metadata:
  name: "cdh5142cm-instance"
spec:
  app: cdh5142cm
  serviceType: LoadBalancer
  roles:
  - id: controller
    storage:
      size: "10Gi"
    resources:
      requests:
        memory: "16Gi"
        cpu: "2"
      limits:
        memory: "16Gi"
        cpu: "2"
  - id: worker
    members: 1
    storage:
      size: "10Gi"
    resources:
      requests:
        memory: "16Gi"
        cpu: "2"
      limits:
        memory: "16Gi"
        cpu: "2"
@swamibluedata

Looks like the "members" field for the controller role is not getting mutated. This could be because the mutating webhook is not getting called. Could you try adding members: 1 for the controller role as well, to see if that is the issue?

kind: "KubeDirectorCluster"
metadata:
  name: "cdh5142cm-instance"
spec:
  app: cdh5142cm
  serviceType: LoadBalancer
  roles:
  - id: controller
    members: 1
    storage:
      size: "10Gi"
    resources:
      requests:
        memory: "16Gi"
        cpu: "2"
      limits:
        memory: "16Gi"
        cpu: "2"
  - id: worker
    members: 1
    storage:
      size: "10Gi"
    resources:
      requests:
        memory: "16Gi"
        cpu: "2"
      limits:
        memory: "16Gi"
        cpu: "2"

@sandrich (Author)

Hi

Similar results. I had to lower the memory requests to 1Gi each, as I am running this inside minishift on my laptop:

time="2018-11-24T20:40:45Z" level=info msg="Go Version: go1.11.1"
time="2018-11-24T20:40:45Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-11-24T20:40:45Z" level=info msg="operator-sdk Version: 0.0.6"
time="2018-11-24T20:40:45Z" level=info msg="Metrics service kubedirector created"
time="2018-11-24T20:40:45Z" level=info msg="Watching kubedirector.bluedata.io/v1alpha1, KubeDirectorCluster, bigdata, 30000000000"
time="2018-11-24T20:40:46Z" level=info msg="Starting admission validation server"
time="2018-11-24T20:40:46Z" level=info msg="cluster{bigdata/cdh5142cm-instance}: new"
time="2018-11-24T20:40:46Z" level=info msg="cluster{bigdata/cdh5142cm-instance}: creating role{controller}"
ERROR: logging before flag.Parse: E1124 20:40:46.175986       8 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/asm_amd64.s:522
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:513
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:82
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/signal_unix.go:390
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:327
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:257
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:52
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:264
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:64
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/cluster.go:141
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/handler.go:43
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:88
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:52
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:36
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/asm_amd64.s:1333
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xf6b4c7]
goroutine 108 [running]:
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x108
panic(0x10babc0, 0x1d78870)
	/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:513 +0x1b9
github.com/bluek8s/kubedirector/pkg/executor.getVolumeClaimTemplate(0xc0003cb1e0, 0xc000372000, 0x1200779, 0x3, 0xc0002ad740, 0x20, 0xc00037fb30)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:327 +0x187
github.com/bluek8s/kubedirector/pkg/executor.getStatefulset(0xc0003cb1e0, 0xc000372000, 0x0, 0x10, 0x105b800, 0xc0002a53b0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:257 +0x860
github.com/bluek8s/kubedirector/pkg/executor.CreateStatefulSet(0xc0003cb1e0, 0xc000372000, 0x4, 0x1209587, 0x11)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:52 +0x41
github.com/bluek8s/kubedirector/pkg/reconciler.handleRoleCreate(0xc0003cb1e0, 0xc0002a5110, 0xc000583bc6, 0x8, 0x1daada0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:264 +0x116
github.com/bluek8s/kubedirector/pkg/reconciler.syncRoles(0xc0003cb1e0, 0x0, 0x0, 0x0, 0xc0004c2fc0, 0x40b94f, 0xc00032a000)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:64 +0x2df
github.com/bluek8s/kubedirector/pkg/reconciler.syncCluster(0x1348380, 0xc0003cb1e0, 0x409b00, 0xc0003cb1e0, 0xc0004c2fc0, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/cluster.go:141 +0x19a
github.com/bluek8s/kubedirector/pkg/reconciler.(*Handler).Handle(0xc0004c2fc0, 0x135db80, 0xc00003e040, 0x1348380, 0xc0003cb1e0, 0xc0000b9d00, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/handler.go:43 +0x86
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).sync(0xc0000fc690, 0xc00039d0a0, 0x1a, 0x105b800, 0xc0004b4ff0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:88 +0x12d
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).processNextItem(0xc0000fc690, 0xc000106300)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:52 +0xd3
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).runWorker(0xc0000fc690)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:36 +0x2b
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).runWorker-fm()
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98 +0x2a
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00019a070)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00019a070, 0x3b9aca00, 0x0, 0x1, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbe
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00019a070, 0x3b9aca00, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).Run
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98 +0x20d

This is the config:

apiVersion: "kubedirector.bluedata.io/v1alpha1"
kind: "KubeDirectorCluster"
metadata:
  name: "cdh5142cm-instance"
spec:
  app: cdh5142cm
  serviceType: LoadBalancer
  roles:
  - id: controller
    members: 1
    storage:
      size: "10Gi"
    resources:
      requests:
        memory: "1Gi"
        cpu: "2"
      limits:
        memory: "1Gi"
        cpu: "2"
  - id: worker
    members: 1
    storage:
      size: "10Gi"
    resources:
      requests:
        memory: "1Gi"
        cpu: "2"
      limits:
        memory: "1Gi"
        cpu: "2"

@joel-bluedata (Member) commented Nov 26, 2018

Different crash point this time, but I think it's a similar issue. The storageClass field is not being set to its default by the mutating webhook.

We'll need to figure out why that webhook is not active. Tomorrow (Monday) we'll pull together some debugging ideas.

If the state of this particular KD installation has been changing, with some elements left in place and others updated (e.g. I see one indication in the log of KD starting up when a CDH cluster already exists), then it's possible the versions of those pieces are out of sync, which a teardown and clean re-deploy could fix.

However I suspect it's more likely that the webhook simply isn't being called for reasons that have to do with the particular k8s configuration. Quite possibly the default minishift configuration doesn't have that stuff active; we haven't tried this on minishift yet.
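
In the meantime, a couple of quick sanity checks you could run (these are generic oc/kubectl queries, nothing KD-specific; I'm assuming here that KD registers its hook as a MutatingWebhookConfiguration object):

# Is the admission registration API being served at all?
oc api-versions | grep admissionregistration

# Has a mutating webhook actually been registered?
oc get mutatingwebhookconfigurations

If that second command comes back empty, the defaulting hook was never registered, and fields like members and storageClass will arrive at the reconciler unset.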

@joel-bluedata (Member)

As an initial comment about that path: what we need to do is make sure that MutatingAdmissionWebhook is enabled.

On OpenShift for example you have to do something like this (substituting MutatingAdmissionWebhook for ValidatingAdmissionWebhook): openshift/openshift-ansible#7983 (comment)

minishift may also need some sort of action. From minishift/minishift#2676 I would guess it's not enabled by default. I haven't looked closely yet but there may be guidance on how to turn it on either in that issue thread or in one of these:
minishift/minishift#2677
minishift/minishift-addons#170
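
If it turns out the plugin is simply disabled, here is a sketch of what enabling it might look like on minishift (untested on our end; the --patch form of minishift openshift config set comes from the minishift docs, and the DefaultAdmissionConfig stanza mirrors the openshift-ansible comment linked above):

minishift openshift config set --patch '{
  "admissionConfig": {
    "pluginConfig": {
      "MutatingAdmissionWebhook": {
        "configuration": {"apiVersion": "v1", "kind": "DefaultAdmissionConfig", "disable": false}
      },
      "ValidatingAdmissionWebhook": {
        "configuration": {"apiVersion": "v1", "kind": "DefaultAdmissionConfig", "disable": false}
      }
    }
  }
}'

minishift should restart the affected OpenShift components after the patch, but verifying that would be part of the experiment.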

@sandrich (Author)

You were right. I have activated the MutatingAdmissionWebhook and ValidatingAdmissionWebhook by enabling a minishift addon:

admissionConfig:
  pluginConfig:
    MutatingAdmissionWebhook:
      configuration:
        apiVersion: v1
        disable: false
        kind: DefaultAdmissionConfig
    ValidatingAdmissionWebhook:
      configuration:
        apiVersion: v1
        disable: false
        kind: DefaultAdmissionConfig
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        executionRules:
        - matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: "true"
          name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          skipOnResolutionFailure: true
        kind: ImagePolicyConfig
      location: ""

I killed the kubedirector pod, but I am still getting similar errors:

time="2018-11-26T06:41:51Z" level=info msg="Go Version: go1.11.1"
time="2018-11-26T06:41:51Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-11-26T06:41:51Z" level=info msg="operator-sdk Version: 0.0.6"
time="2018-11-26T06:41:51Z" level=info msg="Metrics service kubedirector created"
time="2018-11-26T06:41:51Z" level=info msg="Watching kubedirector.bluedata.io/v1alpha1, KubeDirectorCluster, bigdata, 30000000000"
time="2018-11-26T06:41:51Z" level=info msg="Starting admission validation server"
time="2018-11-26T06:43:52Z" level=info msg="cluster{bigdata/cdh5142cm-instance}: new"
time="2018-11-26T06:43:52Z" level=info msg="cluster{bigdata/cdh5142cm-instance}: dropping stale poll"
time="2018-11-26T06:43:52Z" level=info msg="cluster{bigdata/cdh5142cm-instance}: creating role{worker}"
ERROR: logging before flag.Parse: E1126 06:43:52.950106       7 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/asm_amd64.s:522
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:513
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:82
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/signal_unix.go:390
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:327
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:257
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:52
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:264
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:64
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/cluster.go:141
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/handler.go:43
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:88
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:52
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:36
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/Cellar/go/1.11.1/libexec/src/runtime/asm_amd64.s:1333
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xf6b4c7]
goroutine 96 [running]:
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x108
panic(0x10babc0, 0x1d78870)
	/usr/local/Cellar/go/1.11.1/libexec/src/runtime/panic.go:513 +0x1b9
github.com/bluek8s/kubedirector/pkg/executor.getVolumeClaimTemplate(0xc0006ffb80, 0xc0005ee3c8, 0x1200779, 0x3, 0xc00034fec0, 0x20, 0xc00029c870)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:327 +0x187
github.com/bluek8s/kubedirector/pkg/executor.getStatefulset(0xc0006ffb80, 0xc0005ee3c8, 0x0, 0x10, 0x105b800, 0xc00053c2a0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:257 +0x860
github.com/bluek8s/kubedirector/pkg/executor.CreateStatefulSet(0xc0006ffb80, 0xc0005ee3c8, 0x4, 0x1209587, 0x11)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/executor/statefulset.go:52 +0x41
github.com/bluek8s/kubedirector/pkg/reconciler.handleRoleCreate(0xc0006ffb80, 0xc00053c270, 0xc000151bc6, 0x8, 0x1daada0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:264 +0x116
github.com/bluek8s/kubedirector/pkg/reconciler.syncRoles(0xc0006ffb80, 0x0, 0x0, 0x0, 0xc0000a6000, 0x40b94f, 0xc00055d680)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/roles.go:64 +0x2df
github.com/bluek8s/kubedirector/pkg/reconciler.syncCluster(0x1348380, 0xc0006ffb80, 0x409b00, 0xc0006ffb80, 0xc0000a6000, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/cluster.go:141 +0x19a
github.com/bluek8s/kubedirector/pkg/reconciler.(*Handler).Handle(0xc0000a6000, 0x135db80, 0xc000040040, 0x1348380, 0xc0006ffb80, 0xc00025dd00, 0x0, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/pkg/reconciler/handler.go:43 +0x86
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).sync(0xc0000c6700, 0xc00034f780, 0x1a, 0x105b800, 0xc0002c6690)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:88 +0x12d
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).processNextItem(0xc0000c6700, 0xc000368200)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:52 +0xd3
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).runWorker(0xc0000c6700)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer-sync.go:36 +0x2b
github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).runWorker-fm()
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98 +0x2a
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0005ac000)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005ac000, 0x3b9aca00, 0x0, 0x1, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbe
github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0005ac000, 0x3b9aca00, 0x0)
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk.(*informer).Run
	/Users/jbaxter/go/src/github.com/bluek8s/kubedirector/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:98 +0x20d

@joel-bluedata (Member)

The webhook is still not being called.

If you want an easy way to test for sure whether the webhook is working, you can go back to creating a cluster CR where one of the roles does not have an explicit member count. If the webhook is working, then as soon as you create the CR you should be able to read it back (using kubectl or whatever) and see that a default value for the members property has been added to the CR, regardless of whether KD has then crashed.
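
Concretely, something like this (CR name and namespace taken from earlier in this thread; my-cluster.yaml is just a placeholder for your CR file, and the exact resource name can be checked with oc get crd):

# Apply a CR whose controller role omits "members"...
oc create -f my-cluster.yaml

# ...then read it back. If the mutating webhook ran, the controller
# role will now show a defaulted "members" value.
oc get kubedirectorcluster cdh5142cm-instance -n bigdata -o yaml

If members is still absent on read-back, the webhook definitely did not run.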

@sandrich, did you restart the relevant services after changing the configuration? For example on OpenShift you would need to do systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers; I don't know if the minishift services are the same.

I'm going to tag @tap in here briefly to see if he remembers any other relevant admission webhook shenanigans on OpenShift.


BTW, looking ahead just a bit... this was not the only issue we hit on OpenShift, so I'm guessing there will be additional problems in minishift too, unless things have changed since we last tried OpenShift. If you want to go down this route, you should also have a look at all of the other sub-issues of issue #1.

If you don't mind giving some context: what's the situation that is leading you to use minishift for this? Convenience because you're used to it, or some other criterion?

I ask because it might be hard for us to shake out all the issues here remotely -- it may require us to set up minishift ourselves and give it a go. However we don't have minishift support prioritized at the moment, so it's nice to gather info about whether we should re-prioritize.

@joel-bluedata added the Project: Platform Support (GKE/OpenShift/etc.; K8s config variations) label and removed the Project: CR Validation (beyond schemas in the CRDs) label Nov 26, 2018
@joel-bluedata (Member)

I was looking at those OpenShift related issues again just now with @swamibluedata.

issue #2: This is the webhook thing that we're talking about in the thread above. Perhaps this is just that the relevant controller services need to be restarted; perhaps it's something else.

issue #3: I'm not sure what the symptoms of this RBAC issue were (@tap might remember, but he's at a conference right now). This may still need to be adjusted in your system if it's not something that minishift deals with more gracefully than "normal" OpenShift.

issue #4: The symptom for this one was that the webhook wasn't able to bind to its port and KD would refuse to start. Since you don't seem to be suffering that problem, this one may not be an issue for you.

issue #6: You'll probably need to make this change to the catalog entries.

It looks like we may have a bit of schedule wiggle room here to give minishift a try ourselves in the near future and see how it pans out.

@joel-bluedata changed the title from "Observed a panic: invalid memory address or nil pointer dereference" to "On minishift, admission controller webhook is not working" Nov 26, 2018
@sandrich (Author) commented Nov 27, 2018

> If you don't mind giving some context: what's the situation that is leading you to use minishift for this? Convenience because you're used to it, or some other criterion?
>
> I ask because it might be hard for us to shake out all the issues here remotely -- it may require us to set up minishift ourselves and give it a go. However we don't have minishift support prioritized at the moment, so it's nice to gather info about whether we should re-prioritize.

Hi @joel-bluedata, thanks for looking into it. I will answer the other issues a bit later this week and take a closer look.
We use OpenShift at work, but I did not want to test this out on that cluster. Minishift is an all-in-one OpenShift (OKD) that does not activate all the bells and whistles by default. I think the priority should be on OpenShift rather than minishift, given its use cases. If I get time, I will try to install a full OpenShift environment on a few VMs.
