This repository has been archived by the owner on Feb 9, 2022. It is now read-only.

Nginx Ingress goes down after updates for config #998

Open
IvanKolbasiuk opened this issue Dec 7, 2020 · 9 comments

Comments

@IvanKolbasiuk

Hello there, I ran into an issue with the Nginx ingress. I needed to add some configuration to the nginx pod, so I used kubeprod-manifest.jsonnet:
```jsonnet
// Cluster-specific configuration
(import "https://releases.kubeprod.io/files/v1.6.1/manifests/platforms/aks.jsonnet") {
  config:: import "kubeprod-autogen.json",

  // Place your overrides here
  nginx_ingress+: {
    config+: {
      data+: {
        "proxy-body-size": "20m",
      },
    },
  },
  prometheus+: {
    storage:: 32768,
  },
}
```

I applied this manifest with `kubeprod install aks ...`

and received an error from Prometheus like:

```
ERROR Error: Error updating statefulsets kubeprod.prometheus: StatefulSet.apps "prometheus" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
```

More importantly, I can no longer load the Kibana, Grafana, or Prometheus pages from my domain.
[screenshots: browser error pages]

Could you suggest something here?
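As a side note on verifying the override: assuming BKPR's default `kubeprod` namespace (the ConfigMap name here is an assumption; list the ConfigMaps first to find the right one), one way to check whether the `proxy-body-size` setting actually landed in the controller's ConfigMap is:

```shell
# List the ConfigMaps in the BKPR namespace to find the controller's one.
kubectl -n kubeprod get configmap

# Read the overridden key back out of it ('nginx-ingress' is an assumed name).
kubectl -n kubeprod get configmap nginx-ingress \
  -o jsonpath='{.data.proxy-body-size}'
```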

@IvanKolbasiuk
Author

Hello there, maybe this output from the Nginx ingress could help:

```
- - [09/Dec/2020:07:21:47 +0000] "GET /oauth2/auth HTTP/1.1" 401 21 "-" "Mozilla/5.0 [kubeprod-oauth2-proxy-4180] [] 102:4180 21 0.004 401 4549f29063eb6dc6533b9
2020/12/09 07:21:47 [alert] 23#23: worker process 19880 exited on signal 11 (core dumped)
[09/Dec/2020:07:23:03 +0000] "GET / HTTP/2.0" 404 19 "https://kibana.myhost.net/" "Mozilla/5.0 [kubeprod-oauth2-proxy-4180] [] :4180 19 0.004 404 1d5dc91c49e70943d64
2020/12/09 07:23:03 [alert] 23#23: worker process 20417 exited on signal 11 (core dumped)
[09/Dec/2020:07:23:19 +0000] "GET /oauth2/auth HTTP/1.1" 202 0 "-" [kubeprod-oauth2-proxy-4180] [] :4180 0 0.004 202 0574d8a22bf443586a46f8dd53
terminate called after throwing an instance of 'std::logic_error'
  what(): basic_string::_M_construct null not valid
2020/12/09 07:23:20 [alert] 23#23: worker process 19846 exited on signal 6 (core dumped)
[09/Dec/2020:07:23:20 +0000] "GET /oauth2/auth HTTP/1.1" 202 0 "-" " [kubeprod-oauth2-proxy-4180] [] 2:4180 0 0.004 202 3081e55b59b3aac1b86b32
terminate called after throwing an instance of 'std::logic_error'
  what(): basic_string::_M_construct null not valid
2020/12/09 07:23:20 [alert] 23#23: worker process 20457 exited on signal 6 (core dumped)
```

@javsalgar
Contributor

Hi,

I am afraid that, in the case of StatefulSets, not all parameters can be edited after deployment (some fields are immutable). In cases like this you may need to delete and redeploy the prometheus StatefulSet for the change to take effect. Regarding the other issue, could you show the status of all the deployed pods?
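The delete-and-redeploy approach could be sketched as follows, assuming BKPR's default `kubeprod` namespace and that re-running the installer recreates the StatefulSet with the new spec:

```shell
# Delete only the StatefulSet object; '--cascade=orphan' (or '--cascade=false'
# on older kubectl versions) leaves the existing pods running, and the PVC
# holding the metrics data is not deleted by a StatefulSet deletion anyway.
kubectl -n kubeprod delete statefulset prometheus --cascade=orphan

# Re-apply the BKPR manifests so the StatefulSet is recreated with the new spec.
kubeprod install aks ...
```

This avoids losing the Prometheus data while still getting around the immutable-field restriction.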

@IvanKolbasiuk
Author

IvanKolbasiuk commented Dec 14, 2020

Hi @javsalgar,

Regarding Prometheus, I followed the doc from this repo; here is the link: https://github.com/bitnami/kube-prod-runtime/blob/master/docs/components.md#override-storage-parameters

But, in any case, the second problem is more important for me.
[screenshot: pod status output]

Everything there is up and running, but the ingress only works partially. I can authorise myself at auth.mydomain.com, and after that I can load the Kibana or Grafana pages, but I always receive the errors shown in the screenshots from my first comment. I had something similar before, and only one thing helped: reinstalling the whole k8s cluster. How can I rebuild or reconfigure the ingress without reinstalling the whole cluster? Thanks
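One way to bounce just the ingress controller without reinstalling the cluster could look like this; the deployment name is an assumption based on BKPR defaults, and `rollout restart` requires kubectl >= 1.15:

```shell
# Confirm the controller deployment's actual name first.
kubectl -n kubeprod get deployments

# Trigger a rolling restart of the controller pods and wait for it to finish.
kubectl -n kubeprod rollout restart deployment nginx-ingress-controller
kubectl -n kubeprod rollout status deployment nginx-ingress-controller
```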

@javsalgar
Contributor

Could you provide the Kibana and Grafana logs to see if there's anything meaningful?

@IvanKolbasiuk
Author

IvanKolbasiuk commented Dec 15, 2020

I can connect to Grafana or Kibana with kubectl port-forwarding from my local PC, and I can see logs for all pods in Kibana. The problem is that I can't reach any of this through the ingress — not only the apps in the kubeprod namespace, but our own apps behind the ingress as well. With port-forwarding everything works. You have already seen the Nginx output in my previous comments. Which logs are you interested in? Pod logs?
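The port-forward workaround mentioned above can be sketched as follows; the service names and ports are assumptions, so confirm them against your cluster first:

```shell
# List the services in the BKPR namespace to get the real names and ports.
kubectl -n kubeprod get svc

# Forward local ports to the in-cluster services (assumed names/ports).
kubectl -n kubeprod port-forward svc/kibana 5601:5601 &
kubectl -n kubeprod port-forward svc/grafana 3000:3000 &
```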

@IvanKolbasiuk
Author

For example, if you try to open kibana.myhost.com in a clean browser, you can't: Nginx doesn't redirect to the authorization page for some reason.

@javsalgar
Contributor

Hi,

Indeed, it seems that the nginx process could be crashing for some reason:

```
2020/12/09 07:23:03 [alert] 23#23: worker process 20417 exited on signal 11 (core dumped)
```

Did you try installing it with the default values, and did it work then? That could help us narrow down the exact configuration value that is making nginx crash.
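A complementary check would be to ask nginx itself to validate the configuration the controller rendered; the pod label below is an assumption based on BKPR defaults:

```shell
# Confirm the label with: kubectl -n kubeprod get pods --show-labels
POD=$(kubectl -n kubeprod get pods -l name=nginx-ingress-controller \
      -o jsonpath='{.items[0].metadata.name}')

# 'nginx -t' parses and validates the generated nginx.conf inside the pod.
kubectl -n kubeprod exec "$POD" -- nginx -t
```

A clean `nginx -t` would suggest the crash comes from a runtime code path rather than an invalid configuration.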

@IvanKolbasiuk
Author

Hi,
The problem is that if I delete everything with

```
kubecfg delete kubeprod-manifest.jsonnet
```

and then install with a clean configuration via

```
kubeprod install aks ...
```

I get the same behaviour from the ingress.

The only way out is to delete the AKS cluster and create a new one. After that I can install BKPR, even with customised values. But if I later want to add anything, I can't, because of this ingress behaviour. I got the same result after three attempts.

@IvanKolbasiuk
Author

Hello there. As I see it, it is difficult to find the cause of my problem. Does anybody else use BKPR with AKS?
