
Nginx/Alertmanager redirect losing tenant headers #10306

Open
nesspll opened this issue Dec 23, 2024 · 0 comments
nesspll commented Dec 23, 2024

We have an issue involving the gateway/nginx and Alertmanager in Mimir. We are trying to derive the tenant from the request path and pass it as a header to the proxied upstream, but that header is getting lost. We would appreciate some input here.

We want a separate path per tenant. For example, when a user visits alertmanager/sre-eks, the tenant "sre-eks" is set as a header via "proxy_set_header X-Scope-OrgID sre-eks;" in the Alertmanager tenant location, but when the request is forwarded via "proxy_pass http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}{{ template "mimir.alertmanagerHttpPrefix" . }};" the header is lost.
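As far as we can tell this matches nginx's documented `proxy_pass` behaviour: when the upstream address contains a variable and the directive also carries a URI part, that URI is passed to the upstream as-is, replacing the whole request URI. A minimal sketch of the effect (the service name and port below are placeholders, not our real values):

```nginx
location /alertmanager/sre-eks {
  proxy_set_header X-Scope-OrgID sre-eks;
  set $alertmanager alertmanager-headless.mimir.svc.cluster.local;  # placeholder

  # Because $alertmanager is a variable AND a URI (/alertmanager) is given,
  # nginx forwards the literal path /alertmanager for EVERY request in this
  # location: /alertmanager/sre-eks, /alertmanager/sre-eks/api/v2/status, ...
  proxy_pass http://$alertmanager:8080/alertmanager;
}
```

The upstream then answers `/alertmanager` with its own `301` to `/alertmanager/`, and the client's follow-up request for `/alertmanager/` no longer matches the tenant location, so `X-Scope-OrgID` is never set on it.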

This is the section of nginx config:

file:  |
        worker_processes  5;  ## Default: 1
        error_log  /dev/stderr {{ .Values.gateway.nginx.config.errorLogLevel }};
        pid        /tmp/nginx.pid;
        worker_rlimit_nofile 8192;

        events {
          worker_connections  4096;  ## Default: 1024
        }

        http {
          client_body_temp_path /tmp/client_temp;
          proxy_temp_path       /tmp/proxy_temp_path;
          fastcgi_temp_path     /tmp/fastcgi_temp;
          uwsgi_temp_path       /tmp/uwsgi_temp;
          scgi_temp_path        /tmp/scgi_temp;

          default_type application/octet-stream;
          log_format   {{ .Values.gateway.nginx.config.logFormat }}

          {{- if .Values.gateway.nginx.verboseLogging }}
          access_log   /dev/stderr  main;
          {{- else }}

          map $status $loggable {
            ~^[23]  0;
            default 1;
          }
          access_log   {{ .Values.gateway.nginx.config.accessLogEnabled | ternary "/dev/stderr  main  if=$loggable;" "off;" }}
          {{- end }}

          sendfile           on;
          tcp_nopush         on;
          proxy_http_version 1.1;

          {{- if .Values.gateway.nginx.config.resolver }}
          resolver {{ .Values.gateway.nginx.config.resolver }};
          {{- else }}
          resolver {{ .Values.global.dnsService }}.{{ .Values.global.dnsNamespace }}.svc.{{ .Values.global.clusterDomain }};
          {{- end }}

          {{- with .Values.gateway.nginx.config.httpSnippet }}
          {{ . | nindent 2 }}
          {{- end }}

          # Ensure that X-Scope-OrgID is always present, default to the no_auth_tenant for backwards compatibility when multi-tenancy was turned off.
          map $http_x_scope_orgid $ensured_x_scope_orgid {
            default $http_x_scope_orgid;
            "" "{{ include "mimir.noAuthTenant" . }}";
          }

          map $http_x_scope_orgid $has_multiple_orgid_headers {
            default 0;
            "~^.+,.+$" 1;
          }

          proxy_read_timeout 300;
          server {
            listen {{ include "mimir.serverHttpListenPort" . }};
            {{- if .Values.gateway.nginx.config.enableIPv6 }}
            listen [::]:{{ include "mimir.serverHttpListenPort" . }};
            {{- end }}

            {{- if .Values.gateway.nginx.config.clientMaxBodySize }}
            client_max_body_size {{ .Values.gateway.nginx.config.clientMaxBodySize }};
            {{- end }}

            {{- if .Values.gateway.nginx.basicAuth.enabled }}
            auth_basic           "Mimir";
            auth_basic_user_file /etc/nginx/secrets/.htpasswd;
            {{- end }}

            if ($has_multiple_orgid_headers = 1) {
                return 400 'Sending multiple X-Scope-OrgID headers is not allowed. Use a single header with | as separator instead.';
            }

            location = / {
              return 200 'OK';
              auth_basic off;
            }

            location = /ready {
              return 200 'OK';
              auth_basic off;
            }

            proxy_set_header X-Scope-OrgID $ensured_x_scope_orgid;

            # Distributor endpoints
            location /distributor {
              set $distributor {{ template "mimir.fullname" . }}-distributor-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$distributor:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
            location = /api/v1/push {
              set $distributor {{ template "mimir.fullname" . }}-distributor-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$distributor:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
            location /otlp/v1/metrics {
              set $distributor {{ template "mimir.fullname" . }}-distributor-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$distributor:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }

            
            # Alertmanager endpoints
            location {{ template "mimir.alertmanagerHttpPrefix" . }} {
              set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
            location = /multitenant_alertmanager/status {
              set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
            location = /multitenant_alertmanager/configs {
              set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
            location = /api/v1/alerts {
              set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
           

            # Alertmanager tenant endpoints
            location /alertmanager/sre-eks {
              proxy_set_header X-Scope-OrgID sre-eks;
              set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}{{ template "mimir.alertmanagerHttpPrefix" . }};
            }
            location /alertmanager/sre-prod {
              proxy_set_header X-Scope-OrgID sre-prod;
              set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}{{ template "mimir.alertmanagerHttpPrefix" . }};
            }
            location /alertmanager/sre-dev {
              proxy_set_header X-Scope-OrgID sre-dev;
              set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }}{{ template "mimir.alertmanagerHttpPrefix" . }};
            }


            # Ruler endpoints
            location {{ template "mimir.prometheusHttpPrefix" . }}/config/v1/rules {
              set $ruler {{ template "mimir.fullname" . }}-ruler.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$ruler:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
            location {{ template "mimir.prometheusHttpPrefix" . }}/api/v1/rules {
              set $ruler {{ template "mimir.fullname" . }}-ruler.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$ruler:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }

            location {{ template "mimir.prometheusHttpPrefix" . }}/api/v1/alerts {
              set $ruler {{ template "mimir.fullname" . }}-ruler.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$ruler:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }
            location = /ruler/ring {
              set $ruler {{ template "mimir.fullname" . }}-ruler.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$ruler:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }

            # Rest of {{ template "mimir.prometheusHttpPrefix" . }} goes to the query frontend
            location {{ template "mimir.prometheusHttpPrefix" . }} {
              set $query_frontend {{ template "mimir.fullname" . }}-query-frontend.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$query_frontend:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }

            # Buildinfo endpoint can go to any component
            location = /api/v1/status/buildinfo {
              set $query_frontend {{ template "mimir.fullname" . }}-query-frontend.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$query_frontend:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }

            # Compactor endpoint for uploading blocks
            location /api/v1/upload/block/ {
              set $compactor {{ template "mimir.fullname" . }}-compactor.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
              proxy_pass      http://$compactor:{{ include "mimir.serverHttpListenPort" . }}$request_uri;
            }

            {{- with .Values.gateway.nginx.config.serverSnippet }}
            {{ . | nindent 4 }}
            {{- end }}
          }
        }

So when we test via browser, or curl the endpoint, we get redirected to the default Alertmanager. It is this REDIRECT that causes the headers to be lost.

$ curl -v http://mimir.sre-poc.aws.k8s/alertmanager
* Host mimir.sre-poc.aws.k8s:80 was resolved.
* IPv6: (none)
* IPv4: 10.48.4.154
*   Trying 10.48.4.154:80...
* Connected to mimir.sre-poc.aws.k8s (10.48.4.154) port 80
* using HTTP/1.x
> GET /alertmanager HTTP/1.1
> Host: mimir.sre-poc.aws.k8s
> User-Agent: curl/8.10.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 301 Moved Permanently              #### THIS redirect: all headers are dropped after it.
< Server: nginx/1.27.3
< Date: Thu, 19 Dec 2024 15:06:27 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 49
< Connection: keep-alive
< Location: /alertmanager/
< Vary: Accept-Encoding
<
<a href="/alertmanager/">Moved Permanently</a>.


* Connection #0 to host mimir.sre-poc.aws.k8s left intact

The same happens when we pass the header explicitly:

$ curl -v -H "X-Scope-OrgID: sre-eks" http://mimir.sre-poc.aws.k8s/alertmanager
* Host mimir.sre-poc.aws.k8s:80 was resolved.
* IPv6: (none)
* IPv4: 10.48.4.154
*   Trying 10.48.4.154:80...
* Connected to mimir.sre-poc.aws.k8s (10.48.4.154) port 80
* using HTTP/1.x
> GET /alertmanager HTTP/1.1
> Host: mimir.sre-poc.aws.k8s
> User-Agent: curl/8.10.1
> Accept: */*
> X-Scope-OrgID: sre-eks
>
* Request completely sent off
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.27.3
< Date: Thu, 19 Dec 2024 15:10:20 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 49
< Connection: keep-alive
< Location: /alertmanager/
< Vary: Accept-Encoding
<
<a href="/alertmanager/">Moved Permanently</a>.


* Connection #0 to host mimir.sre-poc.aws.k8s left intact
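A possible workaround we are considering (only a sketch, not yet verified against the chart): handle the trailing-slash redirect at the gateway itself, and strip the tenant segment with `rewrite ... break` so that the rewritten request URI, rather than a fixed one, is passed upstream. The follow-up request then still lands in the tenant location:

```nginx
# Redirect the bare tenant path ourselves, so the client never sees the
# upstream's /alertmanager -> /alertmanager/ redirect.
location = /alertmanager/sre-eks {
  return 302 /alertmanager/sre-eks/;
}

location /alertmanager/sre-eks/ {
  proxy_set_header X-Scope-OrgID sre-eks;
  set $alertmanager {{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};

  # Strip the tenant segment; "break" makes proxy_pass (which has no URI
  # part here) forward the rewritten path, e.g.
  # /alertmanager/sre-eks/api/v2/status -> /alertmanager/api/v2/status
  rewrite ^/alertmanager/sre-eks/(.*)$ {{ template "mimir.alertmanagerHttpPrefix" . }}/$1 break;
  proxy_pass http://$alertmanager:{{ include "mimir.serverHttpListenPort" . }};
}
```

The same pattern would be repeated for sre-prod and sre-dev. Any further redirects the upstream emits would presumably also need a `proxy_redirect {{ template "mimir.alertmanagerHttpPrefix" . }}/ /alertmanager/sre-eks/;` to keep the client under the tenant path.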