Oathkeeper reverse proxy continuous high memory utilization #1186

Open
DenisPnko opened this issue Sep 19, 2024 · 1 comment
Labels
bug Something is not working.

Comments

DenisPnko commented Sep 19, 2024

Preflight checklist

Ory Network Project

No response

Describe the bug

I have tried to set up Oathkeeper as a reverse proxy with SSL termination for an application with a high request rate (~6k rps) that stores and distributes files. Oathkeeper runs as a sidecar container in the same pod as the main application, accepting requests and then proxying them (mainly GET requests) to it. Currently its main purpose is SSL termination.

While load testing this setup with a large number of requests, I saw a consistent increase in memory usage and a noticeable slowdown in request processing. Increasing the resources and the number of pods improves performance, but memory utilization still climbs steadily until the Oathkeeper containers hit their limits and restart. Even with large resource additions the improvement is only moderate, and compared to the main application Oathkeeper requires far more resources.

I wanted to ask whether you think this could be an issue in Oathkeeper, or whether there is something I would need to change in my configuration for this setup. Any insights on memory utilization would also be helpful.

Reproducing the bug

Running in Kubernetes, installed with the official Helm chart.

The requests go through the internal load balancer (port 443) -> Oathkeeper (port 4455) -> main application (ports 8080 and 8081 on the same pod).
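
For reference, here is a minimal sketch of the pod layout described above. This is not the chart-generated manifest; container names, images, and the config file name are illustrative, while the ports and mount paths follow the configuration further down in this issue.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-oathkeeper            # illustrative name
spec:
  containers:
    - name: main-app                   # serves the external site on 8080 and the internal site on 8081
      image: example/main-app:latest   # illustrative image
      ports:
        - containerPort: 8080
        - containerPort: 8081
    - name: oathkeeper                 # sidecar doing TLS termination and proxying
      image: oryd/oathkeeper:v0.40.6
      args: ["serve", "--config", "/etc/config/oathkeeper/config.yaml"]
      ports:
        - containerPort: 4455          # the internal load balancer forwards 443 here
      volumeMounts:
        - name: oathkeeper-config
          mountPath: /etc/config/oathkeeper   # holds config.yaml and access-rules.yml
        - name: tls-certs
          mountPath: /etc/certs               # tls.key / tls.crt referenced by serve.proxy.tls
  volumes:
    - name: oathkeeper-config
      configMap:
        name: oathkeeper-config        # illustrative ConfigMap name
    - name: tls-certs
      secret:
        secretName: app-tls            # illustrative Secret name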

Relevant log output

In terms of errors, I haven't found anything notable in the logs besides:

{"audience":"application","error":{"message":"dial tcp 127.0.0.1:8080: connect: cannot assign requested address"},"level":"error","msg":"http: gateway error","service_name":"ORY Oathkeeper","service_version":"v0.40.6"}

That was resolved by adding more pods, but maybe there are other approaches I could use here?

Relevant configuration

accessRules: |
    - id: "1"
      upstream:
        url: "http://127.0.0.1:8080"
      match:
        url: "https://<external-site>/<**>”
        methods:
          - POST
          - GET
          - HEAD
          - PUT
          - PATCH
          - DELETE
      authenticators:
        - handler: anonymous
      mutators:
        - handler: noop
      authorizer:
        handler: allow
    - id: "2"
      upstream:
        url: "http://127.0.0.1:8081"
      match:
        url: "https://<internal-site>/<**>”
        methods:
          - POST
          - GET
          - HEAD
          - PUT
          - PATCH
          - DELETE
      authenticators:
        - handler: anonymous
      mutators:
        - handler: noop
      authorizer:
        handler: allow

  oathkeeperConfig: |
    log:
      level: debug
      format: json

    serve:
      proxy:
        port: 4455
        tls:
          key:
            path: /etc/certs/tls.key
          cert:
            path: /etc/certs/tls.crt
        cors:
          enabled: true
          allowed_origins:
            - http://127.0.0.1:3001
          allowed_methods:
            - POST
            - GET
            - HEAD
            - PUT
            - PATCH
            - DELETE
          allowed_headers:
            - Authorization
            - Content-Type
          exposed_headers:
            - Content-Type
          allow_credentials: true
          debug: true

    errors:
      fallback:
        - json

      handlers:
        json:
          enabled: true
          config:
            verbose: true

    access_rules:
      matching_strategy: glob
      repositories:
        - file:///etc/config/oathkeeper/access-rules.yml

    authenticators:
      anonymous:
        enabled: true
        config:
          subject: guest

      noop:
        enabled: true

    authorizers:
      allow:
        enabled: true

    mutators:
      noop:
        enabled: true

Version

v0.40.6

On which operating system are you observing this issue?

Other

In which environment are you deploying?

Kubernetes with Helm

Additional Context

No response

DenisPnko added the bug label on Sep 19, 2024
aeneasr (Member) commented Nov 13, 2024

The cache is probably misconfigured and you have a max_cost that is too high. The defaults in Oathkeeper are currently way too high for these caches.
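
For context, a hedged sketch of the kind of setting this comment refers to. The exact keys, and which handlers expose a cache at all, depend on the Oathkeeper version, so treat the names below as an illustration to check against the v0.40.6 configuration reference rather than a verified fix (the oauth2_introspection handler is used here only as an example; it is not part of the configuration in this issue):

authenticators:
  oauth2_introspection:
    enabled: true
    config:
      introspection_url: https://auth.example.org/oauth2/introspect   # illustrative URL
      cache:
        enabled: true
        ttl: 60s
        max_cost: 10000   # deliberately small; per the comment above, the shipped defaults are far larger

Since the rules in this issue only use the anonymous, noop, and allow handlers, it may be worth confirming which cache the comment refers to for this particular setup.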
