Feature Request: “High Availability” with Autocert #35

Open
polds opened this issue Jan 9, 2025 · 3 comments
Comments

@polds
Contributor

polds commented Jan 9, 2025

Hi, it’d be nice to be able to scale GoDoxy out to a few nodes for a more highly available setup. It should already work (I haven’t tested) if you bring your own certs, but it’d be nice to have support with autocert. I imagine that would need some basic locking or leader election (or just adding a Redis node to the compose file) so that only one instance attempts to register certs at a given time. Distributing the certs would require a clustered file system, unless the instances could share them with each other, which would need service discovery 😅 though I guess if you have the Docker hosts enabled, there’s already some amount of service discovery happening.

Anyway, I’m overcomplicating the problem, but it would be nice to be able to run multiple instances with autocert. To make the locking idea concrete, something like the sketch below is what I had in mind.
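
A minimal sketch, not GoDoxy code: the key name, TTL, and the `redis:6379` address are all made up, and the lock is best-effort rather than proper leader election.

```go
// Gate cert registration behind a Redis SET NX lock so that only one
// instance talks to the ACME server at a time.
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "redis:6379"}) // hypothetical compose service name

	// SET NX with a TTL acts as a best-effort lock: only the instance that
	// sets the key first proceeds to register/renew certificates.
	ok, err := rdb.SetNX(ctx, "godoxy:autocert:lock", "instance-1", 10*time.Minute).Result()
	if err != nil {
		log.Fatalf("redis: %v", err)
	}
	if !ok {
		log.Println("another instance holds the autocert lock; skipping renewal")
		return
	}
	// Release the lock when done (or just let the TTL expire).
	defer rdb.Del(ctx, "godoxy:autocert:lock")

	// ... run the ACME registration/renewal here, then share the resulting
	// certs via shared storage or some other distribution mechanism ...
}
```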

@yusing
Owner

yusing commented Jan 9, 2025

Could you elaborate more on it? Multiple machines running the same services?

@polds
Contributor Author

polds commented Jan 10, 2025

No, sorry, I mean the ability to horizontally scale GoDoxy to multiple instances.

graph TD
    subgraph User Requests
        User1[User] --> GoDoxy1[GoDoxy Instance 1]
        User1[User] --> GoDoxy2[GoDoxy Instance 2]
        User1[User] --> GoDoxyN[GoDoxy Instance n]
    end

    subgraph GoDoxy Instances
        GoDoxy1 -->|Handles Traffic| Applications
        GoDoxy2 -->|Handles Traffic| Applications
        GoDoxyN -->|Handles Traffic| Applications
    end

    subgraph "Let's Encrypt Renewal"
        GoDoxy1 --> Election[Leader Election]
        GoDoxy2 --> Election
        GoDoxyN --> Election
        Election -->|Leader| Leader[Leader GoDoxy]
        Leader -->|Renew Certs| ACME["Let's Encrypt"]
        ACME --> Leader
        Leader -->|Distributes Certs| GoDoxy1
        Leader --> GoDoxy2
        Leader --> GoDoxyN
    end

But with Let's Encrypt, all the instances attempt to renew the certificate, which causes issues.

@gedw99

gedw99 commented Feb 1, 2025

Hey @polds and @yusing

I agree that this is a problem with go-proxy.

For use cases where you have many servers serving the same domain, you need shared storage for the certs.

Without it, when a certificate is issued on server A, the other servers still need the same cert, so each of them requests its own, creating a "thundering herd" of cert issuance requests hitting Let's Encrypt from all your servers. Let's Encrypt blocks you in this case for about 24 hours.

With shared storage, the first of your servers checks whether the cert exists in the shared storage, and on a cache miss it obtains it from Let's Encrypt and stores it there. Your other servers do the same check, find it in the shared storage, and so don't create the "thundering herd". In code, that flow looks roughly like the sketch below.
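
A rough sketch of the check-storage-first pattern; the `Storage` interface and `obtainFromLetsEncrypt` are placeholders I made up, not a real GoDoxy or Caddy API.

```go
// Shared-storage cache for certs: only request from the CA on a cache miss,
// then publish the result so the other servers never hit the CA.
package main

import "context"

// Storage is a stand-in for whatever shared backend you use (NFS volume, S3, etc.).
type Storage interface {
	Load(ctx context.Context, key string) ([]byte, error)
	Store(ctx context.Context, key string, value []byte) error
}

// obtainFromLetsEncrypt stands in for the real ACME client call.
func obtainFromLetsEncrypt(ctx context.Context, domain string) ([]byte, error) {
	// ... perform the ACME order here ...
	return nil, nil
}

func getCert(ctx context.Context, store Storage, domain string) ([]byte, error) {
	// 1. Check the shared storage first.
	if cert, err := store.Load(ctx, "certs/"+domain); err == nil {
		return cert, nil // cache hit: no request to Let's Encrypt
	}

	// 2. Cache miss: ask Let's Encrypt once, then store the cert so the
	//    other servers find it in storage instead of hitting the CA.
	cert, err := obtainFromLetsEncrypt(ctx, domain)
	if err != nil {
		return nil, err
	}
	if err := store.Store(ctx, "certs/"+domain, cert); err != nil {
		return nil, err
	}
	return cert, nil
}
```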

Here are some example implementations of this...

https://github.com/lucaslorentz/caddy-docker-proxy?tab=readme-ov-file#volumes uses a local volume that is shared over NFS. This presumes that your provider can give you a redundant volume (Hetzner can, for example). It's a sensible default, as it does not rely on a third party.

https://caddyserver.com/docs/json/storage/ lists the storage backends Caddy supports, such as S3, Vault, NATS JetStream, etc.
