
tcp dns resolver #182

Open
nick-phillips-dev opened this issue Jan 15, 2020 · 1 comment
nick-phillips-dev commented Jan 15, 2020

I'm trying to create a new REQ socket that connects to multiple servers automatically, keeps that set of backends updated from DNS, and load-balances between them.

The code is just like the req/rep example except that there are multiple clients connecting to multiple servers.

Kubernetes lets you deploy a headless Service for a StatefulSet. A DNS query for the service name then returns all of the available endpoints. This bypasses kube-proxy, so we can handle load balancing on the client side.

When I run the following code, the socket is connected to one of the endpoints at random:

import (
    "go.nanomsg.org/mangos/v3/protocol/req"
    _ "go.nanomsg.org/mangos/v3/transport/tcp" // register the TCP transport
)

sock, err := req.NewSocket()
if err != nil {
    panic(err)
}

err = sock.Dial("tcp://myset.default.svc.cluster.local:2000")
if err != nil {
    panic(err)
}

if err = sock.Send([]byte("hello")); err != nil {
    panic(err)
}

Inside the container, I can see from netstat that only one tcp connection is alive:

netstat -Wt
tcp        0      0 my-service-b9bddbb85-tjj7g:36718                 my-service-1.my-service.default.svc.cluster.local:cisco-sccp ESTABLISHED

To explain the endpoint name, see the Kubernetes documentation on how pods in a StatefulSet maintain a stable network identity.

Let's say I scale the StatefulSet to 3 replicas; the pods will then have the following DNS entries:

my-service-0.my-service.default.svc.cluster.local
my-service-1.my-service.default.svc.cluster.local
my-service-2.my-service.default.svc.cluster.local
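Resolving all of the per-pod addresses is just an A-record lookup. The sketch below is a hypothetical helper (`dialURLs`, with the resolver injected so it runs without cluster DNS; in a pod you would pass `net.LookupHost`) that expands the headless-service name into one mangos-style `tcp://` URL per pod:

```go
package main

import (
	"fmt"
	"net"
)

// dialURLs expands a headless-service hostname into one "tcp://" URL per
// resolved address. The lookup function is injected so this can be
// exercised without a live cluster DNS.
func dialURLs(lookup func(host string) ([]string, error), host, port string) ([]string, error) {
	addrs, err := lookup(host)
	if err != nil {
		return nil, err
	}
	urls := make([]string, 0, len(addrs))
	for _, a := range addrs {
		urls = append(urls, "tcp://"+net.JoinHostPort(a, port))
	}
	return urls, nil
}

func main() {
	// Stand-in for net.LookupHost("myset.default.svc.cluster.local"),
	// which against a headless service returns every pod IP.
	fakeLookup := func(string) ([]string, error) {
		return []string{"10.0.0.1", "10.0.0.2", "10.0.0.3"}, nil
	}
	urls, err := dialURLs(fakeLookup, "myset.default.svc.cluster.local", "2000")
	if err != nil {
		panic(err)
	}
	for _, u := range urls {
		fmt.Println(u)
	}
}
```

Each of these URLs could then be passed to a separate `sock.Dial` call as a manual workaround.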

Here is a good article on how gRPC can be set up to load-balance from the client side.

I would like to do this for Mangos if this isn't already possible. Do you know what would be the best approach here?

I thought about starting with the tcp transport and adding a layer on top of it to manage each connection. We could continually resolve DNS every 5-10 seconds to see if any members have changed.
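The core of that polling loop would be a set diff between successive DNS answers. A minimal sketch, where `diff` is a hypothetical helper (a real loop would drive it from a `time.Ticker`, Dial the added endpoints, and close the removed ones):

```go
package main

import (
	"fmt"
	"sort"
)

// diff compares the previous and current DNS answers and reports which
// endpoints appeared and which disappeared, so the caller can dial the
// new ones and close the stale ones.
func diff(prev, curr []string) (added, removed []string) {
	prevSet := make(map[string]bool, len(prev))
	for _, a := range prev {
		prevSet[a] = true
	}
	currSet := make(map[string]bool, len(curr))
	for _, a := range curr {
		currSet[a] = true
		if !prevSet[a] {
			added = append(added, a)
		}
	}
	for _, a := range prev {
		if !currSet[a] {
			removed = append(removed, a)
		}
	}
	sort.Strings(added)
	sort.Strings(removed)
	return added, removed
}

func main() {
	// e.g. my-service-0 was replaced by my-service-2 between two polls
	prev := []string{"10.0.0.1", "10.0.0.2"}
	curr := []string{"10.0.0.2", "10.0.0.3"}
	added, removed := diff(prev, curr)
	fmt.Println("added:", added, "removed:", removed)
}
```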

Please let me know your thoughts. Thank you!


gdamore commented Jan 16, 2020

For REQ or REP over TCP this is probably fairly straightforward to do -- except that instead of just one connection being alive, we would have one to each returned server.

We would want to add a property indicating that we want to connect to all of the returned DNS entries, not just the first one. At that point, every connection is used and considered when issuing requests, giving round-robin load balancing, for example. Hopefully that's what you want, and you don't have to be concerned about state sharing between the far-side peers.
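For illustration, the per-request rotation across live connections could look like this stripped-down sketch (`roundRobin` is a hypothetical stand-in, not mangos's internal scheduler):

```go
package main

import (
	"fmt"
	"sync"
)

// roundRobin hands out endpoints in rotation -- the per-request behaviour
// a "dial all" REQ socket would give across its live connections.
type roundRobin struct {
	mu        sync.Mutex
	endpoints []string
	next      int
}

// pick returns the next endpoint in rotation; safe for concurrent use.
func (r *roundRobin) pick() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	e := r.endpoints[r.next]
	r.next = (r.next + 1) % len(r.endpoints)
	return e
}

func main() {
	rr := &roundRobin{endpoints: []string{"pod-0", "pod-1", "pod-2"}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.pick()) // pod-0, pod-1, pod-2, then wraps to pod-0
	}
}
```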

This won't work for PAIR for obvious reasons. It would probably work out ok for pretty much the rest of the protocols though.

So the way to handle this is with a Dialer property (DialAll or something like that).

If you want to resolve to just one, well, that's what happens today. If the remote peer disconnects for any reason, we automatically reconnect. (The dialer does -- obviously the accepter can't initiate a new connection.) We hit DNS each time we do that, so hopefully we get a different answer (that depends on the resolver).

A cool enhancement might be an algorithm like Happy Eyeballs, where we dial out to all of them simultaneously, but then disconnect everything except the first connection to complete negotiation. That would tend to resolve to whichever peer answers first. That work could be done in the TCP dialer.

If this is something you need commercially, let me know and we can talk about how Staysail can help -- otherwise I'm happy to consider a PR if you feel equipped to do the work yourself.
