Replies: 2 comments
-
Hey Hernan, thanks for taking my course. That's an excellent plan. There are lots of pieces that will take you some time to work through. I can't cover all those topics in a reply, so here are a few things that can help you get going:
In the end, I recommend starting simple and joining our Discord to ask questions. Oh, and we have a bi-monthly video hangout called Swarm Fans that is *this Friday* if you can join. This link should help get you the calendar invite after you join our server: https://discord.gg/nyvGcWrb?event=1148311041554534570
-
Answering from my on-premise production swarm experience:
Depending on your situation, HA storage options vary. If you have access to enterprise-level SANs, then you have access to HA NFS. If you need to create your own storage, then a distributed file system like GlusterFS, Ceph, etc. is easy to set up. We went from SAN-provided NFS to a GlusterFS cluster. Both worked well.
Each Docker node needs to store its own data, and /var/lib/docker should, for performance reasons, be as close to the node as possible. Also, at the application level you generally want to be quite explicit about whether your stack is storing local (per-node) or shared (cluster-wide) data, so mounting /var/lib/docker[...] is not a good idea. What people generally seem to do is mount one (or more) shared file systems on the node, e.g. /mnt/nfs-docker-0/, and then explicitly reference this host-relative location as either bind mounts or as volumes with a specific target. In our case - after looking at the available Docker volume plugins - I made a Docker volume plugin to allow
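To illustrate being explicit about per-node vs. cluster-wide data, here is a minimal compose sketch (image name, NFS server address, and export paths are all placeholders, not from this thread) that declares one NFS-backed named volume for shared data and one plain local volume for per-node data:

```yaml
# Hypothetical compose fragment: make shared vs. local storage explicit
services:
  app:
    image: example/app                 # placeholder image
    volumes:
      - shared_data:/data/shared       # cluster-wide, lives on the NFS export
      - local_cache:/data/cache        # per-node, lives under /var/lib/docker

volumes:
  shared_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"   # placeholder NFS server
      device: ":/exports/app-shared"     # placeholder export path
  local_cache: {}
```

With the `local` driver's NFS options, each node mounts the export itself when a task lands there, so you don't even need the pre-mounted /mnt/nfs-docker-0 host path for this pattern - but either approach keeps the shared/local distinction visible in the stack file.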
Definitely a case-by-case basis. Given your use cases, you might actually want to include a MinIO cluster, as S3-compatible object storage is a great place to store registry data and will be used a lot in the Grafana/Loki space too. Alternatively, if you have access to SANs, or if whatever distributed filesystem you pick has added S3 API support...
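For the registry-on-S3 idea, this is roughly what the Docker registry's config looks like when pointed at a MinIO endpoint (endpoint, bucket, and credentials below are placeholders for illustration):

```yaml
# Hypothetical registry config.yml fragment: store image blobs in
# S3-compatible object storage (MinIO) instead of a local filesystem.
version: 0.1
storage:
  s3:
    regionendpoint: http://minio:9000   # placeholder MinIO service address
    region: us-east-1                   # required by the driver; MinIO ignores it
    bucket: registry                    # placeholder bucket name
    accesskey: REGISTRY_ACCESS_KEY      # placeholder credentials
    secretkey: REGISTRY_SECRET_KEY
http:
  addr: :5000
```

The nice property in a swarm is that registry replicas become stateless: any replica on any node reads and writes the same bucket, so no shared filesystem is needed for the registry at all.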
Definitely this. keepalived must monitor the health of the Docker service at least. It might add a hop, but if reliability rather than performance is the concern, then don't put the reverse proxy in host mode. As long as you hit :80/:443 on any healthy Docker node, Docker's own overlay load balancer will route to a healthy Traefik/Caddy/NGINX (if you must) instance.
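A minimal keepalived sketch of "monitor the Docker service at least" might look like this (interface name, router ID, priority, and VIP are placeholders you'd set per node):

```
# Hypothetical keepalived.conf fragment: hold a VIP only while the
# local Docker daemon is responsive.
vrrp_script chk_docker {
    script "/usr/bin/docker info"   # non-zero exit marks this node unhealthy
    interval 5                      # run the check every 5 seconds
    fall 2                          # 2 failures -> consider node down
    rise 2                          # 2 successes -> consider node up again
}

vrrp_instance VI_1 {
    state BACKUP                    # let priority decide the master
    interface eth0                  # placeholder interface
    virtual_router_id 51            # placeholder; same on all three nodes
    priority 100                    # vary per node, e.g. 100/90/80
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24            # placeholder VIP clients connect to
    }
    track_script {
        chk_docker
    }
}
```

When the check fails, the VIP moves to another node, and since the ingress mesh answers on :80/:443 on every healthy node, traffic still reaches a healthy proxy instance wherever it lands.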
-
Hi Bret,
My name is Hernan. First of all, sorry for my English (it's not my native language), and thank you in advance.
I've taken your Udemy course "Docker Swarm Mastery" and it was great.
I'm planning to build a Swarm cluster with three nodes, all managers/workers, on-premises, and I have some questions concerning shared storage, reverse proxy, and backups that I'm sure you can help me clarify.
A little description of the solution I'm planning:
First, what is the best approach to obtain high availability of the cluster regarding shared storage?
For example, is it a good idea to mount an NFS volume on /var/lib/docker (or /var/lib/docker/volumes) on the three nodes?
Or is it better to manage that on a case-by-case basis?
For example, one shared NFS volume for the Portainer data, one for the images of the private registry, etc.?
In that case, what is the recommended way? Volumes mounted on the three nodes and referenced as local paths, or network volumes managed in the services' configs?
Second, regarding reverse proxy in HA.
My idea is to install NGINX directly on the three hosts and use Keepalived to implement VRRP.
Is that a sensible approach?
What do you recommend if it's not?
Again, thank you very much for the help.
Regards,
Hernan