
Allow for Blue/Green deployment #202

Open
allamand opened this issue Sep 17, 2021 · 3 comments
Labels
enhancement New feature or request

Comments

@allamand

We see many situations today where customers choose to do a rolling update of their EKS clusters. This is the way to go when you only have one version to catch up on and want to keep your cluster up to date. However, the rolling update mechanism offers no rollback on the control plane, so it is a one-way operation.

There are cases where customers may need to upgrade several versions at a time, or simply want to be able to quickly roll back the operation in case of an issue. This is where the Blue/Green cluster approach can be used.

When using EKS and the SSP constructs, there are many parts that are created from the cluster, such as:

  • IAM Roles for Service Accounts
  • Persistent Volumes
  • Load Balancers and DNS endpoints

Using tools like external-dns or CSI drivers, it is easy to create and attach a DNS name to a workload defined in the cluster, and to specify the volumes to be used.
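
For illustration, here is a minimal sketch of how a DNS name gets attached to a workload via the external-dns annotation, applied with plain CDK (`@aws-cdk/aws-eks`); the cluster construct, hostname, and service names are placeholders for this example, not part of the ssp-amazon-eks API:

```typescript
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'blue-cluster');

// Hypothetical cluster; with ssp-amazon-eks this would come from the blueprint.
const cluster = new eks.Cluster(stack, 'blue', {
  version: eks.KubernetesVersion.V1_20,
});

// Service annotated for external-dns: the DNS record follows whichever
// cluster this manifest (and its external-dns controller) runs in.
cluster.addManifest('web-service', {
  apiVersion: 'v1',
  kind: 'Service',
  metadata: {
    name: 'web',
    annotations: {
      // external-dns watches this annotation and creates the Route 53 record
      'external-dns.alpha.kubernetes.io/hostname': 'web.example.com',
    },
  },
  spec: {
    type: 'LoadBalancer',
    ports: [{ port: 80, targetPort: 8080 }],
    selector: { app: 'web' },
  },
});
```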

In the case of a Blue/Green migration, there are things we would want to recreate (deployments, roles, ...), but we would want our new Green cluster to inherit the persistent volumes of the previous cluster, and the DNS records created for cluster A to be migrated to cluster B.

This issue is to explore how the ssp-amazon-eks pattern can be used to support such a migration.
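
As a very rough sketch of the DNS side of such a migration, weighted Route 53 records in front of the blue and green clusters' load balancers could shift traffic gradually and keep a rollback path (shift the weights back). This is plain CDK rather than an existing ssp-amazon-eks construct, and the hosted zone id, record name, and load balancer DNS names are placeholders:

```typescript
import * as cdk from '@aws-cdk/core';
import * as route53 from '@aws-cdk/aws-route53';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'dns-cutover');

// Placeholder values: in practice these would come from the blue and green
// blueprint stacks (e.g. as CloudFormation outputs).
const hostedZoneId = 'Z0000000000000000000';
const blueLbDns = 'blue-lb-123.eu-west-1.elb.amazonaws.com';
const greenLbDns = 'green-lb-456.eu-west-1.elb.amazonaws.com';

// Two weighted records with the same name: most traffic still goes to blue,
// a slice goes to green; adjusting the weights completes (or rolls back) the cutover.
new route53.CfnRecordSet(stack, 'web-blue', {
  hostedZoneId,
  name: 'web.example.com',
  type: 'CNAME',
  ttl: '60',
  setIdentifier: 'blue',
  weight: 90,
  resourceRecords: [blueLbDns],
});

new route53.CfnRecordSet(stack, 'web-green', {
  hostedZoneId,
  name: 'web.example.com',
  type: 'CNAME',
  ttl: '60',
  setIdentifier: 'green',
  weight: 10,
  resourceRecords: [greenLbDns],
});
```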

@askulkarni2 added the enhancement label Sep 17, 2021
@shapirov103
Collaborator

Looks like a candidate for the patterns repo.

I am unclear on the strategy for the persistent volumes. Is the expectation to back them up and restore them for the second cluster? Or do you propose creating an identical configuration with respect to the AZs so that the same volumes could be mounted? That would have a limited use case, given the constraints of multi-attach.

@allamand
Author

If the volume was created by K8s via a PVC, then I don't see how the new cluster could inherit the volume. So maybe in this case a solution with Velero backup/restore would be an option?

At least for stateless workloads this would be easier, but we still need to address how to handle LB/DNS registration to allow a smooth migration from cluster A to cluster B.
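
To make the Velero idea concrete, here is a sketch of what triggering the restore on the green cluster could look like, assuming Velero is installed on both clusters with a shared backup location and that a backup named blue-apps was already taken on the blue cluster; the cluster, namespace, and backup names are placeholders:

```typescript
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'green-cluster');

// Hypothetical green cluster; with ssp-amazon-eks it would come from the blueprint.
const greenCluster = new eks.Cluster(stack, 'green', {
  version: eks.KubernetesVersion.V1_20,
});

// Velero Restore custom resource applied on the green cluster: it replays the
// blue cluster's backup, recreating workloads and persistent volumes there.
greenCluster.addManifest('restore-from-blue', {
  apiVersion: 'velero.io/v1',
  kind: 'Restore',
  metadata: { name: 'restore-from-blue', namespace: 'velero' },
  spec: {
    backupName: 'blue-apps',          // backup taken on the blue cluster
    includedNamespaces: ['team-app'], // placeholder application namespace
    restorePVs: true,                 // recreate persistent volumes from snapshots
  },
});
```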

@elamaran11
Collaborator

@allamand This should be an issue in the Blueprints Patterns repo, not here. Feel free to close this one and open an issue there.
