[IBCDPE-1034] Move deployment model to ArgoCD #16
Conversation
Thanks for this PR. @BWMac, did you want to take a stab at a review?
README.md
Outdated
down when we destroy the VPC and EKS cluster.
In addition to this scanning that is in place we are also taking advantage of the
Just a note: for Trivy, let's focus on the HIGH/CRITICAL issues and on the issues that we can fix: https://github.com/Sage-Bionetworks/shiny-base/blob/e5770f31fe4a7d79150a25989a34ff3c3b1a78d1/.github/workflows/trivy.yml#L60-L69
So Trivy is smart enough to know which vulnerabilities can and cannot be fixed if you configure it a certain way?
I set ignoreUnfixed: true in 50ac49a.
We saw today that the number of issues flagged was around half of what it was before.
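For reference, a hedged sketch of where a setting like this could live if the scanner is deployed through a helm chart (the chart, repository, namespace, and value path below are assumptions for illustration, not taken from this repo):
resource "helm_release" "trivy_operator" {
  # Hypothetical example: the trivy-operator chart exposes a trivy.ignoreUnfixed
  # value; setting it to true hides vulnerabilities that have no upstream fix yet,
  # which is what reduces the number of flagged issues.
  name       = "trivy-operator"
  repository = "https://aquasecurity.github.io/helm-charts"
  chart      = "trivy-operator"
  namespace  = "trivy-system"

  set {
    name  = "trivy.ignoreUnfixed"
    value = "true"
  }
}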
I haven't had a chance to look closer at the ArgoCD config, but just a few comments.
@@ -12,5 +12,9 @@ terraform {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
    kubectl = {
      source = "gavinbunney/kubectl"
Why a secondary source of kubectl?
I removed the helm provider here - I accidentally left it in.
But to answer why I moved over to using this provider to deploy a kubectl_manifest:
hashicorp/terraform-provider-kubernetes#1367
The root of the problem is a race condition that exists when you are installing a CRD (Custom Resource Definition) and the CR (Custom Resource) in the same Terraform stack. These are ArgoCD-specific CRDs that require the ArgoCD helm chart to be installed first. The official HashiCorp Kubernetes provider requires that the CRD is already installed on the cluster during the plan stage, which has obvious problems here.
The gavinbunney/kubectl provider does not require the CRD to be installed during the plan stage, and it applies the CR to the cluster in the right order because I am using a depends_on relationship between the module that contains the CR and the ArgoCD module, which installs the CRD.
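A minimal sketch of that ordering, with hypothetical module and file names rather than the exact ones in this repo:
# Root module wiring (names are placeholders):
module "argo_cd" {
  # Installs the ArgoCD helm chart, which also installs the ArgoCD CRDs.
  source = "./modules/argo-cd"
}

module "app_deployment" {
  # Contains the kubectl_manifest resources (the CRs). depends_on guarantees
  # the CRDs exist on the cluster before these manifests are applied.
  source     = "./modules/app-deployment"
  depends_on = [module.argo_cd]
}

# Inside ./modules/app-deployment:
resource "kubectl_manifest" "application" {
  # gavinbunney/kubectl does not need the CRD to exist at plan time, unlike
  # the official provider's kubernetes_manifest resource.
  yaml_body = file("${path.module}/templates/application.yaml")
}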
Is this a custom yaml config?
This values.yaml comes from the ArgoCD helm chart:
https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd
It can also be generated with this helm command: https://helm.sh/docs/helm/helm_show_values/
Basically what is happening here is that I am pointing to this values.yaml file with this Terraform resource:
resource "helm_release" "argo-cd" {
name = "argo-cd"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
namespace = "argocd"
version = "7.4.3"
depends_on = [kubernetes_namespace.argocd]
values = [templatefile("${path.module}/templates/values.yaml", {})]
}
This is how we are configuring the specifics of how the helm chart is deployed to the cluster.
Then, for anything that ArgoCD is deploying, I am using the concept of multiple sources:
https://argo-cd.readthedocs.io/en/stable/user-guide/multiple_sources/
Multiple sources mean that any helm charts we are installing use a values.yaml file that we have created in our git repo. Look at the README.md in modules/apache-airflow for more specifics on how that is configured.
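To illustrate the pattern (the chart name, repo URLs, and paths below are hypothetical placeholders, not the ones in this repo), a multi-source Application points one source at the upstream helm repository and another at our git repo, exposed as $values for the value files:
resource "kubectl_manifest" "example_application" {
  # Hypothetical multi-source ArgoCD Application: the chart comes from the
  # upstream helm repository, while values.yaml is read from our git repo
  # through the "$values" reference.
  yaml_body = <<-YAML
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: example-app
      namespace: argocd
    spec:
      project: default
      destination:
        server: https://kubernetes.default.svc
        namespace: example-app
      sources:
        - repoURL: https://example.github.io/helm-charts # upstream chart repo (placeholder)
          chart: example-app
          targetRevision: 1.2.3
          helm:
            valueFiles:
              - $values/modules/example-app/templates/values.yaml
        - repoURL: https://github.com/example-org/example-repo.git # our git repo (placeholder)
          targetRevision: main
          ref: values
  YAML
}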
Eventually, if we have more specific templating needs to dynamically configure the values.yaml files, we'll want to implement something like kustomize: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/ - Since our use case is relatively simple for now, I am just pointing to a static yaml file.
Problem:
Solution:
Overview of deployment steps:
Testing:
1. Tested the deployment of the cluster to K8s and the deployment of the applications via ArgoCD