Document how Capsule integrates in an ArgoCD GitOps based environment #527
What we do in order to configure ArgoCD to work with Capsule and capsule-proxy:
where we specify that the ArgoCD service account should be created in the argocd-system namespace. And that's all :)
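For illustration, a minimal sketch of how the ArgoCD service account could be declared as a tenant owner is shown below, assuming Capsule's v1alpha1 Tenant API (the version used elsewhere in this thread); the tenant name is a placeholder, not part of the original comment:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: dev-team    # placeholder tenant name
spec:
  # ArgoCD acts as the tenant owner through its service account,
  # assumed here to live in the argocd-system namespace mentioned above
  owner:
    name: system:serviceaccount:argocd-system:argocd-server
    kind: ServiceAccount
```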
Am I wrong, or assigning With that setup, if I understood correctly, ArgoCD would deploy only the Tenant namespaces, while the other components would be managed by a different instance due to the
Yes and no :) We had a single instance of ArgoCD, which manages both the user applications and the cluster components (they are located in different ArgoCD projects).
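To make that split concrete, a hedged sketch of two ArgoCD AppProjects (one for user applications, one for cluster components) could look like the following; the project names, the namespace pattern, and the argocd-system install namespace are assumptions for illustration, not taken from the original comment:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: tenant-apps           # hypothetical project for user applications
  namespace: argocd-system    # assumed ArgoCD install namespace
spec:
  sourceRepos:
  - '*'
  destinations:
  # tenant applications may only target tenant namespaces
  - namespace: 'dev-team-*'
    server: https://kubernetes.default.svc
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: cluster-components    # hypothetical project for cluster components
  namespace: argocd-system
spec:
  sourceRepos:
  - '*'
  destinations:
  - namespace: '*'
    server: https://kubernetes.default.svc
  # cluster components need cluster-scoped resources (CRDs, webhooks, ...)
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
```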
Isn't this a chicken and egg problem? How do you create the
Hi all. How I understand it is that I can create a namespace as any user and use the label to link it to a Tenant. See the example YAML below:
Hoping someone can clarify how to link the namespace resource to the specific Tenant.
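For context, the kind of manifest being described here might look like the sketch below; the label key is an assumption (capsule.clastix.io/tenant is the label Capsule applies to tenant namespaces), and, as the replies below explain, the actual linkage is done by an ownerReference that Capsule's mutating webhook adds, not by the label on its own:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team-sandbox                  # hypothetical namespace name
  labels:
    capsule.clastix.io/tenant: dev-team   # assumed label key and placeholder tenant name
```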
Hi @krugerm-4c, have you added
I added that under the The big difference I can see is that the namespace created for a tenant via If I add the below metadata to the namespace YAML manually, it links up correctly:
But this seems like a very dirty way, as I would need to pull the Is there maybe something else I missed?
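For illustration, the manual metadata described above would be an ownerReference pointing at the Tenant, along the lines of this sketch; the tenant name and UID are placeholders that would have to be copied from the live Tenant object, which is exactly the manual step being criticised:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team-sandbox                          # hypothetical namespace name
  ownerReferences:
  - apiVersion: capsule.clastix.io/v1alpha1
    kind: Tenant
    name: dev-team                                # placeholder tenant name
    uid: 00000000-0000-0000-0000-000000000000     # placeholder; must match the live Tenant's UID
```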
@krugerm-4c no, the ownerReference is set by the Capsule mutating webhook that intercepts the Namespace creation calls. These calls are filtered if the user issuing them is part of the Capsule groups. I'd say something is not working properly there due to a misconfiguration. I know that @MaxFedotov is using that without any problem: it would be great if you could share the
See below the configuration I work with:
The git repository is a public one that I am using for a PoC. For context, it has a single YAML file to be synced:
There's a typo in the

```diff
 apiVersion: capsule.clastix.io/v1alpha1
 kind: CapsuleConfiguration
 metadata:
   annotations:
     capsule.clastix.io/enable-tls-configuration: 'true'
     capsule.clastix.io/mutating-webhook-configuration-name: capsule-mutating-webhook-configuration
     capsule.clastix.io/tls-secret-name: capsule-tls
     capsule.clastix.io/validating-webhook-configuration-name: capsule-validating-webhook-configuration
     meta.helm.sh/release-name: capsule
     meta.helm.sh/release-namespace: capsule-system
   name: default
 spec:
   forceTenantPrefix: false
   protectedNamespaceRegex: ''
   userGroups:
   - capsule.clastix.io
-  - system:serviceaccount:argocd:argocd-server
+  - system:serviceaccounts:argocd
```

Since we're talking about groups, you must specify the group, not the user. Let me know if this works for you.
That worked! The namespaces synced by ArgoCD are showing up on the Tenant custom resource. Thanks @prometherion. I looked back at the documentation and didn't see this part about the default CapsuleConfiguration relating to namespaces and service accounts.
One challenge that we are facing with ArgoCD is that it does not support the pre-delete hook used in Capsule. Is there an alternative for this?
@meetdpv unfortunately it seems to be missing on the ArgoCD side: argoproj/argo-cd#7575. I don't see this as such a blocking issue, since Capsule shouldn't be installed and uninstalled on a regular basis. Upon a Helm re-installation, these will be created at runtime by Capsule. Unless there's a specific situation, I don't see this preventing the usage of Capsule.
FluxCD is widely used along with Capsule, AFAIK.
Describe the feature
Document how Capsule integrates into an ArgoCD GitOps-based environment.
What would the new user story look like?
The cluster admin can learn how to configure an ArgoCD GitOps-based environment with Capsule.
Expected behaviour
A detailed user guide is provided in the documentation.