feat: supporting OwnerReferencesPermissionEnforcement admission controller #776
Conversation
It looks like the proposed change will fix the issue, but I wonder if the additional cluster-level RBAC granted to standard users could introduce some risk.
I would also like to see some tests added, to ensure this fix does not just "disappear".
I would also suggest merging this RBAC controller for tenants with the standard RBAC granted to tenant users in a follow-up PR; that would make the permissions granted to namespaces more explicit. I haven't dug deeply into the code, but I expect a webhook enforcing update/delete permissions on namespaces, and this could be expressed more clearly in a composite ClusterRole granted to tenant owners. It would also ensure cleanup of cluster resources after uninstalling Capsule, which is currently implemented as a hackish post-delete Helm Job.
```go
func (o OwnerReferencesPermissionEnforcement) resourceName(tnt *capsulev1beta2.Tenant) string {
	return fmt.Sprintf("%s-orpe", tnt.Name)
}
```
```diff
-	return fmt.Sprintf("%s-orpe", tnt.Name)
+	return fmt.Sprintf("capsule-tenant:%s-orpe", tnt.Name)
```
I suggest adding more context to the resource names to ensure they don't collide with other resources in the cluster. This will also make it easier for a cluster-admin to see where the ClusterRole and ClusterRoleBinding originate from.
Thanks for the suggestion; however, I would suggest the pattern `capsule:%s:orpe`.
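For illustration, a minimal sketch of that naming pattern, with a stubbed `Tenant` type standing in for `capsulev1beta2.Tenant` and a hypothetical tenant name `oil`:

```go
package main

import "fmt"

// Tenant is a stand-in for capsulev1beta2.Tenant, reduced to the only field used here.
type Tenant struct {
	Name string
}

// resourceName sketches the proposed "capsule:%s:orpe" naming pattern,
// prefixing the generated ClusterRole/ClusterRoleBinding names to avoid
// collisions with unrelated cluster resources.
func resourceName(tnt *Tenant) string {
	return fmt.Sprintf("capsule:%s:orpe", tnt.Name)
}

func main() {
	fmt.Println(resourceName(&Tenant{Name: "oil"})) // prints "capsule:oil:orpe"
}
```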
```go
crb.RoleRef = rbacv1.RoleRef{
	APIGroup: rbacv1.SchemeGroupVersion.Group,
	Kind:     "ClusterRole",
	Name:     tnt.Name,
}
```
```diff
-	Name: tnt.Name,
+	Name: o.resourceName(tnt),
```
@erikgb I marked the PR ready for review and added a new CLI flag; for backward-compatibility reasons, its default value keeps the previous behaviour. May I ask you to give it a try, please?
The right is granted only on the Tenants owned by the tenant owner. I would prefer a concrete example of a potential exploit rather than speculation.
This is not required: the cluster-scoped resources, such as the ClusterRole and ClusterRoleBinding, carry a controller reference to the Tenant resource, so their deletion is propagated automatically.
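For context, the garbage collection described above relies on the standard Kubernetes `ownerReferences` mechanism. A sketch of how the metadata of such a generated object might look (names and UID are illustrative, not the controller's verbatim output):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oil-orpe
  ownerReferences:
    - apiVersion: capsule.clastix.io/v1beta2
      kind: Tenant
      name: oil
      uid: 5d9ae4ab-0000-0000-0000-000000000000  # illustrative
      controller: true
      blockOwnerDeletion: true
```

When the Tenant is deleted, the garbage collector removes any object holding such an owner reference to it, which is why no post-delete cleanup Job is needed for these resources.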
@prometherion I could try to test this PR in one of our OpenShift clusters, or in one of our other clusters with the Kubernetes OwnerReferencesPermissionEnforcement admission controller enabled. Are images published from pull requests? If not, could you kindly push an image I can use for the test?
Closes #773.
Let's say I have this Tenant definition:
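A minimal example of such a Tenant, assuming a single owner named `alice` (all names illustrative):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
    - name: alice
      kind: User
```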
As a result, the new controller will reconcile the following resources:
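As a sketch, and assuming a Tenant named `oil` owned by User `alice`, the reconciled ClusterRole and ClusterRoleBinding might look roughly like this. The OwnerReferencesPermissionEnforcement admission controller requires `update` on the owner's `finalizers` subresource, which is what the rule grants; names follow the `%s-orpe` pattern discussed above, and the exact output of the controller may differ:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oil-orpe
rules:
  - apiGroups: ["capsule.clastix.io"]
    resources: ["tenants/finalizers"]
    resourceNames: ["oil"]
    verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oil-orpe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oil-orpe
subjects:
  - kind: User
    name: alice
```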
If the Tenant is updated with different users, the resulting `ClusterRoleBinding` will reflect the new owners. As a result, `alice` will be allowed to patch Tenant finalizers.