Cleanup left-over iptables rules from kubeproxy and cilium #788
Conversation
Do you think it's useful to back up the original output in /var/snap before overwriting, in case we mess up someone's iptables? Also, how sure are you about the assumption that every rule created by cilium or kubernetes contains the keywords you defined?
Great work! Thanks a lot Berkay! Overall LGTM, just some minor comments and questions.
+1 on the backup suggestion by Etienne.
```go
for i, line := range lines {
	for _, word := range []string{"cilium", "kube", "CILIUM", "KUBE"} {
		if strings.Contains(line, word) {
			lines[i] = ""
		}
	}
}
```
I'm a bit worried that this might mess with other rules in iptables that we don't want to remove. What if we have a rule containing "kubernetes" that isn't related to cilium? Is this scenario even possible?
I've adjusted it to let kube-proxy clean up the necessary rules. The x-cleanup command is only called on removal, so we won't be nuking things while they are in use. As for affecting other rules, that could only happen if multiple "k8s"s were running on the same node, which is unsupported.
As for backing up the initial state, I believe this is not feasible/practical, and we can't save it to any snap location since this command is called on snap remove. I've minimized the risks; if something goes wrong, a restart can always fix things since we are not persisting the rules.
Thanks a lot @berkayoz! LGTM!
cilium-dbg seems to clean up interfaces, BPF programs, and network namespaces properly. It looks like we are leaking some iptables rules, so this adds a cleanup step for left-over iptables rules from kube-proxy and cilium.
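The keyword-filtering approach discussed above can be sketched as a standalone function (the function name, the surrounding plumbing, and the sample rules are illustrative assumptions, not the PR's actual code): it blanks out every line of `iptables-save` output that mentions cilium or kube, leaving unrelated rules intact, before the result would be fed back through `iptables-restore`.

```go
package main

import (
	"fmt"
	"strings"
)

// stripCNIRules blanks every line of iptables-save output that mentions
// cilium or kube (upper- or lower-case), leaving all other rules untouched.
// Hypothetical helper for illustration only.
func stripCNIRules(save string) string {
	lines := strings.Split(save, "\n")
	for i, line := range lines {
		for _, word := range []string{"cilium", "kube", "CILIUM", "KUBE"} {
			if strings.Contains(line, word) {
				lines[i] = ""
			}
		}
	}
	return strings.Join(lines, "\n")
}

func main() {
	// Sample iptables-save fragment: two CNI-created rules and one unrelated rule.
	in := "-A FORWARD -j CILIUM_FORWARD\n-A INPUT -j KUBE-FIREWALL\n-A INPUT -j ACCEPT"
	fmt.Println(stripCNIRules(in))
}
```

Note that substring matching is deliberately broad, which is exactly the risk raised in the review: any unrelated rule whose comment or chain name happens to contain one of the keywords would also be dropped.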