In the aftermath of Reddit's Pi Day outage and the subsequent post detailing the cause, we could not help but notice that we were among the infrastructure components mentioned.
While the root cause was not Calico’s fault, the thread brought a few things to light:
- There are a few remaining rough edges related to Calico’s configuration model.
- We haven’t communicated some of the configuration improvements we have already made, and our docs don’t reflect the state of the art in this area.
This thread is intended to provide clarity on the above, and discuss any other suggestions on how we can continue to provide the best experience with Calico.
Here are some of the learnings from that thread:
- A lot of users aren’t aware that kubectl can be used to manage projectcalico.org/v3 resources, and are still using calicoctl unnecessarily (see the example after this list).
- Our docs don’t explain how to use declarative configuration with kubectl in all cases.
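For example, with the Calico API server installed, standard kubectl verbs work against projectcalico.org/v3 resources directly. The commands below are a minimal sketch using well-known Calico resource types:

```sh
# With the Calico API server running, projectcalico.org/v3 resources
# behave like any other Kubernetes resource.
kubectl get ippools.projectcalico.org
kubectl get bgpconfigurations.projectcalico.org

# Edit the cluster-wide Felix settings in place (the default
# FelixConfiguration resource is named "default").
kubectl edit felixconfiguration default
```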
Current State
Our goal is to provide the ability to configure Calico entirely from declarative configuration. Nothing should require bespoke configuration per-cluster.
For most of our resources, we have our own Kubernetes aggregated API server that exposes Calico configuration as part of the Kubernetes API. This means that Calico resources can be manipulated in the same way as other Kubernetes resources through kubectl. We also provide a Go client for anyone looking to integrate from their own Go components.
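As an illustration of that declarative flow, here is a minimal sketch of a Calico IPPool expressed as ordinary Kubernetes YAML; the pool name and CIDR are placeholders:

```yaml
# example-ippool.yaml -- a Calico IPPool managed entirely through kubectl.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: example-pool
spec:
  cidr: 10.42.0.0/16   # placeholder CIDR; choose one that fits your cluster
  natOutgoing: true    # SNAT traffic leaving the pool
```

Applied with `kubectl apply -f example-ippool.yaml`, this can live in version control alongside the rest of a cluster’s configuration.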
Other pieces of configuration can be set directly on the Kubernetes Node. However, our documentation still uses our CLI tool, calicoctl, to do this and so needs to be updated. For instance, our route reflector docs, as of writing, direct users to use calicoctl to annotate nodes with the route reflector configuration. This makes calicoctl yet another dependency for teams managing infrastructure, which creates friction in managing the configuration of vital components.
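For comparison, the kubectl-based equivalent looks roughly like the sketch below. The annotation and label follow the updated route reflector docs; confirm them against the Calico release you run:

```sh
# Sketch: designate a node as a route reflector using kubectl only.
# The RouteReflectorClusterID annotation key comes from the Calico
# route reflector docs; "my-node" and the label value are placeholders.
kubectl annotate node my-node projectcalico.org/RouteReflectorClusterID=244.0.0.1
kubectl label node my-node route-reflector=true
```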
Moving Forward
The difficulties around maintaining networking configuration have not been lost on us. We have already been exploring changes to improve the Calico configuration experience, mostly by making more and more of Calico’s configuration accessible via the Kubernetes API. For instance, we have added validation guardrails when setting the configuration for route reflectors, so that only valid changes made through the more convenient Kubernetes API take effect. A summary of those changes can be found in this PR. We also have a documentation PR to direct users to the Kubernetes API instead of calicoctl to avoid future friction. These route reflector configuration changes are expected to become widely available in the following releases:
- v3.26.0
- v3.25.1
- v3.24.6
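To show what the Kubernetes-API-driven route reflector setup looks like end to end, here is a hedged sketch of the declarative peering half: a BGPPeer that points ordinary nodes at reflector-labelled nodes. The label key must match whatever label you applied to the reflector nodes:

```yaml
# bgppeer-rr.yaml -- peer every node with the nodes labelled as reflectors.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-with-route-reflectors
spec:
  nodeSelector: all()                      # applies to every node
  peerSelector: route-reflector == 'true'  # matches reflector-labelled nodes
```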
Though we have the above-mentioned changes planned to roll out, we can always do better. Let’s use this issue to collect feedback on configuring Calico. What are some pain points you all are hitting? Any thoughts on how to make the documentation better? Please let us know on this thread and we can discuss the best path forward.
Appreciate the discussion 😄
During any updates to the chart and/or operator, it would be useful both to document and to include options for configuring the available securityContexts of the various components. Per #7282