expose cluster load balancer spec and metadata via KubevirtCluster resource #80
Conversation
Pull Request Test Coverage Report for Build 1781479533
💛 - Coveralls
/ok-to-test
One recommendation.
// Service metadata.
ObjectMeta metav1.ObjectMeta `json:"metadata,omitempty"`

// Service specification.
Spec corev1.ServiceSpec `json:"spec,omitempty" valid:"required"`
Do you think it would make sense to introduce a `ServiceSpecTemplate` here? Otherwise it is pretty hard to understand which fields are considered. It seems to only be the `Type` right now, right? API validation would let all kinds of fields through that are ultimately ignored.
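For illustration, such a narrowed template might look roughly like this (a sketch; the type name and field set are assumptions, since only the service type seems to be honored today):

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// ServiceSpecTemplate narrows corev1.ServiceSpec down to the fields the
// controller actually honors, so the API server rejects anything else by
// construction. (An illustrative sketch, not this PR's exact code.)
type ServiceSpecTemplate struct {
	// Type of the Service to create, e.g. ClusterIP, NodePort, or
	// LoadBalancer. Leaving it empty keeps the default.
	// +optional
	Type corev1.ServiceType `json:"type,omitempty"`
}
```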
yes, that makes sense. originally, I thought we could expose the entire service spec to users (and in theory that could also work; we could set the selector and ports in the template, too), but at the same time we might not want to complicate the `KubevirtCluster` spec too much. so, for now I just exposed the service type and the service metadata, with the potential to expose other parts of the service spec in the future if needed.
but agree, for now we can just have our own `ServiceSpecTemplate`, to keep things explicit 👍
also, note that the service spec is optional in the `KubevirtCluster` object, and by default we fall back to the `ClusterIP` type as before. the main use case for now is being able to specify a different type (e.g. `LoadBalancer`), as well as setting labels and annotations for the service.
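the fallback itself is simple; roughly like this (a hypothetical helper reusing the `ServiceSpecTemplate` sketch above, not the PR's actual code):

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// serviceType illustrates the fallback described above: an unset
// template type keeps the previous ClusterIP behavior.
func serviceType(tpl ServiceSpecTemplate) corev1.ServiceType {
	if tpl.Type == "" {
		return corev1.ServiceTypeClusterIP
	}
	return tpl.Type
}
```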
@rmohr addressed your recommendation, please take another look
One comment which may help users understand what exactly they configure with this new spec. The implementation itself looks good to me!
// LoadBalancerServiceTemplate describes load balancer service template.
// +optional
LoadBalancerServiceTemplate LoadBalancerServiceTemplate `json:"loadBalancerServiceTemplate,omitempty"`
It may make sense to explicitly state what exactly gets routed over this service, like apiserver traffic, ingress, ...
Maybe it makes sense to not call this `LoadBalancer...`, since in fact we can also choose non-load-balancer service types, and instead use a name which hints at what exactly is routed over this service.
agree, will update. historically, we called it a "load balancer" in the sense of it being a load balancer for the control plane nodes, even though as far as the implementation goes it's a simple service. maybe it makes more sense to name it "control plane service" in the spec to be generic, and add more details in the API docs as you suggest 👍
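under that rename, the spec could read something like this (a sketch with illustrative names; the final naming is still open):

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ServiceTemplate pairs the exposed Service metadata with the narrowed
// spec template sketched earlier. (Names here are illustrative.)
type ServiceTemplate struct {
	// Labels and annotations for the generated Service.
	ObjectMeta metav1.ObjectMeta `json:"metadata,omitempty"`
	// Narrowed spec; currently only the service type is honored.
	Spec ServiceSpecTemplate `json:"spec,omitempty"`
}

// KubevirtClusterSpec sketch showing the rename under discussion.
type KubevirtClusterSpec struct {
	// ControlPlaneServiceTemplate can be used to modify the Service that
	// fronts the control plane nodes. By default a ClusterIP Service is
	// created; other types such as LoadBalancer can be requested instead.
	// +optional
	ControlPlaneServiceTemplate ServiceTemplate `json:"controlPlaneServiceTemplate,omitempty"`
}
```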
here is a retriggered e2e test which is green: https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubernetes-sigs_cluster-api-provider-kubevirt/80/pull-kubernetes-sigs-cluster-api-provider-kubevirt-e2e/1488464929697566720
We merged kubevirt/project-infra#1885 in KubeVirt's CI system, which hopefully fixes the GOPROXY issues from now on.
@rmohr addressed, please take another look
…source Signed-off-by: Alex Gradouski <[email protected]>
/lgtm
Looks great!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: agradouski, rmohr

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
What this PR does / why we need it:
This PR enables exposing the cluster load balancer spec via the `KubevirtCluster` object. This allows users to specify a different type of service for the cluster load balancer, as opposed to it always being hardcoded to `ClusterIP`. The `cluster-template` has an example of how to specify a service type, but it's optional. By default, the `ClusterIP` service type is used.
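For example, opting into a `LoadBalancer` with custom annotations could look roughly like the following (a Go sketch of the described behavior using a stand-in helper, not the provider's actual code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildService is a hypothetical illustration of the behavior described
// above: user-supplied metadata and service type are copied onto the
// generated load balancer Service, defaulting to ClusterIP when unset.
func buildService(meta metav1.ObjectMeta, svcType corev1.ServiceType) *corev1.Service {
	if svcType == "" {
		svcType = corev1.ServiceTypeClusterIP
	}
	return &corev1.Service{
		ObjectMeta: meta,
		Spec:       corev1.ServiceSpec{Type: svcType},
	}
}

func main() {
	svc := buildService(metav1.ObjectMeta{
		Annotations: map[string]string{"example.com/lb-note": "internal"},
	}, corev1.ServiceTypeLoadBalancer)
	fmt.Println(svc.Spec.Type) // LoadBalancer
}
```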
Which issue this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged): fixes #64

Release notes: