
RFC: v1alpha2 API #293

Open
65278 opened this issue Oct 2, 2024 · 12 comments
Labels
enhancement New feature or request kind/feature
Comments

@65278
Collaborator

65278 commented Oct 2, 2024

v1alpha1 has problems in the networking department which will require breaking changes.

Specifically, ProxmoxMachines split the default network device from additional network devices for no reason I can discern. This duplicates code paths everywhere and actually removes possibilities: because of how network interfaces are structured in the API, the default device lacks options that the other interfaces have.
Beyond that, the split of IPv4 and IPv6 pools is arbitrary; we can simply have an array of IPPools instead. This also allows multiple IPPools per interface, a feature we cannot support yet. We may also want to support network interfaces without any IPPools attached (DHCP, or BGP exclusively).

For now I'd change the API of ProxmoxMachines in the following way:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxMachineTemplate
metadata: 
  name: test-cluster-control-plane
  namespace: default
spec:   
  template:
    spec:
      disks:
        bootVolume:
          disk: scsi0
          sizeGb: 100
      format: qcow2
      full: true
      memoryMiB: 4096
      network:
        networkDevices:
        - bridge: vmbr128
          model: virtio
          mtu: 9000
        - bridge: vmbr129
          dnsServers:
          - 10.4.1.1
          ipPoolRef:
          - apiGroup: ipam.cluster.x-k8s.io
            kind: GlobalInClusterIPPool
            name: shared-int-service-v4-inclusterippool
          - apiGroup: ipam.cluster.x-k8s.io
            kind: GlobalInClusterIPPool
            name: shared-int-service-v6-inclusterippool
          model: virtio
          name: net1
          linkMtu: 9000
      numCores: 4
      numSockets: 2
      sourceNode: proxmox1
      templateID: 100

I would resolve the conflict around the first network device by prepending the InClusterIPPool to the ipPoolRef array. There's no need for the first device to be called net0; we can make the webhook add names automatically based on each device's position in the array.
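As a sketch of what that defaulting-webhook naming could look like (the type and function names here are hypothetical stand-ins, not the provider's actual API):

```go
package main

import "fmt"

// NetworkDevice is a simplified stand-in for the proposed v1alpha2
// device spec; the real type would live in the provider's API package.
type NetworkDevice struct {
	Name   string
	Bridge string
}

// defaultDeviceNames fills empty Name fields based on position in the
// array, so an unnamed first device becomes net0, the second net1, etc.
func defaultDeviceNames(devices []NetworkDevice) {
	for i := range devices {
		if devices[i].Name == "" {
			devices[i].Name = fmt.Sprintf("net%d", i)
		}
	}
}

func main() {
	devices := []NetworkDevice{
		{Bridge: "vmbr128"},               // no name: defaulted to net0
		{Bridge: "vmbr129", Name: "net1"}, // explicitly named: kept as-is
	}
	defaultDeviceNames(devices)
	fmt.Println(devices[0].Name, devices[1].Name) // net0 net1
}
```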

Regarding ProxmoxCluster, I'd leave the API as is, because a Cluster can only have an IPv4 and/or an IPv6 pool. We may want to add DHCP there eventually, but that's not a breaking change as far as I can see. We can discuss turning IPv4Pool and IPv6Pool into an actual pool reference that we supply with the cluster, which I think is the better choice, but this would be breakage for breakage's sake (we only gain better validation on the to-be-created pool).

Anything else you would like to add:
If you have any more things you'd like to change about the API, let's coordinate here. Feedback is very welcome.

@65278 65278 added enhancement New feature or request kind/feature labels Oct 2, 2024
@mcbenjemaa
Member

Proposal

network:
  networkDevices:
    - bridge: vmbr128
      model: virtio
      mtu: 9000
    - bridge: vmbr129
      model: virtio
      name: net1
      linkMtu: 9000
      dnsServers:
        - 10.4.1.1
      ipPoolRef:
        - apiGroup: ipam.cluster.x-k8s.io
          kind: GlobalInClusterIPPool
          name: shared-int-service-v4-inclusterippool
        - apiGroup: ipam.cluster.x-k8s.io
          kind: GlobalInClusterIPPool
          name: shared-int-service-v6-inclusterippool

Here are my remarks:

I liked the way you restructured the schema for networkSpec; it's more convenient.
I have some remarks regarding the IPPoolRefs:
You wanted to change it into a list, that's okay:

  • But how do we know which IPPool is for IPv4 and which for IPv6?
  • Changing IPPoolRef to a list is a good choice, but it also means substantial refactoring of the current code, especially for rendering cloud-init, bootstrap data, and getting the machine addresses.
  • Given the remark above, you will also need more effort to refactor existing tests and create new ones.
  • Finally, I would assume the conversion webhook places the old IPv4 and IPv6 pools into the list.
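A minimal sketch of what that conversion step could look like, assuming simplified stand-in types (the real ones would live in the provider's API packages, and the field names here are hypothetical):

```go
package main

import "fmt"

// PoolRef is a simplified stand-in for a typed pool reference.
type PoolRef struct {
	APIGroup, Kind, Name string
}

// OldDevice models the v1alpha1 shape with split v4/v6 pool fields.
type OldDevice struct {
	IPv4PoolRef *PoolRef
	IPv6PoolRef *PoolRef
}

// convertPoolRefs builds the proposed v1alpha2 ipPoolRef list from the
// old split fields, placing the IPv4 pool (if set) before the IPv6 one.
func convertPoolRefs(old OldDevice) []PoolRef {
	var refs []PoolRef
	if old.IPv4PoolRef != nil {
		refs = append(refs, *old.IPv4PoolRef)
	}
	if old.IPv6PoolRef != nil {
		refs = append(refs, *old.IPv6PoolRef)
	}
	return refs
}

func main() {
	old := OldDevice{
		IPv4PoolRef: &PoolRef{"ipam.cluster.x-k8s.io", "GlobalInClusterIPPool", "v4-pool"},
		IPv6PoolRef: &PoolRef{"ipam.cluster.x-k8s.io", "GlobalInClusterIPPool", "v6-pool"},
	}
	for _, r := range convertPoolRefs(old) {
		fmt.Println(r.Kind, r.Name)
	}
}
```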

Conclusion
I'm okay with this API change, but I'm a bit concerned about the effort required to turn IPPoolRef into a list; some clarification on that would be great.

@65278
Collaborator Author

65278 commented Oct 8, 2024

  1. We don't need to know which IPPool is for IPv4 and which is for IPv6. That data is essentially useless (where is it used?).
  2. The refactor makes the code much easier to maintain. At the moment it's a spaghetti nightmare, which makes implementing features like DHCP hard.
  3. Very true
  4. That's the plan.

@mcbenjemaa
Member

  1. We don't need to know which IPPool is for IPv4 and which is for IPv6. That data is essentially useless (where is it used?).
  2. The refactor makes the code much easier to maintain. At the moment it's a spaghetti nightmare, which makes implementing features like DHCP hard.
  3. Very true
  4. That's the plan.

Alright, good to know.
I have talked to Maurice regarding the effort we are going to put into this change, and he approved it, which makes me happy.

I think we need to set up a meeting to figure out how I can assist, how we can split up the work, and how to do the whole refactoring.
We also need to get people involved; specifically, Jan is interested.

@3deep5me
Contributor

3deep5me commented Oct 8, 2024

I would also like to participate in a meeting, though, as here, more as a silent listener.

@65278
Collaborator Author

65278 commented Oct 8, 2024

I've thought about it, here's the plan:

  1. We make a new branch, v1alpha2. All changes get merged in there eventually (in case we want to do more changes to the API)
  2. I do the networking API changes, these are vast and don't subdivide well (they break stuff all over the place)
  3. Jan can write the migrations from v1alpha1 to v1alpha2 once I'm done getting the controller working with the new API
  4. Mohamed does the code review

@wikkyk wikkyk added this to the v0.6.0 milestone Oct 8, 2024
@wikkyk
Collaborator

wikkyk commented Oct 8, 2024

We could just merge to main, 0.6.0 won't be released until we're happy with v1alpha2 anyway.

@mcbenjemaa
Member

I've thought about it, here's the plan:

  1. We make a new branch, v1alpha2. All changes get merged in there eventually (in case we want to do more changes to the API)
  2. I do the networking API changes, these are vast and don't subdivide well (they break stuff all over the place)
  3. Jan can write the migrations from v1alpha1 to v1alpha2 once I'm done getting the controller working with the new API
  4. Mohamed does the code review

It's good, but we still need to be aware of some dependent pieces.
I think we need tasks for:

  • v1alpha2 CRD
  • Conversion webhook
  • Validation webhook
  • Cloud-init changes to accept the new spec
  • Controller logic changes to reconcile the networkSpec
  • Adjusting reconcileGetMachineAddresses

If you agree, we can create a GitHub issue for each.
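For the validation-webhook task, a minimal sketch of the kind of check it might perform (all names here are hypothetical stand-ins, and the set of allowed pool kinds is an assumption):

```go
package main

import "fmt"

// PoolRef and NetworkDevice are simplified stand-ins for the proposed
// v1alpha2 types, used only to illustrate the shape of the validation.
type PoolRef struct {
	APIGroup, Kind, Name string
}

type NetworkDevice struct {
	Bridge    string
	IPPoolRef []PoolRef
}

// validateNetworkDevices rejects devices without a bridge and pool
// references whose kind is not one of the supported IPAM pool kinds.
func validateNetworkDevices(devices []NetworkDevice) error {
	for i, d := range devices {
		if d.Bridge == "" {
			return fmt.Errorf("device %d: bridge is required", i)
		}
		for _, ref := range d.IPPoolRef {
			switch ref.Kind {
			case "InClusterIPPool", "GlobalInClusterIPPool":
				// supported kinds (assumed set)
			default:
				return fmt.Errorf("device %d: unsupported pool kind %q", i, ref.Kind)
			}
		}
	}
	return nil
}

func main() {
	ok := []NetworkDevice{{Bridge: "vmbr128"}}
	fmt.Println(validateNetworkDevices(ok))        // <nil>
	bad := []NetworkDevice{{}}
	fmt.Println(validateNetworkDevices(bad) != nil) // true
}
```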

@wikkyk
Collaborator

wikkyk commented Oct 23, 2024

ref #304

@lknite

lknite commented Oct 24, 2024

per #304

  • let's be sure to add automatic setting of the VIP in addition to the VM IPs

@lknite

lknite commented Oct 24, 2024

Maybe now would also be a good time to add a 'kubernetesrelease' CRD, #312

@wikkyk
Collaborator

wikkyk commented Nov 5, 2024

ref #105 #53/#29

@wikkyk
Collaborator

wikkyk commented Nov 5, 2024

ref #16
