Support DHCP #29

Open · mcbenjemaa opened this issue Dec 14, 2023 · 9 comments
Labels: enhancement (New feature or request), kind/feature

@mcbenjemaa (Member) commented Dec 14, 2023

Describe the solution you'd like

Our current setup is based on IPAM and static IP allocation.
Since the QEMU guest agent is pre-installed, we can support DHCP in the network config.

Anything else you would like to add:

PLEASE NOTE: since we rely on kube-vip for control planes, the CONTROL_PLANE_ENDPOINT must remain static and must be set when creating a new cluster.

Environment:

  • Cluster-api-provider-proxmox version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):
@mcbenjemaa mcbenjemaa added the "enhancement (New feature or request)" label Dec 14, 2023
@wikkyk wikkyk changed the title from "Suport DHCP" to "Support DHCP" Dec 14, 2023
@wikkyk (Collaborator) commented Dec 14, 2023

This could be a good first candidate feature to port from https://github.com/k8s-proxmox/cluster-api-provider-proxmox.

@mcbenjemaa (Member, Author)

Proposal

Since introducing a new field can be a breaking change, we probably need to introduce a new API version.

There are three proposals:

(1) v1alpha2 with DHCP option

In this case, v1alpha2 will be the default and the storage version.
Adding:

  • spec.ipv4Config.dhcp
  • spec.ipv6Config.dhcp

The IPAM pool config will now be wrapped into static.

The conversion webhook will convert v1alpha1 resources.
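
For illustration, a minimal sketch of what that conversion could do with the ipv4 config, using simplified stand-in types instead of the real v1alpha1/v1alpha2 API types (all names here are assumptions, not the final API):

package conversion

// Stand-in for the v1alpha1 shape, where the pool config sits directly under ipv4Config.
type IPv4ConfigV1Alpha1 struct {
	Addresses []string
	Prefix    int
	Gateway   string
}

// Stand-ins for the v1alpha2 shape, where the pool config is wrapped into "static"
// next to a dhcp flag.
type IPPool struct {
	Addresses []string
	Prefix    int
	Gateway   string
}

type IPConfigV1Alpha2 struct {
	DHCP   bool
	Static *IPPool
}

// convertIPv4Config wraps a v1alpha1 pool config into the v1alpha2 static field.
// A v1alpha1 resource can never have DHCP enabled, so dhcp stays false.
func convertIPv4Config(in *IPv4ConfigV1Alpha1) *IPConfigV1Alpha2 {
	if in == nil {
		return nil
	}
	return &IPConfigV1Alpha2{
		DHCP: false,
		Static: &IPPool{
			Addresses: in.Addresses,
			Prefix:    in.Prefix,
			Gateway:   in.Gateway,
		},
	}
}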

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  ipv4Config:
    dhcp: true
  dnsServers: [10.1.1.1]
---
## OR

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  ipv4Config:
    static:
      addresses: ["10.10.10.5-10.10.10.60"]
      prefix: 24
      gateway: 10.10.10.1
  dnsServers: [10.1.1.1]

Apart from the changes in the ProxmoxCluster, new fields can also be introduced on the machines.

kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
metadata:
  name: "test-control-plane"
spec:
  template:
    spec:
      sourceNode: "pve"
      templateID: 100
      format: "qcow2"
      full: true
      network:
        default:
          bridge: vmbr1
          model: virtio
          dhcp4: true  # This will force the machine to use DHCP instead of the cluster static config
          dhcp6: true  # This will cause the machine to use DHCP instead of the cluster static config
        additionalDevices:
        - name: net1
          bridge: vmbr2
          model: virtio
          dhcp4: true  # Either DHCP or an IPAM pool config must be defined
          dhcp6: true  # Either DHCP or an IPAM pool config must be defined
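
For reference, a rough Go sketch of how those dhcp4/dhcp6 flags could look on the machine's network device type; the struct and field names here are illustrative assumptions, not the final API:

package v1alpha2

// NetworkDevice is an illustrative sketch of a machine NIC with DHCP flags.
type NetworkDevice struct {
	// Bridge is the Proxmox network bridge to attach the device to, e.g. vmbr1.
	Bridge string `json:"bridge"`

	// Model is the virtual NIC model, e.g. virtio.
	// +optional
	Model string `json:"model,omitempty"`

	// DHCP4 enables IPv4 DHCP on this device instead of the cluster static config.
	// Either DHCP4 or an IPv4 IPAM pool reference must be configured.
	// +optional
	DHCP4 bool `json:"dhcp4,omitempty"`

	// DHCP6 enables IPv6 DHCP on this device instead of the cluster static config.
	// Either DHCP6 or an IPv6 IPAM pool reference must be configured.
	// +optional
	DHCP6 bool `json:"dhcp6,omitempty"`
}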

(2) v1alpha1 with DHCP option

In this case the API remains backward compatible and there is no need to introduce v1alpha2.

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  dhcp4: true # replaces ipv4config
  dhcp6: true  # replaces ipv6config
  dnsServers: [10.1.1.1]

(3) machine v1alpha1 with DHCP option

In this case, the DHCP option is only available on the machine, while the cluster's IP pool config becomes optional.
This is also backward compatible.

kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
metadata:
  name: "test-control-plane"
spec:
  template:
    spec:
      sourceNode: "pve"
      templateID: 100
      format: "qcow2"
      full: true
      network:
        default:
          bridge: vmbr1
          model: virtio
          dhcp4: true  # This will force the machine to use DHCP instead of the cluster static config
          dhcp6: true  # This will cause the machine to use DHCP instead of the cluster static config
        additionalDevices:
        - name: net1
          bridge: vmbr2
          model: virtio
          dhcp4: true  # Either DHCP or an IPAM pool config must be defined
          dhcp6: true  # Either DHCP or an IPAM pool config must be defined

Please share any suggestions regarding the implementation.

This requires your attention, @avorima.

@mcbenjemaa mcbenjemaa self-assigned this Dec 18, 2023
@pborn-ionos (Contributor)

Just a thought: a user might want control-plane nodes addressed with static IPs, while at the same time not wanting to worry about worker nodes and simply using DHCP there. Or they might just want to enable DHCP globally for everything.

So Option 1 might be my favorite.

@avorima (Collaborator) commented Dec 18, 2023

I think the first option of extending the cluster ipv4/6Config fields makes the most semantic sense, but I wouldn't use the word static. It's IPAM after all, so it's not the same as providing static config. Maybe just create a new struct with all the fields from the InClusterIPPoolSpec and add a dhcp boolean field. This way you could also get rid of the ipam provider dependency in the api package.

As for the machines, adding the dhcp4/6 field to the Network struct is probably better than relying on the absence of data. There are a few ways of handling defaults here, so it may be desirable to force users to either enable DHCP or provide an IPAM ref, especially considering we support dual stack. This wouldn't necessarily be a breaking change, since right now the IPAM refs are required in the additional network devices.

Another option would be to use the cluster's IP config as a template for defaults, i.e. if the cluster is dual-stack, the machine uses DHCP dual-stack for additional NICs.

Here are a few permutations of cluster IP config and machine additional network devices to consider:

  1. cluster none, machine none
  2. cluster ipam, machine none
  3. cluster dhcp, machine none
  4. cluster none, machine ipam
  5. cluster none, machine dhcp

These are meant to be taken as the config given at the point of resource creation, i.e. before defaults are applied. I think the first option should probably be invalid, while the others could default to using DHCP for IPv4/6 or dual-stack.
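
To make the defaulting concrete, here is a rough sketch of how a defaulting webhook could treat those permutations, ignoring dual-stack for brevity; the types and field names are simplified stand-ins for illustration, not the real API:

package webhook

import "errors"

// Simplified stand-ins for the relevant parts of the cluster and machine config.
type ClusterIPConfig struct {
	IPAM bool // an IPAM pool is configured on the cluster
	DHCP bool // DHCP is enabled on the cluster
}

type MachineNetConfig struct {
	IPAMRef bool // the additional device references an IPAM pool
	DHCP    bool // the additional device has DHCP enabled
}

// defaultMachineNet fills in an additional network device's config from the
// cluster config, following the permutations listed above.
func defaultMachineNet(cluster ClusterIPConfig, device *MachineNetConfig) error {
	switch {
	case device.IPAMRef || device.DHCP:
		// Permutations 4 and 5: explicit machine config wins, nothing to default.
		return nil
	case cluster.DHCP:
		// Permutation 3: the cluster uses DHCP, so the device inherits it.
		device.DHCP = true
		return nil
	case cluster.IPAM:
		// Permutation 2: the cluster has an IPAM pool; defaulting the device to
		// DHCP is one option, inheriting the cluster pool would be another.
		device.DHCP = true
		return nil
	default:
		// Permutation 1: neither the cluster nor the device has any config;
		// this should probably be rejected as invalid.
		return errors.New("either dhcp or an IPAM pool reference must be configured")
	}
}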

@mcbenjemaa (Member, Author)

We could have this in Go and probably get rid of *ipamicv1.InClusterIPPoolSpec, as you stated.
Or embed it into the IPConfig.

// IPConfig contains information about available IP config.
type IPConfig struct {
	// DHCP indicates whether DHCP is enabled for the machines.
	// DHCP is mutually exclusive with the static addresses.
	// If DHCP is true, Static must not be set.
	// +optional
	// +kubebuilder:default=false
	DHCP bool `json:"dhcp,omitempty"`

	// Static contains information about available static IP pools.
	// Static is mutually exclusive with DHCP.
	// +optional
	// +kubebuilder:validation:XValidation:rule="self.addresses.size() > 0",message="IP Config addresses must be provided"
	Static *ipamicv1.InClusterIPPoolSpec `json:"static,omitempty"`
}
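
For completeness, a minimal sketch of how the mutual exclusivity described in those comments could be enforced, building on the IPConfig type above (illustration only; the same rule could also be expressed as a CEL validation on the struct):

// validateIPConfig rejects configs that either combine dhcp and static or set
// neither of them. errors is the standard library errors package.
func validateIPConfig(c *IPConfig) error {
	if c == nil {
		return nil
	}
	if c.DHCP && c.Static != nil {
		return errors.New("dhcp and static are mutually exclusive")
	}
	if !c.DHCP && c.Static == nil {
		return errors.New("either dhcp or static must be set")
	}
	return nil
}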

So, you also suggest adding DHCP options to the machines?
And do you agree with moving to v1alpha2?

@avorima (Collaborator) commented Dec 18, 2023

Yes, I think the changes warrant a new API version. Machines should definitely also be extended with DHCP config for both ipv4 and ipv6.

@avorima (Collaborator) commented Dec 18, 2023

I was just saying that the additions to the API types should be easily convertible and also straightforward in their usage. That's why I listed those possible combinations of configs that should be considered.

@mkamsikad2

Would it be possible to add VLAN tag support at the same time?

@mcbenjemaa (Member, Author)

@mkamsikad2 that could go into a separate PR.

@mcbenjemaa mcbenjemaa added this to the v0.3.0 milestone Jan 29, 2024
@wikkyk wikkyk modified the milestones: v0.3.0, v0.4.0 Feb 20, 2024
@wikkyk wikkyk modified the milestones: v0.4.0, v0.5.0 Apr 18, 2024
@wikkyk wikkyk modified the milestones: v0.5.0, v0.6.0 Jun 4, 2024