Support DHCP #29
This could be a good first candidate feature to port from https://github.com/k8s-proxmox/cluster-api-provider-proxmox.
Proposal

Since introducing a new field can be a breaking change, we probably need to introduce a new API version. There are three proposals, as follows.

(1) v1alpha2 with DHCP option

In this case, v1alpha2 will be the default and the storage version. The IPAM pool config will now be wrapped into static, and the conversion webhook will convert v1alpha1 resources.

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  ipv4Config:
    dhcp: true
  dnsServers: [10.1.1.1]
---
## OR
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  ipv4Config:
    static:
      addresses: ["10.10.10.5-10.10.10.60"]
      prefix: 24
      gateway: 10.10.10.1
  dnsServers: [10.1.1.1]

Apart from the changes in the ProxmoxCluster, the ProxmoxMachineTemplate will also gain DHCP fields:

kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
metadata:
  name: "test-control-plane"
spec:
  template:
    spec:
      sourceNode: "pve"
      templateID: 100
      format: "qcow2"
      full: true
      network:
        default:
          bridge: vmbr1
          model: virtio
          dhcp4: true # This will force the machine to use DHCP instead of the cluster static config
          dhcp6: true # This will force the machine to use DHCP instead of the cluster static config
        additionalDevices:
          - name: net1
            bridge: vmbr2
            model: virtio
            dhcp4: true # Either DHCP or an IPAM pool config must be defined
            dhcp6: true # Either DHCP or an IPAM pool config must be defined

(2) v1alpha1 with DHCP option

In this case the API stays backward compatible and there is no need to introduce v1alpha2.

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  dhcp4: true # replaces ipv4Config
  dhcp6: true # replaces ipv6Config
  dnsServers: [10.1.1.1]

(3) machine v1alpha1 with DHCP option

In this case the DHCP option is only available on the machine, while the cluster's IP pool config becomes optional.

kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
metadata:
  name: "test-control-plane"
spec:
  template:
    spec:
      sourceNode: "pve"
      templateID: 100
      format: "qcow2"
      full: true
      network:
        default:
          bridge: vmbr1
          model: virtio
          dhcp4: true # This will force the machine to use DHCP instead of the cluster static config
          dhcp6: true # This will force the machine to use DHCP instead of the cluster static config
        additionalDevices:
          - name: net1
            bridge: vmbr2
            model: virtio
            dhcp4: true # Either DHCP or an IPAM pool config must be defined
            dhcp6: true # Either DHCP or an IPAM pool config must be defined

Please suggest anything further regarding the implementation. This requires your attention, @avorima.
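As a minimal sketch of the option (1) conversion, assuming the existing v1alpha1 ipv4Config carries the pool fields directly (as option (2) implies), the webhook would rewrite:

# v1alpha1 input
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  ipv4Config:
    addresses: ["10.10.10.5-10.10.10.60"]
    prefix: 24
    gateway: 10.10.10.1
---
# becomes, in v1alpha2
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  ipv4Config:
    static:
      addresses: ["10.10.10.5-10.10.10.60"]
      prefix: 24
      gateway: 10.10.10.1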
Just a thought: a user might want to have control-plane nodes addressed with static IPs, while at the same time not wanting to worry about worker nodes and using DHCP there. Or they might just want to enable DHCP globally for everything. So option 1 might be my favorite.
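Under option (1), that mixed setup could look roughly like this (a sketch only; the worker template name is illustrative):

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  ipv4Config:
    static: # control planes keep static IPs
      addresses: ["10.10.10.5-10.10.10.60"]
      prefix: 24
      gateway: 10.10.10.1
---
kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
metadata:
  name: "test-workers" # hypothetical worker template opting into DHCP
spec:
  template:
    spec:
      network:
        default:
          bridge: vmbr1
          model: virtio
          dhcp4: true # workers override the cluster static config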
I think the first option of extending the cluster ipv4/6Config fields makes the most semantic sense, but I wouldn't use that particular word. As for the machines, adding the dhcp4/dhcp6 fields to the network devices makes sense. Another option would be to use the cluster's ip config as a template for defaults, i.e. the cluster has dual-stack, so a machine uses dual-stack DHCP for additional NICs. There are a few permutations of cluster ip config and machine additional network devices to consider. These are meant to be taken as the config that is given at the point of resource creation, so before defaults are applied. I think the first of those permutations should probably be invalid, while the others could default to using DHCP for ipv4/ipv6 or dual-stack, as sketched below.
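For example, one such permutation, sketched with the option (1) field names: an additional NIC with no explicit IP config on a dual-stack cluster:

kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
metadata:
  name: "test-workers" # illustrative name
spec:
  template:
    spec:
      network:
        additionalDevices:
          - name: net1
            bridge: vmbr2
            model: virtio
            # Neither dhcp4/dhcp6 nor an IPAM pool is set here; per the
            # suggestion above this could default to dual-stack DHCP.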
We could have this in Go and probably get rid of
So, you also suggest adding DHCP options to the machines?
Yes, I think the changes warrant a new API version. Machines should definitely also be extended with DHCP config for both ipv4 and ipv6.
I was just saying that the additions to the API types should be easily convertible and also straightforward in their usage. That's why I listed those possible combinations of configs that should be considered.
Would it be possible to add VLAN tag support at the same time?
@mkamsikad2 that could go in a separate PR.
Describe the solution you'd like
Our current setup is based on IPAM and static IP allocation.
Since the QEMU guest agent is pre-installed, we can support DHCP in the network-config.
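For instance, a minimal cloud-init network-config (v2 format) that such DHCP support could render; the interface name is illustrative:

version: 2
ethernets:
  eth0: # illustrative interface name
    dhcp4: true
    dhcp6: false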
Anything else you would like to add:
PLEASE NOTE: since we rely on kube-vip for the control planes, the CONTROL_PLANE_ENDPOINT shall remain static and must be set when creating a new cluster.

Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):