
Memory hotplug support for VMs #986

Open
nanjj opened this issue Jul 12, 2024 · 4 comments
Labels: Documentation (Documentation needs updating), Feature (New feature, not a bug)
Milestone: later

Comments

nanjj (Contributor) commented Jul 12, 2024

Is it possible for Incus to support the QEMU memory hotplug feature defined here?
The usage is similar to CPU hotplug: launch the QEMU instance with

-m [size=]megs[,slots=n,maxmem=size]

and then add or remove memory at runtime via the QEMU monitor (HMP shown here; QMP has equivalent commands):

 (qemu) object_add memory-backend-ram,id=mem1,size=1G
 (qemu) device_add pc-dimm,id=dimm1,memdev=mem1
 (qemu) device_del dimm1
 (qemu) object_del mem1
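For reference, a sketch of the same operations over the QMP wire protocol (field names follow the QMP object-add/device_add commands; size is in bytes):

    { "execute": "object-add", "arguments": { "qom-type": "memory-backend-ram", "id": "mem1", "size": 1073741824 } }
    { "execute": "device_add", "arguments": { "driver": "pc-dimm", "id": "dimm1", "memdev": "mem1" } }
    { "execute": "device_del", "arguments": { "id": "dimm1" } }
    { "execute": "object-del", "arguments": { "id": "mem1" } }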

From the Incus source code I can see that CPU hotplug is already supported, so I am asking why QEMU memory hotplug is not.

The feature is clearly valuable: memory hotplug is broadly supported by most guest OSes (including Windows, where almost all versions support it, whereas CPU hotplug is only supported by the Windows Server editions), and QEMU supports it too.

For Incus, we may need to add a check for the memory hotplug feature, pick an initial slots value (maybe 2) and a maxmem size (maybe 32G), and then, when the user sets limits.memory (or perhaps new keys such as limits.memory.size and limits.slots; I am not sure of the exact shape), have incusd apply the config change through its QMP client. A sketch of the resulting QEMU invocation follows below.
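For illustration, under the suggested defaults this would amount to incusd launching QEMU with something like (sizes made up for the example):

    # 4G at boot, 2 empty DIMM slots, hard ceiling of 32G
    qemu-system-x86_64 ... -m size=4G,slots=2,maxmem=32G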

stgraber changed the title from "QEMU Memory Hotplug Feature" to "Memory hotplug support for VMs" on Jul 12, 2024
stgraber added the Feature (New feature, not a bug) and Documentation (Documentation needs updating) labels on Jul 12, 2024
stgraber added this to the "later" milestone on Jul 12, 2024
stgraber (Member) commented

Yeah, that's been something we've been meaning to add for a while, but it's also a very complex one to handle right, as you need to decide on the right granularity, consider NUMA nodes, handle hugepages, ...

We already have 3-4 different code paths for memory as it stands today and all of those will need to handle DIMM hotplug. The other side of this will be to know how well the OS will handle this.

For CPU we can very easily hotplug/hotremove CPUs and the OS usually handles that pretty well. For memory, hotplug should be okay, but hotremove is likely to be more problematic, so we may need to use ballooning for hotremove.

For now the trick you can use is to start the VM with a higher allocation than needed and then reduce limits.memory, which will use the memory balloon driver to shrink things.
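Concretely, the workaround looks something like this (instance name, image alias, and sizes are made up for the example):

    # boot with a 16GiB allocation, then balloon down to 4GiB
    incus launch images:debian/12 v1 --vm -c limits.memory=16GiB
    incus config set v1 limits.memory=4GiB
    # later, grow again (only up to the boot-time 16GiB)
    incus config set v1 limits.memory=12GiB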

nanjj (Contributor, Author) commented Jul 13, 2024

> As for how well the OS will handle this

I am using the VMware guest OS compatibility guide to check this; for Linux memory hotplug as an example: https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software&details=1&osFamily=2&virtualHardware=23&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc&testConfig=16

srd424 commented Jul 28, 2024

> For now the trick you can use is to start the VM with a higher allocation than needed and then reduce limits.memory, which will use the memory balloon driver to shrink things.

Any chance of an option to 'pre-inflate' the balloon at boot time? That would be a quick and dirty way of getting something equivalent to memory hotplug, up to a pre-defined limit :)

stgraber (Member) commented Aug 1, 2024

That's an option I considered but it's a bit tricky as the balloon requires a kernel driver to work properly, so it would effectively still allow the guest to consume more memory by preventing that driver from getting loaded.

That's particularly relevant when you consider multi-tenant Incus deployments where users have access to individual projects with resource limits in place. If ballooning is used to allow growing the VM memory, then one of those tenants could tweak their VM to prevent ballooning and far exceed their memory allocation.

We could still do it, but we would need a key like limits.memory.max, which would then still be counted as used memory against a project's quota. That would greatly reduce its usefulness though, so it may be best to focus on actual memory hotplug instead, even if that's a bit tricky to get right.
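For illustration only, with such a hypothetical limits.memory.max key (not an existing Incus config option), usage might look like:

    # hypothetical: the guest balloons between 4GiB and 16GiB,
    # but the full 16GiB counts against the project quota
    incus config set v1 limits.memory=4GiB limits.memory.max=16GiB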
