Shutdown/suspend inactive VMs #832
Comment by marmarek on 27 Apr 2014 21:00 UTC But OK, some VMs (all AppVMs by default?) can be selected for such a mechanism. Perhaps if "inactive" means "without any visible window" (no user application is running), the VM can simply be shut off. VM startup (especially Linux) is rather fast.
I think that such a VM can also be shut down. This would be especially useful when free RAM is low. But I agree that the largest issue now is determining whether the VM is active. I hope that detection based on having no active window (or systray icon or something like it) will work for most VMs. At the start, it can be opt-in, so users wanting the feature (like me) would enable it, maybe in all VMs except sys-firewall, the USB VM, and some similar VMs. (Note that even sys-net has a systray icon by default.)
If listing active windows would be enough, it should be simple to write such a script. I'm not sure why, but it looks like there is always at least one window.
I don't see the windows for some apps. There are some hidden windows with unknown purpose, but they can be excluded. I've hacked a bash script for this purpose. Creating a simple working script is easy; handling some edge cases, especially race conditions, requires tight cooperation with qragent. Here is the script, with my more detailed notes about the challenges: https://github.com/v6ak/qubes-vm-autoshutdown I hope you will find it useful. If something is unclear, do not hesitate to ask.
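The core of such a script is a filter over the listed windows. A minimal sketch of that step (the ignore list, names, and data shapes are illustrative, not taken from the linked repository):

```python
# Sketch: decide which VMs look idle, given their window titles.
# In practice the (vm, title) pairs would come from listing windows
# (e.g. via xdotool/xprop); here they are passed in as plain data.

IGNORED_TITLES = {"VMapp command"}  # hidden helper windows to skip (illustrative)

def idle_vms(all_vms, windows):
    """windows: iterable of (vm_name, window_title) pairs."""
    active = {vm for vm, title in windows if title not in IGNORED_TITLES}
    return sorted(set(all_vms) - active)

# 'work' has only an ignored helper window, so it counts as idle:
print(idle_vms(
    ["work", "personal", "sys-net"],
    [("personal", "Firefox"), ("work", "VMapp command")],
))  # → ['sys-net', 'work']
```

A real implementation also has to handle the race conditions mentioned above, e.g. a window appearing between the check and the shutdown.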
I haven't implemented automatic pause, only automatic shutdown. If everything is implemented well, automatic pause should not be needed, because an inactive VM should consume almost no CPU. Moreover, a paused VM can't return any memory to dom0 without being unpaused, so pausing a VM when memory conditions are good might cause worse memory conditions later. This could be improved, but it should be a separate issue.
Maybe you can enumerate the windows inside the VM.
Hmm, actually the whole script can run inside the VM - check if there are any windows there.
It seems to bring more issues than it solves. First, it requires xdotool to be installed in all such VMs, but that might be OK. However, maybe delegating the “Am I active?” strategy to particular VMs would be cleaner than checking it from dom0. But while I can get rid of the “VMapp command” windows, I get some windows like this one (slightly modified xprop dump):
Maybe there is something that tells us that the window is unused. If so, it would be cleaner to blacklist these windows by that than to blacklist windows based on their title.
Isn't this the root window?
It may be something like that, I am not sure. I have, however, tested that windows are considered “visible” in an AppVM even if they are minimized.
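One way to tell a minimized window apart is the EWMH `_NET_WM_STATE_HIDDEN` atom reported by `xprop`. Whether this is set reliably inside a Qubes AppVM is exactly the kind of thing to verify, so treat this sketch as an assumption:

```python
# Sketch: classify a window as minimized from `xprop -id <win> _NET_WM_STATE`
# output. EWMH window managers add the _NET_WM_STATE_HIDDEN atom for
# iconified windows; availability under Qubes' GUI virtualization is an
# assumption, not a tested fact.

def is_minimized(xprop_line):
    return "_NET_WM_STATE_HIDDEN" in xprop_line

print(is_minimized("_NET_WM_STATE(ATOM) = _NET_WM_STATE_HIDDEN"))   # → True
print(is_minimized("_NET_WM_STATE(ATOM) = _NET_WM_STATE_FOCUSED"))  # → False
```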
I think this is a possible usability issue. One would wonder what happens if one detached a job (e.g. into the background) and the VM was then shut down as "inactive".
Hrm, I wonder if, from a usability standpoint, we might want to ask the user to decide when certain criteria are met, for instance:
That way a user can use their brain to determine "no, I have some sort of thing going on in that VM" or "I need to toggle back to it once I finish the task in the other VM".
If you must add this, I suggest another IF.
That would be dialogs popping up out of nowhere, interrupting whatever the user is currently doing. And 4 inactive VMs (e.g. vault, social, bank, private) would result in 4 such dialogs. So I guess these notifications should be accumulated into a single dialog (or, less optimally, queued so they don't all pop up at the same time). Another good place (maybe not for the first, but for a second iteration of this) might be the global Qubes [security] status systray icon. Instead of plain "green" it could still glow "green" but with a small additional info symbol, and once users are curious and click it, these VM shutdown questions are shown.
Actually it is really tricky to check for background services.
I don't think it's necessary. Even if not for RAM saving, shutting down unused VMs makes sense. I think it's better to have this feature be optional, so it can be enabled only where the user wants it.
Sounds good to me!
I don't think that unused VMs consume a considerable amount of power. They do, however, consume some RAM which could otherwise be used for caches and so on. If VM pausing is implemented, it would be useful to reduce the RAM footprint before the pause, because a paused VM can't participate in memory balancing. Without this, a paused VM could have a worse effect on the amount of available RAM than a running one.

I am not sure if pausing will be a simple task. There might be some grey zone: consider a VM with a tasks or calendar app. When I minimize it, I don't need it and it might be shut down (like in Android). However, this will work better after some raise-or-run mechanism is implemented.

I will probably have some motivation for improving the script (mainly offloading the “active VM” strategy from dom0 to domU) soon, because I'll have to downgrade my RAM due to a RAM failure…
Actually this task (as in the description) is about shutting down the VMs (or some other means of releasing RAM). Updated the title to be less confusing.
I see both options, suspend and shutdown. (I am actually not sure whether suspend is something different from pause.) I have commented on some challenges of VM suspending, but I am not going to implement it, at least not now. I will rather implement the autoshutdown, because it is simpler to implement and IMHO gives more benefit.
Yes, in the case of Xen (we don't have other cases right now ;) ), probably the easiest and most beneficial way would be to shut down unused VMs.
Hmm, I can call RPC from domU… But is there any way to call it from dom0? I haven't found it at https://www.qubes-os.org/doc/qrexec3/. However, this brings me an idea: the design might be reversed. AppVMs would inform dom0 about being unused; any RPC call would clear the unused flag, and the AppVM would have to send it again (if desired).
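The reversed design could be tracked on the dom0 side with a tiny piece of state. This is only a sketch of the idea: the class, the method names, and the idle-reporting RPC are hypothetical.

```python
# Sketch: dom0-side bookkeeping for the reversed design. A VM calls a
# hypothetical "I am unused" RPC; any other RPC to that VM clears the
# flag, so the VM must re-announce itself to stay a shutdown candidate.

class IdleTracker:
    def __init__(self):
        self._unused = set()

    def report_unused(self, vm):
        """Called when the VM invokes the (hypothetical) idle RPC."""
        self._unused.add(vm)

    def rpc_called(self, vm):
        """Called on any other qrexec call - the VM is clearly in use."""
        self._unused.discard(vm)

    def shutdown_candidates(self):
        return sorted(self._unused)

t = IdleTracker()
t.report_unused("work")
t.report_unused("personal")
t.rpc_called("work")               # 'work' became active again
print(t.shutdown_candidates())     # → ['personal']
```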
On Sat, Nov 28, 2015 at 06:35:59AM -0800, Vít Šesták wrote:
Not a nice API, but you can call RPC with:
What about simply shutting down an unused VM from within? [1] https://www.qubes-os.org/doc/qubes-service/
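A sketch of the "from within" variant (not the packaged implementation; the check interval, the threshold, and counting windows via something like xdotool are all assumptions):

```python
# Sketch: inside the VM, count visible windows periodically and power off
# after several consecutive windowless checks. The window count would come
# from a tool such as xdotool; the decision logic is separated out so it
# can be shown (and tested) without X11.

import subprocess

IDLE_CHECKS_BEFORE_SHUTDOWN = 3    # e.g. 3 checks 30 s apart = 90 s grace

def next_idle_count(prev_count, visible_windows):
    """Advance the consecutive-idle counter; any window resets it."""
    return prev_count + 1 if visible_windows == 0 else 0

def check_once(idle_count, visible_windows):
    idle_count = next_idle_count(idle_count, visible_windows)
    if idle_count >= IDLE_CHECKS_BEFORE_SHUTDOWN:
        subprocess.call(["sudo", "poweroff"])   # shut down from within
    return idle_count

# Decision logic only: two idle checks, then a window appears - reset.
c = 0
for windows in (0, 0, 5):
    c = next_idle_count(c, windows)
print(c)  # → 0
```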
Is there a reason we are not considering hibernation (#2273) rather than shutdown of inactive VMs?
Yes, shutdown is much easier to implement. For suspend/restore you need to restore not only the VM's internal state (memory etc. - handled by the Xen tools), but also all related VM connections - networking, GUI, qrexec services that were running when the VM was hibernated, etc.
Shutdown:
- is much better for making the VM up-to-date: when you update a VM, you have to restart it, while a hibernated VM would be restored to its old state
- is probably less resource-intensive (at least for I/O) than hibernate
- does not break the potential anti-forensic nature the way hibernate would
- the reverse process (i.e., VM start) might take slightly more time than with hibernation (i.e., VM restore)
I know there can be some arguments for preserving the state. But if you need it, you probably use Qubes wrong. And keeping outdated VMs (i.e., making it worse than today) is a great argument against the use of hibernation here, I believe.
On Mar 12, 2017 8:39 PM, "Vít Šesták" <[email protected]> wrote:
> Shutdown:
>
> - is much better for making VM up-to-date: When you update a VM, you
> have to restart it. If a VM hibernates, it would have to be restored to the
> old state. Consider the following scenario: You start a banking VM, then you
> close the browser. It gets hibernated. Then you update the TemplateVM. Then
> you might even reboot your computer. And even in this case, the banking VM
> will still use the outdated template. Not nice.

I'd like to have limited hibernate anyway. I.e. if hibernate will destroy a VM with a missing template, or require a reboot of the AppVM once the template has changed - it's okay.

> - is probably less resource-intensive (at least for I/O) than hibernate

When I'm leaving for a long trip, I have to either shut down or suspend to RAM. If I suspend to RAM, I may lose all VM states once the battery runs out. What I really never care about, from a user's point of view, is how much I/O it takes.

> - does not break potential anti-forensic nature like hibernate would do

All forensics outside of the crypto container is not something you should care about, is it? It's good to provide a warning, though.

> - the reverse process (i.e., VM start) might take slightly more time
> than with hibernation (i.e., VM restore)
>
> I know there can be some arguments for preserving the state. But if you
> need it, you probably use Qubes wrong.

What I really need is usability.

> And keeping outdated VMs (i.e., making it worse than today) is a great
> argument against use of hibernation here, I believe.
Well, I am not against having hibernation in Qubes. I am saying:
1. It is hard and has its sharp edges.
2. It is not suitable for automatic cleanup of unused VMs. Some of my arguments apply only to automatic hibernation of unused VMs.
> I'd like to have limited hibernate anyway. I.e. if hibernate will destroy
> a VM with a missing template, or require a reboot of the AppVM once the
> template has changed - it's okay.
A VM with a missing template – maybe this should not happen in the first place, and it is hardly something I've mentioned.
But consider a template change. Currently, Qubes uses volatile.img for saving template modifications (and swap) and, AFAIK, root.cow.img for tracking changes of the template's root.img. When implementing hibernation for template-based VMs, Qubes would have to keep not only volatile.img (which should be easy), but also all relevant versions of root.cow.img. And, what's worse, Qubes would have to update all of them when the template is modified. That's bad for two reasons:
* It requires nontrivial implementation effort, I think.
* Some unused hibernated template-based VM can slow down template updates. Maybe this could be solved in various ways, but it further increases complexity of the task.
Of course, we can put some limitations on hibernation to make things easier, but I am not sure who would be satisfied with the result:
a. When a template has a hibernated template-based VM, don't allow the template to start. This prevents those issues, but it is a huge, counter-intuitive restriction.
b. When template VM is started, remove all saved states of related template-based VMs. This is also a huge counter-intuitive limitation.
c. Don't hibernate per-VM; allow only hibernating and restoring all VMs together. Maybe this is closest to your use case and the most intuitive, although the most restricted, solution. But this is becoming totally off-topic when we are discussing what to do with VMs that aren't being used.
> What I really never care about, from a user's point of view, is how much I/O it takes.
It is not an issue for hibernation on demand, but it is for hibernating VMs automatically. Maybe the end user will not care about I/O load, but she will care about sudden hiccups caused by some background process.
> - does not break potential anti-forensic nature like hibernate would do
>
> All forensics outside of the crypto container is not something you should
> care about, is it? It's good to provide a warning, though.
Today, Qubes can easily be tuned to store volatile.img on a separate partition encrypted with a random key kept only in RAM. Hibernation breaks this.
I just created a small Python script which fulfils my needs. It needs to run in dom0, and the guest VMs need the tool installed.
https://gist.github.com/maximilize/62071aed9d7b55f4a887cc56a6d91785
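For a dom0-side script like this, the key step is attributing each dom0 window to its VM. The sketch below assumes the GUI daemon tags VM windows with a `_QUBES_VMNAME` property readable via `xprop`; verify the exact property on your own system before relying on it:

```python
# Sketch: map a dom0 window back to its VM by parsing xprop output.
# The _QUBES_VMNAME property name is an assumption about how Qubes'
# GUI daemon tags VM windows in dom0.

import re

def vm_of_window(xprop_output):
    m = re.search(r'_QUBES_VMNAME\(STRING\) = "([^"]+)"', xprop_output)
    return m.group(1) if m else None   # None: a dom0-native window

print(vm_of_window('_QUBES_VMNAME(STRING) = "work"'))   # → work
print(vm_of_window('WM_NAME(STRING) = "xterm"'))        # → None
```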
Status update: the script shutting down idle VM from within is packaged here: https://github.com/QubesOS/qubes-app-shutdown-idle
Hi, a few more details about the problem I reported. I've done another test, creating 3 new qubes. Thanks, and tell me if there is something I can do to provide more helpful information for solving this problem.
Check
(you may need to install |
Hi @marmarek and thanks for your answer. Here is my
If I type
Running
and
Hope this is helpful.
Please consider building this and putting it into testing for Debian.
@dakka2 it is already there: QubesOS/updates-status#782
@marmarek I am just curious why the tool wasn't made as a script in dom0, enabled per qube via qvm-prefs. At least for the windows, it seems dom0 has all the information needed, and it would work for any type of domain (even Windows).
It's mostly about reducing complexity in dom0. One can easily imagine extending this tool with various VM-side conditions (like "is process X running" or "is some TCP connection active"), where you'd need to extract data from within the VM anyway. Avoiding the need to parse something coming from a VM in dom0 is almost always an improvement. Even with the current two conditions, it's easier to take a more complete look at the active windows from within the VM than from dom0 - for example, you can easily exclude specific applications, while in dom0 you'd need to guess based on the window title or similar.
Since the Debian problem got a separate issue, I'm closing this one now.
Dom0 alternative: https://github.com/3hhh/qidle |
I am a bit skeptical about one of your advertised advantages. You mention that the VM cannot prevent the shutdown. I am afraid this can be bypassed easily:
a. The VM occasionally performs some token activity, like opening a 1×1 window for 10 ms.
b. When shutdown is initiated, do you ensure it is finished?
I like people who think like attackers. :-) Anyway, it's not as easy as you suggest, but let's see:
In theory you could prevent the VM idle that way, but not the user idle. So while the user is not idle, s/he should notice the flickering windows. In practice your 10 ms is not enough, as I only measure the VM idle at a few discrete points, 3 times or so per period (assuming a 60 s "period", I measure e.g. at 0 s, 30 s and 60 s), and the attacker doesn't know the exact points in time. Also, 1×1 won't be enough, as the user has probably configured such small windows to be ignored (cf. the default config @ https://github.com/3hhh/qidle/blob/master/qidled.conf#L40-L43).
Not by myself, no. Anyway, if you have a test machine, I'd recommend trying the attacks yourself.
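The sampling argument above can be made concrete with a small sketch (illustrative numbers and logic; qidle's actual configuration and measurement points differ):

```python
# Sketch: sample VM idleness a few times per period at points the VM
# cannot predict, and treat it as idle only if *every* sample was idle.
# A 10 ms keep-alive window must then hit all sample points to survive.

import random

def sample_offsets(period=60.0, samples=3, rng=random):
    """Measurement times within one period, unknown to the VM."""
    return sorted(rng.uniform(0, period) for _ in range(samples))

def vm_idle(samples):
    """samples: one boolean per measurement; idle only if all were idle."""
    return all(samples)

print(vm_idle([True, True, True]))   # → True
print(vm_idle([True, False, True]))  # → False
```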
Reported by joanna on 27 Apr 2014 12:29 UTC
If a VM is open but not "active" for some time (e.g. 1 h), then we might consider pausing/suspending it to save resources (RAM in the case of a laptop, etc.).
The specific action (suspend, pause, etc) should be VMM-specific.
Migrated-From: https://wiki.qubes-os.org/ticket/832