
Add filter for qubes stored on inaccessible storage pools #399

Draft · wants to merge 1 commit into base: main
Conversation


@Ofenhed Ofenhed commented Nov 30, 2024

This is useful when you work for multiple customers and compartmentalize them onto separate (removable?) disks. If you decide that this can be merged, I'd like to add the same filter to qubes-app-menu as well.

This is slightly related to QubesOS/qubes-issues#1293, as it is part of a workflow where VMs can be stored on another partition and encrypted with a separate key, as described here. Of course, this PR doesn't add any security, since all metadata is still stored in the same place. It's purely a visual improvement for users who have implemented the separation themselves.

For context, the way I'm using this is with a Salt script that adds some safeguards to avoid corrupted data.

@Ofenhed
Author

Ofenhed commented Dec 1, 2024

I think the reason this works and updates is that Qubes periodically queries disk-space usage. Should this use the event system instead of polling for volumes?

There is a quirk in this implementation: in unusual setups where an AppVM is kept separate from its TemplateVM and the TemplateVM isn't on main storage, the filter does not verify that the template volume of the AppVM is accessible. This results in false positives where the AppVM is visible even though it cannot be started.
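To make the quirk concrete, here's a minimal sketch of what checking the whole template chain could look like. All names here (`Qube`, `pool`, `is_startable`) are illustrative stand-ins, not the real Qubes API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Qube:
    """Hypothetical stand-in for a qube; field names are illustrative only."""
    name: str
    pool: str                            # pool holding this qube's volumes
    template: Optional["Qube"] = None    # TemplateVM it is based on, if any

def is_startable(qube: Qube, accessible_pools: set[str]) -> bool:
    """Only list a qube if its own pool AND every pool up its template
    chain are accessible; otherwise it cannot actually be started."""
    if qube.pool not in accessible_pools:
        return False
    if qube.template is not None:
        return is_startable(qube.template, accessible_pools)
    return True
```

With a check like this, an AppVM on main storage whose template lives on a detached pool would be hidden too, instead of showing up as a false positive.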

@marmarta
Member

I really like the idea! I don't love the polling, for two reasons:

  • it can be pretty resource-intensive
  • the way it's done, it breaks the separation we have between model and display data a bit. If you look at the manager code, we try to keep the VM-relevant API calls in the VMInfo class - it would be better to put the logic there and update it using the events system; if you need advice on which events to use and where to find them, tell me here (if you know it well, I don't want to be explaining things you already know :) )
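A minimal sketch of that model/display split, assuming hypothetical names (the real manager's VMInfo class and event names differ) - the point is that the accessibility flag lives in the model and is flipped by event handlers, while the view only reads it:

```python
class VMInfo:
    """Illustrative model class. The view never queries pools itself;
    it only reads pool_accessible, which events keep up to date."""

    def __init__(self, name: str, pool: str):
        self.name = name
        self.pool = pool
        self.pool_accessible = True  # updated by events, not polled by the view

    def on_pool_event(self, event: str, pool_name: str) -> None:
        # "pool-available"/"pool-unavailable" are hypothetical event names,
        # wired to whatever the events system actually emits.
        if pool_name != self.pool:
            return
        if event == "pool-unavailable":
            self.pool_accessible = False
        elif event == "pool-available":
            self.pool_accessible = True
```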

@Ofenhed
Author

Ofenhed commented Dec 11, 2024

Thank you for the positive feedback. I agree about the polling; I just did what felt manageable to get some feedback on the idea.

I'd gladly take some advice. My understanding right now is that there would have to be polling, but that I could use the polling already happening in qubes-core-admin. As far as I can tell, no such events are currently implemented, so I'd have to add events ("pool-added", "pool-removed"?) in qubes-core-admin/qubes/storage/. Is this correct? I haven't found a way to do it yet, other than implementing change detection separately for each Pool backend. It seems I might be able to use Volume for added volumes, but I haven't tried yet, and I haven't found a way to detect removed devices. It seems easy if they are willfully removed via API calls, but I haven't found a general way of detecting pools lost because their backend (such as an LVM group) disappeared.

@marmarta
Member

marmarta commented Jan 2, 2025

Sorry for the late reply, Christmas season was a bit overwhelming.

Anyway.

  • if you want to add new events, they need to go in both core-admin and core-admin-client - core-admin is the backend that emits the actual events, while core-admin-client is what frontend apps use and just passes the events forward - you can check existing events like property-set to see how it's done; those repos like a lot of tests, otherwise @marmarek is unhappy
  • there is a new device API happening (PR here: new device API qubes-core-admin#579) - maybe this can be tied to events that already exist there, like device-list-change:block (I think block...), instead of adding the logic to pools? OTOH I imagine there could be changes in pools that are not related to adding/removing devices (although editing existing storage pools while using them does boggle and terrify the mind a bit, at least my mind)

@marmarek
Member

marmarek commented Jan 3, 2025

> I'd gladly take some advice. My understanding right now is that there would have to be polling, but that I could use the polling already happening in qubes-core-admin. As far as I can tell, no such events are currently implemented, so I'd have to add events ("pool-added", "pool-removed"?) in qubes-core-admin/qubes/storage/. Is this correct? I haven't found a way to do it yet, other than implementing change detection separately for each Pool backend. It seems I might be able to use Volume for added volumes, but I haven't tried yet, and I haven't found a way to detect removed devices. It seems easy if they are willfully removed via API calls, but I haven't found a general way of detecting pools lost because their backend (such as an LVM group) disappeared.

There may already be events when a new pool is created, but not really when a pool becomes available/unavailable. The latter in practice requires some polling anyway, as it may change outside the knowledge of qvm-* tools - for example, when you mount something in dom0 or open a related LUKS volume (theoretically some of the cases could be covered by udev events, but it's hard to map those events to pools, and some would probably still be missing).
And with polling needed anyway, I'd rather avoid it when nobody is using it - with events, we don't really know (on the core-admin side) what events are listened for, so there is no way to enable polling only when some app is waiting for pool-specific events.

So, as long as polling is needed, unfortunately it seems it needs to be on the qubes-manager side...

And BTW, events for disk usage would be even trickier, as they would need some polling anyway, or at least rate-limiting (the non-polling version would be "send an event on every disk write", which is totally impractical). And on top of that, they have the same issues as pool availability (no system events on changes, no info on whether anybody listens, etc.).
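For illustration, the rate-limiting mentioned above could be as simple as dropping events that arrive within a minimum interval of the last one emitted. A sketch (class and method names are hypothetical, not part of any Qubes codebase):

```python
import time

class RateLimitedEmitter:
    """Emit an event at most once per `interval` seconds, regardless of
    how often changes are observed; extra triggers are simply dropped."""

    def __init__(self, interval: float, clock=time.monotonic):
        self.interval = interval
        self.clock = clock            # injectable for testing
        self._last = float("-inf")

    def maybe_emit(self, emit, *args) -> bool:
        """Call emit(*args) if the interval has elapsed; return whether it ran."""
        now = self.clock()
        if now - self._last < self.interval:
            return False
        self._last = now
        emit(*args)
        return True
```

Note this still shares the other problem described above: with no way to know whether anybody is listening, even a rate-limited emitter has to keep observing changes.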

@marmarek
Member

marmarek commented Jan 5, 2025

PipelineRetry

@qubesos-bot

qubesos-bot commented Jan 6, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025012221-4.3&flavor=pull-requests

Test run included the following:

Installing updates failed, skipping the report!

@Ofenhed
Copy link
Author

Ofenhed commented Jan 8, 2025

I've noticed an issue with this: I have a standalone VM that only shows up when I disable the filtering.
