storage: btrfs volumes that require mounting to perform an action should auto-mount #20855
(cc: @jkonecny12 & @KKoukiou)
Just to clarify: instead of disabling the "Create subvolume" action with an explanation like "Subvolume needs to be mounted", we would automatically mount the subvolume before creating a new subvolume on it, right?

For a subvolume that has a mount point, we can just add a text to the "Create subvolume" dialog that says something like "<subvolume> will be mounted in order to create a subvolume in it." The subvolume would remain mounted after creation of the new subvolume.

What would we do for a subvolume that does not have a configured mount point? Would the "Create subvolume" dialog allow specification of a mount point? Or would Cockpit mount the subvolume temporarily in a hidden place?

Deleting is similar, but it needs a parent to be mounted. Cockpit could choose the closest parent that has a configured mount point.

The simplest thing would probably be to ignore all configured mount points, and always mount a subvolume temporarily in a hidden place (if needed). The dialog should still announce that, I guess, but maybe not in Anaconda mode.
So I think I'll start with this: whenever some operation needs a mounted subvolume but can't find one, we will temporarily and silently mount the root volume somewhere.
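The temporary, silent mount could be sketched as a short command sequence. A hypothetical Python helper follows; the function name `creation_commands`, the mount-point argument, and the use of plain mount(8)/btrfs(8) instead of Cockpit's actual UDisks2 D-Bus calls are all illustrative assumptions:

```python
import os

def creation_commands(device, mountpoint, parent_path, name):
    """Build the command sequence for a temporary, silent mount of the
    btrfs root volume (subvolid=5 is always the top-level volume),
    followed by subvolume creation and unmount.

    Illustrative sketch only -- Cockpit itself would go through UDisks2.
    """
    target = os.path.join(mountpoint, parent_path.strip("/"), name)
    return [
        ["mount", "-o", "subvolid=5", device, mountpoint],
        ["btrfs", "subvolume", "create", target],
        ["umount", mountpoint],
    ]
```

Running the three commands with `subprocess.run(cmd, check=True)` (unmounting in a `finally:` block so a failed creation still cleans up) would implement the "mount, create, unmount" flow described above.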
Hmm, if we automount during creation, I think we probably should also automount for listing.
Yes, wasn't that actually the reason we didn't automount for creation earlier? We should not automount a btrfs filesystem in order to create a new subvolume, if we can not show that subvolume after creation. With btrfs, there is no way to list the subvolumes of a filesystem without mounting at least one of them, and we also need to periodically poll the list of subvolumes since there are no change notifications when the list changes. But periodically mounting and unmounting each btrfs filesystem seemed like too much. What should we do?
Any other ideas?
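For listing, the only interface btrfs offers is `btrfs subvolume list` run against a mounted path, so any polling approach ends up parsing its output. A minimal sketch of such a parser (the function name and dict layout are invented for illustration):

```python
def parse_subvolume_list(output):
    """Parse `btrfs subvolume list <mnt>` output lines of the form
    'ID 256 gen 30 top level 5 path home' into dicts.

    The path is taken with a single split on ' path ' so that
    subvolume paths containing spaces survive intact.
    """
    subvols = []
    for line in output.strip().splitlines():
        fields = line.split()
        subvols.append({
            "id": int(fields[1]),          # subvolume ID
            "top_level": int(fields[6]),   # ID of the containing volume
            "path": line.split(" path ", 1)[1],
        })
    return subvols
```

A poller would run the command every few seconds and diff the parsed result against the cached list, since btrfs emits no change notifications.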
Not acceptable in either Cockpit or Anaconda, especially Anaconda. So, the requirements:
So... I had the idea of mounting every btrfs partition we want to know about (mounting the main volume would be enough to get subvolume info, right?) as read-only in a special hidden location, without exposing that to the UI. Would this work for grabbing info? We could even mount read-only, extract the info we're looking for, cache it, and unmount... or keep it mounted. The biggest issue would be keeping it in sync if the partition changes for whatever reason, right?

I tried mounting an already-mounted partition (currently read-write) a second time as a read-only mount, and it seemed to work just fine. So it should be possible to keep it mounted in the background, if that makes things easier?

One drawback is that it can take a good deal of time if you have, for example, a RAID made of spinning disks. Disks may need a little time to spin up if they're not already spinning. (However, disks usually have a spin-down timeout, and someone will have recently booted the system into Anaconda, so realistically, at least in Anaconda, it probably wouldn't be a problem.)

Then the question is whether we want something like this for Cockpit for Servers in addition to Anaconda, or if we even want something like this at all. WDYT?
I thought it might be acceptable because there are no external changes to subvolumes during an Anaconda session. All changes are made by Cockpit. If that is not true, then we need to poll in Anaconda mode as well.
Yes, that would be nice. The question is when to unmount it again. If we leave it mounted forever, it might interfere with other operations that people are doing with the btrfs filesystem outside of Cockpit, by keeping it busy. People would be rightfully surprised to find their filesystems randomly mounted somewhere, possibly readable by everyone.

A minute ago, I thought that Cockpit has no way to reliably do cleanup actions when people close the browser, but we could spawn a process from the bridge that keeps btrfs filesystems mounted, and that process is stopped cleanly when the bridge exits. Hmm.

This all only works with administrative privileges, but that is probably OK. As a non-admin, you might not be able to see all subvolumes, but you also can't create new ones, so the worst effect (creating something that you can't see afterwards) will not happen.

Also, in Anaconda mode, it is probably acceptable to never unmount. We could also silently mount subvolumes on their configured mount points.

So, what about only changing Cockpit's behavior in Anaconda mode? In Anaconda mode I would feel comfortable just mounting btrfs filesystems once when Cockpit starts and leaving them mounted forever.
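The bridge-spawned cleanup process mentioned above could rely on the classic pipe-EOF trick: the helper inherits a pipe from the bridge and blocks reading it; when the bridge exits for any reason, the pipe closes and the helper runs its cleanup. A minimal Python sketch (the names and the inline helper program are assumptions, not Cockpit code):

```python
import subprocess
import sys

# Inline helper program.  The bridge spawns it with a pipe on stdin and
# never writes anything; when the bridge dies (cleanly or not), the
# pipe's write end closes, read() returns, and the helper can unmount
# everything that was mounted on the bridge's behalf.
WATCHDOG = """
import sys
sys.stdin.read()   # blocks until the parent's end of the pipe closes
# ... run `umount` here for every filesystem we mounted ...
"""

def spawn_unmount_watchdog():
    """Start a child that outlives nothing: it cleans up and exits as
    soon as the spawning process goes away."""
    return subprocess.Popen([sys.executable, "-c", WATCHDOG],
                            stdin=subprocess.PIPE)
```

This pattern needs no signal handling in the bridge itself, which matters because a crashed bridge sends no signals but still closes its file descriptors.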
Temporarily mounting a btrfs filesystem periodically when polling for the list of subvolumes will spam the journal with entries like
Mounting is just not something that is expected to happen frequently. So I think this kills the "tmp mount for listing during poll" option. Mounting once when Cockpit starts is okay, if we can leave enough traces in the system that this random mount is there for Cockpit's sake. We could just mount them in /var/lib/cockpit/btrfs/$uuid.
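A hypothetical helper for that last suggestion, putting the background read-only mount under a well-known per-UUID directory (the function name, the `ro` option, and returning the command rather than running it are all illustrative choices):

```python
import os

# Well-known location so anyone inspecting `mount` output can tell
# these background mounts belong to Cockpit.
MOUNT_ROOT = "/var/lib/cockpit/btrfs"

def background_mount_command(device, uuid):
    """Build the command that mounts the whole volume (subvolid=5)
    read-only under /var/lib/cockpit/btrfs/$uuid."""
    mountpoint = os.path.join(MOUNT_ROOT, uuid)
    return mountpoint, ["mount", "-o", "ro,subvolid=5", device, mountpoint]
```

Since Linux allows mounting an already-mounted filesystem a second time, this read-only mount can coexist with the user's own read-write mounts, as observed earlier in the thread.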
btrfs volumes that require mounting before an action can be performed should be mounted automatically as part of performing that action.
The fact that you have to figure out that you need to mount the volume first in order to do things is confusing, especially (but not only) from within Anaconda.
Basically, we should treat mounting as an implementation detail, performed on behalf of the various features that require it.
(I remember talking with @mvollmer about this prior, but I couldn't find an issue or PR, so perhaps it was either as a comment in another issue or PR, or it was during a video call.)
Anaconda-related context: During our most recent Anaconda Web UI sync call, Jiri said this: