[FEATURE] Rollback snapper default layout snapshots (e.g. @home) #11
It would be more useful to be able to roll back the `@home` subvolume as well. Is it possible to incorporate the following steps to roll back the home directory?
Hey @tkna91, there's no reason this tool couldn't let you roll back your home directory today [1]. However, this setup isn't as pretty as it could be: I'll have to add support for multiple config entries to the tool. I'm afraid it might take a little while, but contributions are welcome! For the time being, I can suggest the following: you'd need to create a new config for your home subvolume.

[1] except for the "modify the fstab" part.
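To make the idea of "multiple config entries" concrete, a hypothetical second entry for a home subvolume might look roughly like the sketch below. Every name in it (the section header, the keys, the subvolume and mount-point paths) is an assumption made for illustration; the project's actual configuration format may differ.

```ini
; Hypothetical sketch only: section and key names are assumptions,
; not the tool's documented configuration format.
[home]
; subvolume to roll back
subvol_main = @home
; where snapshots for home would live under a "clean" layout
subvol_snapshots = @home_snapshots
; where the btrfs top level is mounted
mountpoint = /btrfsroot
```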
I see. In addition to that, I'm also concerned about the snapshot directory being inside the home subvolume. Would that be a problem for rollback?
Indeed. Personally I wouldn't advise having home as a subvolume: reinstalling the system, filesystem corruption, and multiboot all become (more) problematic, for example. That being said, it's a valid use case. I'm afraid my system is offline at the moment, so it'll take me a while to get around to supporting it, but I'll get to it eventually.
I don't believe so; ~/.snapshots should be its own subvolume in any case, so it wouldn't be affected by a rollback. But what problems do you foresee?
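One quick way to verify that on a given machine is sketched below; the path is just an example and this is a read-only check, not something the tool does.

```sh
# Prints subvolume details if /home/.snapshots really is its own subvolume;
# errors out if it is only a plain directory.
btrfs subvolume show /home/.snapshots
```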
Indeed, I concur with your perspective. Nonetheless, I regularly synchronize the majority of my data elsewhere anyway. On devices with limited SSD capacity, like laptops, managing free space becomes simpler without partition splitting, so I typically use a single partition with subvolumes beneath it.
Indeed; to rephrase your words, using this script requires the filesystem to follow the expected Filesystem layout. However, Snapper's default layout (without a dedicated subvolume for snapshots) doesn't follow it.
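As a rough picture of the two layouts being discussed, using the Arch-style subvolume names that appear elsewhere in this thread (actual names on any given system may differ):

```
toplevel (subvolid=5)
├── @              root subvolume, mounted at /
├── @snapshots     dedicated snapshot subvolume (the "clean" layout this script targets)
└── @home          home subvolume, mounted at /home
    └── .snapshots snapper's default layout: snapshots nested inside the subvolume they belong to
```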
It looks like https://wiki.archlinux.org/title/Snapper#Restore_using_the_default_layout is what you're talking about here? That differs from this script's intended goal: recover a borked system while maintaining a clean subvolume layout, per the Arch wiki's description. I won't add support for this use case myself, but I'll be happy to review PRs. I'd also like to add support for `/home/` with `/home/.snapshots` as a subvolume, once I get some bandwidth.
> It looks like https://wiki.archlinux.org/title/Snapper#Restore_using_the_default_layout is what you're talking about here?
Yes, I wrote that section. I also tested it.
There may be a more sophisticated method, but I thought this form seemed more reliable.
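For context, the procedure in that wiki section, applied to `@home`, boils down to roughly the sketch below. The device path, mount point, and snapshot number 42 are placeholders; treat this as an illustration rather than the exact wiki text.

```sh
# Mount the top level of the btrfs filesystem (device path is a placeholder)
mount -o subvolid=5 /dev/sdXN /mnt

# Set the current home subvolume aside
mv /mnt/@home /mnt/@home-backup

# Recreate @home as a writable snapshot of the snapper snapshot to restore
# (42 is a placeholder for the snapshot number)
btrfs subvolume snapshot /mnt/@home-backup/.snapshots/42/snapshot /mnt/@home

# Move the snapshot directory back into the restored subvolume
mv /mnt/@home-backup/.snapshots /mnt/@home/

# Once everything checks out after a reboot, @home-backup can be cleaned up
```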
> I won't add support for this use case myself, but I'll be happy to review PRs. I'd also like to add support for `/home/` with `/home/.snapshots` as a subvolume, once I get some bandwidth.
Understood. I can't use Python properly yet, so I'm hoping for someone who can.
I'm wondering if consistency with the rootfs setup isn't worth pursuing instead. Moving a snapshot directory between subvolumes is what worries me; if instead the home layout mirrored the rootfs layout, that step wouldn't be needed.
Indeed, when using `mv` or `cp` between subvolumes on a Btrfs filesystem, even though the inodes differ, the underlying data is shared thanks to Btrfs reflinks, which alleviates concerns about running out of space during the copy.
https://btrfs.readthedocs.io/en/latest/Reflink.html
Based on inquiries in the #btrfs channel on irc.libera.chat, both `mv` and `cp` appear to have Copy-on-Write (CoW) as the default option, as indicated below.
- 5.18 kernel added support for cross-mountpoint reflink
- mv (coreutils) since 8.28, and cp defaulting to reflink since 9.0
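As a small illustration of the data sharing described above (paths are placeholders, and `--reflink=auto` is spelled out even though recent coreutils use it by default):

```sh
# Copy a file from one subvolume to another, sharing extents where possible
cp -a --reflink=auto /mnt/@home-backup/bigfile /mnt/@home/bigfile

# Inspect how much of the data is shared between the two copies
btrfs filesystem du /mnt/@home-backup/bigfile /mnt/@home/bigfile
```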
However, as you mentioned, managing different subvolume layouts and directory structures within the same script can undoubtedly increase complexity. Considering the critical nature of the scenarios being handled, it may indeed be more prudent to keep them separate.
My concern wasn't so much with space; you're right that the data itself wouldn't be copied thanks to CoW. The metadata, however, is a different story, as noted in the link you shared.
Increasing complexity is OK; I would be open to reviewing and accepting PRs as long as the new logic is separated out from the existing logic and properly encapsulated.
Certainly, the addition of the following step means that you are making that change: `mv /mnt/@home-backup/.snapshots /mnt/@home/`. When I did it, it only took a moment, just like a normal `mv`.
I think the OP is pretty much describing the same feature that I would like to see added. My fave Linux distro is SpiralLinux ( https://spirallinux.github.io/ ), which is based upon Debian. I prefer it over normal Debian because it has better BTRFS support than the official Debian installers, and it sets up a snapper-ready Btrfs subvolume layout by default. This is what the default subvolume layout of a SpiralLinux install with a couple of snapper snapshots looks like:
Maybe this would only work for those using this exact subvol config, but that's OK with me. The Timeshift GUI gives BTRFS users the option of easily restoring their @home subvolume at the same time as restoring the root @ subvol, so I'm hoping we can basically replicate that functionality in an easy manner with this script. If we're not asking for the same thing here, then I'll create a separate issue for this feature request. Thanks!
I have found this guide on how to roll back snapper @home snapshots under Arch, and I'd imagine it's pretty much exactly the same process with SpiralLinux / Debian: https://wiki.archlinux.org/title/snapper#Restore_using_the_default_layout It's quite tricky, so it would be handy to have a script to help automate it. SpiralLinux uses the same subvolume layout as is recommended for use with snapper on the Arch wiki. I have created a ticket with SpiralLinux to suggest that we try to get the @home subvol rollback process documented, or at least link to that guide on the Arch wiki.
The "Snapshots for the /home subvolume" section of this guide describes how to rollback @home subvolumes under Debian: |