WIP: Initial ZFS clones support #82
Conversation
Awesome! I have been thinking about clone support for a long time, but it's pretty hard to do in a clean and correct way. That's why I have some issues with this implementation:
So to support cloning in every way imaginable, it gets VERY complicated. We assume cloning support will have to be enabled with --clone so we can do some extra checking and raise extra errors. When a new dataset is to be created on the target, there are three possibilities:
Now keep in mind that this needs to be recursive as well: e.g. a clone of a snapshot that is part of a dataset that itself is a clone should also work. So that will need some rewriting of code. Perhaps an option like --origin-snapshots that also sends over all snapshots that are origins would help (like a filtered version of --other-snapshots). Now my head is spinning, and I'm too tired to think about promoting/demoting clones. :) Let me know what you think about all this.
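The recursive case described here (a clone whose origin dataset is itself a clone) can be sketched in plain Python. This is purely illustrative, not zfs_autobackup's actual code; the function name and data layout are hypothetical:

```python
def required_origin_snapshots(dataset, origins):
    """Collect every origin snapshot that must exist on the target
    before `dataset` can be sent, walking the origin chain transitively.

    `origins` maps a dataset name to its origin snapshot
    ("pool/ds@snap"), or None when the dataset is not a clone.
    """
    needed = []
    current = dataset
    while True:
        origin = origins.get(current)
        if origin is None:
            return needed
        needed.append(origin)
        # The origin snapshot lives on its parent dataset; continue
        # there, since that dataset may itself be a clone.
        current = origin.split("@", 1)[0]

# A clone of a clone: sending "pool/c2" needs both origin snapshots.
origins = {
    "pool/base": None,
    "pool/c1": "pool/base@s1",
    "pool/c2": "pool/c1@s2",
}
print(required_origin_snapshots("pool/c2", origins))
# -> ['pool/c1@s2', 'pool/base@s1']
```

Something like this is roughly what a hypothetical --origin-snapshots filter would have to compute per selected dataset.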
Well, it's really an extreme situation, so why not make users plan their backups thoroughly? ;-)
Yes, sure, I should provide a more detailed error message, mentioning the destroy part.
We don't try to support promotion here, and dataset renaming and recreation are not handled correctly at the moment. Basically, this patch only covers my specific use case, but I thought it could be useful to someone.
I thought of --flatten-clones or --uncow-clones to keep the old behaviour.
Another option, then, is to keep the old (flatten/unCOW) behaviour by default, emit a warning ("Your clone {} is going to be unCOWed! Prepare to lose {} TB space in 30 seconds!"), and add a --clones-magic option.
This could be documented as a configuration unsupported with --clones-magic for now. A side thought: If zfs_autobackup had access to its full configuration, that is, a persistent storage (config file/zfs properties/...) as opposed to command line of the current single run, it could peek there for the other backup sets' info.
If it's selected, then it's already sent, as datasets are sorted by creation order.
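The ordering guarantee relied on here can be sketched in Python (illustrative names and values, not the project's real data structures): a clone can only be created from a snapshot that already exists, so its creation transaction is always later than its origin's, and a plain sort by creation order puts every origin before its clones.

```python
# Hypothetical dataset records; `createtxg` stands in for ZFS's
# monotonically increasing creation transaction group.
datasets = [
    {"name": "pool/clone", "createtxg": 120, "origin": "pool/base@s1"},
    {"name": "pool/base", "createtxg": 10, "origin": None},
]

# Sorting by creation order guarantees origins come before clones.
ordered = sorted(datasets, key=lambda d: d["createtxg"])
names = [d["name"] for d in ordered]
print(names)
# -> ['pool/base', 'pool/clone']
```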
How about:
I think this specific configuration is supported by the current patch.
It's a good name.
I'm busy with some other projects, but I will come back on this at some later time, after I have given it some more thought. I agree we can at least try to support bookmarks in most cases to start with, and later enhance it. And perhaps emit warnings or errors to alert the user on "unreachable" origins. It's in my nature to immediately start thinking about all possible edge cases that can cause failures. :)
I'll get started on this after 3.1.1 is released.
Force-pushed from 19de921 to e11c332.
Rebased against current master.
I accidentally already merged it. I was reviewing it, but decided I still need to fix some 3.1.x stuff first. Will re-merge/refactor it after those things, sorry.
It looks like this isn't implemented in 3.2.2. Without clone support I can't use it in my setup. Any news on when we can expect it?
@knuuuut Clones are supported, with the caveats outlined in the discussion above.
Hmm, but I don't get them as expected.
Target after command:
What's wrong here?
@tuffnatty Thank you!
This adds initial support for ZFS clones (#36). The datasets are now synced in creation order, thus any clone comes after its origin. If a clone dataset is selected for sync, its origin snapshot must already exist on the target node. If that's not the case, an error occurs, and the user is given a suggestion to retransfer the origin dataset with --other-snapshots.
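The origin check described above can be sketched as follows. All names here (the exception class, function, and message wording) are hypothetical illustrations of the behaviour the PR describes, not the project's actual API:

```python
class OriginMissingError(Exception):
    """Raised when a clone's origin snapshot is absent on the target."""


def check_clone_origin(origin_snapshot, target_snapshots):
    """Verify a clone's origin is present before sending it.

    `origin_snapshot` is "pool/ds@snap", or None for non-clones;
    `target_snapshots` is the set of snapshots present on the target.
    """
    if origin_snapshot is None:
        return  # not a clone, nothing to verify
    if origin_snapshot not in target_snapshots:
        raise OriginMissingError(
            "Origin snapshot {} not found on target; retransfer the "
            "origin dataset with --other-snapshots".format(origin_snapshot)
        )


check_clone_origin("pool/base@s1", {"pool/base@s1"})  # origin present: ok
try:
    check_clone_origin("pool/base@s1", set())  # origin missing: error
except OriginMissingError as e:
    print(e)
```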