zfs_autobackup and TrueNAS compatibility issues with mounted datasets #254
For point 3, try zfs-autobackup v3.3 beta. Maybe this also fixes 6, but maybe that's just a TrueNAS issue?
Hi psy0rz,

Thanks a lot for your response. I agree it is a bit tricky to say whose issue this is (TrueNAS or zfs-autobackup). I will try to raise a TrueNAS bug report for this as well later, though I fear they'll just dismiss it because I'm not supposed to run external scripts like this on "an appliance".

I actually did do all my latest testing with v3.3 beta 3. Attached is the output of my two --debug runs, from which I extracted all the manual commands for my testing. I'm afraid '3.' wasn't fixed in this version (see the first run: it only complained that the dataset was modified, but didn't roll it back).

Thanks a lot for your hard work, by the way. Much appreciated. I first started scripting this myself in bash, but I could never have done it as well as you have over the years ;)

I already created a bash wrapper script that works around issues 5. and 6. Issue 3. I cannot work around, as it happens in the middle of your script (but it also only occurs during the first initial replication). Example below:
Try setting canmount=noauto on the target datasets to see if this solves the issue. ZFS probably tries to mount the datasets, and creates a mountpoint directory in a higher dataset. (e.g. https://github.com/psy0rz/zfs_autobackup/wiki/Mounting#target-side) Also use the
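As a hedged sketch of the suggestion above (the dataset name is taken from this report; adjust it to your own layout), setting canmount=noauto on the target side would look something like:

```shell
# Prevent the received dataset from auto-mounting on the target,
# so ZFS does not create a stray mountpoint directory in a parent dataset.
# "master-pool/encrypted-ds/test-ds" is an assumed example name.
zfs set canmount=noauto master-pool/encrypted-ds/test-ds

# Verify the property took effect:
zfs get canmount master-pool/encrypted-ds/test-ds
```

Note that with canmount=noauto the dataset must be mounted explicitly (e.g. with zfs mount) when you actually need to access its contents.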
Hi,
I'm using your script on TrueNAS Scale Dragonfish and found 2 issues when replicating to a mounted dataset.
After running the script with --debug and trying all commands one by one manually, I was able to figure out the missing commands to make things work properly.
This is the zfs_autobackup command I used for my replication:
```
root@truenas-backup:~# autobackup-venv/bin/python -m zfs_autobackup.ZfsAutobackup --verbose --debug --ssh-config ../../../.ssh/config --ssh-target truenas-master --rollback --keep-source=0 --keep-target=0 --allow-empty --snapshot-format {}-%Y-%m-%d_%H-%M --zfs-compressed --decrypt --clear-refreservation --strip-path 2 --exclude-received --other-snapshots test master-pool/encrypted-ds
```
Below is a summary of the individual commands that are required to make it work on TrueNAS (I skipped all the snapshot, hold, and release commands during my testing, as they are not relevant for these issues):
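The actual command list is not preserved in this excerpt. Purely as a hypothetical illustration of what a manual first replication of this kind typically involves (all dataset names and flags below are assumptions based on the zfs_autobackup invocation above, not the reporter's actual commands):

```shell
# HYPOTHETICAL sketch only -- not the commands from the original report.
# Manually replicate an initial snapshot to the mounted target dataset:
zfs send master-pool/encrypted-ds/test-ds@test-2024-01-01_00-00 | \
    ssh truenas-master zfs recv -x refreservation master-pool/encrypted-ds/test-ds

# After receiving, the dataset may need to be (re)mounted on the target
# before the TrueNAS middleware can query it:
ssh truenas-master zfs mount master-pool/encrypted-ds/test-ds
```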
Below are the 3 errors that TrueNAS throws when not remounting the dataset:
```
[EFAULT] Failed retreiving USER quotas for master-pool/encrypted-ds/test-ds
[EFAULT] Failed retreiving GROUP quotas for master-pool/encrypted-ds/test-ds
[ENOENT] Path /mnt/master-pool/encrypted-ds/test-ds not found
```
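All three errors are consistent with the dataset not being mounted at its expected path under /mnt. Assuming that is the cause (an assumption based on the ENOENT message, not confirmed in this report), remounting it would look roughly like:

```shell
# Assumption: the errors clear once the dataset is mounted again.
# For an encrypted dataset, the key may need to be loaded first:
zfs load-key master-pool/encrypted-ds/test-ds   # only if the key is not loaded
zfs mount master-pool/encrypted-ds/test-ds

# Confirm the mountpoint now exists where TrueNAS expects it:
zfs get mounted,mountpoint master-pool/encrypted-ds/test-ds
```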