Hi,
I'm seeing some weird behavior that I've asked about before, back when I thought it was an issue with the ZFS <-> NFS server interaction. I'm asking again because I now have an example that seems to be a pure ZFS issue. After replicating to another server, I see this difference. First, the source server (I had to scrub some values to protect the innocent):
Here’s a new view of our ZFS replication issue. On storage4 (the user_apps master):
On the TARGET host, we see that the snapshot is there and exactly as expected, but the parent filesystem still shows old values.
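To make that concrete, here is roughly the comparison we are doing on the target, assuming POSIX ACLs (getfacl); the path and snapshot name are hypothetical stand-ins for the scrubbed values:

# The copy inside the snapshot shows the expected ACL, the live path does not (illustrative paths)
root@apps2:~# getfacl /tank/user_apps/.zfs/snapshot/2021-01-15-120000/project1
root@apps2:~# getfacl /tank/user_apps/project1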
Why aren't those ACLs updating from the snapshot?
We are using znapzend to handle the replication of these filesystems, replicating hourly. There is a second slave server, identical to this one and also updated by znapzend, which is working fine; its ACLs look exactly like those on the upstream source server.
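For reference, each hourly znapzend run boils down to an incremental send/receive along these lines. The pool, dataset, and snapshot names here are made up, and the exact flags znapzend passes may differ; this is just a sketch of the mechanism:

# Illustrative equivalent of one hourly replication cycle (names are hypothetical)
root@storage4:~# zfs snapshot -r tank/user_apps@2021-01-15-130000
root@storage4:~# zfs send -I tank/user_apps@2021-01-15-120000 tank/user_apps@2021-01-15-130000 | ssh apps2 zfs receive -F tank/user_apps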
The systems are all CentOS, using zfs-kmod from the repo:

root@apps2:~# rpm -qa | grep zfs
kmod-zfs-0.8.6-1.el7.x86_64
libzfs2-0.8.6-1.el7.x86_64
zfs-release-1-7.9.noarch
zfs-0.8.6-1.el7.x86_64
root@apps2:~# uname -a
Linux apps2 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
The filesystems are exported via NFS.
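If it matters, this is the sort of thing we check on the export side; whether a given dataset is shared via the sharenfs property or /etc/exports is an assumption here, not a statement of our exact setup:

# Check how the dataset is being exported (illustrative dataset name)
root@apps2:~# zfs get sharenfs tank/user_apps
root@apps2:~# exportfs -v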
I plan to try rebooting this server over the weekend to see if that will clear this up. We've been struggling with similar issues to this for over a year now and have found no clear pattern as to why/when this happens. Would love to know what we're doing wrong here.
Thanks,
griznog