Replies: 3 comments 9 replies
-
This could be handled by BRT (block cloning). You can just copy from one
dataset to the other with 'cp --reflink=auto'. BRT, though, is not considered
production-stable at this point and is disabled by default on ZFS 2.2; there
are a lot of bug fixes for this feature in 2.2.3-staging. The good part is
that once you delete the original dataset the reflink count drops to 1
and BRT is disengaged. The problem is that the feature stays enabled
on the pool, so some other code can still make reflink copies, which can
potentially explode the zfs send stream. This can be solved by disabling BRT
again via a kernel module parameter, so that no new reflinks can be created.
The other problem is that you cannot go back to an older version of ZFS,
so you are stuck on ZFS 2.2 and above; you may want to consider that
carefully before running 'zpool upgrade'.
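A minimal sketch of that workflow, assuming the pool is named pool1 and the VM images are plain files (the dataset and file names here are made up):

```sh
# Block cloning ships disabled by default on ZFS 2.2; enable it
# temporarily (module parameter name as of OpenZFS 2.2)
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_bclone_enabled

# Create the target dataset and reflink-copy one VM's files into it;
# --reflink=auto falls back to a plain copy if cloning is unavailable
zfs create pool1/VM0-DATA
cp --reflink=auto /pool1/ALL-VMs/vm0.img /pool1/VM0-DATA/

# After verifying the copies and deleting the originals, disable
# block cloning again so nothing else creates new reflinks
echo 0 | sudo tee /sys/module/zfs/parameters/zfs_bclone_enabled
```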
--
Regards,
Ivan
-
Wouldn't you snapshot the master, then zfs clone it to the new dataset and promote the clone?
-
master -> snapshot -> clone -> promote. Reflinks, as mentioned in one of the previous comments, are not "production-safe".
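A minimal sketch of that sequence for one VM dataset (names are illustrative):

```sh
# Snapshot the master dataset
zfs snapshot pool1/ALL-VMs@split

# Clone it into the new per-VM dataset; the clone shares blocks with
# the snapshot, so no data is copied and no extra space is consumed
zfs clone pool1/ALL-VMs@split pool1/VM0-DATA

# Promote the clone so it owns the shared snapshot history and
# pool1/ALL-VMs becomes the dependent clone instead
zfs promote pool1/VM0-DATA
```

Note that the clone initially contains every VM's files, so you would prune the unrelated ones from it afterwards, and only one dataset at a time can own the origin snapshot after promotion, so splitting into many datasets this way needs some care.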
-
Initially the system was planned as a single zpool/zfs.
Now the decision has been made to split the data by some criteria, but several TB are in use and drive usage is at about 75% ...
pool1/ALL-VMs
want to have
pool1/VM0-DATA + pool1/VM1-DATA + pool1/VM2-DATA + pool1/VM3-DATA + pool1/VM4-DATA + ...
Every VM's data must then be backed up (sent/received) to a different "cold" server and synchronized with the current master on a daily basis (there are no plans to shut off the current master).
A full copy takes a huge amount of time, as does sending the full stream over the network to every endpoint, even the data unrelated to that destination.
A local zfs send/receive would also take too long, and the result requires twice the space (and with the pool at ca. 75% usage that is not even possible).
So how should one go about splitting such a pool into sub-datasets?
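For reference, once such a split exists, the per-dataset daily sync described above could look something like this sketch (the host "cold0" and the snapshot names are made up):

```sh
# One-time full replication of one VM dataset to its cold server
zfs snapshot pool1/VM0-DATA@base
zfs send pool1/VM0-DATA@base | ssh cold0 zfs receive pool1/VM0-DATA

# Daily incremental: only the blocks changed since the previous
# snapshot cross the network
zfs snapshot pool1/VM0-DATA@daily-2024-02-17
zfs send -i @base pool1/VM0-DATA@daily-2024-02-17 | \
    ssh cold0 zfs receive -F pool1/VM0-DATA
```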