Replies: 4 comments 2 replies
-
Could be related: #11353
-
Since I think this is a bug, I've created an issue here: #14917
-
What ashift do you usually use?
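For reference, a quick way to check what an existing pool is actually using (the pool name tank is taken from the commands earlier in the thread):
# Pool-level ashift property (0 means it was auto-detected at creation time)
zpool get ashift tank
# Per-vdev ashift as recorded in the pool configuration
zdb -C tank | grep ashift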
-
Try to reduce parallel writes:
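A minimal sketch of one way to do that on Linux, assuming this refers to the per-vdev async write queue depth; the parameter and value below are only illustrative:
# Lower the maximum number of concurrent async writes issued per vdev (default is 10)
echo 2 | sudo tee /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
# Verify the current value
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active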
-
The Issue
The performance of zfs send drops significantly when using raidz layouts compared to the pool read/write performance. To highlight the problem I use:
zfs send -L tank@1 | pv > /dev/null
The setup
10 HDD drives: 18 TB @ 260 MB/s each
Here are some benchmarks to understand the problem (all figures in MB/s):

| Layout | zfs send | fio read | fio write |
| --- | --- | --- | --- |
| stripe 10*1 | 830 | 1800 | 2800 |
| mirror 5*2 | 900 | 1700 | 1500 |
| raidz 1*10 | **180 !!** | 1300 | 1500 |
| raidz 2*5 | **210 !!** | 1100 | 1700 |
| raidz2 1*10 | **220 !!** | 1400 | 1600 |
Remarks:
- The zfs receive side does not exhibit the same behavior; its speeds are fine.
- This might go unnoticed on NVMe: I achieved 3 GB/s there, so zfs send might not be the bottleneck of the replication process in that case.
Reproduce the issue
Pool setup
# stripe 10*1
zpool create -O recordsize=1m tank /dev/disk/by-vdev/hdd{01..10}
# mirror 5*2
zpool create -O recordsize=1m tank mirror hdd01 hdd02 mirror hdd03 hdd04 mirror hdd05 hdd06 mirror hdd07 hdd08 mirror hdd09 hdd10
# raidz 1*10
zpool create -O recordsize=1m tank raidz /dev/disk/by-vdev/hdd{01..10}
# raidz 2*5
zpool create -O recordsize=1m tank raidz /dev/disk/by-vdev/hdd{01..05} raidz /dev/disk/by-vdev/hdd{06..10}
# raidz2 1*10
zpool create -O recordsize=1m tank raidz2 /dev/disk/by-vdev/hdd{01..10}
The test
# Create a 16 GB file with fio sequential writes (4 jobs x 4 GB)
fio --ioengine=libaio --name=a --group_reporting=1 --eta-newline=1 --iodepth=16 --direct=1 --bs=1M --filename=/tank/a.dat --numjobs=4 --size=4G --offset_increment=4G --rw=write
zfs snapshot tank@1
# Clear the cache (ARC)
zpool export tank; zpool import tank
# fio sequential read
fio --ioengine=libaio --name=a --group_reporting=1 --eta-newline=1 --iodepth=16 --direct=1 --bs=1M --filename=/tank/a.dat --numjobs=4 --size=4G --offset_increment=4G --rw=read
# Clear the cache again
zpool export tank; zpool import tank
# The zfs send test
zfs send -L tank@1 | pv > /dev/null
Context
I use zfs send to replicate the dataset.
We have a few terabytes written and erased every day; it is a simple n-day backup rotation.
Until now I used mirror setups, but when trying the raidz layouts I noticed very low performance with zfs send on HDD.
Unfortunately, with such low speeds, there is no way we can replicate the data within the time window.
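A minimal sketch of what such a rotation looks like; the host, target dataset and snapshot names below are placeholders, not taken from the actual setup:
# Day 1: full send of the first snapshot to the backup host
zfs send -L tank@day1 | ssh backup-host zfs receive -u backup/tank
# Following days: incremental send between the previous and the current snapshot
zfs snapshot tank@day2
zfs send -L -i tank@day1 tank@day2 | ssh backup-host zfs receive -u backup/tank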
ZFS version
Ubuntu 22.04.2 LTS
zfs-2.1.5-1ubuntu6~22.04.1
zfs-kmod-2.1.5-1ubuntu6~22.04.1
Is the issue specific to my setup?
No.
I've reproduced the issue on a fresh install on another machine with a newer version of Ubuntu and ZFS:
Ubuntu 23.04
zfs-2.1.9-2ubuntu1
zfs-kmod-2.1.9-2ubuntu1
(edit) Also reproduced the same performance numbers on a fresh install of an older Debian with an older ZFS.
(edit) Also reproduced the same performance numbers on a fresh install of FreeBSD and ZFS.
Summary
I think this is a performance bug.
Do you experience the same issue?
Is there any flag to mitigate this?
(edit) Hardware details