### System information

| Type | Version/Name |
| --- | --- |
| Distribution | Debian Buster |
| Kernel | 4.19.208 (amd64) |
| OpenZFS | 2.0.8-12 |

### Describe the problem you're observing
Howdy!
I've got a bit of an odd one for you!
I'm trying to send snapshots from a pool of local disks to a pool backed by s3backer. I'm doing something like:

```sh
zfs send -L -c -i local_pool/fs@snap1 local_pool/fs@snap2 | zfs receive -s -F -d s3pool
```
Where I've got the s3pool mounted locally with these parameters (access key, etc. redacted):
```
--size=10T,--blockSize=1M,--listBlocks,--ssl,--debug,--debug-http,--directIO
```
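For completeness, the overall setup is roughly along these lines (bucket name and mount point are stand-ins, and the commands are reconstructed from the flags above rather than copied verbatim):

```sh
# Mount the bucket as a single large virtual block device via FUSE;
# s3backer exposes it as one file named "file" under the mount point.
s3backer --size=10T --blockSize=1M --listBlocks --ssl --debug --debug-http --directIO \
    my-backup-bucket /mnt/s3backer

# Build a single-vdev pool on top of that backing file.
zpool create s3pool /mnt/s3backer/file
```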
I'm experiencing pretty massive iowait within ZFS when sending bulk data through the bucket. The actual write speed appears to be about 100 MB/s, but according to tools like pv the stream between the two programs runs at about 1 GB/s. When I write more than 30 GB of data (the amount I can write in under 6 minutes, which will become apparent shortly), zed starts blowing up the logs with deadman timeouts.
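The ~1 GB/s number comes from dropping pv between the two commands, something like this (the exact pv invocation isn't important):

```sh
# pv shows the rate of the stream between send and receive,
# which sits around 1 GB/s while the pool itself only writes ~100 MB/s.
zfs send -L -c -i local_pool/fs@snap1 local_pool/fs@snap2 | pv | zfs receive -s -F -d s3pool
```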
Furthermore, when the writes actually complete, I get delay numbers that seem impossibly long:

```
5,3,2022-02-16T18:56:01.443117-08:00,host,zed: eid=221508 class=deadman pool='s3pool' vdev=file size=1048576 offset=1346134605824 priority=3 err=0 flags=0x184880 delay=11384119ms bookmark=556:11:0:71361
```

Here the delay reports as roughly 3 hours, when the send took about 20 minutes to complete.
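(For the arithmetic, the reported delay is in milliseconds, so it works out to a bit over three hours:)

```sh
echo "scale=2; 11384119 / 1000 / 3600" | bc   # prints 3.16 (hours)
```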
I sort of expect that making a zpool out of a single file backed by s3backer is a tad outside the usual ZFS use case, but it would be great if I could figure out a way to configure it so I don't blow up my logs every time I try to send a snapshot back up for DR.
Does anyone have a good solution for this? I looked at raising the deadman timeouts, but that wouldn't help much since I'd eventually outpace them. Is there some kind of behavior I can turn on to get ZFS to back off on write speed for this particular pool?
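For reference, every relevant knob I've found so far is a global module parameter rather than anything per-pool, which is part of the problem. A rough sketch of what I mean (values are purely illustrative):

```sh
# Raise the deadman thresholds (defaults are 300 s per zio and 600 s per sync pass),
# which only hides the symptom and applies to every pool on the host.
echo 1800000 > /sys/module/zfs/parameters/zfs_deadman_ziotime_ms
echo 3600000 > /sys/module/zfs/parameters/zfs_deadman_synctime_ms

# Or silence deadman reporting entirely, which also drops reports for the healthy local pool.
echo 0 > /sys/module/zfs/parameters/zfs_deadman_enabled

# The closest thing to a write back-off I can think of is shrinking the dirty data
# ceiling (here to 1 GiB), but that throttles every pool, not just s3pool.
echo $((1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max
```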