-
Fragmentation is produced by specific application workloads. One prominent example is torrents: blocks are downloaded in random order, which creates fragmentation because ZFS largely writes data in the order of arrival (grouped by transaction groups), not in the logical order within the files. With the Transmission torrent client, at least, it is possible to set up a separate small pool for incoming files; completed files are then moved to the main pool sequentially, without fragmentation. In theory you can reduce some fragmentation by pushing the transaction group size up hard, since within each TXG data are written in logical order, but you can only do that up to a certain percentage of your ARC size. The default is 10% or 4 GB, so there is some room for tuning, but not much. With torrents, fragmentation is produced by hours and days of workload, not seconds, and you can't stretch a transaction group to hours.
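For what it's worth, here is roughly what that Transmission setup looks like in settings.json. The paths are placeholders: /scratch stands in for the small incoming pool and /tank for the main one. Because the two directories sit on different pools, the "move" on completion is actually a sequential copy, which is what rewrites the file without fragmentation:

```json
{
  "download-dir": "/tank/media/torrents",
  "incomplete-dir": "/scratch/torrents-incomplete",
  "incomplete-dir-enabled": true
}
```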
-
Thanks. I've been playing around with this today. I have the disks sitting idle for a good 15 seconds or so and then a burst of write activity that saturates all four spindles, then back to idle, repeat. I started with
Years ago I had a sweet set of DTrace scripts for BSD that would clearly show TXG events and sizes, ZIL activity, and so forth. However, DTrace on Linux seems like an afterthought and doesn't enjoy much popularity or support. How are people getting relevant telemetry from Linux beyond
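One option: OpenZFS on Linux exposes per-pool TXG history as a kstat under /proc/spl/kstat/zfs/&lt;pool&gt;/txgs. A minimal polling sketch, assuming that kstat is present on your build ("tank" is a placeholder pool name):

```python
#!/usr/bin/env python3
# Poll the per-pool TXG kstat that OpenZFS on Linux exposes and print
# each newly committed transaction group. "tank" is a placeholder.
import time

KSTAT = "/proc/spl/kstat/zfs/tank/txgs"

def read_txgs():
    """Parse the kstat table into one dict per TXG row, skipping any preamble."""
    rows, header = [], None
    with open(KSTAT) as f:
        for line in f:
            cols = line.split()
            if cols and cols[0] == "txg":            # column-header line
                header = cols
            elif header and len(cols) == len(header):
                rows.append(dict(zip(header, cols)))
    return rows

seen = set()
while True:
    for row in read_txgs():
        # "C" in the state column marks a committed TXG
        if row["state"] == "C" and row["txg"] not in seen:
            seen.add(row["txg"])
            print(f"txg {row['txg']}: ndirty={int(row['ndirty']) // 1024} KiB, "
                  f"nwritten={row['nwritten']} bytes")
    time.sleep(1)
```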
-
I recall reading (years ago) a discussion surrounding ZFS tunings to minimize fragmentation -- possibly at the expense of data security and overall performance. I think it went along the lines of highly delayed writes to maximize coalescence.
Am I imagining this?
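That sounds like the TXG tunables mentioned above: on OpenZFS for Linux you can raise zfs_txg_timeout (the maximum seconds between forced TXG syncs, default 5) and zfs_dirty_data_max (the bytes of dirty data allowed to accumulate) so that more writes coalesce before hitting disk. A hypothetical sketch, not a recommendation -- the values below are placeholders, and delaying TXGs widens the window of asynchronous writes lost on a crash (synchronous writes still go through the ZIL):

```
# /etc/modprobe.d/zfs.conf -- placeholder values, applied at module load
options zfs zfs_txg_timeout=30            # sync a TXG at most every 30 s (default 5)
options zfs zfs_dirty_data_max=8589934592 # allow up to 8 GiB of dirty data

# The same knobs can be changed at runtime, e.g.:
#   echo 30 > /sys/module/zfs/parameters/zfs_txg_timeout
```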