special_small_blocks expected behaviour? #15255
Replies: 2 comments
-
It depends on the ZFS "flavour" you use. I did see it merged before covid, and the documentation says as much if you look it up, but on e.g. Arch Linux there seems to be a divergence between what the docs describe and what the package manager ships.
I wouldn't worry about the 1M question in your case: as you have shown, even at the 1M size you are already using twice the planned capacity on the special vdev. If your data isn't skewed in some particular way now compared to future workloads, you will run out of special device space when the pool is only half filled (i.e. if you have …).
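A quick way to check what the build you are actually running does (just a sketch; "tank" below is a placeholder pool name, substitute your own):

```sh
# Userland and kernel-module versions of the OpenZFS build actually in use
zfs version

# Property documentation shipped with that build
# (older releases document the properties in zfs(8) instead)
man zfsprops

# What the dataset is actually configured to
zfs get recordsize,special_small_blocks tank

# How much is already allocated on the special mirror vs. the data vdevs
zpool list -v tank
```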
-
Thank you for the insight, I appreciate it.
-
I set up a brand new pool with 3x HDD in raidz and a mirror special device with 2x SSD. I set a recordsize of 4M and a special_small_blocks value of 2M. I started filling the pool, which is being used by a Proxmox Backup Server to store data.
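For reference, a rough sketch of how the pool was created; the device paths below are placeholders, not the exact ones I used:

```sh
# 3x HDD raidz for data, 2x SSD mirror as the special allocation class
zpool create tank \
  raidz   /dev/sda /dev/sdb /dev/sdc \
  special mirror /dev/nvme0n1 /dev/nvme1n1

# 4M records for the backup data; blocks at or below 2M are meant to land
# on the special vdev (recordsizes above 1M may need the zfs_max_recordsize
# module parameter raised on older releases)
zfs set recordsize=4M tank
zfs set special_small_blocks=2M tank
```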
I expected the zdb histogram's 2M asize cumulative value to equal the data stored on my special device plus metadata. It appears that in fact the 1M value is what is being stored; does this look correct? I saw that in earlier versions there might have been a < vs <= change, but that was a while back and I am using a fairly recent version of ZFS, I believe. This might just be a misunderstanding on my part, but I thought the histogram would show block sizes, and that blocks equal to or smaller than the special_small_blocks value would be stored on the special device.
[Screenshot: zdb histogram]
[Screenshot: zfs special device stored data]
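The numbers above came from commands along these lines ("tank" again being a placeholder for the pool name):

```sh
# Block statistics; the block-size histogram (with the cumulative asize
# column mentioned above) is printed at the higher -b verbosity levels
zdb -bbb tank

# Allocated space per vdev, including the special mirror, to compare against
zpool list -v tank
```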