Replies: 6 comments
-
I can make some wild guesses about what is going on, but I can provide more useful feedback if you share your job file.
-
My original one, which created the mentioned 1TB file, was run from the CLI instead of a job file in this case. I then added --offset=100G and --offset_increment=100G to the same command, which is when I saw the behavior in my first post.
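Roughly, the command had this shape; the sizing and job-count flags are the ones that matter here, and the engine, block size, and depth below are placeholders:

```
# Placeholder shape: one shared 1TB file, 8 jobs reading it sequentially
fio --name=seqread --filename=testfile --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --numjobs=8 \
    --size=1T --group_reporting
```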
-
If I understand what you're trying to accomplish, I would suggest a command like the one below. Use filesize= to fix the size of the laid-out file and size= to cap how much I/O each job does; offset_increment= then spreads your jobs across that file.
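Scaled down to fit my smaller test system (the read options here are illustrative; the sizing flags are the point):

```
# filesize fixes the laid-out file; size caps each job's I/O;
# each job's start offset is job_number * offset_increment (numbering from 0)
fio --name=offsetread --filename=testfile --filesize=1G \
    --size=100M --offset_increment=100M --numjobs=8 \
    --rw=read --bs=1M --group_reporting
```

With 8 jobs, that reads eight disjoint 100M slices of a single 1G file, so nothing gets re-laid out once the file exists.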
-
Ahhh, alrighty, this makes more sense; I had size and filesize confused. I am using very large files intentionally: this is a ZFS system with 256GB of RAM, and I am trying to test something closer to direct SSD reads rather than reads served from the ARC (ZFS is getting direct I/O soon but doesn't have it yet). Thanks a ton, I greatly appreciate the response here. So, with that being said, would it make sense to do 1T as the filesize= setting?
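Something like this, just scaling your example back up? (I'm assuming the rest of the options carry over as-is.)

```
# Same shape scaled up: 8 jobs, each reading its own 100G slice of one 1T file
fio --name=offsetread --filename=testfile --filesize=1T \
    --size=100G --offset_increment=100G --numjobs=8 \
    --rw=read --bs=1M --group_reporting
```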
-
Yes, of course. I just swapped your T for G and your G for M because I don't have as much storage as you do on my test system. Also, I would run with
-
OK, perfect, I'll give these a run, thank you! This helps me a ton. I'm testing a pretty beefy server that's going to host a database over iSCSI, and since the database is bigger than the ARC, I want as much direct I/O as I can get for performance data.
-
I've read the manual, but the behavior I'm seeing is still not what I'd expect; I'm sure I'm misunderstanding something.
I am testing a NAS running ZFS with 256GB of RAM, so I have to use very large test files to measure actual SSD VDEV performance.
I created a 1TB file with FIO and ran a test against it, which worked as I expected. However, the ARC was used heavily during a sequential read of that file (which makes sense), so I wanted to set up an offset and an offset_increment to have each of my 8 threads grab a different part of the file, keeping the workload from being served entirely out of the ARC.
I set an offset of 100G and an increment of 100G as well. I left the test file name the same as before (to avoid re-writing 1TB), but the previous test file got deleted and FIO started "laying out the file."
From what the docs say, it sounds like each thread's offset should simply start at a different block of the already-created file; am I right about that, or am I misunderstanding something?
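(As I read it, with --offset=100G and --offset_increment=100G, thread 0 should start at 100G, thread 1 at 200G, and so on up to thread 7 at 800G, all within the existing 1TB file.)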
It also looked like it was going to write several roughly 100GB files instead of using the 1TB one.
What am I not getting here?
Thanks!