L2ARC only using ~200GB of a 1TB device #14782
Unanswered
FallingSnow
asked this question in
Q&A
Replies: 1 comment 1 reply
-
Actually, records stored in the L2ARC can be smaller than 128 kB, so the ratio can be smaller. You can inspect the RAM overhead by looking at arcstats, which should give you a good estimate of the RAM used by L2ARC headers.
You can adjust the fraction of ARC used for L2 headers if you know what you are doing; see zfs_arc_meta_limit_percent.
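The reply points at arcstats for measuring the actual header overhead. A minimal sketch of reading it, assuming OpenZFS on Linux, where the kstat text lives at /proc/spl/kstat/zfs/arcstats and the L2ARC header counter is named l2_hdr_size (the sample text below is made up for illustration):

```python
def l2_header_bytes(arcstats_text: str) -> int:
    """Return the ARC bytes consumed by L2ARC headers (l2_hdr_size)."""
    for line in arcstats_text.splitlines():
        fields = line.split()
        # kstat lines are "name type data"; skip the header and malformed lines
        if len(fields) == 3 and fields[0] == "l2_hdr_size":
            return int(fields[2])
    raise KeyError("l2_hdr_size not found in arcstats")

# Hypothetical sample in the kstat "name type data" layout:
sample = """\
name                            type data
l2_size                         4    214748364800
l2_hdr_size                     4    123456789
"""
print(l2_header_bytes(sample))
```

On a live system you would pass the real file instead, e.g. `l2_header_bytes(open("/proc/spl/kstat/zfs/arcstats").read())`.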
-
I've noticed a strange relationship between the size of the ARC and the L2ARC. With a 4 GB ARC, the L2ARC filled to only 100 GB over about a day of usage; it reached that point fairly quickly and then stopped growing. When I increased the ARC to 8 GB I immediately saw the L2ARC climbing again, but it has leveled off at ~200 GB, twice what it was with a 4 GB ARC. My ARC hit rate is between 50% and 90%, depending on what I'm doing at the moment.
Now my understanding is that with my record size (128K), the L2ARC indexes (which take up space in ARC) should only use about 550 MB of ARC, per the formula 1 TB / 128 kB × 70 B ≈ 550 MB. So there should be plenty of room for the L2ARC to continue to grow. Is there some kind of formula that limits the size of the L2ARC based on the size of ARC? Why isn't my L2ARC growing?
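The arithmetic above can be checked directly. The 70 bytes per L2ARC record is the figure cited in the question (the actual per-record header cost varies by OpenZFS version), and this is the best case where every cached record is a full 128 kB:

```python
# Back-of-the-envelope L2ARC header overhead, following the question's formula.
L2ARC_BYTES = 10**12        # 1 TB cache device (decimal TB)
RECORDSIZE  = 128 * 1024    # 128 kB records (best case: all records full-size)
HDR_BYTES   = 70            # assumed ARC cost per L2ARC record, from the question

records  = L2ARC_BYTES // RECORDSIZE   # ~7.6 million records
overhead = records * HDR_BYTES         # ~534 MB, close to the ~550 MB estimate
print(records, overhead)
```

Smaller records shrink the effective "record size" in the denominator, so a cache full of, say, 16 kB records would need roughly 8× the header RAM, which is the ratio caveat the reply raises.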