Replies: 3 comments 4 replies
-
I don't really know, but I suspect it could be fragmentation, among other things. In case the reason is fragmentation, we can try to see if the following idea helps:
We may consider adding an API call to the library to initialize it for multi-threading, which will define (1) (and possibly other one-time things).
We can also experiment with
Maybe due to (BTW, I'm also working on improving the library, including saving memory, reducing cache-table accesses, converting lists to arrays, avoiding random accesses, etc.)
-
Can the vector memory just be deleted all at once at certain points, like
-
I'm closing this discussion, since this appears to be "normal" behavior, and I don't see any easy alternatives. So: "noted" and "interesting", but "won't fix".
-
I'm seeing a harmless but alarming behavior: virtual memory has blown up to 616 GB, but RSS stays at 40 GB. Virtual is much larger than what the machine has in physical RAM, but the OS seems happy, because RSS is low (and there is no swap usage). Any clue what is causing this? (As I wrote this, I decided it is a badly fragmented heap...)
The system is using about 20 threads to do LG parsing from a huge dictionary, and it has now run for 16 hours. Most of the 40 GB RSS is NOT LG, but other things. However, during parsing I sometimes hit #1402 (Explosive RAM usage), which is now fixed, in that RAM usage is now clamped to a reasonable value. But with 20 threads, if half of them are "unlucky", RSS does get large -- up to 165 GB -- but then it shrinks back down after sentence delete.
But Virt... well, it doesn't grow quickly, but overnight it got large... While I was writing this, it shrank from 616 GB to 595 GB. Any clue what this is about? Is this a badly fragmented heap? I guess it has to be... OK, never mind, that is what it must be. Huh.