
Continuation of a gist: out of memory and/or munmap_chunk(): invalid pointer #35

Closed
acuifex opened this issue Jun 29, 2021 · 8 comments

@acuifex

acuifex commented Jun 29, 2021

Look here for context:
ckolivas/lrzip#200
https://gist.github.com/acuifex/2e8ceb5076379a0ace20ccea7590f072

I was running it with about a gig of RAM and ~600 MiB of swap already used; it consumed everything that was left free.

@acuifex
Author

acuifex commented Jun 29, 2021

Just ran it again to confirm: from 750 MiB of RAM and 750 MiB of swap to ~99% of both used.

@pete4abw
Owner

free output please. Not verbiage.

@acuifex
Author

acuifex commented Jun 29, 2021

[screenshot of memory usage]

It hasn't changed much since last time.

[screenshot of memory usage]

@pete4abw
Owner

You simply don't have enough RAM or swap. Using -L9 -z -T, you are doomed. Adding -U can't work.

lrzip-next calculates memory based on total memory, and its sliding map is based on that too. In your case the calculation is wrong, because I think lrzip-next should consider available memory as well as total memory.
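For illustration only, here is a minimal, hypothetical sketch (not lrzip-next's actual code) of reading both MemTotal and MemAvailable from /proc/meminfo on Linux; sizing work buffers from the first number alone can easily overshoot what is really usable:

/* Hypothetical sketch: read MemTotal and MemAvailable (in kB) from
 * /proc/meminfo on Linux. Illustration only; lrzip-next's real
 * memory sizing code is not shown here. */
#include <stdio.h>
#include <string.h>

static int read_meminfo(long *total_kb, long *avail_kb)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];

    if (!f)
        return -1;
    *total_kb = *avail_kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (!strncmp(line, "MemTotal:", 9))
            sscanf(line + 9, "%ld", total_kb);
        else if (!strncmp(line, "MemAvailable:", 13))
            sscanf(line + 13, "%ld", avail_kb);
    }
    fclose(f);
    return (*total_kb > 0 && *avail_kb >= 0) ? 0 : -1;
}

int main(void)
{
    long total, avail;

    if (read_meminfo(&total, &avail) == 0)
        printf("MemTotal: %ld kB, MemAvailable: %ld kB\n", total, avail);
    return 0;
}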

I'll have to look at that. Thank you for the report.

In your case, forget -L9, forget zpaq. Use the default settings for now.

@pete4abw
Owner

pete4abw commented Jul 1, 2021

In reviewing the above, you have only 134 MB of free memory and only 4 GB available. lrzip-next should take free and/or available memory into account. But starting the program with 134 MB of free memory should cause an immediate fail. lrzip-next needs RAM to work - lots of RAM! In the meantime, please use -T## (with an explicit threshold value) or don't use the -T option at all. The purpose of threshold testing is to make lrzip-next faster by not passing incompressible data to the backend.
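As a concrete illustration of that idea (a hypothetical sketch assuming liblz4, not lrzip-next's actual test code): compress the chunk with LZ4 first, and only pass it to the expensive backend if the quick test achieves at least the requested savings.

/* Hypothetical threshold test sketch using liblz4 (illustration only).
 * threshold_pct = 95 means "only bother the slow backend if LZ4 can
 * already shrink the chunk to 95% of its size or less". */
#include <stdlib.h>
#include <lz4.h>

static int worth_compressing(const char *chunk, int len, int threshold_pct)
{
    int bound = LZ4_compressBound(len);
    char *tmp = malloc(bound);
    int csize, ok = 0;

    if (!tmp)
        return 1;               /* can't test; let the backend try anyway */
    csize = LZ4_compress_default(chunk, tmp, len, bound);
    /* 64-bit math so large chunks don't overflow the comparison */
    if (csize > 0 && (long long)csize * 100 <= (long long)len * threshold_pct)
        ok = 1;                 /* quick LZ4 pass shows enough savings */
    free(tmp);
    return ok;                  /* 0 = store the chunk uncompressed instead */
}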

@acuifex
Author

acuifex commented Jul 1, 2021

"Free" doesn't mean that's the only memory available; it means memory not used by processes or by the system cache. You should look at available memory.
Some chunks can be incompressible by lz4 but compressible by zpaq (I can't find an example right now, but I remember that being the case).
lrzip doesn't use as much memory. I'm going to guess that's because of bigger chunks in your fork and/or some chunks not being released after use.

Also, in these pictures I had Firefox and a bunch of other stuff open.

@pete4abw
Owner

pete4abw commented Jul 1, 2021

Of course. But lrzip-next will still look only at total RAM, not free or available or even swap. As I said, this IS a bug, as the sliding map should not run out of memory. But the sliding map taking up almost as much RAM as you have available IS a problem. And, as the evaluation below shows, ZPAQ has a problem too.
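For context on what the sliding map does, here is a minimal, hypothetical sketch of the general technique (not lrzip-next's implementation): keep a fixed-size mmap window over the input and remap it as reading moves forward, rather than mapping the whole file at once.

/* Hypothetical sliding-mmap window sketch (illustration only).
 * A fixed-size read-only window is kept over a large file; when the
 * reader moves outside it, the old window is unmapped and a new one
 * is mapped at a page-aligned offset covering the requested position. */
#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    int fd;            /* file being read */
    void *map;         /* current window, or NULL if none */
    off_t map_off;     /* page-aligned file offset of the window */
    size_t map_len;    /* fixed window size in bytes */
} sliding_map;

/* Ensure file offset 'want' lies inside the mapped window. */
static int slide_to(sliding_map *sm, off_t want)
{
    long page = sysconf(_SC_PAGESIZE);
    off_t aligned = want - (want % page);

    if (sm->map && want >= sm->map_off &&
        want < sm->map_off + (off_t)sm->map_len)
        return 0;                        /* already covered */

    if (sm->map)
        munmap(sm->map, sm->map_len);    /* release the old window */

    sm->map = mmap(NULL, sm->map_len, PROT_READ, MAP_SHARED,
                   sm->fd, aligned);
    if (sm->map == MAP_FAILED) {
        sm->map = NULL;
        return -1;
    }
    sm->map_off = aligned;
    return 0;
}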

As for zpaq being able to compress incompressible data, sure it can. But what is your return? Here's a 1 GB random file. I should mention I have NVMe, 16 GB RAM, and 16 GB swap, so throughput to disk is very fast.

Comp Level   Threshold   Comp Method   Comp Ratio   Comp Time
9            100         LZMA          1.000        0:10.14
9            -T          LZMA          1.000        1:18.19
9            100         ZPAQ          1.000        0:10.56
9            -T          ZPAQ          ERR          ~20:00.00
7            -T          ZPAQ          1.000        6:37.78

ZPAQ took all RAM and ate into swap as well, just for a 1 GB file. The free output below was taken DURING compression. It failed with a BUS ERROR, so ZPAQ needs a look too. This will be tough to track down. Recommend not using -T for now.

L9 -T zpaq memory

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            15Gi        14Gi       218Mi       181Mi       705Mi       452Mi
Swap:           15Gi       1.2Gi        14Gi

L7 -T zpaq memory

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            15Gi       8.5Gi       5.2Gi       182Mi       1.7Gi       6.4Gi
Swap:           15Gi       951Mi        15Gi

And, after all that work by ZPAQ, it could not compress anything but stream 0.

Stream: 0
Block   Comp    Percent Size
1       zpaq    0.3%    154 / 48010     Offset: 1048576121      Head: 0
Stream: 1
Offset: 30
Block   Comp    Percent Size
1       none    100.0%  209715200 / 209715200   Offset: 56      Head: 209715239
2       none    100.0%  209715200 / 209715200   Offset: 209715269       Head: 419430452
3       none    100.0%  209715200 / 209715200   Offset: 419430482       Head: 629145665
4       none    100.0%  209715200 / 209715200   Offset: 629145695       Head: 838860878
5       none    100.0%  209715200 / 209715200   Offset: 838860908       Head: 1048576258
6       none    0.0%    0 / 0   Offset: 1048576288      Head: 0

Lots of work ahead... Thanks again for sharing this. Your feedback and help are appreciated!

@pete4abw
Owner

pete4abw commented Sep 1, 2021

I'm closing this issue. There's not much I can do when -T or even -U might be used. The default zpaq block sizes are lower than what I set for -L9, so maybe I can have a look at that at some point. Right now, I recommend NOT using -T. Better yet, if you use something like -T95 to -T99, it won't try to compress anything for which lz4 can't estimate at least a 1%-5% savings.

@pete4abw pete4abw closed this as completed Sep 1, 2021