
Running out of memory when generating report for Bitcoin Core #1742

Open
dergoegge opened this issue Sep 13, 2024 · 2 comments
@dergoegge

Context

Currently, Bitcoin Core has ~200 fuzz harnesses, all of which are compiled into one big binary called fuzz. A harness is selected for fuzzing by setting the FUZZ environment variable, e.g. FUZZ=minisketch ./fuzz. Internally, this works through a global map from harness names to harness functions, with the requested harness looked up at startup.
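To illustrate the shape of that lookup (the real implementation is C++ inside Bitcoin Core; the Python sketch below and every name in it are purely illustrative):

```python
import os
import sys

# Global registry mapping harness names to harness functions.
HARNESSES = {}

def register(name):
    """Decorator used by each harness to add itself to the global map."""
    def wrap(fn):
        HARNESSES[name] = fn
        return fn
    return wrap

@register("minisketch")
def fuzz_minisketch(data: bytes) -> None:
    pass  # exercise minisketch with `data`

@register("process_messages")
def fuzz_process_messages(data: bytes) -> None:
    pass  # exercise message processing with `data`

def main() -> None:
    # The harness to run is only known at runtime, from the FUZZ environment
    # variable, so a static analysis of the binary cannot tell which harness
    # function will actually be called.
    name = os.environ.get("FUZZ", "")
    harness = HARNESSES.get(name)
    if harness is None:
        sys.exit(f"unknown or unset FUZZ harness: {name!r}")
    harness(sys.stdin.buffer.read())

if __name__ == "__main__":
    main()
```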

In oss-fuzz we hack around this (since oss-fuzz requires an individual binary per harness) by copying the original fuzz binary once for each of our harnesses and editing each copy to hardcode the harness it should fuzz (see https://github.com/google/oss-fuzz/blob/173cae95c4d1810ee56484a8e63e0cf6f990afeb/projects/bitcoin-core/build.sh#L83-L106). In the end, each of these binaries still performs the runtime lookup of its (hardcoded) harness function.

Our understanding is that these runtime lookups prevent introspector from statically determining which code is reachable from a given harness. This is reflected in the introspector report produced by oss-fuzz: https://storage.googleapis.com/oss-fuzz-introspector/bitcoin-core/inspector-report/20240911/fuzz_report.html. It only detects one harness and can't analyze it properly (see e.g. the call tree: https://storage.googleapis.com/oss-fuzz-introspector/bitcoin-core/inspector-report/20240911/calltree_view_0.html).

Issue

I've started working on splitting Bitcoin Core's monolithic fuzz binary into individual binaries that no longer include the runtime lookups (bitcoin/bitcoin#30882), so that we can start using introspector properly. The problem now is that introspector runs out of memory while generating the report (the machine has 128 GB of RAM).

I've been using this branch from my personal oss-fuzz fork and invoking the introspector build with python3 infra/helper.py introspector bitcoin-core --seconds 5 (this uses the corpora available in the image and also fuzzes for an additional 5 seconds).

2024-09-12 20:37:39.062 INFO fuzzer_profile - accummulate_profile: primitives_transaction: setting unreached funcs
2024-09-12 20:37:39.193 INFO fuzzer_profile - accummulate_profile: primitives_transaction: loading coverage
2024-09-12 20:37:39.193 INFO fuzzer_profile - _load_coverage: Loading coverage of type c-cpp
2024-09-12 20:37:39.193 INFO code_coverage - load_llvm_coverage: Loading LLVM coverage for target primitives_transaction
2024-09-12 20:37:39.195 INFO code_coverage - load_llvm_coverage: Found 191 coverage reports
2024-09-12 20:37:39.195 INFO code_coverage - load_llvm_coverage: Using the following coverages ['/src/inspector/primitives_transaction.covreport']
2024-09-12 20:37:39.195 INFO code_coverage - load_llvm_coverage: Reading coverage report: /src/inspector/primitives_transaction.covreport
2024-09-12 20:37:39.451 INFO code_coverage - load_llvm_coverage: found case outside a switch?!
  849|       |        // A special case for std::vector<bool>, as dereferencing
2024-09-12 20:37:39.532 INFO code_coverage - load_llvm_coverage: found case outside a switch?!
  849|       |        // A special case for std::vector<bool>, as dereferencing
2024-09-12 20:37:39.533 INFO code_coverage - load_llvm_coverage: found case outside a switch?!
  849|       |        // A special case for std::vector<bool>, as dereferencing
2024-09-12 20:37:39.533 INFO code_coverage - load_llvm_coverage: found case outside a switch?!
  849|       |        // A special case for std::vector<bool>, as dereferencing
2024-09-12 20:37:39.587 INFO fuzzer_profile - accummulate_profile: primitives_transaction: setting file targets
2024-09-12 20:37:39.587 INFO fuzzer_profile - accummulate_profile: primitives_transaction: setting total basic blocks
2024-09-12 20:37:39.588 INFO fuzzer_profile - accummulate_profile: primitives_transaction: setting cyclomatic complexity
2024-09-12 20:37:39.664 INFO fuzzer_profile - accummulate_profile: primitives_transaction: setting fd cache
2024-09-12 20:37:39.729 INFO fuzzer_profile - accummulate_profile: primitives_transaction: finished accummulating profile
Traceback (most recent call last):
  File "/fuzz-introspector/src/main.py", line 154, in <module>
    main()
  File "/fuzz-introspector/src/main.py", line 138, in main
    return_code = commands.run_analysis_on_dir(
  File "/fuzz-introspector/src/fuzz_introspector/commands.py", line 61, in run_analysis_on_dir
    introspection_proj.load_data_files(parallelise, correlation_file)
  File "/fuzz-introspector/src/fuzz_introspector/analysis.py", line 98, in load_data_files
    new_profiles.append(return_dict[idx])
  File "<string>", line 2, in __getitem__
  File "/usr/local/lib/python3.8/multiprocessing/managers.py", line 835, in _callmethod
    kind, result = conn.recv()
  File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 421, in _recv_bytes
    return self._recv(size)
  File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 385, in _recv
    raise OSError("got end of file during message")
OSError: got end of file during message
ERROR:__main__:Building fuzzers failed.
ERROR:__main__:Failed to build project with introspector

From dmesg:

[26423.649108] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=docker-931199ccd18a61f132ab621226f676e0b35c0f379a6644fc00617f6c92e47baf.scope,mems_allowed=0,global_oom,task_memcg=/system.slice/docker-931199ccd18a61f132ab621226f676e0b35c0f379a6644fc00617f6c92e47baf.scope,task=python3,pid=174073,uid=0
[26423.649118] Out of memory: Killed process 174073 (python3) total-vm:99688120kB, anon-rss:98703620kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:194004kB oom_score_adj:0

I've tried limiting the whole process to one CPU core, in the hope that avoiding parallelism would also avoid the excessive memory use, but that didn't help.

@dergoegge

I tried reverting the multiprocessing changes for profile accumulation, since that approach keeps all profiles in memory multiple times (roughly the pattern sketched below).
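For illustration, this is roughly the pattern I mean (a minimal sketch, not fuzz-introspector's actual code; the file names are made up): each worker builds a full profile, the Manager process keeps a pickled copy in its shared dict, and the parent deserializes yet another copy when it collects the results, so peak memory holds several copies of every profile. It also matches the traceback above: if the manager process gets OOM-killed, the parent's pipe read fails with "got end of file during message".

```python
import multiprocessing

def load_profile(idx, path, return_dict):
    # Stand-in for parsing one data file into a profile; real profiles for
    # Bitcoin Core are far larger than this.
    profile = {"target": path, "functions": [f"func_{i}" for i in range(100_000)]}
    # Assigning into the manager dict pickles the profile and ships it to the
    # manager process, which then holds its own copy.
    return_dict[idx] = profile

def load_all(paths):
    manager = multiprocessing.Manager()
    return_dict = manager.dict()
    jobs = [multiprocessing.Process(target=load_profile, args=(i, p, return_dict))
            for i, p in enumerate(paths)]
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()
    # Reading each entry deserializes yet another full copy in the parent.
    # If the manager process has been OOM-killed by this point, this read
    # raises OSError("got end of file during message") in the parent.
    return [return_dict[i] for i in range(len(paths))]

if __name__ == "__main__":
    profiles = load_all(["harness_a.data.yaml", "harness_b.data.yaml"])
    print(len(profiles))
```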

With that change, fuzz-introspector still runs out of memory, but at a later stage, during the creation of the HTML report (I think?):

2024-09-30 15:30:44.639 INFO debug_info - extract_all_functions_in_debug_info: Extracting functions
2024-09-30 15:30:45.420 INFO debug_info - extract_all_functions_in_debug_info: Extracting functions
2024-09-30 15:30:48.702 INFO debug_info - extract_all_functions_in_debug_info: Extracting functions
2024-09-30 15:30:49.420 INFO debug_info - extract_all_functions_in_debug_info: Extracting functions
2024-09-30 15:30:52.701 INFO debug_info - extract_all_functions_in_debug_info: Extracting functions
2024-09-30 15:30:52.969 INFO debug_info - load_debug_all_yaml_files: Set base loader to use CSafeLoader
2024-09-30 17:53:40.388 INFO debug_info - load_debug_all_yaml_files: Set base loader to use CSafeLoader
/usr/local/bin/compile: line 333: 20421 Killed                  python3 /fuzz-introspector/src/main.py report $REPORT_ARGS
ERROR:__main__:Building fuzzers failed.
Building project failed

From dmesg:

[Mon Sep 30 19:43:29 2024] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/docker-fcf7e99325ff30022b7afc78c1455e2f3d9244091aae82afc2dceace340e0b05.scope,task=python3,pid=3224215,uid=0
[Mon Sep 30 19:43:29 2024] Out of memory: Killed process 3224215 (python3) total-vm:130907700kB, anon-rss:129127460kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:253660kB oom_score_adj:0
[Mon Sep 30 19:43:32 2024] oom_reaper: reaped process 3224215 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

@DavidKorczynski self-assigned this Oct 9, 2024
@DavidKorczynski

Apologies for the delay here @dergoegge -- I'm looking at reducing the memory overhead!
