
fix backtrace_storage slice index wrapping #54

Open · wants to merge 1 commit into base: master
Conversation

@danburkert commented:


This PR fixes a couple of instances of panics I encountered when
attempting to load and view a large data file with the cli server. I'm
not entirely sure the fixes are correct, since I don't fully understand
the intent of the code, in particular why `backtrace_offset` is
guaranteed to fit in a `u32` if `backtrace_offset + backtrace_len`
cannot.

Example panic backtrace:

```
thread '<unnamed>' panicked at 'slice index starts at 4294967290 but ends at 391', cli-core/src/data.rs:708:10
stack backtrace:
   0: rust_begin_unwind
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/std/src/panicking.rs:517:5
   1: core::panicking::panic_fmt
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/panicking.rs:93:14
   2: core::slice::index::slice_index_order_fail
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/slice/index.rs:48:5
   3: server_core::filter::match_allocation
   4: server_core::get_allocations::{{closure}}
             at ./server-core/src/lib.rs:652:31
   5: <core::iter::adapters::filter::Filter<I,P> as core::iter::traits::iterator::Iterator>::count::to_usize::{{closure}}
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/adapters/filter.rs:80:22
   6: core::iter::adapters::map::map_fold::{{closure}}
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/adapters/map.rs:84:28
   7: core::iter::adapters::map::map_fold::{{closure}}
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/adapters/map.rs:84:21
   8: core::iter::traits::iterator::Iterator::fold
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/traits/iterator.rs:2174:21
   9: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/adapters/map.rs:124:9
  10: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/adapters/map.rs:124:9
  11: <usize as core::iter::traits::accum::Sum>::sum
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/traits/accum.rs:42:17
  12: core::iter::traits::iterator::Iterator::sum
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/traits/iterator.rs:3000:9
  13: <core::iter::adapters::filter::Filter<I,P> as core::iter::traits::iterator::Iterator>::count
             at /rustc/25b764849625cb090e8b81d12d2bb2295d073788/library/core/src/iter/adapters/filter.rs:83:9
  14: server_core::get_allocations
             at ./server-core/src/lib.rs:650:9
  15: server_core::handler_allocations::{{closure}}
             at ./server-core/src/lib.rs:702:24
  16: server_core::async_data_handler::{{closure}}
             at ./server-core/src/lib.rs:208:9
```
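
For context on the failure mode: 4294967290 is `u32::MAX - 5`, so the end offset `backtrace_offset + backtrace_len` has evidently wrapped past `u32::MAX` down to 391. Below is a minimal Rust sketch of the wrap and of a widening fix; the function names and indexing code are hypothetical, not the actual cli-core source.

```
// Minimal reconstruction of the failure mode; all names here are
// hypothetical and do not match the actual cli-core code.

fn backtrace_slice_buggy(storage: &[u8], offset: u32, len: u32) -> &[u8] {
    // If the end offset is computed (or stored) as a u32, a large
    // `offset + len` wraps past u32::MAX, so the slice "ends" before
    // it starts and indexing panics with
    // "slice index starts at 4294967290 but ends at 391".
    let end = offset.wrapping_add(len);
    &storage[offset as usize..end as usize]
}

fn backtrace_slice_fixed(storage: &[u8], offset: u32, len: u32) -> &[u8] {
    // Widening to usize before the addition avoids the 32-bit wrap;
    // this is roughly the shape of the fix, though the actual change
    // in the PR may differ.
    let start = offset as usize;
    &storage[start..start + len as usize]
}

fn main() {
    let storage = vec![0u8; 512];
    assert_eq!(backtrace_slice_fixed(&storage, 8, 16).len(), 16);
    // backtrace_slice_buggy(&storage, 4294967290, 397) would panic
    // with exactly the message in the backtrace above.
    let _ = backtrace_slice_buggy(&storage, 8, 16);
}
```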
@koute (Owner) commented Aug 16, 2021:

Do you perhaps have a data file which you could send me and which panics here?

@danburkert (Author) commented:

Unfortunately I can't share the data file, sorry.

@danburkert (Author) commented:

If there's an easy way to strip the backtraces out, or otherwise replace them with placeholders, I'd be able to share. I'm not aware of any such tool, though.

@koute (Owner) commented Aug 17, 2021:

If you use the `postprocess` subcommand on your data file before loading it, is the panic still happening?

If so, you could probably apply this patch:

```
diff --git a/cli-core/src/postprocessor.rs b/cli-core/src/postprocessor.rs
index f7efaf4..9fa3304 100644
--- a/cli-core/src/postprocessor.rs
+++ b/cli-core/src/postprocessor.rs
@@ -43,7 +43,7 @@ pub fn postprocess< F, G, D, I  >( ifp: F, ofp: G, debug_symbols: I ) -> Result<
 
     let mut frames = Vec::new();
     let mut frames_to_write = Vec::new();
-    let mut emitted_strings = HashSet::new();
+    // let mut emitted_strings = HashSet::new();
     let mut expected_backtrace_id = 0;
     let mut expected_frame_id = 0;
     for event in event_stream {
@@ -139,22 +139,22 @@ pub fn postprocess< F, G, D, I  >( ifp: F, ofp: G, debug_symbols: I ) -> Result<
                     }
                 }
 
-                let library = intern!( frame.library() );
-                let raw_function = intern!( frame.raw_function() );
-                let function = intern!( frame.function() );
-                let source = intern!( frame.source() );
+                // let library = intern!( frame.library() );
+                // let raw_function = intern!( frame.raw_function() );
+                // let function = intern!( frame.function() );
+                // let source = intern!( frame.source() );
 
                 assert_eq!( frame_id, expected_frame_id );
                 expected_frame_id += 1;
 
                 Event::DecodedFrame {
                     address: frame.address().raw(),
-                    library,
-                    raw_function,
-                    function,
-                    source,
-                    line: frame.line().unwrap_or( 0xFFFFFFFF ),
-                    column: frame.column().unwrap_or( 0xFFFFFFFF ),
+                    library: 0xFFFFFFFF,
+                    raw_function: 0xFFFFFFFF,
+                    function: 0xFFFFFFFF,
+                    source: 0xFFFFFFFF,
+                    line: 0xFFFFFFFF,
+                    column: 0xFFFFFFFF,
                     is_inline: frame.is_inline()
                 }.write_to_stream( &mut ofp )?;
            }
```

...and then use the `postprocess` command to strip out the backtraces and your program's binary (which would otherwise be embedded in the data file). This will still leave a few minor details, such as your binary's name, the environment variables, and the memory map dumps (from /proc/self/maps), though those can also be sanitized with minor effort in the same file.
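
As an aside, the `0xFFFFFFFF` values in the patch act as "absent" sentinels in place of the real interned-string indices and line/column numbers (the unpatched code already writes them via `unwrap_or( 0xFFFFFFFF )`). A minimal sketch of that convention, using a hypothetical helper name:

```
// Hypothetical helper illustrating the sentinel convention used in
// the patch above: 0xFFFFFFFF stands in for "no value", so a reader
// maps it back to None.
fn decode_sentinel(raw: u32) -> Option<u32> {
    if raw == 0xFFFFFFFF { None } else { Some(raw) }
}

fn main() {
    assert_eq!(decode_sentinel(0xFFFFFFFF), None);
    assert_eq!(decode_sentinel(42), Some(42));
}
```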

@koute (Owner) commented Aug 17, 2021:

Actually, scratch that. I've added an `--anonymize` parameter to the `postprocess` command; you should be able to use it with `--anonymize=full` to strip away all of the backtraces, filenames, etc.
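
The invocation would presumably look something like the following; only `postprocess` and `--anonymize=full` are confirmed above, while the binary name, the output flag, and the file names are assumptions:

```
# Hypothetical invocation; the binary name, output flag, and file
# names are guesses. Only `postprocess` and `--anonymize=full` are
# confirmed in the comment above.
memory-profiler-cli postprocess --anonymize=full --output anonymized.dat original.dat
```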

@danburkert (Author) commented:

Thanks. It may take me a bit of time to send you a link. I tried to find the offending file yesterday, but it seems I may have deleted it. I was also running a version about three weeks old, so it's possible that the bug is already fixed on master. Regardless, I'll check back in if/when I hit this panic again. Feel free to close if that's useful.
