
S3 sink broken on FreeBSD when using buffer.type = "disk" #2966

Closed
0xYUANTI opened this issue Jul 7, 2020 · 29 comments
Assignees
Labels
domain: buffers (Anything related to Vector's memory/disk buffers)
have: must (We must have this feature, it is critical to the project's success. It is high priority.)
type: bug (A code related bug.)

Comments

@0xYUANTI

0xYUANTI commented Jul 7, 2020

Hi,
I'm trying out Vector 0.9.2 on FreeBSD 12.1.
With this config

[sources.my_file]
type                = "file"
file_key            = ""
include             = ["/svc/vector/test.json"]
oldest_first        = true
max_line_bytes      = 1000000 #1MB
max_read_bytes      = 1000000 #1MB

[transforms.my_json_parser]
type                = "json_parser"
inputs              = ["my_file"]

[sinks.my_s3]
type                 = "aws_s3"
inputs               = ["my_json_parser"]
bucket               = "my.bucket"
key_prefix           = "my/"
batch.timeout_secs   = 10
encoding.codec       = "ndjson"
compression          = "gzip"
region               = "eu-west-1"
filename_append_uuid = false
filename_extension   = "json.gz"
filename_time_format = "%Y/%m/%d/%H%M%S"

everything works fine, but when I add

buffer.type          = "disk"
buffer.max_size      = 10490000
buffer.when_full     = "block"

Vector immediately starts using 100% CPU, no events are added to any batch (I'm running with -vv), and Vector hangs on shutdown after a TERM signal (it needs to be KILLed).

As an immediate workaround, I guess I don't need a disk buffer when I'm ingesting from a log file (i.e. I assume Vector's file checkpointing takes into account whether events have actually been submitted successfully)?

I can set up SSH access to a test box if that helps.

cheers!

@binarylogic added this to the Move to Github Actions milestone Jul 7, 2020
@binarylogic added the domain: buffers, domain: operations, have: must, and type: bug labels Jul 7, 2020
@Hoverbear
Contributor

Hi @cannedprimates,

That's not good! Is this on ZFS or XFS? Can you show me your test json?

@0xYUANTI
Author

0xYUANTI commented Jul 9, 2020

It's a DO box with ZFS. I sent the test.json to [email protected] :)

@Hoverbear
Contributor

Thanks! I'm going to try to reproduce.

@Hoverbear
Contributor

So, it seems this doesn't happen with a file or console sink:

[screenshot]

@Hoverbear
Contributor

Unless this is AWS-specific (my test uses DO Spaces), I can't seem to replicate it.

[screenshot]

@0xYUANTI
Author

Hm, I am using AWS S3, not DO Spaces. I'll set up a minimal test instance!

@0xYUANTI
Author

0xYUANTI commented Jul 14, 2020

Hm, I did manage to replicate (using the test.json file).

1) Manually started a DO instance (FreeBSD 12.1, ZFS, 6 CPU, 16 GB, London)

2) SSHed in and
# freebsd-update fetch install
# pkg update
# pkg install ca_root_nss duo fish jq lsof mg mosh openjdk11 pstree py37-boto runit tmux tree
# pkg install gmake
# portsnap fetch extract
# pkg install  git bash rust
# wget https://github.com/timberio/vector/archive/v0.9.2.tar.gz
# tar xf v0.9.2.tar.gz
# cd vector-0.9.2
# gmake build
# mv target/release/vector /home/freebsd

3) cat run.sh
#!/bin/sh
chpst -e ./env ./vector --require-healthy -vv --config ./vector.toml

4) cat vector.toml
data_dir            = "/home/freebsd/var"

[sources.my_file]
type                = "file"
file_key            = ""
include             = ["/home/freebsd/test.json"]
oldest_first        = true
max_line_bytes      = 1000000 #1MB
max_read_bytes      = 1000000 #1MB

[transforms.my_json_parser]
type                = "json_parser"
inputs              = ["my_file"]

[sinks.my_s3]
type                 = "aws_s3"
inputs               = ["my_json_parser"]
bucket               = "my.bucket"
key_prefix           = "test/"
batch.timeout_secs   = 10
encoding.codec       = "ndjson"
compression          = "gzip"
region               = "eu-west-1"
filename_append_uuid = false
filename_extension   = "json.gz"
filename_time_format = "%Y/%m/%d/%H%M%S"
#buffer.type          = "disk"
#buffer.max_size      = 10490000
#buffer.when_full     = "block"

# eof

5) ls env
AWS_ACCESS_KEY_ID       AWS_SECRET_ACCESS_KEY

6) ./run
#works as expected

7) rm -rf var/*

8) uncomment buffer.*

9) ./run 
#100% cpu etc

@0xYUANTI
Author

Um, I also just noticed: when I run without the buffer.* settings, the file uploaded to S3 contains all the events but doesn't seem to be gzipped?? Very weird, since I've been running Vector for a small production service with essentially the same S3 config (longer batch.timeout_secs) and those files are definitely gzipped!
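(For reference, one quick way to check whether an uploaded object is really gzipped; this is only a sketch, the object key below is a made-up placeholder following the key_prefix/filename settings above, and it assumes the AWS CLI is available:)

# Download one object and test it locally (bucket and key are placeholders).
aws s3 cp s3://my.bucket/my/2020/07/07/120000.json.gz sample.json.gz
file sample.json.gz                 # should report "gzip compressed data"
gzip -t sample.json.gz && echo "valid gzip" || echo "not gzipped"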

@binarylogic
Contributor

Thanks for letting us know, @cannedprimates. I've opened #3064 to investigate the gzip issue. That shouldn't have changed.

@Hoverbear
Contributor

Hmmm, that's roughly what I did. I think the only difference is that I used the official Vector packages from pkg install vector that @myfreeweb maintains. (Very expertly, I might add!)

Could I invite you to try that? I think it might make deployment easier, too. :)

@valpackett

Well, the package is also a bit out of date right now. Maybe there was some kind of regression between 0.7.1 and 0.9.2.

If doing what you did but with https://github.com/timberio/vector/archive/v0.7.1.tar.gz doesn't reproduce the issue, you should clone the git repo and do a git bisect to find the regression.
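Roughly like this (just a sketch; the good/bad tags are simply the versions mentioned above, and it reuses the gmake build from your steps):

git clone https://github.com/timberio/vector.git && cd vector
git bisect start
git bisect bad v0.9.2     # version that shows the 100% CPU spin
git bisect good v0.7.1    # assumed-good older version
# at each step, build the checked-out commit and run your repro:
gmake build && ./target/release/vector -vv --config ./vector.toml
git bisect good           # or "git bisect bad", depending on the result
# repeat until git reports the first bad commit, then:
git bisect reset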

But before that: where is it spinning?

To check userspace stacks, run under a debugger: lldb ./vector -- -vv --config ./vector.toml, then c to continue, hit Ctrl-C to interrupt, and bt to look at the backtrace.

To check kernel stacks, set sysctl kern.tty_info_kstacks=1 and hit Ctrl-T (SIGINFO) when running vector in the foreground (preferably not under a debugger). Or run procstat -k <PID-of-your-vector-process>.
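Put together as one snippet (the PID at the end is a placeholder):

# userspace stacks: run under lldb, let it spin, then interrupt
lldb ./vector -- -vv --config ./vector.toml
#   (lldb) c        <- continue and wait for the 100% CPU state
#   Ctrl-C          <- interrupt
#   (lldb) bt       <- backtrace of the stopped thread

# kernel stacks: enable kstacks in the SIGINFO (Ctrl-T) output,
# or query a running process directly by PID
sysctl kern.tty_info_kstacks=1
procstat -k 12345   # replace 12345 with vector's PID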

@Hoverbear
Contributor

I'll poke around and try to reproduce. :)

@Hoverbear
Contributor

Confirmed, I can replicate what you're seeing, @cannedprimates.
[screenshot]

From procstat:

root@2966:~ # procstat -k 99200
  PID    TID COMM                TDNAME              KSTACK
99200 100406 vector              -                   mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
99200 100710 vector              quanta-upkeep       mi_switch sleepq_catch_signals sleepq_timedwait_sig _sleep kern_clock_nanosleep sys_nanosleep amd64_syscall fast_syscall_common
99200 100728 vector              -                   mi_switch sleepq_catch_signals sleepq_wait_sig _sleep kqueue_kevent kern_kevent kern_kevent_generic freebsd11_kevent amd64_syscall fast_syscall_common
99200 100729 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
99200 100744 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep kqueue_kevent kern_kevent kern_kevent_generic freebsd11_kevent amd64_syscall fast_syscall_common
99200 100745 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
99200 100748 vector              tokio-runtime-worke <running>
99200 100818 vector              hyper-dns0          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
99200 100823 vector              -                   mi_switch sleepq_catch_signals sleepq_wait_sig _sleep kqueue_kevent kern_kevent kern_kevent_generic freebsd11_kevent amd64_syscall fast_syscall_common
99200 100824 vector              hyper-dns1          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
99200 100825 vector              hyper-dns2          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
99200 100829 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
99200 100835 vector              hyper-dns3          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
99200 100839 vector              hyper-dns0          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
99200 100840 vector              hyper-dns1          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
99200 100843 vector              hyper-dns2          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
99200 101469 vector              hyper-dns3          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common

It seems like Vector is unable to close the file source? Very interesting.

[screenshot]

@Hoverbear
Contributor

Hoverbear commented Jul 14, 2020

Going to see if I can replicate on XFS based on that.

@valpackett

(Do you mean UFS?)

Yeah, the kernel stacks aren't that interesting; we need userspace ones (interrupt vector under LLDB and view the backtrace).

@Hoverbear
Contributor

Hoverbear commented Jul 14, 2020

Yeah, UFS, sorry. :)

root@2966:~ # ./run.sh
chpst -e ./env lldb ./vector -- --require-healthy -vv --config ./vector.toml
(lldb) target create "./vector"
c
Current executable set to './vector' (x86_64).
(lldb) settings set -- target.run-args  "--require-healthy" "-vv" "--config" "./vector.toml"
(lldb) process launch
Process 11609 launching
Process 11609 launched: '/root/vector' (x86_64)
Jul 14 20:51:56.361  INFO vector: Log level "trace" is enabled.
Jul 14 20:51:56.363  INFO vector: Loading configs. path=["vector.toml"]
Jul 14 20:51:56.364 TRACE vector: Parsing config. path="vector.toml"
Jul 14 20:51:56.386  INFO vector: Vector is starting. version="0.9.2" git_version="" released="Tue, 14 Jul 2020 20:47:16 +0000" arch="x86_64"
Jul 14 20:51:56.427 DEBUG vector::tls::settings: Fetching system root certs.
Jul 14 20:51:56.428 DEBUG vector::sinks::util::rusoto: using default credentials provider for AWS.
Jul 14 20:51:56.441 DEBUG vector::tls::settings: Fetching system root certs.
Jul 14 20:51:56.442 DEBUG vector::sinks::util::rusoto: using default credentials provider for AWS.
Jul 14 20:51:56.444  INFO vector::topology: Running healthchecks.
Jul 14 20:51:56.446 DEBUG http: vector::sinks::util::http: sending request. uri=https://ams3.digitaloceanspaces.com/hoverbear method=HEAD
Jul 14 20:51:57.446 DEBUG http: vector::sinks::util::http: response. status=200 version=HTTP/1.1
Jul 14 20:51:57.454  INFO vector::topology::builder: Healthcheck: Passed.
Jul 14 20:51:57.460  INFO vector::topology: All healthchecks passed.
Jul 14 20:51:57.460  INFO vector::topology: Starting source "my_file"
Jul 14 20:51:57.461  INFO vector::topology: Starting transform "my_json_parser"
Jul 14 20:51:57.461  INFO source{name=my_file type=file}: vector::sources::file: Starting file server. include=["/root/test.json"] exclude=[]
Jul 14 20:51:57.461  INFO vector::topology: Starting sink "my_s3"
Jul 14 20:51:57.465  INFO source{name=my_file type=file}:file_server: file_source::file_server: Found file to watch. path="/root/test.json" file_position=0
.........
Jul 14 20:51:57.781 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="/root/test.json" bytes=247
Jul 14 20:51:57.782 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="/root/test.json" bytes=4210
Jul 14 20:51:57.782 TRACE source{name=my_file type=file}: vector::internal_events::file: received one event. self.file=/root/test.json rate_limit_secs=10
Jul 14 20:51:57.805 TRACE source{name=my_file type=file}: vector::internal_events::file: "received one event." is being rate limited. rate_limit_secs=5
Process 11609 stopped
* thread #1, name = 'vector', stop reason = signal SIGSTOP
    frame #0: 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92
libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3:
->  0x806749edc <+92>: retq
    0x806749edd <+93>: int3
    0x806749ede <+94>: int3
    0x806749edf <+95>: int3
(lldb) bt
error: need to add support for DW_TAG_base_type '()' encoded with DW_ATE = 0x7, bit_size = 0
* thread #1, name = 'vector', stop reason = signal SIGSTOP
  * frame #0: 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92
    frame #1: 0x0000000806747512 libthr.so.3`___lldb_unnamed_symbol159$$libthr.so.3 + 482
    frame #2: 0x00000000066796e9 vector`std::sys::unix::condvar::Condvar::wait::h86f5865dd8d44fd5(self=0x0000000806cf7690, mutex=0x0000000806cf7688) at condvar.rs:73:17
    frame #3: 0x000000000666e40e vector`std::sys_common::condvar::Condvar::wait::h5d0eb6f1bd70a377(self=0x0000000806cf7690, mutex=0x0000000806cf7688) at condvar.rs:50:9
    frame #4: 0x000000000667977b vector`std::sync::condvar::Condvar::wait::h4259c4ed1de6e177(self=0x0000000806d12e28, guard=MutexGuard<()> @ 0x00007fffffff5760) at condvar.rs:200:13
    frame #5: 0x000000000666dc2f vector`futures::task_impl::std::ThreadNotify::park::h9e751d5c706c01fe(self=0x0000000806d12e10) at mod.rs:513:17
    frame #6: 0x0000000002c77988 vector`futures::task_impl::std::_$LT$impl$u20$futures..task_impl..Spawn$LT$F$GT$$GT$::wait_future::_$u7b$$u7b$closure$u7d$$u7d$::hf37463e2e32fffcc(notify=0x0000000806d135a0) at mod.rs:237:40
    frame #7: 0x0000000002c771f7 vector`futures::task_impl::std::ThreadNotify::with_current::_$u7b$$u7b$closure$u7d$$u7d$::h0f0d696b58ce6886(notify=0x0000000806d135a0) at mod.rs:483:45
    frame #8: 0x0000000002c57670 vector`std::thread::local::LocalKey$LT$T$GT$::try_with::h8898932c8b6ed9bd(self=0x00000000066f8fb0, f=closure-0 @ 0x00007fffffff5aa0) at local.rs:263:16
    frame #9: 0x0000000002c56457 vector`std::thread::local::LocalKey$LT$T$GT$::with::h0b602e2ff1325be4(self=0x00000000066f8fb0, f=closure-0 @ 0x00007fffffff5b28) at local.rs:239:9
    frame #10: 0x0000000002c7717a vector`futures::task_impl::std::ThreadNotify::with_current::h4e4ec5ba465dbf1a(f=closure-0 @ 0x00007fffffff5b58) at mod.rs:483:9
    frame #11: 0x0000000002c7eb54 vector`futures::task_impl::std::_$LT$impl$u20$futures..task_impl..Spawn$LT$F$GT$$GT$::wait_future::h32c0e886024555e9(self=0x00007fffffff5bb8) at mod.rs:233:9
    frame #12: 0x0000000002c872d4 vector`futures::future::Future::wait::h7b9dc1180b81864a(self=(inner = core::option::Option<(futures::future::poll_fn::PollFn<closure-6>, futures::future::poll_fn::PollFn<closure-7>)> @ 0x00007fffffff5c00)) at mod.rs:300:9
    frame #13: 0x0000000002c5d613 vector`vector::main::h021f06a408bf5d1d at main.rs:312:24
    frame #14: 0x0000000002c7f22e vector`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h668a714620cfdad6 at rt.rs:67:34
    frame #15: 0x0000000006698822 vector`std::rt::lang_start_internal::had29a9c0e6ac7609 + 354
    frame #16: 0x0000000002c7f212 vector`std::rt::lang_start::h9e782001b86a0a41(main=(vector`vector::main::h021f06a408bf5d1d at main.rs:151), argc=5, argv=0x00007fffffffea68) at rt.rs:67:5
    frame #17: 0x0000000002c6ac7b vector`main + 43
    frame #18: 0x0000000002c4c10b vector`_start(ap=<unavailable>, cleanup=<unavailable>) at crt1.c:76:7

This leads me to suspect something related to our async/signal handling.

@valpackett

No, I think you're just looking at the main thread, which is not the one spinning at 100% CPU. You need to switch to the tokio-runtime-worker that was <running> in procstat. (use thr l, thr sel <N>)

@Hoverbear
Contributor

@myfreeweb It's been a while since I used lldb directly (CLion spoils me); your pointers are appreciated. 😁

@Hoverbear
Contributor

Hoverbear commented Jul 14, 2020

root@2966:~ # procstat -k 11801
  PID    TID COMM                TDNAME              KSTACK
11801 100377 vector              -                   mi_switch thread_suspend_switch ptracestop cursig ast doreti_ast
11801 100475 vector              tokio-runtime-worke mi_switch thread_suspend_check ast doreti_ast
11801 100505 vector              quanta-upkeep       mi_switch sleepq_catch_signals sleepq_timedwait_sig _sleep kern_clock_nanosleep sys_nanosleep amd64_syscall fast_syscall_common
11801 100554 vector              -                   mi_switch sleepq_catch_signals sleepq_wait_sig _sleep kqueue_kevent kern_kevent kern_kevent_generic freebsd11_kevent amd64_syscall fast_syscall_common
11801 100799 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
11801 101238 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep kqueue_kevent kern_kevent kern_kevent_generic freebsd11_kevent amd64_syscall fast_syscall_common
11801 101266 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
11801 101290 vector              tokio-runtime-worke mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
11801 101389 vector              hyper-dns0          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
11801 101390 vector              hyper-dns1          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
11801 101398 vector              hyper-dns2          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
11801 101406 vector              hyper-dns3          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
11801 101425 vector              hyper-dns0          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_wait __umtx_op_wait_uint_private amd64_syscall fast_syscall_common
11801 101426 vector              hyper-dns1          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
11801 101428 vector              hyper-dns2          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
11801 101434 vector              -                   mi_switch sleepq_catch_signals sleepq_wait_sig _sleep kqueue_kevent kern_kevent kern_kevent_generic freebsd11_kevent amd64_syscall fast_syscall_common
11801 101437 vector              hyper-dns3          mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
(lldb) thr l
Process 11801 stopped
  thread #1: tid = 100377, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'vector', stop reason = signal SIGSTOP
  thread #2: tid = 100505, 0x00000008069b293a libc.so.7`__sys_nanosleep + 10, name = 'quanta-upkeep'
  thread #3: tid = 100554, 0x00000008068f599a libc.so.7`kevent + 10, name = 'vector'
* thread #4: tid = 100799, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'tokio-runtime-wo'
  thread #5: tid = 101238, 0x00000008068f599a libc.so.7`kevent + 10, name = 'tokio-runtime-wo'
  thread #6: tid = 101266, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'tokio-runtime-wo'
  thread #7: tid = 101290, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'tokio-runtime-wo'
  thread #16: tid = 101389, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns0'
  thread #17: tid = 101390, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns1'
  thread #18: tid = 101398, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns2'
  thread #19: tid = 101406, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns3'
  thread #20: tid = 101425, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns0'
  thread #21: tid = 101426, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns1'
  thread #22: tid = 101428, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns2'
  thread #23: tid = 101437, 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92, name = 'hyper-dns3'
  thread #24: tid = 100475, 0x000000000361c7a1 vector`alloc::collections::btree::node::NodeRef$LT$BorrowType$C$K$C$V$C$Type$GT$::as_leaf::hc4397aaa8072803f(self=0x00007fffdf3ed1d0) at node.rs:322:6, name = 'tokio-runtime-wo'
  thread #25: tid = 101434, 0x00000008068f599a libc.so.7`kevent + 10, name = 'vector'

Thread 4

(lldb) bt
* thread #4, name = 'tokio-runtime-wo'
  * frame #0: 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92
    frame #1: 0x0000000806747512 libthr.so.3`___lldb_unnamed_symbol159$$libthr.so.3 + 482
    frame #2: 0x000000000669bbaf vector`std::thread::park::h18f9aedde87d1a00 + 207
    frame #3: 0x00000000044da248 vector`futures_executor::local_pool::run_executor::_$u7b$$u7b$closure$u7d$$u7d$::hc0acb78ac8d421a4(thread_notify=0x000000080c76a260) at local_pool.rs:96:17
    frame #4: 0x0000000003f13ffd vector`std::thread::local::LocalKey$LT$T$GT$::try_with::h92bed61f3285b751(self=0x0000000006768db8, f=closure-0 @ 0x00007fffdfbf7540) at local.rs:263:16
    frame #5: 0x0000000003ef72e6 vector`std::thread::local::LocalKey$LT$T$GT$::with::h10e80cffb47a4edb(self=0x0000000006768db8, f=closure-0 @ 0x00007fffdfbf7598) at local.rs:239:9
    frame #6: 0x00000000044d9dc1 vector`futures_executor::local_pool::run_executor::h2dfd8e79f0e1fc8b(f=closure-0 @ 0x00007fffdfbf75d8) at local_pool.rs:83:5
    frame #7: 0x00000000044da551 vector`futures_executor::local_pool::block_on::h33a506951719ae27(f=<unavailable>) at local_pool.rs:317:5
    frame #8: 0x0000000003e93ea4 vector`file_source::file_server::FileServer::run::hdae16e341b07fc10(self=FileServer @ 0x00007fffdfbfa8d8, chans=Compat01As03Sink<futures::sink::map_err::SinkMapErr<futures::sync::mpsc::Sender<(bytes::bytes::Bytes, alloc::string::String)>, fn(futures::sync::mpsc::SendError<(bytes::bytes::Bytes, alloc::string::String)>)>, (bytes::bytes::Bytes, alloc::string::String)> @ 0x00007fffdfbfa978, shutdown=Compat01As03<vector::shutdown::ShutdownSignal> @ 0x00007fffdfbfaa50) at file_server.rs:269:26
    frame #9: 0x00000000046be463 vector`vector::sources::file::file_source::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h1e2e0a62d86f3009 at mod.rs:306:13
    frame #10: 0x00000000033662b8 vector`_$LT$tokio..runtime..blocking..task..BlockingTask$LT$T$GT$$u20$as$u20$core..future..future..Future$GT$::poll::h60819455096b0f17(self=Pin<&mut tokio::runtime::blocking::task::BlockingTask<closure-3>> @ 0x00007fffdfbfadd0, _cx=0x00007fffdfbfaec0) at task.rs:30:21
    frame #11: 0x0000000003a071d9 vector`tokio::task::core::Core$LT$T$GT$::poll::h842408a2a91bdecd(self=0x0000000809d8e170, header=0x0000000809d8e140) at core.rs:128:13
    frame #12: 0x00000000046a4441 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h66d42a3b2317cf20 at harness.rs:119:27
    frame #13: 0x000000000350bced vector`core::ops::function::FnOnce::call_once::h11ace97f8426d178((null)=closure-0 @ 0x00007fffdfbfb078, (null)=<unavailable>) at function.rs:232:5
    frame #14: 0x0000000003c19496 vector`_$LT$std..panic..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hb1953cf1a8f543c6(self=AssertUnwindSafe<closure-0> @ 0x00007fffdfbfb0c0, _args=<unavailable>) at panic.rs:318:9
    frame #15: 0x0000000003c295b2 vector`std::panicking::try::do_call::h8b0c4a636b2bc2c0(data="س\xbf��\x7f") at panicking.rs:331:40
    frame #16: 0x0000000003c2adbd vector`__rust_try + 29
    frame #17: 0x0000000003c288ff vector`std::panicking::try::he62201e826b07e7a(f=AssertUnwindSafe<closure-0> @ 0x00007fffdfbfb1c8) at panicking.rs:274:15
    frame #18: 0x0000000003c1b06d vector`std::panic::catch_unwind::h4f55fa3c93c1ef23(f=AssertUnwindSafe<closure-0> @ 0x00007fffdfbfb240) at panic.rs:394:14
    frame #19: 0x00000000046a3b81 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::_$u7b$$u7b$closure$u7d$$u7d$::h8d1847263428c199((null)=0x0000000809d8e170) at harness.rs:100:13
    frame #20: 0x00000000046b5de3 vector`tokio::loom::std::causal_cell::CausalCell$LT$T$GT$::with_mut::h2c890abbfcbcfbd7(self=0x0000000809d8e170, f=closure-1 @ 0x00007fffdfbfb300) at causal_cell.rs:41:9
    frame #21: 0x000000000469fad6 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::h47c9ce822daff09b(self=Harness<tokio::runtime::blocking::task::BlockingTask<closure-3>, tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdfbfb3b8, executor=&mut FnMut<()> @ 0x00007fffdfbfb4a0) at harness.rs:99:19
    frame #22: 0x0000000002f53d6c vector`tokio::task::raw::poll::h4fcd9f1cc58ffdcd(ptr=0x0000000809d8e140, executor=&mut FnMut<()> @ 0x00007fffdfbfb548) at raw.rs:162:5
    frame #23: 0x0000000005f86a80 vector`tokio::task::raw::RawTask::poll::hcbe9cc77297574fe(self=RawTask @ 0x00007fffdfbfb5a0, executor=&mut FnMut<()> @ 0x00007fffdfbfb5a8) at raw.rs:113:9
    frame #24: 0x0000000005f8d4f8 vector`tokio::task::Task$LT$S$GT$::run::h343bfcb1b2980db1(self=Task<tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdfbfb5d8, executor=closure-0 @ 0x00007fffdfbfb5e0) at mod.rs:381:17
    frame #25: 0x0000000005fb7769 vector`tokio::runtime::blocking::pool::run_task::h7a496c28eb595cbe(f=Task<tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdfbfb648) at pool.rs:311:15
    frame #26: 0x0000000005fb7070 vector`tokio::runtime::blocking::pool::Inner::run::hba3a92bdcbf63e3d(self=0x0000000806d65310) at pool.rs:230:17
    frame #27: 0x0000000005fb6e0b vector`tokio::runtime::blocking::pool::Spawner::spawn_thread::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h5339bcf49822497e at pool.rs:210:21
    frame #28: 0x0000000005fd4f09 vector`tokio::runtime::context::enter::h51e9dd0cd9a01887(new=<unavailable>, f=closure-0 @ 0x00007fffdfbfb8d0) at context.rs:72:5
    frame #29: 0x0000000005fc8f96 vector`tokio::runtime::handle::Handle::enter::h5989f2433a112c5d(self=0x00007fffdfbfba48, f=closure-0 @ 0x00007fffdfbfb980) at handle.rs:34:9
    frame #30: 0x0000000005fb6e8d vector`tokio::runtime::blocking::pool::Spawner::spawn_thread::_$u7b$$u7b$closure$u7d$$u7d$::h8ba31bb12fbf9f11 at pool.rs:209:17
    frame #31: 0x0000000005f80031 vector`std::sys_common::backtrace::__rust_begin_short_backtrace::ha30d80ccb844f90d(f=<unavailable>) at backtrace.rs:130:5
    frame #32: 0x0000000005f60fe2 vector`std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h12ba7c8b5875689c at mod.rs:475:17
    frame #33: 0x0000000005f77bf2 vector`_$LT$std..panic..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hc7a9c75b387daa16(self=<unavailable>, _args=<unavailable>) at panic.rs:318:9
    frame #34: 0x0000000005fb4307 vector`std::panicking::try::do_call::hb6fd28d154c4c2c3(data="\x02") at panicking.rs:331:40
    frame #35: 0x0000000005fba84d vector`__rust_try + 29
    frame #36: 0x0000000005fb347d vector`std::panicking::try::h9b44254054ebd166(f=<unavailable>) at panicking.rs:274:15
    frame #37: 0x0000000005f77fe2 vector`std::panic::catch_unwind::haa0577d94d65e226(f=<unavailable>) at panic.rs:394:14
    frame #38: 0x0000000005f60e1b vector`std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::h0f7f06a526526a0f at mod.rs:474:30
    frame #39: 0x0000000005fc0662 vector`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hd948470106f8f12c((null)=0x0000000806d0d460, (null)=<unavailable>) at function.rs:232:5
    frame #40: 0x000000000669d48b vector`std::sys::unix::thread::Thread::new::thread_start::h71214bb84467b660 + 43
    frame #41: 0x000000080673b736 libthr.so.3`___lldb_unnamed_symbol1$$libthr.so.3 + 326

Thread 6

(lldb) thr sel 6
* thread #6, name = 'tokio-runtime-wo'
    frame #0: 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92
libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3:
->  0x806749edc <+92>: retq
    0x806749edd <+93>: int3
    0x806749ede <+94>: int3
    0x806749edf <+95>: int3
(lldb) bt
error: need to add support for DW_TAG_base_type '()' encoded with DW_ATE = 0x7, bit_size = 0
* thread #6, name = 'tokio-runtime-wo'
  * frame #0: 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92
    frame #1: 0x0000000806747512 libthr.so.3`___lldb_unnamed_symbol159$$libthr.so.3 + 482
    frame #2: 0x0000000006502129 vector`std::sys::unix::condvar::Condvar::wait::h12cb628a54a6fe14(self=0x0000000806cf71f0, mutex=0x0000000806cf7208) at condvar.rs:73:17
    frame #3: 0x00000000065020de vector`std::sys_common::condvar::Condvar::wait::h9db15ee36fa18c43(self=0x0000000806cf71f0, mutex=0x0000000806cf7208) at condvar.rs:50:9
    frame #4: 0x00000000064fab1b vector`std::sync::condvar::Condvar::wait::h81e9ad79a874236a(self=0x0000000806d12528, guard=MutexGuard<()> @ 0x00007fffdf7f84b0) at condvar.rs:200:13
    frame #5: 0x0000000005fd6b9f vector`tokio::runtime::park::Inner::park_condvar::hb2aacbd8c696183f(self=0x0000000806d12510) at park.rs:164:17
    frame #6: 0x0000000005fd65fb vector`tokio::runtime::park::Inner::park::h7a40ba92a531384e(self=0x0000000806d12510) at park.rs:135:13
    frame #7: 0x0000000005fd612f vector`_$LT$tokio..runtime..park..Parker$u20$as$u20$tokio..park..Park$GT$::park::h9fdcad9e9027a22d(self=0x0000000806d02550) at park.rs:93:9
    frame #8: 0x0000000005f674c7 vector`tokio::runtime::thread_pool::worker::GenerationGuard::park::h26d9467816d74ef6(self=0x00007fffdf7f8938) at worker.rs:499:13
    frame #9: 0x0000000005f66786 vector`tokio::runtime::thread_pool::worker::GenerationGuard::run::h78fad92b1674c0f6(self=GenerationGuard @ 0x00007fffdf7f8948) at worker.rs:281:17
    frame #10: 0x0000000005f66275 vector`tokio::runtime::thread_pool::worker::Worker::run::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h131996bb7aadc9fd(ob=0x00000008087b00a8) at worker.rs:160:25
    frame #11: 0x0000000005f3d51b vector`std::thread::local::LocalKey$LT$T$GT$::try_with::h8a2a1ee2c05eeeac(self=0x0000000006846cf8, f=closure-0 @ 0x00007fffdf7f8a78) at local.rs:263:16
    frame #12: 0x0000000005f3c9fa vector`std::thread::local::LocalKey$LT$T$GT$::with::hdce3c81f556e6779(self=0x0000000006846cf8, f=closure-0 @ 0x00007fffdf7f8ae0) at local.rs:239:9
    frame #13: 0x0000000005f66314 vector`tokio::runtime::thread_pool::worker::Worker::run::_$u7b$$u7b$closure$u7d$$u7d$::h5e1977f14c2eb579 at worker.rs:136:13
    frame #14: 0x0000000005fb796d vector`tokio::runtime::thread_pool::current::set::_$u7b$$u7b$closure$u7d$$u7d$::ha5459371bc986eb2(cell=0x00000008087b0088) at current.rs:47:9
    frame #15: 0x0000000005f3dea4 vector`std::thread::local::LocalKey$LT$T$GT$::try_with::he79115d768928358(self=0x0000000006847ed8, f=closure-0 @ 0x00007fffdf7f8ca8) at local.rs:263:16
    frame #16: 0x0000000005f3cb35 vector`std::thread::local::LocalKey$LT$T$GT$::with::hf521433ca38d0d00(self=0x0000000006847ed8, f=<unavailable>) at local.rs:239:9
    frame #17: 0x0000000005fb7881 vector`tokio::runtime::thread_pool::current::set::h21b521a288bdc5ba(pool=0x00007fffdf7f8de0, index=3, f=closure-0 @ 0x00007fffdf7f8d10) at current.rs:29:5
    frame #18: 0x0000000005f660c3 vector`tokio::runtime::thread_pool::worker::Worker::run::ha903fe7232f32324(self=Worker @ 0x00007fffdf7f8dd8) at worker.rs:132:9
    frame #19: 0x0000000005fca741 vector`tokio::runtime::thread_pool::Workers::spawn::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h2cabd1d65e0afc45 at mod.rs:113:49
    frame #20: 0x0000000005f6568f vector`_$LT$tokio..runtime..blocking..task..BlockingTask$LT$T$GT$$u20$as$u20$core..future..future..Future$GT$::poll::h638a3973f7a0913d(self=Pin<&mut tokio::runtime::blocking::task::BlockingTask<closure-0>> @ 0x00007fffdf7f8eb0, _cx=0x00007fffdf7f8f98) at task.rs:30:21
    frame #21: 0x0000000005f48149 vector`tokio::task::core::Core$LT$T$GT$::poll::ha62276e1950df7eb(self=0x0000000806d942d0, header=0x0000000806d942a0) at core.rs:128:13
    frame #22: 0x0000000005f6bc11 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h8fff9f9a3ca69498 at harness.rs:119:27
    frame #23: 0x0000000005fc0a49 vector`core::ops::function::FnOnce::call_once::h9c5038f9ef49631e((null)=closure-0 @ 0x00007fffdf7f9088, (null)=<unavailable>) at function.rs:232:5
    frame #24: 0x0000000005f77a66 vector`_$LT$std..panic..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::h00f23b3fc90bcde0(self=AssertUnwindSafe<closure-0> @ 0x00007fffdf7f90d0, _args=<unavailable>) at panic.rs:318:9
    frame #25: 0x0000000005fb3f82 vector`std::panicking::try::do_call::h54d2d371c9555e86(data="ؓ\x7f��\x7f") at panicking.rs:331:40
    frame #26: 0x0000000005fba84d vector`__rust_try + 29
    frame #27: 0x0000000005fb317f vector`std::panicking::try::h7b93830ba2700b60(f=AssertUnwindSafe<closure-0> @ 0x00007fffdf7f91d8) at panicking.rs:274:15
    frame #28: 0x0000000005f77f0d vector`std::panic::catch_unwind::h6852cd80a488706a(f=AssertUnwindSafe<closure-0> @ 0x00007fffdf7f9250) at panic.rs:394:14
    frame #29: 0x0000000005f6bff1 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::_$u7b$$u7b$closure$u7d$$u7d$::hd7e51ad74c743a1c((null)=0x0000000806d942d0) at harness.rs:100:13
    frame #30: 0x0000000005f639b3 vector`tokio::loom::std::causal_cell::CausalCell$LT$T$GT$::with_mut::h5843619ffaae45c6(self=0x0000000806d942d0, f=closure-1 @ 0x00007fffdf7f9300) at causal_cell.rs:41:9
    frame #31: 0x0000000005f6aa66 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::h667564576e04c6a5(self=Harness<tokio::runtime::blocking::task::BlockingTask<closure-0>, tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdf7f93b8, executor=&mut FnMut<()> @ 0x00007fffdf7f94a0) at harness.rs:99:19
    frame #32: 0x0000000005f86e6c vector`tokio::task::raw::poll::hc65802c512e59632(ptr=0x0000000806d942a0, executor=&mut FnMut<()> @ 0x00007fffdf7f9548) at raw.rs:162:5
    frame #33: 0x0000000005f86a80 vector`tokio::task::raw::RawTask::poll::hcbe9cc77297574fe(self=RawTask @ 0x00007fffdf7f95a0, executor=&mut FnMut<()> @ 0x00007fffdf7f95a8) at raw.rs:113:9
    frame #34: 0x0000000005f8d4f8 vector`tokio::task::Task$LT$S$GT$::run::h343bfcb1b2980db1(self=Task<tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdf7f95d8, executor=closure-0 @ 0x00007fffdf7f95e0) at mod.rs:381:17
    frame #35: 0x0000000005fb7769 vector`tokio::runtime::blocking::pool::run_task::h7a496c28eb595cbe(f=Task<tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdf7f9648) at pool.rs:311:15
    frame #36: 0x0000000005fb7070 vector`tokio::runtime::blocking::pool::Inner::run::hba3a92bdcbf63e3d(self=0x0000000806d65310) at pool.rs:230:17
    frame #37: 0x0000000005fb6e0b vector`tokio::runtime::blocking::pool::Spawner::spawn_thread::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h5339bcf49822497e at pool.rs:210:21
    frame #38: 0x0000000005fd4f09 vector`tokio::runtime::context::enter::h51e9dd0cd9a01887(new=<unavailable>, f=closure-0 @ 0x00007fffdf7f98d0) at context.rs:72:5
    frame #39: 0x0000000005fc8f96 vector`tokio::runtime::handle::Handle::enter::h5989f2433a112c5d(self=0x00007fffdf7f9a48, f=closure-0 @ 0x00007fffdf7f9980) at handle.rs:34:9
    frame #40: 0x0000000005fb6e8d vector`tokio::runtime::blocking::pool::Spawner::spawn_thread::_$u7b$$u7b$closure$u7d$$u7d$::h8ba31bb12fbf9f11 at pool.rs:209:17
    frame #41: 0x0000000005f80031 vector`std::sys_common::backtrace::__rust_begin_short_backtrace::ha30d80ccb844f90d(f=<unavailable>) at backtrace.rs:130:5
    frame #42: 0x0000000005f60fe2 vector`std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h12ba7c8b5875689c at mod.rs:475:17
    frame #43: 0x0000000005f77bf2 vector`_$LT$std..panic..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hc7a9c75b387daa16(self=<unavailable>, _args=<unavailable>) at panic.rs:318:9
    frame #44: 0x0000000005fb4307 vector`std::panicking::try::do_call::hb6fd28d154c4c2c3(data="\x02") at panicking.rs:331:40
    frame #45: 0x0000000005fba84d vector`__rust_try + 29
    frame #46: 0x0000000005fb347d vector`std::panicking::try::h9b44254054ebd166(f=<unavailable>) at panicking.rs:274:15
    frame #47: 0x0000000005f77fe2 vector`std::panic::catch_unwind::haa0577d94d65e226(f=<unavailable>) at panic.rs:394:14
    frame #48: 0x0000000005f60e1b vector`std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::h0f7f06a526526a0f at mod.rs:474:30
    frame #49: 0x0000000005fc0662 vector`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hd948470106f8f12c((null)=0x0000000806d0d5a0, (null)=<unavailable>) at function.rs:232:5
    frame #50: 0x000000000669d48b vector`std::sys::unix::thread::Thread::new::thread_start::h71214bb84467b660 + 43
    frame #51: 0x000000080673b736 libthr.so.3`___lldb_unnamed_symbol1$$libthr.so.3 + 326

Thread 7

(lldb) thr sel 7
* thread #7, name = 'tokio-runtime-wo'
    frame #0: 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92
libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3:
->  0x806749edc <+92>: retq
    0x806749edd <+93>: int3
    0x806749ede <+94>: int3
    0x806749edf <+95>: int3
(lldb) bt
* thread #7, name = 'tokio-runtime-wo'
  * frame #0: 0x0000000806749edc libthr.so.3`___lldb_unnamed_symbol190$$libthr.so.3 + 92
    frame #1: 0x0000000806747512 libthr.so.3`___lldb_unnamed_symbol159$$libthr.so.3 + 482
    frame #2: 0x0000000006502129 vector`std::sys::unix::condvar::Condvar::wait::h12cb628a54a6fe14(self=0x0000000806cf7210, mutex=0x0000000806cf7148) at condvar.rs:73:17
    frame #3: 0x00000000065020de vector`std::sys_common::condvar::Condvar::wait::h9db15ee36fa18c43(self=0x0000000806cf7210, mutex=0x0000000806cf7148) at condvar.rs:50:9
    frame #4: 0x00000000064fab1b vector`std::sync::condvar::Condvar::wait::h81e9ad79a874236a(self=0x0000000806d124e8, guard=MutexGuard<()> @ 0x00007fffdf5f74b0) at condvar.rs:200:13
    frame #5: 0x0000000005fd6b9f vector`tokio::runtime::park::Inner::park_condvar::hb2aacbd8c696183f(self=0x0000000806d124d0) at park.rs:164:17
    frame #6: 0x0000000005fd65fb vector`tokio::runtime::park::Inner::park::h7a40ba92a531384e(self=0x0000000806d124d0) at park.rs:135:13
    frame #7: 0x0000000005fd612f vector`_$LT$tokio..runtime..park..Parker$u20$as$u20$tokio..park..Park$GT$::park::h9fdcad9e9027a22d(self=0x0000000806d02510) at park.rs:93:9
    frame #8: 0x0000000005f674c7 vector`tokio::runtime::thread_pool::worker::GenerationGuard::park::h26d9467816d74ef6(self=0x00007fffdf5f7938) at worker.rs:499:13
    frame #9: 0x0000000005f66786 vector`tokio::runtime::thread_pool::worker::GenerationGuard::run::h78fad92b1674c0f6(self=GenerationGuard @ 0x00007fffdf5f7948) at worker.rs:281:17
    frame #10: 0x0000000005f66275 vector`tokio::runtime::thread_pool::worker::Worker::run::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h131996bb7aadc9fd(ob=0x00000008087a40a8) at worker.rs:160:25
    frame #11: 0x0000000005f3d51b vector`std::thread::local::LocalKey$LT$T$GT$::try_with::h8a2a1ee2c05eeeac(self=0x0000000006846cf8, f=closure-0 @ 0x00007fffdf5f7a78) at local.rs:263:16
    frame #12: 0x0000000005f3c9fa vector`std::thread::local::LocalKey$LT$T$GT$::with::hdce3c81f556e6779(self=0x0000000006846cf8, f=closure-0 @ 0x00007fffdf5f7ae0) at local.rs:239:9
    frame #13: 0x0000000005f66314 vector`tokio::runtime::thread_pool::worker::Worker::run::_$u7b$$u7b$closure$u7d$$u7d$::h5e1977f14c2eb579 at worker.rs:136:13
    frame #14: 0x0000000005fb796d vector`tokio::runtime::thread_pool::current::set::_$u7b$$u7b$closure$u7d$$u7d$::ha5459371bc986eb2(cell=0x00000008087a4088) at current.rs:47:9
    frame #15: 0x0000000005f3dea4 vector`std::thread::local::LocalKey$LT$T$GT$::try_with::he79115d768928358(self=0x0000000006847ed8, f=closure-0 @ 0x00007fffdf5f7ca8) at local.rs:263:16
    frame #16: 0x0000000005f3cb35 vector`std::thread::local::LocalKey$LT$T$GT$::with::hf521433ca38d0d00(self=0x0000000006847ed8, f=<unavailable>) at local.rs:239:9
    frame #17: 0x0000000005fb7881 vector`tokio::runtime::thread_pool::current::set::h21b521a288bdc5ba(pool=0x00007fffdf5f7de0, index=2, f=closure-0 @ 0x00007fffdf5f7d10) at current.rs:29:5
    frame #18: 0x0000000005f660c3 vector`tokio::runtime::thread_pool::worker::Worker::run::ha903fe7232f32324(self=Worker @ 0x00007fffdf5f7dd8) at worker.rs:132:9
    frame #19: 0x0000000005fca741 vector`tokio::runtime::thread_pool::Workers::spawn::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h2cabd1d65e0afc45 at mod.rs:113:49
    frame #20: 0x0000000005f6568f vector`_$LT$tokio..runtime..blocking..task..BlockingTask$LT$T$GT$$u20$as$u20$core..future..future..Future$GT$::poll::h638a3973f7a0913d(self=Pin<&mut tokio::runtime::blocking::task::BlockingTask<closure-0>> @ 0x00007fffdf5f7eb0, _cx=0x00007fffdf5f7f98) at task.rs:30:21
    frame #21: 0x0000000005f48149 vector`tokio::task::core::Core$LT$T$GT$::poll::ha62276e1950df7eb(self=0x0000000806d94260, header=0x0000000806d94230) at core.rs:128:13
    frame #22: 0x0000000005f6bc11 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h8fff9f9a3ca69498 at harness.rs:119:27
    frame #23: 0x0000000005fc0a49 vector`core::ops::function::FnOnce::call_once::h9c5038f9ef49631e((null)=closure-0 @ 0x00007fffdf5f8088, (null)=<unavailable>) at function.rs:232:5
    frame #24: 0x0000000005f77a66 vector`_$LT$std..panic..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::h00f23b3fc90bcde0(self=AssertUnwindSafe<closure-0> @ 0x00007fffdf5f80d0, _args=<unavailable>) at panic.rs:318:9
    frame #25: 0x0000000005fb3f82 vector`std::panicking::try::do_call::h54d2d371c9555e86(data="؃_��\x7f") at panicking.rs:331:40
    frame #26: 0x0000000005fba84d vector`__rust_try + 29
    frame #27: 0x0000000005fb317f vector`std::panicking::try::h7b93830ba2700b60(f=AssertUnwindSafe<closure-0> @ 0x00007fffdf5f81d8) at panicking.rs:274:15
    frame #28: 0x0000000005f77f0d vector`std::panic::catch_unwind::h6852cd80a488706a(f=AssertUnwindSafe<closure-0> @ 0x00007fffdf5f8250) at panic.rs:394:14
    frame #29: 0x0000000005f6bff1 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::_$u7b$$u7b$closure$u7d$$u7d$::hd7e51ad74c743a1c((null)=0x0000000806d94260) at harness.rs:100:13
    frame #30: 0x0000000005f639b3 vector`tokio::loom::std::causal_cell::CausalCell$LT$T$GT$::with_mut::h5843619ffaae45c6(self=0x0000000806d94260, f=closure-1 @ 0x00007fffdf5f8300) at causal_cell.rs:41:9
    frame #31: 0x0000000005f6aa66 vector`tokio::task::harness::Harness$LT$T$C$S$GT$::poll::h667564576e04c6a5(self=Harness<tokio::runtime::blocking::task::BlockingTask<closure-0>, tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdf5f83b8, executor=&mut FnMut<()> @ 0x00007fffdf5f84a0) at harness.rs:99:19
    frame #32: 0x0000000005f86e6c vector`tokio::task::raw::poll::hc65802c512e59632(ptr=0x0000000806d94230, executor=&mut FnMut<()> @ 0x00007fffdf5f8548) at raw.rs:162:5
    frame #33: 0x0000000005f86a80 vector`tokio::task::raw::RawTask::poll::hcbe9cc77297574fe(self=RawTask @ 0x00007fffdf5f85a0, executor=&mut FnMut<()> @ 0x00007fffdf5f85a8) at raw.rs:113:9
    frame #34: 0x0000000005f8d4f8 vector`tokio::task::Task$LT$S$GT$::run::h343bfcb1b2980db1(self=Task<tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdf5f85d8, executor=closure-0 @ 0x00007fffdf5f85e0) at mod.rs:381:17
    frame #35: 0x0000000005fb7769 vector`tokio::runtime::blocking::pool::run_task::h7a496c28eb595cbe(f=Task<tokio::runtime::blocking::schedule::NoopSchedule> @ 0x00007fffdf5f8648) at pool.rs:311:15
    frame #36: 0x0000000005fb7070 vector`tokio::runtime::blocking::pool::Inner::run::hba3a92bdcbf63e3d(self=0x0000000806d65310) at pool.rs:230:17
    frame #37: 0x0000000005fb6e0b vector`tokio::runtime::blocking::pool::Spawner::spawn_thread::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h5339bcf49822497e at pool.rs:210:21
    frame #38: 0x0000000005fd4f09 vector`tokio::runtime::context::enter::h51e9dd0cd9a01887(new=<unavailable>, f=closure-0 @ 0x00007fffdf5f88d0) at context.rs:72:5
    frame #39: 0x0000000005fc8f96 vector`tokio::runtime::handle::Handle::enter::h5989f2433a112c5d(self=0x00007fffdf5f8a48, f=closure-0 @ 0x00007fffdf5f8980) at handle.rs:34:9
    frame #40: 0x0000000005fb6e8d vector`tokio::runtime::blocking::pool::Spawner::spawn_thread::_$u7b$$u7b$closure$u7d$$u7d$::h8ba31bb12fbf9f11 at pool.rs:209:17
    frame #41: 0x0000000005f80031 vector`std::sys_common::backtrace::__rust_begin_short_backtrace::ha30d80ccb844f90d(f=<unavailable>) at backtrace.rs:130:5
    frame #42: 0x0000000005f60fe2 vector`std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h12ba7c8b5875689c at mod.rs:475:17
    frame #43: 0x0000000005f77bf2 vector`_$LT$std..panic..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hc7a9c75b387daa16(self=<unavailable>, _args=<unavailable>) at panic.rs:318:9
    frame #44: 0x0000000005fb4307 vector`std::panicking::try::do_call::hb6fd28d154c4c2c3(data="\x02") at panicking.rs:331:40
    frame #45: 0x0000000005fba84d vector`__rust_try + 29
    frame #46: 0x0000000005fb347d vector`std::panicking::try::h9b44254054ebd166(f=<unavailable>) at panicking.rs:274:15
    frame #47: 0x0000000005f77fe2 vector`std::panic::catch_unwind::haa0577d94d65e226(f=<unavailable>) at panic.rs:394:14
    frame #48: 0x0000000005f60e1b vector`std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::h0f7f06a526526a0f at mod.rs:474:30
    frame #49: 0x0000000005fc0662 vector`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hd948470106f8f12c((null)=0x0000000806d0d640, (null)=<unavailable>) at function.rs:232:5
    frame #50: 0x000000000669d48b vector`std::sys::unix::thread::Thread::new::thread_start::h71214bb84467b660 + 43
    frame #51: 0x000000080673b736 libthr.so.3`___lldb_unnamed_symbol1$$libthr.so.3 + 326

Thread 24

(lldb) thr sel 24
* thread #24, name = 'tokio-runtime-wo'
    frame #0: 0x000000000361c7a1 vector`alloc::collections::btree::node::NodeRef$LT$BorrowType$C$K$C$V$C$Type$GT$::as_leaf::hc4397aaa8072803f(self=0x00007fffdf3ed1d0) at node.rs:322:6
(lldb) bt
error: vector DWARF DIE at 0x01913b56 (class closure-0) has a member variable 0x01913b5d (__0) whose type is a forward declaration, not a complete definition.
Try compiling the source file with -fstandalone-debug

@valpackett

> error: vector DWARF DIE at 0x01913b56 (class closure-0) has a member variable 0x01913b5d (__0) whose type is a forward declaration, not a complete definition.

Haha, whoops, that's... interesting. (Maybe lldb10 from the llvm10 package would fare better; also try a debug build of vector.)

Yeah, don't bother with the sleeping threads (the ones parked in libthr). If you can get the debugger not to fail, check that one running worker a couple of times (continue, wait a bit, interrupt again).

@Hoverbear
Contributor

This was a debug build. :) I'm waiting on a UFS build to test, then I'll root around a bit more. Thanks for the pointers!

@Hoverbear
Contributor

Hoverbear commented Jul 14, 2020

Same behavior on UFS.

(End of day for me -- Will return to this issue tomorrow!)

@0xYUANTI
Author

Cool, let me know if there's anything I can do to help! And thanks, everyone, for Vector; it has been a real joy to use so far :)

@Hoverbear
Contributor

So I ended up distracted by pre-release blockers yesterday. (And might be today too... but this is on my list!)

On a more practical note, I would suggest not using the disk buffers if you can accept the small loss in safety. Our disk buffers add a noticeable slowdown at the moment; we've been investigating ways to improve them. :)

@0xYUANTI
Author

> So I ended up distracted by pre-release blockers yesterday. (And might be today too... but this is on my list!)
>
> On a more practical note, I would suggest not using the disk buffers if you can accept the small loss in safety. Our disk buffers add a noticeable slowdown at the moment; we've been investigating ways to improve them. :)

Vector buffers are basically redundant for a logfile managed by runit with log rotation etc. (i.e. Vector doesn't update its file position until all sinks have acked, regardless of buffer setting), right?

@lukesteensen
Member

> Vector buffers are basically redundant for a logfile managed by runit with log rotation etc. (i.e. Vector doesn't update its file position until all sinks have acked, regardless of buffer setting), right?

The answer here is unfortunately somewhat complicated, so apologies in advance 😄

First of all, we agree that the behavior you describe would be ideal. We plan to implement that behavior, but there are a variety of complications we are still figuring out how we'd like to handle (e.g. sinks that flush data only periodically, sinks that have no explicit flush, sources that provide no ability to ack, transforms that intentionally drop events in between sources and sinks, etc).

In the meantime, we essentially provide two different modes of operation per sink. The default mode, with small in-memory buffers, is based around backpressure. In this mode, sinks "pull" data through the pipeline at whatever speed they're able. This flows all the way upstream to sources, so a file source (for example) would be reading and checkpointing at roughly the same throughput as the downstream sink is sending. This prevents issues where there is a large gap between data that has been ingested and data that has been fully processed. It's not as precise as the ideal behavior, but it provides some of the same benefits.

The second mode uses our disk buffers. The purpose of the disk buffers is to absorb and smooth out ingestion throughput variability in cases where backpressure would lead to data loss. A good example here is a UDP syslog source, where we have no way to signal the upstream system to slow down and need to simply accept the data as quickly as we can. If you're using something like the file source, however, disk buffers are very likely redundant (unless your files are very aggressively rotated).

@Hoverbear
Contributor

Release is out the door! I will be looking at this tomorrow. :)

@Hoverbear
Contributor

Hoverbear commented Jul 24, 2020

@cannedprimates Can you try a 0.10.0 build? It seems to work for me? Perhaps it was related to some of the changes @bruceg made in #2866 or some other ticket...

(This same setup had issues on 0.9.2)

root@freebsd-s-6vcpu-16gb-tor1-01:~ # cat vector.toml
data_dir            = "/home/freebsd/var"

[sources.my_file]
type                = "file"
file_key            = ""
include             = ["dump"]
oldest_first        = true
max_line_bytes      = 1000000 #1MB
max_read_bytes      = 1000000 #1MB

[transforms.my_json_parser]
type                = "json_parser"
inputs              = ["my_file"]

[sinks.my_s3]
type                 = "aws_s3"
inputs               = ["my_json_parser"]
bucket               = "hoverbear"
key_prefix           = "my/"
batch.timeout_secs   = 10
endpoint = "ams3.digitaloceanspaces.com"
encoding.codec       = "ndjson"
compression          = "gzip"
filename_append_uuid = false
filename_extension   = "json.gz"
filename_time_format = "%Y/%m/%d/%H%M%S"
buffer.type          = "disk"
buffer.max_size      = 10490000
buffer.when_full     = "block"
root@freebsd-s-6vcpu-16gb-tor1-01:~ # ./run.sh
Jul 24 17:28:12.288  INFO vector: Log level "trace" is enabled.
Jul 24 17:28:12.291  INFO vector: Loading configs. path=["vector.toml"]
Jul 24 17:28:12.291 TRACE vector: Parsing config. path="vector.toml"
Jul 24 17:28:12.294  INFO vector: Vector is starting. version="0.10.0" git_version="" released="Fri, 24 Jul 2020 17:09:23 +0000" arch="x86_64"
Jul 24 17:28:12.464 DEBUG vector::tls::settings: Fetching system root certs.
Jul 24 17:28:12.465 DEBUG vector::sinks::util::rusoto: using default credentials provider for AWS.
Jul 24 17:28:12.465  INFO vector::topology: Running healthchecks.
Jul 24 17:28:12.465 DEBUG http: vector::sinks::util::http: sending request. uri=https://ams3.digitaloceanspaces.com/hoverbear method=HEAD
Jul 24 17:28:13.191 DEBUG http: vector::sinks::util::http: response. status=200 version=HTTP/1.1
Jul 24 17:28:13.191  INFO vector::topology::builder: Healthcheck: Passed.
Jul 24 17:28:13.192  INFO vector::topology: All healthchecks passed.
Jul 24 17:28:13.192  INFO vector::topology: Starting source "my_file"
Jul 24 17:28:13.192  INFO vector::topology: Starting transform "my_json_parser"
Jul 24 17:28:13.192  INFO vector::topology: Starting sink "my_s3"
Jul 24 17:28:13.192  INFO source{name=my_file type=file}: vector::sources::file: Starting file server. include=["dump"] exclude=[]
Jul 24 17:28:13.193  INFO source{name=my_file type=file}:file_server: file_source::file_server: Found file to watch. path="dump" file_position=0
Jul 24 17:28:13.193 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Continue watching file. path="dump"
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=1684
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=196
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=227
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=227
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=238
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=238
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=242
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=259
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=4227
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=249
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=241
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=250
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=241
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=250
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=240
Jul 24 17:28:13.194 TRACE source{name=my_file type=file}:file_server: file_source::file_server: Read bytes. path="dump" bytes=274

@Hoverbear
Contributor

Hoverbear commented Jul 24, 2020

(I didn't realize this would be closed by me moving the card in my tasks!)

Please reopen this if you still face this issue! I really want to make sure the FreeBSD experience is first class.

Thank you for this very awesome bug report and the truly fantastic reproduction steps. I'd love to solve issues from you any day, any time, any project of mine. :)
