Add ability to inject a tokio runtime as the backend for libevent. #15
base: master
Conversation
src/backend.rs
Outdated
base.evbase = Box::into_raw(backend).cast();
}

/// Convenience function that returns true if the signal event bit is set.
I would guess the compiler will figure it out, but these might be decent candidates for explicitly requesting inlining via #[inline(always)].
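For illustration, a minimal sketch of what that annotation could look like on one of these helpers (the function body and the locally defined EV_SIGNAL constant are assumptions, not the PR's actual code):

```rust
/// Convenience function that returns true if the signal event bit is set.
#[inline(always)]
fn is_signal_event(flags: u32) -> bool {
    // EV_SIGNAL matches libevent's event.h value; real code would use the
    // constant from libevent-sys rather than redefining it here.
    const EV_SIGNAL: u32 = 0x08;
    flags & EV_SIGNAL != 0
}
```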
src/backend.rs
Outdated
self.runtime.block_on(async move {
    match timeout {
        // spawned tasks are serviced during the sleep time
        Some(timeout) => tokio::time::sleep(timeout).await,
The issue I see with this approach is that an I/O event that libevent has registered with tokio might result in new timers being registered that would then sit at the top of the timer heap. This implementation would not return control to libevent in time to execute those actions by the new timers' deadlines.
Take for example the following scenario:
- I register a 30s timer with libevent
- Libevent calls dispatch; there are 29.99s until the next timer needs to fire, so that is the timeout passed into dispatch.
- After 0.99s, an I/O event occurs and a single-shot 250ms timer is registered with libevent.
- The remaining 29s elapse, and libevent sees that it needs to service the 250ms one-shot timer, but it is now 28.75s past its intended deadline.
I think this situation could occur with this implementation.
I think that you are right. We could use a Notify to break out of the loop, but that seems like a big performance hit. I need to do some profiling.
I modified the time-test sample program to verify this. I extended the timeout to 10 seconds. Then I set a SIGALRM handler for 1 second (so it doesn't use libevent timers). In that handler, I set another 0-second timeout that then sets up another SIGALRM.
The original version of this PR delayed the signal handling and the 0-second timeout until the initial 10-second timeout completed. That's not acceptable. So, I added a notification mechanism that allows the block_on to break out early if an event occurs.
With the change, the SIGALRM is handled every second and the 0 second timeout occurs immediately afterwards.
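A minimal sketch of that notification mechanism, assuming event callbacks signal a tokio::sync::Notify when new work arrives (the names dispatch_sketch, runtime, and notify are illustrative, not the PR's actual fields):

```rust
use std::time::Duration;
use tokio::sync::Notify;

/// Race the dispatch timeout against a Notify so control returns to
/// libevent as soon as an event callback signals new work.
fn dispatch_sketch(
    runtime: &tokio::runtime::Runtime,
    notify: &Notify,
    timeout: Option<Duration>,
) {
    runtime.block_on(async move {
        match timeout {
            // Sleep until the deadline, but wake early if an event
            // callback calls notify_one() in the meantime.
            Some(timeout) => tokio::select! {
                _ = tokio::time::sleep(timeout) => {}
                _ = notify.notified() => {}
            },
            // No deadline: just wait for the next event notification.
            None => notify.notified().await,
        }
    });
}
```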
- …ent. Verified that the dispatch needs to break when an event occurs and implemented that with a notification.
- …ent. Added a benchmark program from libevent for comparing performance of the tokio backend.
- …ent. Added two more samples from libevent for testing the tokio backend.
- …ent. Optimized the dispatch function by not constructing a sleep if the given duration is 0.
- …ent. Removed use of fdinfo in favor of a hash map.
- …nership configurations.
Force-pushed from 193f285 to 645e3cf
Force-pushed from 15c260d to 2c1a6a9
An optional tokio backend for handling libevent I/O and signal readiness is provided. It is not patched into libevent directly, but is substituted at run time with a call to libevent::inject_tokio.
The primary motivation for this feature is to allow native tokio and libevent tasks to co-exist in a single event loop on the same thread. This is especially useful when gradually migrating a C/libevent project to Rust/tokio, where use of FFI between the C and Rust code prevents running the event loops on separate threads.
The feature requires the bundled libevent C code built with libevent-sys, due to dependencies on internal, non-public data structures within the library. It could work with a system-installed build of libevent if the versions match, but that approach would not be without risk.
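As a usage illustration only (the exact signature of libevent::inject_tokio is an assumption here), injecting a current-thread runtime might look like this:

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A current-thread runtime keeps libevent and tokio tasks on one
    // event loop in the same thread, per the motivation above.
    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;

    // Hypothetical call: substitute the tokio backend for libevent's
    // own backend at run time.
    libevent::inject_tokio(runtime);

    // ... create an event base, register events, and dispatch as usual ...
    Ok(())
}
```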