+ +pub enum Event<T> {
+ Msg(T),
+ Closed,
+}
The events generated by the channel event source
+A message was received and is bundled here
+The channel was closed
+This means all the Senders associated with this channel
+have been dropped; no more messages will ever be received.
pub fn sync_channel<T>(bound: usize) -> (SyncSender<T>, Channel<T>)
Create a new synchronous, bounded channel
+An MPSC channel whose receiving end is an event source
+Create a channel using channel()
, which returns a
+Sender
that can be cloned and sent across threads if T: Send
,
+and a Channel
that can be inserted into an EventLoop
.
+It will generate one event per message.
A synchronous version of the channel is provided by sync_channel
, in which
+the SyncSender
will block when the channel is full.
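As a quick illustration, here is a minimal sketch of wiring a channel into an event loop (the message type, shared-data type, and timeout are arbitrary choices for this example):

use calloop::{channel, EventLoop};

fn main() {
    let mut event_loop: EventLoop<()> =
        EventLoop::try_new().expect("Failed to initialize the event loop!");

    let (sender, receiver) = channel::channel::<String>();

    // The receiving end is the event source; the callback gets one
    // Event::Msg per message, and Event::Closed once every Sender is gone.
    event_loop
        .handle()
        .insert_source(receiver, |event, _, _| match event {
            channel::Event::Msg(msg) => println!("got: {msg}"),
            channel::Event::Closed => println!("all senders dropped"),
        })
        .map_err(|e| e.error)
        .expect("Failed to insert the channel source!");

    // Senders can be cloned and moved to other threads.
    std::thread::spawn(move || {
        let _ = sender.send(String::from("hello"));
    });

    // One dispatch iteration: wait up to 100 ms for events, then run callbacks.
    event_loop
        .dispatch(std::time::Duration::from_millis(100), &mut ())
        .expect("Error dispatching!");
}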
pub struct Channel<T> { /* private fields */ }
The receiving end of the channel
+This is the event source to be inserted into your EventLoop
.
Proxy for mpsc::Receiver::recv
to manually poll events.
Note: Normally you would want to use the Channel
by inserting
+it into an event loop instead. Use this for example to immediately
+dispatch pending events after creation.
Proxy for mpsc::Receiver::try_recv
to manually poll events.
Note: Normally you would want to use the Channel
by inserting
+it into an event loop instead. Use this for example to immediately
+dispatch pending events after creation.
pub struct ChannelError(/* private fields */);
An error arising from processing events for a channel.
pub struct Sender<T> { /* private fields */ }
The sender end of a channel
+It can be cloned and sent across threads (if T
is).
pub struct SyncSender<T> { /* private fields */ }
The sender end of a synchronous channel
+It can be cloned and sent across threads (if T
is).
Send a message to the synchronous channel
+This will wake the event loop and deliver an Event::Msg
to
+it containing the provided value. If the channel is full, this
+function will block until the event loop empties it and it can
+deliver the message.
Due to the blocking behavior, this method should not be used on the +same thread as the one running the event loop, as it could cause deadlocks.
+Send a message to the synchronous channel
+This will wake the event loop and deliver an Event::Msg
to
+it containing the provided value. If the channel is full, this
+function will return an error, but the event loop will still be
+signaled for readiness.
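A minimal sketch of the synchronous variant (a bound of 1 and u32 messages are arbitrary choices for this example):

use calloop::channel::{sync_channel, Event};
use calloop::EventLoop;

fn main() {
    let mut event_loop: EventLoop<()> =
        EventLoop::try_new().expect("Failed to initialize the event loop!");

    // A bounded channel holding at most one pending message.
    let (sender, receiver) = sync_channel::<u32>(1);

    event_loop
        .handle()
        .insert_source(receiver, |event, _, _| {
            if let Event::Msg(n) = event {
                println!("received {n}");
            }
        })
        .map_err(|e| e.error)
        .expect("Failed to insert the channel source!");

    // Send from another thread: the second send blocks until the event loop
    // drains the channel, which is why sending from the loop's own thread
    // risks a deadlock.
    std::thread::spawn(move || {
        let _ = sender.send(1);
        let _ = sender.send(2);
    });

    for _ in 0..2 {
        event_loop
            .dispatch(std::time::Duration::from_millis(100), &mut ())
            .expect("Error dispatching!");
    }
}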
pub enum Mode {
+ OneShot,
+ Level,
+ Edge,
+}
Possible modes for registering a file descriptor
+Single event generation
+This FD will be disabled as soon as it has generated one event.
+The user will need to use LoopHandle::update()
to re-enable it if
+desired.
Level-triggering
+This FD will report events on every poll as long as the requested interests +are available.
+Edge-triggering
+This FD will report events only when it gains one of the requested interests. It must thus be fully processed before it’ll generate events again.
+This mode is not supported on certain platforms, and an error will be returned +if it is used.
+As of the time of writing, the platforms that support edge triggered polling are +as follows:
pub enum PostAction {
+ Continue,
+ Reregister,
+ Disable,
+ Remove,
+}
Possible actions that can be requested to the event loop by an +event source once its events have been processed.
+PostAction
values can be combined with the |
(bit-or) operator (or with
+|=
) with the result that:
Reregister
Bit-or-ing these results is useful for composed sources to combine the
+results of their child sources, but note that it only applies to the child
+sources. For example, if every child source returns Continue
, the result
+will be Continue
, but the parent source might still need to return
+Reregister
or something else depending on any additional logic it uses.
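As a small illustration of this combination rule, a minimal sketch (the surrounding parent source is hypothetical; only the | / |= behaviour is taken from the description above):

use calloop::PostAction;

fn main() {
    // Bit-or keeps the "strongest" request among the children:
    // Continue | Reregister yields Reregister.
    let mut combined = PostAction::Continue;
    combined |= PostAction::Reregister;

    // A parent source would typically return `combined` from its own
    // process_events() implementation, possibly overriding it with
    // additional logic of its own.
    match combined {
        PostAction::Reregister => println!("parent asks to be re-registered"),
        _ => println!("no re-registration requested"),
    }
}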
Continue listening for events on this source as before
+Trigger a re-registration of this source
+Disable this source
+Has the same effect as LoopHandle::disable
Remove this source from the event loop
+Has the same effect as LoopHandle::kill
Combines PostAction values returned from nested event sources (via the | and |= operators).
pub enum Error {
+ InvalidToken,
+ IoError(Error),
+ OtherError(Box<dyn Error + Sync + Send>),
+}
The primary error type used by Calloop covering internal errors and I/O +errors that arise during loop operations such as source registration or +event dispatching.
+When an event source is registered (or re- or un-registered) with the +event loop, this error variant will occur if the token Calloop uses to +keep track of the event source is not valid.
+This variant wraps a std::io::Error
, which might arise from
+Calloop’s internal operations.
Any other unexpected error kind (most likely from a user implementation of
+EventSource::process_events()
) will be wrapped in this.
Converts Calloop’s error type into a std::io::Error
.
Converts the InsertError
into Calloop’s error type, throwing away
+the contained source.
Error types used and generated by Calloop.
+This module contains error types for Calloop’s operations. They are designed +to make it easy to deal with errors arising from Calloop’s internal I/O and +other operations.
+There are two top-level error types:
+Error
: used by callback functions, internal operations, and some event
+loop API calls
InsertError
: used primarily by the insert_source()
method when an
+event source cannot be added to the loop and needs to be given back to the
+caller
Result
alias using Calloop’s error type.pub struct InsertError<T> {
+ pub inserted: T,
+ pub error: Error,
+}
An error generated when trying to insert an event source
+inserted: T
The source that could not be inserted
+error: Error
The generated error
+Converts the InsertError
into Calloop’s error type, throwing away
+the contained source.
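A minimal sketch of recovering a source from a failed insertion through the public fields (the timer source and callback are placeholders for this example):

use calloop::{timer::{Timer, TimeoutAction}, EventLoop};

fn main() -> calloop::Result<()> {
    let event_loop: EventLoop<()> =
        EventLoop::try_new().expect("Failed to initialize the event loop!");
    let timer = Timer::immediate();

    match event_loop
        .handle()
        .insert_source(timer, |_deadline, _, _| TimeoutAction::Drop)
    {
        Ok(_token) => println!("timer registered"),
        Err(insert_error) => {
            // The source is handed back, so it can be inspected or reused...
            let _recovered_timer = insert_error.inserted;
            // ...and the failure itself is a plain calloop Error.
            return Err(insert_error.error);
        }
    }
    Ok(())
}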
pub enum ExecutorError {
+ NewFutureError(ChannelError),
+ WakeError(PingError),
+}
An error arising from processing events in an async executor event source.
+Error while reading new futures added via Scheduler::schedule()
.
Error while processing wake events from existing futures.
A futures executor as an event source
+Only available with the executor
cargo feature of calloop
.
This executor is intended for light futures, which will be polled as part of your +event loop. Such futures may be waiting for IO, or for some external computation on +another thread, for example.
+You can create a new executor using the executor
function, which creates a pair
+(Executor<T>, Scheduler<T>)
to handle futures that all evaluate to type T
. The
+executor should be inserted into your event loop, and will yield the return values of
+the futures into your callback as they finish. The scheduler can be cloned and used
+to send futures to the executor for execution. A generic executor can be obtained
+by choosing T = ()
and letting futures handle the forwarding of their return values
+(if any) by their own means.
Note: The futures must have their own means of being woken up, as this executor is,
+by itself, not I/O aware. See LoopHandle::adapt_io
+for that, or you can use some other mechanism if you prefer.
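A minimal sketch of using the executor (this assumes the executor cargo feature is enabled; the output type and future are arbitrary for the example):

use calloop::EventLoop;

fn main() {
    let mut event_loop: EventLoop<()> =
        EventLoop::try_new().expect("Failed to initialize the event loop!");

    // All futures scheduled on this pair resolve to a String.
    let (executor, scheduler) = calloop::futures::executor::<String>()
        .expect("Failed to create the futures executor!");

    // The executor is itself an event source; the callback receives each
    // future's output as it completes.
    event_loop
        .handle()
        .insert_source(executor, |output, _, _| {
            println!("future finished: {output}");
        })
        .map_err(|e| e.error)
        .expect("Failed to insert the executor!");

    // Hand a future over to be run on the event loop.
    scheduler
        .schedule(async { String::from("hello from a future") })
        .expect("Failed to schedule the future!");

    event_loop
        .dispatch(std::time::Duration::from_millis(10), &mut ())
        .expect("Error dispatching!");
}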
pub struct Executor<T> { /* private fields */ }
A future executor as an event source
pub struct ExecutorDestroyed;
Error generated when trying to schedule a future after the +executor was destroyed.
pub struct Scheduler<T> { /* private fields */ }
A scheduler to send futures to an executor
A generic event source wrapping an IO object or file descriptor
+You can use this general purpose adapter around file-descriptor backed objects to
+insert into an EventLoop
.
The event generated by this Generic
event source are the Readiness
+notification itself, and the monitored object is provided to your callback as the second
+argument.
use calloop::{generic::Generic, Interest, Mode, PostAction};
+
+handle.insert_source(
+ // wrap your IO object in a Generic, here we register for read readiness
+ // in level-triggering mode
+ Generic::new(io_object, Interest::READ, Mode::Level),
+ |readiness, io_object, shared_data| {
+ // The first argument of the callback is a Readiness
+ // The second is a &mut reference to your object
+
+ // your callback needs to return a Result<PostAction, std::io::Error>
+ // if it returns an error, the event loop will consider this
+ // event source as erroring and report it to the user.
+ Ok(PostAction::Continue)
+ }
+);
It can also help you implement your own event sources: just have
+these Generic<_>
as fields of your event source, and delegate the
+EventSource
implementation to them.
pub struct FdWrapper<T: AsRawFd>(/* private fields */);
Wrapper to use a type implementing AsRawFd
but not AsFd
with Generic
pub struct Generic<F: AsFd, E = Error> {
+ pub interest: Interest,
+ pub mode: Mode,
+ /* private fields */
+}
A generic event source wrapping a FD-backed type
+interest: Interest
The programmed interest
+mode: Mode
The programmed mode
+Wrap a FD-backed type into a Generic
event source that uses
+std::io::Error
as its error type.
Wrap a FD-backed type into a Generic
event source using an arbitrary error type.
pub struct NoIoDrop<T>(/* private fields */);
A wrapper around a type that does not safely expose mutable access to it.
+The EventSource
trait’s Metadata
type demands mutable access to the inner I/O source.
+However, the inner polling source used by calloop
keeps the handle-based equivalent of an
+immutable pointer to the underlying object’s I/O handle. Therefore, if the inner source is
+dropped, this leaves behind a dangling pointer which immediately invokes undefined behavior
+on the next poll of the event loop.
In order to prevent this from happening, the Generic
I/O source must not directly expose
+a mutable reference to the underlying handle. This type wraps around the underlying handle and
+easily allows users to take immutable (&
) references to the type, but makes mutable (&mut
)
+references unsafe to get. Therefore, it prevents the source from being moved out and dropped
+while it is still registered in the event loop.
Calloop, a Callback-based Event Loop
+This crate provides an EventLoop
type, which is a small abstraction
+over a polling system. The main difference between this crate
+and other traditional rust event loops is that it is based on callbacks:
+you can register several event sources, each being associated with a callback
+closure that will be invoked whenever the associated event source generates
+events.
The main target use of this event loop is thus for apps that expect to spend +most of their time waiting for events and wish to do so in a cheap and convenient +way. It is not meant for large scale high performance IO.
+Below is a quick usage example of calloop. For a more in-depth tutorial, see +the calloop book.
For simple uses, you can just add event sources with callbacks to the event +loop. For example, here’s a runnable program that exits after two seconds:
+ +use calloop::{timer::{Timer, TimeoutAction}, EventLoop, LoopSignal};
+
+fn main() {
+ // Create the event loop. The loop is parameterised by the kind of shared
+ // data you want the callbacks to use. In this case, we want to be able to
+ // stop the loop when the timer fires, so we provide the loop with a
+ // LoopSignal, which has the ability to stop the loop from within events. We
+ // just annotate the type here; the actual data is provided later in the
+ // run() call.
+ let mut event_loop: EventLoop<LoopSignal> =
+ EventLoop::try_new().expect("Failed to initialize the event loop!");
+
+ // Retrieve a handle. It is used to insert new sources into the event loop
+ // It can be cloned, allowing you to insert sources from within source
+ // callbacks.
+ let handle = event_loop.handle();
+
+ // Create our event source, a timer, that will expire in 2 seconds
+ let source = Timer::from_duration(std::time::Duration::from_secs(2));
+
+ // Inserting an event source takes this general form. It can also be done
+ // from within the callback of another event source.
+ handle
+ .insert_source(
+ // a type which implements the EventSource trait
+ source,
+ // a callback that is invoked whenever this source generates an event
+ |event, _metadata, shared_data| {
+ // This callback is given 3 values:
+ // - the event generated by the source (in our case, timer events are the Instant
+ // representing the deadline for which it has fired)
+ // - &mut access to some metadata, specific to the event source (in our case, a
+ // timer handle)
+ // - &mut access to the global shared data that was passed to EventLoop::run or
+ // EventLoop::dispatch (in our case, a LoopSignal object to stop the loop)
+ //
+ // The return type of the callback is specified by the event source;
+ // many sources use (), but the timer expects a TimeoutAction (see below).
+ println!("Timeout for {:?} expired!", event);
+ // notify the event loop to stop running using the signal in the shared data
+ // (see below)
+ shared_data.stop();
+ // The timer event source requires us to return a TimeoutAction to
+ // specify if the timer should be rescheduled. In our case we just drop it.
+ TimeoutAction::Drop
+ },
+ )
+ .expect("Failed to insert event source!");
+
+ // Create the shared data for our loop.
+ let mut shared_data = event_loop.get_signal();
+
+ // Actually run the event loop. This will dispatch received events to their
+ // callbacks, waiting at most 20ms for new events between each invocation of
+ // the provided callback (pass None for the timeout argument if you want to
+ // wait indefinitely between events).
+ //
+ // This is where we pass the *value* of the shared data, as a mutable
+ // reference that will be forwarded to all your callbacks, allowing them to
+ // share some state
+ event_loop
+ .run(
+ std::time::Duration::from_millis(20),
+ &mut shared_data,
+ |_shared_data| {
+ // Finally, this is where you can insert the processing you need
+ // to do between each wait for events, e.g. drawing logic if
+ // you're doing a GUI app.
+ },
+ )
+ .expect("Error during event loop!");
+}
The event loop is backed by an OS provided polling selector (epoll on Linux).
This crate also provides some adapters for common event sources such as:
+As well as generic objects backed by file descriptors.
+It is also possible to insert “idle” callbacks. These callbacks represent computations that
+need to be done at some point, but are not as urgent as processing the events. These callbacks
+are stored and then executed during EventLoop::dispatch
, once all
+events from the sources have been processed.
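A minimal sketch of inserting an idle callback (the shared-data type is an arbitrary choice for the example):

use calloop::EventLoop;

fn main() {
    let mut event_loop: EventLoop<u32> =
        EventLoop::try_new().expect("Failed to initialize the event loop!");

    // Runs once the current dispatching cycle has processed all pending
    // events from the registered sources.
    event_loop.handle().insert_idle(|counter| {
        *counter += 1;
        println!("idle work done, counter = {counter}");
    });

    let mut counter = 0u32;
    event_loop
        .dispatch(std::time::Duration::from_millis(1), &mut counter)
        .expect("Error dispatching!");
}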
calloop
can be used with futures, both as an executor and for monitoring Async IO.
Activating the executor
cargo feature will add the futures
module, which provides
+a future executor that can be inserted into an EventLoop
as yet another EventSource
.
IO objects can be made Async-aware via the LoopHandle::adapt_io
+method. Waking up the futures using these objects is handled by the associated EventLoop
+directly.
You can create custom event sources that can be inserted in the event loop by
+implementing the EventSource
trait. This can be done either directly from the file
+descriptors of your source of interest, or by wrapping another event source and further
+processing its events. An EventSource
can register more than one file descriptor and
+aggregate them.
Currently, calloop is tested on Linux, FreeBSD and macOS.
+The following platforms are also enabled at compile time but not tested: Android, NetBSD, +OpenBSD, DragonFlyBSD.
+Those platforms should work based on the fact that they have the same polling mechanism as +tested platforms, but some subtle bugs might still occur.
+pub use error::Error;
pub use error::InsertError;
pub use error::Result;
EventSource::register()
for all the sources provided.EventSource::reregister()
for all the sources provided.EventSource::unregister()
for all the sources provided.Iterator
over the events relevant to a particular source
+This type is used in the EventSource::before_handle_events
methods for
+two main reasons:EventLoop
.EventSource
trait)Adapters for async IO objects
+This module mainly hosts the Async
adapter for making IO objects async with readiness
+monitoring backed by an EventLoop
. See LoopHandle::adapt_io
for
+how to create them.
pub struct Async<'l, F: AsFd> { /* private fields */ }
Adapter for async IO manipulations
+This type wraps an IO object, providing methods to create futures waiting for its +readiness.
+If the futures-io
cargo feature is enabled, it also implements AsyncRead
and/or
+AsyncWrite
if the underlying type implements Read
and/or Write
.
Note that this adapter and the futures produced from it are not threadsafe.
+A future that resolves once the object becomes ready for reading
+A future that resolves once the object becomes ready for writing
+Remove the async adapter and retrieve the underlying object
pub struct Readable<'s, 'l, F: AsFd> { /* private fields */ }
A future that resolves once the associated object becomes ready for reading
pub struct Writable<'s, 'l, F: AsFd> { /* private fields */ }
A future that resolves once the associated object becomes ready for writing
macro_rules! batch_register {
    ($poll:ident, $token_fac:ident, $( $source:expr ),* $(,)?) => { ... };
}
Register a set of event sources. Effectively calls
+EventSource::register()
for all the sources provided.
Usage:
+calloop::batch_register!(
+ poll, token_factory,
+ self.source_one,
+ self.source_two,
+ self.source_three,
+ self.source_four,
+)
+
Note that there is no scope for customisation; if you need to do special +things with a particular source, you’ll need to leave it off the list. Also +note that this only does try-or-early-return error handling in the order +that you list the sources; if you need anything else, don’t use this macro.
macro_rules! batch_reregister {
    ($poll:ident, $token_fac:ident, $( $source:expr ),* $(,)?) => { ... };
}
Reregister a set of event sources. Effectively calls
+EventSource::reregister()
for all the sources provided.
Usage:
+calloop::batch_reregister!(
+ poll, token_factory,
+ self.source_one,
+ self.source_two,
+ self.source_three,
+ self.source_four,
+)
+
Note that there is no scope for customisation; if you need to do special +things with a particular source, you’ll need to leave it off the list. Also +note that this only does try-or-early-return error handling in the order +that you list the sources; if you need anything else, don’t use this macro.
macro_rules! batch_unregister {
    ($poll:ident, $( $source:expr ),* $(,)?) => { ... };
}
Unregister a set of event sources. Effectively calls
+EventSource::unregister()
for all the sources provided.
Usage:
+calloop::batch_unregister!(
+ poll,
+ self.source_one,
+ self.source_two,
+ self.source_three,
+ self.source_four,
+)
+
Note that there is no scope for customisation; if you need to do special +things with a particular source, you’ll need to leave it off the list. Also +note that this only does try-or-early-return error handling in the order +that you list the sources; if you need anything else, don’t use this macro.
+pub fn make_ping() -> Result<(Ping, PingSource)>
Create a new ping event source
+You are given a Ping
instance, which can be cloned and used to ping the
+event loop, and a PingSource
, which you can insert in your event loop to
+receive the pings.
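A minimal sketch of waking the loop from another thread with a ping:

use calloop::{ping::make_ping, EventLoop};

fn main() {
    let mut event_loop: EventLoop<()> =
        EventLoop::try_new().expect("Failed to initialize the event loop!");

    let (ping, ping_source) = make_ping().expect("Failed to create the ping!");

    // The source produces a single () event per dispatch, however many
    // times it was pinged in between.
    event_loop
        .handle()
        .insert_source(ping_source, |_, _, _| println!("pinged!"))
        .map_err(|e| e.error)
        .expect("Failed to insert the ping source!");

    // The Ping handle can be cloned and sent to other threads.
    std::thread::spawn(move || ping.ping());

    event_loop
        .dispatch(std::time::Duration::from_millis(100), &mut ())
        .expect("Error dispatching!");
}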
Ping to the event loop
+This is an event source that just produces ()
events whevener the associated
+Ping::ping
method is called. If the event source is pinged multiple times
+between a single dispatching, it’ll only generate one event.
This event source is a simple way of waking up the event loop from an other part of your program
+(and is what backs the LoopSignal
). It can also be used as a building
+block to construct event sources whose source of event is not file descriptor, but rather an
+userspace source (like an other thread).
pub struct PingError(/* private fields */);
An error arising from processing events for a ping.
pub type PingSource = PingSource;
The Ping handle
+This handle can be cloned and sent across threads. It can be used to
+send pings to the PingSource
.
struct PingSource { /* private fields */ }
pub struct Dispatcher<'a, S, Data>(/* private fields */);
An event source with its callback.
+The Dispatcher
can be registered in an event loop.
+Use the as_source_{ref,mut}
functions to interact with the event source.
+Use into_source_inner
to get the event source back.
Returns an immutable reference to the event source.
+Has the same semantics as RefCell::borrow()
.
The dispatcher being mutably borrowed while its events are dispatched, +this method will panic if invoked from within the associated dispatching closure.
+Returns a mutable reference to the event source.
+Has the same semantics as RefCell::borrow_mut()
.
The dispatcher being mutably borrowed while its events are dispatched, +this method will panic if invoked from within the associated dispatching closure.
+Consumes the Dispatcher and returns the inner event source.
+Panics if the Dispatcher
is still registered.
source
. Read moreSubscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct EventIterator<'a> { /* private fields */ }
The EventIterator is an Iterator
over the events relevant to a particular source
+This type is used in the EventSource::before_handle_events
methods for
+two main reasons:
Clone
, which is not
+possible with dynamic dispatchiter_next_chunk
)N
values. Read moreiter_advance_by
)n
elements. Read moren
th element of the iterator. Read moreiter_intersperse
)separator
+between adjacent items of the original iterator. Read moren
elements. Read moren
elements, or fewer
+if the underlying iterator ends sooner. Read moreiter_map_windows
)f
for each contiguous window of size N
over
+self
and returns an iterator over the outputs of f
. Like slice::windows()
,
+the windows during mapping overlap as well. Read moreiter_collect_into
)iter_is_partitioned
)true
precede all those that return false
. Read moreiterator_try_reduce
)try_find
)iter_array_chunks
)N
elements of the iterator at a time. Read moreiter_order_by
)Iterator
with those
+of another with respect to the specified comparison function. Read morePartialOrd
elements of
+this Iterator
with those of another. The comparison works like short-circuit
+evaluation, returning a result without comparing the remaining elements.
+As soon as an order can be determined, the evaluation stops and a result is returned. Read moreiter_order_by
)Iterator
with those
+of another with respect to the specified comparison function. Read moreiter_order_by
)Iterator
are lexicographically
+less than those of another. Read moreIterator
are lexicographically
+less or equal to those of another. Read moreIterator
are lexicographically
+greater than those of another. Read moreIterator
are lexicographically
+greater than or equal to those of another. Read moreis_sorted
)is_sorted
)Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct EventLoop<'l, Data> { /* private fields */ }
An event loop
+This loop can host several event sources, that can be dynamically added or removed.
+Create a new event loop
+Fails if the initialization of the polling system failed.
+Retrieve a loop handle
+Dispatch pending events to their callbacks
+If some sources have events available, their callbacks will be immediatly called.
+Otherwise this will wait until an event is receive or the provided timeout
+is reached. If timeout
is None
, it will wait without a duration limit.
Once pending events have been processed or the timeout is reached, all pending +idle callbacks will be fired before this method returns.
+Get a signal to stop this event loop from running
+To be used in conjunction with the run()
method.
Run this event loop
+This will repeatedly try to dispatch events (see the dispatch()
method) on
+this event loop, waiting at most timeout
every time.
Between each dispatch wait, your provided callback will be called.
+You can use the get_signal()
method to retrieve a way to stop or wakeup
+the event loop from anywhere.
Block a future on this event loop.
+This will run the provided future on this event loop, blocking until it is +resolved.
+If LoopSignal::stop()
is called before the future is resolved, this function returns
+None
.
Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct Idle<'i> { /* private fields */ }
An idle callback that was inserted in this loop
+This handle allows you to cancel the callback. Dropping +it will not cancel it.
+Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct Interest {
+ pub readable: bool,
+ pub writable: bool,
+}
Interest to register regarding the file descriptor
+readable: bool
Wait for the FD to be readable
+writable: bool
Wait for the FD to be writable
+Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct LoopHandle<'l, Data> { /* private fields */ }
An handle to an event loop
+This handle allows you to insert new sources and idles in this event loop, +it can be cloned, and it is possible to insert new sources from within a source +callback.
+Inserts a new event source in the loop.
+The provided callback will be called during the dispatching cycles whenever the
+associated source generates events, see EventLoop::dispatch(..)
for details.
This function takes ownership of the event source. Use register_dispatcher
+if you need access to the event source after this call.
Registers a Dispatcher
in the loop.
Use this function if you need access to the event source after its insertion in the loop.
+See also insert_source
.
Inserts an idle callback.
+This callback will be called during a dispatching cycle when the event loop has +finished processing all pending events from the sources and becomes idle.
+Enables this previously disabled event source.
+This previously disabled source will start generating events again.
+Note: this cannot be done from within the source callback.
+Makes this source update its registration.
+Use this if, after accessing the source, you have changed its parameters in a way that requires +updating its registration.
+Disables this event source.
+The source remains in the event loop, but it’ll no longer generate events
+Removes this source from the event loop.
+Wrap an IO object into an async adapter
+This adapter turns the IO object into an async-aware one that can be used in futures. +The readiness of these futures will be driven by the event loop.
+The produced futures can be polled in any executor, and notably the one provided by +calloop.
+Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct LoopSignal { /* private fields */ }
A signal that can be shared between threads to stop or wakeup a running +event loop
+Stop the event loop
+Once this method is called, the next time the event loop has finished +waiting for events, it will return rather than starting to wait again.
+This is only useful if you are using the EventLoop::run()
method.
Wake up the event loop
+This sends a dummy event to the event loop to simulate the reception
+of an event, making the wait return early. Called after stop()
, this
+ensures the event loop will terminate quickly if you specified a long
+timeout (or no timeout at all) to the dispatch
or run
method.
source
. Read moreSubscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct Poll { /* private fields */ }
The polling system
+This type represents the polling system of calloop, on which you
+can register your file descriptors. This interface is only accessible in
+implementations of the EventSource
trait.
You only need to interact with this type if you are implementing your
+own event sources, while implementing the EventSource
trait.
+And even in this case, you can often just use the Generic
event
+source and delegate the implementations to it.
Register a new file descriptor for polling
+The file descriptor will be registered with given interest, +mode and token. This function will fail if given a +bad file descriptor or if the provided file descriptor is already +registered.
+The registered source must not be dropped before it is unregistered.
+If your event source is dropped without being unregistered, the token +passed in here will remain on the heap and continue to be used by the +polling system even though no event source will match it.
+Update the registration for a file descriptor
+This allows you to change the interest, mode or token of a file +descriptor. Fails if the provided fd is not currently registered.
+See note on register()
regarding leaking.
Unregister a file descriptor
+This file descriptor will no longer generate events. Fails if the +provided file descriptor is not currently registered.
+Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct Readiness {
+ pub readable: bool,
+ pub writable: bool,
+ pub error: bool,
+}
Readiness for a file descriptor notification
+readable: bool
Is the FD readable
+writable: bool
Is the FD writable
+error: bool
Is the FD in an error state
+Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct RegistrationToken { /* private fields */ }
A token representing a registration in the EventLoop
.
This token is given to you by the EventLoop
when an EventSource
is inserted or
+a Dispatcher
is registered. You can use it to disable,
+enable, update`,
+remove or kill it.
source
. Read moreself
and other
values to be equal, and is used
+by ==
.Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct Token { /* private fields */ }
A token (for implementation of the EventSource
trait)
This token is produced by the TokenFactory
and is used when calling the
+EventSource
implementations to process event, in order
+to identify which sub-source produced them.
You should forward it to the Poll
when registering your file descriptors.
Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct TokenFactory { /* private fields */ }
Factory for creating tokens in your registrations
+When composing event sources, each sub-source needs to +have its own token to identify itself. This factory is +provided to produce such unique tokens.
+Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read moreRedirecting to ../../calloop/enum.Mode.html...
+ + + \ No newline at end of file diff --git a/api/calloop/sys/struct.Interest.html b/api/calloop/sys/struct.Interest.html new file mode 100644 index 00000000..f7a4b979 --- /dev/null +++ b/api/calloop/sys/struct.Interest.html @@ -0,0 +1,11 @@ + + + + +Redirecting to ../../calloop/struct.Interest.html...
+ + + \ No newline at end of file diff --git a/api/calloop/sys/struct.Poll.html b/api/calloop/sys/struct.Poll.html new file mode 100644 index 00000000..7af67df7 --- /dev/null +++ b/api/calloop/sys/struct.Poll.html @@ -0,0 +1,11 @@ + + + + +Redirecting to ../../calloop/struct.Poll.html...
+ + + \ No newline at end of file diff --git a/api/calloop/sys/struct.Readiness.html b/api/calloop/sys/struct.Readiness.html new file mode 100644 index 00000000..b71dde50 --- /dev/null +++ b/api/calloop/sys/struct.Readiness.html @@ -0,0 +1,11 @@ + + + + +Redirecting to ../../calloop/struct.Readiness.html...
+ + + \ No newline at end of file diff --git a/api/calloop/sys/struct.Token.html b/api/calloop/sys/struct.Token.html new file mode 100644 index 00000000..3ef87225 --- /dev/null +++ b/api/calloop/sys/struct.Token.html @@ -0,0 +1,11 @@ + + + + +Redirecting to ../../calloop/struct.Token.html...
+ + + \ No newline at end of file diff --git a/api/calloop/sys/struct.TokenFactory.html b/api/calloop/sys/struct.TokenFactory.html new file mode 100644 index 00000000..8b4477e8 --- /dev/null +++ b/api/calloop/sys/struct.TokenFactory.html @@ -0,0 +1,11 @@ + + + + +Redirecting to ../../calloop/struct.TokenFactory.html...
+ + + \ No newline at end of file diff --git a/api/calloop/timer/enum.TimeoutAction.html b/api/calloop/timer/enum.TimeoutAction.html new file mode 100644 index 00000000..40e7485e --- /dev/null +++ b/api/calloop/timer/enum.TimeoutAction.html @@ -0,0 +1,25 @@ +pub enum TimeoutAction {
+ Drop,
+ ToInstant(Instant),
+ ToDuration(Duration),
+}
Action to reschedule a timeout if necessary
+Don’t reschedule this timer
+Reschedule this timer to a given Instant
Reschedule this timer to a given Duration
in the future
Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read moreTimer event source
+The Timer
is an event source that will fire its event after a certain amount of time
+specified at creation. Its timing is tracked directly by the event loop core logic, and it does
+not consume any system resource.
As of calloop v0.11.0, the event loop always uses high-precision timers. However, the timer +precision varies between operating systems; for instance, the scheduler granularity on Windows +is about 16 milliseconds. If you need to rely on good precision timers in general, you may need +to enable realtime features of your OS to ensure your thread is quickly woken up by the system +scheduler.
+The provided event is an Instant
representing the deadline for which this timer has fired
+(which can be earlier than the current time depending on the event loop congestion).
The callback associated with this event source is expected to return a TimeoutAction
, which
+can be used to implement self-repeating timers by telling calloop to reprogram the same timer
+for a later timeout after it has fired.
pub struct TimeoutFuture { /* private fields */ }
A future that resolves once a certain timeout is expired
+Create a future that resolves after a given duration
+Create a future that resolves at a given instant
+Subscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub struct Timer { /* private fields */ }
A timer event source
+When registered to the event loop, it will trigger an event once its deadline is reached.
+If the deadline is in the past relative to the moment of its insertion in the event loop,
+the TImer
will trigger an event as soon as the event loop is dispatched.
Create a timer that will fire immediately when inserted in the event loop
+Create a timer that will fire after a given duration from now
+Create a timer that will fire at a given instant
+Changes the deadline of this timer to an Instant
If the Timer
is currently registered in the event loop, it needs to be
+re-registered for this change to take effect.
Changes the deadline of this timer to a Duration
from now
If the Timer
is currently registered in the event loop, it needs to be
+re-registered for this change to take effect.
Get the current deadline of this Timer
Returns None
if the timer has overflowed.
process_events()
(not the user callback!).EventSource::before_sleep
+and EventSource::before_handle_events
notifications. These are opt-in because
+they require more expensive checks, and almost all sources will not need these notificationspoll
is about to begin Read moreEventSource::process_events
will
+be called with the given events for this source. The iterator may be empty,
+which indicates that no events were generated for this source Read moreSubscriber
to this type, returning a
+[WithDispatch
] wrapper. Read morepub trait EventSource {
+ type Event;
+ type Metadata;
+ type Ret;
+ type Error: Into<Box<dyn Error + Sync + Send>>;
+
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = false;
+
+ // Required methods
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ callback: F
+ ) -> Result<PostAction, Self::Error>
+ where F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret;
+ fn register(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory
+ ) -> Result<()>;
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory
+ ) -> Result<()>;
+ fn unregister(&mut self, poll: &mut Poll) -> Result<()>;
+
+ // Provided methods
+ fn before_sleep(&mut self) -> Result<Option<(Readiness, Token)>> { ... }
+ fn before_handle_events(&mut self, events: EventIterator<'_>) { ... }
+}
Trait representing an event source
+This is the trait you need to implement if you wish to create your own +calloop-compatible event sources.
+The 3 associated types define the type of closure the user will need to +provide to process events for your event source.
+The process_events
method will be called when one of the FD you registered
+is ready, with the associated readiness and token.
The register
, reregister
and unregister
methods are plumbing to let your
+source register itself with the polling system. See their documentation for details.
In case your event source needs to do some special processing before or after a
+polling session occurs (to prepare the underlying source for polling, and cleanup
+after that), you can override NEEDS_EXTRA_LIFECYCLE_EVENTS
to true
.
+For all sources for which that constant is true
, the methods before_sleep
and
+before_handle_events
will be called.
+before_sleep
is called before the polling system performs a poll operation.
+before_handle_events
is called before any process_events methods have been called.
+This means that during process_events
you can assume that all cleanup has occured on
+all sources.
Some metadata of your event source
+This is typically useful if your source contains some internal state that
+the user may need to interact with when processing events. The user callback
+will receive a &mut Metadata
reference.
Set to ()
if not needed.
Whether this source needs to be sent the EventSource::before_sleep
+and EventSource::before_handle_events
notifications. These are opt-in because
+they require more expensive checks, and almost all sources will not need these notifications
Process any relevant events
+This method will be called every time one of the FD you registered becomes +ready, including the readiness details and the associated token.
+Your event source will then do some processing of the file descriptor(s) to generate
+events, and call the provided callback
for each one of them.
You should ensure you drained the file descriptors of their events, especially if using +edge-triggered mode.
+Register yourself to this poll instance
+You should register all your relevant file descriptors to the provided Poll
+using its Poll::register
method.
If you need to register more than one file descriptor, you can change the
+sub_id
field of the Token
to differentiate between them.
Re-register your file descriptors
+Your should update the registration of all your relevant file descriptor to
+the provided Poll
using its Poll::reregister
,
+if necessary.
Unregister your file descriptors
+You should unregister all your file descriptors from this Poll
using its
+Poll::unregister
method.
Notification that a single poll
is about to begin
Use this to perform operations which must be done before polling, +but which may conflict with other event handlers. For example, +if polling requires a lock to be taken
+If this returns Ok(Some), this will be treated as an event arriving in polling, and
+your event handler will be called with the returned Token
and Readiness
.
+Polling will however still occur, but with a timeout of 0, so additional events
+from this or other sources may also be handled in the same iterations.
+The returned Token
must belong to this source
+Notification that polling is complete, and EventSource::process_events will
+be called with the given events for this source. The iterator may be empty,
+which indicates that no events were generated for this source.
+Please note, the iterator excludes any synthetic events returned from
+EventSource::before_sleep.
+Use this to perform cleanup before event handlers with arbitrary
+code may run. This could be used to drop a lock obtained in
+EventSource::before_sleep.
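+Continuing the WrappedChannel sketch above, a source that buffers messages internally
+could opt into these notifications as follows. This is only a sketch: the
+has_buffered_message and registration_token fields are hypothetical, and the parameter
+and return types are assumed to match the internal dispatcher shown later in this diff
+(Option<(Readiness, Token)> and EventIterator):
+
+const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = true;
+
+fn before_sleep(&mut self) -> calloop::Result<Option<(Readiness, Token)>> {
+    if self.has_buffered_message {
+        // Report a synthetic event so process_events() runs even though the
+        // underlying FD is not ready; the token must be one registered by this source.
+        Ok(Some((Readiness::EMPTY, self.registration_token)))
+    } else {
+        Ok(None)
+    }
+}
+
+fn before_handle_events(&mut self, _events: calloop::EventIterator<'_>) {
+    // Polling is done: release anything acquired in before_sleep().
+}
+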
+Blanket implementation for exclusive references to event sources.
+EventSource is not an object-safe trait, so this does not include trait objects.
+Blanket implementation for boxed event sources. EventSource is not an
+object-safe trait, so this does not include trait objects.
+Wrapper for a transient Calloop event source.
+If you have a high-level event source that you expect to remain in the event
+loop indefinitely, and another event source nested inside that one that you
+expect to require removal or disabling from time to time, this module can
+handle it for you.
+TransientSource wraps a Calloop event source and manages its
+registration. A user of this type only needs to perform the usual Calloop
+calls (process_events() and *register()) and handle the return value of
+process_events().
+pub struct TransientSource<T> { /* private fields */ }
+A TransientSource wraps a Calloop event source and manages its
+registration. A user of this type only needs to perform the usual Calloop
+calls (process_events() and *register()) and handle the return value of
+process_events().
+Rather than needing to check for the full set of
+PostAction values returned from process_events(),
+you can just check for Continue or Reregister and pass that back out
+through your own process_events() implementation. In your registration
+functions, you then only need to call the same function on this type, i.e.
+register() inside register() etc.
+For example, say you have a source that contains a channel along with some
+other logic. If the channel’s sending end has been dropped, it needs to be
+removed from the loop. So to manage this, you use this in your struct:
+struct CompositeSource {
+ // Event source for channel.
+ mpsc_receiver: TransientSource<calloop::channel::Channel<T>>,
+
+ // Any other fields go here...
+}
+
+To create the transient source, you can simply use the Into implementation:
let (sender, source) = channel();
+let mpsc_receiver: TransientSource<Channel> = source.into();
+
+(If you want to start off with an empty TransientSource, you can just use
+Default::default() instead.)
+TransientSource implements EventSource and passes
+through process_events() calls, so in the parent’s process_events()
+implementation you can just do this:
fn process_events<F>(
+ &mut self,
+ readiness: calloop::Readiness,
+ token: calloop::Token,
+ callback: F,
+) -> Result<calloop::PostAction, Self::Error>
+where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+{
+ let channel_return = self.mpsc_receiver.process_events(readiness, token, callback)?;
+
+ // Perform other logic here...
+
+ Ok(channel_return)
+}
+
Note that:
+You can call process_events() on the TransientSource<Channel> even
+if the channel has been unregistered and dropped. All that will happen
+is that you won’t get any events from it.
+The PostAction returned from process_events()
+will only ever be PostAction::Continue or PostAction::Reregister.
+You will still need to combine this with the result of any other sources
+(transient or not).
+Once you return channel_return from your process_events() method (and
+assuming it propagates all the way up to the event loop itself through any
+other event sources), the event loop might call reregister() on your
+source. All your source has to do is:
fn reregister(
+ &mut self,
+ poll: &mut calloop::Poll,
+ token_factory: &mut calloop::TokenFactory,
+) -> crate::Result<()> {
+ self.mpsc_receiver.reregister(poll, token_factory)?;
+
+ // Other registration actions...
+
+ Ok(())
+}
+
+The TransientSource will take care of updating the registration of the
+inner source, even if it actually needs to be unregistered or initially
+registered.
+TransientSources
+Not properly removing or replacing TransientSources can cause spurious
+wakeups of the event loop, and in some cases can leak file descriptors or
+fail to free entries in Calloop’s internal data structures. No unsoundness
+or undefined behaviour will result, but leaking file descriptors can result
+in errors or panics.
+If you want to remove a source before it returns PostAction::Remove, use
+the TransientSource::remove() method. If you want to replace a source
+with another one, use the TransientSource::replace() method. Either of
+these may be called at any time during processing or from outside the event
+loop. Both require either returning PostAction::Reregister from the
+process_events() call that does this, or reregistering the event source
+some other way, e.g. via the top-level loop handle.
+If, instead, you directly assign a new source to the variable holding the
+TransientSource, the inner source will be dropped before it can be
+unregistered. For example:
self.mpsc_receiver = Default::default();
+self.mpsc_receiver = new_channel.into();
+
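+Instead, a hedged sketch of the intended pattern (assuming replace() takes the new source
+by value, as described below):
+
+// Keeps the old channel alive until it has been properly unregistered.
+self.mpsc_receiver.replace(new_channel.into());
+// Then return PostAction::Reregister from process_events(), or reregister the
+// source via the loop handle if this happens outside the event loop.
+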
+Apply a function to the enclosed source, if it exists and is not about
+to be removed.
+Removes the wrapped event source from the event loop and this wrapper.
+If this is called from outside of the event loop, you will need to wake
+up the event loop for any changes to take place. If it is called from
+within the event loop, you must return PostAction::Reregister from
+your own event source’s process_events(), and the source will be
+unregistered as needed after it exits.
+Replace the currently wrapped source with the given one. No more events
+will be generated from the old source after this point. The old source
+will not be dropped immediately; it will be kept so that it can be
+deregistered.
+If this is called from outside of the event loop, you will need to wake
+up the event loop for any changes to take place. If it is called from
+within the event loop, you must return PostAction::Reregister from
+your own event source’s process_events(), and the sources will be
+registered and unregistered as needed after it exits.
diff --git a/api/settings.html b/api/settings.html
new file mode 100644
index 00000000..cf515a4f
--- /dev/null
+++ b/api/settings.html
@@ -0,0 +1,2 @@
//! Error types used and generated by Calloop.
+//!
+//! This module contains error types for Calloop's operations. They are designed
+//! to make it easy to deal with errors arising from Calloop's internal I/O and
+//! other operations.
+//!
+//! There are two top-level error types:
+//!
+//! - [`Error`]: used by callback functions, internal operations, and some event
+//! loop API calls
+//!
+//! - [`InsertError`]: used primarily by the [`insert_source()`] method when an
+//! event source cannot be added to the loop and needs to be given back to the
+//! caller
+//!
+//! [`insert_source()`]: crate::LoopHandle::insert_source()
+
+use std::fmt::{self, Debug, Formatter};
+
+/// The primary error type used by Calloop covering internal errors and I/O
+/// errors that arise during loop operations such as source registration or
+/// event dispatching.
+#[derive(thiserror::Error, Debug)]
+pub enum Error {
+ /// When an event source is registered (or re- or un-registered) with the
+ /// event loop, this error variant will occur if the token Calloop uses to
+ /// keep track of the event source is not valid.
+ #[error("invalid token provided to internal function")]
+ InvalidToken,
+
+ /// This variant wraps a [`std::io::Error`], which might arise from
+ /// Calloop's internal operations.
+ #[error("underlying IO error")]
+ IoError(#[from] std::io::Error),
+
+ /// Any other unexpected error kind (most likely from a user implementation of
+ /// [`EventSource::process_events()`]) will be wrapped in this.
+ ///
+ /// [`EventSource::process_events()`]: crate::EventSource::process_events()
+ #[error("other error during loop operation")]
+ OtherError(#[from] Box<dyn std::error::Error + Sync + Send>),
+}
+
+impl From<Error> for std::io::Error {
+ /// Converts Calloop's error type into a [`std::io::Error`].
+ fn from(err: Error) -> Self {
+ match err {
+ Error::IoError(source) => source,
+ Error::InvalidToken => Self::new(std::io::ErrorKind::InvalidInput, err.to_string()),
+ Error::OtherError(source) => Self::new(std::io::ErrorKind::Other, source),
+ }
+ }
+}
+
+/// [`Result`] alias using Calloop's error type.
+pub type Result<T> = core::result::Result<T, Error>;
+
+/// An error generated when trying to insert an event source
+#[derive(thiserror::Error)]
+#[error("error inserting event source")]
+pub struct InsertError<T> {
+ /// The source that could not be inserted
+ pub inserted: T,
+ /// The generated error
+ #[source]
+ pub error: Error,
+}
+
+impl<T> Debug for InsertError<T> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, formatter: &mut Formatter) -> core::result::Result<(), fmt::Error> {
+ write!(formatter, "{:?}", self.error)
+ }
+}
+
+impl<T> From<InsertError<T>> for crate::Error {
+ /// Converts the [`InsertError`] into Calloop's error type, throwing away
+ /// the contained source.
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn from(e: InsertError<T>) -> crate::Error {
+ e.error
+ }
+}
+
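+// A hedged usage sketch of InsertError: the public `inserted` and `error` fields above
+// let a caller recover a source that failed to be inserted. The `handle` and `my_timer`
+// values are placeholders, not items defined in this file.
+fn insertion_example(handle: calloop::LoopHandle<'static, ()>, my_timer: calloop::timer::Timer) {
+    match handle.insert_source(my_timer, |_deadline, _metadata, _shared| {
+        calloop::timer::TimeoutAction::Drop
+    }) {
+        Ok(_token) => { /* the source is now registered with the loop */ }
+        Err(calloop::InsertError { inserted, error }) => {
+            eprintln!("could not insert source: {}", error);
+            // `inserted` hands the source back so it can be retried or dropped cleanly.
+            drop(inserted);
+        }
+    }
+}
+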
//! Adapters for async IO objects
+//!
+//! This module mainly hosts the [`Async`] adapter for making IO objects async with readiness
+//! monitoring backed by an [`EventLoop`](crate::EventLoop). See [`LoopHandle::adapt_io`] for
+//! how to create them.
+//!
+//! [`LoopHandle::adapt_io`]: crate::LoopHandle#method.adapt_io
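+//!
+//! A hedged usage sketch (names are placeholders; the futures are awaited from a task
+//! scheduled on calloop's executor, much like the tests at the bottom of this file):
+//!
+//! ```ignore
+//! // Inside a future scheduled on the loop's executor:
+//! let mut stream = handle.adapt_io(unix_stream)?;
+//! stream.readable().await;               // resolves once the stream has data to read
+//! let unix_stream = stream.into_inner(); // the wrapped object can be recovered
+//! ```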
+
+use std::cell::RefCell;
+use std::pin::Pin;
+use std::rc::Rc;
+use std::task::{Context, Poll as TaskPoll, Waker};
+
+#[cfg(unix)]
+use std::os::unix::io::{AsFd, AsRawFd, BorrowedFd, RawFd};
+#[cfg(windows)]
+use std::os::windows::io::{
+ AsRawSocket as AsRawFd, AsSocket as AsFd, BorrowedSocket as BorrowedFd, RawSocket as RawFd,
+};
+
+#[cfg(feature = "futures-io")]
+use futures_io::{AsyncRead, AsyncWrite, IoSlice, IoSliceMut};
+
+use crate::loop_logic::EventIterator;
+use crate::{
+ loop_logic::LoopInner, sources::EventDispatcher, Interest, Mode, Poll, PostAction, Readiness,
+ Token, TokenFactory,
+};
+use crate::{AdditionalLifecycleEventsSet, RegistrationToken};
+
+/// Adapter for async IO manipulations
+///
+/// This type wraps an IO object, providing methods to create futures waiting for its
+/// readiness.
+///
+/// If the `futures-io` cargo feature is enabled, it also implements `AsyncRead` and/or
+/// `AsyncWrite` if the underlying type implements `Read` and/or `Write`.
+///
+/// Note that this adapter and the futures produced from it are *not* threadsafe.
+///
+/// ## Platform-Specific
+///
+/// - **Windows:** Usually, on drop, the file descriptor is set back to its previous status.
+/// For example, if the file was previously nonblocking it will be set to nonblocking, and
+/// if the file was blocking it will be set to blocking. However, on Windows, it is impossible
+/// to tell what its status was before. Therefore it will always be set to blocking.
+pub struct Async<'l, F: AsFd> {
+ fd: Option<F>,
+ dispatcher: Rc<RefCell<IoDispatcher>>,
+ inner: Rc<dyn IoLoopInner + 'l>,
+ was_nonblocking: bool,
+}
+
+impl<'l, F: AsFd + std::fmt::Debug> std::fmt::Debug for Async<'l, F> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.debug_struct("Async").field("fd", &self.fd).finish()
+ }
+}
+
+impl<'l, F: AsFd> Async<'l, F> {
+ pub(crate) fn new<Data>(inner: Rc<LoopInner<'l, Data>>, fd: F) -> crate::Result<Async<'l, F>> {
+ // set non-blocking
+ let was_nonblocking = set_nonblocking(
+ #[cfg(unix)]
+ fd.as_fd(),
+ #[cfg(windows)]
+ fd.as_socket(),
+ true,
+ )?;
+ // register in the loop
+ let dispatcher = Rc::new(RefCell::new(IoDispatcher {
+ #[cfg(unix)]
+ fd: fd.as_fd().as_raw_fd(),
+ #[cfg(windows)]
+ fd: fd.as_socket().as_raw_socket(),
+ token: None,
+ waker: None,
+ is_registered: false,
+ interest: Interest::EMPTY,
+ last_readiness: Readiness::EMPTY,
+ }));
+
+ {
+ let mut sources = inner.sources.borrow_mut();
+ let slot = sources.vacant_entry();
+ slot.source = Some(dispatcher.clone());
+ dispatcher.borrow_mut().token = Some(Token { inner: slot.token });
+ }
+
+ // SAFETY: We are sure to deregister on drop.
+ unsafe {
+ inner.register(&dispatcher)?;
+ }
+
+ // Straightforward casting would require us to add the bound `Data: 'l` but we don't actually need it
+ // as this module never accesses the dispatch data, so we use transmute to erase it
+ let inner: Rc<dyn IoLoopInner + 'l> =
+ unsafe { std::mem::transmute(inner as Rc<dyn IoLoopInner>) };
+
+ Ok(Async {
+ fd: Some(fd),
+ dispatcher,
+ inner,
+ was_nonblocking,
+ })
+ }
+
+ /// Mutably access the underlying IO object
+ pub fn get_mut(&mut self) -> &mut F {
+ self.fd.as_mut().unwrap()
+ }
+
+ /// A future that resolves once the object becomes ready for reading
+ pub fn readable<'s>(&'s mut self) -> Readable<'s, 'l, F> {
+ Readable { io: self }
+ }
+
+ /// A future that resolves once the object becomes ready for writing
+ pub fn writable<'s>(&'s mut self) -> Writable<'s, 'l, F> {
+ Writable { io: self }
+ }
+
+ /// Remove the async adapter and retrieve the underlying object
+ pub fn into_inner(mut self) -> F {
+ self.fd.take().unwrap()
+ }
+
+ fn readiness(&self) -> Readiness {
+ self.dispatcher.borrow_mut().readiness()
+ }
+
+ fn register_waker(&self, interest: Interest, waker: Waker) -> crate::Result<()> {
+ {
+ let mut disp = self.dispatcher.borrow_mut();
+ disp.interest = interest;
+ disp.waker = Some(waker);
+ }
+ self.inner.reregister(&self.dispatcher)
+ }
+}
+
+/// A future that resolves once the associated object becomes ready for reading
+#[derive(Debug)]
+pub struct Readable<'s, 'l, F: AsFd> {
+ io: &'s mut Async<'l, F>,
+}
+
+impl<'s, 'l, F: AsFd> std::future::Future for Readable<'s, 'l, F> {
+ type Output = ();
+ fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> TaskPoll<()> {
+ let io = &mut self.as_mut().io;
+ let readiness = io.readiness();
+ if readiness.readable || readiness.error {
+ TaskPoll::Ready(())
+ } else {
+ let _ = io.register_waker(Interest::READ, cx.waker().clone());
+ TaskPoll::Pending
+ }
+ }
+}
+
+/// A future that resolves once the associated object becomes ready for writing
+#[derive(Debug)]
+pub struct Writable<'s, 'l, F: AsFd> {
+ io: &'s mut Async<'l, F>,
+}
+
+impl<'s, 'l, F: AsFd> std::future::Future for Writable<'s, 'l, F> {
+ type Output = ();
+ fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> TaskPoll<()> {
+ let io = &mut self.as_mut().io;
+ let readiness = io.readiness();
+ if readiness.writable || readiness.error {
+ TaskPoll::Ready(())
+ } else {
+ let _ = io.register_waker(Interest::WRITE, cx.waker().clone());
+ TaskPoll::Pending
+ }
+ }
+}
+
+impl<'l, F: AsFd> Drop for Async<'l, F> {
+ fn drop(&mut self) {
+ self.inner.kill(&self.dispatcher);
+ // restore flags
+ let _ = set_nonblocking(
+ unsafe { BorrowedFd::borrow_raw(self.dispatcher.borrow().fd) },
+ self.was_nonblocking,
+ );
+ }
+}
+
+impl<'l, F: AsFd> Unpin for Async<'l, F> {}
+
+trait IoLoopInner {
+ unsafe fn register(&self, dispatcher: &RefCell<IoDispatcher>) -> crate::Result<()>;
+ fn reregister(&self, dispatcher: &RefCell<IoDispatcher>) -> crate::Result<()>;
+ fn kill(&self, dispatcher: &RefCell<IoDispatcher>);
+}
+
+impl<'l, Data> IoLoopInner for LoopInner<'l, Data> {
+ unsafe fn register(&self, dispatcher: &RefCell<IoDispatcher>) -> crate::Result<()> {
+ let disp = dispatcher.borrow();
+ self.poll.borrow_mut().register(
+ unsafe { BorrowedFd::borrow_raw(disp.fd) },
+ Interest::EMPTY,
+ Mode::OneShot,
+ disp.token.expect("No token for IO dispatcher"),
+ )
+ }
+
+ fn reregister(&self, dispatcher: &RefCell<IoDispatcher>) -> crate::Result<()> {
+ let disp = dispatcher.borrow();
+ self.poll.borrow_mut().reregister(
+ unsafe { BorrowedFd::borrow_raw(disp.fd) },
+ disp.interest,
+ Mode::OneShot,
+ disp.token.expect("No token for IO dispatcher"),
+ )
+ }
+
+ fn kill(&self, dispatcher: &RefCell<IoDispatcher>) {
+ let token = dispatcher
+ .borrow()
+ .token
+ .expect("No token for IO dispatcher");
+ if let Ok(slot) = self.sources.borrow_mut().get_mut(token.inner) {
+ slot.source = None;
+ }
+ }
+}
+
+struct IoDispatcher {
+ fd: RawFd, // FIXME: `BorrowedFd`? How to statically verify it doesn't outlive file?
+ token: Option<Token>,
+ waker: Option<Waker>,
+ is_registered: bool,
+ interest: Interest,
+ last_readiness: Readiness,
+}
+
+impl IoDispatcher {
+ fn readiness(&mut self) -> Readiness {
+ std::mem::replace(&mut self.last_readiness, Readiness::EMPTY)
+ }
+}
+
+impl<Data> EventDispatcher<Data> for RefCell<IoDispatcher> {
+ fn process_events(
+ &self,
+ readiness: Readiness,
+ _token: Token,
+ _data: &mut Data,
+ ) -> crate::Result<PostAction> {
+ let mut disp = self.borrow_mut();
+ disp.last_readiness = readiness;
+ if let Some(waker) = disp.waker.take() {
+ waker.wake();
+ }
+ Ok(PostAction::Continue)
+ }
+
+ fn register(
+ &self,
+ _: &mut Poll,
+ _: &mut AdditionalLifecycleEventsSet,
+ _: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ // registration is handled by IoLoopInner
+ unreachable!()
+ }
+
+ fn reregister(
+ &self,
+ _: &mut Poll,
+ _: &mut AdditionalLifecycleEventsSet,
+ _: &mut TokenFactory,
+ ) -> crate::Result<bool> {
+ // registration is handled by IoLoopInner
+ unreachable!()
+ }
+
+ fn unregister(
+ &self,
+ poll: &mut Poll,
+ _: &mut AdditionalLifecycleEventsSet,
+ _: RegistrationToken,
+ ) -> crate::Result<bool> {
+ let disp = self.borrow();
+ if disp.is_registered {
+ poll.unregister(unsafe { BorrowedFd::borrow_raw(disp.fd) })?;
+ }
+ Ok(true)
+ }
+
+ fn before_sleep(&self) -> crate::Result<Option<(Readiness, Token)>> {
+ Ok(None)
+ }
+ fn before_handle_events(&self, _: EventIterator<'_>) {}
+}
+
+/*
+ * Async IO trait implementations
+ */
+
+#[cfg(feature = "futures-io")]
+#[cfg_attr(docsrs, doc(cfg(feature = "futures-io")))]
+impl<'l, F: AsFd + std::io::Read> AsyncRead for Async<'l, F> {
+ fn poll_read(
+ mut self: Pin<&mut Self>,
+ cx: &mut Context<'_>,
+ buf: &mut [u8],
+ ) -> TaskPoll<std::io::Result<usize>> {
+ match (*self).get_mut().read(buf) {
+ Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => {}
+ res => return TaskPoll::Ready(res),
+ }
+ self.register_waker(Interest::READ, cx.waker().clone())?;
+ TaskPoll::Pending
+ }
+
+ fn poll_read_vectored(
+ mut self: Pin<&mut Self>,
+ cx: &mut Context<'_>,
+ bufs: &mut [IoSliceMut<'_>],
+ ) -> TaskPoll<std::io::Result<usize>> {
+ match (*self).get_mut().read_vectored(bufs) {
+ Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => {}
+ res => return TaskPoll::Ready(res),
+ }
+ self.register_waker(Interest::READ, cx.waker().clone())?;
+ TaskPoll::Pending
+ }
+}
+
+#[cfg(feature = "futures-io")]
+#[cfg_attr(docsrs, doc(cfg(feature = "futures-io")))]
+impl<'l, F: AsFd + std::io::Write> AsyncWrite for Async<'l, F> {
+ fn poll_write(
+ mut self: Pin<&mut Self>,
+ cx: &mut Context<'_>,
+ buf: &[u8],
+ ) -> TaskPoll<std::io::Result<usize>> {
+ match (*self).get_mut().write(buf) {
+ Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => {}
+ res => return TaskPoll::Ready(res),
+ }
+ self.register_waker(Interest::WRITE, cx.waker().clone())?;
+ TaskPoll::Pending
+ }
+
+ fn poll_write_vectored(
+ mut self: Pin<&mut Self>,
+ cx: &mut Context<'_>,
+ bufs: &[IoSlice<'_>],
+ ) -> TaskPoll<std::io::Result<usize>> {
+ match (*self).get_mut().write_vectored(bufs) {
+ Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => {}
+ res => return TaskPoll::Ready(res),
+ }
+ self.register_waker(Interest::WRITE, cx.waker().clone())?;
+ TaskPoll::Pending
+ }
+
+ fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> TaskPoll<std::io::Result<()>> {
+ match (*self).get_mut().flush() {
+ Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => {}
+ res => return TaskPoll::Ready(res),
+ }
+ self.register_waker(Interest::WRITE, cx.waker().clone())?;
+ TaskPoll::Pending
+ }
+
+ fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>) -> TaskPoll<std::io::Result<()>> {
+ self.poll_flush(cx)
+ }
+}
+
+// https://github.com/smol-rs/async-io/blob/6499077421495f2200d5b86918399f3a84bbe8e4/src/lib.rs#L2171-L2195
+/// Set the nonblocking status of an FD and return whether it was nonblocking before.
+#[allow(clippy::needless_return)]
+#[inline]
+fn set_nonblocking(fd: BorrowedFd<'_>, is_nonblocking: bool) -> std::io::Result<bool> {
+ #[cfg(windows)]
+ {
+ rustix::io::ioctl_fionbio(fd, is_nonblocking)?;
+
+ // Unfortunately it is impossible to tell if a socket was nonblocking on Windows.
+ // Just say it wasn't for now.
+ return Ok(false);
+ }
+
+ #[cfg(not(windows))]
+ {
+ let previous = rustix::fs::fcntl_getfl(fd)?;
+ let new = if is_nonblocking {
+ previous | rustix::fs::OFlags::NONBLOCK
+ } else {
+ previous & !(rustix::fs::OFlags::NONBLOCK)
+ };
+ if new != previous {
+ rustix::fs::fcntl_setfl(fd, new)?;
+ }
+
+ return Ok(previous.contains(rustix::fs::OFlags::NONBLOCK));
+ }
+}
+
+#[cfg(all(test, unix, feature = "executor", feature = "futures-io"))]
+mod tests {
+ use futures::io::{AsyncReadExt, AsyncWriteExt};
+
+ use crate::sources::futures::executor;
+
+ #[test]
+ fn read_write() {
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+ let (exec, sched) = executor().unwrap();
+ handle
+ .insert_source(exec, move |ret, &mut (), got| {
+ *got = ret;
+ })
+ .unwrap();
+
+ let (tx, rx) = std::os::unix::net::UnixStream::pair().unwrap();
+ let mut tx = handle.adapt_io(tx).unwrap();
+ let mut rx = handle.adapt_io(rx).unwrap();
+ let received = std::rc::Rc::new(std::cell::Cell::new(false));
+ let fut_received = received.clone();
+
+ sched
+ .schedule(async move {
+ let mut buf = [0; 12];
+ rx.read_exact(&mut buf).await.unwrap();
+ assert_eq!(&buf, b"Hello World!");
+ fut_received.set(true);
+ })
+ .unwrap();
+
+ // The receiving future alone cannot advance
+ event_loop
+ .dispatch(Some(std::time::Duration::from_millis(10)), &mut ())
+ .unwrap();
+ assert!(!received.get());
+
+ // schedule the writing future as well and wait until finish
+ sched
+ .schedule(async move {
+ tx.write_all(b"Hello World!").await.unwrap();
+ tx.flush().await.unwrap();
+ })
+ .unwrap();
+
+ while !received.get() {
+ event_loop.dispatch(None, &mut ()).unwrap();
+ }
+ }
+
+ #[test]
+ fn read_write_vectored() {
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+ let (exec, sched) = executor().unwrap();
+ handle
+ .insert_source(exec, move |ret, &mut (), got| {
+ *got = ret;
+ })
+ .unwrap();
+
+ let (tx, rx) = std::os::unix::net::UnixStream::pair().unwrap();
+ let mut tx = handle.adapt_io(tx).unwrap();
+ let mut rx = handle.adapt_io(rx).unwrap();
+ let received = std::rc::Rc::new(std::cell::Cell::new(false));
+ let fut_received = received.clone();
+
+ sched
+ .schedule(async move {
+ let mut buf = [0; 12];
+ let mut ioslices = buf
+ .chunks_mut(2)
+ .map(std::io::IoSliceMut::new)
+ .collect::<Vec<_>>();
+ let count = rx.read_vectored(&mut ioslices).await.unwrap();
+ assert_eq!(count, 12);
+ assert_eq!(&buf, b"Hello World!");
+ fut_received.set(true);
+ })
+ .unwrap();
+
+ // The receiving future alone cannot advance
+ event_loop
+ .dispatch(Some(std::time::Duration::from_millis(10)), &mut ())
+ .unwrap();
+ assert!(!received.get());
+
+ // schedule the writing future as well and wait until finish
+ sched
+ .schedule(async move {
+ let buf = b"Hello World!";
+ let ioslices = buf.chunks(2).map(std::io::IoSlice::new).collect::<Vec<_>>();
+ let count = tx.write_vectored(&ioslices).await.unwrap();
+ assert_eq!(count, 12);
+ tx.flush().await.unwrap();
+ })
+ .unwrap();
+
+ while !received.get() {
+ event_loop.dispatch(None, &mut ()).unwrap();
+ }
+ }
+
+ #[test]
+ fn readable() {
+ use std::io::Write;
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+ let (exec, sched) = executor().unwrap();
+ handle
+ .insert_source(exec, move |(), &mut (), got| {
+ *got = true;
+ })
+ .unwrap();
+
+ let (mut tx, rx) = std::os::unix::net::UnixStream::pair().unwrap();
+
+ let mut rx = handle.adapt_io(rx).unwrap();
+ sched
+ .schedule(async move {
+ rx.readable().await;
+ })
+ .unwrap();
+
+ let mut dispatched = false;
+
+ event_loop
+ .dispatch(Some(std::time::Duration::from_millis(100)), &mut dispatched)
+ .unwrap();
+ // The socket is not yet readable, so the readable() future has not completed
+ assert!(!dispatched);
+
+ tx.write_all(&[42]).unwrap();
+ tx.flush().unwrap();
+
+ // Now we should become readable
+ while !dispatched {
+ event_loop.dispatch(None, &mut dispatched).unwrap();
+ }
+ }
+
+ #[test]
+ fn writable() {
+ use std::io::{BufReader, BufWriter, Read, Write};
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+ let (exec, sched) = executor().unwrap();
+ handle
+ .insert_source(exec, move |(), &mut (), got| {
+ *got = true;
+ })
+ .unwrap();
+
+ let (mut tx, mut rx) = std::os::unix::net::UnixStream::pair().unwrap();
+ tx.set_nonblocking(true).unwrap();
+ rx.set_nonblocking(true).unwrap();
+
+ // First, fill the socket buffers
+ {
+ let mut writer = BufWriter::new(&mut tx);
+ let data = vec![42u8; 1024];
+ loop {
+ if writer.write(&data).is_err() {
+ break;
+ }
+ }
+ }
+
+        // Now, wait for it to be writable
+ let mut tx = handle.adapt_io(tx).unwrap();
+ sched
+ .schedule(async move {
+ tx.writable().await;
+ })
+ .unwrap();
+
+ let mut dispatched = false;
+
+ event_loop
+ .dispatch(Some(std::time::Duration::from_millis(100)), &mut dispatched)
+ .unwrap();
+        // The socket is not yet writable, so the writable() future has not completed
+ assert!(!dispatched);
+
+ // now read everything
+ {
+ let mut reader = BufReader::new(&mut rx);
+ let mut buffer = vec![0u8; 1024];
+ loop {
+ if reader.read(&mut buffer).is_err() {
+ break;
+ }
+ }
+ }
+
+ // Now we should become writable
+ while !dispatched {
+ event_loop.dispatch(None, &mut dispatched).unwrap();
+ }
+ }
+}
+
//! Calloop, a Callback-based Event Loop
+//!
+//! This crate provides an [`EventLoop`] type, which is a small abstraction
+//! over a polling system. The main difference between this crate
+//! and other traditional rust event loops is that it is based on callbacks:
+//! you can register several event sources, each being associated with a callback
+//! closure that will be invoked whenever the associated event source generates
+//! events.
+//!
+//! The main target use of this event loop is thus for apps that expect to spend
+//! most of their time waiting for events and wishes to do so in a cheap and convenient
+//! way. It is not meant for large scale high performance IO.
+//!
+//! ## How to use it
+//!
+//! Below is a quick usage example of calloop. For a more in-depth tutorial, see
+//! the [calloop book](https://smithay.github.io/calloop).
+//!
+//! For simple uses, you can just add event sources with callbacks to the event
+//! loop. For example, here's a runnable program that exits after five seconds:
+//!
+//! ```no_run
+//! use calloop::{timer::{Timer, TimeoutAction}, EventLoop, LoopSignal};
+//!
+//! fn main() {
+//! // Create the event loop. The loop is parameterised by the kind of shared
+//! // data you want the callbacks to use. In this case, we want to be able to
+//! // stop the loop when the timer fires, so we provide the loop with a
+//! // LoopSignal, which has the ability to stop the loop from within events. We
+//! // just annotate the type here; the actual data is provided later in the
+//! // run() call.
+//! let mut event_loop: EventLoop<LoopSignal> =
+//! EventLoop::try_new().expect("Failed to initialize the event loop!");
+//!
+//! // Retrieve a handle. It is used to insert new sources into the event loop
+//! // It can be cloned, allowing you to insert sources from within source
+//! // callbacks.
+//! let handle = event_loop.handle();
+//!
+//! // Create our event source, a timer, that will expire in 2 seconds
+//! let source = Timer::from_duration(std::time::Duration::from_secs(2));
+//!
+//! // Inserting an event source takes this general form. It can also be done
+//! // from within the callback of another event source.
+//! handle
+//! .insert_source(
+//! // a type which implements the EventSource trait
+//! source,
+//! // a callback that is invoked whenever this source generates an event
+//! |event, _metadata, shared_data| {
+//! // This callback is given 3 values:
+//! // - the event generated by the source (in our case, timer events are the Instant
+//! // representing the deadline for which it has fired)
+//! // - &mut access to some metadata, specific to the event source (in our case, a
+//! // timer handle)
+//! // - &mut access to the global shared data that was passed to EventLoop::run or
+//! // EventLoop::dispatch (in our case, a LoopSignal object to stop the loop)
+//! //
+//! // The return type is just () because nothing uses it. Some
+//! // sources will expect a Result of some kind instead.
+//! println!("Timeout for {:?} expired!", event);
+//! // notify the event loop to stop running using the signal in the shared data
+//! // (see below)
+//! shared_data.stop();
+//! // The timer event source requires us to return a TimeoutAction to
+//! // specify if the timer should be rescheduled. In our case we just drop it.
+//! TimeoutAction::Drop
+//! },
+//! )
+//! .expect("Failed to insert event source!");
+//!
+//! // Create the shared data for our loop.
+//! let mut shared_data = event_loop.get_signal();
+//!
+//! // Actually run the event loop. This will dispatch received events to their
+//! // callbacks, waiting at most 20ms for new events between each invocation of
+//! // the provided callback (pass None for the timeout argument if you want to
+//! // wait indefinitely between events).
+//! //
+//! // This is where we pass the *value* of the shared data, as a mutable
+//! // reference that will be forwarded to all your callbacks, allowing them to
+//! // share some state
+//! event_loop
+//! .run(
+//! std::time::Duration::from_millis(20),
+//! &mut shared_data,
+//! |_shared_data| {
+//! // Finally, this is where you can insert the processing you need
+//!                 // to do between each waiting event, e.g. drawing logic if
+//! // you're doing a GUI app.
+//! },
+//! )
+//! .expect("Error during event loop!");
+//! }
+//! ```
+//!
+//! ## Event source types
+//!
+//! The event loop is backed by an OS provided polling selector (epoll on Linux).
+//!
+//! This crate also provides some adapters for common event sources such as:
+//!
+//! - [MPSC channels](channel)
+//! - [Timers](timer)
+//! - [unix signals](signals) on Linux
+//!
+//! As well as generic objects backed by file descriptors.
+//!
+//! It is also possible to insert "idle" callbacks. These callbacks represent computations that
+//! need to be done at some point, but are not as urgent as processing the events. These callbacks
+//! are stored and then executed during [`EventLoop::dispatch`](EventLoop#method.dispatch), once all
+//! events from the sources have been processed.
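+//!
+//! A hedged sketch of an idle callback (reusing the `handle` and shared data from the
+//! example above):
+//!
+//! ```ignore
+//! // Runs during dispatch, once the pending events have been processed.
+//! handle.insert_idle(|_shared_data| {
+//!     println!("all pending events have been handled");
+//! });
+//! ```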
+//!
+//! ## Async/Await compatibility
+//!
+//! `calloop` can be used with futures, both as an executor and for monitoring Async IO.
+//!
+//! Activating the `executor` cargo feature will add the [`futures`] module, which provides
+//! a future executor that can be inserted into an [`EventLoop`] as yet another [`EventSource`].
+//!
+//! IO objects can be made Async-aware via the [`LoopHandle::adapt_io`](LoopHandle#method.adapt_io)
+//! method. Waking up the futures using these objects is handled by the associated [`EventLoop`]
+//! directly.
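+//!
+//! A hedged sketch of the executor (requires the `executor` feature; it mirrors the tests
+//! in this crate's io module):
+//!
+//! ```ignore
+//! let (exec, sched) = calloop::futures::executor::<u32>()?;
+//! // The callback receives the output of each completed future.
+//! handle.insert_source(exec, |value, _metadata, _shared_data| {
+//!     println!("future finished with {}", value);
+//! })?;
+//! // Send a future to be driven by the event loop.
+//! sched.schedule(async { 6 * 7 })?;
+//! ```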
+//!
+//! ## Custom event sources
+//!
+//! You can create custom event sources that will be inserted in the event loop by
+//! implementing the [`EventSource`] trait. This can be done either directly from the file
+//! descriptors of your source of interest, or by wrapping another event source and further
+//! processing its events. An [`EventSource`] can register more than one file descriptor and
+//! aggregate them.
+//!
+//! ## Platforms support
+//!
+//! Currently, calloop is tested on Linux, FreeBSD and macOS.
+//!
+//! The following platforms are also enabled at compile time but not tested: Android, NetBSD,
+//! OpenBSD, DragonFlyBSD.
+//!
+//! Those platforms *should* work based on the fact that they have the same polling mechanism as
+//! tested platforms, but some subtle bugs might still occur.
+
+#![warn(missing_docs, missing_debug_implementations)]
+#![allow(clippy::needless_doctest_main)]
+#![cfg_attr(docsrs, feature(doc_cfg))]
+#![cfg_attr(feature = "nightly_coverage", feature(coverage_attribute))]
+
+mod sys;
+
+pub use sys::{Interest, Mode, Poll, Readiness, Token, TokenFactory};
+
+pub use self::loop_logic::{EventLoop, LoopHandle, LoopSignal, RegistrationToken};
+pub use self::sources::*;
+
+pub mod error;
+pub use error::{Error, InsertError, Result};
+
+pub mod io;
+mod list;
+mod loop_logic;
+mod macros;
+mod sources;
+mod token;
+
use std::rc::Rc;
+
+use crate::sources::EventDispatcher;
+use crate::token::TokenInner;
+
+pub(crate) struct SourceEntry<'l, Data> {
+ pub(crate) token: TokenInner,
+ pub(crate) source: Option<Rc<dyn EventDispatcher<Data> + 'l>>,
+}
+
+pub(crate) struct SourceList<'l, Data> {
+ sources: Vec<SourceEntry<'l, Data>>,
+}
+
+impl<'l, Data> SourceList<'l, Data> {
+ pub(crate) fn new() -> Self {
+ SourceList {
+ sources: Vec::new(),
+ }
+ }
+
+ pub(crate) fn vacant_entry(&mut self) -> &mut SourceEntry<'l, Data> {
+ let opt_id = self.sources.iter().position(|slot| slot.source.is_none());
+ match opt_id {
+ Some(id) => {
+ // we are reusing a slot
+ let slot = &mut self.sources[id];
+ // increment the slot version
+ slot.token = slot.token.increment_version();
+ slot
+ }
+ None => {
+ // we are inserting a new slot
+ let next_id = self.sources.len();
+ self.sources.push(SourceEntry {
+ token: TokenInner::new(self.sources.len())
+ .expect("Trying to insert too many sources in an event loop."),
+ source: None,
+ });
+ &mut self.sources[next_id]
+ }
+ }
+ }
+
+ pub(crate) fn get(&self, token: TokenInner) -> crate::Result<&SourceEntry<'l, Data>> {
+ let entry = self
+ .sources
+ .get(token.get_id())
+ .ok_or(crate::Error::InvalidToken)?;
+ if entry.token.same_source_as(token) {
+ Ok(entry)
+ } else {
+ Err(crate::Error::InvalidToken)
+ }
+ }
+
+ pub(crate) fn get_mut(
+ &mut self,
+ token: TokenInner,
+ ) -> crate::Result<&mut SourceEntry<'l, Data>> {
+ let entry = self
+ .sources
+ .get_mut(token.get_id())
+ .ok_or(crate::Error::InvalidToken)?;
+ if entry.token.same_source_as(token) {
+ Ok(entry)
+ } else {
+ Err(crate::Error::InvalidToken)
+ }
+ }
+}
+
+use std::cell::{Cell, RefCell};
+use std::fmt::Debug;
+use std::rc::Rc;
+use std::sync::atomic::{AtomicBool, Ordering};
+use std::sync::Arc;
+use std::time::{Duration, Instant};
+use std::{io, slice};
+
+#[cfg(feature = "block_on")]
+use std::future::Future;
+
+#[cfg(unix)]
+use std::os::unix::io::{AsFd, AsRawFd, BorrowedFd, RawFd};
+#[cfg(windows)]
+use std::os::windows::io::{AsHandle, AsRawHandle, AsSocket as AsFd, BorrowedHandle, RawHandle};
+
+use log::trace;
+use polling::Poller;
+
+use crate::list::{SourceEntry, SourceList};
+use crate::sources::{Dispatcher, EventSource, Idle, IdleDispatcher};
+use crate::sys::{Notifier, PollEvent};
+use crate::token::TokenInner;
+use crate::{
+ AdditionalLifecycleEventsSet, InsertError, Poll, PostAction, Readiness, Token, TokenFactory,
+};
+
+type IdleCallback<'i, Data> = Rc<RefCell<dyn IdleDispatcher<Data> + 'i>>;
+
+/// A token representing a registration in the [`EventLoop`].
+///
+/// This token is given to you by the [`EventLoop`] when an [`EventSource`] is inserted or
+/// a [`Dispatcher`] is registered. You can use it to [disable](LoopHandle#method.disable),
+/// [enable](LoopHandle#method.enable), [update](LoopHandle#method.update),
+/// [remove](LoopHandle#method.remove) or [kill](LoopHandle#method.kill) it.
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub struct RegistrationToken {
+ inner: TokenInner,
+}
+
+impl RegistrationToken {
+ /// Create the RegistrationToken corresponding to the given raw key.
+ /// This is needed because some methods use `RegistrationToken`s as
+ /// raw usizes within this crate.
+ pub(crate) fn new(inner: TokenInner) -> Self {
+ Self { inner }
+ }
+}
+
+pub(crate) struct LoopInner<'l, Data> {
+ pub(crate) poll: RefCell<Poll>,
+ // The `Option` is used to keep slots of the slab occupied, to prevent id reuse
+ // while in-flight events might still refer to a recently destroyed event source.
+ pub(crate) sources: RefCell<SourceList<'l, Data>>,
+ pub(crate) sources_with_additional_lifecycle_events: RefCell<AdditionalLifecycleEventsSet>,
+ idles: RefCell<Vec<IdleCallback<'l, Data>>>,
+ pending_action: Cell<PostAction>,
+}
+
+/// A handle to an event loop
+///
+/// This handle allows you to insert new sources and idles into this event loop.
+/// It can be cloned, and it is possible to insert new sources from within a source
+/// callback.
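+///
+/// For instance, a cloned handle can be moved into a source callback and used to
+/// remove that source from within its own callback. This is a minimal sketch,
+/// condensed from the `kill_source` test at the bottom of this module:
+///
+/// ```no_run
+/// use calloop::{ping::make_ping, EventLoop, RegistrationToken};
+///
+/// let mut event_loop = EventLoop::<Option<RegistrationToken>>::try_new().unwrap();
+/// let handle = event_loop.handle();
+/// let (_ping, ping_source) = make_ping().unwrap();
+/// let token = event_loop
+///     .handle()
+///     .insert_source(ping_source, move |(), &mut (), stored| {
+///         // The cloned handle is usable from inside the callback.
+///         if let Some(token) = stored.take() {
+///             handle.remove(token);
+///         }
+///     })
+///     .unwrap();
+/// let mut stored = Some(token);
+/// event_loop.dispatch(std::time::Duration::ZERO, &mut stored).unwrap();
+/// ```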
+pub struct LoopHandle<'l, Data> {
+ inner: Rc<LoopInner<'l, Data>>,
+}
+
+impl<'l, Data> std::fmt::Debug for LoopHandle<'l, Data> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str("LoopHandle { ... }")
+ }
+}
+
+impl<'l, Data> Clone for LoopHandle<'l, Data> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn clone(&self) -> Self {
+ LoopHandle {
+ inner: self.inner.clone(),
+ }
+ }
+}
+
+impl<'l, Data> LoopHandle<'l, Data> {
+ /// Inserts a new event source in the loop.
+ ///
+ /// The provided callback will be called during the dispatching cycles whenever the
+ /// associated source generates events; see `EventLoop::dispatch(..)` for details.
+ ///
+ /// This function takes ownership of the event source. Use `register_dispatcher`
+ /// if you need access to the event source after this call.
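+ ///
+ /// A minimal usage sketch, mirroring the tests at the bottom of this module
+ /// (it uses the `calloop::ping` source purely as an example):
+ ///
+ /// ```no_run
+ /// use calloop::{ping::make_ping, EventLoop};
+ ///
+ /// let mut event_loop = EventLoop::<bool>::try_new().unwrap();
+ /// let (ping, ping_source) = make_ping().unwrap();
+ /// let _token = event_loop
+ ///     .handle()
+ ///     // The callback receives the event, its metadata and the shared data.
+ ///     .insert_source(ping_source, |(), &mut (), dispatched| {
+ ///         *dispatched = true;
+ ///     })
+ ///     .unwrap();
+ ///
+ /// // Make the source generate an event on the next dispatch.
+ /// ping.ping();
+ /// let mut dispatched = false;
+ /// event_loop.dispatch(std::time::Duration::ZERO, &mut dispatched).unwrap();
+ /// ```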
+ pub fn insert_source<S, F>(
+ &self,
+ source: S,
+ callback: F,
+ ) -> Result<RegistrationToken, InsertError<S>>
+ where
+ S: EventSource + 'l,
+ F: FnMut(S::Event, &mut S::Metadata, &mut Data) -> S::Ret + 'l,
+ {
+ let dispatcher = Dispatcher::new(source, callback);
+ self.register_dispatcher(dispatcher.clone())
+ .map_err(|error| InsertError {
+ error,
+ inserted: dispatcher.into_source_inner(),
+ })
+ }
+
+ /// Registers a `Dispatcher` in the loop.
+ ///
+ /// Use this function if you need access to the event source after its insertion in the loop.
+ ///
+ /// See also `insert_source`.
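+ ///
+ /// A sketch of the pattern used by the `change_interests` test in this module;
+ /// `fd` and `handle` are assumed to exist from earlier setup:
+ ///
+ /// ```ignore
+ /// let source = Generic::new(fd, Interest::READ, Mode::Level);
+ /// let dispatcher = Dispatcher::new(source, |_, _, _| Ok(PostAction::Continue));
+ /// // Keep the dispatcher around: `dispatcher.as_source_mut()` gives access
+ /// // to the source after it has been inserted into the loop.
+ /// let token = handle.register_dispatcher(dispatcher.clone())?;
+ /// ```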
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))] // Contains a branch we can't hit w/o OOM
+ pub fn register_dispatcher<S>(
+ &self,
+ dispatcher: Dispatcher<'l, S, Data>,
+ ) -> crate::Result<RegistrationToken>
+ where
+ S: EventSource + 'l,
+ {
+ let mut sources = self.inner.sources.borrow_mut();
+ let mut poll = self.inner.poll.borrow_mut();
+
+ // Find an empty slot if any
+ let slot = sources.vacant_entry();
+
+ slot.source = Some(dispatcher.clone_as_event_dispatcher());
+ trace!("[calloop] Inserting new source #{}", slot.token.get_id());
+ let ret = slot.source.as_ref().unwrap().register(
+ &mut poll,
+ &mut self
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ &mut TokenFactory::new(slot.token),
+ );
+
+ if let Err(error) = ret {
+ slot.source = None;
+ return Err(error);
+ }
+
+ Ok(RegistrationToken { inner: slot.token })
+ }
+
+ /// Inserts an idle callback.
+ ///
+ /// This callback will be called during a dispatching cycle when the event loop has
+ /// finished processing all pending events from the sources and becomes idle.
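+ ///
+ /// A minimal sketch (essentially the `dispatch_idle` test below):
+ ///
+ /// ```no_run
+ /// use calloop::EventLoop;
+ ///
+ /// let event_loop = EventLoop::<bool>::try_new().unwrap();
+ /// event_loop.handle().insert_idle(|dispatched| {
+ ///     // Runs once the current dispatching cycle becomes idle.
+ ///     *dispatched = true;
+ /// });
+ /// ```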
+ pub fn insert_idle<'i, F: FnOnce(&mut Data) + 'l + 'i>(&self, callback: F) -> Idle<'i> {
+ let mut opt_cb = Some(callback);
+ let callback = Rc::new(RefCell::new(Some(move |data: &mut Data| {
+ if let Some(cb) = opt_cb.take() {
+ cb(data);
+ }
+ })));
+ self.inner.idles.borrow_mut().push(callback.clone());
+ Idle { callback }
+ }
+
+ /// Enables this previously disabled event source.
+ ///
+ /// The source will start generating events again.
+ ///
+ /// **Note:** this cannot be done from within the source callback.
+ pub fn enable(&self, token: &RegistrationToken) -> crate::Result<()> {
+ if let &SourceEntry {
+ token: entry_token,
+ source: Some(ref source),
+ } = self.inner.sources.borrow().get(token.inner)?
+ {
+ trace!("[calloop] Registering source #{}", entry_token.get_id());
+ source.register(
+ &mut self.inner.poll.borrow_mut(),
+ &mut self
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ &mut TokenFactory::new(entry_token),
+ )
+ } else {
+ Err(crate::Error::InvalidToken)
+ }
+ }
+
+ /// Makes this source update its registration.
+ ///
+ /// Call this if, after accessing the source, you changed its parameters in a way
+ /// that requires updating its registration.
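+ ///
+ /// A sketch of the intended pattern, condensed from the `change_interests`
+ /// test below (`dispatcher` and `token` come from an earlier
+ /// `register_dispatcher()` call):
+ ///
+ /// ```ignore
+ /// // Change a parameter of the source...
+ /// dispatcher.as_source_mut().interest = Interest::WRITE;
+ /// // ...then ask the loop to refresh its registration.
+ /// handle.update(&token)?;
+ /// ```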
+ pub fn update(&self, token: &RegistrationToken) -> crate::Result<()> {
+ if let &SourceEntry {
+ token: entry_token,
+ source: Some(ref source),
+ } = self.inner.sources.borrow().get(token.inner)?
+ {
+ trace!(
+ "[calloop] Updating registration of source #{}",
+ entry_token.get_id()
+ );
+ if !source.reregister(
+ &mut self.inner.poll.borrow_mut(),
+ &mut self
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ &mut TokenFactory::new(entry_token),
+ )? {
+ trace!("[calloop] Cannot do it now, storing for later.");
+ // we are in a callback, store for later processing
+ self.inner.pending_action.set(PostAction::Reregister);
+ }
+ Ok(())
+ } else {
+ Err(crate::Error::InvalidToken)
+ }
+ }
+
+ /// Disables this event source.
+ ///
+ /// The source remains in the event loop, but it will no longer generate events.
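+ ///
+ /// A sketch, condensed from the `disarm_rearm` test below (`token` is the
+ /// [`RegistrationToken`] returned at insertion time):
+ ///
+ /// ```ignore
+ /// handle.disable(&token)?; // the source stays in the loop but stops generating events
+ /// handle.enable(&token)?;  // it starts generating events again
+ /// ```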
+ pub fn disable(&self, token: &RegistrationToken) -> crate::Result<()> {
+ if let &SourceEntry {
+ token: entry_token,
+ source: Some(ref source),
+ } = self.inner.sources.borrow().get(token.inner)?
+ {
+ if !token.inner.same_source_as(entry_token) {
+ // The token provided by the user is no longer valid
+ return Err(crate::Error::InvalidToken);
+ }
+ trace!("[calloop] Unregistering source #{}", entry_token.get_id());
+ if !source.unregister(
+ &mut self.inner.poll.borrow_mut(),
+ &mut self
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ *token,
+ )? {
+ trace!("[calloop] Cannot do it now, storing for later.");
+ // we are in a callback, store for later processing
+ self.inner.pending_action.set(PostAction::Disable);
+ }
+ Ok(())
+ } else {
+ Err(crate::Error::InvalidToken)
+ }
+ }
+
+ /// Removes this source from the event loop.
+ pub fn remove(&self, token: RegistrationToken) {
+ if let Ok(&mut SourceEntry {
+ token: entry_token,
+ ref mut source,
+ }) = self.inner.sources.borrow_mut().get_mut(token.inner)
+ {
+ if let Some(source) = source.take() {
+ trace!("[calloop] Removing source #{}", entry_token.get_id());
+ if let Err(e) = source.unregister(
+ &mut self.inner.poll.borrow_mut(),
+ &mut self
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ token,
+ ) {
+ log::warn!(
+ "[calloop] Failed to unregister source from the polling system: {:?}",
+ e
+ );
+ }
+ }
+ }
+ }
+
+ /// Wrap an IO object into an async adapter
+ ///
+ /// This adapter turns the IO object into an async-aware one that can be used in futures.
+ /// The readiness of these futures will be driven by the event loop.
+ ///
+ /// The produced futures can be polled in any executor, notably the one
+ /// provided by calloop.
+ pub fn adapt_io<F: AsFd>(&self, fd: F) -> crate::Result<crate::io::Async<'l, F>> {
+ crate::io::Async::new(self.inner.clone(), fd)
+ }
+}
+
+/// An event loop
+///
+/// This loop can host several event sources that can be dynamically added or removed.
+pub struct EventLoop<'l, Data> {
+ #[allow(dead_code)]
+ poller: Arc<Poller>,
+ handle: LoopHandle<'l, Data>,
+ signals: Arc<Signals>,
+ // A caching vector for synthetic poll events
+ synthetic_events: Vec<PollEvent>,
+}
+
+impl<'l, Data> std::fmt::Debug for EventLoop<'l, Data> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str("EventLoop { ... }")
+ }
+}
+
+/// Signals related to the event loop.
+struct Signals {
+ /// Signal to stop the event loop.
+ stop: AtomicBool,
+
+ /// Signal that the future is ready.
+ #[cfg(feature = "block_on")]
+ future_ready: AtomicBool,
+}
+
+impl<'l, Data> EventLoop<'l, Data> {
+ /// Create a new event loop
+ ///
+ /// Fails if the initialization of the polling system fails.
+ pub fn try_new() -> crate::Result<Self> {
+ let poll = Poll::new()?;
+ let poller = poll.poller.clone();
+ let handle = LoopHandle {
+ inner: Rc::new(LoopInner {
+ poll: RefCell::new(poll),
+ sources: RefCell::new(SourceList::new()),
+ idles: RefCell::new(Vec::new()),
+ pending_action: Cell::new(PostAction::Continue),
+ sources_with_additional_lifecycle_events: Default::default(),
+ }),
+ };
+
+ Ok(EventLoop {
+ handle,
+ signals: Arc::new(Signals {
+ stop: AtomicBool::new(false),
+ #[cfg(feature = "block_on")]
+ future_ready: AtomicBool::new(false),
+ }),
+ poller,
+ synthetic_events: vec![],
+ })
+ }
+
+ /// Retrieve a loop handle
+ pub fn handle(&self) -> LoopHandle<'l, Data> {
+ self.handle.clone()
+ }
+
+ fn dispatch_events(
+ &mut self,
+ mut timeout: Option<Duration>,
+ data: &mut Data,
+ ) -> crate::Result<()> {
+ let now = Instant::now();
+ {
+ let mut extra_lifecycle_sources = self
+ .handle
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut();
+ let sources = &self.handle.inner.sources.borrow();
+ for source in &mut *extra_lifecycle_sources.values {
+ if let Ok(SourceEntry {
+ source: Some(disp), ..
+ }) = sources.get(source.inner)
+ {
+ if let Some((readiness, token)) = disp.before_sleep()? {
+ // Wake up instantly after polling if we received an event
+ timeout = Some(Duration::ZERO);
+ self.synthetic_events.push(PollEvent { readiness, token });
+ }
+ } else {
+ unreachable!()
+ }
+ }
+ }
+ let events = {
+ let poll = self.handle.inner.poll.borrow();
+ loop {
+ let result = poll.poll(timeout);
+
+ match result {
+ Ok(events) => break events,
+ Err(crate::Error::IoError(err)) if err.kind() == io::ErrorKind::Interrupted => {
+ // Interrupted by a signal. Update timeout and retry.
+ if let Some(to) = timeout {
+ let elapsed = now.elapsed();
+ if elapsed >= to {
+ return Ok(());
+ } else {
+ timeout = Some(to - elapsed);
+ }
+ }
+ }
+ Err(err) => return Err(err),
+ };
+ }
+ };
+ {
+ let mut extra_lifecycle_sources = self
+ .handle
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut();
+ if !extra_lifecycle_sources.values.is_empty() {
+ for source in &mut *extra_lifecycle_sources.values {
+ if let Ok(SourceEntry {
+ source: Some(disp), ..
+ }) = self.handle.inner.sources.borrow().get(source.inner)
+ {
+ let iter = EventIterator {
+ inner: events.iter(),
+ registration_token: *source,
+ };
+ disp.before_handle_events(iter);
+ } else {
+ unreachable!()
+ }
+ }
+ }
+ }
+
+ for event in self.synthetic_events.drain(..).chain(events) {
+ // Get the registration token associated with the event.
+ let reg_token = event.token.inner.forget_sub_id();
+
+ let opt_disp = self
+ .handle
+ .inner
+ .sources
+ .borrow()
+ .get(reg_token)
+ .ok()
+ .and_then(|entry| entry.source.clone());
+
+ if let Some(disp) = opt_disp {
+ trace!(
+ "[calloop] Dispatching events for source #{}",
+ reg_token.get_id()
+ );
+ let mut ret = disp.process_events(event.readiness, event.token, data)?;
+
+ // if the returned PostAction is Continue, it may be overwritten by a user-specified pending action
+ let pending_action = self
+ .handle
+ .inner
+ .pending_action
+ .replace(PostAction::Continue);
+ if let PostAction::Continue = ret {
+ ret = pending_action;
+ }
+
+ match ret {
+ PostAction::Reregister => {
+ trace!(
+ "[calloop] Postaction reregister for source #{}",
+ reg_token.get_id()
+ );
+ disp.reregister(
+ &mut self.handle.inner.poll.borrow_mut(),
+ &mut self
+ .handle
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ &mut TokenFactory::new(reg_token),
+ )?;
+ }
+ PostAction::Disable => {
+ trace!(
+ "[calloop] Postaction unregister for source #{}",
+ reg_token.get_id()
+ );
+ disp.unregister(
+ &mut self.handle.inner.poll.borrow_mut(),
+ &mut self
+ .handle
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ RegistrationToken::new(reg_token),
+ )?;
+ }
+ PostAction::Remove => {
+ trace!(
+ "[calloop] Postaction remove for source #{}",
+ reg_token.get_id()
+ );
+ if let Ok(entry) = self.handle.inner.sources.borrow_mut().get_mut(reg_token)
+ {
+ entry.source = None;
+ }
+ }
+ PostAction::Continue => {}
+ }
+
+ if self
+ .handle
+ .inner
+ .sources
+ .borrow()
+ .get(reg_token)
+ .ok()
+ .map(|entry| entry.source.is_none())
+ .unwrap_or(true)
+ {
+ // the source has been removed from within its callback, unregister it
+ let mut poll = self.handle.inner.poll.borrow_mut();
+ if let Err(e) = disp.unregister(
+ &mut poll,
+ &mut self
+ .handle
+ .inner
+ .sources_with_additional_lifecycle_events
+ .borrow_mut(),
+ RegistrationToken::new(reg_token),
+ ) {
+ log::warn!(
+ "[calloop] Failed to unregister source from the polling system: {:?}",
+ e
+ );
+ }
+ }
+ } else {
+ log::warn!(
+ "[calloop] Received an event for non-existence source: {:?}",
+ reg_token
+ );
+ }
+ }
+
+ Ok(())
+ }
+
+ fn dispatch_idles(&mut self, data: &mut Data) {
+ let idles = std::mem::take(&mut *self.handle.inner.idles.borrow_mut());
+ for idle in idles {
+ idle.borrow_mut().dispatch(data);
+ }
+ }
+
+ /// Dispatch pending events to their callbacks
+ ///
+ /// If some sources have events available, their callbacks will be immediately called.
+ /// Otherwise this will wait until an event is received or the provided `timeout`
+ /// is reached. If `timeout` is `None`, it will wait without a duration limit.
+ ///
+ /// Once pending events have been processed or the timeout is reached, all pending
+ /// idle callbacks will be fired before this method returns.
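+ ///
+ /// A minimal sketch of a manual dispatch loop (`event_loop` and `shared_data`
+ /// are assumed to come from earlier setup):
+ ///
+ /// ```ignore
+ /// loop {
+ ///     // Wait at most 20 ms for events, then run any pending idle callbacks.
+ ///     event_loop.dispatch(std::time::Duration::from_millis(20), &mut shared_data)?;
+ /// }
+ /// ```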
+ pub fn dispatch<D: Into<Option<Duration>>>(
+ &mut self,
+ timeout: D,
+ data: &mut Data,
+ ) -> crate::Result<()> {
+ self.dispatch_events(timeout.into(), data)?;
+ self.dispatch_idles(data);
+
+ Ok(())
+ }
+
+ /// Get a signal to stop this event loop from running
+ ///
+ /// To be used in conjunction with the `run()` method.
+ pub fn get_signal(&self) -> LoopSignal {
+ LoopSignal {
+ signal: self.signals.clone(),
+ notifier: self.handle.inner.poll.borrow().notifier(),
+ }
+ }
+
+ /// Run this event loop
+ ///
+ /// This will repeatedly try to dispatch events (see the `dispatch()` method) on
+ /// this event loop, waiting at most `timeout` every time.
+ ///
+ /// Between each dispatch wait, your provided callback will be called.
+ ///
+ /// You can use the `get_signal()` method to retrieve a way to stop or wake up
+ /// the event loop from anywhere.
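+ ///
+ /// A minimal sketch combining `run()` with a [`LoopSignal`], mirroring the
+ /// `wakeup_stop` test below:
+ ///
+ /// ```no_run
+ /// use std::time::Duration;
+ /// use calloop::EventLoop;
+ ///
+ /// let mut event_loop = EventLoop::<()>::try_new().unwrap();
+ /// let signal = event_loop.get_signal();
+ ///
+ /// std::thread::spawn(move || {
+ ///     std::thread::sleep(Duration::from_secs(1));
+ ///     // Ask the loop to stop, then wake it up so it notices immediately.
+ ///     signal.stop();
+ ///     signal.wakeup();
+ /// });
+ ///
+ /// event_loop.run(None, &mut (), |_| {}).unwrap();
+ /// ```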
+ pub fn run<F, D: Into<Option<Duration>>>(
+ &mut self,
+ timeout: D,
+ data: &mut Data,
+ mut cb: F,
+ ) -> crate::Result<()>
+ where
+ F: FnMut(&mut Data),
+ {
+ let timeout = timeout.into();
+ self.signals.stop.store(false, Ordering::Release);
+ while !self.signals.stop.load(Ordering::Acquire) {
+ self.dispatch(timeout, data)?;
+ cb(data);
+ }
+ Ok(())
+ }
+
+ /// Block a future on this event loop.
+ ///
+ /// This will run the provided future on this event loop, blocking until it is
+ /// resolved.
+ ///
+ /// If [`LoopSignal::stop()`] is called before the future is resolved, this function returns
+ /// `None`.
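+ ///
+ /// A minimal sketch with a future that is immediately ready (the tests below
+ /// drive a real timer instead); requires the `block_on` feature:
+ ///
+ /// ```no_run
+ /// use calloop::EventLoop;
+ ///
+ /// let mut event_loop = EventLoop::<()>::try_new().unwrap();
+ /// let result = event_loop.block_on(async { 42 }, &mut (), |_| {}).unwrap();
+ /// assert_eq!(result, Some(42));
+ /// ```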
+ #[cfg(feature = "block_on")]
+ pub fn block_on<R>(
+ &mut self,
+ future: impl Future<Output = R>,
+ data: &mut Data,
+ mut cb: impl FnMut(&mut Data),
+ ) -> crate::Result<Option<R>> {
+ use std::task::{Context, Poll, Wake, Waker};
+
+ /// A waker that will wake up the event loop when it is ready to make progress.
+ struct EventLoopWaker(LoopSignal);
+
+ impl Wake for EventLoopWaker {
+ fn wake(self: Arc<Self>) {
+ // Set the waker.
+ self.0.signal.future_ready.store(true, Ordering::Release);
+ self.0.notifier.notify().ok();
+ }
+
+ fn wake_by_ref(self: &Arc<Self>) {
+ // Set the waker.
+ self.0.signal.future_ready.store(true, Ordering::Release);
+ self.0.notifier.notify().ok();
+ }
+ }
+
+ // Pin the future to the stack.
+ pin_utils::pin_mut!(future);
+
+ // Create a waker that will wake up the event loop when it is ready to make progress.
+ let waker = {
+ let handle = EventLoopWaker(self.get_signal());
+
+ Waker::from(Arc::new(handle))
+ };
+ let mut context = Context::from_waker(&waker);
+
+ // Begin running the loop.
+ let mut output = None;
+
+ self.signals.stop.store(false, Ordering::Release);
+ self.signals.future_ready.store(true, Ordering::Release);
+
+ while !self.signals.stop.load(Ordering::Acquire) {
+ // If the future is ready to be polled, poll it.
+ if self.signals.future_ready.swap(false, Ordering::AcqRel) {
+ // Poll the future and break the loop if it's ready.
+ if let Poll::Ready(result) = future.as_mut().poll(&mut context) {
+ output = Some(result);
+ break;
+ }
+ }
+
+ // Otherwise, block on the event loop.
+ self.dispatch_events(None, data)?;
+ self.dispatch_idles(data);
+ cb(data);
+ }
+
+ Ok(output)
+ }
+}
+
+#[cfg(unix)]
+impl<'l, Data> AsRawFd for EventLoop<'l, Data> {
+ /// Get the underlying raw-fd of the poller.
+ ///
+ /// This can be used to create a [`Generic`] source out of the current loop
+ /// and insert it into some other [`EventLoop`]. It's recommended to clone the
+ /// fd before doing so.
+ ///
+ /// [`Generic`]: crate::generic::Generic
+ fn as_raw_fd(&self) -> RawFd {
+ self.poller.as_raw_fd()
+ }
+}
+
+#[cfg(unix)]
+impl<'l, Data> AsFd for EventLoop<'l, Data> {
+ /// Get the underlying fd of the poller.
+ ///
+ /// This can be used to create a [`Generic`] source out of the current loop
+ /// and insert it into some other [`EventLoop`].
+ ///
+ /// [`Generic`]: crate::generic::Generic
+ fn as_fd(&self) -> BorrowedFd<'_> {
+ self.poller.as_fd()
+ }
+}
+
+#[cfg(windows)]
+impl<Data> AsRawHandle for EventLoop<'_, Data> {
+ fn as_raw_handle(&self) -> RawHandle {
+ self.poller.as_raw_handle()
+ }
+}
+
+#[cfg(windows)]
+impl<Data> AsHandle for EventLoop<'_, Data> {
+ fn as_handle(&self) -> BorrowedHandle<'_> {
+ self.poller.as_handle()
+ }
+}
+
+#[derive(Clone, Debug)]
+/// The EventIterator is an `Iterator` over the events relevant to a particular source.
+/// This type is used in the [`EventSource::before_handle_events`] methods for
+/// two main reasons:
+/// - Firstly, to avoid dynamic dispatch overhead
+/// - Secondly, to allow this type to be `Clone`, which is not
+/// possible with dynamic dispatch
+pub struct EventIterator<'a> {
+ inner: slice::Iter<'a, PollEvent>,
+ registration_token: RegistrationToken,
+}
+
+impl<'a> Iterator for EventIterator<'a> {
+ type Item = (Readiness, Token);
+
+ fn next(&mut self) -> Option<Self::Item> {
+ for next in self.inner.by_ref() {
+ if next
+ .token
+ .inner
+ .same_source_as(self.registration_token.inner)
+ {
+ return Some((next.readiness, next.token));
+ }
+ }
+ None
+ }
+}
+
+/// A signal that can be shared between threads to stop or wake up a running
+/// event loop
+#[derive(Clone)]
+pub struct LoopSignal {
+ signal: Arc<Signals>,
+ notifier: Notifier,
+}
+
+impl std::fmt::Debug for LoopSignal {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str("LoopSignal { ... }")
+ }
+}
+
+impl LoopSignal {
+ /// Stop the event loop
+ ///
+ /// Once this method is called, the next time the event loop has finished
+ /// waiting for events, it will return rather than starting to wait again.
+ ///
+ /// This is only useful if you are using the `EventLoop::run()` method.
+ pub fn stop(&self) {
+ self.signal.stop.store(true, Ordering::Release);
+ }
+
+ /// Wake up the event loop
+ ///
+ /// This sends a dummy event to the event loop to simulate the reception
+ /// of an event, making the wait return early. Called after `stop()`, this
+ /// ensures the event loop will terminate quickly if you specified a long
+ /// timeout (or no timeout at all) to the `dispatch` or `run` method.
+ pub fn wakeup(&self) {
+ self.notifier.notify().ok();
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use std::{cell::Cell, rc::Rc, time::Duration};
+
+ use crate::{
+ channel::{channel, Channel},
+ ping::*,
+ EventIterator, EventSource, Poll, PostAction, Readiness, RegistrationToken, Token,
+ TokenFactory,
+ };
+
+ #[cfg(unix)]
+ use crate::{generic::Generic, Dispatcher, Interest, Mode};
+
+ use super::EventLoop;
+
+ #[test]
+ fn dispatch_idle() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = false;
+
+ event_loop.handle().insert_idle(|d| {
+ *d = true;
+ });
+
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+
+ assert!(dispatched);
+ }
+
+ #[test]
+ fn cancel_idle() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = false;
+
+ let handle = event_loop.handle();
+ let idle = handle.insert_idle(move |d| {
+ *d = true;
+ });
+
+ idle.cancel();
+
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+
+ assert!(!dispatched);
+ }
+
+ #[test]
+ fn wakeup() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let signal = event_loop.get_signal();
+
+ ::std::thread::spawn(move || {
+ ::std::thread::sleep(Duration::from_millis(500));
+ signal.wakeup();
+ });
+
+ // the test should return
+ event_loop.dispatch(None, &mut ()).unwrap();
+ }
+
+ #[test]
+ fn wakeup_stop() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let signal = event_loop.get_signal();
+
+ ::std::thread::spawn(move || {
+ ::std::thread::sleep(Duration::from_millis(500));
+ signal.stop();
+ signal.wakeup();
+ });
+
+ // the test should return
+ event_loop.run(None, &mut (), |_| {}).unwrap();
+ }
+
+ #[test]
+ fn additional_events() {
+ let mut event_loop: EventLoop<'_, Lock> = EventLoop::try_new().unwrap();
+ let mut lock = Lock {
+ lock: Rc::new((
+ // Whether the lock is locked
+ Cell::new(false),
+ // The total number of events processed in process_events
+ Cell::new(0),
+ // The total number of events processed in before_handle_events
+ // This is used to ensure that the count seen in before_handle_events is expected
+ Cell::new(0),
+ )),
+ };
+ let (sender, channel) = channel();
+ let token = event_loop
+ .handle()
+ .insert_source(
+ LockingSource {
+ channel,
+ lock: lock.clone(),
+ },
+ |_, _, lock| {
+ lock.lock();
+ lock.unlock();
+ },
+ )
+ .unwrap();
+ sender.send(()).unwrap();
+
+ event_loop.dispatch(None, &mut lock).unwrap();
+ // We should have been locked twice so far
+ assert_eq!(lock.lock.1.get(), 2);
+ // And we should have received one event
+ assert_eq!(lock.lock.2.get(), 1);
+ event_loop.handle().disable(&token).unwrap();
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut lock)
+ .unwrap();
+ assert_eq!(lock.lock.1.get(), 2);
+
+ event_loop.handle().enable(&token).unwrap();
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut lock)
+ .unwrap();
+ assert_eq!(lock.lock.1.get(), 3);
+ event_loop.handle().remove(token);
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut lock)
+ .unwrap();
+ assert_eq!(lock.lock.1.get(), 3);
+ assert_eq!(lock.lock.2.get(), 1);
+
+ #[derive(Clone)]
+ struct Lock {
+ lock: Rc<(Cell<bool>, Cell<u32>, Cell<u32>)>,
+ }
+ impl Lock {
+ fn lock(&self) {
+ if self.lock.0.get() {
+ panic!();
+ }
+ // Increase the count
+ self.lock.1.set(self.lock.1.get() + 1);
+ self.lock.0.set(true)
+ }
+ fn unlock(&self) {
+ if !self.lock.0.get() {
+ panic!();
+ }
+ self.lock.0.set(false);
+ }
+ }
+ struct LockingSource {
+ channel: Channel<()>,
+ lock: Lock,
+ }
+ impl EventSource for LockingSource {
+ type Event = <Channel<()> as EventSource>::Event;
+
+ type Metadata = <Channel<()> as EventSource>::Metadata;
+
+ type Ret = <Channel<()> as EventSource>::Ret;
+
+ type Error = <Channel<()> as EventSource>::Error;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ self.channel.process_events(readiness, token, callback)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.channel.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.channel.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ self.channel.unregister(poll)
+ }
+
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = true;
+
+ fn before_sleep(&mut self) -> crate::Result<Option<(Readiness, Token)>> {
+ self.lock.lock();
+ Ok(None)
+ }
+
+ fn before_handle_events(&mut self, events: EventIterator) {
+ let events_count = events.count();
+ let lock = &self.lock.lock;
+ lock.2.set(lock.2.get() + events_count as u32);
+ self.lock.unlock();
+ }
+ }
+ }
+ #[test]
+ fn default_additional_events() {
+ let (sender, channel) = channel();
+ let mut test_source = NoopWithDefaultHandlers { channel };
+ let mut event_loop = EventLoop::try_new().unwrap();
+ event_loop
+ .handle()
+ .insert_source(Box::new(&mut test_source), |_, _, _| {})
+ .unwrap();
+ sender.send(()).unwrap();
+
+ event_loop.dispatch(None, &mut ()).unwrap();
+ struct NoopWithDefaultHandlers {
+ channel: Channel<()>,
+ }
+ impl EventSource for NoopWithDefaultHandlers {
+ type Event = <Channel<()> as EventSource>::Event;
+
+ type Metadata = <Channel<()> as EventSource>::Metadata;
+
+ type Ret = <Channel<()> as EventSource>::Ret;
+
+ type Error = <Channel<()> as EventSource>::Error;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ self.channel.process_events(readiness, token, callback)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.channel.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.channel.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ self.channel.unregister(poll)
+ }
+
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = true;
+ }
+ }
+
+ #[test]
+ fn additional_events_synthetic() {
+ let mut event_loop: EventLoop<'_, Lock> = EventLoop::try_new().unwrap();
+ let mut lock = Lock {
+ lock: Rc::new(Cell::new(false)),
+ };
+ event_loop
+ .handle()
+ .insert_source(
+ InstantWakeupLockingSource {
+ lock: lock.clone(),
+ token: None,
+ },
+ |_, _, lock| {
+ lock.lock();
+ lock.unlock();
+ },
+ )
+ .unwrap();
+
+ // Loop should finish, as before_sleep() always queues a synthetic event, so dispatch() never blocks
+ event_loop.dispatch(None, &mut lock).unwrap();
+ #[derive(Clone)]
+ struct Lock {
+ lock: Rc<Cell<bool>>,
+ }
+ impl Lock {
+ fn lock(&self) {
+ if self.lock.get() {
+ panic!();
+ }
+ self.lock.set(true)
+ }
+ fn unlock(&self) {
+ if !self.lock.get() {
+ panic!();
+ }
+ self.lock.set(false);
+ }
+ }
+ struct InstantWakeupLockingSource {
+ lock: Lock,
+ token: Option<Token>,
+ }
+ impl EventSource for InstantWakeupLockingSource {
+ type Event = ();
+
+ type Metadata = ();
+
+ type Ret = ();
+
+ type Error = <Channel<()> as EventSource>::Error;
+
+ fn process_events<F>(
+ &mut self,
+ _: Readiness,
+ token: Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ assert_eq!(token, self.token.unwrap());
+ callback((), &mut ());
+ Ok(PostAction::Continue)
+ }
+
+ fn register(
+ &mut self,
+ _: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.token = Some(token_factory.token());
+ Ok(())
+ }
+
+ fn reregister(&mut self, _: &mut Poll, _: &mut TokenFactory) -> crate::Result<()> {
+ unreachable!()
+ }
+
+ fn unregister(&mut self, _: &mut Poll) -> crate::Result<()> {
+ unreachable!()
+ }
+
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = true;
+
+ fn before_sleep(&mut self) -> crate::Result<Option<(Readiness, Token)>> {
+ self.lock.lock();
+ Ok(Some((Readiness::EMPTY, self.token.unwrap())))
+ }
+
+ fn before_handle_events(&mut self, _: EventIterator) {
+ self.lock.unlock();
+ }
+ }
+ }
+
+ #[cfg(unix)]
+ #[test]
+ fn insert_bad_source() {
+ use std::os::unix::io::FromRawFd;
+
+ let event_loop = EventLoop::<()>::try_new().unwrap();
+ let fd = unsafe { std::os::unix::io::OwnedFd::from_raw_fd(420) };
+ let ret = event_loop.handle().insert_source(
+ crate::sources::generic::Generic::new(fd, Interest::READ, Mode::Level),
+ |_, _, _| Ok(PostAction::Continue),
+ );
+ assert!(ret.is_err());
+ }
+
+ #[test]
+ fn invalid_token() {
+ let (_ping, source) = crate::sources::ping::make_ping().unwrap();
+
+ let event_loop = EventLoop::<()>::try_new().unwrap();
+ let handle = event_loop.handle();
+ let reg_token = handle.insert_source(source, |_, _, _| {}).unwrap();
+ handle.remove(reg_token);
+
+ let ret = handle.enable(®_token);
+ assert!(ret.is_err());
+ }
+
+ #[cfg(unix)]
+ #[test]
+ fn insert_source_no_interest() {
+ use rustix::pipe::pipe;
+
+ // Create a pipe to get an arbitrary fd.
+ let (read, _write) = pipe().unwrap();
+
+ let source = crate::sources::generic::Generic::new(read, Interest::EMPTY, Mode::Level);
+ let dispatcher = Dispatcher::new(source, |_, _, _| Ok(PostAction::Continue));
+
+ let event_loop = EventLoop::<()>::try_new().unwrap();
+ let handle = event_loop.handle();
+ let ret = handle.register_dispatcher(dispatcher.clone());
+
+ if let Ok(token) = ret {
+ // Unwrap the dispatcher+source and close the read end.
+ handle.remove(token);
+ } else {
+ // Fail the test.
+ panic!();
+ }
+ }
+
+ #[test]
+ fn disarm_rearm() {
+ let mut event_loop = EventLoop::<bool>::try_new().unwrap();
+ let (ping, ping_source) = make_ping().unwrap();
+
+ let ping_token = event_loop
+ .handle()
+ .insert_source(ping_source, |(), &mut (), dispatched| {
+ *dispatched = true;
+ })
+ .unwrap();
+
+ ping.ping();
+ let mut dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+
+ // disable the source
+ ping.ping();
+ event_loop.handle().disable(&ping_token).unwrap();
+ let mut dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+
+ // reenable it, the previous ping now gets dispatched
+ event_loop.handle().enable(&ping_token).unwrap();
+ let mut dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+ }
+
+ #[test]
+ fn multiple_tokens() {
+ struct DoubleSource {
+ ping1: PingSource,
+ ping2: PingSource,
+ }
+
+ impl crate::EventSource for DoubleSource {
+ type Event = u32;
+ type Metadata = ();
+ type Ret = ();
+ type Error = PingError;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ self.ping1
+ .process_events(readiness, token, |(), &mut ()| callback(1, &mut ()))?;
+ self.ping2
+ .process_events(readiness, token, |(), &mut ()| callback(2, &mut ()))?;
+ Ok(PostAction::Continue)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.ping1.register(poll, token_factory)?;
+ self.ping2.register(poll, token_factory)?;
+ Ok(())
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.ping1.reregister(poll, token_factory)?;
+ self.ping2.reregister(poll, token_factory)?;
+ Ok(())
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ self.ping1.unregister(poll)?;
+ self.ping2.unregister(poll)?;
+ Ok(())
+ }
+ }
+
+ let mut event_loop = EventLoop::<u32>::try_new().unwrap();
+
+ let (ping1, source1) = make_ping().unwrap();
+ let (ping2, source2) = make_ping().unwrap();
+
+ let source = DoubleSource {
+ ping1: source1,
+ ping2: source2,
+ };
+
+ event_loop
+ .handle()
+ .insert_source(source, |i, _, d| {
+ eprintln!("Dispatching {}", i);
+ *d += i
+ })
+ .unwrap();
+
+ let mut dispatched = 0;
+ ping1.ping();
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 1);
+
+ dispatched = 0;
+ ping2.ping();
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 2);
+
+ dispatched = 0;
+ ping1.ping();
+ ping2.ping();
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 3);
+ }
+
+ #[cfg(unix)]
+ #[test]
+ fn change_interests() {
+ use rustix::io::write;
+ use rustix::net::{recv, socketpair, AddressFamily, RecvFlags, SocketFlags, SocketType};
+ let mut event_loop = EventLoop::<bool>::try_new().unwrap();
+
+ let (sock1, sock2) = socketpair(
+ AddressFamily::UNIX,
+ SocketType::STREAM,
+ SocketFlags::empty(),
+ None, // recv with DONTWAIT will suffice for platforms without SockFlag::SOCK_NONBLOCKING such as macOS
+ )
+ .unwrap();
+
+ let source = Generic::new(sock1, Interest::READ, Mode::Level);
+ let dispatcher = Dispatcher::new(source, |_, fd, dispatched| {
+ *dispatched = true;
+ // read all contents available to drain the socket
+ let mut buf = [0u8; 32];
+ loop {
+ match recv(&*fd, &mut buf, RecvFlags::DONTWAIT) {
+ Ok(0) => break, // closed pipe, we are now inert
+ Ok(_) => {}
+ Err(e) => {
+ let e: std::io::Error = e.into();
+ if e.kind() == std::io::ErrorKind::WouldBlock {
+ break;
+ // nothing more to read
+ } else {
+ // propagate error
+ return Err(e);
+ }
+ }
+ }
+ }
+ Ok(PostAction::Continue)
+ });
+
+ let sock_token_1 = event_loop
+ .handle()
+ .register_dispatcher(dispatcher.clone())
+ .unwrap();
+
+ // first dispatch, nothing is readable
+ let mut dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+
+ // write something, the socket becomes readable
+ write(&sock2, &[1, 2, 3]).unwrap();
+ dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+
+ // All has been read, no longer readable
+ dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+
+ // change the interests for writability instead
+ dispatcher.as_source_mut().interest = Interest::WRITE;
+ event_loop.handle().update(&sock_token_1).unwrap();
+
+ // the socket is writable
+ dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+
+ // change back to readable
+ dispatcher.as_source_mut().interest = Interest::READ;
+ event_loop.handle().update(&sock_token_1).unwrap();
+
+ // the socket is not readable
+ dispatched = false;
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+ }
+
+ #[test]
+ fn kill_source() {
+ let mut event_loop = EventLoop::<Option<RegistrationToken>>::try_new().unwrap();
+
+ let handle = event_loop.handle();
+ let (ping, ping_source) = make_ping().unwrap();
+ let ping_token = event_loop
+ .handle()
+ .insert_source(ping_source, move |(), &mut (), opt_src| {
+ if let Some(src) = opt_src.take() {
+ handle.remove(src);
+ }
+ })
+ .unwrap();
+
+ ping.ping();
+
+ let mut opt_src = Some(ping_token);
+
+ event_loop.dispatch(Duration::ZERO, &mut opt_src).unwrap();
+
+ assert!(opt_src.is_none());
+ }
+
+ #[test]
+ fn non_static_data() {
+ use std::sync::mpsc;
+
+ let (sender, receiver) = mpsc::channel();
+
+ {
+ struct RefSender<'a>(&'a mpsc::Sender<()>);
+ let mut ref_sender = RefSender(&sender);
+
+ let mut event_loop = EventLoop::<RefSender<'_>>::try_new().unwrap();
+ let (ping, ping_source) = make_ping().unwrap();
+ let _ping_token = event_loop
+ .handle()
+ .insert_source(ping_source, |_, _, ref_sender| {
+ ref_sender.0.send(()).unwrap();
+ })
+ .unwrap();
+
+ ping.ping();
+
+ event_loop
+ .dispatch(Duration::ZERO, &mut ref_sender)
+ .unwrap();
+ }
+
+ receiver.recv().unwrap();
+ // sender still usable (e.g. for another EventLoop)
+ drop(sender);
+ }
+
+ #[cfg(feature = "block_on")]
+ #[test]
+ fn block_on_test() {
+ use crate::sources::timer::TimeoutFuture;
+ use std::time::Duration;
+
+ let mut evl = EventLoop::<()>::try_new().unwrap();
+
+ let mut data = 22;
+ let timeout = {
+ let data = &mut data;
+ let evl_handle = evl.handle();
+
+ async move {
+ TimeoutFuture::from_duration(&evl_handle, Duration::from_secs(2)).await;
+ *data = 32;
+ 11
+ }
+ };
+
+ let result = evl.block_on(timeout, &mut (), |&mut ()| {}).unwrap();
+ assert_eq!(result, Some(11));
+ assert_eq!(data, 32);
+ }
+
+ #[cfg(feature = "block_on")]
+ #[test]
+ fn block_on_early_cancel() {
+ use crate::sources::timer;
+ use std::time::Duration;
+
+ let mut evl = EventLoop::<()>::try_new().unwrap();
+
+ let mut data = 22;
+ let timeout = {
+ let data = &mut data;
+ let evl_handle = evl.handle();
+
+ async move {
+ timer::TimeoutFuture::from_duration(&evl_handle, Duration::from_secs(2)).await;
+ *data = 32;
+ 11
+ }
+ };
+
+ let timer_source = timer::Timer::from_duration(Duration::from_secs(1));
+ let handle = evl.get_signal();
+ let _timer_token = evl
+ .handle()
+ .insert_source(timer_source, move |_, _, _| {
+ handle.stop();
+ timer::TimeoutAction::Drop
+ })
+ .unwrap();
+
+ let result = evl.block_on(timeout, &mut (), |&mut ()| {}).unwrap();
+ assert_eq!(result, None);
+ assert_eq!(data, 22);
+ }
+
+ #[test]
+ fn reuse() {
+ use crate::sources::timer;
+ use std::sync::{Arc, Mutex};
+ use std::time::{Duration, Instant};
+
+ let mut evl = EventLoop::<RegistrationToken>::try_new().unwrap();
+ let handle = evl.handle();
+
+ let data = Arc::new(Mutex::new(1));
+ let data_cloned = data.clone();
+
+ let timer_source = timer::Timer::from_duration(Duration::from_secs(1));
+ let mut first_timer_token = evl
+ .handle()
+ .insert_source(timer_source, move |_, _, own_token| {
+ handle.remove(*own_token);
+ let data_cloned = data_cloned.clone();
+ let _ = handle.insert_source(timer::Timer::immediate(), move |_, _, _| {
+ *data_cloned.lock().unwrap() = 2;
+ timer::TimeoutAction::Drop
+ });
+ timer::TimeoutAction::Drop
+ })
+ .unwrap();
+
+ let now = Instant::now();
+ loop {
+ evl.dispatch(Some(Duration::from_secs(3)), &mut first_timer_token)
+ .unwrap();
+ if Instant::now().duration_since(now) > Duration::from_secs(3) {
+ break;
+ }
+ }
+
+ assert_eq!(*data.lock().unwrap(), 2);
+ }
+
+ #[test]
+ fn drop_of_subsource() {
+ struct WithSubSource {
+ token: Option<Token>,
+ }
+
+ impl crate::EventSource for WithSubSource {
+ type Event = ();
+ type Metadata = ();
+ type Ret = ();
+ type Error = crate::Error;
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = true;
+
+ fn process_events<F>(
+ &mut self,
+ _: Readiness,
+ _: Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ callback((), &mut ());
+ // Drop the source
+ Ok(PostAction::Remove)
+ }
+
+ fn register(&mut self, _: &mut Poll, fact: &mut TokenFactory) -> crate::Result<()> {
+ // produce a few tokens to emulate a subsource
+ fact.token();
+ fact.token();
+ self.token = Some(fact.token());
+ Ok(())
+ }
+
+ fn reregister(&mut self, _: &mut Poll, _: &mut TokenFactory) -> crate::Result<()> {
+ Ok(())
+ }
+
+ fn unregister(&mut self, _: &mut Poll) -> crate::Result<()> {
+ Ok(())
+ }
+
+ // emulate a readiness
+ fn before_sleep(&mut self) -> crate::Result<Option<(Readiness, Token)>> {
+ Ok(self.token.map(|token| {
+ (
+ Readiness {
+ readable: true,
+ writable: false,
+ error: false,
+ },
+ token,
+ )
+ }))
+ }
+ }
+
+ // Now the actual test
+ let mut evl = EventLoop::<bool>::try_new().unwrap();
+ evl.handle()
+ .insert_source(WithSubSource { token: None }, |_, _, ran| {
+ *ran = true;
+ })
+ .unwrap();
+
+ let mut ran = false;
+
+ evl.dispatch(Some(Duration::ZERO), &mut ran).unwrap();
+
+ assert!(ran);
+ }
+
+ // A dummy EventSource to test insertion and removal of sources
+ struct DummySource;
+
+ impl crate::EventSource for DummySource {
+ type Event = ();
+ type Metadata = ();
+ type Ret = ();
+ type Error = crate::Error;
+
+ fn process_events<F>(
+ &mut self,
+ _: Readiness,
+ _: Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ callback((), &mut ());
+ Ok(PostAction::Continue)
+ }
+
+ fn register(&mut self, _: &mut Poll, _: &mut TokenFactory) -> crate::Result<()> {
+ Ok(())
+ }
+
+ fn reregister(&mut self, _: &mut Poll, _: &mut TokenFactory) -> crate::Result<()> {
+ Ok(())
+ }
+
+ fn unregister(&mut self, _: &mut Poll) -> crate::Result<()> {
+ Ok(())
+ }
+ }
+}
+
+//! Macros for helping with common operations in Calloop.
+
+/// Register a set of event sources. Effectively calls
+/// [`EventSource::register()`] for all the sources provided.
+///
+/// Usage:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// calloop::batch_register!(
+/// poll, token_factory,
+/// self.source_one,
+/// self.source_two,
+/// self.source_three,
+/// self.source_four,
+/// )
+/// ```
+///
+/// Note that there is no scope for customisation; if you need to do special
+/// things with a particular source, you'll need to leave it off the list. Also
+/// note that this only does try-or-early-return error handling in the order
+/// that you list the sources; if you need anything else, don't use this macro.
+///
+/// [`EventSource::register()`]: crate::EventSource::register()
+#[macro_export]
+macro_rules! batch_register {
+ ($poll:ident, $token_fac:ident, $( $source:expr ),* $(,)?) => {
+ {
+ $(
+ $source.register($poll, $token_fac)?;
+ )*
+ $crate::Result::<_>::Ok(())
+ }
+ };
+}
+
+/// Reregister a set of event sources. Effectively calls
+/// [`EventSource::reregister()`] for all the sources provided.
+///
+/// Usage:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// calloop::batch_reregister!(
+/// poll, token_factory,
+/// self.source_one,
+/// self.source_two,
+/// self.source_three,
+/// self.source_four,
+/// )
+/// ```
+///
+/// Note that there is no scope for customisation; if you need to do special
+/// things with a particular source, you'll need to leave it off the list. Also
+/// note that this only does try-or-early-return error handling in the order
+/// that you list the sources; if you need anything else, don't use this macro.
+///
+/// [`EventSource::reregister()`]: crate::EventSource::reregister()
+#[macro_export]
+macro_rules! batch_reregister {
+ ($poll:ident, $token_fac:ident, $( $source:expr ),* $(,)?) => {
+ {
+ $(
+ $source.reregister($poll, $token_fac)?;
+ )*
+ $crate::Result::<_>::Ok(())
+ }
+ };
+}
+
+/// Unregister a set of event sources. Effectively calls
+/// [`EventSource::unregister()`] for all the sources provided.
+///
+/// Usage:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// calloop::batch_unregister!(
+/// poll,
+/// self.source_one,
+/// self.source_two,
+/// self.source_three,
+/// self.source_four,
+/// )
+/// ```
+///
+/// Note that there is no scope for customisation; if you need to do special
+/// things with a particular source, you'll need to leave it off the list. Also
+/// note that this only does try-or-early-return error handling in the order
+/// that you list the sources; if you need anything else, don't use this macro.
+///
+/// [`EventSource::unregister()`]: crate::EventSource::unregister()
+#[macro_export]
+macro_rules! batch_unregister {
+ ($poll:ident, $( $source:expr ),* $(,)?) => {
+ {
+ $(
+ $source.unregister($poll)?;
+ )*
+ $crate::Result::<_>::Ok(())
+ }
+ };
+}
+
+#[cfg(test)]
+mod tests {
+ use std::time::Duration;
+
+ use crate::{
+ ping::{make_ping, PingSource},
+ EventSource, PostAction,
+ };
+
+ struct BatchSource {
+ ping0: PingSource,
+ ping1: PingSource,
+ ping2: PingSource,
+ }
+
+ impl EventSource for BatchSource {
+ type Event = usize;
+ type Metadata = ();
+ type Ret = ();
+ type Error = Box<dyn std::error::Error + Sync + Send>;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ mut callback: F,
+ ) -> Result<crate::PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ self.ping0
+ .process_events(readiness, token, |_, m| callback(0, m))?;
+ self.ping1
+ .process_events(readiness, token, |_, m| callback(1, m))?;
+ self.ping2
+ .process_events(readiness, token, |_, m| callback(2, m))?;
+ Ok(PostAction::Continue)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ crate::batch_register!(poll, token_factory, self.ping0, self.ping1, self.ping2)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ crate::batch_reregister!(poll, token_factory, self.ping0, self.ping1, self.ping2)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ crate::batch_unregister!(poll, self.ping0, self.ping1, self.ping2)
+ }
+ }
+
+ #[test]
+ fn test_batch_operations() {
+ let mut fired = [false; 3];
+
+ let (send0, ping0) = make_ping().unwrap();
+ let (send1, ping1) = make_ping().unwrap();
+ let (send2, ping2) = make_ping().unwrap();
+
+ let top = BatchSource {
+ ping0,
+ ping1,
+ ping2,
+ };
+
+ let mut event_loop = crate::EventLoop::<[bool; 3]>::try_new().unwrap();
+ let handle = event_loop.handle();
+
+ let token = handle
+ .insert_source(top, |idx, _, fired| {
+ fired[idx] = true;
+ })
+ .unwrap();
+
+ send0.ping();
+ send1.ping();
+ send2.ping();
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert_eq!(fired, [true; 3]);
+
+ fired = [false; 3];
+
+ handle.update(&token).unwrap();
+
+ send0.ping();
+ send1.ping();
+ send2.ping();
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert_eq!(fired, [true; 3]);
+
+ fired = [false; 3];
+
+ handle.remove(token);
+
+ send0.ping();
+ send1.ping();
+ send2.ping();
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert_eq!(fired, [false; 3]);
+ }
+}
+
+//! An MPSC channel whose receiving end is an event source
+//!
+//! Create a channel using [`channel()`](channel), which returns a
+//! [`Sender`] that can be cloned and sent across threads if `T: Send`,
+//! and a [`Channel`] that can be inserted into an [`EventLoop`](crate::EventLoop).
+//! It will generate one event per message.
+//!
+//! A synchronous version of the channel is provided by [`sync_channel`], in which
+//! the [`SyncSender`] will block when the channel is full.
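+//!
+//! A minimal usage sketch (the `u32` message type is only illustrative):
+//!
+//! ```no_run
+//! use calloop::channel::{channel, Event};
+//! use calloop::EventLoop;
+//!
+//! let (sender, receiver) = channel::<u32>();
+//! let mut event_loop = EventLoop::<()>::try_new().unwrap();
+//!
+//! event_loop
+//!     .handle()
+//!     .insert_source(receiver, |event, _, _| match event {
+//!         // A message was sent through a `Sender`.
+//!         Event::Msg(value) => println!("got {}", value),
+//!         // All senders have been dropped.
+//!         Event::Closed => println!("channel closed"),
+//!     })
+//!     .unwrap();
+//!
+//! sender.send(42).unwrap();
+//!
+//! // Process the pending message.
+//! event_loop.dispatch(std::time::Duration::ZERO, &mut ()).unwrap();
+//! ```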
+
+use std::sync::mpsc;
+
+use crate::{EventSource, Poll, PostAction, Readiness, Token, TokenFactory};
+
+use super::ping::{make_ping, Ping, PingError, PingSource};
+
+/// The events generated by the channel event source
+#[derive(Debug)]
+pub enum Event<T> {
+ /// A message was received and is bundled here
+ Msg(T),
+ /// The channel was closed
+ ///
+ /// This means all the `Sender`s associated with this channel
+ /// have been dropped; no more messages will ever be received.
+ Closed,
+}
+
+/// The sender end of a channel
+///
+/// It can be cloned and sent across threads (if `T` is).
+#[derive(Debug)]
+pub struct Sender<T> {
+ sender: mpsc::Sender<T>,
+ ping: Ping,
+}
+
+impl<T> Clone for Sender<T> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn clone(&self) -> Sender<T> {
+ Sender {
+ sender: self.sender.clone(),
+ ping: self.ping.clone(),
+ }
+ }
+}
+
+impl<T> Sender<T> {
+ /// Send a message to the channel
+ ///
+ /// This will wake the event loop and deliver an `Event::Msg` to
+ /// it containing the provided value.
+ pub fn send(&self, t: T) -> Result<(), mpsc::SendError<T>> {
+ self.sender.send(t).map(|()| self.ping.ping())
+ }
+}
+
+impl<T> Drop for Sender<T> {
+ fn drop(&mut self) {
+ // ping on drop, to notify about channel closure
+ self.ping.ping();
+ }
+}
+
+/// The sender end of a synchronous channel
+///
+/// It can be cloned and sent across threads (if `T` is).
+#[derive(Debug)]
+pub struct SyncSender<T> {
+ sender: mpsc::SyncSender<T>,
+ ping: Ping,
+}
+
+impl<T> Clone for SyncSender<T> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn clone(&self) -> SyncSender<T> {
+ SyncSender {
+ sender: self.sender.clone(),
+ ping: self.ping.clone(),
+ }
+ }
+}
+
+impl<T> SyncSender<T> {
+ /// Send a message to the synchronous channel
+ ///
+ /// This will wake the event loop and deliver an `Event::Msg` to
+ /// it containing the provided value. If the channel is full, this
+ /// function will block until the event loop empties it and it can
+ /// deliver the message.
+ ///
+ /// Due to the blocking behavior, this method should not be used on the
+ /// same thread as the one running the event loop, as it could cause deadlocks.
+ pub fn send(&self, t: T) -> Result<(), mpsc::SendError<T>> {
+ let ret = self.try_send(t);
+ match ret {
+ Ok(()) => Ok(()),
+ Err(mpsc::TrySendError::Full(t)) => self.sender.send(t).map(|()| self.ping.ping()),
+ Err(mpsc::TrySendError::Disconnected(t)) => Err(mpsc::SendError(t)),
+ }
+ }
+
+ /// Send a message to the synchronous channel
+ ///
+ /// This will wake the event loop and deliver an `Event::Msg` to
+ /// it containing the provided value. If the channel is full, this
+ /// function will return an error, but the event loop will still be
+ /// signaled for readiness.
+ pub fn try_send(&self, t: T) -> Result<(), mpsc::TrySendError<T>> {
+ let ret = self.sender.try_send(t);
+ if let Ok(()) | Err(mpsc::TrySendError::Full(_)) = ret {
+ self.ping.ping();
+ }
+ ret
+ }
+}
+
+/// The receiving end of the channel
+///
+/// This is the event source to be inserted into your `EventLoop`.
+#[derive(Debug)]
+pub struct Channel<T> {
+ receiver: mpsc::Receiver<T>,
+ source: PingSource,
+}
+
+// This impl is safe because the Channel is only able to move around threads
+// when it is not inserted into an event loop. (Otherwise it is stuck into
+// a Source<_> and the internals of calloop, which are not Send).
+// At this point, the Arc<Receiver> has a count of 1, and it is obviously
+// safe to Send between threads.
+unsafe impl<T: Send> Send for Channel<T> {}
+
+impl<T> Channel<T> {
+ /// Proxy for [`mpsc::Receiver::recv`] to manually poll events.
+ ///
+ /// *Note*: Normally you would want to use the `Channel` by inserting
+ /// it into an event loop instead. Use this for example to immediately
+ /// dispatch pending events after creation.
+ pub fn recv(&self) -> Result<T, mpsc::RecvError> {
+ self.receiver.recv()
+ }
+
+ /// Proxy for [`mpsc::Receiver::try_recv`] to manually poll events.
+ ///
+ /// *Note*: Normally you would want to use the `Channel` by inserting
+ /// it into an event loop instead. Use this for example to immediately
+ /// dispatch pending events after creation.
+ pub fn try_recv(&self) -> Result<T, mpsc::TryRecvError> {
+ self.receiver.try_recv()
+ }
+}
+
+/// Create a new asynchronous channel
+pub fn channel<T>() -> (Sender<T>, Channel<T>) {
+ let (sender, receiver) = mpsc::channel();
+ let (ping, source) = make_ping().expect("Failed to create a Ping.");
+ (Sender { sender, ping }, Channel { receiver, source })
+}
+
+/// Create a new synchronous, bounded channel
+pub fn sync_channel<T>(bound: usize) -> (SyncSender<T>, Channel<T>) {
+ let (sender, receiver) = mpsc::sync_channel(bound);
+ let (ping, source) = make_ping().expect("Failed to create a Ping.");
+ (SyncSender { sender, ping }, Channel { receiver, source })
+}
+
+impl<T> EventSource for Channel<T> {
+ type Event = Event<T>;
+ type Metadata = ();
+ type Ret = ();
+ type Error = ChannelError;
+
+ fn process_events<C>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ mut callback: C,
+ ) -> Result<PostAction, Self::Error>
+ where
+ C: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ let receiver = &self.receiver;
+ self.source
+ .process_events(readiness, token, |(), &mut ()| loop {
+ match receiver.try_recv() {
+ Ok(val) => callback(Event::Msg(val), &mut ()),
+ Err(mpsc::TryRecvError::Empty) => break,
+ Err(mpsc::TryRecvError::Disconnected) => {
+ callback(Event::Closed, &mut ());
+ break;
+ }
+ }
+ })
+ .map_err(ChannelError)
+ }
+
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()> {
+ self.source.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.source.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ self.source.unregister(poll)
+ }
+}
+
+/// An error arising from processing events for a channel.
+#[derive(thiserror::Error, Debug)]
+#[error(transparent)]
+pub struct ChannelError(PingError);
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn basic_channel() {
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+
+ let handle = event_loop.handle();
+
+ let (tx, rx) = channel::<()>();
+
+ // (got_msg, got_closed)
+ let mut got = (false, false);
+
+ let _channel_token = handle
+ .insert_source(rx, move |evt, &mut (), got: &mut (bool, bool)| match evt {
+ Event::Msg(()) => {
+ got.0 = true;
+ }
+ Event::Closed => {
+ got.1 = true;
+ }
+ })
+ .unwrap();
+
+ // nothing is sent, nothing is received
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut got)
+ .unwrap();
+
+ assert_eq!(got, (false, false));
+
+        // a message is sent
+ tx.send(()).unwrap();
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut got)
+ .unwrap();
+
+ assert_eq!(got, (true, false));
+
+ // the sender is dropped
+ ::std::mem::drop(tx);
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut got)
+ .unwrap();
+
+ assert_eq!(got, (true, true));
+ }
+
+ #[test]
+ fn basic_sync_channel() {
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+
+ let handle = event_loop.handle();
+
+ let (tx, rx) = sync_channel::<()>(2);
+
+ let mut received = (0, false);
+
+ let _channel_token = handle
+ .insert_source(
+ rx,
+ move |evt, &mut (), received: &mut (u32, bool)| match evt {
+ Event::Msg(()) => {
+ received.0 += 1;
+ }
+ Event::Closed => {
+ received.1 = true;
+ }
+ },
+ )
+ .unwrap();
+
+ // nothing is sent, nothing is received
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut received)
+ .unwrap();
+
+ assert_eq!(received.0, 0);
+ assert!(!received.1);
+
+ // fill the channel
+ tx.send(()).unwrap();
+ tx.send(()).unwrap();
+ assert!(tx.try_send(()).is_err());
+
+ // empty it
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut received)
+ .unwrap();
+
+ assert_eq!(received.0, 2);
+ assert!(!received.1);
+
+ // send a final message and drop the sender
+ tx.send(()).unwrap();
+ std::mem::drop(tx);
+
+ // final read of the channel
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut received)
+ .unwrap();
+
+ assert_eq!(received.0, 3);
+ assert!(received.1);
+ }
+}
+
//! A futures executor as an event source
+//!
+//! Only available with the `executor` cargo feature of `calloop`.
+//!
+//! This executor is intended for light futures, which will be polled as part of your
+//! event loop. Such futures may be waiting for IO, or for some external computation on
+//! another thread, for example.
+//!
+//! You can create a new executor using the `executor` function, which creates a pair
+//! `(Executor<T>, Scheduler<T>)` to handle futures that all evaluate to type `T`. The
+//! executor should be inserted into your event loop, and will yield the return values of
+//! the futures as they finish into your callback. The scheduler can be cloned and used
+//! to send futures to be executed into the executor. A generic executor can be obtained
+//! by choosing `T = ()` and letting futures handle the forwarding of their return values
+//! (if any) by their own means.
+//!
+//! **Note:** The futures must have their own means of being woken up, as this executor is,
+//! by itself, not I/O aware. See [`LoopHandle::adapt_io`](crate::LoopHandle#method.adapt_io)
+//! for that, or you can use some other mechanism if you prefer.
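+//!
+//! For illustration, a minimal sketch of running one future to completion
+//! (error handling elided with `unwrap()` for brevity):
+//!
+//! ```
+//! let (exec, sched) = calloop::futures::executor::<u32>().unwrap();
+//!
+//! let mut event_loop = calloop::EventLoop::<Option<u32>>::try_new().unwrap();
+//! event_loop
+//!     .handle()
+//!     .insert_source(exec, |value, _, result| {
+//!         // Called once per finished future, with its output.
+//!         *result = Some(value);
+//!     })
+//!     .unwrap();
+//!
+//! sched.schedule(async { 6 * 7 }).unwrap();
+//!
+//! let mut result = None;
+//! event_loop
+//!     .dispatch(Some(std::time::Duration::ZERO), &mut result)
+//!     .unwrap();
+//! assert_eq!(result, Some(42));
+//! ```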
+
+use async_task::{Builder, Runnable};
+use slab::Slab;
+use std::{
+ cell::RefCell,
+ future::Future,
+ rc::Rc,
+ sync::{
+ atomic::{AtomicBool, Ordering},
+ mpsc, Arc, Mutex,
+ },
+ task::Waker,
+};
+
+use crate::{
+ sources::{
+ channel::ChannelError,
+ ping::{make_ping, Ping, PingError, PingSource},
+ EventSource,
+ },
+ Poll, PostAction, Readiness, Token, TokenFactory,
+};
+
+/// A future executor as an event source
+#[derive(Debug)]
+pub struct Executor<T> {
+ /// Shared state between the executor and the scheduler.
+ state: Rc<State<T>>,
+
+ /// Notifies us when the executor is woken up.
+ ping: PingSource,
+}
+
+/// A scheduler to send futures to an executor
+#[derive(Clone, Debug)]
+pub struct Scheduler<T> {
+ /// Shared state between the executor and the scheduler.
+ state: Rc<State<T>>,
+}
+
+/// The inner state of the executor.
+#[derive(Debug)]
+struct State<T> {
+ /// The incoming queue of runnables to be executed.
+ incoming: mpsc::Receiver<Runnable<usize>>,
+
+ /// The sender corresponding to `incoming`.
+ sender: Arc<Sender>,
+
+ /// The list of currently active tasks.
+ ///
+ /// This is set to `None` when the executor is destroyed.
+ active_tasks: RefCell<Option<Slab<Active<T>>>>,
+}
+
+/// Send a future to an executor.
+///
+/// This needs to be thread-safe, as it is called from a `Waker` that may be on a different thread.
+#[derive(Debug)]
+struct Sender {
+ /// The sender used to send runnables to the executor.
+ ///
+ /// `mpsc::Sender` is `!Sync`, wrapping it in a `Mutex` makes it `Sync`.
+ sender: Mutex<mpsc::Sender<Runnable<usize>>>,
+
+ /// The ping source used to wake up the executor.
+ wake_up: Ping,
+
+ /// Whether the executor has already been woken.
+ notified: AtomicBool,
+}
+
+/// An active future or its result.
+#[derive(Debug)]
+enum Active<T> {
+ /// The future is currently being polled.
+ ///
+ /// Waking this waker will insert the runnable into `incoming`.
+ Future(Waker),
+
+ /// The future has finished polling, and its result is stored here.
+ Finished(T),
+}
+
+impl<T> Active<T> {
+ fn is_finished(&self) -> bool {
+ matches!(self, Active::Finished(_))
+ }
+}
+
+impl<T> Scheduler<T> {
+ /// Sends the given future to the executor associated to this scheduler
+ ///
+    /// Returns an error if the executor no longer exists.
+ pub fn schedule<Fut: 'static>(&self, future: Fut) -> Result<(), ExecutorDestroyed>
+ where
+ Fut: Future<Output = T>,
+ T: 'static,
+ {
+ /// Store this future's result in the executor.
+ struct StoreOnDrop<'a, T> {
+ index: usize,
+ value: Option<T>,
+ state: &'a State<T>,
+ }
+
+ impl<T> Drop for StoreOnDrop<'_, T> {
+ fn drop(&mut self) {
+ let mut active_tasks = self.state.active_tasks.borrow_mut();
+ if let Some(active_tasks) = active_tasks.as_mut() {
+ if let Some(value) = self.value.take() {
+ active_tasks[self.index] = Active::Finished(value);
+ } else {
+ // The future was dropped before it finished.
+ // Remove it from the active list.
+ active_tasks.remove(self.index);
+ }
+ }
+ }
+ }
+
+ fn assert_send_and_sync<T: Send + Sync>(_: &T) {}
+
+ let mut active_guard = self.state.active_tasks.borrow_mut();
+ let active_tasks = active_guard.as_mut().ok_or(ExecutorDestroyed)?;
+
+ // Wrap the future in another future that polls it and stores the result.
+ let index = active_tasks.vacant_key();
+ let future = {
+ let state = self.state.clone();
+ async move {
+ let mut guard = StoreOnDrop {
+ index,
+ value: None,
+ state: &state,
+ };
+
+ // Get the value of the future.
+ let value = future.await;
+
+ // Store it in the executor.
+ guard.value = Some(value);
+ }
+ };
+
+ // A schedule function that inserts the runnable into the incoming queue.
+ let schedule = {
+ let sender = self.state.sender.clone();
+ move |runnable| sender.send(runnable)
+ };
+
+ assert_send_and_sync(&schedule);
+
+ // Spawn the future.
+ let (runnable, task) = Builder::new()
+ .metadata(index)
+ .spawn_local(move |_| future, schedule);
+
+ // Insert the runnable into the set of active tasks.
+ active_tasks.insert(Active::Future(runnable.waker()));
+ drop(active_guard);
+
+ // Schedule the runnable and detach the task so it isn't cancellable.
+ runnable.schedule();
+ task.detach();
+
+ Ok(())
+ }
+}
+
+impl Sender {
+ /// Send a runnable to the executor.
+ fn send(&self, runnable: Runnable<usize>) {
+ // Send on the channel.
+ //
+ // All we do with the lock is call `send`, so there's no chance of any state being corrupted on
+ // panic. Therefore it's safe to ignore the mutex poison.
+ if let Err(e) = self
+ .sender
+ .lock()
+ .unwrap_or_else(|e| e.into_inner())
+ .send(runnable)
+ {
+ // The runnable must be dropped on its origin thread, since the original future might be
+ // !Send. This channel immediately sends it back to the Executor, which is pinned to the
+ // origin thread. The executor's Drop implementation will force all of the runnables to be
+ // dropped, therefore the channel should always be available. If we can't send the runnable,
+ // it indicates that the above behavior is broken and that unsoundness has occurred. The
+ // only option at this stage is to forget the runnable and leak the future.
+
+ std::mem::forget(e);
+ unreachable!("Attempted to send runnable to a stopped executor");
+ }
+
+ // If the executor is already awake, don't bother waking it up again.
+ if self.notified.swap(true, Ordering::SeqCst) {
+ return;
+ }
+
+ // Wake the executor.
+ self.wake_up.ping();
+ }
+}
+
+impl<T> Drop for Executor<T> {
+ fn drop(&mut self) {
+ let active_tasks = self.state.active_tasks.borrow_mut().take().unwrap();
+
+ // Wake all of the active tasks in order to destroy their runnables.
+ for (_, task) in active_tasks {
+ if let Active::Future(waker) = task {
+ // Don't let a panicking waker blow everything up.
+ //
+ // There is a chance that a future will panic and, during the unwinding process,
+ // drop this executor. However, since the future panicked, there is a possibility
+ // that the internal state of the waker will be invalid in such a way that the waker
+ // panics as well. Since this would be a panic during a panic, Rust will upgrade it
+ // into an abort.
+ //
+ // In the interest of not aborting without a good reason, we just drop the panic here.
+ std::panic::catch_unwind(|| waker.wake()).ok();
+ }
+ }
+
+ // Drain the queue in order to drop all of the runnables.
+ while self.state.incoming.try_recv().is_ok() {}
+ }
+}
+
+/// Error generated when trying to schedule a future after the
+/// executor was destroyed.
+#[derive(thiserror::Error, Debug)]
+#[error("the executor was destroyed")]
+pub struct ExecutorDestroyed;
+
+/// Create a new executor, and its associated scheduler
+///
+/// May fail due to OS errors preventing calloop from setting up its internal pipes (for
+/// example, if your process has reached its file descriptor limit).
+pub fn executor<T>() -> crate::Result<(Executor<T>, Scheduler<T>)> {
+ let (sender, incoming) = mpsc::channel();
+ let (wake_up, ping) = make_ping()?;
+
+ let state = Rc::new(State {
+ incoming,
+ active_tasks: RefCell::new(Some(Slab::new())),
+ sender: Arc::new(Sender {
+ sender: Mutex::new(sender),
+ wake_up,
+ notified: AtomicBool::new(false),
+ }),
+ });
+
+ Ok((
+ Executor {
+ state: state.clone(),
+ ping,
+ },
+ Scheduler { state },
+ ))
+}
+
+impl<T> EventSource for Executor<T> {
+ type Event = T;
+ type Metadata = ();
+ type Ret = ();
+ type Error = ExecutorError;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(T, &mut ()),
+ {
+ let state = &self.state;
+
+ let clear_readiness = {
+ let mut clear_readiness = false;
+
+ // Process runnables, but not too many at a time; better to move onto the next event quickly!
+ for _ in 0..1024 {
+ let runnable = match state.incoming.try_recv() {
+ Ok(runnable) => runnable,
+ Err(_) => {
+ // Make sure to clear the readiness if there are no more runnables.
+ clear_readiness = true;
+ break;
+ }
+ };
+
+ // Run the runnable.
+ let index = *runnable.metadata();
+ runnable.run();
+
+ // If the runnable finished with a result, call the callback.
+ let mut active_guard = state.active_tasks.borrow_mut();
+ let active_tasks = active_guard.as_mut().unwrap();
+
+ if let Some(state) = active_tasks.get(index) {
+ if state.is_finished() {
+ // Take out the state and provide it to the caller.
+ let result = match active_tasks.remove(index) {
+ Active::Finished(result) => result,
+ _ => unreachable!(),
+ };
+
+ // Drop the guard since the callback may register another future to the scheduler.
+ drop(active_guard);
+
+ callback(result, &mut ());
+ }
+ }
+ }
+
+ clear_readiness
+ };
+
+ // Clear the readiness of the ping source if there are no more runnables.
+ if clear_readiness {
+ self.ping
+ .process_events(readiness, token, |(), &mut ()| {})
+ .map_err(ExecutorError::WakeError)?;
+ }
+
+ // Set to the unnotified state.
+ state.sender.notified.store(false, Ordering::SeqCst);
+
+ Ok(PostAction::Continue)
+ }
+
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()> {
+ self.ping.register(poll, token_factory)?;
+ Ok(())
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.ping.reregister(poll, token_factory)?;
+ Ok(())
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ self.ping.unregister(poll)?;
+ Ok(())
+ }
+}
+
+/// An error arising from processing events in an async executor event source.
+#[derive(thiserror::Error, Debug)]
+pub enum ExecutorError {
+ /// Error while reading new futures added via [`Scheduler::schedule()`].
+ #[error("error adding new futures")]
+ NewFutureError(ChannelError),
+
+ /// Error while processing wake events from existing futures.
+ #[error("error processing wake events")]
+ WakeError(PingError),
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn ready() {
+ let mut event_loop = crate::EventLoop::<u32>::try_new().unwrap();
+
+ let handle = event_loop.handle();
+
+ let (exec, sched) = executor::<u32>().unwrap();
+
+ handle
+ .insert_source(exec, move |ret, &mut (), got| {
+ *got = ret;
+ })
+ .unwrap();
+
+ let mut got = 0;
+
+ let fut = async { 42 };
+
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut got)
+ .unwrap();
+
+ // the future is not yet inserted, and thus has not yet run
+ assert_eq!(got, 0);
+
+ sched.schedule(fut).unwrap();
+
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut got)
+ .unwrap();
+
+ // the future has run
+ assert_eq!(got, 42);
+ }
+}
+
//! A generic event source wrapping an IO object or file descriptor
+//!
+//! You can use this general purpose adapter around file-descriptor backed objects to
+//! insert into an [`EventLoop`](crate::EventLoop).
+//!
+//! The events generated by this [`Generic`] event source are the [`Readiness`](crate::Readiness)
+//! notification itself, and the monitored object is provided to your callback as the second
+//! argument.
+//!
+#![cfg_attr(unix, doc = "```")]
+#![cfg_attr(not(unix), doc = "```no_run")]
+//! # extern crate calloop;
+//! use calloop::{generic::Generic, Interest, Mode, PostAction};
+//!
+//! # fn main() {
+//! # let mut event_loop = calloop::EventLoop::<()>::try_new()
+//! # .expect("Failed to initialize the event loop!");
+//! # let handle = event_loop.handle();
+//! # #[cfg(unix)]
+//! # let io_object = std::io::stdin();
+//! # #[cfg(windows)]
+//! # let io_object: std::net::TcpStream = panic!();
+//! handle.insert_source(
+//! // wrap your IO object in a Generic, here we register for read readiness
+//! // in level-triggering mode
+//! Generic::new(io_object, Interest::READ, Mode::Level),
+//! |readiness, io_object, shared_data| {
+//! // The first argument of the callback is a Readiness
+//! // The second is a &mut reference to your object
+//!
+//! // your callback needs to return a Result<PostAction, std::io::Error>
+//!         // if it returns an error, the event loop will consider this
+//!         // event source as erroring and report it to the user.
+//! Ok(PostAction::Continue)
+//! }
+//! );
+//! # }
+//! ```
+//!
+//! It can also help you implement your own event sources: just have
+//! these `Generic<_>` as fields of your event source, and delegate the
+//! [`EventSource`](crate::EventSource) implementation to them, as sketched below.
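+//!
+//! For illustration, a hypothetical `WrappedStream` type (not part of this crate) that
+//! holds a `TcpStream` in a `Generic` and delegates the plumbing to it:
+//!
+//! ```
+//! use calloop::generic::Generic;
+//! use calloop::{EventSource, Poll, PostAction, Readiness, Token, TokenFactory};
+//!
+//! struct WrappedStream {
+//!     inner: Generic<std::net::TcpStream>,
+//! }
+//!
+//! impl EventSource for WrappedStream {
+//!     type Event = Readiness;
+//!     type Metadata = ();
+//!     type Ret = ();
+//!     type Error = std::io::Error;
+//!
+//!     fn process_events<C>(
+//!         &mut self,
+//!         readiness: Readiness,
+//!         token: Token,
+//!         mut callback: C,
+//!     ) -> Result<PostAction, Self::Error>
+//!     where
+//!         C: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+//!     {
+//!         // Delegate to the inner Generic, then surface the readiness to our own callback.
+//!         self.inner.process_events(readiness, token, |readiness, _stream| {
+//!             callback(readiness, &mut ());
+//!             Ok(PostAction::Continue)
+//!         })
+//!     }
+//!
+//!     fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> calloop::Result<()> {
+//!         self.inner.register(poll, token_factory)
+//!     }
+//!
+//!     fn reregister(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> calloop::Result<()> {
+//!         self.inner.reregister(poll, token_factory)
+//!     }
+//!
+//!     fn unregister(&mut self, poll: &mut Poll) -> calloop::Result<()> {
+//!         self.inner.unregister(poll)
+//!     }
+//! }
+//! ```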
+
+use polling::Poller;
+use std::{borrow, marker::PhantomData, ops, sync::Arc};
+
+#[cfg(unix)]
+use std::os::unix::io::{AsFd, AsRawFd, BorrowedFd};
+#[cfg(windows)]
+use std::os::windows::io::{
+ AsRawSocket as AsRawFd, AsSocket as AsFd, BorrowedSocket as BorrowedFd,
+};
+
+use crate::{EventSource, Interest, Mode, Poll, PostAction, Readiness, Token, TokenFactory};
+
+/// Wrapper to use a type implementing `AsRawFd` but not `AsFd` with `Generic`
+#[derive(Debug)]
+pub struct FdWrapper<T: AsRawFd>(T);
+
+impl<T: AsRawFd> FdWrapper<T> {
+ /// Wrap `inner` with an `AsFd` implementation.
+ ///
+ /// # Safety
+ /// This is safe if the `AsRawFd` implementation of `inner` always returns
+ /// a valid fd. This should usually be true for types implementing
+ /// `AsRawFd`. But this isn't guaranteed with `FdWrapper<RawFd>`.
+ pub unsafe fn new(inner: T) -> Self {
+ Self(inner)
+ }
+}
+
+impl<T: AsRawFd> ops::Deref for FdWrapper<T> {
+ type Target = T;
+
+ fn deref(&self) -> &Self::Target {
+ &self.0
+ }
+}
+
+impl<T: AsRawFd> ops::DerefMut for FdWrapper<T> {
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ &mut self.0
+ }
+}
+
+impl<T: AsRawFd> AsFd for FdWrapper<T> {
+ #[cfg(unix)]
+ fn as_fd(&self) -> BorrowedFd {
+ unsafe { BorrowedFd::borrow_raw(self.0.as_raw_fd()) }
+ }
+
+ #[cfg(windows)]
+ fn as_socket(&self) -> BorrowedFd {
+ unsafe { BorrowedFd::borrow_raw(self.0.as_raw_socket()) }
+ }
+}
+
+/// A wrapper around a type that does not allow safe mutable access to it.
+///
+/// The [`EventSource`] trait's `Metadata` type demands mutable access to the inner I/O source.
+/// However, the inner polling source used by `calloop` keeps the handle-based equivalent of an
+/// immutable pointer to the underlying object's I/O handle. Therefore, if the inner source is
+/// dropped, this leaves behind a dangling pointer which immediately invokes undefined behavior
+/// on the next poll of the event loop.
+///
+/// In order to prevent this from happening, the [`Generic`] I/O source must not directly expose
+/// a mutable reference to the underlying handle. This type wraps around the underlying handle and
+/// easily allows users to take immutable (`&`) references to the type, but makes mutable (`&mut`)
+/// references unsafe to get. Therefore, it prevents the source from being moved out and dropped
+/// while it is still registered in the event loop.
+///
+/// [`EventSource`]: crate::EventSource
+#[derive(Debug)]
+pub struct NoIoDrop<T>(T);
+
+impl<T> NoIoDrop<T> {
+ /// Get a mutable reference.
+ ///
+ /// # Safety
+ ///
+ /// The inner type's I/O source must not be dropped.
+ pub unsafe fn get_mut(&mut self) -> &mut T {
+ &mut self.0
+ }
+}
+
+impl<T> AsRef<T> for NoIoDrop<T> {
+ fn as_ref(&self) -> &T {
+ &self.0
+ }
+}
+
+impl<T> borrow::Borrow<T> for NoIoDrop<T> {
+ fn borrow(&self) -> &T {
+ &self.0
+ }
+}
+
+impl<T> ops::Deref for NoIoDrop<T> {
+ type Target = T;
+
+ fn deref(&self) -> &Self::Target {
+ &self.0
+ }
+}
+
+impl<T: AsFd> AsFd for NoIoDrop<T> {
+ #[cfg(unix)]
+ fn as_fd(&self) -> BorrowedFd<'_> {
+        // SAFETY: The inner type is not mutated.
+ self.0.as_fd()
+ }
+
+ #[cfg(windows)]
+ fn as_socket(&self) -> BorrowedFd<'_> {
+        // SAFETY: The inner type is not mutated.
+ self.0.as_socket()
+ }
+}
+
+/// A generic event source wrapping a FD-backed type
+#[derive(Debug)]
+pub struct Generic<F: AsFd, E = std::io::Error> {
+ /// The wrapped FD-backed type.
+ ///
+ /// This must be deregistered before it is dropped.
+ file: Option<NoIoDrop<F>>,
+ /// The programmed interest
+ pub interest: Interest,
+ /// The programmed mode
+ pub mode: Mode,
+
+ /// Back-reference to the poller.
+ ///
+ /// This is needed to drop the original file.
+ poller: Option<Arc<Poller>>,
+
+ // This token is used by the event loop logic to look up this source when an
+ // event occurs.
+ token: Option<Token>,
+
+ // This allows us to make the associated error and return types generic.
+ _error_type: PhantomData<E>,
+}
+
+impl<F: AsFd> Generic<F, std::io::Error> {
+ /// Wrap a FD-backed type into a `Generic` event source that uses
+ /// [`std::io::Error`] as its error type.
+ pub fn new(file: F, interest: Interest, mode: Mode) -> Generic<F, std::io::Error> {
+ Generic {
+ file: Some(NoIoDrop(file)),
+ interest,
+ mode,
+ token: None,
+ poller: None,
+ _error_type: PhantomData,
+ }
+ }
+
+ /// Wrap a FD-backed type into a `Generic` event source using an arbitrary error type.
+ pub fn new_with_error<E>(file: F, interest: Interest, mode: Mode) -> Generic<F, E> {
+ Generic {
+ file: Some(NoIoDrop(file)),
+ interest,
+ mode,
+ token: None,
+ poller: None,
+ _error_type: PhantomData,
+ }
+ }
+}
+
+impl<F: AsFd, E> Generic<F, E> {
+ /// Unwrap the `Generic` source to retrieve the underlying type
+ pub fn unwrap(mut self) -> F {
+ let NoIoDrop(file) = self.file.take().unwrap();
+
+ // Remove it from the poller.
+ if let Some(poller) = self.poller.take() {
+ poller
+ .delete(
+ #[cfg(unix)]
+ file.as_fd(),
+ #[cfg(windows)]
+ file.as_socket(),
+ )
+ .ok();
+ }
+
+ file
+ }
+
+ /// Get a reference to the underlying type.
+ pub fn get_ref(&self) -> &F {
+ &self.file.as_ref().unwrap().0
+ }
+
+ /// Get a mutable reference to the underlying type.
+ ///
+ /// # Safety
+ ///
+ /// This is unsafe because it allows you to modify the underlying type, which
+ /// allows you to drop the underlying event source. Dropping the underlying source
+ /// leads to a dangling reference.
+ pub unsafe fn get_mut(&mut self) -> &mut F {
+ self.file.as_mut().unwrap().get_mut()
+ }
+}
+
+impl<F: AsFd, E> Drop for Generic<F, E> {
+ fn drop(&mut self) {
+ // Remove it from the poller.
+ if let (Some(file), Some(poller)) = (self.file.take(), self.poller.take()) {
+ poller
+ .delete(
+ #[cfg(unix)]
+ file.as_fd(),
+ #[cfg(windows)]
+ file.as_socket(),
+ )
+ .ok();
+ }
+ }
+}
+
+impl<F, E> EventSource for Generic<F, E>
+where
+ F: AsFd,
+ E: Into<Box<dyn std::error::Error + Send + Sync>>,
+{
+ type Event = Readiness;
+ type Metadata = NoIoDrop<F>;
+ type Ret = Result<PostAction, E>;
+ type Error = E;
+
+ fn process_events<C>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ mut callback: C,
+ ) -> Result<PostAction, Self::Error>
+ where
+ C: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ // If the token is invalid or not ours, skip processing.
+ if self.token != Some(token) {
+ return Ok(PostAction::Continue);
+ }
+
+ callback(readiness, self.file.as_mut().unwrap())
+ }
+
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()> {
+ let token = token_factory.token();
+
+ // SAFETY: We ensure that we have a poller to deregister with (see below).
+ unsafe {
+ poll.register(
+ &self.file.as_ref().unwrap().0,
+ self.interest,
+ self.mode,
+ token,
+ )?;
+ }
+
+        // Make sure we can use the poller to deregister if need be, but only if
+        // registration actually succeeded, so that we don't try to unregister the FD on
+        // drop if it wasn't registered in the first place (for example if registration
+        // failed because of a duplicate insertion).
+ self.poller = Some(poll.poller().clone());
+ self.token = Some(token);
+
+ Ok(())
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ let token = token_factory.token();
+
+ poll.reregister(
+ &self.file.as_ref().unwrap().0,
+ self.interest,
+ self.mode,
+ token,
+ )?;
+
+ self.token = Some(token);
+ Ok(())
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ poll.unregister(&self.file.as_ref().unwrap().0)?;
+ self.poller = None;
+ self.token = None;
+ Ok(())
+ }
+}
+
+#[cfg(all(unix, test))]
+mod tests {
+ use std::io::{Read, Write};
+
+ use super::Generic;
+ use crate::{Dispatcher, Interest, Mode, PostAction};
+ #[cfg(unix)]
+ #[test]
+ fn dispatch_unix() {
+ use std::os::unix::net::UnixStream;
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+
+ let handle = event_loop.handle();
+
+ let (mut tx, rx) = UnixStream::pair().unwrap();
+
+ let generic = Generic::new(rx, Interest::READ, Mode::Level);
+
+ let mut dispached = false;
+
+ let _generic_token = handle
+ .insert_source(generic, move |readiness, file, d| {
+ assert!(readiness.readable);
+ // we have not registered for writability
+ assert!(!readiness.writable);
+ let mut buffer = vec![0; 10];
+ let ret = (&**file).read(&mut buffer).unwrap();
+ assert_eq!(ret, 6);
+ assert_eq!(&buffer[..6], &[1, 2, 3, 4, 5, 6]);
+
+ *d = true;
+ Ok(PostAction::Continue)
+ })
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut dispached)
+ .unwrap();
+
+ assert!(!dispached);
+
+ let ret = tx.write(&[1, 2, 3, 4, 5, 6]).unwrap();
+ assert_eq!(ret, 6);
+ tx.flush().unwrap();
+
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut dispached)
+ .unwrap();
+
+ assert!(dispached);
+ }
+
+ #[test]
+ fn register_deregister_unix() {
+ use std::os::unix::net::UnixStream;
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+
+ let handle = event_loop.handle();
+
+ let (mut tx, rx) = UnixStream::pair().unwrap();
+
+ let generic = Generic::new(rx, Interest::READ, Mode::Level);
+ let dispatcher = Dispatcher::new(generic, move |_, _, d| {
+ *d = true;
+ Ok(PostAction::Continue)
+ });
+
+ let mut dispached = false;
+
+ let generic_token = handle.register_dispatcher(dispatcher.clone()).unwrap();
+
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut dispached)
+ .unwrap();
+
+ assert!(!dispached);
+
+ // remove the source, and then write something
+
+ event_loop.handle().remove(generic_token);
+
+ let ret = tx.write(&[1, 2, 3, 4, 5, 6]).unwrap();
+ assert_eq!(ret, 6);
+ tx.flush().unwrap();
+
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut dispached)
+ .unwrap();
+
+ // the source has not been dispatched, as the source is no longer here
+ assert!(!dispached);
+
+ // insert it again
+ let generic = dispatcher.into_source_inner();
+ let _generic_token = handle
+ .insert_source(generic, move |readiness, file, d| {
+ assert!(readiness.readable);
+ // we have not registered for writability
+ assert!(!readiness.writable);
+ let mut buffer = vec![0; 10];
+ let ret = (&**file).read(&mut buffer).unwrap();
+ assert_eq!(ret, 6);
+ assert_eq!(&buffer[..6], &[1, 2, 3, 4, 5, 6]);
+
+ *d = true;
+ Ok(PostAction::Continue)
+ })
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(::std::time::Duration::ZERO), &mut dispached)
+ .unwrap();
+
+        // the event has now been properly dispatched
+ assert!(dispached);
+ }
+
+ // Duplicate insertion does not fail on all platforms, but does on Linux
+ #[cfg(target_os = "linux")]
+ #[test]
+ fn duplicate_insert() {
+ use std::os::unix::{
+ io::{AsFd, BorrowedFd},
+ net::UnixStream,
+ };
+ let event_loop = crate::EventLoop::<()>::try_new().unwrap();
+
+ let handle = event_loop.handle();
+
+ let (_, rx) = UnixStream::pair().unwrap();
+
+ // Rc only implements AsFd since 1.69...
+ struct RcFd<T> {
+ rc: std::rc::Rc<T>,
+ }
+
+ impl<T: AsFd> AsFd for RcFd<T> {
+ fn as_fd(&self) -> BorrowedFd<'_> {
+ self.rc.as_fd()
+ }
+ }
+
+ let rx = std::rc::Rc::new(rx);
+
+ let token = handle
+ .insert_source(
+ Generic::new(RcFd { rc: rx.clone() }, Interest::READ, Mode::Level),
+ |_, _, _| Ok(PostAction::Continue),
+ )
+ .unwrap();
+
+ // inserting the same FD a second time should fail
+ let ret = handle.insert_source(
+ Generic::new(RcFd { rc: rx.clone() }, Interest::READ, Mode::Level),
+ |_, _, _| Ok(PostAction::Continue),
+ );
+ assert!(ret.is_err());
+ std::mem::drop(ret);
+
+ // but the original token is still registered
+ handle.update(&token).unwrap();
+ }
+}
+
use std::{
+ cell::{Ref, RefCell, RefMut},
+ ops::{BitOr, BitOrAssign},
+ rc::Rc,
+};
+
+use log::trace;
+
+pub use crate::loop_logic::EventIterator;
+use crate::{sys::TokenFactory, Poll, Readiness, RegistrationToken, Token};
+
+pub mod channel;
+#[cfg(feature = "executor")]
+#[cfg_attr(docsrs, doc(cfg(feature = "executor")))]
+pub mod futures;
+pub mod generic;
+pub mod ping;
+#[cfg(all(target_os = "linux", feature = "signals"))]
+#[cfg_attr(docsrs, doc(cfg(target_os = "linux")))]
+pub mod signals;
+pub mod timer;
+pub mod transient;
+
+/// Possible actions that can be requested to the event loop by an
+/// event source once its events have been processed.
+///
+/// `PostAction` values can be combined with the `|` (bit-or) operator (or with
+/// `|=`) with the result that:
+/// - if both values are identical, the result is that value
+/// - if they are different, the result is [`Reregister`](PostAction::Reregister)
+///
+/// Bit-or-ing these results is useful for composed sources to combine the
+/// results of their child sources, but note that it only applies to the child
+/// sources. For example, if every child source returns `Continue`, the result
+/// will be `Continue`, but the parent source might still need to return
+/// `Reregister` or something else depending on any additional logic it uses.
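+///
+/// For example, combining the values returned by two child sources:
+///
+/// ```
+/// use calloop::PostAction;
+///
+/// assert_eq!(PostAction::Continue | PostAction::Continue, PostAction::Continue);
+/// assert_eq!(PostAction::Continue | PostAction::Remove, PostAction::Reregister);
+/// ```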
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+pub enum PostAction {
+ /// Continue listening for events on this source as before
+ Continue,
+ /// Trigger a re-registration of this source
+ Reregister,
+ /// Disable this source
+ ///
+ /// Has the same effect as [`LoopHandle::disable`](crate::LoopHandle#method.disable)
+ Disable,
+ /// Remove this source from the eventloop
+ ///
+ /// Has the same effect as [`LoopHandle::kill`](crate::LoopHandle#method.kill)
+ Remove,
+}
+
+/// Combines `PostAction` values returned from nested event sources.
+impl BitOr for PostAction {
+ type Output = Self;
+
+ fn bitor(self, rhs: Self) -> Self::Output {
+ if matches!(self, x if x == rhs) {
+ self
+ } else {
+ Self::Reregister
+ }
+ }
+}
+
+/// Combines `PostAction` values returned from nested event sources.
+impl BitOrAssign for PostAction {
+ fn bitor_assign(&mut self, rhs: Self) {
+ if *self != rhs {
+ *self = Self::Reregister;
+ }
+ }
+}
+
+/// Trait representing an event source
+///
+/// This is the trait you need to implement if you wish to create your own
+/// calloop-compatible event sources.
+///
+/// The 3 associated types define the type of closure the user will need to
+/// provide to process events for your event source.
+///
+/// The `process_events` method will be called when one of the FDs you registered
+/// is ready, with the associated readiness and token.
+///
+/// The `register`, `reregister` and `unregister` methods are plumbing to let your
+/// source register itself with the polling system. See their documentation for details.
+///
+/// In case your event source needs to do some special processing before or after a
+/// polling session occurs (to prepare the underlying source for polling, and cleanup
+/// after that), you can override [`NEEDS_EXTRA_LIFECYCLE_EVENTS`] to `true`.
+/// For all sources for which that constant is `true`, the methods [`before_sleep`] and
+/// [`before_handle_events`] will be called.
+/// [`before_sleep`] is called before the polling system performs a poll operation.
+/// [`before_handle_events`] is called before any process_events methods have been called.
+/// This means that during `process_events` you can assume that all cleanup has occurred on
+/// all sources.
+///
+/// [`NEEDS_EXTRA_LIFECYCLE_EVENTS`]: EventSource::NEEDS_EXTRA_LIFECYCLE_EVENTS
+/// [`before_sleep`]: EventSource::before_sleep
+/// [`before_handle_events`]: EventSource::before_handle_events
+pub trait EventSource {
+ /// The type of events generated by your source.
+ type Event;
+ /// Some metadata of your event source
+ ///
+ /// This is typically useful if your source contains some internal state that
+ /// the user may need to interact with when processing events. The user callback
+ /// will receive a `&mut Metadata` reference.
+ ///
+ /// Set to `()` if not needed.
+ type Metadata;
+ /// The return type of the user callback
+ ///
+ /// If the user needs to return some value back to your event source once its
+    /// processing is finished (to indicate success or failure for example), you can
+ /// specify it using this type.
+ ///
+ /// Set to `()` if not needed.
+ type Ret;
+ /// The error type returned from
+ /// [`process_events()`](Self::process_events()) (not the user callback!).
+ type Error: Into<Box<dyn std::error::Error + Sync + Send>>;
+
+ /// Process any relevant events
+ ///
+    /// This method will be called every time one of the FDs you registered becomes
+ /// ready, including the readiness details and the associated token.
+ ///
+ /// Your event source will then do some processing of the file descriptor(s) to generate
+ /// events, and call the provided `callback` for each one of them.
+ ///
+ /// You should ensure you drained the file descriptors of their events, especially if using
+ /// edge-triggered mode.
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret;
+
+ /// Register yourself to this poll instance
+ ///
+ /// You should register all your relevant file descriptors to the provided [`Poll`](crate::Poll)
+ /// using its [`Poll::register`](crate::Poll#method.register) method.
+ ///
+ /// If you need to register more than one file descriptor, you can change the
+ /// `sub_id` field of the [`Token`](crate::Token) to differentiate between them.
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()>;
+
+ /// Re-register your file descriptors
+ ///
+    /// You should update the registration of all your relevant file descriptors to
+ /// the provided [`Poll`](crate::Poll) using its [`Poll::reregister`](crate::Poll#method.reregister),
+ /// if necessary.
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()>;
+
+ /// Unregister your file descriptors
+ ///
+ /// You should unregister all your file descriptors from this [`Poll`](crate::Poll) using its
+ /// [`Poll::unregister`](crate::Poll#method.unregister) method.
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()>;
+
+ /// Whether this source needs to be sent the [`EventSource::before_sleep`]
+ /// and [`EventSource::before_handle_events`] notifications. These are opt-in because
+ /// they require more expensive checks, and almost all sources will not need these notifications
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = false;
+ /// Notification that a single `poll` is about to begin
+ ///
+ /// Use this to perform operations which must be done before polling,
+ /// but which may conflict with other event handlers. For example,
+ /// if polling requires a lock to be taken
+ ///
+    /// If this returns `Ok(Some(_))`, it will be treated as an event arriving during polling, and
+    /// your event handler will be called with the returned `Token` and `Readiness`.
+    /// Polling will still occur, but with a timeout of 0, so additional events
+    /// from this or other sources may also be handled in the same iteration.
+    /// The returned `Token` must belong to this source.
+ // If you need to return multiple synthetic events from this notification, please
+ // open an issue
+ fn before_sleep(&mut self) -> crate::Result<Option<(Readiness, Token)>> {
+ Ok(None)
+ }
+ /// Notification that polling is complete, and [`EventSource::process_events`] will
+ /// be called with the given events for this source. The iterator may be empty,
+ /// which indicates that no events were generated for this source
+ ///
+ /// Please note, the iterator excludes any synthetic events returned from
+ /// [`EventSource::before_sleep`]
+ ///
+ /// Use this to perform a cleanup before event handlers with arbitrary
+ /// code may run. This could be used to drop a lock obtained in
+ /// [`EventSource::before_sleep`]
+ #[allow(unused_variables)]
+ fn before_handle_events(&mut self, events: EventIterator<'_>) {}
+}
+
+/// Blanket implementation for boxed event sources. [`EventSource`] is not an
+/// object safe trait, so this does not include trait objects.
+impl<T: EventSource> EventSource for Box<T> {
+ type Event = T::Event;
+ type Metadata = T::Metadata;
+ type Ret = T::Ret;
+ type Error = T::Error;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ T::process_events(&mut **self, readiness, token, callback)
+ }
+
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()> {
+ T::register(&mut **self, poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ T::reregister(&mut **self, poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ T::unregister(&mut **self, poll)
+ }
+
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = T::NEEDS_EXTRA_LIFECYCLE_EVENTS;
+
+ fn before_sleep(&mut self) -> crate::Result<Option<(Readiness, Token)>> {
+ T::before_sleep(&mut **self)
+ }
+
+ fn before_handle_events(&mut self, events: EventIterator) {
+ T::before_handle_events(&mut **self, events)
+ }
+}
+
+/// Blanket implementation for exclusive references to event sources.
+/// [`EventSource`] is not an object safe trait, so this does not include trait
+/// objects.
+impl<T: EventSource> EventSource for &mut T {
+ type Event = T::Event;
+ type Metadata = T::Metadata;
+ type Ret = T::Ret;
+ type Error = T::Error;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ T::process_events(&mut **self, readiness, token, callback)
+ }
+
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()> {
+ T::register(&mut **self, poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ T::reregister(&mut **self, poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ T::unregister(&mut **self, poll)
+ }
+
+ const NEEDS_EXTRA_LIFECYCLE_EVENTS: bool = T::NEEDS_EXTRA_LIFECYCLE_EVENTS;
+
+ fn before_sleep(&mut self) -> crate::Result<Option<(Readiness, Token)>> {
+ T::before_sleep(&mut **self)
+ }
+
+ fn before_handle_events(&mut self, events: EventIterator) {
+ T::before_handle_events(&mut **self, events)
+ }
+}
+
+pub(crate) struct DispatcherInner<S, F> {
+ source: S,
+ callback: F,
+ needs_additional_lifecycle_events: bool,
+}
+
+impl<Data, S, F> EventDispatcher<Data> for RefCell<DispatcherInner<S, F>>
+where
+ S: EventSource,
+ F: FnMut(S::Event, &mut S::Metadata, &mut Data) -> S::Ret,
+{
+ fn process_events(
+ &self,
+ readiness: Readiness,
+ token: Token,
+ data: &mut Data,
+ ) -> crate::Result<PostAction> {
+ let mut disp = self.borrow_mut();
+ let DispatcherInner {
+ ref mut source,
+ ref mut callback,
+ ..
+ } = *disp;
+ trace!(
+ "[calloop] Processing events for source type {}",
+ std::any::type_name::<S>()
+ );
+ source
+ .process_events(readiness, token, |event, meta| callback(event, meta, data))
+ .map_err(|e| crate::Error::OtherError(e.into()))
+ }
+
+ fn register(
+ &self,
+ poll: &mut Poll,
+ additional_lifecycle_register: &mut AdditionalLifecycleEventsSet,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ let mut this = self.borrow_mut();
+
+ if this.needs_additional_lifecycle_events {
+ additional_lifecycle_register.register(token_factory.registration_token());
+ }
+ this.source.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &self,
+ poll: &mut Poll,
+ additional_lifecycle_register: &mut AdditionalLifecycleEventsSet,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<bool> {
+ if let Ok(mut me) = self.try_borrow_mut() {
+ me.source.reregister(poll, token_factory)?;
+ if me.needs_additional_lifecycle_events {
+ additional_lifecycle_register.register(token_factory.registration_token());
+ }
+ Ok(true)
+ } else {
+ Ok(false)
+ }
+ }
+
+ fn unregister(
+ &self,
+ poll: &mut Poll,
+ additional_lifecycle_register: &mut AdditionalLifecycleEventsSet,
+ registration_token: RegistrationToken,
+ ) -> crate::Result<bool> {
+ if let Ok(mut me) = self.try_borrow_mut() {
+ me.source.unregister(poll)?;
+ if me.needs_additional_lifecycle_events {
+ additional_lifecycle_register.unregister(registration_token);
+ }
+ Ok(true)
+ } else {
+ Ok(false)
+ }
+ }
+
+ fn before_sleep(&self) -> crate::Result<Option<(Readiness, Token)>> {
+ let mut disp = self.borrow_mut();
+ let DispatcherInner { ref mut source, .. } = *disp;
+ source.before_sleep()
+ }
+
+ fn before_handle_events(&self, events: EventIterator<'_>) {
+ let mut disp = self.borrow_mut();
+ let DispatcherInner { ref mut source, .. } = *disp;
+ source.before_handle_events(events);
+ }
+}
+
+pub(crate) trait EventDispatcher<Data> {
+ fn process_events(
+ &self,
+ readiness: Readiness,
+ token: Token,
+ data: &mut Data,
+ ) -> crate::Result<PostAction>;
+
+ fn register(
+ &self,
+ poll: &mut Poll,
+ additional_lifecycle_register: &mut AdditionalLifecycleEventsSet,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()>;
+
+ fn reregister(
+ &self,
+ poll: &mut Poll,
+ additional_lifecycle_register: &mut AdditionalLifecycleEventsSet,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<bool>;
+
+ fn unregister(
+ &self,
+ poll: &mut Poll,
+ additional_lifecycle_register: &mut AdditionalLifecycleEventsSet,
+ registration_token: RegistrationToken,
+ ) -> crate::Result<bool>;
+
+ fn before_sleep(&self) -> crate::Result<Option<(Readiness, Token)>>;
+ fn before_handle_events(&self, events: EventIterator<'_>);
+}
+
+#[derive(Default)]
+/// The list of events
+pub(crate) struct AdditionalLifecycleEventsSet {
+ /// The list of sources
+ pub(crate) values: Vec<RegistrationToken>,
+}
+
+impl AdditionalLifecycleEventsSet {
+ fn register(&mut self, token: RegistrationToken) {
+ self.values.push(token)
+ }
+
+ fn unregister(&mut self, token: RegistrationToken) {
+ self.values.retain(|it| it != &token)
+ }
+}
+
+// An internal trait to erase the `F` type parameter of `DispatcherInner`
+trait ErasedDispatcher<'a, S, Data> {
+ fn as_source_ref(&self) -> Ref<S>;
+ fn as_source_mut(&self) -> RefMut<S>;
+ fn into_source_inner(self: Rc<Self>) -> S;
+ fn into_event_dispatcher(self: Rc<Self>) -> Rc<dyn EventDispatcher<Data> + 'a>;
+}
+
+impl<'a, S, Data, F> ErasedDispatcher<'a, S, Data> for RefCell<DispatcherInner<S, F>>
+where
+ S: EventSource + 'a,
+ F: FnMut(S::Event, &mut S::Metadata, &mut Data) -> S::Ret + 'a,
+{
+ fn as_source_ref(&self) -> Ref<S> {
+ Ref::map(self.borrow(), |inner| &inner.source)
+ }
+
+ fn as_source_mut(&self) -> RefMut<S> {
+ RefMut::map(self.borrow_mut(), |inner| &mut inner.source)
+ }
+
+ fn into_source_inner(self: Rc<Self>) -> S {
+ if let Ok(ref_cell) = Rc::try_unwrap(self) {
+ ref_cell.into_inner().source
+ } else {
+ panic!("Dispatcher is still registered");
+ }
+ }
+
+ fn into_event_dispatcher(self: Rc<Self>) -> Rc<dyn EventDispatcher<Data> + 'a>
+ where
+ S: 'a,
+ {
+ self as Rc<dyn EventDispatcher<Data> + 'a>
+ }
+}
+
+/// An event source with its callback.
+///
+/// The `Dispatcher` can be registered in an event loop.
+/// Use the `as_source_{ref,mut}` functions to interact with the event source.
+/// Use `into_source_inner` to get the event source back.
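+///
+/// For illustration, a minimal sketch using a ping source (error handling elided
+/// with `unwrap()` for brevity):
+///
+/// ```
+/// use calloop::{ping::make_ping, Dispatcher, EventLoop};
+///
+/// let (ping, ping_source) = make_ping().unwrap();
+/// let dispatcher = Dispatcher::new(ping_source, |_, _, dispatched: &mut bool| {
+///     *dispatched = true;
+/// });
+///
+/// let mut event_loop = EventLoop::<bool>::try_new().unwrap();
+/// event_loop
+///     .handle()
+///     .register_dispatcher(dispatcher.clone())
+///     .unwrap();
+///
+/// ping.ping();
+///
+/// let mut dispatched = false;
+/// event_loop
+///     .dispatch(Some(std::time::Duration::ZERO), &mut dispatched)
+///     .unwrap();
+/// assert!(dispatched);
+/// ```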
+pub struct Dispatcher<'a, S, Data>(Rc<dyn ErasedDispatcher<'a, S, Data> + 'a>);
+
+impl<'a, S, Data> std::fmt::Debug for Dispatcher<'a, S, Data> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str("Dispatcher { ... }")
+ }
+}
+
+impl<'a, S, Data> Dispatcher<'a, S, Data>
+where
+ S: EventSource + 'a,
+{
+ /// Builds a dispatcher.
+ ///
+    /// The resulting `Dispatcher` can then be registered into an event loop, and
+    /// used to access the event source while it is registered.
+ pub fn new<F>(source: S, callback: F) -> Self
+ where
+ F: FnMut(S::Event, &mut S::Metadata, &mut Data) -> S::Ret + 'a,
+ {
+ Dispatcher(Rc::new(RefCell::new(DispatcherInner {
+ source,
+ callback,
+ needs_additional_lifecycle_events: S::NEEDS_EXTRA_LIFECYCLE_EVENTS,
+ })))
+ }
+
+ /// Returns an immutable reference to the event source.
+ ///
+ /// # Panics
+ ///
+ /// Has the same semantics as `RefCell::borrow()`.
+ ///
+    /// Since the dispatcher is mutably borrowed while its events are dispatched,
+ /// this method will panic if invoked from within the associated dispatching closure.
+ pub fn as_source_ref(&self) -> Ref<S> {
+ self.0.as_source_ref()
+ }
+
+ /// Returns a mutable reference to the event source.
+ ///
+ /// # Panics
+ ///
+ /// Has the same semantics as `RefCell::borrow_mut()`.
+ ///
+    /// Since the dispatcher is mutably borrowed while its events are dispatched,
+ /// this method will panic if invoked from within the associated dispatching closure.
+ pub fn as_source_mut(&self) -> RefMut<S> {
+ self.0.as_source_mut()
+ }
+
+ /// Consumes the Dispatcher and returns the inner event source.
+ ///
+ /// # Panics
+ ///
+ /// Panics if the `Dispatcher` is still registered.
+ pub fn into_source_inner(self) -> S {
+ self.0.into_source_inner()
+ }
+
+ pub(crate) fn clone_as_event_dispatcher(&self) -> Rc<dyn EventDispatcher<Data> + 'a> {
+ Rc::clone(&self.0).into_event_dispatcher()
+ }
+}
+
+impl<'a, S, Data> Clone for Dispatcher<'a, S, Data> {
+ fn clone(&self) -> Dispatcher<'a, S, Data> {
+ Dispatcher(Rc::clone(&self.0))
+ }
+}
+
+/// An idle callback that was inserted in this loop
+///
+/// This handle allows you to cancel the callback. Dropping
+/// it will *not* cancel it.
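+///
+/// A short sketch of cancelling a queued idle callback (the closure body is an
+/// illustrative assumption):
+///
+/// ```no_run
+/// use calloop::EventLoop;
+///
+/// let event_loop: EventLoop<()> = EventLoop::try_new().unwrap();
+///
+/// // `insert_idle` hands back an `Idle` handle for the queued callback.
+/// let idle = event_loop.handle().insert_idle(|_data| {
+///     println!("the event loop is idle");
+/// });
+///
+/// // Cancel it before it gets a chance to run.
+/// idle.cancel();
+/// ```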
+pub struct Idle<'i> {
+ pub(crate) callback: Rc<RefCell<dyn CancellableIdle + 'i>>,
+}
+
+impl<'i> std::fmt::Debug for Idle<'i> {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str("Idle { ... }")
+ }
+}
+
+impl<'i> Idle<'i> {
+ /// Cancel the idle callback if it was not already run
+ pub fn cancel(self) {
+ self.callback.borrow_mut().cancel();
+ }
+}
+
+pub(crate) trait CancellableIdle {
+ fn cancel(&mut self);
+}
+
+impl<F> CancellableIdle for Option<F> {
+ fn cancel(&mut self) {
+ self.take();
+ }
+}
+
+pub(crate) trait IdleDispatcher<Data> {
+ fn dispatch(&mut self, data: &mut Data);
+}
+
+impl<Data, F> IdleDispatcher<Data> for Option<F>
+where
+ F: FnMut(&mut Data),
+{
+ fn dispatch(&mut self, data: &mut Data) {
+        if let Some(callback) = self.as_mut() {
+            callback(data);
+ }
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use std::time::Duration;
+
+ use crate::{ping::make_ping, EventLoop};
+
+ // Test event source boxing.
+ #[test]
+ fn test_boxed_source() {
+ let mut fired = false;
+
+ let (pinger, source) = make_ping().unwrap();
+ let boxed = Box::new(source);
+
+ let mut event_loop = EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+
+ let token = handle
+ .insert_source(boxed, |_, _, fired| *fired = true)
+ .unwrap();
+
+ pinger.ping();
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert!(fired);
+ fired = false;
+
+ handle.update(&token).unwrap();
+
+ pinger.ping();
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert!(fired);
+ fired = false;
+
+ handle.remove(token);
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert!(!fired);
+ }
+
+ // Test event source trait methods via mut ref.
+ #[test]
+ fn test_mut_ref_source() {
+ let mut fired = false;
+
+ let (pinger, mut source) = make_ping().unwrap();
+ let source_ref = &mut source;
+
+ let mut event_loop = EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+
+ let token = handle
+ .insert_source(source_ref, |_, _, fired| *fired = true)
+ .unwrap();
+
+ pinger.ping();
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert!(fired);
+ fired = false;
+
+ handle.update(&token).unwrap();
+
+ pinger.ping();
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert!(fired);
+ fired = false;
+
+ handle.remove(token);
+
+ event_loop
+ .dispatch(Duration::new(0, 0), &mut fired)
+ .unwrap();
+
+ assert!(!fired);
+ }
+
+ // Test PostAction combinations.
+ #[test]
+ fn post_action_combine() {
+ use super::PostAction::*;
+ assert_eq!(Continue | Continue, Continue);
+ assert_eq!(Continue | Reregister, Reregister);
+ assert_eq!(Continue | Disable, Reregister);
+ assert_eq!(Continue | Remove, Reregister);
+
+ assert_eq!(Reregister | Continue, Reregister);
+ assert_eq!(Reregister | Reregister, Reregister);
+ assert_eq!(Reregister | Disable, Reregister);
+ assert_eq!(Reregister | Remove, Reregister);
+
+ assert_eq!(Disable | Continue, Reregister);
+ assert_eq!(Disable | Reregister, Reregister);
+ assert_eq!(Disable | Disable, Disable);
+ assert_eq!(Disable | Remove, Reregister);
+
+ assert_eq!(Remove | Continue, Reregister);
+ assert_eq!(Remove | Reregister, Reregister);
+ assert_eq!(Remove | Disable, Reregister);
+ assert_eq!(Remove | Remove, Remove);
+ }
+
+ // Test PostAction self-assignment.
+ #[test]
+ fn post_action_combine_assign() {
+ use super::PostAction::*;
+
+ let mut action = Continue;
+ action |= Continue;
+ assert_eq!(action, Continue);
+
+ let mut action = Continue;
+ action |= Reregister;
+ assert_eq!(action, Reregister);
+
+ let mut action = Continue;
+ action |= Disable;
+ assert_eq!(action, Reregister);
+
+ let mut action = Continue;
+ action |= Remove;
+ assert_eq!(action, Reregister);
+
+ let mut action = Reregister;
+ action |= Continue;
+ assert_eq!(action, Reregister);
+
+ let mut action = Reregister;
+ action |= Reregister;
+ assert_eq!(action, Reregister);
+
+ let mut action = Reregister;
+ action |= Disable;
+ assert_eq!(action, Reregister);
+
+ let mut action = Reregister;
+ action |= Remove;
+ assert_eq!(action, Reregister);
+
+ let mut action = Disable;
+ action |= Continue;
+ assert_eq!(action, Reregister);
+
+ let mut action = Disable;
+ action |= Reregister;
+ assert_eq!(action, Reregister);
+
+ let mut action = Disable;
+ action |= Disable;
+ assert_eq!(action, Disable);
+
+ let mut action = Disable;
+ action |= Remove;
+ assert_eq!(action, Reregister);
+
+ let mut action = Remove;
+ action |= Continue;
+ assert_eq!(action, Reregister);
+
+ let mut action = Remove;
+ action |= Reregister;
+ assert_eq!(action, Reregister);
+
+ let mut action = Remove;
+ action |= Disable;
+ assert_eq!(action, Reregister);
+
+ let mut action = Remove;
+ action |= Remove;
+ assert_eq!(action, Remove);
+ }
+}
+
+//! Ping to the event loop
+//!
+//! This is an event source that just produces `()` events whenever the associated
+//! [`Ping::ping`](Ping#method.ping) method is called. If the event source is pinged multiple times
+//! between two successive dispatches, it'll only generate one event.
+//!
+//! This event source is a simple way of waking up the event loop from another part of your program
+//! (and is what backs the [`LoopSignal`](crate::LoopSignal)). It can also be used as a building
+//! block to construct event sources whose source of events is not a file descriptor, but rather a
+//! userspace source (like another thread).
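+//!
+//! A minimal usage sketch (the `bool` loop data is an illustrative assumption):
+//!
+//! ```no_run
+//! use calloop::{ping::make_ping, EventLoop};
+//!
+//! let mut event_loop: EventLoop<bool> = EventLoop::try_new().unwrap();
+//!
+//! let (ping, ping_source) = make_ping().unwrap();
+//! event_loop
+//!     .handle()
+//!     .insert_source(ping_source, |(), &mut (), woken| *woken = true)
+//!     .unwrap();
+//!
+//! // Possibly from another thread: wake the loop up.
+//! ping.ping();
+//!
+//! let mut woken = false;
+//! event_loop
+//!     .dispatch(std::time::Duration::ZERO, &mut woken)
+//!     .unwrap();
+//! assert!(woken);
+//! ```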
+
+// The ping source has platform-dependent implementations provided by modules
+// under this one. These modules should expose:
+// - a make_ping() function
+// - a Ping type
+// - a PingSource type
+//
+// See eg. the pipe implementation for these items' specific requirements.
+
+#[cfg(target_os = "linux")]
+mod eventfd;
+#[cfg(target_os = "linux")]
+use eventfd as platform;
+
+#[cfg(windows)]
+mod iocp;
+#[cfg(windows)]
+use iocp as platform;
+
+#[cfg(not(any(target_os = "linux", windows)))]
+mod pipe;
+#[cfg(not(any(target_os = "linux", windows)))]
+use pipe as platform;
+
+/// Create a new ping event source
+///
+/// You are given a [`Ping`] instance, which can be cloned and used to ping the
+/// event loop, and a [`PingSource`], which you can insert in your event loop to
+/// receive the pings.
+pub fn make_ping() -> std::io::Result<(Ping, PingSource)> {
+ platform::make_ping()
+}
+
+/// The ping event source
+///
+/// You can insert it in your event loop to receive pings.
+///
+/// If you use it directly, it will automatically remove itself from the event loop
+/// once all [`Ping`] instances are dropped.
+pub type PingSource = platform::PingSource;
+
+/// The Ping handle
+///
+/// This handle can be cloned and sent across threads. It can be used to
+/// send pings to the `PingSource`.
+pub type Ping = platform::Ping;
+
+/// An error arising from processing events for a ping.
+#[derive(thiserror::Error, Debug)]
+#[error(transparent)]
+pub struct PingError(Box<dyn std::error::Error + Sync + Send>);
+
+#[cfg(test)]
+mod tests {
+ use crate::transient::TransientSource;
+ use std::time::Duration;
+
+ use super::*;
+
+ #[test]
+ fn ping() {
+ let mut event_loop = crate::EventLoop::<bool>::try_new().unwrap();
+
+ let (ping, source) = make_ping().unwrap();
+
+ event_loop
+ .handle()
+ .insert_source(source, |(), &mut (), dispatched| *dispatched = true)
+ .unwrap();
+
+ ping.ping();
+
+ let mut dispatched = false;
+ event_loop
+ .dispatch(std::time::Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+
+        // Ping has been drained and no longer generates events
+ let mut dispatched = false;
+ event_loop
+ .dispatch(std::time::Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+ }
+
+ #[test]
+ fn ping_closed() {
+ let mut event_loop = crate::EventLoop::<bool>::try_new().unwrap();
+
+ let (_, source) = make_ping().unwrap();
+ event_loop
+ .handle()
+ .insert_source(source, |(), &mut (), dispatched| *dispatched = true)
+ .unwrap();
+
+ let mut dispatched = false;
+
+ // If the sender is closed from the start, the ping should first trigger
+ // once, disabling itself but not invoking the callback
+ event_loop
+ .dispatch(std::time::Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+
+ // Then it should not trigger any more, so this dispatch should wait the whole 100ms
+ let now = std::time::Instant::now();
+ event_loop
+ .dispatch(std::time::Duration::from_millis(100), &mut dispatched)
+ .unwrap();
+ assert!(now.elapsed() >= std::time::Duration::from_millis(100));
+ }
+
+ #[test]
+ fn ping_removed() {
+ // This keeps track of whether the event fired.
+ let mut dispatched = false;
+
+ let mut event_loop = crate::EventLoop::<bool>::try_new().unwrap();
+
+ let (sender, source) = make_ping().unwrap();
+ let wrapper = TransientSource::from(source);
+
+ // Check that the source starts off in the wrapper.
+ assert!(!wrapper.is_none());
+
+ // Put the source in the loop.
+
+ let dispatcher =
+ crate::Dispatcher::new(wrapper, |(), &mut (), dispatched| *dispatched = true);
+
+ let token = event_loop
+ .handle()
+ .register_dispatcher(dispatcher.clone())
+ .unwrap();
+
+ // Drop the sender and check that it's actually removed.
+ drop(sender);
+
+ // There should be no event, but the loop still needs to wake up to
+ // process the close event (just like in the ping_closed() test).
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+
+ // Pull the source wrapper out.
+
+ event_loop.handle().remove(token);
+ let wrapper = dispatcher.into_source_inner();
+
+ // Check that the inner source is now gone.
+ assert!(wrapper.is_none());
+ }
+
+ #[test]
+ fn ping_fired_and_removed() {
+ // This is like ping_removed() with the single difference that we fire a
+ // ping and drop it between two successive dispatches of the loop.
+
+ // This keeps track of whether the event fired.
+ let mut dispatched = false;
+
+ let mut event_loop = crate::EventLoop::<bool>::try_new().unwrap();
+
+ let (sender, source) = make_ping().unwrap();
+ let wrapper = TransientSource::from(source);
+
+ // Check that the source starts off in the wrapper.
+ assert!(!wrapper.is_none());
+
+ // Put the source in the loop.
+
+ let dispatcher =
+ crate::Dispatcher::new(wrapper, |(), &mut (), dispatched| *dispatched = true);
+
+ let token = event_loop
+ .handle()
+ .register_dispatcher(dispatcher.clone())
+ .unwrap();
+
+ // Send a ping AND drop the sender and check that it's actually removed.
+ sender.ping();
+ drop(sender);
+
+ // There should be an event, but the source should be removed from the
+ // loop immediately after.
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+
+ // Pull the source wrapper out.
+
+ event_loop.handle().remove(token);
+ let wrapper = dispatcher.into_source_inner();
+
+ // Check that the inner source is now gone.
+ assert!(wrapper.is_none());
+ }
+
+ #[test]
+ fn ping_multiple_senders() {
+ // This is like ping_removed() but for testing the behaviour of multiple
+ // senders.
+
+ // This keeps track of whether the event fired.
+ let mut dispatched = false;
+
+ let mut event_loop = crate::EventLoop::<bool>::try_new().unwrap();
+
+ let (sender0, source) = make_ping().unwrap();
+ let wrapper = TransientSource::from(source);
+ let sender1 = sender0.clone();
+ let sender2 = sender1.clone();
+
+ // Check that the source starts off in the wrapper.
+ assert!(!wrapper.is_none());
+
+ // Put the source in the loop.
+
+ let dispatcher =
+ crate::Dispatcher::new(wrapper, |(), &mut (), dispatched| *dispatched = true);
+
+ let token = event_loop
+ .handle()
+ .register_dispatcher(dispatcher.clone())
+ .unwrap();
+
+ // Send a ping AND drop the sender and check that it's actually removed.
+ sender0.ping();
+ drop(sender0);
+
+ // There should be an event, and the source should remain in the loop.
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+
+ // Now test that the clones still work. Drop after the dispatch loop
+ // instead of before, this time.
+ dispatched = false;
+
+ sender1.ping();
+
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(dispatched);
+
+ // Finally, drop all of them without sending anything.
+
+ dispatched = false;
+
+ drop(sender1);
+ drop(sender2);
+
+ event_loop
+ .dispatch(Duration::ZERO, &mut dispatched)
+ .unwrap();
+ assert!(!dispatched);
+
+ // Pull the source wrapper out.
+
+ event_loop.handle().remove(token);
+ let wrapper = dispatcher.into_source_inner();
+
+ // Check that the inner source is now gone.
+ assert!(wrapper.is_none());
+ }
+}
+
+//! Eventfd based implementation of the ping event source.
+//!
+//! # Implementation notes
+//!
+//! The eventfd is a much lighter signalling mechanism provided by the Linux
+//! kernel. Rather than write an arbitrary sequence of bytes, it only has a
+//! 64-bit counter.
+//!
+//! To avoid closing the eventfd early, we reference-count it with an `Arc` in
+//! `make_ping()`, sharing it between the senders and the event source. When all the
+//! senders are dropped, the wrapper `FlagOnDrop` handles signalling this to the
+//! event source by writing a "close" value to the eventfd. If such a write ever
+//! fails, the senders simply log a warning message.
+//!
+//! To differentiate between regular ping events and close ping events, we add 2
+//! to the counter for regular events and 1 for close events. In the source we
+//! can then check the LSB and if it's set, we know it was a close event. This
+//! only works if a close event never fires more than once.
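+//!
+//! As a small illustration of that decoding (the concrete counter value below is just an
+//! example):
+//!
+//! ```
+//! const INCREMENT_PING: u64 = 0x2;
+//! const INCREMENT_CLOSE: u64 = 0x1;
+//!
+//! // Two pings followed by the close leave the counter at 2 + 2 + 1 = 5.
+//! let counter = 2 * INCREMENT_PING + INCREMENT_CLOSE;
+//!
+//! // The LSB tells us the sending side was closed...
+//! assert!((counter & INCREMENT_CLOSE) != 0);
+//! // ...and any of the higher bits tells us at least one ping was sent.
+//! assert!((counter & !INCREMENT_CLOSE) != 0);
+//! ```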
+
+use std::os::unix::io::{AsFd, BorrowedFd, OwnedFd};
+use std::sync::Arc;
+
+use rustix::event::{eventfd, EventfdFlags};
+use rustix::io::{read, write, Errno};
+
+use super::PingError;
+use crate::{
+ generic::Generic, EventSource, Interest, Mode, Poll, PostAction, Readiness, Token, TokenFactory,
+};
+
+// These are not bitfields! They are increments to add to the eventfd counter.
+// Since the fd can only be closed once, we can effectively use the
+// INCREMENT_CLOSE value as a bitmask when checking.
+const INCREMENT_PING: u64 = 0x2;
+const INCREMENT_CLOSE: u64 = 0x1;
+
+#[inline]
+pub fn make_ping() -> std::io::Result<(Ping, PingSource)> {
+ let read = eventfd(0, EventfdFlags::CLOEXEC | EventfdFlags::NONBLOCK)?;
+
+ // We only have one fd for the eventfd. If the sending end closes it when
+ // all copies are dropped, the receiving end will be closed as well. We need
+ // to make sure the fd is not closed until all holders of it have dropped
+ // it.
+
+ let fd = Arc::new(read);
+
+ let ping = Ping {
+ event: Arc::new(FlagOnDrop(Arc::clone(&fd))),
+ };
+
+ let source = PingSource {
+ event: Generic::new(ArcAsFd(fd), Interest::READ, Mode::Level),
+ };
+
+ Ok((ping, source))
+}
+
+// Helper functions for the event source IO.
+
+#[inline]
+fn send_ping(fd: BorrowedFd<'_>, count: u64) -> std::io::Result<()> {
+ assert!(count > 0);
+ match write(fd, &count.to_ne_bytes()) {
+ // The write succeeded, the ping will wake up the loop.
+ Ok(_) => Ok(()),
+
+ // The counter hit its cap, which means previous calls to write() will
+ // wake up the loop.
+ Err(Errno::AGAIN) => Ok(()),
+
+ // Anything else is a real error.
+ Err(e) => Err(e.into()),
+ }
+}
+
+#[inline]
+fn drain_ping(fd: BorrowedFd<'_>) -> std::io::Result<u64> {
+ // The eventfd counter is effectively a u64.
+ const NBYTES: usize = 8;
+ let mut buf = [0u8; NBYTES];
+
+ match read(fd, &mut buf) {
+ // Reading from an eventfd should only ever produce 8 bytes. No looping
+ // is required.
+ Ok(NBYTES) => Ok(u64::from_ne_bytes(buf)),
+
+ Ok(_) => unreachable!(),
+
+ // Any other error can be propagated.
+ Err(e) => Err(e.into()),
+ }
+}
+
+// Rust 1.64.0 adds an `AsFd` implementation for `Arc`, so this wrapper won't be
+// needed once the crate's minimum supported Rust version reaches that release.
+#[derive(Debug)]
+struct ArcAsFd(Arc<OwnedFd>);
+
+impl AsFd for ArcAsFd {
+ fn as_fd(&self) -> BorrowedFd {
+ self.0.as_fd()
+ }
+}
+
+// The event source is simply a generic source with one of the eventfds.
+#[derive(Debug)]
+pub struct PingSource {
+ event: Generic<ArcAsFd>,
+}
+
+impl EventSource for PingSource {
+ type Event = ();
+ type Metadata = ();
+ type Ret = ();
+ type Error = PingError;
+
+ fn process_events<C>(
+ &mut self,
+ readiness: Readiness,
+ token: Token,
+ mut callback: C,
+ ) -> Result<PostAction, Self::Error>
+ where
+ C: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ self.event
+ .process_events(readiness, token, |_, fd| {
+ let counter = drain_ping(fd.as_fd())?;
+
+ // If the LSB is set, it means we were closed. If anything else
+ // is also set, it means we were pinged. The two are not
+ // mutually exclusive.
+ let close = (counter & INCREMENT_CLOSE) != 0;
+ let ping = (counter & (u64::MAX - 1)) != 0;
+
+ if ping {
+ callback((), &mut ());
+ }
+
+ if close {
+ Ok(PostAction::Remove)
+ } else {
+ Ok(PostAction::Continue)
+ }
+ })
+ .map_err(|e| PingError(e.into()))
+ }
+
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()> {
+ self.event.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.event.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ self.event.unregister(poll)
+ }
+}
+
+#[derive(Clone, Debug)]
+pub struct Ping {
+ // This is an Arc because it's potentially shared with clones. The last one
+ // dropped needs to signal to the event source via the eventfd.
+ event: Arc<FlagOnDrop>,
+}
+
+impl Ping {
+ /// Send a ping to the `PingSource`.
+ pub fn ping(&self) {
+ if let Err(e) = send_ping(self.event.0.as_fd(), INCREMENT_PING) {
+ log::warn!("[calloop] Failed to write a ping: {:?}", e);
+ }
+ }
+}
+
+/// This manages signalling to the PingSource when it's dropped. There should
+/// only ever be one of these per PingSource.
+#[derive(Debug)]
+struct FlagOnDrop(Arc<OwnedFd>);
+
+impl Drop for FlagOnDrop {
+ fn drop(&mut self) {
+ if let Err(e) = send_ping(self.0.as_fd(), INCREMENT_CLOSE) {
+ log::warn!("[calloop] Failed to send close ping: {:?}", e);
+ }
+ }
+}
+
+//! Timer event source
+//!
+//! The [`Timer`] is an event source that will fire its event after a certain amount of time
+//! specified at creation. Its timing is tracked directly by the event loop core logic, and it does
+//! not consume any system resource.
+//!
+//! As of calloop v0.11.0, the event loop always uses high-precision timers. However, the timer
+//! precision varies between operating systems; for instance, the scheduler granularity on Windows
+//! is about 16 milliseconds. If you need to rely on good precision timers in general, you may need
+//! to enable realtime features of your OS to ensure your thread is quickly woken up by the system
+//! scheduler.
+//!
+//! The provided event is an [`Instant`] representing the deadline for which this timer has fired
+//! (which can be earlier than the current time depending on the event loop congestion).
+//!
+//! The callback associated with this event source is expected to return a [`TimeoutAction`], which
+//! can be used to implement self-repeating timers by telling calloop to reprogram the same timer
+//! for a later timeout after it has fired.
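+//!
+//! A short sketch of a self-repeating timer (the one-second interval and the `u32` loop
+//! data are illustrative assumptions):
+//!
+//! ```no_run
+//! use calloop::{timer::{TimeoutAction, Timer}, EventLoop};
+//! use std::time::Duration;
+//!
+//! let mut event_loop: EventLoop<u32> = EventLoop::try_new().unwrap();
+//!
+//! event_loop
+//!     .handle()
+//!     .insert_source(
+//!         Timer::from_duration(Duration::from_secs(1)),
+//!         |_deadline, &mut (), ticks| {
+//!             *ticks += 1;
+//!             // Re-arm the same timer one second after it fired.
+//!             TimeoutAction::ToDuration(Duration::from_secs(1))
+//!         },
+//!     )
+//!     .unwrap();
+//! ```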
+
+/*
+ * This module provides two main types:
+ *
+ * - `Timer` is the user-facing type that represents a timer event source
+ * - `TimerWheel` is an internal data structure for tracking registered timeouts, it is used by
+ * the polling logic in sys/mod.rs
+ */
+
+use std::{
+ cell::RefCell,
+ collections::BinaryHeap,
+ rc::Rc,
+ task::Waker,
+ time::{Duration, Instant},
+};
+
+use crate::{EventSource, LoopHandle, Poll, PostAction, Readiness, Token, TokenFactory};
+
+#[derive(Debug)]
+struct Registration {
+ token: Token,
+ wheel: Rc<RefCell<TimerWheel>>,
+ counter: u32,
+}
+
+/// A timer event source
+///
+/// When registered to the event loop, it will trigger an event once its deadline is reached.
+/// If the deadline is in the past relative to the moment of its insertion in the event loop,
+/// the `Timer` will trigger an event as soon as the event loop is dispatched.
+#[derive(Debug)]
+pub struct Timer {
+ registration: Option<Registration>,
+ deadline: Option<Instant>,
+}
+
+impl Timer {
+ /// Create a timer that will fire immediately when inserted in the event loop
+ pub fn immediate() -> Timer {
+ Self::from_deadline(Instant::now())
+ }
+
+ /// Create a timer that will fire after a given duration from now
+ pub fn from_duration(duration: Duration) -> Timer {
+ Self::from_deadline_inner(Instant::now().checked_add(duration))
+ }
+
+ /// Create a timer that will fire at a given instant
+ pub fn from_deadline(deadline: Instant) -> Timer {
+ Self::from_deadline_inner(Some(deadline))
+ }
+
+ fn from_deadline_inner(deadline: Option<Instant>) -> Timer {
+ Timer {
+ registration: None,
+ deadline,
+ }
+ }
+
+ /// Changes the deadline of this timer to an [`Instant`]
+ ///
+ /// If the `Timer` is currently registered in the event loop, it needs to be
+ /// re-registered for this change to take effect.
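+    ///
+    /// A sketch of doing this through a [`Dispatcher`](crate::Dispatcher), which keeps a
+    /// handle on the source after insertion (the durations are illustrative assumptions):
+    ///
+    /// ```no_run
+    /// use calloop::{timer::{TimeoutAction, Timer}, Dispatcher, EventLoop};
+    /// use std::time::{Duration, Instant};
+    ///
+    /// let mut event_loop: EventLoop<()> = EventLoop::try_new().unwrap();
+    /// let dispatcher = Dispatcher::new(
+    ///     Timer::from_duration(Duration::from_secs(5)),
+    ///     |_deadline, &mut (), _| TimeoutAction::Drop,
+    /// );
+    /// let token = event_loop
+    ///     .handle()
+    ///     .register_dispatcher(dispatcher.clone())
+    ///     .unwrap();
+    ///
+    /// // Later: push the deadline back, then re-register so the change takes effect.
+    /// dispatcher.as_source_mut().set_deadline(Instant::now() + Duration::from_secs(10));
+    /// event_loop.handle().update(&token).unwrap();
+    /// ```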
+ pub fn set_deadline(&mut self, deadline: Instant) {
+ self.deadline = Some(deadline);
+ }
+
+ /// Changes the deadline of this timer to a [`Duration`] from now
+ ///
+ /// If the `Timer` is currently registered in the event loop, it needs to be
+ /// re-registered for this change to take effect.
+ pub fn set_duration(&mut self, duration: Duration) {
+ self.deadline = Instant::now().checked_add(duration);
+ }
+
+ /// Get the current deadline of this `Timer`
+ ///
+ /// Returns `None` if the timer has overflowed.
+ pub fn current_deadline(&self) -> Option<Instant> {
+ self.deadline
+ }
+}
+
+impl EventSource for Timer {
+ type Event = Instant;
+ type Metadata = ();
+ type Ret = TimeoutAction;
+ type Error = std::io::Error;
+
+ fn process_events<F>(
+ &mut self,
+ _: Readiness,
+ token: Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ if let (Some(ref registration), Some(ref deadline)) = (&self.registration, &self.deadline) {
+ if registration.token != token {
+ return Ok(PostAction::Continue);
+ }
+ let new_deadline = match callback(*deadline, &mut ()) {
+ TimeoutAction::Drop => return Ok(PostAction::Remove),
+ TimeoutAction::ToInstant(instant) => instant,
+ TimeoutAction::ToDuration(duration) => match Instant::now().checked_add(duration) {
+ Some(new_deadline) => new_deadline,
+ None => {
+ // The timer has overflowed, meaning we have no choice but to drop it.
+ self.deadline = None;
+ return Ok(PostAction::Remove);
+ }
+ },
+ };
+ // If we received an event, we MUST have a valid counter value
+ registration.wheel.borrow_mut().insert_reuse(
+ registration.counter,
+ new_deadline,
+ registration.token,
+ );
+ self.deadline = Some(new_deadline);
+ }
+ Ok(PostAction::Continue)
+ }
+
+ fn register(&mut self, poll: &mut Poll, token_factory: &mut TokenFactory) -> crate::Result<()> {
+ // Only register a deadline if we haven't overflowed.
+ if let Some(deadline) = self.deadline {
+ let wheel = poll.timers.clone();
+ let token = token_factory.token();
+ let counter = wheel.borrow_mut().insert(deadline, token);
+ self.registration = Some(Registration {
+ token,
+ wheel,
+ counter,
+ });
+ }
+
+ Ok(())
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut Poll,
+ token_factory: &mut TokenFactory,
+ ) -> crate::Result<()> {
+ self.unregister(poll)?;
+ self.register(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut Poll) -> crate::Result<()> {
+ if let Some(registration) = self.registration.take() {
+ poll.timers.borrow_mut().cancel(registration.counter);
+ }
+ Ok(())
+ }
+}
+
+/// Action to reschedule a timeout if necessary
+#[derive(Debug)]
+pub enum TimeoutAction {
+ /// Don't reschedule this timer
+ Drop,
+ /// Reschedule this timer to a given [`Instant`]
+ ToInstant(Instant),
+ /// Reschedule this timer to a given [`Duration`] in the future
+ ToDuration(Duration),
+}
+
+// Internal representation of a timeout registered in the TimerWheel
+#[derive(Debug)]
+struct TimeoutData {
+ deadline: Instant,
+ token: RefCell<Option<Token>>,
+ counter: u32,
+}
+
+// A data structure for tracking registered timeouts
+#[derive(Debug)]
+pub(crate) struct TimerWheel {
+ heap: BinaryHeap<TimeoutData>,
+ counter: u32,
+}
+
+impl TimerWheel {
+ pub(crate) fn new() -> TimerWheel {
+ TimerWheel {
+ heap: BinaryHeap::new(),
+ counter: 0,
+ }
+ }
+
+ pub(crate) fn insert(&mut self, deadline: Instant, token: Token) -> u32 {
+ self.heap.push(TimeoutData {
+ deadline,
+ token: RefCell::new(Some(token)),
+ counter: self.counter,
+ });
+ let ret = self.counter;
+ self.counter += 1;
+ ret
+ }
+
+ pub(crate) fn insert_reuse(&mut self, counter: u32, deadline: Instant, token: Token) {
+ self.heap.push(TimeoutData {
+ deadline,
+ token: RefCell::new(Some(token)),
+ counter,
+ });
+ }
+
+ pub(crate) fn cancel(&mut self, counter: u32) {
+ self.heap
+ .iter()
+ .find(|data| data.counter == counter)
+ .map(|data| data.token.take());
+ }
+
+ pub(crate) fn next_expired(&mut self, now: Instant) -> Option<(u32, Token)> {
+ loop {
+ // check if there is an expired item
+ if let Some(data) = self.heap.peek() {
+ if data.deadline > now {
+ return None;
+ }
+ // there is an expired timeout, continue the
+ // loop body
+ } else {
+ return None;
+ }
+
+            // There is an item in the heap, so this unwrap cannot fail
+ let data = self.heap.pop().unwrap();
+ if let Some(token) = data.token.into_inner() {
+ return Some((data.counter, token));
+ }
+ // otherwise this timeout was cancelled, continue looping
+ }
+ }
+
+ pub(crate) fn next_deadline(&self) -> Option<std::time::Instant> {
+ self.heap.peek().map(|data| data.deadline)
+ }
+}
+
+// trait implementations for TimeoutData
+
+impl std::cmp::Ord for TimeoutData {
+ fn cmp(&self, other: &Self) -> std::cmp::Ordering {
+ // earlier values have priority
+ self.deadline.cmp(&other.deadline).reverse()
+ }
+}
+
+impl std::cmp::PartialOrd for TimeoutData {
+ fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
+ Some(self.cmp(other))
+ }
+}
+
+// This impl is required for PartialOrd but actually never used
+// and the type is private, so ignore its coverage
+impl std::cmp::PartialEq for TimeoutData {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn eq(&self, other: &Self) -> bool {
+ self.deadline == other.deadline
+ }
+}
+
+impl std::cmp::Eq for TimeoutData {}
+
+// Logic for timer futures
+
+/// A future that resolves once a certain timeout is expired
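+///
+/// A minimal sketch of driving it with the crate's executor (requires the `executor`
+/// feature; the duration and the `u32` loop data are illustrative assumptions):
+///
+/// ```no_run
+/// use calloop::{timer::TimeoutFuture, EventLoop};
+/// use std::time::Duration;
+///
+/// let mut event_loop: EventLoop<u32> = EventLoop::try_new().unwrap();
+///
+/// let (exec, sched) = calloop::futures::executor().unwrap();
+/// event_loop
+///     .handle()
+///     .insert_source(exec, |(), &mut (), resolved| *resolved += 1)
+///     .unwrap();
+///
+/// let timeout = TimeoutFuture::from_duration(&event_loop.handle(), Duration::from_secs(1));
+/// sched.schedule(timeout).unwrap();
+/// ```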
+pub struct TimeoutFuture {
+ deadline: Option<Instant>,
+ waker: Rc<RefCell<Option<Waker>>>,
+}
+
+impl std::fmt::Debug for TimeoutFuture {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.debug_struct("TimeoutFuture")
+ .field("deadline", &self.deadline)
+ .finish_non_exhaustive()
+ }
+}
+
+impl TimeoutFuture {
+ /// Create a future that resolves after a given duration
+ pub fn from_duration<Data>(handle: &LoopHandle<'_, Data>, duration: Duration) -> TimeoutFuture {
+ Self::from_deadline_inner(handle, Instant::now().checked_add(duration))
+ }
+
+ /// Create a future that resolves at a given instant
+ pub fn from_deadline<Data>(handle: &LoopHandle<'_, Data>, deadline: Instant) -> TimeoutFuture {
+ Self::from_deadline_inner(handle, Some(deadline))
+ }
+
+ /// Create a future that resolves at a given instant
+ fn from_deadline_inner<Data>(
+ handle: &LoopHandle<'_, Data>,
+ deadline: Option<Instant>,
+ ) -> TimeoutFuture {
+ let timer = Timer::from_deadline_inner(deadline);
+ let waker = Rc::new(RefCell::new(None::<Waker>));
+ handle
+ .insert_source(timer, {
+ let waker = waker.clone();
+ move |_, &mut (), _| {
+ if let Some(waker) = waker.borrow_mut().clone() {
+ waker.wake()
+ }
+ TimeoutAction::Drop
+ }
+ })
+ .unwrap();
+
+ TimeoutFuture { deadline, waker }
+ }
+}
+
+impl std::future::Future for TimeoutFuture {
+ type Output = ();
+
+ fn poll(
+ self: std::pin::Pin<&mut Self>,
+ cx: &mut std::task::Context<'_>,
+ ) -> std::task::Poll<Self::Output> {
+ match self.deadline {
+ None => return std::task::Poll::Pending,
+
+ Some(deadline) => {
+ if Instant::now() >= deadline {
+ return std::task::Poll::Ready(());
+ }
+ }
+ }
+
+ *self.waker.borrow_mut() = Some(cx.waker().clone());
+ std::task::Poll::Pending
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use crate::*;
+ use std::time::Duration;
+
+ #[test]
+ fn simple_timer() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = false;
+
+ event_loop
+ .handle()
+ .insert_source(
+ Timer::from_duration(Duration::from_millis(100)),
+ |_, &mut (), dispatched| {
+ *dispatched = true;
+ TimeoutAction::Drop
+ },
+ )
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ // not yet dispatched
+ assert!(!dispatched);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(150)), &mut dispatched)
+ .unwrap();
+ // now dispatched
+ assert!(dispatched);
+ }
+
+ #[test]
+ fn simple_timer_instant() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = false;
+
+ event_loop
+ .handle()
+ .insert_source(
+ Timer::from_duration(Duration::from_millis(100)),
+ |_, &mut (), dispatched| {
+ *dispatched = true;
+ TimeoutAction::Drop
+ },
+ )
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(150)), &mut dispatched)
+ .unwrap();
+ // now dispatched
+ assert!(dispatched);
+ }
+
+ #[test]
+ fn immediate_timer() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = false;
+
+ event_loop
+ .handle()
+ .insert_source(Timer::immediate(), |_, &mut (), dispatched| {
+ *dispatched = true;
+ TimeoutAction::Drop
+ })
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ // now dispatched
+ assert!(dispatched);
+ }
+
+    // We cannot actually test high-precision timers, as they are only high precision in release
+    // mode. This test is here to ensure that the high-precision codepaths are executed and work
+    // as intended, even if we cannot test whether they are actually high precision.
+ #[test]
+ fn high_precision_timer() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = false;
+
+ event_loop
+ .handle()
+ .insert_source(
+ Timer::from_duration(Duration::from_millis(100)),
+ |_, &mut (), dispatched| {
+ *dispatched = true;
+ TimeoutAction::Drop
+ },
+ )
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ // not yet dispatched
+ assert!(!dispatched);
+
+ event_loop
+ .dispatch(Some(Duration::from_micros(10200)), &mut dispatched)
+ .unwrap();
+        // still not dispatched
+ assert!(!dispatched);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(100)), &mut dispatched)
+ .unwrap();
+ // now dispatched
+ assert!(dispatched);
+ }
+
+ #[test]
+ fn cancel_timer() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = false;
+
+ let token = event_loop
+ .handle()
+ .insert_source(
+ Timer::from_duration(Duration::from_millis(100)),
+ |_, &mut (), dispatched| {
+ *dispatched = true;
+ TimeoutAction::Drop
+ },
+ )
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ // not yet dispatched
+ assert!(!dispatched);
+
+ event_loop.handle().remove(token);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(150)), &mut dispatched)
+ .unwrap();
+ // still not dispatched
+ assert!(!dispatched);
+ }
+
+ #[test]
+ fn repeating_timer() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = 0;
+
+ event_loop
+ .handle()
+ .insert_source(
+ Timer::from_duration(Duration::from_millis(500)),
+ |_, &mut (), dispatched| {
+ *dispatched += 1;
+ TimeoutAction::ToDuration(Duration::from_millis(500))
+ },
+ )
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(250)), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 0);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(510)), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 1);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(510)), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 2);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(510)), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 3);
+ }
+
+ #[cfg(feature = "executor")]
+ #[test]
+ fn timeout_future() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = 0;
+
+ let timeout_1 =
+ TimeoutFuture::from_duration(&event_loop.handle(), Duration::from_millis(500));
+ let timeout_2 =
+ TimeoutFuture::from_duration(&event_loop.handle(), Duration::from_millis(1500));
+ // This one should never go off.
+ let timeout_3 = TimeoutFuture::from_duration(&event_loop.handle(), Duration::MAX);
+
+ let (exec, sched) = crate::sources::futures::executor().unwrap();
+ event_loop
+ .handle()
+ .insert_source(exec, move |(), &mut (), got| {
+ *got += 1;
+ })
+ .unwrap();
+
+ sched.schedule(timeout_1).unwrap();
+ sched.schedule(timeout_2).unwrap();
+ sched.schedule(timeout_3).unwrap();
+
+ // We do a 0-timeout dispatch after every regular dispatch to let the timeout triggers
+ // flow back to the executor
+
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 0);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(1000)), &mut dispatched)
+ .unwrap();
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 1);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(1100)), &mut dispatched)
+ .unwrap();
+ event_loop
+ .dispatch(Some(Duration::ZERO), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 2);
+ }
+
+ #[test]
+ fn no_overflow() {
+ let mut event_loop = EventLoop::try_new().unwrap();
+
+ let mut dispatched = 0;
+
+ event_loop
+ .handle()
+ .insert_source(
+ Timer::from_duration(Duration::from_millis(500)),
+ |_, &mut (), dispatched| {
+ *dispatched += 1;
+ TimeoutAction::Drop
+ },
+ )
+ .unwrap();
+
+ event_loop
+ .handle()
+ .insert_source(Timer::from_duration(Duration::MAX), |_, &mut (), _| {
+ panic!("This timer should never go off")
+ })
+ .unwrap();
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(250)), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 0);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(510)), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 1);
+
+ event_loop
+ .dispatch(Some(Duration::from_millis(510)), &mut dispatched)
+ .unwrap();
+ assert_eq!(dispatched, 1);
+ }
+}
+
+//! Wrapper for a transient Calloop event source.
+//!
+//! If you have a high-level event source that you expect to remain in the event
+//! loop indefinitely, and another event source nested inside that one that you
+//! expect to require removal or disabling from time to time, this module can
+//! handle it for you.
+
+/// A [`TransientSource`] wraps a Calloop event source and manages its
+/// registration. A user of this type only needs to perform the usual Calloop
+/// calls (`process_events()` and `*register()`) and handle the return value of
+/// [`process_events()`](crate::EventSource::process_events).
+///
+/// Rather than needing to check for the full set of
+/// [`PostAction`](crate::PostAction) values returned from `process_events()`,
+/// you can just check for `Continue` or `Reregister` and pass that back out
+/// through your own `process_events()` implementation. In your registration
+/// functions, you then only need to call the same function on this type ie.
+/// `register()` inside `register()` etc.
+///
+/// For example, say you have a source that contains a channel along with some
+/// other logic. If the channel's sending end has been dropped, it needs to be
+/// removed from the loop. So to manage this, you use this in your struct:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// struct CompositeSource {
+/// // Event source for channel.
+/// mpsc_receiver: TransientSource<calloop::channel::Channel<T>>,
+///
+/// // Any other fields go here...
+/// }
+/// ```
+///
+/// To create the transient source, you can simply use the `Into`
+/// implementation:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// let (sender, source) = channel();
+/// let mpsc_receiver: TransientSource<Channel> = source.into();
+/// ```
+///
+/// (If you want to start off with an empty `TransientSource`, you can just use
+/// `Default::default()` instead.)
+///
+/// `TransientSource` implements [`EventSource`](crate::EventSource) and passes
+/// through `process_events()` calls, so in the parent's `process_events()`
+/// implementation you can just do this:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// fn process_events<F>(
+/// &mut self,
+/// readiness: calloop::Readiness,
+/// token: calloop::Token,
+/// callback: F,
+/// ) -> Result<calloop::PostAction, Self::Error>
+/// where
+/// F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+/// {
+/// let channel_return = self.mpsc_receiver.process_events(readiness, token, callback)?;
+///
+/// // Perform other logic here...
+///
+/// Ok(channel_return)
+/// }
+/// ```
+///
+/// Note that:
+///
+/// - You can call `process_events()` on the `TransientSource<Channel>` even
+/// if the channel has been unregistered and dropped. All that will happen
+/// is that you won't get any events from it.
+///
+/// - The [`PostAction`](crate::PostAction) returned from `process_events()`
+/// will only ever be `PostAction::Continue` or `PostAction::Reregister`.
+/// You will still need to combine this with the result of any other sources
+/// (transient or not).
+///
+/// Once you return `channel_return` from your `process_events()` method (and
+/// assuming it propagates all the way up to the event loop itself through any
+/// other event sources), the event loop might call `reregister()` on your
+/// source. All your source has to do is:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// fn reregister(
+/// &mut self,
+/// poll: &mut calloop::Poll,
+/// token_factory: &mut calloop::TokenFactory,
+/// ) -> crate::Result<()> {
+/// self.mpsc_receiver.reregister(poll, token_factory)?;
+///
+/// // Other registration actions...
+///
+/// Ok(())
+/// }
+/// ```
+///
+/// The `TransientSource` will take care of updating the registration of the
+/// inner source, even if it actually needs to be unregistered or initially
+/// registered.
+///
+/// ## Replacing or removing `TransientSource`s
+///
+/// Not properly removing or replacing `TransientSource`s can cause spurious
+/// wakeups of the event loop, and in some cases can leak file descriptors or
+/// fail to free entries in Calloop's internal data structures. No unsoundness
+/// or undefined behaviour will result, but leaking file descriptors can result
+/// in errors or panics.
+///
+/// If you want to remove a source before it returns `PostAction::Remove`, use
+/// the [`TransientSource::remove()`] method. If you want to replace a source
+/// with another one, use the [`TransientSource::replace()`] method. Either of
+/// these may be called at any time during processing or from outside the event
+/// loop. Both require either returning `PostAction::Reregister` from the
+/// `process_events()` call that does this, or reregistering the event source
+/// some other way, e.g. via the top-level loop handle.
+///
+/// If, instead, you directly assign a new source to the variable holding the
+/// `TransientSource`, the inner source will be dropped before it can be
+/// unregistered. For example:
+///
+/// ```none,actually-rust-but-see-https://github.com/rust-lang/rust/issues/63193
+/// self.mpsc_receiver = Default::default();
+/// self.mpsc_receiver = new_channel.into();
+/// ```
+#[derive(Debug, Default)]
+pub struct TransientSource<T> {
+ state: TransientSourceState<T>,
+}
+
+/// This is the internal state of the [`TransientSource`], as a separate type so
+/// it's not exposed.
+#[derive(Debug)]
+enum TransientSourceState<T> {
+ /// The source should be kept in the loop.
+ Keep(T),
+ /// The source needs to be registered with the loop.
+ Register(T),
+ /// The source needs to be disabled but kept.
+ Disable(T),
+ /// The source needs to be removed from the loop.
+ Remove(T),
+ /// The source is being replaced by another. For most API purposes (eg.
+ /// `map()`), this will be treated as the `Register` state enclosing the new
+ /// source.
+ Replace {
+ /// The new source, which will be registered and used from now on.
+ new: T,
+ /// The old source, which will be unregistered and dropped.
+ old: T,
+ },
+ /// The source has been removed from the loop and dropped (this might also
+ /// be observed if there is a panic while changing states).
+ None,
+}
+
+impl<T> Default for TransientSourceState<T> {
+ fn default() -> Self {
+ Self::None
+ }
+}
+
+impl<T> TransientSourceState<T> {
+ /// If a caller needs to flag the contained source for removal or
+ /// registration, we need to replace the enum variant safely. This requires
+ /// having a `None` value in there temporarily while we do the swap.
+ ///
+ /// If the variant is `None` the value will not change and `replacer` will
+ /// not be called. If the variant is `Replace` then `replacer` will be
+ /// called **on the new source**, which may cause the old source to leak
+ /// registration in the event loop if it has not yet been unregistered.
+ ///
+ /// The `replacer` function here is expected to be one of the enum variant
+    /// constructors, e.g. `replace_state(TransientSourceState::Remove)`.
+ fn replace_state<F>(&mut self, replacer: F)
+ where
+ F: FnOnce(T) -> Self,
+ {
+ *self = match std::mem::take(self) {
+ Self::Keep(source)
+ | Self::Register(source)
+ | Self::Remove(source)
+ | Self::Disable(source)
+ | Self::Replace { new: source, .. } => replacer(source),
+ Self::None => return,
+ };
+ }
+}
+
+impl<T> TransientSource<T> {
+ /// Apply a function to the enclosed source, if it exists and is not about
+ /// to be removed.
+ pub fn map<F, U>(&mut self, f: F) -> Option<U>
+ where
+ F: FnOnce(&mut T) -> U,
+ {
+ match &mut self.state {
+ TransientSourceState::Keep(source)
+ | TransientSourceState::Register(source)
+ | TransientSourceState::Disable(source)
+ | TransientSourceState::Replace { new: source, .. } => Some(f(source)),
+ TransientSourceState::Remove(_) | TransientSourceState::None => None,
+ }
+ }
+
+ /// Returns `true` if there is no wrapped event source.
+ pub fn is_none(&self) -> bool {
+ matches!(self.state, TransientSourceState::None)
+ }
+
+ /// Removes the wrapped event source from the event loop and this wrapper.
+ ///
+ /// If this is called from outside of the event loop, you will need to wake
+ /// up the event loop for any changes to take place. If it is called from
+ /// within the event loop, you must return `PostAction::Reregister` from
+ /// your own event source's `process_events()`, and the source will be
+ /// unregistered as needed after it exits.
+ pub fn remove(&mut self) {
+ self.state.replace_state(TransientSourceState::Remove);
+ }
+
+ /// Replace the currently wrapped source with the given one. No more events
+ /// will be generated from the old source after this point. The old source
+ /// will not be dropped immediately, it will be kept so that it can be
+ /// deregistered.
+ ///
+ /// If this is called from outside of the event loop, you will need to wake
+ /// up the event loop for any changes to take place. If it is called from
+ /// within the event loop, you must return `PostAction::Reregister` from
+ /// your own event source's `process_events()`, and the sources will be
+ /// registered and unregistered as needed after it exits.
+ pub fn replace(&mut self, new: T) {
+ self.state
+ .replace_state(|old| TransientSourceState::Replace { new, old });
+ }
+}
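+
+// Illustrative sketch (not part of the crate): a parent event source that owns a
+// `TransientSource<C>` typically drives it from its own `process_events()`; when
+// it decides to swap or drop the child it calls `replace()` / `remove()` and makes
+// sure `PostAction::Reregister` is returned, so that this wrapper can register and
+// unregister the child afterwards. `should_swap` and `make_new_child()` below are
+// hypothetical stand-ins for the parent's own logic:
+//
+//     let mut post_action = self.child.process_events(readiness, token, callback)?;
+//     if self.should_swap {
+//         self.child.replace(make_new_child());
+//         post_action |= PostAction::Reregister;
+//     }
+//     Ok(post_action)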
+
+impl<T: crate::EventSource> From<T> for TransientSource<T> {
+ fn from(source: T) -> Self {
+ Self {
+ state: TransientSourceState::Register(source),
+ }
+ }
+}
+
+impl<T: crate::EventSource> crate::EventSource for TransientSource<T> {
+ type Event = T::Event;
+ type Metadata = T::Metadata;
+ type Ret = T::Ret;
+ type Error = T::Error;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ callback: F,
+ ) -> Result<crate::PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ let reregister = if let TransientSourceState::Keep(source) = &mut self.state {
+ let child_post_action = source.process_events(readiness, token, callback)?;
+
+ match child_post_action {
+ // Nothing needs to change.
+ crate::PostAction::Continue => false,
+
+ // Our child source needs re-registration, therefore this
+ // wrapper needs re-registration.
+ crate::PostAction::Reregister => true,
+
+ // If our nested source needs to be removed or disabled, we need
+ // to swap it out for the "Remove" or "Disable" variant.
+ crate::PostAction::Disable => {
+ self.state.replace_state(TransientSourceState::Disable);
+ true
+ }
+
+ crate::PostAction::Remove => {
+ self.state.replace_state(TransientSourceState::Remove);
+ true
+ }
+ }
+ } else {
+ false
+ };
+
+ let post_action = if reregister {
+ crate::PostAction::Reregister
+ } else {
+ crate::PostAction::Continue
+ };
+
+ Ok(post_action)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ match &mut self.state {
+ TransientSourceState::Keep(source) => {
+ source.register(poll, token_factory)?;
+ }
+ TransientSourceState::Register(source)
+ | TransientSourceState::Disable(source)
+ | TransientSourceState::Replace { new: source, .. } => {
+ source.register(poll, token_factory)?;
+ self.state.replace_state(TransientSourceState::Keep);
+ // Drops the disposed source in the Replace case.
+ }
+ TransientSourceState::Remove(_source) => {
+ self.state.replace_state(|_| TransientSourceState::None);
+ }
+ TransientSourceState::None => (),
+ }
+ Ok(())
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ match &mut self.state {
+ TransientSourceState::Keep(source) => source.reregister(poll, token_factory)?,
+ TransientSourceState::Register(source) => {
+ source.register(poll, token_factory)?;
+ self.state.replace_state(TransientSourceState::Keep);
+ }
+ TransientSourceState::Disable(source) => {
+ source.unregister(poll)?;
+ }
+ TransientSourceState::Remove(source) => {
+ source.unregister(poll)?;
+ self.state.replace_state(|_| TransientSourceState::None);
+ }
+ TransientSourceState::Replace { new, old } => {
+ old.unregister(poll)?;
+ new.register(poll, token_factory)?;
+ self.state.replace_state(TransientSourceState::Keep);
+                // Drops the old source.
+ }
+ TransientSourceState::None => (),
+ }
+ Ok(())
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ match &mut self.state {
+ TransientSourceState::Keep(source)
+ | TransientSourceState::Register(source)
+ | TransientSourceState::Disable(source) => source.unregister(poll)?,
+ TransientSourceState::Remove(source) => {
+ source.unregister(poll)?;
+ self.state.replace_state(|_| TransientSourceState::None);
+ }
+ TransientSourceState::Replace { new, old } => {
+ old.unregister(poll)?;
+ new.unregister(poll)?;
+ self.state.replace_state(TransientSourceState::Register);
+ }
+ TransientSourceState::None => (),
+ }
+ Ok(())
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use crate::{
+ channel::{channel, Channel, Event},
+ ping::{make_ping, PingSource},
+ Dispatcher, EventSource, PostAction,
+ };
+ use std::{
+ rc::Rc,
+ sync::atomic::{AtomicBool, Ordering},
+ time::Duration,
+ };
+
+ #[test]
+ fn test_transient_drop() {
+ // A test source that sets a flag when it's dropped.
+ struct TestSource<'a> {
+ dropped: &'a AtomicBool,
+ ping: PingSource,
+ }
+
+ impl<'a> Drop for TestSource<'a> {
+ fn drop(&mut self) {
+ self.dropped.store(true, Ordering::Relaxed)
+ }
+ }
+
+ impl<'a> crate::EventSource for TestSource<'a> {
+ type Event = ();
+ type Metadata = ();
+ type Ret = ();
+ type Error = Box<dyn std::error::Error + Sync + Send>;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ callback: F,
+ ) -> Result<crate::PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ self.ping.process_events(readiness, token, callback)?;
+ Ok(PostAction::Remove)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.ping.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.ping.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ self.ping.unregister(poll)
+ }
+ }
+
+ // Test that the inner source is actually dropped when it asks to be
+ // removed from the loop, while the TransientSource remains. We use two
+ // flags for this:
+ // - fired: should be set only when the inner event source has an event
+ // - dropped: set by the drop handler for the inner source (it's an
+        //     AtomicBool because it requires a longer lifetime than the fired
+ // flag)
+ let mut fired = false;
+ let dropped = false.into();
+
+ // The inner source that should be dropped after the first loop run.
+ let (pinger, ping) = make_ping().unwrap();
+ let inner = TestSource {
+ dropped: &dropped,
+ ping,
+ };
+
+ // The TransientSource wrapper.
+ let outer: TransientSource<_> = inner.into();
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+
+ let _token = handle
+ .insert_source(outer, |_, _, fired| {
+ *fired = true;
+ })
+ .unwrap();
+
+ // First loop run: the ping generates an event for the inner source.
+ pinger.ping();
+
+ event_loop.dispatch(Duration::ZERO, &mut fired).unwrap();
+
+ assert!(fired);
+ assert!(dropped.load(Ordering::Relaxed));
+
+ // Second loop run: the ping does nothing because the receiver has been
+ // dropped.
+ fired = false;
+
+ pinger.ping();
+
+ event_loop.dispatch(Duration::ZERO, &mut fired).unwrap();
+ assert!(!fired);
+ }
+
+ #[test]
+ fn test_transient_passthrough() {
+ // Test that event processing works when a source is nested inside a
+ // TransientSource. In particular, we want to ensure that the final
+ // event is received even if it corresponds to that same event source
+ // returning `PostAction::Remove`.
+ let (sender, receiver) = channel();
+ let outer: TransientSource<_> = receiver.into();
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+
+        // Our callback puts the received events in here for us to check later.
+ let mut msg_queue = vec![];
+
+ let _token = handle
+ .insert_source(outer, |msg, _, queue: &mut Vec<_>| {
+ queue.push(msg);
+ })
+ .unwrap();
+
+ // Send some data and drop the sender. We specifically want to test that
+ // we get the "closed" message.
+ sender.send(0u32).unwrap();
+ sender.send(1u32).unwrap();
+ sender.send(2u32).unwrap();
+ sender.send(3u32).unwrap();
+ drop(sender);
+
+ // Run loop once to process events.
+ event_loop.dispatch(Duration::ZERO, &mut msg_queue).unwrap();
+
+ assert!(matches!(
+ msg_queue.as_slice(),
+ &[
+ Event::Msg(0u32),
+ Event::Msg(1u32),
+ Event::Msg(2u32),
+ Event::Msg(3u32),
+ Event::Closed
+ ]
+ ));
+ }
+
+ #[test]
+ fn test_transient_map() {
+ struct IdSource {
+ id: u32,
+ ping: PingSource,
+ }
+
+ impl EventSource for IdSource {
+ type Event = u32;
+ type Metadata = ();
+ type Ret = ();
+ type Error = Box<dyn std::error::Error + Sync + Send>;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ let id = self.id;
+ self.ping
+ .process_events(readiness, token, |_, md| callback(id, md))?;
+
+ let action = if self.id > 2 {
+ PostAction::Remove
+ } else {
+ PostAction::Continue
+ };
+
+ Ok(action)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.ping.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.ping.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ self.ping.unregister(poll)
+ }
+ }
+
+ struct WrapperSource(TransientSource<IdSource>);
+
+ impl EventSource for WrapperSource {
+ type Event = <IdSource as EventSource>::Event;
+ type Metadata = <IdSource as EventSource>::Metadata;
+ type Ret = <IdSource as EventSource>::Ret;
+ type Error = <IdSource as EventSource>::Error;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ let action = self.0.process_events(readiness, token, callback);
+ self.0.map(|inner| inner.id += 1);
+ action
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.0.map(|inner| inner.id += 1);
+ self.0.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.0.map(|inner| inner.id += 1);
+ self.0.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ self.0.map(|inner| inner.id += 1);
+ self.0.unregister(poll)
+ }
+ }
+
+ // To test the id later.
+ let mut id = 0;
+
+ // Create our source.
+ let (pinger, ping) = make_ping().unwrap();
+ let inner = IdSource { id, ping };
+
+ // The TransientSource wrapper.
+ let outer: TransientSource<_> = inner.into();
+
+ // The top level source.
+ let top = WrapperSource(outer);
+
+ // Create a dispatcher so we can check the source afterwards.
+ let dispatcher = Dispatcher::new(top, |got_id, _, test_id| {
+ *test_id = got_id;
+ });
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+
+ let token = handle.register_dispatcher(dispatcher.clone()).unwrap();
+
+ // First loop run: the ping generates an event for the inner source.
+ // The ID should be 1 after the increment in register().
+ pinger.ping();
+ event_loop.dispatch(Duration::ZERO, &mut id).unwrap();
+ assert_eq!(id, 1);
+
+ // Second loop run: the ID should be 2 after the previous
+ // process_events().
+ pinger.ping();
+ event_loop.dispatch(Duration::ZERO, &mut id).unwrap();
+ assert_eq!(id, 2);
+
+ // Third loop run: the ID should be 3 after another process_events().
+ pinger.ping();
+ event_loop.dispatch(Duration::ZERO, &mut id).unwrap();
+ assert_eq!(id, 3);
+
+ // Fourth loop run: the callback is no longer called by the inner
+ // source, so our local ID is not incremented.
+ pinger.ping();
+ event_loop.dispatch(Duration::ZERO, &mut id).unwrap();
+ assert_eq!(id, 3);
+
+ // Remove the dispatcher so we can inspect the sources.
+ handle.remove(token);
+
+ let mut top_after = dispatcher.into_source_inner();
+
+ // I expect the inner source to be dropped, so the TransientSource
+ // variant is None (its version of None, not Option::None), so its map()
+ // won't call the passed-in function (hence the unreachable!()) and its
+ // return value should be Option::None.
+ assert!(top_after.0.map(|_| unreachable!()).is_none());
+ }
+
+ #[test]
+ fn test_transient_disable() {
+ // Test that disabling and enabling is handled properly.
+ struct DisablingSource(PingSource);
+
+ impl EventSource for DisablingSource {
+ type Event = ();
+ type Metadata = ();
+ type Ret = ();
+ type Error = Box<dyn std::error::Error + Sync + Send>;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ self.0.process_events(readiness, token, callback)?;
+ Ok(PostAction::Disable)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.0.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.0.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ self.0.unregister(poll)
+ }
+ }
+
+ // Flag for checking when the source fires.
+ let mut fired = false;
+
+ // Create our source.
+ let (pinger, ping) = make_ping().unwrap();
+
+ let inner = DisablingSource(ping);
+
+ // The TransientSource wrapper.
+ let outer: TransientSource<_> = inner.into();
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+ let token = handle
+ .insert_source(outer, |_, _, fired| {
+ *fired = true;
+ })
+ .unwrap();
+
+ // Ping here and not later, to check that disabling after an event is
+ // triggered but not processed does not discard the event.
+ pinger.ping();
+ event_loop.dispatch(Duration::ZERO, &mut fired).unwrap();
+ assert!(fired);
+
+ // Source should now be disabled.
+ pinger.ping();
+ fired = false;
+ event_loop.dispatch(Duration::ZERO, &mut fired).unwrap();
+ assert!(!fired);
+
+ // Re-enable the source.
+ handle.enable(&token).unwrap();
+
+ // Trigger another event.
+ pinger.ping();
+ fired = false;
+ event_loop.dispatch(Duration::ZERO, &mut fired).unwrap();
+ assert!(fired);
+ }
+
+ #[test]
+ fn test_transient_replace_unregister() {
+ // This is a bit of a complex test, but it essentially boils down to:
+ // how can a "parent" event source containing a TransientSource replace
+ // the "child" source without leaking the source's registration?
+
+ // First, a source that finishes immediately. This is so we cover the
+ // edge case of replacing a source as soon as it wants to be removed.
+ struct FinishImmediatelySource {
+ source: PingSource,
+ data: Option<i32>,
+ registered: bool,
+ dropped: Rc<AtomicBool>,
+ }
+
+ impl FinishImmediatelySource {
+ // The constructor passes out the drop flag so we can check that
+ // this source was or wasn't dropped.
+ fn new(source: PingSource, data: i32) -> (Self, Rc<AtomicBool>) {
+ let dropped = Rc::new(false.into());
+
+ (
+ Self {
+ source,
+ data: Some(data),
+ registered: false,
+ dropped: Rc::clone(&dropped),
+ },
+ dropped,
+ )
+ }
+ }
+
+ impl EventSource for FinishImmediatelySource {
+ type Event = i32;
+ type Metadata = ();
+ type Ret = ();
+ type Error = Box<dyn std::error::Error + Sync + Send>;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ let mut data = self.data.take();
+
+ self.source.process_events(readiness, token, |_, _| {
+ if let Some(data) = data.take() {
+ callback(data, &mut ())
+ }
+ })?;
+
+ self.data = data;
+
+ Ok(if self.data.is_none() {
+ PostAction::Remove
+ } else {
+ PostAction::Continue
+ })
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.registered = true;
+ self.source.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.source.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ self.registered = false;
+ self.source.unregister(poll)
+ }
+ }
+
+ // The drop handler sets a flag we can check for debugging (we want to
+ // know that the source itself was dropped), and also checks that the
+ // source was unregistered. Ultimately neither the source nor its
+ // registration should be leaked.
+
+ impl Drop for FinishImmediatelySource {
+ fn drop(&mut self) {
+ assert!(!self.registered, "source dropped while still registered");
+ self.dropped.store(true, Ordering::Relaxed);
+ }
+ }
+
+ // Our wrapper source handles detecting when the child source finishes,
+ // and replacing that child source with another one that will generate
+ // more events. This is one intended use case of the TransientSource.
+
+ struct WrapperSource {
+ current: TransientSource<FinishImmediatelySource>,
+ replacement: Option<FinishImmediatelySource>,
+ dropped: Rc<AtomicBool>,
+ }
+
+ impl WrapperSource {
+ // The constructor passes out the drop flag so we can check that
+ // this source was or wasn't dropped.
+ fn new(
+ first: FinishImmediatelySource,
+ second: FinishImmediatelySource,
+ ) -> (Self, Rc<AtomicBool>) {
+ let dropped = Rc::new(false.into());
+
+ (
+ Self {
+ current: first.into(),
+ replacement: second.into(),
+ dropped: Rc::clone(&dropped),
+ },
+ dropped,
+ )
+ }
+ }
+
+ impl EventSource for WrapperSource {
+ type Event = i32;
+ type Metadata = ();
+ type Ret = ();
+ type Error = Box<dyn std::error::Error + Sync + Send>;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ // Did our event source generate an event?
+ let mut fired = false;
+
+ let post_action = self.current.process_events(readiness, token, |data, _| {
+ callback(data, &mut ());
+ fired = true;
+ })?;
+
+ if fired {
+ // The event source will be unregistered after the current
+ // process_events() iteration is finished. The replace()
+ // method will handle doing that even while we've added a
+ // new source.
+ if let Some(replacement) = self.replacement.take() {
+ self.current.replace(replacement);
+ }
+
+ // Parent source is responsible for flagging this, but it's
+ // already set.
+ assert_eq!(post_action, PostAction::Reregister);
+ }
+
+ Ok(post_action)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.current.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.current.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ self.current.unregister(poll)
+ }
+ }
+
+ impl Drop for WrapperSource {
+ fn drop(&mut self) {
+ self.dropped.store(true, Ordering::Relaxed);
+ }
+ }
+
+ // Construct the various nested sources - FinishImmediatelySource inside
+ // TransientSource inside WrapperSource. The numbers let us verify which
+ // event source fires first.
+ let (ping0_tx, ping0_rx) = crate::ping::make_ping().unwrap();
+ let (ping1_tx, ping1_rx) = crate::ping::make_ping().unwrap();
+ let (inner0, inner0_dropped) = FinishImmediatelySource::new(ping0_rx, 0);
+ let (inner1, inner1_dropped) = FinishImmediatelySource::new(ping1_rx, 1);
+ let (outer, outer_dropped) = WrapperSource::new(inner0, inner1);
+
+ // Now the actual test starts.
+
+ let mut event_loop: crate::EventLoop<(Option<i32>, crate::LoopSignal)> =
+ crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+ let signal = event_loop.get_signal();
+
+ // This is how we communicate with the event sources.
+ let mut context = (None, signal);
+
+ let _token = handle
+ .insert_source(outer, |data, _, (evt, sig)| {
+ *evt = Some(data);
+ sig.stop();
+ })
+ .unwrap();
+
+ // Ensure our sources fire.
+ ping0_tx.ping();
+ ping1_tx.ping();
+
+        // Use run() rather than dispatch() because the API makes no guarantee
+        // about how many runs of the event loop it takes to replace the
+        // nested source.
+ event_loop.run(None, &mut context, |_| {}).unwrap();
+
+ // First, make sure the inner source actually did fire.
+ assert_eq!(context.0.take(), Some(0), "first inner source did not fire");
+
+ // Make sure that the outer source is still alive.
+ assert!(
+ !outer_dropped.load(Ordering::Relaxed),
+ "outer source already dropped"
+ );
+
+ // Make sure that the inner child source IS dropped now.
+ assert!(
+ inner0_dropped.load(Ordering::Relaxed),
+ "first inner source not dropped"
+ );
+
+ // Make sure that, in between the first event and second event, the
+ // replacement child source still exists.
+ assert!(
+ !inner1_dropped.load(Ordering::Relaxed),
+ "replacement inner source dropped"
+ );
+
+ // Run the event loop until we get a second event.
+ event_loop.run(None, &mut context, |_| {}).unwrap();
+
+ // Ensure the replacement source fired (which checks that it was
+ // registered and is being processed by the TransientSource).
+ assert_eq!(context.0.take(), Some(1), "replacement source did not fire");
+ }
+
+ #[test]
+ fn test_transient_remove() {
+ // This tests that calling remove(), even before an event source has
+ // requested its own removal, results in the event source being removed.
+
+ const STOP_AT: i32 = 2;
+
+ // A wrapper source to automate the removal of the inner source.
+ struct WrapperSource {
+ inner: TransientSource<Channel<i32>>,
+ }
+
+ impl EventSource for WrapperSource {
+ type Event = i32;
+ type Metadata = ();
+ type Ret = ();
+ type Error = Box<dyn std::error::Error + Sync + Send>;
+
+ fn process_events<F>(
+ &mut self,
+ readiness: crate::Readiness,
+ token: crate::Token,
+ mut callback: F,
+ ) -> Result<PostAction, Self::Error>
+ where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+ {
+ let mut remove = false;
+
+ let mut post_action = self.inner.process_events(readiness, token, |evt, _| {
+ if let Event::Msg(num) = evt {
+ callback(num, &mut ());
+ remove = num >= STOP_AT;
+ }
+ })?;
+
+ if remove {
+ self.inner.remove();
+ post_action |= PostAction::Reregister;
+ }
+
+ Ok(post_action)
+ }
+
+ fn register(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.inner.register(poll, token_factory)
+ }
+
+ fn reregister(
+ &mut self,
+ poll: &mut crate::Poll,
+ token_factory: &mut crate::TokenFactory,
+ ) -> crate::Result<()> {
+ self.inner.reregister(poll, token_factory)
+ }
+
+ fn unregister(&mut self, poll: &mut crate::Poll) -> crate::Result<()> {
+ self.inner.unregister(poll)
+ }
+ }
+
+ // Create our sources and loop.
+
+ let (sender, receiver) = channel();
+ let wrapper = WrapperSource {
+ inner: receiver.into(),
+ };
+
+ let mut event_loop = crate::EventLoop::try_new().unwrap();
+ let handle = event_loop.handle();
+
+ handle
+ .insert_source(wrapper, |num, _, out: &mut Option<_>| {
+ *out = Some(num);
+ })
+ .unwrap();
+
+ // Storage for callback data.
+ let mut out = None;
+
+ // Send some data we expect to get callbacks for.
+ for num in 0..=STOP_AT {
+ sender.send(num).unwrap();
+ event_loop.dispatch(Duration::ZERO, &mut out).unwrap();
+ assert_eq!(out.take(), Some(num));
+ }
+
+ // Now we expect the receiver to be gone.
+ assert!(matches!(
+ sender.send(STOP_AT + 1),
+ Err(std::sync::mpsc::SendError { .. })
+ ));
+ }
+}
+
use std::{cell::RefCell, collections::HashMap, rc::Rc, sync::Arc, time::Duration};
+
+#[cfg(unix)]
+use std::os::unix::io::{AsFd, AsRawFd, BorrowedFd as Borrowed, RawFd as Raw};
+
+#[cfg(windows)]
+use std::os::windows::io::{AsRawSocket, AsSocket, BorrowedSocket as Borrowed, RawSocket as Raw};
+
+use polling::{Event, Events, PollMode, Poller};
+
+use crate::sources::timer::TimerWheel;
+use crate::token::TokenInner;
+use crate::RegistrationToken;
+
+/// Possible modes for registering a file descriptor
+#[derive(Copy, Clone, Debug)]
+pub enum Mode {
+ /// Single event generation
+ ///
+ /// This FD will be disabled as soon as it has generated one event.
+ ///
+ /// The user will need to use `LoopHandle::update()` to re-enable it if
+ /// desired.
+ OneShot,
+
+ /// Level-triggering
+ ///
+ /// This FD will report events on every poll as long as the requested interests
+ /// are available.
+ Level,
+
+ /// Edge-triggering
+ ///
+    /// This FD will report events only when it *gains* one of the requested interests.
+    /// It must thus be fully processed before it will generate events again.
+ ///
+ /// This mode is not supported on certain platforms, and an error will be returned
+ /// if it is used.
+ ///
+ /// ## Supported Platforms
+ ///
+ /// As of the time of writing, the platforms that support edge triggered polling are
+ /// as follows:
+ ///
+ /// - Linux/Android
+ /// - macOS/iOS/tvOS/watchOS
+ /// - FreeBSD/OpenBSD/NetBSD/DragonflyBSD
+ Edge,
+}
+
+/// Interest to register regarding the file descriptor
+#[derive(Copy, Clone, Debug)]
+pub struct Interest {
+ /// Wait for the FD to be readable
+ pub readable: bool,
+
+ /// Wait for the FD to be writable
+ pub writable: bool,
+}
+
+impl Interest {
+ /// Shorthand for empty interest
+ pub const EMPTY: Interest = Interest {
+ readable: false,
+ writable: false,
+ };
+
+ /// Shorthand for read interest
+ pub const READ: Interest = Interest {
+ readable: true,
+ writable: false,
+ };
+
+ /// Shorthand for write interest
+ pub const WRITE: Interest = Interest {
+ readable: false,
+ writable: true,
+ };
+
+ /// Shorthand for read and write interest
+ pub const BOTH: Interest = Interest {
+ readable: true,
+ writable: true,
+ };
+}
+
+/// Readiness for a file descriptor notification
+#[derive(Copy, Clone, Debug)]
+pub struct Readiness {
+ /// Is the FD readable
+ pub readable: bool,
+
+ /// Is the FD writable
+ pub writable: bool,
+
+ /// Is the FD in an error state
+ pub error: bool,
+}
+
+impl Readiness {
+ /// Shorthand for empty readiness
+ pub const EMPTY: Readiness = Readiness {
+ readable: false,
+ writable: false,
+ error: false,
+ };
+}
+
+#[derive(Debug)]
+pub(crate) struct PollEvent {
+ pub(crate) readiness: Readiness,
+ pub(crate) token: Token,
+}
+
+/// Factory for creating tokens in your registrations
+///
+/// When composing event sources, each sub-source needs to
+/// have its own token to identify itself. This factory is
+/// provided to produce such unique tokens.
+
+#[derive(Debug)]
+pub struct TokenFactory {
+ next_token: TokenInner,
+}
+
+impl TokenFactory {
+ pub(crate) fn new(token: TokenInner) -> TokenFactory {
+ TokenFactory {
+ next_token: token.forget_sub_id(),
+ }
+ }
+
+ /// Get the "raw" registration token of this TokenFactory
+ pub(crate) fn registration_token(&self) -> RegistrationToken {
+ RegistrationToken::new(self.next_token.forget_sub_id())
+ }
+
+ /// Produce a new unique token
+ pub fn token(&mut self) -> Token {
+ let token = self.next_token;
+ self.next_token = token.increment_sub_id();
+ Token { inner: token }
+ }
+}
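+
+// Illustrative sketch (not part of the crate): a composed event source asks the
+// factory for one token per sub-source while registering, and later compares the
+// token passed to `process_events()` against the stored ones to tell which
+// sub-source fired. `fd_a`, `fd_b`, `token_a` and `token_b` are hypothetical
+// fields of such a source:
+//
+//     fn register(&mut self, poll: &mut Poll, factory: &mut TokenFactory) -> crate::Result<()> {
+//         self.token_a = factory.token();
+//         self.token_b = factory.token();
+//         unsafe { poll.register(&self.fd_a, Interest::READ, Mode::Level, self.token_a)? };
+//         unsafe { poll.register(&self.fd_b, Interest::READ, Mode::Level, self.token_b)? };
+//         Ok(())
+//     }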
+
+/// A token (for implementation of the [`EventSource`](crate::EventSource) trait)
+///
+/// This token is produced by the [`TokenFactory`] and is used when calling the
+/// [`EventSource`](crate::EventSource) implementations to process events, in order
+/// to identify which sub-source produced them.
+///
+/// You should forward it to the [`Poll`] when registering your file descriptors.
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub struct Token {
+ pub(crate) inner: TokenInner,
+}
+
+/// The polling system
+///
+/// This type represents the polling system of calloop, on which you
+/// can register your file descriptors. This interface is only accessible in
+/// implementations of the [`EventSource`](crate::EventSource) trait.
+///
+/// You only need to interact with this type if you are implementing your
+/// own event sources, while implementing the [`EventSource`](crate::EventSource) trait.
+/// And even in this case, you can often just use the [`Generic`](crate::generic::Generic) event
+/// source and delegate the implementations to it.
+pub struct Poll {
+ /// The handle to wepoll/epoll/kqueue/... used to poll for events.
+ pub(crate) poller: Arc<Poller>,
+
+ /// The buffer of events returned by the poller.
+ events: RefCell<Events>,
+
+ /// The sources registered as level triggered.
+ ///
+ /// Some platforms that `polling` supports do not support level-triggered events. As of the time
+ /// of writing, this only includes Solaris and illumos. To work around this, we emulate level
+ /// triggered events by keeping this map of file descriptors.
+ ///
+ /// One can emulate level triggered events on top of oneshot events by just re-registering the
+ /// file descriptor every time it is polled. However, this is not ideal, as it requires a
+    /// system call every time. It's better to use the integrated system, if available.
+ level_triggered: Option<RefCell<HashMap<usize, (Raw, polling::Event)>>>,
+
+ pub(crate) timers: Rc<RefCell<TimerWheel>>,
+}
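+
+// Illustrative sketch (not part of this file; the names `socket` and `handle` are
+// assumptions): most users never call the registration methods below directly.
+// Wrapping a file descriptor in the `Generic` event source and inserting it into
+// the loop delegates all of the (un)registration plumbing:
+//
+//     use calloop::generic::Generic;
+//     use calloop::{Interest, Mode, PostAction};
+//
+//     let source = Generic::new(socket, Interest::READ, Mode::Level);
+//     handle.insert_source(source, |_readiness, socket, _state| {
+//         // Read from `socket` here until it would block.
+//         Ok(PostAction::Continue)
+//     })?;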
+
+impl std::fmt::Debug for Poll {
+ #[cfg_attr(feature = "nightly_coverage", coverage(off))]
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str("Poll { ... }")
+ }
+}
+
+impl Poll {
+ pub(crate) fn new() -> crate::Result<Poll> {
+ Self::new_inner(false)
+ }
+
+ fn new_inner(force_fallback_lt: bool) -> crate::Result<Poll> {
+ let poller = Poller::new()?;
+ let level_triggered = if poller.supports_level() && !force_fallback_lt {
+ None
+ } else {
+ Some(RefCell::new(HashMap::new()))
+ };
+
+ Ok(Poll {
+ poller: Arc::new(poller),
+ events: RefCell::new(Events::new()),
+ timers: Rc::new(RefCell::new(TimerWheel::new())),
+ level_triggered,
+ })
+ }
+
+ pub(crate) fn poll(&self, mut timeout: Option<Duration>) -> crate::Result<Vec<PollEvent>> {
+ let now = std::time::Instant::now();
+
+ // Adjust the timeout for the timers.
+ if let Some(next_timeout) = self.timers.borrow().next_deadline() {
+ if next_timeout <= now {
+ timeout = Some(Duration::ZERO);
+ } else if let Some(deadline) = timeout {
+ timeout = Some(std::cmp::min(deadline, next_timeout - now));
+ } else {
+ timeout = Some(next_timeout - now);
+ }
+ };
+
+ let mut events = self.events.borrow_mut();
+ events.clear();
+ self.poller.wait(&mut events, timeout)?;
+
+ // Convert `polling` events to `calloop` events.
+ let level_triggered = self.level_triggered.as_ref().map(RefCell::borrow);
+ let mut poll_events = events
+ .iter()
+ .map(|ev| {
+ // If we need to emulate level-triggered events...
+ if let Some(level_triggered) = level_triggered.as_ref() {
+ // ...and this event is from a level-triggered source...
+ if let Some((source, interest)) = level_triggered.get(&ev.key) {
+ // ...then we need to re-register the source.
+ // SAFETY: The source is valid.
+ self.poller
+ .modify(unsafe { Borrowed::borrow_raw(*source) }, *interest)?;
+ }
+ }
+
+ Ok(PollEvent {
+ readiness: Readiness {
+ readable: ev.readable,
+ writable: ev.writable,
+ error: false,
+ },
+ token: Token {
+ inner: TokenInner::from(ev.key),
+ },
+ })
+ })
+ .collect::<std::io::Result<Vec<_>>>()?;
+
+ drop(events);
+
+ // Update 'now' as some time may have elapsed in poll()
+ let now = std::time::Instant::now();
+ let mut timers = self.timers.borrow_mut();
+ while let Some((_, token)) = timers.next_expired(now) {
+ poll_events.push(PollEvent {
+ readiness: Readiness {
+ readable: true,
+ writable: false,
+ error: false,
+ },
+ token,
+ });
+ }
+
+ Ok(poll_events)
+ }
+
+ /// Register a new file descriptor for polling
+ ///
+    /// The file descriptor will be registered with the given interest,
+ /// mode and token. This function will fail if given a
+ /// bad file descriptor or if the provided file descriptor is already
+ /// registered.
+ ///
+ /// # Safety
+ ///
+ /// The registered source must not be dropped before it is unregistered.
+ ///
+ /// # Leaking tokens
+ ///
+ /// If your event source is dropped without being unregistered, the token
+ /// passed in here will remain on the heap and continue to be used by the
+ /// polling system even though no event source will match it.
+ pub unsafe fn register(
+ &self,
+ #[cfg(unix)] fd: impl AsFd,
+ #[cfg(windows)] fd: impl AsSocket,
+ interest: Interest,
+ mode: Mode,
+ token: Token,
+ ) -> crate::Result<()> {
+ let raw = {
+ #[cfg(unix)]
+ {
+ fd.as_fd().as_raw_fd()
+ }
+
+ #[cfg(windows)]
+ {
+ fd.as_socket().as_raw_socket()
+ }
+ };
+
+ let ev = cvt_interest(interest, token);
+
+ // SAFETY: See invariant on function.
+ unsafe {
+ self.poller
+ .add_with_mode(raw, ev, cvt_mode(mode, self.poller.supports_level()))?;
+ }
+
+ // If this is level triggered and we're emulating level triggered mode...
+ if let (Mode::Level, Some(level_triggered)) = (mode, self.level_triggered.as_ref()) {
+ // ...then we need to keep track of the source.
+ let mut level_triggered = level_triggered.borrow_mut();
+ level_triggered.insert(ev.key, (raw, ev));
+ }
+
+ Ok(())
+ }
+
+ /// Update the registration for a file descriptor
+ ///
+ /// This allows you to change the interest, mode or token of a file
+ /// descriptor. Fails if the provided fd is not currently registered.
+ ///
+ /// See note on [`register()`](Self::register()) regarding leaking.
+ pub fn reregister(
+ &self,
+ #[cfg(unix)] fd: impl AsFd,
+ #[cfg(windows)] fd: impl AsSocket,
+ interest: Interest,
+ mode: Mode,
+ token: Token,
+ ) -> crate::Result<()> {
+ let (borrowed, raw) = {
+ #[cfg(unix)]
+ {
+ (fd.as_fd(), fd.as_fd().as_raw_fd())
+ }
+
+ #[cfg(windows)]
+ {
+ (fd.as_socket(), fd.as_socket().as_raw_socket())
+ }
+ };
+
+ let ev = cvt_interest(interest, token);
+ self.poller
+ .modify_with_mode(borrowed, ev, cvt_mode(mode, self.poller.supports_level()))?;
+
+ // If this is level triggered and we're emulating level triggered mode...
+ if let (Mode::Level, Some(level_triggered)) = (mode, self.level_triggered.as_ref()) {
+ // ...then we need to keep track of the source.
+ let mut level_triggered = level_triggered.borrow_mut();
+ level_triggered.insert(ev.key, (raw, ev));
+ }
+
+ Ok(())
+ }
+
+ /// Unregister a file descriptor
+ ///
+ /// This file descriptor will no longer generate events. Fails if the
+ /// provided file descriptor is not currently registered.
+ pub fn unregister(
+ &self,
+ #[cfg(unix)] fd: impl AsFd,
+ #[cfg(windows)] fd: impl AsSocket,
+ ) -> crate::Result<()> {
+ let (borrowed, raw) = {
+ #[cfg(unix)]
+ {
+ (fd.as_fd(), fd.as_fd().as_raw_fd())
+ }
+
+ #[cfg(windows)]
+ {
+ (fd.as_socket(), fd.as_socket().as_raw_socket())
+ }
+ };
+ self.poller.delete(borrowed)?;
+
+ if let Some(level_triggered) = self.level_triggered.as_ref() {
+ let mut level_triggered = level_triggered.borrow_mut();
+ level_triggered.retain(|_, (source, _)| *source != raw);
+ }
+
+ Ok(())
+ }
+
+ /// Get a thread-safe handle which can be used to wake up the `Poll`.
+ pub(crate) fn notifier(&self) -> Notifier {
+ Notifier(self.poller.clone())
+ }
+
+ /// Get a reference to the poller.
+ pub(crate) fn poller(&self) -> &Arc<Poller> {
+ &self.poller
+ }
+}
+
+/// Thread-safe handle which can be used to wake up the `Poll`.
+#[derive(Clone)]
+pub(crate) struct Notifier(Arc<Poller>);
+
+impl Notifier {
+ pub(crate) fn notify(&self) -> crate::Result<()> {
+ self.0.notify()?;
+
+ Ok(())
+ }
+}
+
+fn cvt_interest(interest: Interest, tok: Token) -> Event {
+ let mut event = Event::none(tok.inner.into());
+ event.readable = interest.readable;
+ event.writable = interest.writable;
+ event
+}
+
+fn cvt_mode(mode: Mode, supports_other_modes: bool) -> PollMode {
+ if !supports_other_modes {
+ return PollMode::Oneshot;
+ }
+
+ match mode {
+ Mode::Edge => PollMode::Edge,
+ Mode::Level => PollMode::Level,
+ Mode::OneShot => PollMode::Oneshot,
+ }
+}
+
// Several implementations of the internals of `Token` depending on the size of `usize`
+
+use std::convert::TryInto;
+
+#[cfg(target_pointer_width = "64")]
+const BITS_VERSION: usize = 16;
+#[cfg(target_pointer_width = "64")]
+const BITS_SUBID: usize = 16;
+
+#[cfg(target_pointer_width = "32")]
+const BITS_VERSION: usize = 8;
+#[cfg(target_pointer_width = "32")]
+const BITS_SUBID: usize = 8;
+
+#[cfg(target_pointer_width = "16")]
+const BITS_VERSION: usize = 4;
+#[cfg(target_pointer_width = "16")]
+const BITS_SUBID: usize = 4;
+
+const MASK_VERSION: usize = (1 << BITS_VERSION) - 1;
+const MASK_SUBID: usize = (1 << BITS_SUBID) - 1;
+
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+pub(crate) struct TokenInner {
+ id: u32,
+ version: u16,
+ sub_id: u16,
+}
+
+impl TokenInner {
+ pub(crate) fn new(id: usize) -> Result<TokenInner, ()> {
+ Ok(TokenInner {
+ id: id.try_into().map_err(|_| ())?,
+ version: 0,
+ sub_id: 0,
+ })
+ }
+
+ pub(crate) fn get_id(self) -> usize {
+ self.id as usize
+ }
+
+ pub(crate) fn same_source_as(self, other: TokenInner) -> bool {
+ self.id == other.id && self.version == other.version
+ }
+
+ pub(crate) fn increment_version(self) -> TokenInner {
+ TokenInner {
+ id: self.id,
+ version: self.version.wrapping_add(1) & (MASK_VERSION as u16),
+ sub_id: 0,
+ }
+ }
+
+ pub(crate) fn increment_sub_id(self) -> TokenInner {
+ let sub_id = match self.sub_id.checked_add(1) {
+ Some(sid) if sid <= (MASK_SUBID as u16) => sid,
+ _ => panic!("Maximum number of sub-ids reached for source #{}", self.id),
+ };
+
+ TokenInner {
+ id: self.id,
+ version: self.version,
+ sub_id,
+ }
+ }
+
+ pub(crate) fn forget_sub_id(self) -> TokenInner {
+ TokenInner {
+ id: self.id,
+ version: self.version,
+ sub_id: 0,
+ }
+ }
+}
+
+impl From<usize> for TokenInner {
+ fn from(value: usize) -> Self {
+ let sub_id = (value & MASK_SUBID) as u16;
+ let version = ((value >> BITS_SUBID) & MASK_VERSION) as u16;
+ let id = (value >> (BITS_SUBID + BITS_VERSION)) as u32;
+ TokenInner {
+ id,
+ version,
+ sub_id,
+ }
+ }
+}
+
+impl From<TokenInner> for usize {
+ fn from(token: TokenInner) -> Self {
+ ((token.id as usize) << (BITS_SUBID + BITS_VERSION))
+ + ((token.version as usize) << BITS_SUBID)
+ + (token.sub_id as usize)
+ }
+}
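+
+// Worked example of the packing on 64-bit targets (BITS_VERSION = BITS_SUBID = 16):
+// the usize layout is [ id: 32 bits | version: 16 bits | sub_id: 16 bits ], so
+//
+//     TokenInner { id: 1, version: 2, sub_id: 3 }
+//
+// converts to (1 << 32) + (2 << 16) + 3 = 0x0000_0001_0002_0003, and converting
+// that usize back through `From<usize>` yields the same `TokenInner` again.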
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[should_panic]
+ #[test]
+ fn overflow_subid() {
+ let token = TokenInner {
+ id: 0,
+ version: 0,
+ sub_id: MASK_SUBID as u16,
+ };
+ token.increment_sub_id();
+ }
+}
+
if the result is Ok
and the value inside of it matches a predicate.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(x.is_ok_and(|x| x > 1), true);\n\nlet x: Result<u32, &str> = Ok(0);\nassert_eq!(x.is_ok_and(|x| x > 1), false);\n\nlet x: Result<u32, &str> = Err(\"hey\");\nassert_eq!(x.is_ok_and(|x| x > 1), false);
Returns true
if the result is Err
and the value inside of it matches a predicate.
use std::io::{Error, ErrorKind};\n\nlet x: Result<u32, Error> = Err(Error::new(ErrorKind::NotFound, \"!\"));\nassert_eq!(x.is_err_and(|x| x.kind() == ErrorKind::NotFound), true);\n\nlet x: Result<u32, Error> = Err(Error::new(ErrorKind::PermissionDenied, \"!\"));\nassert_eq!(x.is_err_and(|x| x.kind() == ErrorKind::NotFound), false);\n\nlet x: Result<u32, Error> = Ok(123);\nassert_eq!(x.is_err_and(|x| x.kind() == ErrorKind::NotFound), false);
Converts from Result<T, E>
to Option<E>
.
Converts self
into an Option<E>
, consuming self
,\nand discarding the success value, if any.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(x.err(), None);\n\nlet x: Result<u32, &str> = Err(\"Nothing here\");\nassert_eq!(x.err(), Some(\"Nothing here\"));
Converts from &Result<T, E>
to Result<&T, &E>
.
Produces a new Result
, containing a reference\ninto the original, leaving the original in place.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(x.as_ref(), Ok(&2));\n\nlet x: Result<u32, &str> = Err(\"Error\");\nassert_eq!(x.as_ref(), Err(&\"Error\"));
Converts from &mut Result<T, E>
to Result<&mut T, &mut E>
.
fn mutate(r: &mut Result<i32, i32>) {\n match r.as_mut() {\n Ok(v) => *v = 42,\n Err(e) => *e = 0,\n }\n}\n\nlet mut x: Result<i32, i32> = Ok(2);\nmutate(&mut x);\nassert_eq!(x.unwrap(), 42);\n\nlet mut x: Result<i32, i32> = Err(13);\nmutate(&mut x);\nassert_eq!(x.unwrap_err(), 0);
Maps a Result<T, E>
to Result<U, E>
by applying a function to a\ncontained Ok
value, leaving an Err
value untouched.
This function can be used to compose the results of two functions.
\nPrint the numbers on each line of a string multiplied by two.
\n\nlet line = \"1\\n2\\n3\\n4\\n\";\n\nfor num in line.lines() {\n match num.parse::<i32>().map(|i| i * 2) {\n Ok(n) => println!(\"{n}\"),\n Err(..) => {}\n }\n}
Returns the provided default (if Err
), or\napplies a function to the contained value (if Ok
).
Arguments passed to map_or
are eagerly evaluated; if you are passing\nthe result of a function call, it is recommended to use map_or_else
,\nwhich is lazily evaluated.
let x: Result<_, &str> = Ok(\"foo\");\nassert_eq!(x.map_or(42, |v| v.len()), 3);\n\nlet x: Result<&str, _> = Err(\"bar\");\nassert_eq!(x.map_or(42, |v| v.len()), 42);
Maps a Result<T, E>
to U
by applying fallback function default
to\na contained Err
value, or function f
to a contained Ok
value.
This function can be used to unpack a successful result\nwhile handling an error.
\nlet k = 21;\n\nlet x : Result<_, &str> = Ok(\"foo\");\nassert_eq!(x.map_or_else(|e| k * 2, |v| v.len()), 3);\n\nlet x : Result<&str, _> = Err(\"bar\");\nassert_eq!(x.map_or_else(|e| k * 2, |v| v.len()), 42);
Maps a Result<T, E>
to Result<T, F>
by applying a function to a\ncontained Err
value, leaving an Ok
value untouched.
This function can be used to pass through a successful result while handling\nan error.
\nfn stringify(x: u32) -> String { format!(\"error code: {x}\") }\n\nlet x: Result<u32, u32> = Ok(2);\nassert_eq!(x.map_err(stringify), Ok(2));\n\nlet x: Result<u32, u32> = Err(13);\nassert_eq!(x.map_err(stringify), Err(\"error code: 13\".to_string()));
Converts from Result<T, E>
(or &Result<T, E>
) to Result<&<T as Deref>::Target, &E>
.
Coerces the Ok
variant of the original Result
via Deref
\nand returns the new Result
.
let x: Result<String, u32> = Ok(\"hello\".to_string());\nlet y: Result<&str, &u32> = Ok(\"hello\");\nassert_eq!(x.as_deref(), y);\n\nlet x: Result<String, u32> = Err(42);\nlet y: Result<&str, &u32> = Err(&42);\nassert_eq!(x.as_deref(), y);
Converts from Result<T, E>
(or &mut Result<T, E>
) to Result<&mut <T as DerefMut>::Target, &mut E>
.
Coerces the Ok
variant of the original Result
via DerefMut
\nand returns the new Result
.
let mut s = \"HELLO\".to_string();\nlet mut x: Result<String, u32> = Ok(\"hello\".to_string());\nlet y: Result<&mut str, &mut u32> = Ok(&mut s);\nassert_eq!(x.as_deref_mut().map(|x| { x.make_ascii_uppercase(); x }), y);\n\nlet mut i = 42;\nlet mut x: Result<String, u32> = Err(42);\nlet y: Result<&mut str, &mut u32> = Err(&mut i);\nassert_eq!(x.as_deref_mut().map(|x| { x.make_ascii_uppercase(); x }), y);
Returns an iterator over the possibly contained value.
\nThe iterator yields one value if the result is Result::Ok
, otherwise none.
let x: Result<u32, &str> = Ok(7);\nassert_eq!(x.iter().next(), Some(&7));\n\nlet x: Result<u32, &str> = Err(\"nothing!\");\nassert_eq!(x.iter().next(), None);
Returns a mutable iterator over the possibly contained value.
\nThe iterator yields one value if the result is Result::Ok
, otherwise none.
let mut x: Result<u32, &str> = Ok(7);\nmatch x.iter_mut().next() {\n Some(v) => *v = 40,\n None => {},\n}\nassert_eq!(x, Ok(40));\n\nlet mut x: Result<u32, &str> = Err(\"nothing!\");\nassert_eq!(x.iter_mut().next(), None);
Returns the contained Ok
value, consuming the self
value.
Because this function may panic, its use is generally discouraged.\nInstead, prefer to use pattern matching and handle the Err
\ncase explicitly, or call unwrap_or
, unwrap_or_else
, or\nunwrap_or_default
.
Panics if the value is an Err
, with a panic message including the\npassed message, and the content of the Err
.
let x: Result<u32, &str> = Err(\"emergency failure\");\nx.expect(\"Testing expect\"); // panics with `Testing expect: emergency failure`
We recommend that expect
messages are used to describe the reason you\nexpect the Result
should be Ok
.
let path = std::env::var(\"IMPORTANT_PATH\")\n .expect(\"env variable `IMPORTANT_PATH` should be set by `wrapper_script.sh`\");
Hint: If you’re having trouble remembering how to phrase expect\nerror messages remember to focus on the word “should” as in “env\nvariable should be set by blah” or “the given binary should be available\nand executable by the current user”.
\nFor more detail on expect message styles and the reasoning behind our recommendation please\nrefer to the section on “Common Message\nStyles” in the\nstd::error
module docs.
Returns the contained Ok
value, consuming the self
value.
Because this function may panic, its use is generally discouraged.\nInstead, prefer to use pattern matching and handle the Err
\ncase explicitly, or call unwrap_or
, unwrap_or_else
, or\nunwrap_or_default
.
Panics if the value is an Err
, with a panic message provided by the\nErr
’s value.
Basic usage:
\n\nlet x: Result<u32, &str> = Ok(2);\nassert_eq!(x.unwrap(), 2);
let x: Result<u32, &str> = Err(\"emergency failure\");\nx.unwrap(); // panics with `emergency failure`
Returns the contained Ok
value or a default
Consumes the self
argument then, if Ok
, returns the contained\nvalue, otherwise if Err
, returns the default value for that\ntype.
Converts a string to an integer, turning poorly-formed strings\ninto 0 (the default value for integers). parse
converts\na string to any other type that implements FromStr
, returning an\nErr
on error.
let good_year_from_input = \"1909\";\nlet bad_year_from_input = \"190blarg\";\nlet good_year = good_year_from_input.parse().unwrap_or_default();\nlet bad_year = bad_year_from_input.parse().unwrap_or_default();\n\nassert_eq!(1909, good_year);\nassert_eq!(0, bad_year);
Returns the contained Err
value, consuming the self
value.
Panics if the value is an Ok
, with a panic message including the\npassed message, and the content of the Ok
.
let x: Result<u32, &str> = Ok(10);\nx.expect_err(\"Testing expect_err\"); // panics with `Testing expect_err: 10`
Returns the contained Err
value, consuming the self
value.
Panics if the value is an Ok
, with a custom panic message provided\nby the Ok
’s value.
let x: Result<u32, &str> = Ok(2);\nx.unwrap_err(); // panics with `2`
let x: Result<u32, &str> = Err(\"emergency failure\");\nassert_eq!(x.unwrap_err(), \"emergency failure\");
unwrap_infallible
)Returns the contained Ok
value, but never panics.
Unlike unwrap
, this method is known to never panic on the\nresult types it is implemented for. Therefore, it can be used\ninstead of unwrap
as a maintainability safeguard that will fail\nto compile if the error type of the Result
is later changed\nto an error that can actually occur.
\nfn only_good_news() -> Result<String, !> {\n Ok(\"this is fine\".into())\n}\n\nlet s: String = only_good_news().into_ok();\nprintln!(\"{s}\");
unwrap_infallible
)Returns the contained Err
value, but never panics.
Unlike unwrap_err
, this method is known to never panic on the\nresult types it is implemented for. Therefore, it can be used\ninstead of unwrap_err
as a maintainability safeguard that will fail\nto compile if the ok type of the Result
is later changed\nto a type that can actually occur.
\nfn only_bad_news() -> Result<!, String> {\n Err(\"Oops, it failed\".into())\n}\n\nlet error: String = only_bad_news().into_err();\nprintln!(\"{error}\");
Returns res
if the result is Ok
, otherwise returns the Err
value of self
.
Arguments passed to and
are eagerly evaluated; if you are passing the\nresult of a function call, it is recommended to use and_then
, which is\nlazily evaluated.
let x: Result<u32, &str> = Ok(2);\nlet y: Result<&str, &str> = Err(\"late error\");\nassert_eq!(x.and(y), Err(\"late error\"));\n\nlet x: Result<u32, &str> = Err(\"early error\");\nlet y: Result<&str, &str> = Ok(\"foo\");\nassert_eq!(x.and(y), Err(\"early error\"));\n\nlet x: Result<u32, &str> = Err(\"not a 2\");\nlet y: Result<&str, &str> = Err(\"late error\");\nassert_eq!(x.and(y), Err(\"not a 2\"));\n\nlet x: Result<u32, &str> = Ok(2);\nlet y: Result<&str, &str> = Ok(\"different result type\");\nassert_eq!(x.and(y), Ok(\"different result type\"));
Calls op
if the result is Ok
, otherwise returns the Err
value of self
.
This function can be used for control flow based on Result
values.
fn sq_then_to_string(x: u32) -> Result<String, &'static str> {\n x.checked_mul(x).map(|sq| sq.to_string()).ok_or(\"overflowed\")\n}\n\nassert_eq!(Ok(2).and_then(sq_then_to_string), Ok(4.to_string()));\nassert_eq!(Ok(1_000_000).and_then(sq_then_to_string), Err(\"overflowed\"));\nassert_eq!(Err(\"not a number\").and_then(sq_then_to_string), Err(\"not a number\"));
Often used to chain fallible operations that may return Err
.
use std::{io::ErrorKind, path::Path};\n\n// Note: on Windows \"/\" maps to \"C:\\\"\nlet root_modified_time = Path::new(\"/\").metadata().and_then(|md| md.modified());\nassert!(root_modified_time.is_ok());\n\nlet should_fail = Path::new(\"/bad/path\").metadata().and_then(|md| md.modified());\nassert!(should_fail.is_err());\nassert_eq!(should_fail.unwrap_err().kind(), ErrorKind::NotFound);
Returns res
if the result is Err
, otherwise returns the Ok
value of self
.
Arguments passed to or
are eagerly evaluated; if you are passing the\nresult of a function call, it is recommended to use or_else
, which is\nlazily evaluated.
let x: Result<u32, &str> = Ok(2);\nlet y: Result<u32, &str> = Err(\"late error\");\nassert_eq!(x.or(y), Ok(2));\n\nlet x: Result<u32, &str> = Err(\"early error\");\nlet y: Result<u32, &str> = Ok(2);\nassert_eq!(x.or(y), Ok(2));\n\nlet x: Result<u32, &str> = Err(\"not a 2\");\nlet y: Result<u32, &str> = Err(\"late error\");\nassert_eq!(x.or(y), Err(\"late error\"));\n\nlet x: Result<u32, &str> = Ok(2);\nlet y: Result<u32, &str> = Ok(100);\nassert_eq!(x.or(y), Ok(2));
Calls op
if the result is Err
, otherwise returns the Ok
value of self
.
This function can be used for control flow based on result values.
\nfn sq(x: u32) -> Result<u32, u32> { Ok(x * x) }\nfn err(x: u32) -> Result<u32, u32> { Err(x) }\n\nassert_eq!(Ok(2).or_else(sq).or_else(sq), Ok(2));\nassert_eq!(Ok(2).or_else(err).or_else(sq), Ok(2));\nassert_eq!(Err(3).or_else(sq).or_else(err), Ok(9));\nassert_eq!(Err(3).or_else(err).or_else(err), Err(3));
Returns the contained Ok
value or a provided default.
Arguments passed to unwrap_or
are eagerly evaluated; if you are passing\nthe result of a function call, it is recommended to use unwrap_or_else
,\nwhich is lazily evaluated.
let default = 2;\nlet x: Result<u32, &str> = Ok(9);\nassert_eq!(x.unwrap_or(default), 9);\n\nlet x: Result<u32, &str> = Err(\"error\");\nassert_eq!(x.unwrap_or(default), default);
Returns the contained Ok
value, consuming the self
value,\nwithout checking that the value is not an Err
.
Calling this method on an Err
is undefined behavior.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(unsafe { x.unwrap_unchecked() }, 2);
let x: Result<u32, &str> = Err(\"emergency failure\");\nunsafe { x.unwrap_unchecked(); } // Undefined behavior!
Returns the contained Err
value, consuming the self
value,\nwithout checking that the value is not an Ok
.
Calling this method on an Ok
is undefined behavior.
let x: Result<u32, &str> = Ok(2);\nunsafe { x.unwrap_err_unchecked() }; // Undefined behavior!
let x: Result<u32, &str> = Err(\"emergency failure\");\nassert_eq!(unsafe { x.unwrap_err_unchecked() }, \"emergency failure\");
Maps a Result<&mut T, E>
to a Result<T, E>
by copying the contents of the\nOk
part.
let mut val = 12;\nlet x: Result<&mut i32, i32> = Ok(&mut val);\nassert_eq!(x, Ok(&mut 12));\nlet copied = x.copied();\nassert_eq!(copied, Ok(12));
Maps a Result<&mut T, E>
to a Result<T, E>
by cloning the contents of the\nOk
part.
let mut val = 12;\nlet x: Result<&mut i32, i32> = Ok(&mut val);\nassert_eq!(x, Ok(&mut 12));\nlet cloned = x.cloned();\nassert_eq!(cloned, Ok(12));
Transposes a Result
of an Option
into an Option
of a Result
.
Ok(None)
will be mapped to None
.\nOk(Some(_))
and Err(_)
will be mapped to Some(Ok(_))
and Some(Err(_))
.
#[derive(Debug, Eq, PartialEq)]\nstruct SomeErr;\n\nlet x: Result<Option<i32>, SomeErr> = Ok(Some(5));\nlet y: Option<Result<i32, SomeErr>> = Some(Ok(5));\nassert_eq!(x.transpose(), y);
result_flattening
)Converts from Result<Result<T, E>, E>
to Result<T, E>
#![feature(result_flattening)]\nlet x: Result<Result<&'static str, u32>, u32> = Ok(Ok(\"hello\"));\nassert_eq!(Ok(\"hello\"), x.flatten());\n\nlet x: Result<Result<&'static str, u32>, u32> = Ok(Err(6));\nassert_eq!(Err(6), x.flatten());\n\nlet x: Result<Result<&'static str, u32>, u32> = Err(6);\nassert_eq!(Err(6), x.flatten());
Flattening only removes one level of nesting at a time:
\n\n#![feature(result_flattening)]\nlet x: Result<Result<Result<&'static str, u32>, u32>, u32> = Ok(Ok(Ok(\"hello\")));\nassert_eq!(Ok(Ok(\"hello\")), x.flatten());\nassert_eq!(Ok(\"hello\"), x.flatten().flatten());
self
and other
) and is used by the <=
\noperator. Read moreTakes each element in the Iterator
: if it is an Err
, no further\nelements are taken, and the Err
is returned. Should no Err
occur, a\ncontainer with the values of each Result
is returned.
Here is an example which increments every integer in a vector,\nchecking for overflow:
\n\nlet v = vec![1, 2];\nlet res: Result<Vec<u32>, &'static str> = v.iter().map(|x: &u32|\n x.checked_add(1).ok_or(\"Overflow!\")\n).collect();\nassert_eq!(res, Ok(vec![2, 3]));
Here is another example that tries to subtract one from another list of integers, this time checking for underflow:
let v = vec![1, 2, 0];
let res: Result<Vec<u32>, &'static str> = v.iter().map(|x: &u32|
    x.checked_sub(1).ok_or("Underflow!")
).collect();
assert_eq!(res, Err("Underflow!"));
Here is a variation on the previous example, showing that no further elements are taken from iter after the first Err.
let v = vec![3, 2, 1, 10];
let mut shared = 0;
let res: Result<Vec<u32>, &'static str> = v.iter().map(|x: &u32| {
    shared += x;
    x.checked_sub(2).ok_or("Underflow!")
}).collect();
assert_eq!(res, Err("Underflow!"));
assert_eq!(shared, 6);
Since the third element caused an underflow, no further elements were taken, so the final value of shared is 6 (= 3 + 2 + 1), not 16.
Returns a consuming iterator over the possibly contained value.
The iterator yields one value if the result is Result::Ok, otherwise none.
let x: Result<u32, &str> = Ok(5);
let v: Vec<u32> = x.into_iter().collect();
assert_eq!(v, [5]);

let x: Result<u32, &str> = Err("nothing!");
let v: Vec<u32> = x.into_iter().collect();
assert_eq!(v, []);
Takes each element in the Iterator: if it is an Err, no further elements are taken, and the Err is returned. Should no Err occur, the sum of all elements is returned.
This sums up every integer in a vector, rejecting the sum if a negative element is encountered:
let f = |&x: &i32| if x < 0 { Err("Negative element found") } else { Ok(x) };
let v = vec![1, 2];
let res: Result<i32, _> = v.iter().map(f).sum();
assert_eq!(res, Ok(3));
let v = vec![1, -2];
let res: Result<i32, _> = v.iter().map(f).sum();
assert_eq!(res, Err("Negative element found"));
Takes each element in the Iterator: if it is an Err, no further elements are taken, and the Err is returned. Should no Err occur, the product of all elements is returned.
This multiplies each number in a vector of strings, if a string could not be parsed the operation returns Err:
let nums = vec!["5", "10", "1", "2"];
let total: Result<usize, _> = nums.iter().map(|w| w.parse::<usize>()).product();
assert_eq!(total, Ok(100));
let nums = vec!["5", "10", "one", "2"];
let total: Result<usize, _> = nums.iter().map(|w| w.parse::<usize>()).product();
assert!(total.is_err());
If you're looking for calloop's API documentation, it is available on docs.rs for the released versions. There are also the docs of the current development version.
This book presents a step-by-step tutorial to get yourself familiar with calloop and how it is used:
+An event loop is one way to write concurrent code. Other ways include threading (sort of), or asynchronous syntax.
+When you write concurrent code, you need to know two things:
+This chapter covers what the first thing means, and how Calloop accomplishes the second thing.
+A blocking operation is one that waits for an event to happen, and doesn't do anything else while it's waiting. For example, if you try to read from a network socket, and there is no data available, the read operation could wait for some indefinite amount of time. Your program will be in a state where it does not need to use any CPU cycles, or indeed do anything at all, and it won't proceed until there is data to read.
+Examples of blocking operations are:
+When any of these operations are ready to go, we call it an event. We call the underlying things (files, network sockets, timers, etc.) sources for events. So, for example, you can create an event source that corresponds to a file, and it will generate events when it is ready for reading, or writing, or encounters an error.
+An event loop like Calloop, as the name suggests, runs in a loop. At the start of the loop, Calloop checks all the sources you've added to see if any events have happened for those sources. If they have, Calloop will call a function that you provide (known as a callback).
+This function will (possibly) be given some data for the event itself (eg. the bytes received), some state for the event source (eg. the socket, or a type that wraps it in a neater API), and some state for the whole program.
+Calloop will do this one by one for each source that has a new event. If a file is ready for reading, your file-event-source callback will be called. If a timer has elapsed, your timer-event-source callback will be called.
+It is up to you to write the code to do things when events happen. For example, your callback might read data from a file "ready for reading" event into a queue. When the queue contains a valid message, the same callback could send that message over an internal channel to another event source. That second event source could have its own callback that processes entire messages and updates the program's state. And so on.
+This "one by one" nature of event loops is important. When you approach concurrency using threads, operations in any thread can be interleaved with operations in any other thread. This is typically made robust by either passing messages or using shared memory with synchronisation.
+Callbacks in an event loop do not run in parallel, they run one after the other. Unless you (or your dependencies) have introduced threading, you can (and should) write your callbacks as single-threaded code.
+This single-threaded nature makes event loops much more similar to code that uses async
/await
than to multithreaded code. There are benefits and tradeoffs to either approach.
Calloop will take care of a lot of integration and error handling boilerplate for you. It also makes it clearer what parts of your code are the non-blocking actions to perform as a result of events. If you like to think of your program in terms of taking action in reaction to events, this can be a great advantage!
+However, this comes at the expense of needing to make your program's state much more explicit. For example, take this async code:
+do_thing_one().await;
+do_thing_two().await;
+do_thing_three().await;
+
+The state of the program is simply given by: what line is it up to? You know if it's done "thing one" because execution has proceeded to line two. No other state is required. In Calloop, however, you will need extra variables and code so that when your callback is called, it knows whether to run do_thing_one()
, do_thing_two()
, or do_thing_three()
.
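To make that concrete, here is a rough sketch of my own (plain Rust, not something from calloop's API) of what that extra state might look like when a callback has to remember which step it is up to:
// A made-up illustration: the "which line are we up to?" state that an async
// block keeps implicitly has to become explicit data in callback style.
#[derive(Clone, Copy)]
enum Step {
    ThingOne,
    ThingTwo,
    ThingThree,
    Done,
}

struct State {
    step: Step,
}

// An event callback would look at the stored step, do the next non-blocking
// piece of work, and record where to resume on the next event.
fn on_event(state: &mut State) {
    state.step = match state.step {
        Step::ThingOne => Step::ThingTwo,   // "thing one" finished, start two
        Step::ThingTwo => Step::ThingThree, // "thing two" finished, start three
        Step::ThingThree | Step::Done => Step::Done,
    };
}

fn main() {
    let mut state = State { step: Step::ThingOne };
    for _ in 0..3 {
        on_event(&mut state);
    }
}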
All of this leads us to the most important rule of event loop code: never block the loop! This means: never use blocking calls inside one of your event callbacks. Do not use synchronous file write()
calls in a callback. Do not sleep()
in a callback. Do not join()
a thread in a callback. Don't you do it!
If you do, the event loop will have no way to proceed, and just... wait for your blocking operation to complete. Nothing is going to run in a parallel thread. Nothing is going to stop your callback and move on to the next one. If your callback needs to wait for a blocking operation, your code must allow it to keep track of where it's up to, return from the callback, and wait for the event like any other.
+Calloop is designed to work by composition. This means that you build up more complex logic in your program by combining simpler event sources into more complex ones. Want a network socket with custom backoff/timeout logic? Create a type containing a network socket using the Generic file descriptor adapter, a timer, and tie them together with your backoff logic and state. There is a much more detailed example of composition in our ZeroMQ example.
+ +Calloop's structure is entirely built around the EventSource
trait, which represents something that is capable of generating events. To receive those events, you need to give ownership of the event source to calloop, along with a closure that will be invoked whenever said source generates an event. This is thus a push-based model, which is best suited for contexts where your program needs to react to (unpredictable) outside events, rather than wait efficiently for the completion of some operation it initiated.
The Generic
event source wraps a file descriptor ("fd") and fires its callback any time there are events on it ie. becoming readable or writable, or encountering an error. It's pretty simple, but it's what every other event source is based around. And since the platforms that calloop runs on expose many different kinds of events via fds, it's usually the key to using those events in calloop.
For example on Linux, fd-based interfaces are available for GPIO, I2C, USB, UART, network interfaces, timers and many other systems. Integrating these into calloop starts with obtaining the appropriate fd, creating a Generic
event source from it, and building up a more useful, abstracted event source around that. A detailed example of this is given for ZeroMQ.
You do not have to use a low-level fd either: any type that implements AsFd
can be provided. This means that you can use a wrapper type that handles allocation and disposal itself, and implement AsRawFd
on it so that Generic
can manage it in the event loop.
Creating a Generic
event source requires three things:
OwnedFd
or a wrapper type that implements AsFd
The easiest constructor to use is the new()
method, but if you need control over the associated error type there is also new_with_error()
.
Rust 1.63 introduced a concept of file descriptor ownership and borrowing through the OwnedFd
and BorrowedFd
types. The AsFd
trait provides a way to get a BorrowedFd
corresponding to a file, socket, etc. while guaranteeing the fd will be valid for the lifetime of the BorrowedFd
.
Not all third party crates use AsFd
yet, and may instead provide types implementing AsRawFd
. 'AsFdWrapper' provides a way to adapt these types. To use this safely, ensure the AsRawFd
implementation of the type it wraps returns a valid fd as long as the type exists. And to avoid an fd leak, it should ultimately be close
d properly.
Safe types like OwnedFd
and BorrowedFd
should be preferred over RawFd
s, and the use of RawFd
s outside of implementing FFI shouldn't be necessary as libraries move to using the IO safe types and traits.
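As a rough sketch of that idea (assuming a recent calloop where Generic::new accepts any AsFd type; the MyDevice wrapper and make_source function are my own illustration, not part of calloop), a wrapper type only needs to hand out a BorrowedFd:
use std::os::fd::{AsFd, BorrowedFd, OwnedFd};

// A made-up wrapper that owns some fd-based resource and hands out a
// BorrowedFd for as long as it is alive.
struct MyDevice {
    fd: OwnedFd,
}

impl AsFd for MyDevice {
    fn as_fd(&self) -> BorrowedFd<'_> {
        self.fd.as_fd()
    }
}

// With that in place, the device can be wrapped in a Generic source and
// inserted into an event loop (the callback is omitted here).
fn make_source(device: MyDevice) -> calloop::generic::Generic<MyDevice> {
    calloop::generic::Generic::new(device, calloop::Interest::READ, calloop::Mode::Level)
}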
Timer event sources are used to manipulate time-related actions. Those are provided under the calloop::timer
module, with the Timer
type at its core.
A Timer
source has a simple behavior: it is programmed to wait for some duration, or until a certain point in time. Once that deadline is reached, the source generates an event.
So with use calloop::timer::Timer
at the top of our .rs
file, we can create a timer that will wait for 5 seconds:
use std::time::Duration;
+
+use calloop::{
+ timer::{TimeoutAction, Timer},
+ EventLoop,
+};
+
+fn main() {
+ let mut event_loop = EventLoop::try_new().expect("Failed to initialize the event loop!");
+
+ let timer = Timer::from_duration(Duration::from_secs(5));
+
+ event_loop
+ .handle()
+ .insert_source(timer, |deadline, _: &mut (), _shared_data| {
+ println!("Event fired for: {:?}", deadline);
+ TimeoutAction::Drop
+ })
+ .expect("Failed to insert event source!");
+
+ event_loop
+ .dispatch(None, &mut ())
+ .expect("Error during event loop!");
+}
+
+We have an event source, we have our shared data, and we know how to start our loop running. All that is left is to learn how to combine these things:
+use std::time::Duration;
+
+use calloop::{
+ timer::{TimeoutAction, Timer},
+ EventLoop,
+};
+
+fn main() {
+ let mut event_loop = EventLoop::try_new().expect("Failed to initialize the event loop!");
+
+ let timer = Timer::from_duration(Duration::from_secs(5));
+
+ event_loop
+ .handle()
+ .insert_source(timer, |deadline, _: &mut (), _shared_data| {
+ println!("Event fired for: {:?}", deadline);
+ TimeoutAction::Drop
+ })
+ .expect("Failed to insert event source!");
+
+ event_loop
+ .dispatch(None, &mut ())
+ .expect("Error during event loop!");
+}
+
+Breaking this down, the callback we provide receives 3 arguments:
+Instant
representing the time at which this timer was scheduled to expire. Due to how the event loop works, it might be that your callback is not invoked at the exact time when the timer expired (if another callback was being processed at the time, for example), so the original deadline is given if you need precise time tracking.&mut ()
, as the timers don't use the EventSource
functionality.
In addition, your callback is expected to return a TimeoutAction, which instructs calloop what to do next. This enum has 3 values:
Drop will disable the timer and destroy it, freeing the callback.
ToInstant will reschedule the callback to fire again at the given Instant, invoking the same callback again. This is useful if you need to create a timer that fires events at regular intervals, for example to encode key repetition in a graphical app. You would compute the next instant by adding the duration to the previous instant. It is not a problem if that duration is in the past; it will simply cause the timer to fire again instantly. This way, even if some other part of your app lags, you will still get, on average, the correct number of events per second.
ToDuration will reschedule the callback to fire again after the given Duration. This is useful if you need to schedule some background task to run again some time after it last completed, when there is no point in catching up on any previous lag.
Putting it all together, we have:
+use std::time::Duration;
+
+use calloop::{
+ timer::{TimeoutAction, Timer},
+ EventLoop,
+};
+
+fn main() {
+ let mut event_loop = EventLoop::try_new().expect("Failed to initialize the event loop!");
+
+ let timer = Timer::from_duration(Duration::from_secs(5));
+
+ event_loop
+ .handle()
+ .insert_source(timer, |deadline, _: &mut (), _shared_data| {
+ println!("Event fired for: {:?}", deadline);
+ TimeoutAction::Drop
+ })
+ .expect("Failed to insert event source!");
+
+ event_loop
+ .dispatch(None, &mut ())
+ .expect("Error during event loop!");
+}
+
+
+ The Ping
event source has one very simple job — wake up the event loop. Use this when you know there are events for your event source to process, but those events aren't going to wake the event loop up themselves.
For example, calloop's own message channel
uses Rust's native MPSC channel internally. Because there's no way for the internal message queue to wake up the event loop, it's coupled with a Ping
source that wakes the loop up when there are new messages.
The Ping
has two ends — the event source part (PingSource
), that goes in the event loop, and the sending end (Ping
) you use to "send" the ping. To wake the event loop up, call ping()
on the sending end.
++ +Do not forget to process the events of the
+PingSource
if you are using it as part of a larger event source! Even though the events carry no information (they are just()
values), theprocess_events()
method must be called in order to "reset" thePingSource
. Otherwise the event loop will be continually woken up until you do, effectively becoming a busy-loop.
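As a rough sketch of how the two ends fit together (the callback body and shared data here are placeholders of mine, not from this book; make_ping() and ping() are calloop's API):
use calloop::ping::make_ping;
use calloop::EventLoop;

fn main() {
    let mut event_loop: EventLoop<()> =
        EventLoop::try_new().expect("Failed to initialize the event loop!");

    // The sending end (Ping) and the event source end (PingSource).
    let (ping, ping_source) = make_ping().expect("Failed to create the ping!");

    event_loop
        .handle()
        .insert_source(ping_source, |_event, _metadata, _shared| {
            // The event carries no data; being woken up is the whole point.
            println!("Woken up by a ping!");
        })
        .expect("Failed to insert event source!");

    // Any thread or callback holding `ping` can wake the loop up:
    ping.ping();

    event_loop
        .dispatch(None, &mut ())
        .expect("Error during event loop!");
}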
Most error handling crates/guides/documentation for Rust focus on one of two situations:
+Result
s from closure or trait methods that it might callCalloop has to do both of these things. It needs to provide a library user with errors that work well with ?
and common error-handling idioms in their own code, and it needs to handle errors from the callbacks you give to process_events()
or insert_source()
. It also needs to provide some flexibility in the EventSource
trait, which is used both for internal event sources and by users of the library.
Because of this, error handling in Calloop leans more towards having separate error types for different concerns. This may mean that there is some extra conversion code in places like returning results from process_events()
, or in callbacks that use other libraries. However, we try to make it smoother to do these conversions, and to make sure information isn't lost in doing so.
If your crate already has some form of structured error handling, Calloop's error types should pose no problem to integrate into this. All of Calloop's errors implement std::error::Error
and can be manipulated the same as any other error types.
The place where this becomes the most complex is in the process_events()
method on the EventSource
trait.
The EventSource
trait contains an associated type named Error
, which forms part of the return type from process_events()
. This type must be convertible into Box<dyn std::error::Error + Sync + Send>
, which means you can use:
std::error::Error
Box<dyn std::error::Error + Sync + Send>
anyhow::Error
As a rule, if you implement EventSource
you should try to split your errors into two different categories:
Event
associated type eg. as an enum or Result
.Error
associated type.For an example, take Calloop's channel type, calloop::channel::Channel
. When the sending end is dropped, no more messages can be received after that point. But this is not returned as an error when calling process_events()
, because you still want to (and can!) receive messages sent before that point that might still be in the queue. Hence the events received by the callback for this source can be Msg(e)
or Closed
.
However, if the internal ping source produces an error, there is no way for the sending end of the channel to notify the receiver. It is impossible to process more events on this event source, and the caller needs to decide how to recover from this situation. Hence this is returned as a ChannelError
from process_events()
.
Another example might be an event source that represents a running subprocess. If the subprocess exits with a non-zero status code, or the executable can't be found, those don't mean that events can no longer be processed. They can be provided to the caller through the callback. But if the lower level sources being used to run (eg. an asynchronous executor or subprocess file descriptor) fail to work as expected, process_events()
should return an error.
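A minimal sketch of that split for such a hypothetical subprocess source (the names SubprocessSource, SubprocessEvent and SubprocessError are mine, not calloop's; only the shape of the associated types is the point):
// Recoverable outcomes are part of the Event type and go to the callback.
enum SubprocessEvent {
    Output(Vec<u8>),
    Exited { status: i32 },
    ExecutableNotFound,
}

// Unrecoverable problems become the source's Error type and are returned
// from process_events() itself. (It would also need to implement
// std::error::Error + Send + Sync to satisfy calloop's bound.)
#[derive(Debug)]
struct SubprocessError(std::io::Error);

struct SubprocessSource { /* fds, buffers, ... */ }

// Only the associated types are shown; the methods are omitted here.
// impl calloop::EventSource for SubprocessSource {
//     type Event = SubprocessEvent;
//     type Metadata = ();
//     type Ret = ();
//     type Error = SubprocessError;
//     ...
// }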
While it is centered on event sources and callbacks, calloop also provides adapters to integrate with Rust's async ecosystem. These adapters come in two parts: a futures executor and an async adapter for IO types.
+ +++Enable the
+executor
feature!To use
+calloop::futures
you need to enable theexecutor
feature in yourCargo.toml
like so:+[dependencies.calloop] +features = [ "executor" ] +version = ... +
Let's say you have some async code that looks like this:
+sender.send("Hello,").await.ok();
+receiver.next().await.map(|m| println!("Received: {}", m));
+sender.send("world!").await.ok();
+receiver.next().await.map(|m| println!("Received: {}", m));
+"So long!"
+
+...and a corresponding block that receives and sends to this one. I will call one of these blocks "friendly" and the other one "aloof".
+To run async code in Calloop, you use the components in calloop::futures
. First, obtain both an executor and a scheduler with calloop::futures::executor()
:
+use calloop::EventLoop; + +// futures = "0.3" +use futures::sink::SinkExt; +use futures::stream::StreamExt; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + // Let's create two channels for our async blocks below. The blocks will + // exchange messages via these channels. + let (mut sender_friendly, mut receiver_friendly) = futures::channel::mpsc::unbounded(); + let (mut sender_aloof, mut receiver_aloof) = futures::channel::mpsc::unbounded(); + + // Our toy async code. + let async_friendly_task = async move { + sender_friendly.send("Hello,").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + sender_friendly.send("world!").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + "Bye!" + }; + + let async_aloof_task = async move { + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("Oh,").await.ok(); + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("it's you.").await.ok(); + "Regards." + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_friendly_task).unwrap(); + sched.schedule(async_aloof_task).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut (), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
The executor, the part that executes the future, goes in the event loop:
++use calloop::EventLoop; + +// futures = "0.3" +use futures::sink::SinkExt; +use futures::stream::StreamExt; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + // Let's create two channels for our async blocks below. The blocks will + // exchange messages via these channels. + let (mut sender_friendly, mut receiver_friendly) = futures::channel::mpsc::unbounded(); + let (mut sender_aloof, mut receiver_aloof) = futures::channel::mpsc::unbounded(); + + // Our toy async code. + let async_friendly_task = async move { + sender_friendly.send("Hello,").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + sender_friendly.send("world!").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + "Bye!" + }; + + let async_aloof_task = async move { + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("Oh,").await.ok(); + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("it's you.").await.ok(); + "Regards." + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_friendly_task).unwrap(); + sched.schedule(async_aloof_task).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut (), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
Now let's write our async code in full:
++use calloop::EventLoop; + +// futures = "0.3" +use futures::sink::SinkExt; +use futures::stream::StreamExt; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + // Let's create two channels for our async blocks below. The blocks will + // exchange messages via these channels. + let (mut sender_friendly, mut receiver_friendly) = futures::channel::mpsc::unbounded(); + let (mut sender_aloof, mut receiver_aloof) = futures::channel::mpsc::unbounded(); + + // Our toy async code. + let async_friendly_task = async move { + sender_friendly.send("Hello,").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + sender_friendly.send("world!").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + "Bye!" + }; + + let async_aloof_task = async move { + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("Oh,").await.ok(); + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("it's you.").await.ok(); + "Regards." + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_friendly_task).unwrap(); + sched.schedule(async_aloof_task).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut (), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
Like any block in Rust, the value of your async block is the last expression ie. it is effectively "returned" from the block, which means it will be provided to your executor's callback as the first argument (the "event"). You'll see this in the output with the Async block ended with: ...
lines.
Finally, we run the loop:
++use calloop::EventLoop; + +// futures = "0.3" +use futures::sink::SinkExt; +use futures::stream::StreamExt; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + // Let's create two channels for our async blocks below. The blocks will + // exchange messages via these channels. + let (mut sender_friendly, mut receiver_friendly) = futures::channel::mpsc::unbounded(); + let (mut sender_aloof, mut receiver_aloof) = futures::channel::mpsc::unbounded(); + + // Our toy async code. + let async_friendly_task = async move { + sender_friendly.send("Hello,").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + sender_friendly.send("world!").await.ok(); + if let Some(msg) = receiver_aloof.next().await { + println!("Aloof said: {}", msg); + } + "Bye!" + }; + + let async_aloof_task = async move { + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("Oh,").await.ok(); + if let Some(msg) = receiver_friendly.next().await { + println!("Friendly said: {}", msg); + } + sender_aloof.send("it's you.").await.ok(); + "Regards." + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_friendly_task).unwrap(); + sched.schedule(async_aloof_task).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut (), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
And our output looks like:
+Starting event loop.
+Friendly said: Hello,
+Aloof said: Oh,
+Friendly said: world!
+Async block ended with: Regards.
+Aloof said: it's you.
+Async block ended with: Bye!
+Event loop ended.
+
+Note that for the sake of keeping this example short, I've written the async code before running the loop. But async code can be scheduled from callbacks, or other sources within the loop too.
+++ +Note about threads
+One of Calloop's strengths is that it is completely single threaded as written. However, many async crates are implemented using threads eg.
+async-std
andasync-process
. This is not an inherent problem! Calloop will work perfectly well with such implementations in general. However, if you have selected Calloop because of your own constraints around threading, be aware of this.
++This section is about adapting blocking IO types for use with
+async
Rust code, and powering thatasync
code with Calloop. If you just want to add blocking IO types to your event loop and use Calloop's callback/composition-based design, you only need to wrap your blocking IO type in a generic event source.
You may find that you need to write ordinary Rust async
code around blocking IO types. Calloop provides the ability to wrap blocking types — anything that implements the AsFd
trait — in its own async type. This can be polled in any executor you may have chosen for your async code, but if you're using Calloop you'll probably be using Calloop's executor.
++Enable the
+futures-io
feature!To use
+calloop::io
you need to enable thefutures-io
feature in yourCargo.toml
like so:+[dependencies.calloop] +features = [ "futures-io" ] +version = ... +
Realistically you will probably also want to use this with async code, so you should also enable the
+executor
feature too.
Just like in the async example, we will use the components in calloop::futures
. First, obtain both an executor and a scheduler with calloop::futures::executor()
:
+use calloop::EventLoop; + +use futures::io::{AsyncReadExt, AsyncWriteExt}; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + let (sender, receiver) = std::os::unix::net::UnixStream::pair().unwrap(); + let mut sender = handle.adapt_io(sender).unwrap(); + let mut receiver = handle.adapt_io(receiver).unwrap(); + + let async_receive = async move { + let mut buf = [0u8; 12]; + // Here's our async-ified Unix domain socket. + receiver.read_exact(&mut buf).await.unwrap(); + std::str::from_utf8(&buf).unwrap().to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_receive).unwrap(); + + let async_send = async move { + // Here's our async-ified Unix domain socket. + sender.write_all(b"Hello, world!").await.unwrap(); + "Sent data...".to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_send).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut event_loop.get_signal(), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
The executor goes in the event loop:
++use calloop::EventLoop; + +use futures::io::{AsyncReadExt, AsyncWriteExt}; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + let (sender, receiver) = std::os::unix::net::UnixStream::pair().unwrap(); + let mut sender = handle.adapt_io(sender).unwrap(); + let mut receiver = handle.adapt_io(receiver).unwrap(); + + let async_receive = async move { + let mut buf = [0u8; 12]; + // Here's our async-ified Unix domain socket. + receiver.read_exact(&mut buf).await.unwrap(); + std::str::from_utf8(&buf).unwrap().to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_receive).unwrap(); + + let async_send = async move { + // Here's our async-ified Unix domain socket. + sender.write_all(b"Hello, world!").await.unwrap(); + "Sent data...".to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_send).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut event_loop.get_signal(), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
For our blocking IO types, let's use an unnamed pair of Unix domain stream sockets. To convert them to async types, we simply call calloop::LoopHandle::adapt_io()
:
+use calloop::EventLoop; + +use futures::io::{AsyncReadExt, AsyncWriteExt}; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + let (sender, receiver) = std::os::unix::net::UnixStream::pair().unwrap(); + let mut sender = handle.adapt_io(sender).unwrap(); + let mut receiver = handle.adapt_io(receiver).unwrap(); + + let async_receive = async move { + let mut buf = [0u8; 12]; + // Here's our async-ified Unix domain socket. + receiver.read_exact(&mut buf).await.unwrap(); + std::str::from_utf8(&buf).unwrap().to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_receive).unwrap(); + + let async_send = async move { + // Here's our async-ified Unix domain socket. + sender.write_all(b"Hello, world!").await.unwrap(); + "Sent data...".to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_send).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut event_loop.get_signal(), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
Note that most of the useful async functionality for the returned type is expressed through various traits in futures::io
. So we need to explicitly use
these:
+use calloop::EventLoop; + +use futures::io::{AsyncReadExt, AsyncWriteExt}; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + let (sender, receiver) = std::os::unix::net::UnixStream::pair().unwrap(); + let mut sender = handle.adapt_io(sender).unwrap(); + let mut receiver = handle.adapt_io(receiver).unwrap(); + + let async_receive = async move { + let mut buf = [0u8; 12]; + // Here's our async-ified Unix domain socket. + receiver.read_exact(&mut buf).await.unwrap(); + std::str::from_utf8(&buf).unwrap().to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_receive).unwrap(); + + let async_send = async move { + // Here's our async-ified Unix domain socket. + sender.write_all(b"Hello, world!").await.unwrap(); + "Sent data...".to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_send).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut event_loop.get_signal(), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
We can now write async code around these types. Here's the receiving code:
++use calloop::EventLoop; + +use futures::io::{AsyncReadExt, AsyncWriteExt}; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + let (sender, receiver) = std::os::unix::net::UnixStream::pair().unwrap(); + let mut sender = handle.adapt_io(sender).unwrap(); + let mut receiver = handle.adapt_io(receiver).unwrap(); + + let async_receive = async move { + let mut buf = [0u8; 12]; + // Here's our async-ified Unix domain socket. + receiver.read_exact(&mut buf).await.unwrap(); + std::str::from_utf8(&buf).unwrap().to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_receive).unwrap(); + + let async_send = async move { + // Here's our async-ified Unix domain socket. + sender.write_all(b"Hello, world!").await.unwrap(); + "Sent data...".to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_send).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut event_loop.get_signal(), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
And here's the sending code. The receiving and sending code can be created and added to the executor in either order.
++use calloop::EventLoop; + +use futures::io::{AsyncReadExt, AsyncWriteExt}; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + let (sender, receiver) = std::os::unix::net::UnixStream::pair().unwrap(); + let mut sender = handle.adapt_io(sender).unwrap(); + let mut receiver = handle.adapt_io(receiver).unwrap(); + + let async_receive = async move { + let mut buf = [0u8; 12]; + // Here's our async-ified Unix domain socket. + receiver.read_exact(&mut buf).await.unwrap(); + std::str::from_utf8(&buf).unwrap().to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_receive).unwrap(); + + let async_send = async move { + // Here's our async-ified Unix domain socket. + sender.write_all(b"Hello, world!").await.unwrap(); + "Sent data...".to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_send).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut event_loop.get_signal(), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
All that's left is to run the loop:
++use calloop::EventLoop; + +use futures::io::{AsyncReadExt, AsyncWriteExt}; + +fn main() -> std::io::Result<()> { + let (exec, sched) = calloop::futures::executor()?; + + let mut event_loop = EventLoop::try_new()?; + let handle = event_loop.handle(); + + handle + .insert_source(exec, |evt, _metadata, _shared| { + // Print the value of the async block ie. the return value. + println!("Async block ended with: {}", evt); + }) + .map_err(|e| e.error)?; + + let (sender, receiver) = std::os::unix::net::UnixStream::pair().unwrap(); + let mut sender = handle.adapt_io(sender).unwrap(); + let mut receiver = handle.adapt_io(receiver).unwrap(); + + let async_receive = async move { + let mut buf = [0u8; 12]; + // Here's our async-ified Unix domain socket. + receiver.read_exact(&mut buf).await.unwrap(); + std::str::from_utf8(&buf).unwrap().to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_receive).unwrap(); + + let async_send = async move { + // Here's our async-ified Unix domain socket. + sender.write_all(b"Hello, world!").await.unwrap(); + "Sent data...".to_owned() + }; + + // Schedule the async block to be run in the event loop. + sched.schedule(async_send).unwrap(); + + // Run the event loop. + println!("Starting event loop. Use Ctrl-C to exit."); + event_loop.run(None, &mut event_loop.get_signal(), |_| {})?; + println!("Event loop ended."); + + Ok(()) +} +
And the output we get is:
+Starting event loop. Use Ctrl-C to exit.
+Async block ended with: Sent data...
+Async block ended with: Hello, world
+^C
+
+
+ The previous chapter showed how to use callbacks, event data and shared data to control our program. However, more complex programs will require more complex shared data, and more complex interactions between events. Eventually this will run up against ownership issues and just the basic mental load of the poor programmer.
+In this chapter we're going to build something more complex: an event source based on ZeroMQ sockets.
+ZeroMQ is (very) basically a highly abstracted socket library. You can create ZeroMQ sockets over TCP, PGM, IPC, in-process and more, and generally not worry about the transport mechanics. It guarantees atomic message transfer, and handles queuing, retries, reconnection and balancing under the hood. It also lets you integrate it with event loops and reactors by exposing a file descriptor that you can wait on.
+But we can't just wrap this file descriptor in Calloop's generic::Generic
source and be done — there are a few subtleties we need to take care of for it to work right and be useful.
++ +Disclaimer
+It might be tempting, at the end of this chapter, to think that the code we've written is the definitive ZeroMQ wrapper, able to address any use case or higher level pattern you like. Certainly it will be a lot more suited to Calloop than using ZeroMQ sockets by themselves, but it is not the only way to use them with Calloop. Here are some things I have not addressed, for the sake of simplicity:
++
+- We will not handle fairness — our code will totally monopolise the event loop if we receive many messages at once.
+- We do not consider back pressure beyond whatever built-in zsocket settings the caller might use.
+- We just drop pending messages in the zsocket's internal queue (in and out) on shutdown. In a real application, you might want to make more specific decisions about the timeout and linger periods before dropping the zsocket, depending on your application's requirements.
+- We don't deal with zsocket errors much. In fact, the overall error handling of event sources is usually highly specific to your application, so what we end up writing here is almost certainly not going to survive contact with your own code base. Here we just use
+?
everywhere, which will eventually cause the event loop to exit with an error.So by all means, take the code we write here and use and adapt it, but please please note the caveats above and think carefully about your own program.
+
Calloop is designed to work by composition. It provides you with some single-responsibility sources (timers, message channels, file descriptors), and you can combine these together, bit by bit, to make more complex event sources. These new sources can express more and more of your program's internal logic and the relationships between them, always in terms of events and how you process them.
+You can greatly simplify even a highly complex program if you identify and expose the "real" events you care about and use composition to tidy the other events away in internal details of event sources.
+So what do we need to compose?
+Most obviously, ZeroMQ exposes a file descriptor for us to use. (This is a common thing for event-related libraries to do, so if you're wondering how to integrate, say, I²C or GPIO on Linux with Calloop, that's your answer.)
+Calloop can use file descriptors via the calloop::generic::Generic
source. So that's one.
Secondly, we might want to send messages on the socket. This means our event source needs to react when we send it a message. Calloop has a message channel for precisely this purpose: calloop::channel::Channel
. That's another one.
The third event source we need is a bit subtle, but since this isn't a mystery novel I can save you hours of debugging and spoil the ending now: we need a "ping" event source because ZeroMQ's FD is edge triggered.
+ZeroMQ's file descriptor is not the FD of an actual file or socket — you do not actually read data from it. It exists as an interface, with three important details:
+It is only ever readable. Even if the underlying socket can be written to, the FD that ZeroMQ gives you signals this by becoming readable. In fact, this FD will become readable under three circumstances: the ZeroMQ socket (henceforth called a "zsocket") is readable, writeable, or has an error. There is a separate function call, zmq::Socket::get_events()
that will tell you which.
It is edge triggered. It will only ever change from not-readable to readable when the socket's state changes. So if a zsocket receives two messages, and you only read one, the file descriptor will not wake up the event loop again. Why not? Because it hasn't changed state! After you read one message, the zsocket still has events waiting. If it receives yet another message... it still has events waiting. No change in internal state = no external event.
+This edge triggering also covers user actions. If a zsocket becomes writeable, and then you write to the zsocket, it might immediately (and atomically) change from writeable to readable. In this case you will not get another event on the FD.
+(The docs make this quite explicit, but there's a lot of docs to read so I'm spelling it out here.)
+What this adds up to is this: when we create our zsocket, it might already be readable or writeable. So when we add it to our event loop, it won't fire any events. Our entire source will just sit there until we wake it up by sending a message (which we might never do if it's eg. a pull socket).
+So the last event source we need is something that doesn't really convey any kind of message except "please wake up the event loop on the next iteration", and that is exactly what a calloop::ping::PingSource
does. And that's three.
In the last chapter we worked out a list of the event sources we need to compose into a new type:
+calloop::generic::Generic
calloop::channel::Channel
calloop::ping::Ping
So at a minimum, our type needs to contain these:
+pub struct ZeroMQSource
+{
+ // Calloop components.
+ socket: calloop::generic::Generic<calloop::generic::FdWrapper<zmq::Socket>>,
+ mpsc_receiver: calloop::channel::Channel<?>,
+ wake_ping_receiver: calloop::ping::PingSource,
+}
+
+Note that I've left the type for the channel as ?
— we'll get to that a bit later.
What else do we need? If the PingSource
is there to wake up the loop manually, we need to keep the other end of it. The ping is an internal detail — users of our type don't need to know it's there. We also need the zsocket itself, so we can actually detect and process events on it. That gives us:
pub struct ZeroMQSource
+{
+ // Calloop components.
+ socket: calloop::generic::Generic<calloop::generic::FdWrapper<zmq::Socket>>,
+ mpsc_receiver: calloop::channel::Channel<?>,
+ wake_ping_receiver: calloop::ping::PingSource,
+
+ /// Sending end of the ping source.
+ wake_ping_sender: calloop::ping::Ping,
+}
+
+The most obvious candidate for the type of the message queue would be zmq::Message
. But ZeroMQ sockets are capable of sending multipart messages, and this is even mandatory for eg. the PUB
zsocket type, where the first part of the message is the topic.
Therefore it makes more sense to accept a sequence of messages to cover the most general case, and that sequence can have a length of one for single-part messages. But with one more tweak: we can accept a sequence of things that can be transformed into zmq::Message
values. The exact type we'll use will be a generic type like so:
pub struct ZeroMQSource<T>
+where
+ T: IntoIterator,
+ T::Item: Into<zmq::Message>,
+{
+ mpsc_receiver: calloop::channel::Channel<T>,
+ // ...
+}
+
+++Enforcing single messages
+Remember that it's not just
+Vec<T>
and other sequence types that implementIntoIterator
—Option<T>
implements it too! There is alsostd::iter::Once<T>
. So if a user of our API wants to enforce that all "multi"-part messages actually contain exactly one part, they can use this API withT
being, say,std::iter::Once<zmq::Message>
(or even just[zmq::Message; 1]
in Rust 2021 edition).
The EventSource
trait has four associated types:
Event
- when an event is generated that our caller cares about (ie. not some internal thing), this is the data we provide to their callback. This will be another sequence of messages, but because we're constructing it we can be more opinionated about the type and use the return type of zmq::Socket::recv_multipart()
which is Vec<Vec<u8>>
.
Metadata
- this is a more persistent kind of data, perhaps the underlying file descriptor or socket, or maybe some stateful object that the callback can manipulate. It is passed by exclusive reference to the Metadata
type. In our case we don't use this, so it's ()
.
Ret
- this is the return type of the callback that's called on an event. Usually this will be a Result
of some sort; in our case it's std::io::Result<()>
just to signal whether some underlying operation failed or not.
Error
- this is the error type returned by process_events()
(not the user callback!). Having this as an associated type allows you to have more control over error propagation in nested event sources. We will use Anyhow, which is like a more fully-features Box<dyn Error>
. It allows you to add context to any other error with a context()
method.
So together these are:
+impl<T> calloop::EventSource for ZeroMQSource<T>
+where
+ T: IntoIterator,
+ T::Item: Into<zmq::Message>,
+{
+ type Event = Vec<Vec<u8>>;
+ type Metadata = ();
+ type Ret = io::Result<()>;
+ type Error = anyhow::Error;
+ // ...
+}
+
+I have saved one surprise for later to emphasise some important principles, but for now, let's move on to defining some methods!
+ +Now that we've figured out the types we need, we can get to work writing some methods. We'll need to implement the methods defined in the calloop::EventSource
trait, and a constructor function to create the source.
Creating our source is fairly straightforward. We can let the caller set up the zsocket the way they need, and take ownership of it when it's initialised. Our caller needs not only the source itself, but the sending end of the MPSC channel so they can send messages, so we need to return that too.
+A common pattern in Calloop's own constructor functions is to return a tuple containing (a) the source and (b) a type to use the source. So that's what we'll do:
+// Converts a `zmq::Socket` into a `ZeroMQSource` plus the sending end of an
+// MPSC channel to enqueue outgoing messages.
+pub fn from_socket(socket: zmq::Socket) -> io::Result<(Self, calloop::channel::Sender<T>)> {
+ let (mpsc_sender, mpsc_receiver) = calloop::channel::channel();
+ let (wake_ping_sender, wake_ping_receiver) = calloop::ping::make_ping()?;
+
+ let fd = socket.get_fd()?;
+
+ let socket_source =
+ calloop::generic::Generic::from_fd(fd, calloop::Interest::READ, calloop::Mode::Edge);
+
+ Ok((
+ Self {
+ socket,
+ socket_source,
+ mpsc_receiver,
+ wake_ping_receiver,
+ wake_ping_sender,
+ },
+ mpsc_sender,
+ ))
+}
+
+Calloop's event sources have a kind of life cycle, starting with registration. When you add an event source to the event loop, under the hood the source will register itself with the loop. Under certain circumstances a source will need to re-register itself. And finally there is the unregister action when an event source is removed from the loop. These are expressed via the calloop::EventSource
methods:
fn register(&mut self, poll: &mut calloop::Poll, token_factory: &mut calloop::TokenFactory) -> calloop::Result<()>
fn reregister(&mut self, poll: &mut calloop::Poll, token_factory: &mut calloop::TokenFactory) -> calloop::Result<()>
fn unregister(&mut self, poll: &mut calloop::Poll) -> calloop::Result<()>
The first two methods take a token factory, which is a way for Calloop to keep track of why your source was woken up. When we get to actually processing events, you'll see how this works. But for now, all you need to do is recursively pass the token factory into whatever sources your own event source is composed of. This includes other composed sources, which will pass the token factory into their sources, and so on.
+In practice, this looks like:
+fn register(
+ &mut self,
+ poll: &mut calloop::Poll,
+ token_factory: &mut calloop::TokenFactory
+) -> calloop::Result<()>
+{
+ self.socket_source.register(poll, token_factory)?;
+ self.mpsc_receiver.register(poll, token_factory)?;
+ self.wake_ping_receiver.register(poll, token_factory)?;
+ self.wake_ping_sender.ping();
+
+ Ok(())
+}
+
+fn reregister(
+ &mut self,
+ poll: &mut calloop::Poll,
+ token_factory: &mut calloop::TokenFactory
+) -> calloop::Result<()>
+{
+ self.socket_source.reregister(poll, token_factory)?;
+ self.mpsc_receiver.reregister(poll, token_factory)?;
+ self.wake_ping_receiver.reregister(poll, token_factory)?;
+
+ self.wake_ping_sender.ping();
+
+ Ok(())
+}
+
+
+fn unregister(&mut self, poll: &mut calloop::Poll)-> calloop::Result<()> {
+ self.socket_source.unregister(poll)?;
+ self.mpsc_receiver.unregister(poll)?;
+ self.wake_ping_receiver.unregister(poll)?;
+ Ok(())
+}
+
+++Note the
+self.wake_ping_sender.ping()
call in the first two functions! This is how we manually prompt the event loop to wake up and run our source on the next iteration, to properly account for the zsocket's edge-triggering.
ZeroMQ sockets have their own internal queues and state, and therefore need a bit of care when shutting down. Depending on zsocket type and settings, when the ZeroMQ context is dropped, it could block waiting for certain operations to complete. We can write a drop handler to avoid this, but again note that it's only one of many ways to handle zsocket shutdown.
+impl<T> Drop for ZeroMQSource<T>
+where
+ T: IntoIterator,
+ T::Item: Into<zmq::Message>,
+{
+ fn drop(&mut self) {
+ // This is one way to stop socket code (especially PUSH sockets) hanging
+ // at the end of any blocking functions.
+ //
+ // - https://stackoverflow.com/a/38338578/188535
+ // - http://api.zeromq.org/4-0:zmq-ctx-term
+ self.socket.set_linger(0).ok();
+ self.socket.set_rcvtimeo(0).ok();
+ self.socket.set_sndtimeo(0).ok();
+
+ // Double result because (a) possible failure on call and (b) possible
+ // failure decoding.
+ if let Ok(Ok(last_endpoint)) = self.socket.get_last_endpoint() {
+ self.socket.disconnect(&last_endpoint).ok();
+ }
+ }
+}
+
+
+ Finally, the real functionality we care about! Processing events! This is also a method in the calloop::EventSource
trait:
fn process_events<F>(
+ &mut self,
+ readiness: calloop::Readiness,
+ token: calloop::Token,
+ mut callback: F,
+) -> Result<calloop::PostAction, Self::Error>
+where
+ F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
+
+What a mouthful! But when you break it down, it's not so complicated:
+We take our own state, of course, as &mut self
.
We take a Readiness
value - this is mainly useful for "real" file descriptors, and tells you whether the event source was woken up for a read or write event. We ignore it though, because our internal sources are always only readable (remember that even if the zsocket is writeable, the FD it exposes is only ever readable).
We take a token. This gives us a way to process events that arise from our internal sources. In general, composed sources should not actually need to use this directly; sub-sources will check their own tokens against this and run if necessary.
+We take a callback. We call this callback with any "real" events that our caller will care about; in our case, that means messages we receive on the zsocket. It is closely related to the EventSource
trait's associated types. Note that the callback our caller supplies when adding our source to the loop actually takes an extra argument, which is some data that we won't know about in our source. Calloop's internals take care of combining our arguments here with this extra data.
Finally we return a PostAction
, which tells the loop whether it needs to change the state of our event source, perhaps as a result of actions we took. For example, you might require that your source be removed from the loop (with PostAction::Remove
) if it only has a certain thing to do. Ordinarily though, you'd return PostAction::Continue
for your source to keep waiting for events.
Note that these PostAction
values correspond to various methods on the LoopHandle
type (eg. PostAction::Disable
does the same as LoopHandle::disable()
). Whether you control your event source by returning a PostAction
or using the LoopHandle
methods depends on whether it makes more sense for these actions to be taken from within your event source or by something else in your code.
Implementing process_events()
for a type that contains various Calloop sources composed together, like we have, is done recursively by calling our internal sources' process_events()
method. The token
that Calloop gives us is how each event source determines whether it was responsible for the wakeup and has events to process.
If we were woken up because of the ping source, then the ping source's process_events()
will see that the token matches its own, and call the callback (possibly multiple times). If we were woken up because a message was sent through the MPSC channel, then the channel's process_events()
will match on the token instead and call the callback for every message waiting. The zsocket is a little different, and we'll go over that in detail.
For error handling we're using Anyhow, hence the `context()` calls on each fallible operation. These just add a message to any error that might appear in a traceback.
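Before writing the body, it helps to picture the caller's side. The sketch below is not from the chapter's code: `MyState`, its field, and the timeout are made up, and it uses the `ZeroMQSource::from_socket()` constructor defined later in this chapter. It only shows how our `Event`, `Metadata` and `Ret` line up with the closure passed to `insert_source()`, plus the extra shared-data argument Calloop adds:

fn caller_side_sketch(socket: zmq::Socket) -> anyhow::Result<()> {
    // Hypothetical per-loop shared data.
    struct MyState {
        messages_seen: usize,
    }

    let mut event_loop = calloop::EventLoop::<MyState>::try_new()?;
    let handle = event_loop.handle();

    // `from_socket()` is the constructor we write later in this chapter.
    let (zmq_source, _zmq_sender) = ZeroMQSource::<Vec<Vec<u8>>>::from_socket(socket)?;

    handle
        .insert_source(zmq_source, |parts, _metadata, state| {
            // `parts` is our Event (Vec<Vec<u8>>), `_metadata` is our Metadata
            // (just `()`), and `state` is the extra `&mut MyState` that Calloop
            // passes through from `run()` below.
            state.messages_seen += 1;
            println!("received a message with {} part(s)", parts.len());
            Ok(())
        })
        .map_err(|_| anyhow::anyhow!("failed to insert the ZeroMQ source"))?;

    let mut state = MyState { messages_seen: 0 };
    event_loop.run(std::time::Duration::from_millis(100), &mut state, |_| {})?;
    Ok(())
}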
So a first draft of our code might look like:
fn process_events<F>(
    &mut self,
    readiness: calloop::Readiness,
    token: calloop::Token,
    mut callback: F,
) -> Result<calloop::PostAction, Self::Error>
where
    F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
{
    // Runs if we were woken up on startup/registration.
    self.wake_ping_receiver
        .process_events(readiness, token, |_, _| {})
        .context("Failed after registration")?;

    // Runs if we received a message over the MPSC channel.
    self.mpsc_receiver
        .process_events(readiness, token, |evt, _| {
            // 'evt' could be a message or a "sending end closed"
            // notification. We don't care about the latter.
            if let calloop::channel::Event::Msg(msg) = evt {
                self.socket
                    .get_ref()
                    .send_multipart(msg, 0)
                    .context("Failed to send message")?;
            }
        })?;

    // Runs if the zsocket became read/write-able.
    self.socket
        .process_events(readiness, token, |_, _| {
            let events = self
                .socket
                .get_ref()
                .get_events()
                .context("Failed to read ZeroMQ events")?;

            if events.contains(zmq::POLLOUT) {
                // Wait, what do we do here?
            }

            if events.contains(zmq::POLLIN) {
                let messages = self
                    .socket
                    .get_ref()
                    .recv_multipart(0)
                    .context("Failed to receive message")?;

                callback(messages, &mut ())
                    .context("Error in event callback")?;
            }
        })?;

    Ok(calloop::PostAction::Continue)
}
We process the events from whichever source woke up our composed source, and if we woke up because the zsocket became readable, we call the callback with the message we received. Finally we return `PostAction::Continue` to remain in the event loop.

Don't worry about getting this to compile; it's a good start, but it's wrong in a few ways.

Firstly, we've gone to all the trouble of using a ping to wake up the source, and then we just... drain its internal events and return. Which achieves nothing.

Secondly, we don't seem to know what to do when our zsocket becomes writeable (the actual zsocket, not the "interface" file descriptor).

Thirdly, we commit one of the worst sins you can commit in an event-loop-based system. Can you see it? It's this part:
self.mpsc_receiver
    .process_events(readiness, token, |evt, _| {
        if let calloop::channel::Event::Msg(msg) = evt {
            self.socket
                .get_ref()
                .send_multipart(msg, 0)
                .context("Failed to send message")?;
        }
    })?;
We block the event loop! In the middle of processing events from the MPSC channel, we call `zmq::Socket::send_multipart()` which could, under certain circumstances, block! We shouldn't do that.

Let's deal with this badness first then. We want to decouple "receiving messages over the MPSC channel" from "sending messages on the zsocket". There are different ways to do this, but they boil down to: buffer messages or drop messages (or maybe a combination of both). We'll use the first approach, with an internal FIFO queue. When we receive messages, we push them onto the back of the queue. When the zsocket is writeable, we pop messages from the front of the queue.

The standard library has `collections::VecDeque<T>` which provides efficient double-ended queuing, so let's use that. This is some extra internal state, so we need to add it to our type, which becomes:
pub struct ZeroMQSource<T>
where
    T: IntoIterator,
    T::Item: Into<zmq::Message>,
{
    // Calloop components.
    socket: calloop::generic::Generic<calloop::generic::FdWrapper<zmq::Socket>>,
    mpsc_receiver: calloop::channel::Channel<T>,
    wake_ping_receiver: calloop::ping::PingSource,

    /// Sending end of the ping source.
    wake_ping_sender: calloop::ping::Ping,

    /// FIFO queue for the messages to be published.
    outbox: std::collections::VecDeque<T>,
}
Our MPSC receiving code becomes:

let outbox = &mut self.outbox;

self.mpsc_receiver
    .process_events(readiness, token, |evt, _| {
        if let calloop::channel::Event::Msg(msg) = evt {
            outbox.push_back(msg);
        }
    })?;
And our "zsocket is writeable" code becomes:

self.socket
    .process_events(readiness, token, |_, _| {
        let events = self
            .socket
            .get_ref()
            .get_events()
            .context("Failed to read ZeroMQ events")?;

        if events.contains(zmq::POLLOUT) {
            if let Some(parts) = self.outbox.pop_front() {
                self.socket
                    .get_ref()
                    .send_multipart(parts, 0)
                    .context("Failed to send message")?;
            }
        }

        if events.contains(zmq::POLLIN) {
            let messages = self
                .socket
                .get_ref()
                .recv_multipart(0)
                .context("Failed to receive message")?;
            callback(messages, &mut ())
                .context("Error in event callback")?;
        }
    })?;
So we've not only solved problem #3, we've also figured out #2, which suggests we're on the right track. But we still have (at least) that first issue to sort out.

We have three events that could wake up our event source: the ping, the channel, and the zsocket itself becoming ready to use. All three of these reasons potentially mean doing something on the zsocket: if the ping fired, we need to check for any pending events. If the channel received a message, we want to check if the zsocket is already writeable and send it. If the zsocket becomes readable or writeable, we want to read from or write to it. In other words... we want to run the zsocket-handling code every time!

Also notice that in the zsocket `process_events()` call, we don't use any of the arguments, including the event itself. That file descriptor is merely a signalling mechanism! Sending and receiving messages is what will actually clear any pending events on it, and reset it to a state where it will wake the event loop later. So let's pull that handling out of the `process_events()` closure and run it unconditionally:
let events = self
    .socket
    .get_ref()
    .get_events()
    .context("Failed to read ZeroMQ events")?;

if events.contains(zmq::POLLOUT) {
    if let Some(parts) = self.outbox.pop_front() {
        self.socket
            .get_ref()
            .send_multipart(parts, 0)
            .context("Failed to send message")?;
    }
}

if events.contains(zmq::POLLIN) {
    let messages = self
        .socket
        .get_ref()
        .recv_multipart(0)
        .context("Failed to receive message")?;
    callback(messages, &mut ())
        .context("Error in event callback")?;
}
So the second draft of our `process_events()` function is now:
fn process_events<F>(
    &mut self,
    readiness: calloop::Readiness,
    token: calloop::Token,
    mut callback: F,
) -> Result<calloop::PostAction, Self::Error>
where
    F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
{
    // Runs if we were woken up on startup/registration.
    self.wake_ping_receiver
        .process_events(readiness, token, |_, _| {})?;

    // Runs if we were woken up because a message was sent on the channel.
    let outbox = &mut self.outbox;

    self.mpsc_receiver
        .process_events(readiness, token, |evt, _| {
            if let calloop::channel::Event::Msg(msg) = evt {
                outbox.push_back(msg);
            }
        })?;

    // Always process any pending zsocket events.

    let events = self
        .socket
        .get_ref()
        .get_events()
        .context("Failed to read ZeroMQ events")?;

    if events.contains(zmq::POLLOUT) {
        if let Some(parts) = self.outbox.pop_front() {
            self.socket
                .get_ref()
                .send_multipart(parts, 0)
                .context("Failed to send message")?;
        }
    }

    if events.contains(zmq::POLLIN) {
        let messages = self
            .socket
            .get_ref()
            .recv_multipart(0)
            .context("Failed to receive message")?;
        callback(messages, &mut ())
            .context("Error in event callback")?;
    }

    Ok(calloop::PostAction::Continue)
}
There is one more issue to take care of, and it's got nothing to do with Calloop. We still haven't fully dealt with ZeroMQ's edge-triggered nature.

Consider this situation: we create a REQ zsocket (one that sends a request and then expects a reply), wrap it in our `ZeroMQSource`, and add that to our loop. Then we queue a message to be sent.

If we do this, it's possible we'll never actually receive any replies that are sent to our zsocket! Why? Because:

- when our source wakes up, we read the zsocket's `events` once
- we use those `events` to check if the socket is writeable, and send our queued request
- we then use the same, now stale, `events` to check if the socket is readable

The zsocket will change from writeable to readable before we leave `process_events()`. So the "interface" file descriptor will become readable again. But because it is edge triggered, it will not wake up our event source after we leave `process_events()`. So our source will not wake up again (at least, not due to the `self.socket` event source).

For this specific example, it will suffice to re-read the zsocket events in between the `if` statements. Then when we get to the second `events` check, it will indeed contain `zmq::POLLIN`, and we'll receive the pending message. But this is not good enough for the general case! If we replace REQ with REP above, we'll get the opposite problem: our first check (for `POLLOUT`) will be false. Our second check (`POLLIN`) will be true. We'll receive a message, leave `process_events()`, and never wake up again.

The full solution is to recognise that any user action on a ZeroMQ socket can cause the pending events to change, or just to remain active, without re-triggering the "interface" file descriptor. So we need to (a) do this repeatedly and (b) keep track of when we have or haven't performed an action on the zsocket. Here's one way to do it:
loop {
    let events = self
        .socket
        .get_ref()
        .get_events()
        .context("Failed to read ZeroMQ events")?;

    let mut used_socket = false;

    if events.contains(zmq::POLLOUT) {
        if let Some(parts) = self.outbox.pop_front() {
            self.socket
                .get_ref()
                .send_multipart(parts, 0)
                .context("Failed to send message")?;
            used_socket = true;
        }
    }

    if events.contains(zmq::POLLIN) {
        let messages = self
            .socket
            .get_ref()
            .recv_multipart(0)
            .context("Failed to receive message")?;
        used_socket = true;

        callback(messages, &mut ())
            .context("Error in event callback")?;
    }

    if !used_socket {
        break;
    }
}
Now we have a flag that we set if, and only if, we call a send or receive method on the zsocket. If that flag is set at the end of the loop, we go around again.

> Greediness
>
> Remember my disclaimer at the start of the chapter, about this code being "greedy"? This is what I mean. This loop will run until the entire message queue is empty, so if it has a lot of messages in it, any other sources in our event loop will not be run until this loop is finished.
>
> An alternative approach is to use more state to determine whether we want to run again on the next loop iteration (perhaps using the ping source), so that Calloop can run any other sources in between individual messages being received.
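As a rough, untested sketch of that alternative (not part of the chapter's final code): replace the `loop { ... }` block above with a single pass, and use our own `wake_ping_sender` to schedule another call if there is still work left. Because our zsocket handling runs on every wakeup regardless of the token, the ping is enough to bring us back:

// One pass only: send at most one queued message and receive at most one
// multipart message.
let events = self
    .socket
    .get_ref()
    .get_events()
    .context("Failed to read ZeroMQ events")?;

if events.contains(zmq::POLLOUT) {
    if let Some(parts) = self.outbox.pop_front() {
        self.socket
            .get_ref()
            .send_multipart(parts, 0)
            .context("Failed to send message")?;
    }
}

if events.contains(zmq::POLLIN) {
    let messages = self
        .socket
        .get_ref()
        .recv_multipart(0)
        .context("Failed to receive message")?;
    callback(messages, &mut ())
        .context("Error in event callback")?;
}

// If the zsocket still has pending work (more to read, or queued messages and
// room to send), ping ourselves so Calloop calls us again soon, after giving
// the other sources in the loop a turn.
let events = self
    .socket
    .get_ref()
    .get_events()
    .context("Failed to re-read ZeroMQ events")?;

if events.contains(zmq::POLLIN)
    || (events.contains(zmq::POLLOUT) && !self.outbox.is_empty())
{
    self.wake_ping_sender.ping();
}

This trades throughput on the ZeroMQ socket for fairness between sources; which one you want depends on your application.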
This is the full source code for a Calloop event source based on a ZeroMQ socket. You might find it useful as a kind of reference. Please read the disclaimer at the start of this chapter if you skipped straight here!
//! A Calloop event source implementation for ZeroMQ sockets.

use std::{collections, io};

use anyhow::Context;

/// A Calloop event source that contains a ZeroMQ socket (of any kind) and a
/// Calloop MPSC channel for sending over it.
///
/// The basic interface is:
/// - create a zmq::Socket for your ZeroMQ socket
/// - use `ZeroMQSource::from_socket()` to turn it into a Calloop event source
///   (plus the sending end of the channel)
/// - queue messages to be sent by sending them on the sending end of the MPSC
///   channel
/// - add the event source to the Calloop event loop with a callback to handle
///   reading
/// - the sending end of the MPSC channel can be cloned and sent across threads
///   if necessary
///
/// This type is parameterised by `T`:
///
///     T where T: IntoIterator, T::Item: Into<zmq::Message>
///
/// This means that `T` is anything that can be converted to an iterator, and
/// the items in the iterator are anything that can be converted to a
/// `zmq::Message`. So eg. a `Vec<String>` would work.
///
/// The callback is called whenever the underlying socket becomes readable. It
/// is called with a vec of byte sequences (`Vec<Vec<u8>>`) and the event loop
/// data set by the user.
///
/// Note about why the read data is a vec of multipart message parts: we don't
/// know what kind of socket this is, or what will be sent, so the most general
/// thing we can do is receive the entirety of a multipart message and call the
/// user callback with the whole set. Usually the number of parts in a multipart
/// message will be one, but the code will work just the same when it's not.
///
/// This event source also allows you to use different event sources to publish
/// messages over the same writeable ZeroMQ socket (usually PUB or PUSH).
/// Messages should be sent over the Calloop MPSC channel sending end. This end
/// can be cloned and used by multiple senders.
pub struct ZeroMQSource<T>
where
    T: IntoIterator,
    T::Item: Into<zmq::Message>,
{
    // Calloop components.
    /// Event source for ZeroMQ socket.
    socket: calloop::generic::Generic<calloop::generic::FdWrapper<zmq::Socket>>,

    /// Event source for channel.
    mpsc_receiver: calloop::channel::Channel<T>,

    /// Because the ZeroMQ socket is edge triggered, we need a way to "wake" the
    /// event source upon (re-)registration. We do this with a separate
    /// `calloop::ping::Ping` source.
    wake_ping_receiver: calloop::ping::PingSource,

    /// Sending end of the ping source.
    wake_ping_sender: calloop::ping::Ping,

    // ZeroMQ socket.
    /// FIFO queue for the messages to be published.
    outbox: collections::VecDeque<T>,
}

impl<T> ZeroMQSource<T>
where
    T: IntoIterator,
    T::Item: Into<zmq::Message>,
{
    // Converts a `zmq::Socket` into a `ZeroMQSource` plus the sending end of an
    // MPSC channel to enqueue outgoing messages.
    pub fn from_socket(socket: zmq::Socket) -> io::Result<(Self, calloop::channel::Sender<T>)> {
        let (mpsc_sender, mpsc_receiver) = calloop::channel::channel();
        let (wake_ping_sender, wake_ping_receiver) = calloop::ping::make_ping()?;

        let socket = calloop::generic::Generic::new(
            unsafe { calloop::generic::FdWrapper::new(socket) },
            calloop::Interest::READ,
            calloop::Mode::Edge,
        );

        Ok((
            Self {
                socket,
                mpsc_receiver,
                wake_ping_receiver,
                wake_ping_sender,
                outbox: collections::VecDeque::new(),
            },
            mpsc_sender,
        ))
    }
}

/// This event source runs for three events:
///
/// 1. The event source was registered. It is forced to run so that any pending
///    events on the socket are processed.
///
/// 2. A message was sent over the MPSC channel. In this case we put it in the
///    internal queue.
///
/// 3. The ZeroMQ socket is readable. For this, we read off a complete multipart
///    message and call the user callback with it.
///
/// The callback provided to `process_events()` may be called multiple times
/// within a single call to `process_events()`.
impl<T> calloop::EventSource for ZeroMQSource<T>
where
    T: IntoIterator,
    T::Item: Into<zmq::Message>,
{
    type Event = Vec<Vec<u8>>;
    type Metadata = ();
    type Ret = io::Result<()>;
    type Error = anyhow::Error;

    fn process_events<F>(
        &mut self,
        readiness: calloop::Readiness,
        token: calloop::Token,
        mut callback: F,
    ) -> Result<calloop::PostAction, Self::Error>
    where
        F: FnMut(Self::Event, &mut Self::Metadata) -> Self::Ret,
    {
        // Runs if we were woken up on startup/registration.
        self.wake_ping_receiver
            .process_events(readiness, token, |_, _| {})
            .context("Failed after registration")?;

        // Runs if we were woken up because a message was sent on the channel.
        let outbox = &mut self.outbox;

        self.mpsc_receiver
            .process_events(readiness, token, |evt, _| {
                if let calloop::channel::Event::Msg(msg) = evt {
                    outbox.push_back(msg);
                }
            })
            .context("Failed to process outgoing messages")?;

        // The ZeroMQ file descriptor is edge triggered. This means that if (a)
        // messages are added to the queue before registration, or (b) the
        // socket became writeable before messages were enqueued, we will need
        // to run the loop below. Hence, it always runs if this event source
        // fires. The process_events() method doesn't do anything though, so we
        // ignore it.

        loop {
            // According to the docs, the edge-triggered FD will not change
            // state if a socket goes directly from being readable to being
            // writeable (or vice-versa) without there being an in-between point
            // where there are no events. This can happen as a result of sending
            // or receiving on the socket while processing such an event. The
            // "used_socket" flag below tracks whether we perform an operation
            // on the socket that warrants reading the events again.
            let events = self
                .socket
                .get_ref()
                .get_events()
                .context("Failed to read ZeroMQ events")?;

            let mut used_socket = false;

            if events.contains(zmq::POLLOUT) {
                if let Some(parts) = self.outbox.pop_front() {
                    self.socket
                        .get_ref()
                        .send_multipart(parts, 0)
                        .context("Failed to send message")?;
                    used_socket = true;
                }
            }

            if events.contains(zmq::POLLIN) {
                // Batch up multipart messages. ZeroMQ guarantees atomic message
                // sending, which includes all parts of a multipart message.
                let messages = self
                    .socket
                    .get_ref()
                    .recv_multipart(0)
                    .context("Failed to receive message")?;
                used_socket = true;

                // Capture and report errors from the callback, but don't propagate
                // them up.
                callback(messages, &mut ()).context("Error in event callback")?;
            }

            if !used_socket {
                break;
            }
        }

        Ok(calloop::PostAction::Continue)
    }

    fn register(
        &mut self,
        poll: &mut calloop::Poll,
        token_factory: &mut calloop::TokenFactory,
    ) -> calloop::Result<()> {
        self.socket.register(poll, token_factory)?;
        self.mpsc_receiver.register(poll, token_factory)?;
        self.wake_ping_receiver.register(poll, token_factory)?;

        self.wake_ping_sender.ping();

        Ok(())
    }

    fn reregister(
        &mut self,
        poll: &mut calloop::Poll,
        token_factory: &mut calloop::TokenFactory,
    ) -> calloop::Result<()> {
        self.socket.reregister(poll, token_factory)?;
        self.mpsc_receiver.reregister(poll, token_factory)?;
        self.wake_ping_receiver.reregister(poll, token_factory)?;

        self.wake_ping_sender.ping();

        Ok(())
    }

    fn unregister(&mut self, poll: &mut calloop::Poll) -> calloop::Result<()> {
        self.socket.unregister(poll)?;
        self.mpsc_receiver.unregister(poll)?;
        self.wake_ping_receiver.unregister(poll)?;
        Ok(())
    }
}

impl<T> Drop for ZeroMQSource<T>
where
    T: IntoIterator,
    T::Item: Into<zmq::Message>,
{
    fn drop(&mut self) {
        // This is one way to stop socket code (especially PUSH sockets) hanging
        // at the end of any blocking functions.
        //
        // - https://stackoverflow.com/a/38338578/188535
        // - http://api.zeromq.org/4-0:zmq-ctx-term
        self.socket.get_ref().set_linger(0).ok();
        self.socket.get_ref().set_rcvtimeo(0).ok();
        self.socket.get_ref().set_sndtimeo(0).ok();

        // Double result because (a) possible failure on call and (b) possible
        // failure decoding.
        if let Ok(Ok(last_endpoint)) = self.socket.get_ref().get_last_endpoint() {
            self.socket.get_ref().disconnect(&last_endpoint).ok();
        }
    }
}

pub fn main() {}
Dependencies are:
[dependencies]
calloop = { path = '../..' }
zmq = "0.9"
anyhow = "1.0"
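To round the chapter off, here's a hypothetical way you might construct the source and queue a message for sending; the socket type, endpoint, and message contents are made up, and inserting the source into a loop works as shown earlier:

fn usage_sketch() -> anyhow::Result<()> {
    // Create and connect a ZeroMQ socket as usual (PUSH and the endpoint are
    // arbitrary choices for this sketch).
    let ctx = zmq::Context::new();
    let socket = ctx.socket(zmq::PUSH)?;
    socket.connect("tcp://127.0.0.1:5555")?;

    // Wrap it in the event source plus the sending end of the MPSC channel.
    let (source, sender) = ZeroMQSource::<Vec<Vec<u8>>>::from_socket(socket)?;

    // Messages queued here end up in the outbox and are written out when the
    // zsocket reports POLLOUT.
    sender
        .send(vec![b"hello".to_vec()])
        .map_err(|_| anyhow::anyhow!("channel closed"))?;

    // `source` would now be inserted into an event loop with insert_source(),
    // as shown earlier in the chapter.
    let _ = source;
    Ok(())
}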
If you're looking for calloop's API documentation, it is available on docs.rs for the released versions. There are also the docs of the current development version.

This book presents a step-by-step tutorial to get yourself familiar with calloop and how it is used: