Commit ba33267
Async blocking task support
Added the `BlockingTaskQueue` type. `BlockingTaskQueue` allows a Rust closure to be scheduled on a foreign thread where blocking operations are okay. The closure runs inside the parent future, which is nice because it allows the closure to reference its outside scope. On the foreign side, a `BlockingTaskQueue` is a native type that runs a task in some sort of thread queue (`DispatchQueue`, `CoroutineContext`, `futures.Executor`, etc.).

Updated some of the foreign code so that the handle maps used for callback interfaces can also be used for this. It's quite messy right now, but mozilla#1823 should clean things up.

Renamed `Handle` to `UniffiHandle` on Kotlin, since `Handle` can easily conflict with user names. In fact it did in the `custom_types` fixture; the only reason this worked before was that the fixture didn't use callback interfaces.

Added new tests for this in the futures fixtures. Updated the tests to check that handles are being released properly.
1 parent 075efd9 commit ba33267
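The handle maps mentioned in the commit message hand out integer handles for live objects so that only an integer ever crosses the FFI boundary, and the object can be released later. A rough conceptual sketch (the names and structure here are illustrative, not UniFFI's actual generated code):

```python
import itertools

class HandleMap:
    """Map integer handles to live objects so only an integer crosses the FFI."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._entries = {}

    def insert(self, obj):
        # Hand out a fresh integer handle and keep the object alive.
        handle = next(self._counter)
        self._entries[handle] = obj
        return handle

    def get(self, handle):
        return self._entries[handle]

    def remove(self, handle):
        # Releasing the handle drops the map's reference to the object.
        return self._entries.pop(handle)

hm = HandleMap()
h = hm.insert("task queue")
print(hm.get(h))  # → task queue
hm.remove(h)
```

The release-tracking tests mentioned above amount to checking that every handle inserted into such a map is eventually removed.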

44 files changed: +1174 −157 lines

Cargo.lock

Lines changed: 75 additions & 6 deletions

docs/manual/src/futures.md

Lines changed: 60 additions & 0 deletions
@@ -71,3 +71,63 @@ pub trait SayAfterTrait: Send + Sync {

## Blocking tasks

Rust executors are designed around the assumption that the `Future::poll` function will return quickly. This assumption, combined with cooperative scheduling, allows a large number of futures to be handled by a small number of threads. Foreign executors make similar assumptions, and sometimes more extreme ones. For example, the Python event loop is single-threaded: if any task spends a long time between `await` points, it will block all other tasks from progressing.

This raises the question of how async code can interact with blocking code that performs blocking IO, long-running computations without `await` breaks, etc. UniFFI defines the `BlockingTaskQueue` type: a foreign object that schedules work on a thread where it is okay to block.

In Rust, `BlockingTaskQueue` is a UniFFI type that can safely run blocking code. Its `execute` method works like tokio's [`block_in_place`](https://docs.rs/tokio/latest/tokio/task/fn.block_in_place.html) function: it takes a closure and runs it on the `BlockingTaskQueue`. The closure can reference the outside scope (i.e. it does not need to be `'static`). For example:
```rust
#[derive(uniffi::Object)]
struct DataStore {
    // Used to run blocking tasks
    queue: uniffi::BlockingTaskQueue,
    // Low-level DB object with blocking methods
    db: Mutex<Database>,
}

#[uniffi::export]
impl DataStore {
    #[uniffi::constructor]
    fn new(queue: uniffi::BlockingTaskQueue) -> Self {
        Self {
            queue,
            db: Mutex::new(Database::new()),
        }
    }

    async fn fetch_all_items(&self) -> Vec<DbItem> {
        self.queue.execute(|| self.db.lock().fetch_all_items()).await
    }
}
```
On the foreign side, `BlockingTaskQueue` corresponds to a language-dependent class.

### Kotlin

Kotlin uses `CoroutineContext` for its `BlockingTaskQueue`. Any `CoroutineContext` will work, but `Dispatchers.IO` is usually a good choice. A `DataStore` from the example above can be created with `DataStore(Dispatchers.IO)`.

### Swift

Swift uses `DispatchQueue` for its `BlockingTaskQueue`. The user-initiated global queue is normally a good choice. A `DataStore` from the example above can be created with `DataStore(queue: DispatchQueue.global(qos: .userInitiated))`. The `DispatchQueue` should be concurrent.

### Python

Python uses a `futures.Executor` for its `BlockingTaskQueue`. `ThreadPoolExecutor` is typically a good choice. A `DataStore` from the example above can be created with `DataStore(ThreadPoolExecutor())`.
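UniFFI's generated Python bindings are not shown here, but the mechanism a `futures.Executor`-backed queue builds on is the standard library's `run_in_executor`. A minimal sketch of that underlying pattern, with `fetch_all_items` as a stand-in for a blocking call (this is plain stdlib code, not UniFFI's API):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def fetch_all_items():
    # Stand-in for a blocking operation (DB query, file IO, ...)
    return [1, 2, 3]

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as executor:
        # Run the blocking function on an executor thread; the event loop
        # stays free to run other tasks while we await the result.
        items = await loop.run_in_executor(executor, fetch_all_items)
    return items

print(asyncio.run(main()))  # → [1, 2, 3]
```

This is why any `futures.Executor` works as a `BlockingTaskQueue`: the blocking work happens on the executor's threads, never on the event loop thread.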

fixtures/futures/Cargo.toml

Lines changed: 1 addition & 0 deletions
```diff
@@ -17,6 +17,7 @@ path = "src/bin.rs"
 [dependencies]
 uniffi = { workspace = true, features = ["tokio", "cli"] }
 async-trait = "0.1"
+futures = "0.3.29"
 thiserror = "1.0"
 tokio = { version = "1.24.1", features = ["time", "sync"] }
 once_cell = "1.18.0"
```

fixtures/futures/src/lib.rs

Lines changed: 55 additions & 0 deletions
```diff
@@ -11,6 +11,8 @@ use std::{
     time::Duration,
 };
 
+use futures::stream::{FuturesUnordered, StreamExt};
+
 /// Non-blocking timer future.
 pub struct TimerFuture {
     shared_state: Arc<Mutex<SharedState>>,
```

```diff
@@ -385,6 +387,59 @@ impl SayAfterUdlTrait for SayAfterImpl2 {
 #[uniffi::export]
 fn get_say_after_udl_traits() -> Vec<Arc<dyn SayAfterUdlTrait>> {
     vec![Arc::new(SayAfterImpl1), Arc::new(SayAfterImpl2)]
+}
+
+/// Async function that uses a blocking task queue to do its work
+#[uniffi::export]
+pub async fn calc_square(queue: uniffi::BlockingTaskQueue, value: i32) -> i32 {
+    queue.execute(|| value * value).await
+}
+
+/// Same as before, but this one runs multiple tasks
+#[uniffi::export]
+pub async fn calc_squares(queue: uniffi::BlockingTaskQueue, items: Vec<i32>) -> Vec<i32> {
+    // Use `FuturesUnordered` to test our blocking task queue code, which is
+    // known to be a tricky API to work with. In particular, if we don't
+    // notify the waker then `FuturesUnordered` will not poll again.
+    let mut futures: FuturesUnordered<_> = (0..items.len())
+        .map(|i| {
+            // Test that we can use references from the surrounding scope
+            let items = &items;
+            queue.execute(move || items[i] * items[i])
+        })
+        .collect();
+    let mut results = vec![];
+    while let Some(result) = futures.next().await {
+        results.push(result);
+    }
+    results.sort();
+    results
+}
+
+/// ...and this one uses multiple BlockingTaskQueues
+#[uniffi::export]
+pub async fn calc_squares_multi_queue(
+    queues: Vec<uniffi::BlockingTaskQueue>,
+    items: Vec<i32>,
+) -> Vec<i32> {
+    let mut futures: FuturesUnordered<_> = (0..items.len())
+        .map(|i| {
+            // Test that we can use references from the surrounding scope
+            let items = &items;
+            queues[i].execute(move || items[i] * items[i])
+        })
+        .collect();
+    let mut results = vec![];
+    while let Some(result) = futures.next().await {
+        results.push(result);
+    }
+    results.sort();
+    results
+}
+
+/// Like calc_square, but it clones the BlockingTaskQueue first then drops both copies. Used to
+/// test that a) the clone works and b) we correctly drop the references.
+#[uniffi::export]
+pub async fn calc_square_with_clone(queue: uniffi::BlockingTaskQueue, value: i32) -> i32 {
+    queue.clone().execute(|| value * value).await
 }
 
 uniffi::include_scaffolding!("futures");
```
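The `calc_squares` fixture above fans several blocking tasks out from one async function and collects the results as they complete. The same shape can be sketched in plain Python with the standard library (no UniFFI involved; names here are illustrative):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def calc_squares(executor, items):
    loop = asyncio.get_running_loop()
    # One blocking task per item; they run concurrently on the executor's
    # threads while this coroutine awaits them all.
    futures = [loop.run_in_executor(executor, lambda x=x: x * x) for x in items]
    results = await asyncio.gather(*futures)
    # Completion order is not guaranteed in general, so sort, as the fixture does.
    return sorted(results)

with ThreadPoolExecutor() as executor:
    print(asyncio.run(calc_squares(executor, [1, 3, 2])))  # → [1, 4, 9]
```

The Rust fixture has the extra concern that `FuturesUnordered` only re-polls futures whose waker was notified, which is exactly what the blocking task queue implementation must get right.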
