Commit 110aa25

io_uring: fix race in unified task_work running
We use a bit to manage whether we need to add the shared task_work, but a list + lock
for the pending work. Before aborting a current run of the task_work we check if
the list is empty, but we do so without grabbing the lock that protects it. This
can lead to races where we think we have nothing left to run, when in practice
we could be racing with a task adding new work to the list. If we do hit that
race condition, we could be left with work items that need processing, but the
shared task_work is not active.

Ensure that we grab the lock before checking if the list is empty, so we know
if it's safe to exit the run or not.

Link: https://lore.kernel.org/io-uring/[email protected]/
Cc: [email protected] # 5.11+
Reported-by: Forza <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
1 parent 44eff40 commit 110aa25
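
For illustration only, here is a minimal user-space sketch of the pattern this fix enforces: a producer queues work on a lock-protected list and an atomic flag marks whether a runner is active; before exiting, the runner must clear the flag and re-check list emptiness while holding the same lock, otherwise a concurrent add can be missed. The names (work_item, work_add, work_run) and the pthread/stdatomic primitives are hypothetical stand-ins, not the actual io_uring code.

	/*
	 * Simplified sketch of the race fixed by this commit, using a
	 * mutex in place of tctx->task_lock and an atomic_flag in place
	 * of the task_state bit. Not the io_uring implementation.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdlib.h>

	struct work_item {
		struct work_item *next;
	};

	static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;
	static struct work_item *work_list;	/* protected by work_lock */
	static atomic_flag work_active = ATOMIC_FLAG_INIT;

	/* Producer: queue an item; returns true if the caller must run work_run(). */
	static bool work_add(struct work_item *item)
	{
		pthread_mutex_lock(&work_lock);
		item->next = work_list;
		work_list = item;
		pthread_mutex_unlock(&work_lock);

		/* true if no runner was active, so the caller becomes the runner */
		return !atomic_flag_test_and_set(&work_active);
	}

	/* Consumer: drain the list, then decide under the lock if it is safe to exit. */
	static void work_run(void)
	{
		for (;;) {
			struct work_item *node;

			pthread_mutex_lock(&work_lock);
			node = work_list;
			work_list = NULL;
			pthread_mutex_unlock(&work_lock);

			while (node) {
				struct work_item *next = node->next;
				free(node);
				node = next;
			}

			/*
			 * The fix: clear the active flag and re-check emptiness
			 * while holding the lock, so a concurrent work_add()
			 * cannot slip an item in unnoticed between the check
			 * and the clear.
			 */
			pthread_mutex_lock(&work_lock);
			atomic_flag_clear(&work_active);
			if (!work_list) {
				pthread_mutex_unlock(&work_lock);
				break;
			}
			pthread_mutex_unlock(&work_lock);

			/* more work was queued; another runner is enqueued, yield */
			if (atomic_flag_test_and_set(&work_active))
				break;
		}
	}

	int main(void)
	{
		struct work_item *item = malloc(sizeof(*item));

		if (item && work_add(item))
			work_run();	/* we won the flag, so we are the runner */
		return 0;
	}
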

File tree

1 file changed: +5, -1 lines changed


fs/io_uring.c (+5, -1)
@@ -1959,9 +1959,13 @@ static void tctx_task_work(struct callback_head *cb)
 			node = next;
 		}
 		if (wq_list_empty(&tctx->task_list)) {
+			spin_lock_irq(&tctx->task_lock);
 			clear_bit(0, &tctx->task_state);
-			if (wq_list_empty(&tctx->task_list))
+			if (wq_list_empty(&tctx->task_list)) {
+				spin_unlock_irq(&tctx->task_lock);
 				break;
+			}
+			spin_unlock_irq(&tctx->task_lock);
 			/* another tctx_task_work() is enqueued, yield */
 			if (test_and_set_bit(0, &tctx->task_state))
 				break;
