Commit 76f970c

Dietmar Eggemann authored and Ingo Molnar committed
Revert "sched/core: Reduce cost of sched_move_task when config autogroup"
This reverts commit eff6c8c ("sched/core: Reduce cost of sched_move_task
when config autogroup").

Hazem reported a 30% drop in the UnixBench spawn test with commit eff6c8c
on an m6g.xlarge AWS EC2 instance with 4 vCPUs and 16 GiB RAM (aarch64,
single-level MC sched domain):

  https://lkml.kernel.org/r/[email protected]

There is an early bail from sched_move_task() if p->sched_task_group is
equal to p's 'cpu cgroup' (sched_get_task_group()). E.g. both point to
taskgroup '/user.slice/user-1000.slice/session-1.scope' (Ubuntu 22.04.5
LTS).

So in:

  do_exit()
    sched_autogroup_exit_task()
      sched_move_task()
        if sched_get_task_group(p) == p->sched_task_group
          return

  /* p is enqueued */
  dequeue_task()              \
  sched_change_group()        |
    task_change_group_fair()  |
      detach_task_cfs_rq()    |                      (1)
      set_task_rq()           |
      attach_task_cfs_rq()    |
  enqueue_task()              /

(1) isn't called for p anymore.

It turns out that the regression is related to sgs->group_util in
group_is_overloaded() and group_has_capacity(). If (1) isn't called for
all the 'spawn' tasks, then sgs->group_util is ~900 while
sgs->group_capacity = 1024 (single-CPU sched domain), and this leads to
group_is_overloaded() returning true (2) and group_has_capacity()
returning false (3) much more often than when (1) is called.

I.e. there are many more cases of 'group_overloaded' and
'group_fully_busy' in the WF_FORK wakeup path
sched_balance_find_dst_cpu(), which then much more often returns a
CPU != smp_processor_id() (5).

This isn't good for these extremely short-running tasks (FORK + EXIT)
and also involves calling sched_balance_find_dst_group_cpu()
unnecessarily (single-CPU sched domain).

If instead (1) is called for 'p->flags & PF_EXITING', then the path
(4),(6) is taken much more often:

  select_task_rq_fair(..., wake_flags = WF_FORK)
    cpu = smp_processor_id()
    new_cpu = sched_balance_find_dst_cpu(..., cpu, ...)
      group = sched_balance_find_dst_group(..., cpu)
        do {
          update_sg_wakeup_stats()
            sgs->group_type = group_classify()
              if group_is_overloaded()               (2)
                return group_overloaded
              if !group_has_capacity()               (3)
                return group_fully_busy
              return group_has_spare                 (4)
        } while group

        if local_sgs.group_type > idlest_sgs.group_type
          return idlest                              (5)
        case group_has_spare:
          if local_sgs.idle_cpus >= idlest_sgs.idle_cpus
            return NULL                              (6)

UnixBench tests './Run -c 4 spawn' on:

(a) AWS VM instance (m7gd.16xlarge) with v6.13 ('maxcpus=4 nr_cpus=4')
    and Ubuntu 22.04.5 LTS (aarch64).
    Shell & test run in '/user.slice/user-1000.slice/session-1.scope'.

      w/o patch   w/ patch
        21005       27120

(b) i7-13700K with tip/sched/core ('nosmt maxcpus=8 nr_cpus=8') and
    Ubuntu 22.04.5 LTS (x86_64).
    Shell & test run in '/A'.

      w/o patch   w/ patch
        67675       88806

Results hold with CONFIG_SCHED_AUTOGROUP=y and
/proc/sys/kernel/sched_autogroup_enabled equal to 0 or 1.

Reported-by: Hazem Mohamed Abuelfotoh <[email protected]>
Signed-off-by: Dietmar Eggemann <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Tested-by: Hagar Hemdan <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
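For illustration, below is a minimal standalone model of the two
capacity checks the message refers to, fed with the figures from the
report (group_util ~900, group_capacity 1024). The simplified
conditions and the imbalance_pct value of 117 are assumptions for this
sketch; the real group_is_overloaded()/group_has_capacity() in
kernel/sched/fair.c also consider group_runnable and differ in detail:

  /*
   * Simplified model of the group_util vs. group_capacity comparison
   * in group_is_overloaded()/group_has_capacity(). All values below
   * are taken from the report; this is not the exact kernel code.
   */
  #include <stdbool.h>
  #include <stdio.h>

  struct sg_lb_stats {
          unsigned long group_util;      /* ~900 in the regressed case */
          unsigned long group_capacity;  /* 1024: single-CPU group */
          unsigned int  sum_nr_running;
          unsigned int  group_weight;
  };

  static bool group_is_overloaded(unsigned int imbalance_pct,
                                  const struct sg_lb_stats *sgs)
  {
          if (sgs->sum_nr_running <= sgs->group_weight)
                  return false;
          /* (2): capacity scaled by 100 vs. util scaled by imbalance_pct */
          return sgs->group_capacity * 100 < sgs->group_util * imbalance_pct;
  }

  static bool group_has_capacity(unsigned int imbalance_pct,
                                 const struct sg_lb_stats *sgs)
  {
          if (sgs->sum_nr_running < sgs->group_weight)
                  return true;
          /* (3): spare capacity left after scaling util by imbalance_pct */
          return sgs->group_capacity * 100 > sgs->group_util * imbalance_pct;
  }

  int main(void)
  {
          /* imbalance_pct = 117: assumed default for an MC sched domain */
          struct sg_lb_stats sgs = { 900, 1024, 2, 1 };

          printf("group_is_overloaded: %d\n", group_is_overloaded(117, &sgs));
          printf("group_has_capacity:  %d\n", group_has_capacity(117, &sgs));
          return 0;
  }

Compiled and run, this prints 'group_is_overloaded: 1' and
'group_has_capacity: 0': since 1024 * 100 < 900 * 117, the single-CPU
group classifies as group_overloaded (2), matching the behaviour
described above.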
Parent: f3fa0e4

kernel/sched/core.c: +3 −18
@@ -9016,7 +9016,7 @@ void sched_release_group(struct task_group *tg)
 	spin_unlock_irqrestore(&task_group_lock, flags);
 }
 
-static struct task_group *sched_get_task_group(struct task_struct *tsk)
+static void sched_change_group(struct task_struct *tsk)
 {
 	struct task_group *tg;
 
@@ -9028,13 +9028,7 @@ static struct task_group *sched_get_task_group(struct task_struct *tsk)
 	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
 			  struct task_group, css);
 	tg = autogroup_task_group(tsk, tg);
-
-	return tg;
-}
-
-static void sched_change_group(struct task_struct *tsk, struct task_group *group)
-{
-	tsk->sched_task_group = group;
+	tsk->sched_task_group = tg;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (tsk->sched_class->task_change_group)
@@ -9055,20 +9049,11 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
 {
 	int queued, running, queue_flags =
 		DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
-	struct task_group *group;
 	struct rq *rq;
 
 	CLASS(task_rq_lock, rq_guard)(tsk);
 	rq = rq_guard.rq;
 
-	/*
-	 * Esp. with SCHED_AUTOGROUP enabled it is possible to get superfluous
-	 * group changes.
-	 */
-	group = sched_get_task_group(tsk);
-	if (group == tsk->sched_task_group)
-		return;
-
 	update_rq_clock(rq);
 
 	running = task_current_donor(rq, tsk);
@@ -9079,7 +9064,7 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
 	if (running)
 		put_prev_task(rq, tsk);
 
-	sched_change_group(tsk, group);
+	sched_change_group(tsk);
 	if (!for_autogroup)
 		scx_cgroup_move_task(tsk);