bpf: Remove migrate_disable in kprobe_multi_link_prog_run

[ Upstream commit abdaf49be5424db74e19d167c10d7dad79a0efc2 ]

The graph tracer framework ensures we won't migrate:
kprobe_multi_link_prog_run() is called all the way from the graph
tracer, which disables preemption in function_graph_enter_regs(). So,
as Jiri and Yonghong suggested, there is no need to use
migrate_disable(), and some overhead may be reduced as a result. Also
add a cant_sleep() check for __this_cpu_inc_return().

Fixes: 0dcac27254 ("bpf: Add multi kprobe link")
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250814121430.2347454-1-chen.dylane@linux.dev
Signed-off-by: Sasha Levin <sashal@kernel.org>
@@ -2636,18 +2636,23 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
 	struct bpf_run_ctx *old_run_ctx;
 	int err;
 
+	/*
+	 * graph tracer framework ensures we won't migrate, so there is no need
+	 * to use migrate_disable for bpf_prog_run again. The check here just for
+	 * __this_cpu_inc_return.
+	 */
+	cant_sleep();
 	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
 		err = 0;
 		goto out;
 	}
 
-	migrate_disable();
 	rcu_read_lock();
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
 	err = bpf_prog_run(link->link.prog, regs);
 	bpf_reset_run_ctx(old_run_ctx);
 	rcu_read_unlock();
-	migrate_enable();
 
 out:
 	__this_cpu_dec(bpf_prog_active);
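
For reference, below is a minimal, self-contained sketch of the pattern this
patch relies on. It is not the kernel's actual code: the names my_prog_active
and my_prog_run are made up for illustration. The point is that the caller has
already disabled preemption, so a plain per-CPU recursion counter is safe
without a migrate_disable()/migrate_enable() pair, and cant_sleep() documents
(and, with CONFIG_DEBUG_ATOMIC_SLEEP, checks) that assumption.

	#include <linux/filter.h>
	#include <linux/kernel.h>
	#include <linux/percpu.h>
	#include <linux/preempt.h>
	#include <linux/ptrace.h>
	#include <linux/rcupdate.h>

	/* Per-CPU recursion guard, analogous to bpf_prog_active. */
	static DEFINE_PER_CPU(int, my_prog_active);

	/* Assumed to be reached only from a path that already disabled preemption. */
	static int my_prog_run(struct bpf_prog *prog, struct pt_regs *regs)
	{
		int err = 0;

		/*
		 * Preemption is off in the caller, so this task cannot migrate
		 * and the __this_cpu_inc_return() below is safe without
		 * migrate_disable(). cant_sleep() asserts that atomic context.
		 */
		cant_sleep();
		if (unlikely(__this_cpu_inc_return(my_prog_active) != 1))
			goto out;	/* recursion detected, skip the program */

		rcu_read_lock();
		err = bpf_prog_run(prog, regs);
		rcu_read_unlock();
	out:
		__this_cpu_dec(my_prog_active);
		return err;
	}

This mirrors what the patch does in kprobe_multi_link_prog_run(): since
migration is already impossible on that path, wrapping bpf_prog_run() in
migrate_disable()/migrate_enable() only added overhead.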