linux-yocto/kernel/bpf
Hou Tao 7bf4461f1c bpf: Check rcu_read_lock_trace_held() in bpf_map_lookup_percpu_elem()
[ Upstream commit d496557826 ]

The bpf_map_lookup_percpu_elem() helper is also available to sleepable bpf
programs. When the BPF JIT is disabled, or on a 32-bit host,
bpf_map_lookup_percpu_elem() is not inlined. Calling it from a
sleepable bpf program then triggers the warning in
bpf_map_lookup_percpu_elem(), because the bpf program holds only the
rcu_read_lock_trace lock. Therefore, add the missing check.
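
Roughly, the helper reads as follows after the fix (a sketch against
kernel/bpf/helpers.c; see the upstream commit for the exact hunk):

	/* Sleepable bpf programs hold rcu_read_lock_trace(), not plain
	 * RCU, so the lockdep assertion must accept that reader section
	 * in addition to rcu_read_lock()/rcu_read_lock_bh().
	 */
	BPF_CALL_3(bpf_map_lookup_percpu_elem, struct bpf_map *, map,
		   void *, key, u32, cpu)
	{
		WARN_ON_ONCE(!rcu_read_lock_held() &&
			     !rcu_read_lock_trace_held() &&
			     !rcu_read_lock_bh_held());
		return (unsigned long) map->ops->map_lookup_percpu_elem(map, key, cpu);
	}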

Reported-by: syzbot+dce5aae19ae4d6399986@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/000000000000176a130617420310@google.com/
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20250526062534.1105938-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-27 11:08:53 +01:00
preload
arraymap.c bpf: Check percpu map value size first 2024-10-17 15:24:15 +02:00
bloom_filter.c bpf: Check bloom filter map value size 2024-05-17 12:02:11 +02:00
bpf_cgrp_storage.c bpf: Only fails the busy counter check in bpf_cgrp_storage_get if it creates storage 2025-05-02 07:50:53 +02:00
bpf_inode_storage.c
bpf_iter.c
bpf_local_storage.c bpf: bpf_local_storage: Always use bpf_mem_alloc in PREEMPT_RT 2025-02-08 09:52:06 +01:00
bpf_lru_list.c
bpf_lru_list.h
bpf_lsm.c
bpf_struct_ops_types.h
bpf_struct_ops.c
bpf_task_storage.c
btf.c bpf: Check size for BTF-based ctx access of pointer members 2024-12-19 18:11:25 +01:00
cgroup_iter.c
cgroup.c bpf: Allow pre-ordering for bpf cgroup progs 2025-06-04 14:41:58 +02:00
core.c bpf: Avoid __bpf_prog_ret0_warn when jit fails 2025-06-19 15:28:18 +02:00
cpumap.c bpf: report RCU QS in cpumap kthread 2024-03-26 18:20:12 -04:00
cpumask.c
devmap.c bpf: fix OOB devmap writes when deleting elements 2024-12-14 19:59:56 +01:00
disasm.c
disasm.h
dispatcher.c
hashtab.c bpf: fix possible endless loop in BPF map iteration 2025-06-04 14:41:53 +02:00
helpers.c bpf: Check rcu_read_lock_trace_held() in bpf_map_lookup_percpu_elem() 2025-06-27 11:08:53 +01:00
inode.c
Kconfig
link_iter.c
local_storage.c
log.c
lpm_trie.c bpf: Fix exact match conditions in trie_get_next_key() 2024-12-14 19:59:51 +01:00
Makefile
map_in_map.c bpf: Optimize the free of inner map 2024-06-21 14:38:15 +02:00
map_in_map.h bpf: Add map and need_defer parameters to .map_fd_put_ptr() 2024-01-25 15:35:22 -08:00
map_iter.c
memalloc.c bpf: Use c->unit_size to select target cache during free 2024-01-25 15:35:28 -08:00
mmap_unlock_work.h
mprog.c
net_namespace.c
offload.c
percpu_freelist.c
percpu_freelist.h
prog_iter.c
queue_stack_maps.c
reuseport_array.c
ringbuf.c bpf: Use raw_spinlock_t in ringbuf 2025-03-22 12:50:37 -07:00
stackmap.c bpf: Fix stackmap overflow check on 32-bit arches 2024-03-26 18:19:39 -04:00
syscall.c bpf: Allow pre-ordering for bpf cgroup progs 2025-06-04 14:41:58 +02:00
sysfs_btf.c
task_iter.c bpf: Fix iter/task tid filtering 2024-11-01 01:58:25 +01:00
tcx.c
tnum.c
trampoline.c
verifier.c bpf: don't do clean_live_states when state->loop_entry->branches > 0 2025-06-04 14:42:07 +02:00