linux-yocto/kernel/bpf
Willem de Bruijn b71a75739a bpf: Adjust free target to avoid global starvation of LRU map
[ Upstream commit d4adf1c9ee ]

BPF_MAP_TYPE_LRU_HASH can recycle most recent elements well before the
map is full, due to percpu reservations and force shrink before
neighbor stealing. Once a CPU is unable to borrow from the global map,
it steals a single element from a neighbor once; after that, every
allocation flushes that one element to the global list and immediately
recycles it.

Batch value LOCAL_FREE_TARGET (128) will exhaust a 10K element map
with 79 CPUs. CPU 79 will observe this behavior even while its
neighbors hold 78 * 127 + 1 * 15 == 9921 free elements (99%).
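
A quick standalone check of that arithmetic (plain user-space C, not
kernel code; the constants simply restate the example above):

    #include <stdio.h>

    int main(void)
    {
            const unsigned int free_target = 128;  /* LOCAL_FREE_TARGET */
            const unsigned int map_size = 10000;   /* 10K element map */
            const unsigned int cpus = 79;

            /* Aggregate percpu reservations exceed the whole map. */
            printf("reservable: %u of %u elements\n",
                   cpus * free_target, map_size);

            /* Free elements parked on CPU 79's neighbors. */
            printf("idle on neighbors: %u elements\n", 78 * 127 + 1 * 15);
            return 0;
    }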

CPUs need not be active concurrently. The issue can appear with
affinity migration, e.g., irqbalance. Each CPU can reserve and then
hold onto its 128 elements indefinitely.

Avoid global list exhaustion by limiting the aggregate percpu caches to
half of the map size, adjusting LOCAL_FREE_TARGET based on the CPU
count. This change has no effect on sufficiently large tables.
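
A minimal sketch of that adjustment (assumed shape only; the helper name
and exact rounding are illustrative, not the verbatim patch in
bpf_lru_list.c):

    #define LOCAL_FREE_TARGET 128

    /* Cap the per-CPU free batch so that all CPUs together can pin at
     * most half of the map's elements. Maps with at least
     * 2 * LOCAL_FREE_TARGET * nr_cpus elements are unaffected. */
    unsigned int lru_free_target(unsigned int nr_elems, unsigned int nr_cpus)
    {
            unsigned int target = nr_elems / nr_cpus / 2;

            if (target < 1)
                    target = 1;
            if (target > LOCAL_FREE_TARGET)
                    target = LOCAL_FREE_TARGET;
            return target;
    }

For the example above this gives 10000 / 79 / 2 == 63, so the 79 CPUs
can reserve at most 79 * 63 == 4977 elements, leaving the remainder on
the global free list.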

Similar to LOCAL_NR_SCANS and lru->nr_scans, introduce a map variable
lru->free_target. The extra field fits in a hole in struct bpf_lru.
The cacheline is already warm where it is read in the hot path. The field is
only accessed with the lru lock held.
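
For orientation only, a rough (non-verbatim) illustration of that
placement; every member except nr_scans and free_target is a placeholder
for the real fields in bpf_lru_list.h:

    struct bpf_lru_sketch {
            void *lists_and_callbacks;      /* placeholder members */
            unsigned int hash_offset;
            unsigned int nr_scans;          /* existing per-map scan batch */
            unsigned int free_target;       /* new: per-map free batch used
                                             * in place of LOCAL_FREE_TARGET;
                                             * accessed under the lru lock */
            _Bool percpu;
    };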

Tested-by: Anton Protopopov <a.s.protopopov@gmail.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://lore.kernel.org/r/20250618215803.3587312-1-willemdebruijn.kernel@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-07-17 18:35:21 +02:00
preload
arraymap.c bpf: Check percpu map value size first 2024-10-17 15:24:15 +02:00
bloom_filter.c bpf: Check bloom filter map value size 2024-05-17 12:02:11 +02:00
bpf_cgrp_storage.c bpf: Only fails the busy counter check in bpf_cgrp_storage_get if it creates storage 2025-05-02 07:50:53 +02:00
bpf_inode_storage.c
bpf_iter.c
bpf_local_storage.c bpf: bpf_local_storage: Always use bpf_mem_alloc in PREEMPT_RT 2025-02-08 09:52:06 +01:00
bpf_lru_list.c bpf: Adjust free target to avoid global starvation of LRU map 2025-07-17 18:35:21 +02:00
bpf_lru_list.h bpf: Adjust free target to avoid global starvation of LRU map 2025-07-17 18:35:21 +02:00
bpf_lsm.c
bpf_struct_ops_types.h
bpf_struct_ops.c
bpf_task_storage.c
btf.c bpf: Check size for BTF-based ctx access of pointer members 2024-12-19 18:11:25 +01:00
cgroup_iter.c
cgroup.c bpf: Allow pre-ordering for bpf cgroup progs 2025-06-04 14:41:58 +02:00
core.c bpf: Avoid __bpf_prog_ret0_warn when jit fails 2025-06-19 15:28:18 +02:00
cpumap.c bpf: report RCU QS in cpumap kthread 2024-03-26 18:20:12 -04:00
cpumask.c
devmap.c bpf: fix OOB devmap writes when deleting elements 2024-12-14 19:59:56 +01:00
disasm.c
disasm.h
dispatcher.c
hashtab.c bpf: fix possible endless loop in BPF map iteration 2025-06-04 14:41:53 +02:00
helpers.c bpf: Check rcu_read_lock_trace_held() in bpf_map_lookup_percpu_elem() 2025-06-27 11:08:53 +01:00
inode.c
Kconfig
link_iter.c
local_storage.c
log.c
lpm_trie.c bpf: Fix exact match conditions in trie_get_next_key() 2024-12-14 19:59:51 +01:00
Makefile
map_in_map.c bpf: Optimize the free of inner map 2024-06-21 14:38:15 +02:00
map_in_map.h bpf: Add map and need_defer parameters to .map_fd_put_ptr() 2024-01-25 15:35:22 -08:00
map_iter.c
memalloc.c bpf: Use c->unit_size to select target cache during free 2024-01-25 15:35:28 -08:00
mmap_unlock_work.h
mprog.c
net_namespace.c
offload.c
percpu_freelist.c
percpu_freelist.h
prog_iter.c
queue_stack_maps.c
reuseport_array.c
ringbuf.c bpf: Use raw_spinlock_t in ringbuf 2025-03-22 12:50:37 -07:00
stackmap.c bpf: Fix stackmap overflow check on 32-bit arches 2024-03-26 18:19:39 -04:00
syscall.c bpf: Allow pre-ordering for bpf cgroup progs 2025-06-04 14:41:58 +02:00
sysfs_btf.c
task_iter.c bpf: Fix iter/task tid filtering 2024-11-01 01:58:25 +01:00
tcx.c
tnum.c
trampoline.c
verifier.c bpf: don't do clean_live_states when state->loop_entry->branches > 0 2025-06-04 14:42:07 +02:00