linux-yocto/kernel/locking
Boqun Feng 95f0958240 locking/lockdep: Decrease nr_unused_locks if lock unused in zap_class()
commit 495f53d5cc upstream.

Currently, when a lock class is allocated, nr_unused_locks is increased
by 1 and remains elevated until the class gets used, at which point
mark_lock() decreases nr_unused_locks by 1. However, one scenario is
missed: a lock class may be zapped without ever having been used. This
can result in a situation where nr_unused_locks != 0 even though no
unused lock class is active in the system, and reading
/proc/lockdep_stats (`cat /proc/lockdep_stats`) then triggers a
WARN_ON() in a CONFIG_DEBUG_LOCKDEP=y kernel:

  [...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
  [...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
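The warning comes from a consistency check in lockdep_stats_show(): the
proc file recounts the never-used classes by scanning the registered
lock classes and compares that recount against the nr_unused_locks
statistic. A minimal sketch of the check, assuming a list walk over
all_lock_classes (the exact iteration helper used in
kernel/locking/lockdep_proc.c may differ):

  unsigned long nr_unused = 0;
  struct lock_class *class;

  /* Recount classes that were registered but never marked used. */
  list_for_each_entry(class, &all_lock_classes, lock_entry)
          if (!class->usage_mask)
                  nr_unused++;

  /* Fires when the stat counter and the recount disagree. */
  DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused);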

As a result, lockdep is disabled after this warning fires.

Therefore, nr_unused_locks needs to be accounted correctly at
zap_class() time.
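
A minimal sketch of that accounting fix, assuming the decrement is added
to zap_class() in kernel/locking/lockdep.c (the exact placement within
the function is not shown here):

  /*
   * The class is being zapped but was never used, so mark_lock()
   * never decremented nr_unused_locks for it; balance the counter
   * here instead.
   */
  if (!class->usage_mask)
          debug_atomic_dec(nr_unused_locks);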

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Waiman Long <longman@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20250326180831.510348-1-boqun.feng@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-04-20 10:15:45 +02:00
irqflag-debug.c
lock_events_list.h
lock_events.c
lock_events.h
lockdep_internals.h
lockdep_proc.c
lockdep_states.h
lockdep.c locking/lockdep: Decrease nr_unused_locks if lock unused in zap_class() 2025-04-20 10:15:45 +02:00
locktorture.c
Makefile
mcs_spinlock.h
mutex-debug.c
mutex.c
mutex.h
osq_lock.c
percpu-rwsem.c
qrwlock.c
qspinlock_paravirt.h
qspinlock_stat.h
qspinlock.c
rtmutex_api.c
rtmutex_common.h
rtmutex.c
rwbase_rt.c
rwsem.c Locking changes for v6.12: 2024-09-29 08:51:30 -07:00
semaphore.c locking/semaphore: Use wake_q to wake up processes outside lock critical section 2025-04-10 14:39:31 +02:00
spinlock_debug.c
spinlock_rt.c
spinlock.c
test-ww_mutex.c locking/ww_mutex/test: Use swap() macro 2025-02-17 10:04:44 +01:00
ww_mutex.h
ww_rt_mutex.c