rcu: Remove references to old grace-period-wait primitives

The rcu_barrier_sched(), synchronize_sched(), and synchronize_rcu_bh()
RCU API members have been gone for many years.  This commit therefore
removes non-historical instances of them.

Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
commit 73298c7cf1 (parent 81a208c56e)
Author:    Paul E. McKenney <paulmck@kernel.org>, 2025-01-09 08:52:15 -08:00
Committer: Boqun Feng <boqun.feng@gmail.com>
2 changed files, 8 insertions(+), 14 deletions(-)
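
For orientation only (this is not part of the patch), a minimal sketch of an updater written against the consolidated grace-period-wait API might look as follows. The names struct foo, global_foo, and foo_remove() are illustrative assumptions, not code from this commit; the point is simply that synchronize_rcu() now covers the cases that once called for synchronize_sched() or synchronize_rcu_bh().

    /* Sketch only: illustrative names, not taken from this patch. */
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct foo {
            int data;
            struct rcu_head rh;
    };

    static struct foo __rcu *global_foo;    /* hypothetical RCU-protected pointer */

    /* Updater: unpublish the structure, wait for all readers, then free it. */
    static void foo_remove(void)
    {
            struct foo *old = rcu_dereference_protected(global_foo, 1);

            rcu_assign_pointer(global_foo, NULL);
            synchronize_rcu();      /* one primitive, post-consolidation */
            kfree(old);
    }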

@@ -329,10 +329,7 @@ Answer:
 was first added back in 2005. This is because on_each_cpu()
 disables preemption, which acted as an RCU read-side critical
 section, thus preventing CPU 0's grace period from completing
-until on_each_cpu() had dealt with all of the CPUs. However,
-with the advent of preemptible RCU, rcu_barrier() no longer
-waited on nonpreemptible regions of code in preemptible kernels,
-that being the job of the new rcu_barrier_sched() function.
+until on_each_cpu() had dealt with all of the CPUs.
 However, with the RCU flavor consolidation around v4.20, this
 possibility was once again ruled out, because the consolidated
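
The rcu_barrier() behavior discussed in this hunk is typically relied upon in cleanup paths along the following lines. This is a hedged sketch reusing the hypothetical struct foo from the sketch above; rcu_barrier() itself really does wait for all previously queued RCU callbacks to be invoked.

    /* Sketch only: deferred free plus a barrier before, e.g., module unload. */
    static void foo_reclaim(struct rcu_head *rh)
    {
            kfree(container_of(rh, struct foo, rh));
    }

    static void foo_delete(struct foo *p)
    {
            call_rcu(&p->rh, foo_reclaim);  /* queue the deferred free */
    }

    static void foo_cleanup(void)
    {
            /* Does not return until every queued foo_reclaim() has run. */
            rcu_barrier();
    }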

@@ -806,11 +806,9 @@ do { \
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
- * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also
- * wait for regions of code with preemption disabled, including regions of
- * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
- * define synchronize_sched(), only code enclosed within rcu_read_lock()
- * and rcu_read_unlock() are guaranteed to be waited for.
+ * Both synchronize_rcu() and call_rcu() also wait for regions of code
+ * with preemption disabled, including regions of code with interrupts or
+ * softirqs disabled.
 *
  * Note, however, that RCU callbacks are permitted to run concurrently
  * with new RCU read-side critical sections. One way that this can happen
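
The consolidated behavior described in the updated comment means that a reader like the following hedged sketch (again using the hypothetical global_foo) is waited on by synchronize_rcu() and call_rcu() even though it never calls rcu_read_lock(); disabling preemption is enough.

    /* Sketch only: a preemption-disabled region acts as a read-side section. */
    #include <linux/preempt.h>

    static int foo_read_preempt_off(void)
    {
            struct foo *p;
            int val = -1;

            preempt_disable();
            p = rcu_dereference_sched(global_foo);
            if (p)
                    val = p->data;
            preempt_enable();
            return val;
    }
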
@@ -865,11 +863,10 @@ static __always_inline void rcu_read_lock(void)
  * rcu_read_unlock() - marks the end of an RCU read-side critical section.
  *
  * In almost all situations, rcu_read_unlock() is immune from deadlock.
- * In recent kernels that have consolidated synchronize_sched() and
- * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
- * also extends to the scheduler's runqueue and priority-inheritance
- * spinlocks, courtesy of the quiescent-state deferral that is carried
- * out when rcu_read_unlock() is invoked with interrupts disabled.
+ * This deadlock immunity also extends to the scheduler's runqueue
+ * and priority-inheritance spinlocks, courtesy of the quiescent-state
+ * deferral that is carried out when rcu_read_unlock() is invoked with
+ * interrupts disabled.
  *
  * See rcu_read_lock() for more information.
  */
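
The deadlock immunity described in this hunk covers patterns such as the following hedged sketch, in which the outermost rcu_read_unlock() runs with interrupts disabled under a spinlock; foo_lock and foo_bump() are illustrative names, not code from this commit.

    /* Sketch only: rcu_read_unlock() with interrupts disabled is safe. */
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(foo_lock);   /* hypothetical update-side lock */

    static void foo_bump(void)
    {
            unsigned long flags;
            struct foo *p;

            spin_lock_irqsave(&foo_lock, flags);
            rcu_read_lock();
            p = rcu_dereference(global_foo);
            if (p)
                    p->data++;
            rcu_read_unlock();      /* quiescent state is deferred, no deadlock */
            spin_unlock_irqrestore(&foo_lock, flags);
    }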