mirror of https://github.com/torvalds/linux.git
rcu: Remove references to old grace-period-wait primitives
The rcu_barrier_sched(), synchronize_sched(), and synchronize_rcu_bh() RCU API members have been gone for many years.  This commit therefore removes non-historical instances of them.

Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
parent 81a208c56e
commit 73298c7cf1
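For readers unfamiliar with the removed names, the post-v4.20 flavor consolidation subsumes all three.  A minimal sketch of the modern update-side calls (the helper function below is hypothetical, not part of this commit):

#include <linux/rcupdate.h>

/* Hypothetical helper: where pre-v4.20 code chose among RCU flavors,
 * consolidated kernels use the plain RCU API for all of them. */
static void flush_updates_and_callbacks(void)
{
	synchronize_rcu();	/* replaces synchronize_sched() and synchronize_rcu_bh() */
	rcu_barrier();		/* replaces rcu_barrier_sched() */
}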
Documentation/RCU/rcubarrier.rst
@@ -329,10 +329,7 @@ Answer:
 	was first added back in 2005.  This is because on_each_cpu()
 	disables preemption, which acted as an RCU read-side critical
 	section, thus preventing CPU 0's grace period from completing
-	until on_each_cpu() had dealt with all of the CPUs.  However,
-	with the advent of preemptible RCU, rcu_barrier() no longer
-	waited on nonpreemptible regions of code in preemptible kernels,
-	that being the job of the new rcu_barrier_sched() function.
+	until on_each_cpu() had dealt with all of the CPUs.
 
 	However, with the RCU flavor consolidation around v4.20, this
 	possibility was once again ruled out, because the consolidated
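The consolidated behavior that the updated answer relies on can be sketched as follows.  This is a hedged illustration (reader(), updater(), and shared_state are hypothetical names): on v4.20 and later kernels, a preemption-disabled region such as the one on_each_cpu() runs its handlers in is once again waited on by the consolidated grace period.

#include <linux/rcupdate.h>
#include <linux/preempt.h>

static int shared_state;	/* hypothetical shared datum */

static void reader(void)
{
	preempt_disable();		/* acts as an RCU read-side critical section */
	(void)READ_ONCE(shared_state);
	preempt_enable();		/* quiescent state: grace period may now complete */
}

static void updater(int new_val)
{
	WRITE_ONCE(shared_state, new_val);
	synchronize_rcu();	/* waits for reader()'s preempt-disabled region */
}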
include/linux/rcupdate.h
@@ -806,11 +806,9 @@ do { \
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
- * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also
- * wait for regions of code with preemption disabled, including regions of
- * code with interrupts or softirqs disabled.  In pre-v5.0 kernels, which
- * define synchronize_sched(), only code enclosed within rcu_read_lock()
- * and rcu_read_unlock() are guaranteed to be waited for.
+ * Both synchronize_rcu() and call_rcu() also wait for regions of code
+ * with preemption disabled, including regions of code with interrupts or
+ * softirqs disabled.
  *
  * Note, however, that RCU callbacks are permitted to run concurrently
  * with new RCU read-side critical sections.  One way that this can happen
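A short sketch of the call_rcu() deferral that the rewritten comment documents (struct foo, foo_release(), and foo_retire() are hypothetical names, not from this commit):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int val;
	struct rcu_head rh;
};

static void foo_release(struct rcu_head *rh)
{
	/* Invoked only after all pre-existing readers, including code
	 * running with preemption, interrupts, or softirqs disabled,
	 * have exited their critical sections. */
	kfree(container_of(rh, struct foo, rh));
}

static void foo_retire(struct foo *fp)
{
	call_rcu(&fp->rh, foo_release);	/* deferred reclamation */
}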
@@ -865,11 +863,10 @@ static __always_inline void rcu_read_lock(void)
  * rcu_read_unlock() - marks the end of an RCU read-side critical section.
  *
  * In almost all situations, rcu_read_unlock() is immune from deadlock.
- * In recent kernels that have consolidated synchronize_sched() and
- * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
- * also extends to the scheduler's runqueue and priority-inheritance
- * spinlocks, courtesy of the quiescent-state deferral that is carried
- * out when rcu_read_unlock() is invoked with interrupts disabled.
+ * This deadlock immunity also extends to the scheduler's runqueue
+ * and priority-inheritance spinlocks, courtesy of the quiescent-state
+ * deferral that is carried out when rcu_read_unlock() is invoked with
+ * interrupts disabled.
  *
  * See rcu_read_lock() for more information.
  */
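The deadlock immunity described in the updated comment can be illustrated with a hypothetical reader that ends its RCU read-side critical section while interrupts are disabled (the lock and function names are illustrative assumptions):

#include <linux/rcupdate.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(my_lock);	/* hypothetical lock */

static void read_then_unlock_with_irqs_off(void)
{
	unsigned long flags;

	rcu_read_lock();
	raw_spin_lock_irqsave(&my_lock, flags);
	/* ... access RCU-protected data ... */
	rcu_read_unlock();	/* safe even here: any needed quiescent-state
				 * report is deferred rather than acquiring
				 * runqueue or priority-inheritance locks */
	raw_spin_unlock_irqrestore(&my_lock, flags);
}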