The function worker_thread() is programmed in a way that roughly
doubles the number of context switches to be expected, because it
enforces blocking reads:

  Performance counter stats for 'perf bench sched pipe':

       2,000,004  context-switches

    11.859548321 seconds time elapsed

     0.674871000 seconds user
     8.076890000 seconds sys

The result of this behavior is that the blocking reads by far dominate
the performance analysis of 'perf bench sched pipe':

  Samples: 78K of event 'cycles:P', Event count (approx.): 27964965844
  Overhead  Command     Shared Object      Symbol
    25.28%  sched-pipe  [kernel.kallsyms]  [k] read_hpet
     8.11%  sched-pipe  [kernel.kallsyms]  [k] retbleed_untrain_ret
     2.82%  sched-pipe  [kernel.kallsyms]  [k] pipe_write

From the code, it is unclear whether that behavior is intended, but the
git log says that at least Ingo Molnar aimed to mimic lmbench's
lat_ctx, which does not handle the pipe ends that way
(https://sourceforge.net/p/lmbench/code/HEAD/tree/trunk/lmbench2/src/lat_ctx.c).

Fix worker_thread() by always feeding the write end of the pipe first
and only then trying to read (see the sketch below). This roughly
halves the context switches and the runtime of a pure
'perf bench sched pipe':

  Performance counter stats for 'perf bench sched pipe':

       1,005,770  context-switches

     6.033448041 seconds time elapsed

     0.423142000 seconds user
     4.519829000 seconds sys

And the blocking reads no longer dominate the analysis to the above
extreme:

  Samples: 40K of event 'cycles:P', Event count (approx.): 14309364879
  Overhead  Command     Shared Object      Symbol
    12.20%  sched-pipe  [kernel.kallsyms]  [k] read_hpet
     9.23%  sched-pipe  [kernel.kallsyms]  [k] retbleed_untrain_ret
     3.68%  sched-pipe  [kernel.kallsyms]  [k] pipe_write

Signed-off-by: Dirk Gouders <dirk@gouders.net>
Acked-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250323140316.19027-2-dirk@gouders.net
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
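
Below is a minimal standalone sketch of the fixed ping-pong loop. It is
illustrative only, not the actual tools/perf/bench/sched-pipe.c code:
LOOPS, struct thread_data and the pipe/thread setup in main() are made
up for this example, and assert() stands in for the benchmark's error
handling.

  /*
   * Illustrative sketch: both workers feed the write end of their pipe
   * first and read afterwards, so a worker only blocks in read() when
   * its partner's token has not arrived yet -- roughly one context
   * switch per iteration instead of two.
   */
  #include <assert.h>
  #include <pthread.h>
  #include <unistd.h>

  #define LOOPS 1000000

  struct thread_data {
          int pipe_read;          /* read end of the partner's pipe */
          int pipe_write;         /* write end of our own pipe */
  };

  static void *worker_thread(void *__tdata)
  {
          struct thread_data *td = __tdata;
          int m = 0, i, ret;

          for (i = 0; i < LOOPS; i++) {
                  /* Feed the write end first ... */
                  ret = write(td->pipe_write, &m, sizeof(int));
                  assert(ret == sizeof(int));
                  /* ... then do the (possibly blocking) read. */
                  ret = read(td->pipe_read, &m, sizeof(int));
                  assert(ret == sizeof(int));
          }
          return NULL;
  }

  int main(void)
  {
          int pipe_1[2], pipe_2[2];       /* one pipe per direction */
          struct thread_data td[2];
          pthread_t t[2];

          if (pipe(pipe_1) || pipe(pipe_2))
                  return 1;

          /* Cross-connect the two workers. */
          td[0].pipe_read  = pipe_1[0];
          td[0].pipe_write = pipe_2[1];
          td[1].pipe_read  = pipe_2[0];
          td[1].pipe_write = pipe_1[1];

          pthread_create(&t[0], NULL, worker_thread, &td[0]);
          pthread_create(&t[1], NULL, worker_thread, &td[1]);
          pthread_join(t[0], NULL);
          pthread_join(t[1], NULL);
          return 0;
  }

Build with 'cc -O2 -pthread sketch.c'. Because each worker writes at
most one token ahead of its reads, the small writes never fill the pipe
buffer and thus never block; only the reads can block, and only when
the partner has not run yet.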