
Merge tag 'docs-6.16' of git://git.lwn.net/linux

Pull documentation updates from Jonathan Corbet:
 "A moderately busy cycle for documentation this time around:

   - The most significant change is the replacement of the old
     kernel-doc script (a monstrous collection of Perl regexes that
     predates the Git era) with a Python reimplementation. That, too, is
     a horrifying collection of regexes, but in a much cleaner and more
     maintainable structure that integrates far better with the Sphinx
     build system.

     This change has been in linux-next for the full 6.15 cycle; the
     small number of problems that turned up have been addressed,
     seemingly to everybody's satisfaction. The Perl kernel-doc script
     remains in tree (as scripts/kernel-doc.pl) and can be used with a
     command-line option if need be. Unless some reason to keep it
     around materializes, it will probably go away in 6.17.

     Credit goes to Mauro Carvalho Chehab for doing all this work.

   - Some RTLA documentation updates

   - A handful of Chinese translations

   - The usual collection of typo fixes, general updates, etc"

* tag 'docs-6.16' of git://git.lwn.net/linux: (85 commits)
  Docs: doc-guide: update sphinx.rst Sphinx version number
  docs: doc-guide: clarify latest theme usage
  Documentation/scheduler: Fix typo in sched-stats domain field description
  scripts: kernel-doc: prevent a KeyError when checking output
  docs: kerneldoc.py: simplify exception handling logic
  MAINTAINERS: update linux-doc entry to cover new Python scripts
  docs: align with scripts/syscall.tbl migration
  Documentation: NTB: Fix typo
  Documentation: ioctl-number: Update table intro
  docs: conf.py: drop backward support for old Sphinx versions
  Docs: driver-api/basics: add kobject_event interfaces
  Docs: relay: editing cleanups
  docs: fix "incase" typo in coresight/panic.rst
  Fix spelling error for 'parallel'
  docs: admin-guide: fix typos in reporting-issues.rst
  docs: dmaengine: add explanation for DMA_ASYNC_TX capability
  Documentation: leds: improve readibility of multicolor doc
  docs: fix typo in firmware-related section
  docs: Makefile: Inherit PYTHONPYCACHEPREFIX setting as env variable
  Documentation: ioctl-number: Update outdated submission info
  ...
Linus Torvalds 2025-05-27 11:22:19 -07:00
commit 3e443d1673
54 changed files with 7335 additions and 2814 deletions

.gitignore vendored

@ -40,6 +40,7 @@
*.o
*.o.*
*.patch
*.pyc
*.rmeta
*.rpm
*.rsi

.pylintrc Normal file

@ -0,0 +1,2 @@
[MASTER]
init-hook='import sys; sys.path += ["scripts/lib/kdoc", "scripts/lib/abi"]'


@ -60,9 +60,8 @@ endif #HAVE_LATEXMK
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
KERNELDOC = $(srctree)/scripts/kernel-doc
KERNELDOC_CONF = -D kerneldoc_srctree=$(srctree) -D kerneldoc_bin=$(KERNELDOC)
ALLSPHINXOPTS = $(KERNELDOC_CONF) $(PAPEROPT_$(PAPER)) $(SPHINXOPTS)
ALLSPHINXOPTS = -D kerneldoc_srctree=$(srctree) -D kerneldoc_bin=$(KERNELDOC)
ALLSPHINXOPTS += $(PAPEROPT_$(PAPER)) $(SPHINXOPTS)
ifneq ($(wildcard $(srctree)/.config),)
ifeq ($(CONFIG_RUST),y)
# Let Sphinx know we will include rustdoc
@ -83,9 +82,11 @@ loop_cmd = $(echo-cmd) $(cmd_$(1)) || exit;
# $5 reST source folder relative to $(src),
# e.g. "userspace-api/media" for the linux-tv book-set at ./Documentation/userspace-api/media
PYTHONPYCACHEPREFIX ?= $(abspath $(BUILDDIR)/__pycache__)
quiet_cmd_sphinx = SPHINX $@ --> file://$(abspath $(BUILDDIR)/$3/$4)
cmd_sphinx = $(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) $(build)=Documentation/userspace-api/media $2 && \
PYTHONDONTWRITEBYTECODE=1 \
PYTHONPYCACHEPREFIX="$(PYTHONPYCACHEPREFIX)" \
BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath $(src)/$5/$(SPHINX_CONF)) \
$(PYTHON3) $(srctree)/scripts/jobserver-exec \
$(CONFIG_SHELL) $(srctree)/Documentation/sphinx/parallel-wrapper.sh \
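The Makefile hunk above forwards PYTHONPYCACHEPREFIX into the Sphinx build so bytecode caches land under the build directory instead of littering the source tree with __pycache__ folders. The underlying Python behavior (3.8+) can be checked standalone; this is just an illustrative sketch, not part of the kernel build:

```python
# Sanity-check that PYTHONPYCACHEPREFIX redirects bytecode caches
# (the behavior the Documentation Makefile relies on).
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    cache = os.path.join(tmp, "cache")
    env = dict(os.environ, PYTHONPYCACHEPREFIX=cache)
    # Compile the module in a child interpreter so the env var takes effect.
    subprocess.run([sys.executable, "-m", "compileall", "-q", src],
                   env=env, check=True)
    prefix_created = os.path.isdir(cache)                 # cache under prefix
    local_pycache = os.path.exists(os.path.join(tmp, "__pycache__"))

print(prefix_created, local_pycache)
```

With the prefix set, the compiled .pyc files mirror the source path under the prefix directory and no sibling __pycache__ directory is created.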


@ -1,17 +1,17 @@
===========================
Namespaces research control
===========================
====================================
User namespaces and resource control
====================================
There are a lot of kinds of objects in the kernel that don't have
individual limits or that have limits that are ineffective when a set
of processes is allowed to switch user ids. With user namespaces
enabled in a kernel for people who don't trust their users or their
users programs to play nice this problems becomes more acute.
The kernel contains many kinds of objects that either don't have
individual limits or that have limits which are ineffective when
a set of processes is allowed to switch their UID. On a system
where the admins don't trust their users or their users' programs,
user namespaces expose the system to potential misuse of resources.
Therefore it is recommended that memory control groups be enabled in
kernels that enable user namespaces, and it is further recommended
that userspace configure memory control groups to limit how much
memory user's they don't trust to play nice can use.
In order to mitigate this, we recommend that admins enable memory
control groups on any system that enables user namespaces.
Furthermore, we recommend that admins configure the memory control
groups to limit the maximum memory usable by any untrusted user.
Memory control groups can be configured by installing the libcgroup
package present on most distros editing /etc/cgrules.conf,


@ -231,7 +231,7 @@ are the following:
present).
The existence of the limit may be a result of some (often unintentional)
BIOS settings, restrictions coming from a service processor or another
BIOS settings, restrictions coming from a service processor or other
BIOS/HW-based mechanisms.
This does not cover ACPI thermal limitations which can be discovered
@ -258,8 +258,8 @@ are the following:
extension on ARM). If one cannot be determined, this attribute should
not be present.
Note, that failed attempt to retrieve current frequency for a given
CPU(s) will result in an appropriate error, i.e: EAGAIN for CPU that
Note that failed attempt to retrieve current frequency for a given
CPU(s) will result in an appropriate error, i.e.: EAGAIN for CPU that
remains idle (raised on ARM).
``cpuinfo_max_freq``
@ -499,7 +499,7 @@ This governor exposes the following tunables:
represented by it to be 1.5 times as high as the transition latency
(the default)::
# echo `$(($(cat cpuinfo_transition_latency) * 3 / 2)) > ondemand/sampling_rate
# echo `$(($(cat cpuinfo_transition_latency) * 3 / 2))` > ondemand/sampling_rate
``up_threshold``
If the estimated CPU load is above this value (in percent), the governor


@ -347,7 +347,7 @@ again.
[:ref:`details<uninstall>`]
.. _submit_improvements:
.. _submit_improvements_qbtl:
Did you run into trouble following any of the above steps that is not cleared up
by the reference section below? Or do you have ideas how to improve the text?
@ -1070,7 +1070,7 @@ complicated, and harder to follow.
That being said: this of course is a balancing act. Hence, if you think an
additional use-case is worth describing, suggest it to the maintainers of this
document, as :ref:`described above <submit_improvements>`.
document, as :ref:`described above <submit_improvements_qbtl>`.
..


@ -41,7 +41,7 @@ If you are facing multiple issues with the Linux kernel at once, report each
separately. While writing your report, include all information relevant to the
issue, like the kernel and the distro used. In case of a regression, CC the
regressions mailing list (regressions@lists.linux.dev) to your report. Also try
to pin-point the culprit with a bisection; if you succeed, include its
to pinpoint the culprit with a bisection; if you succeed, include its
commit-id and CC everyone in the sign-off-by chain.
Once the report is out, answer any questions that come up and help where you
@ -206,7 +206,7 @@ Reporting issues only occurring in older kernel version lines
This subsection is for you, if you tried the latest mainline kernel as outlined
above, but failed to reproduce your issue there; at the same time you want to
see the issue fixed in a still supported stable or longterm series or vendor
kernels regularly rebased on those. If that the case, follow these steps:
kernels regularly rebased on those. If that is the case, follow these steps:
* Prepare yourself for the possibility that going through the next few steps
might not get the issue solved in older releases: the fix might be too big
@ -312,7 +312,7 @@ small modifications to a kernel based on a recent Linux version; that for
example often holds true for the mainline kernels shipped by Debian GNU/Linux
Sid or Fedora Rawhide. Some developers will also accept reports about issues
with kernels from distributions shipping the latest stable kernel, as long as
its only slightly modified; that for example is often the case for Arch Linux,
it's only slightly modified; that for example is often the case for Arch Linux,
regular Fedora releases, and openSUSE Tumbleweed. But keep in mind, you better
want to use a mainline Linux and avoid using a stable kernel for this
process, as outlined in the section 'Install a fresh kernel for testing' in more


@ -267,7 +267,7 @@ culprit might be known already. For further details on what actually qualifies
as a regression check out Documentation/admin-guide/reporting-regressions.rst.
If you run into any problems while following this guide or have ideas how to
improve it, :ref:`please let the kernel developers know <submit_improvements>`.
improve it, :ref:`please let the kernel developers know <submit_improvements_vbbr>`.
.. _introprep_bissbs:
@ -1055,7 +1055,7 @@ follow these instructions.
[:ref:`details <introoptional_bisref>`]
.. _submit_improvements:
.. _submit_improvements_vbbr:
Conclusion
----------


@ -130,7 +130,7 @@ instructions. Clang 5 supports them as well.
=================== ===========================
_readfsbase_u64() Read the FS base register
_readfsbase_u64() Read the GS base register
_readgsbase_u64() Read the GS base register
_writefsbase_u64() Write the FS base register
_writegsbase_u64() Write the GS base register
=================== ===========================


@ -28,16 +28,6 @@ def have_command(cmd):
"""
return shutil.which(cmd) is not None
# Get Sphinx version
major, minor, patch = sphinx.version_info[:3]
#
# Warn about older versions that we don't want to support for much
# longer.
#
if (major < 2) or (major == 2 and minor < 4):
print('WARNING: support for Sphinx < 2.4 will be removed soon.')
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
@ -57,11 +47,9 @@ extensions = ['kerneldoc', 'rstFlatTable', 'kernel_include',
'maintainers_include', 'sphinx.ext.autosectionlabel',
'kernel_abi', 'kernel_feat', 'translations']
if major >= 3:
if (major > 3) or (minor > 0 or patch >= 2):
# Sphinx c function parser is more pedantic with regards to type
# checking. Due to that, having macros at c:function cause problems.
# Those needed to be scaped by using c_id_attributes[] array
# Since Sphinx version 3, the C function parser is more pedantic with regards
# to type checking. Due to that, having macros at c:function cause problems.
# Those needed to be escaped by using c_id_attributes[] array
c_id_attributes = [
# GCC Compiler types not parsed by Sphinx:
"__restrict__",
@ -125,9 +113,6 @@ if major >= 3:
"__bpf_kfunc",
]
else:
extensions.append('cdomain')
# Ensure that autosectionlabel will produce unique names
autosectionlabel_prefix_document = True
autosectionlabel_maxdepth = 2
@ -149,10 +134,6 @@ if 'SPHINX_IMGMATH' in os.environ:
else:
sys.stderr.write("Unknown env SPHINX_IMGMATH=%s ignored.\n" % env_sphinx_imgmath)
# Always load imgmath for Sphinx <1.8 or for epub docs
load_imgmath = (load_imgmath or (major == 1 and minor < 8)
or 'epub' in sys.argv)
if load_imgmath:
extensions.append("sphinx.ext.imgmath")
math_renderer = 'imgmath'
@ -322,14 +303,6 @@ if "DOCS_CSS" in os.environ:
for l in css:
html_css_files.append(l)
if major <= 1 and minor < 8:
html_context = {
'css_files': [],
}
for l in html_css_files:
html_context['css_files'].append('_static/' + l)
if html_theme == 'alabaster':
html_theme_options = {
'description': get_cline_version(),
@ -409,11 +382,6 @@ latex_elements = {
''',
}
# Fix reference escape troubles with Sphinx 1.4.x
if major == 1:
latex_elements['preamble'] += '\\renewcommand*{\\DUrole}[2]{ #2 }\n'
# Load kerneldoc specific LaTeX settings
latex_elements['preamble'] += '''
% Load kerneldoc specific LaTeX settings
@ -540,7 +508,7 @@ pdf_documents = [
# kernel-doc extension configuration for running Sphinx directly (e.g. by Read
# the Docs). In a normal build, these are supplied from the Makefile via command
# line arguments.
kerneldoc_bin = '../scripts/kernel-doc'
kerneldoc_bin = '../scripts/kernel-doc.py'
kerneldoc_srctree = '..'
# ------------------------------------------------------------------------------


@ -28,7 +28,7 @@ Sphinx Install
==============
The ReST markups currently used by the Documentation/ files are meant to be
built with ``Sphinx`` version 2.4.4 or higher.
built with ``Sphinx`` version 3.4.3 or higher.
There's a script that checks for the Sphinx requirements. Please see
:ref:`sphinx-pre-install` for further details.
@ -42,12 +42,6 @@ with your distributions. In order to do so, it is recommended to install
Sphinx inside a virtual environment, using ``virtualenv-3``
or ``virtualenv``, depending on how your distribution packaged Python 3.
.. note::
#) It is recommended to use the RTD theme for html output. Depending
on the Sphinx version, it should be installed separately,
with ``pip install sphinx_rtd_theme``.
In summary, if you want to install the latest version of Sphinx, you
should do::
@ -162,6 +156,12 @@ By default, the "Alabaster" theme is used to build the HTML documentation;
this theme is bundled with Sphinx and need not be installed separately.
The Sphinx theme can be overridden by using the ``DOCS_THEME`` make variable.
.. note::
Some people might prefer to use the RTD theme for html output.
Depending on the Sphinx version, it should be installed separately,
with ``pip install sphinx_rtd_theme``.
There is another make variable ``SPHINXDIRS``, which is useful when test
building a subset of documentation. For example, you can build documents
under ``Documentation/doc-guide`` by running


@ -108,6 +108,9 @@ Kernel objects manipulation
.. kernel-doc:: lib/kobject.c
:export:
.. kernel-doc:: lib/kobject_uevent.c
:export:
Kernel utility functions
------------------------


@ -217,10 +217,12 @@ Currently, the types available are:
- DMA_ASYNC_TX
- Must not be set by the device, and will be set by the framework
if needed
- The device supports asynchronous memory-to-memory operations,
including memcpy, memset, xor, pq, xor_val, and pq_val.
- TODO: What is it about?
- This capability is automatically set by the DMA engine
framework and must not be configured manually by device
drivers.
- DMA_SLAVE


@ -35,7 +35,7 @@ anyone who has written a pci driver.
NTB Typical client driver implementation
----------------------------------------
Primary purpose of NTB is to share some peace of memory between at least two
Primary purpose of NTB is to share some piece of memory between at least two
systems. So the NTB device features like Scratchpad/Message registers are
mainly used to perform the proper memory window initialization. Typically
there are two types of memory window interfaces supported by the NTB API:


@ -161,6 +161,7 @@ rely on 64bit DMA to eliminate another kind of bounce buffer.
.. kernel-doc:: drivers/usb/core/urb.c
:export:
.. c:namespace:: usb_core
.. kernel-doc:: drivers/usb/core/message.c
:export:


@ -32,7 +32,7 @@ functions in the relay interface code - please see that for details.
Semantics
=========
Each relay channel has one buffer per CPU, each buffer has one or more
Each relay channel has one buffer per CPU; each buffer has one or more
sub-buffers. Messages are written to the first sub-buffer until it is
too full to contain a new message, in which case it is written to
the next (if available). Messages are never split across sub-buffers.
@ -40,7 +40,7 @@ At this point, userspace can be notified so it empties the first
sub-buffer, while the kernel continues writing to the next.
When notified that a sub-buffer is full, the kernel knows how many
bytes of it are padding i.e. unused space occurring because a complete
bytes of it are padding, i.e., unused space occurring because a complete
message couldn't fit into a sub-buffer. Userspace can use this
knowledge to copy only valid data.
@ -71,7 +71,7 @@ klog and relay-apps example code
================================
The relay interface itself is ready to use, but to make things easier,
a couple simple utility functions and a set of examples are provided.
a couple of simple utility functions and a set of examples are provided.
The relay-apps example tarball, available on the relay sourceforge
site, contains a set of self-contained examples, each consisting of a
@ -91,7 +91,7 @@ registered will data actually be logged (see the klog and kleak
examples for details).
It is of course possible to use the relay interface from scratch,
i.e. without using any of the relay-apps example code or klog, but
i.e., without using any of the relay-apps example code or klog, but
you'll have to implement communication between userspace and kernel,
allowing both to convey the state of buffers (full, empty, amount of
padding). The read() interface both removes padding and internally
@ -119,7 +119,7 @@ mmap() results in channel buffer being mapped into the caller's
must map the entire file, which is NRBUF * SUBBUFSIZE.
read() read the contents of a channel buffer. The bytes read are
'consumed' by the reader, i.e. they won't be available
'consumed' by the reader, i.e., they won't be available
again to subsequent reads. If the channel is being used
in no-overwrite mode (the default), it can be read at any
time even if there's an active kernel writer. If the
@ -138,7 +138,7 @@ poll() POLLIN/POLLRDNORM/POLLERR supported. User applications are
notified when sub-buffer boundaries are crossed.
close() decrements the channel buffer's refcount. When the refcount
reaches 0, i.e. when no process or kernel client has the
reaches 0, i.e., when no process or kernel client has the
buffer open, the channel buffer is freed.
=========== ============================================================
@ -149,7 +149,7 @@ host filesystem must be mounted. For example::
.. Note::
the host filesystem doesn't need to be mounted for kernel
The host filesystem doesn't need to be mounted for kernel
clients to create or use channels - it only needs to be
mounted when user space applications need access to the buffer
data.
@ -325,7 +325,7 @@ section, as it pertains mainly to mmap() implementations.
In 'overwrite' mode, also known as 'flight recorder' mode, writes
continuously cycle around the buffer and will never fail, but will
unconditionally overwrite old data regardless of whether it's actually
been consumed. In no-overwrite mode, writes will fail, i.e. data will
been consumed. In no-overwrite mode, writes will fail, i.e., data will
be lost, if the number of unconsumed sub-buffers equals the total
number of sub-buffers in the channel. It should be clear that if
there is no consumer or if the consumer can't consume sub-buffers fast
@ -344,7 +344,7 @@ initialize the next sub-buffer if appropriate 2) finalize the previous
sub-buffer if appropriate and 3) return a boolean value indicating
whether or not to actually move on to the next sub-buffer.
To implement 'no-overwrite' mode, the userspace client would provide
To implement 'no-overwrite' mode, the userspace client provides
an implementation of the subbuf_start() callback something like the
following::
@ -364,9 +364,9 @@ following::
return 1;
}
If the current buffer is full, i.e. all sub-buffers remain unconsumed,
If the current buffer is full, i.e., all sub-buffers remain unconsumed,
the callback returns 0 to indicate that the buffer switch should not
occur yet, i.e. until the consumer has had a chance to read the
occur yet, i.e., until the consumer has had a chance to read the
current set of ready sub-buffers. For the relay_buf_full() function
to make sense, the consumer is responsible for notifying the relay
interface when sub-buffers have been consumed via
@ -400,7 +400,7 @@ consulted.
The default subbuf_start() implementation, used if the client doesn't
define any callbacks, or doesn't define the subbuf_start() callback,
implements the simplest possible 'no-overwrite' mode, i.e. it does
implements the simplest possible 'no-overwrite' mode, i.e., it does
nothing but return 0.
Header information can be reserved at the beginning of each sub-buffer
@ -467,7 +467,7 @@ rather than open and close a new channel for each use. relay_reset()
can be used for this purpose - it resets a channel to its initial
state without reallocating channel buffer memory or destroying
existing mappings. It should however only be called when it's safe to
do so, i.e. when the channel isn't currently being written to.
do so, i.e., when the channel isn't currently being written to.
Finally, there are a couple of utility callbacks that can be used for
different purposes. buf_mapped() is called whenever a channel buffer


@ -26,7 +26,7 @@ i915 with the DRM scheduler is:
which configures a slot with N contexts
* After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
a slot in a single execbuf IOCTL and the batches run on the GPU in
paralllel
parallel
* Initially only for GuC submission but execlists can be supported if
needed
* Convert the i915 to use the DRM scheduler


@ -182,7 +182,7 @@ value and use PIO write (by setting SubIP write opcode) to do a write operation.
THC also includes two GPIO pins, one for interrupt and the other for device reset control.
Interrupt line can be configured to either level triggerred or edge triggerred by setting MMIO
Interrupt line can be configured to either level triggered or edge triggered by setting MMIO
Control register.
Reset line is controlled by BIOS (or EFI) through ACPI _RST method, driver needs to call this
@ -302,10 +302,10 @@ waiting for interrupt ready then read out the data from system memory.
3.3.2 Software DMA channel
~~~~~~~~~~~~~~~~~~~~~~~~~~
THC supports a software triggerred RxDMA mode to read the touch data from touch IC. This SW RxDMA
THC supports a software triggered RxDMA mode to read the touch data from touch IC. This SW RxDMA
is the 3rd THC RxDMA engine with the similar functionalities as the existing two RxDMAs, the only
difference is this SW RxDMA is triggerred by software, and RxDMA2 is triggerred by external Touch IC
interrupt. It gives a flexiblity to software driver to use RxDMA read Touch IC data in any time.
difference is this SW RxDMA is triggered by software, and RxDMA2 is triggered by external Touch IC
interrupt. It gives a flexibility to software driver to use RxDMA read Touch IC data in any time.
Before software starts a SW RxDMA, it shall stop the 1st and 2nd RxDMA, clear PRD read/write pointer
and quiesce the device interrupt (THC_DEVINT_QUIESCE_HW_STS = 1), other operations are the same with


@ -84,7 +84,7 @@ which are kept separately from the kernel's own documentation.
Firmware-related documentation
==============================
The following holds information on the kernel's expectations regarding the
platform firmwares.
platform firmware.
.. toctree::
:maxdepth: 1


@ -18,42 +18,48 @@ array. These files are children under the LED parent node created by the
led_class framework. The led_class framework is documented in led-class.rst
within this documentation directory.
Each colored LED will be indexed under the multi_* files. The order of the
colors will be arbitrary. The multi_index file can be read to determine the
Each colored LED will be indexed under the ``multi_*`` files. The order of the
colors will be arbitrary. The ``multi_index`` file can be read to determine the
color name to indexed value.
The multi_index file is an array that contains the string list of the colors as
they are defined in each multi_* array file.
The ``multi_index`` file is an array that contains the string list of the colors as
they are defined in each ``multi_*`` array file.
The multi_intensity is an array that can be read or written to for the
The ``multi_intensity`` is an array that can be read or written to for the
individual color intensities. All elements within this array must be written in
order for the color LED intensities to be updated.
Directory Layout Example
========================
.. code-block:: console
root:/sys/class/leds/multicolor:status# ls -lR
-rw-r--r-- 1 root root 4096 Oct 19 16:16 brightness
-r--r--r-- 1 root root 4096 Oct 19 16:16 max_brightness
-r--r--r-- 1 root root 4096 Oct 19 16:16 multi_index
-rw-r--r-- 1 root root 4096 Oct 19 16:16 multi_intensity
..
Multicolor Class Brightness Control
===================================
The brightness level for each LED is calculated based on the color LED
intensity setting divided by the global max_brightness setting multiplied by
the requested brightness.
led_brightness = brightness * multi_intensity/max_brightness
``led_brightness = brightness * multi_intensity/max_brightness``
Example:
A user first writes the multi_intensity file with the brightness levels
for each LED that are necessary to achieve a certain color output from a
multicolor LED group.
cat /sys/class/leds/multicolor:status/multi_index
.. code-block:: console
# cat /sys/class/leds/multicolor:status/multi_index
green blue red
echo 43 226 138 > /sys/class/leds/multicolor:status/multi_intensity
# echo 43 226 138 > /sys/class/leds/multicolor:status/multi_intensity
red -
intensity = 138
@ -65,22 +71,36 @@ blue -
intensity = 226
max_brightness = 255
..
The user can control the brightness of that multicolor LED group by writing the
global 'brightness' control. Assuming a max_brightness of 255 the user
may want to dim the LED color group to half. The user would write a value of
128 to the global brightness file then the values written to each LED will be
adjusted base on this value.
cat /sys/class/leds/multicolor:status/max_brightness
.. code-block:: console
# cat /sys/class/leds/multicolor:status/max_brightness
255
echo 128 > /sys/class/leds/multicolor:status/brightness
# echo 128 > /sys/class/leds/multicolor:status/brightness
..
.. code-block:: none
adjusted_red_value = 128 * 138/255 = 69
adjusted_green_value = 128 * 43/255 = 21
adjusted_blue_value = 128 * 226/255 = 113
..
Reading the global brightness file will return the current brightness value of
the color LED group.
cat /sys/class/leds/multicolor:status/brightness
.. code-block:: console
# cat /sys/class/leds/multicolor:status/brightness
128
..
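The brightness scaling described in the multicolor document can be checked numerically. This sketch simply applies the document's own formula (led_brightness = brightness * multi_intensity / max_brightness, with integer truncation assumed) to the example's intensity values:

```python
def adjusted(brightness, multi_intensity, max_brightness=255):
    """Per-color brightness: brightness * multi_intensity / max_brightness,
    truncated to an integer as in the worked example."""
    return brightness * multi_intensity // max_brightness

# Intensities written for red, green, blue in the example,
# then dimmed to a global brightness of 128:
values = [adjusted(128, i) for i in (138, 43, 226)]
print(values)  # [69, 21, 113]
```

The results match the adjusted red/green/blue values given in the example.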


@ -251,12 +251,12 @@ there is no prospect of a migration to version 3 of the GPL in the
foreseeable future.
It is imperative that all code contributed to the kernel be legitimately
free software. For that reason, code from anonymous (or pseudonymous)
contributors will not be accepted. All contributors are required to "sign
off" on their code, stating that the code can be distributed with the
kernel under the GPL. Code which has not been licensed as free software by
its owner, or which risks creating copyright-related problems for the
kernel (such as code which derives from reverse-engineering efforts lacking
free software. For that reason, code from contributors without a known
identity or anonymous contributors will not be accepted. All contributors are
required to "sign off" on their code, stating that the code can be distributed
with the kernel under the GPL. Code which has not been licensed as free
software by its owner, or which risks creating copyright-related problems for
the kernel (such as code which derives from reverse-engineering efforts lacking
proper safeguards) cannot be contributed.
Questions about copyright-related issues are common on Linux development


@ -248,6 +248,52 @@ To summarize, you need a commit that includes:
- fallback stub in ``kernel/sys_ni.c``
.. _syscall_generic_6_11:
Since 6.11
~~~~~~~~~~
Starting with kernel version 6.11, general system call implementation for the
following architectures no longer requires modifications to
``include/uapi/asm-generic/unistd.h``:
- arc
- arm64
- csky
- hexagon
- loongarch
- nios2
- openrisc
- riscv
Instead, you need to update ``scripts/syscall.tbl`` and, if applicable, adjust
``arch/*/kernel/Makefile.syscalls``.
As ``scripts/syscall.tbl`` serves as a common syscall table across multiple
architectures, a new entry is required in this table::
468 common xyzzy sys_xyzzy
Note that adding an entry to ``scripts/syscall.tbl`` with the "common" ABI
also affects all architectures that share this table. For more limited or
architecture-specific changes, consider using an architecture-specific ABI or
defining a new one.
If a new ABI, say ``xyz``, is introduced, the corresponding updates should be
made to ``arch/*/kernel/Makefile.syscalls`` as well::
syscall_abis_{32,64} += xyz (...)
To summarize, you need a commit that includes:
- ``CONFIG`` option for the new function, normally in ``init/Kconfig``
- ``SYSCALL_DEFINEn(xyzzy, ...)`` for the entry point
- corresponding prototype in ``include/linux/syscalls.h``
- new entry in ``scripts/syscall.tbl``
- (if needed) Makefile updates in ``arch/*/kernel/Makefile.syscalls``
- fallback stub in ``kernel/sys_ni.c``
x86 System Call Implementation
------------------------------
@ -353,6 +399,41 @@ To summarize, you need:
``include/uapi/asm-generic/unistd.h``
Since 6.11
~~~~~~~~~~
This applies to all the architectures listed in :ref:`Since 6.11<syscall_generic_6_11>`
under "Generic System Call Implementation", except arm64. See
:ref:`Compatibility System Calls (arm64)<compat_arm64>` for more information.
You need to extend the entry in ``scripts/syscall.tbl`` with an extra column
to indicate that a 32-bit userspace program running on a 64-bit kernel should
hit the compat entry point::
468 common xyzzy sys_xyzzy compat_sys_xyzzy
To summarize, you need:
- ``COMPAT_SYSCALL_DEFINEn(xyzzy, ...)`` for the compat entry point
- corresponding prototype in ``include/linux/compat.h``
- modification of the entry in ``scripts/syscall.tbl`` to include an extra
"compat" column
- (if needed) 32-bit mapping struct in ``include/linux/compat.h``
.. _compat_arm64:
Compatibility System Calls (arm64)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
On arm64, there is a dedicated syscall table for compatibility system calls
targeting 32-bit (AArch32) userspace: ``arch/arm64/tools/syscall_32.tbl``.
You need to add an additional line to this table specifying the compat
entry point::
468 common xyzzy sys_xyzzy compat_sys_xyzzy
Compatibility System Calls (x86)
--------------------------------
@@ -575,3 +656,6 @@ References and Sources
- Recommendation from Linus Torvalds that x32 system calls should prefer
compatibility with 64-bit versions rather than 32-bit versions:
https://lore.kernel.org/r/CA+55aFxfmwfB7jbbrXxa=K7VBYPfAvmu3XOkGrLbB1UFjX1+Ew@mail.gmail.com
- Patch series revising system call table infrastructure to use
scripts/syscall.tbl across multiple architectures:
https://lore.kernel.org/lkml/20240704143611.2979589-1-arnd@kernel.org


@@ -135,7 +135,7 @@ of idleness (busy, idle and newly idle):
cpu was idle but no busier group was found
23) # of times in this domain sched_balance_rq() was called when the
was just becoming idle
cpu was just becoming idle
24) # of times in this domain sched_balance_rq() checked but found the
load did not require balancing when the cpu was just becoming idle
25) # of times in this domain sched_balance_rq() tried to move one or more


@@ -128,13 +128,8 @@ def note_failure(target):
# own C role, but both match the same regex, so we try both.
#
def markup_func_ref_sphinx3(docname, app, match):
cdom = app.env.domains['c']
#
# Go through the dance of getting an xref out of the C domain
#
base_target = match.group(2)
target_text = nodes.Text(match.group(0))
xref = None
possible_targets = [base_target]
# Check if this document has a namespace, and if so, try
# cross-referencing inside it first.
@@ -146,22 +141,8 @@ def markup_func_ref_sphinx3(docname, app, match):
if (target not in Skipfuncs) and not failure_seen(target):
lit_text = nodes.literal(classes=['xref', 'c', 'c-func'])
lit_text += target_text
pxref = addnodes.pending_xref('', refdomain = 'c',
reftype = 'function',
reftarget = target,
modname = None,
classname = None)
#
# XXX The Latex builder will throw NoUri exceptions here,
# work around that by ignoring them.
#
try:
xref = cdom.resolve_xref(app.env, docname, app.builder,
'function', target, pxref,
lit_text)
except NoUri:
xref = None
xref = add_and_resolve_xref(app, docname, 'c', 'function',
target, contnode=lit_text)
if xref:
return xref
note_failure(target)
@@ -188,13 +169,8 @@ def markup_c_ref(docname, app, match):
RE_typedef: 'type',
}
cdom = app.env.domains['c']
#
# Go through the dance of getting an xref out of the C domain
#
base_target = match.group(2)
target_text = nodes.Text(match.group(0))
xref = None
possible_targets = [base_target]
# Check if this document has a namespace, and if so, try
# cross-referencing inside it first.
@@ -206,21 +182,9 @@ def markup_c_ref(docname, app, match):
if not (match.re == RE_function and target in Skipfuncs):
lit_text = nodes.literal(classes=['xref', 'c', class_str[match.re]])
lit_text += target_text
pxref = addnodes.pending_xref('', refdomain = 'c',
reftype = reftype_str[match.re],
reftarget = target, modname = None,
classname = None)
#
# XXX The Latex builder will throw NoUri exceptions here,
# work around that by ignoring them.
#
try:
xref = cdom.resolve_xref(app.env, docname, app.builder,
reftype_str[match.re], target, pxref,
lit_text)
except NoUri:
xref = None
xref = add_and_resolve_xref(app, docname, 'c',
reftype_str[match.re], target,
contnode=lit_text)
if xref:
return xref
@@ -231,30 +195,12 @@ def markup_c_ref(docname, app, match):
# cross reference to that page
#
def markup_doc_ref(docname, app, match):
stddom = app.env.domains['std']
#
# Go through the dance of getting an xref out of the std domain
#
absolute = match.group(1)
target = match.group(2)
if absolute:
target = "/" + target
xref = None
pxref = addnodes.pending_xref('', refdomain = 'std', reftype = 'doc',
reftarget = target, modname = None,
classname = None, refexplicit = False)
#
# XXX The Latex builder will throw NoUri exceptions here,
# work around that by ignoring them.
#
try:
xref = stddom.resolve_xref(app.env, docname, app.builder, 'doc',
target, pxref, None)
except NoUri:
xref = None
#
# Return the xref if we got it; otherwise just return the plain text.
#
xref = add_and_resolve_xref(app, docname, 'std', 'doc', target)
if xref:
return xref
else:
@@ -265,10 +211,6 @@ def markup_doc_ref(docname, app, match):
# with a cross reference to that page
#
def markup_abi_ref(docname, app, match, warning=False):
stddom = app.env.domains['std']
#
# Go through the dance of getting an xref out of the std domain
#
kernel_abi = get_kernel_abi()
fname = match.group(1)
@@ -280,7 +222,18 @@ def markup_abi_ref(docname, app, match, warning=False):
kernel_abi.log.warning("%s not found", fname)
return nodes.Text(match.group(0))
pxref = addnodes.pending_xref('', refdomain = 'std', reftype = 'ref',
xref = add_and_resolve_xref(app, docname, 'std', 'ref', target)
if xref:
return xref
else:
return nodes.Text(match.group(0))
def add_and_resolve_xref(app, docname, domain, reftype, target, contnode=None):
#
# Go through the dance of getting an xref out of the corresponding domain
#
dom_obj = app.env.domains[domain]
pxref = addnodes.pending_xref('', refdomain = domain, reftype = reftype,
reftarget = target, modname = None,
classname = None, refexplicit = False)
@@ -289,17 +242,15 @@ def markup_abi_ref(docname, app, match, warning=False):
# work around that by ignoring them.
#
try:
xref = stddom.resolve_xref(app.env, docname, app.builder, 'ref',
target, pxref, None)
xref = dom_obj.resolve_xref(app.env, docname, app.builder, reftype,
target, pxref, contnode)
except NoUri:
xref = None
#
# Return the xref if we got it; otherwise just return the plain text.
#
if xref:
return xref
else:
return nodes.Text(match.group(0))
return None
#
# Variant of markup_abi_ref() that warns when a reference is not found
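The pattern that the new ``add_and_resolve_xref()`` helper consolidates can be sketched in isolation: attempt a cross-reference resolution, treat a resolver exception (such as Sphinx's ``NoUri``) as "no reference", and return ``None`` so callers can fall back to plain text. The classes and data below are stand-ins for this sketch, not the Sphinx API:

```python
# Simplified model of the consolidated xref-resolution helper.

class NoUri(Exception):
    """Stand-in for sphinx.errors.NoUri in this sketch."""

def resolve_or_none(resolver, target):
    # Swallow a failed resolution and signal it with None, as the
    # real helper does for the LaTeX builder's NoUri exceptions.
    try:
        return resolver(target)
    except NoUri:
        return None

known = {"sys_xyzzy": "api/syscalls.html#sys_xyzzy"}  # hypothetical targets

def resolver(target):
    if target not in known:
        raise NoUri(target)
    return known[target]

print(resolve_or_none(resolver, "sys_xyzzy"))
print(resolve_or_none(resolver, "missing"))
```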


@@ -40,8 +40,40 @@ from docutils.parsers.rst import directives, Directive
import sphinx
from sphinx.util.docutils import switch_source_input
from sphinx.util import logging
from pprint import pformat
srctree = os.path.abspath(os.environ["srctree"])
sys.path.insert(0, os.path.join(srctree, "scripts/lib/kdoc"))
from kdoc_files import KernelFiles
from kdoc_output import RestFormat
__version__ = '1.0'
kfiles = None
logger = logging.getLogger(__name__)
def cmd_str(cmd):
"""
Helper function to output a command line that can be used to produce
the same records via the command line. Helpful when debugging problems
in the script.
"""
cmd_line = ""
for w in cmd:
if w == "" or " " in w:
esc_cmd = "'" + w + "'"
else:
esc_cmd = w
if cmd_line:
cmd_line += " " + esc_cmd
continue
else:
cmd_line = esc_cmd
return cmd_line
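The quoting behavior ``cmd_str()`` implements can be checked standalone: words containing spaces (or empty words) are wrapped in single quotes before being joined. The function body is reproduced below so the sketch runs on its own:

```python
# Standalone copy of the cmd_str() quoting logic from the diff above.

def cmd_str(cmd):
    cmd_line = ""
    for w in cmd:
        # Quote empty words and words with embedded spaces.
        if w == "" or " " in w:
            esc_cmd = "'" + w + "'"
        else:
            esc_cmd = w
        if cmd_line:
            cmd_line += " " + esc_cmd
        else:
            cmd_line = esc_cmd
    return cmd_line

print(cmd_str(["kernel-doc", "-rst", "my file.c"]))
# → kernel-doc -rst 'my file.c'
```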
class KernelDocDirective(Directive):
"""Extract kernel-doc comments from the specified file"""
@@ -56,19 +88,48 @@ class KernelDocDirective(Directive):
'functions': directives.unchanged,
}
has_content = False
logger = logging.getLogger('kerneldoc')
verbose = 0
parse_args = {}
msg_args = {}
def handle_args(self):
def run(self):
env = self.state.document.settings.env
cmd = [env.config.kerneldoc_bin, '-rst', '-enable-lineno']
filename = env.config.kerneldoc_srctree + '/' + self.arguments[0]
# Arguments used by KernelFiles.parse() function
self.parse_args = {
"file_list": [filename],
"export_file": []
}
# Arguments used by KernelFiles.msg() function
self.msg_args = {
"enable_lineno": True,
"export": False,
"internal": False,
"symbol": [],
"nosymbol": [],
"no_doc_sections": False
}
export_file_patterns = []
verbose = os.environ.get("V")
if verbose:
try:
self.verbose = int(verbose)
except ValueError:
pass
# Tell sphinx of the dependency
env.note_dependency(os.path.abspath(filename))
tab_width = self.options.get('tab-width', self.state.document.settings.tab_width)
self.tab_width = self.options.get('tab-width',
self.state.document.settings.tab_width)
# 'function' is an alias of 'identifiers'
if 'functions' in self.options:
@@ -77,35 +138,66 @@ class KernelDocDirective(Directive):
# FIXME: make this nicer and more robust against errors
if 'export' in self.options:
cmd += ['-export']
self.msg_args["export"] = True
export_file_patterns = str(self.options.get('export')).split()
elif 'internal' in self.options:
cmd += ['-internal']
self.msg_args["internal"] = True
export_file_patterns = str(self.options.get('internal')).split()
elif 'doc' in self.options:
cmd += ['-function', str(self.options.get('doc'))]
func = str(self.options.get('doc'))
cmd += ['-function', func]
self.msg_args["symbol"].append(func)
elif 'identifiers' in self.options:
identifiers = self.options.get('identifiers').split()
if identifiers:
for i in identifiers:
i = i.rstrip("\\").strip()
if not i:
continue
cmd += ['-function', i]
self.msg_args["symbol"].append(i)
else:
cmd += ['-no-doc-sections']
self.msg_args["no_doc_sections"] = True
if 'no-identifiers' in self.options:
no_identifiers = self.options.get('no-identifiers').split()
if no_identifiers:
for i in no_identifiers:
i = i.rstrip("\\").strip()
if not i:
continue
cmd += ['-nosymbol', i]
self.msg_args["nosymbol"].append(i)
for pattern in export_file_patterns:
pattern = pattern.rstrip("\\").strip()
if not pattern:
continue
for f in glob.glob(env.config.kerneldoc_srctree + '/' + pattern):
env.note_dependency(os.path.abspath(f))
cmd += ['-export-file', f]
self.parse_args["export_file"].append(f)
# Export file is needed by both parse and msg, as kernel-doc
# cache exports.
self.msg_args["export_file"] = self.parse_args["export_file"]
cmd += [filename]
try:
self.logger.verbose("calling kernel-doc '%s'" % (" ".join(cmd)))
return cmd
def run_cmd(self, cmd):
"""
Execute an external kernel-doc command.
"""
env = self.state.document.settings.env
node = nodes.section()
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
@@ -115,13 +207,27 @@ class KernelDocDirective(Directive):
if p.returncode != 0:
sys.stderr.write(err)
self.logger.warning("kernel-doc '%s' failed with return code %d"
logger.warning("kernel-doc '%s' failed with return code %d"
% (" ".join(cmd), p.returncode))
return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
elif env.config.kerneldoc_verbosity > 0:
sys.stderr.write(err)
lines = statemachine.string2lines(out, tab_width, convert_whitespace=True)
filenames = self.parse_args["file_list"]
for filename in filenames:
self.parse_msg(filename, node, out, cmd)
return node.children
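The subprocess pattern ``run_cmd()`` uses is small enough to sketch on its own: start the command with stdout/stderr captured, wait for it, and warn on a nonzero return code. The sketch runs the Python interpreter itself so it is self-contained:

```python
# Minimal model of run_cmd()'s external-command invocation.
import subprocess
import sys

def run(cmd):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, text=True)
    out, err = p.communicate()
    if p.returncode != 0:
        # Mirror the diff's behavior: surface stderr and warn.
        sys.stderr.write(err)
        print("command failed with return code %d" % p.returncode)
    return out

out = run([sys.executable, "-c", "print('kernel-doc output')"])
print(out.strip())
```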
def parse_msg(self, filename, node, out, cmd):
"""
Handles a kernel-doc output for a given file
"""
env = self.state.document.settings.env
lines = statemachine.string2lines(out, self.tab_width,
convert_whitespace=True)
result = ViewList()
lineoffset = 0;
@@ -137,20 +243,61 @@ class KernelDocDirective(Directive):
result.append(line, doc + ": " + filename, lineoffset)
lineoffset += 1
node = nodes.section()
self.do_parse(result, node)
def run_kdoc(self, cmd, kfiles):
"""
Execute kernel-doc classes directly instead of running as a separate
command.
"""
env = self.state.document.settings.env
node = nodes.section()
kfiles.parse(**self.parse_args)
filenames = self.parse_args["file_list"]
for filename, out in kfiles.msg(**self.msg_args, filenames=filenames):
self.parse_msg(filename, node, out, cmd)
return node.children
def run(self):
global kfiles
cmd = self.handle_args()
if self.verbose >= 1:
logger.info(cmd_str(cmd))
try:
if kfiles:
return self.run_kdoc(cmd, kfiles)
else:
return self.run_cmd(cmd)
except Exception as e: # pylint: disable=W0703
self.logger.warning("kernel-doc '%s' processing failed with: %s" %
(" ".join(cmd), str(e)))
logger.warning("kernel-doc '%s' processing failed with: %s" %
(cmd_str(cmd), pformat(e)))
return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
def do_parse(self, result, node):
with switch_source_input(self.state, result):
self.state.nested_parse(result, 0, node, match_titles=1)
def setup_kfiles(app):
global kfiles
kerneldoc_bin = app.env.config.kerneldoc_bin
if kerneldoc_bin and kerneldoc_bin.endswith("kernel-doc.py"):
print("Using Python kernel-doc")
out_style = RestFormat()
kfiles = KernelFiles(out_style=out_style, logger=logger)
else:
print(f"Using {kerneldoc_bin}")
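The backend selection ``setup_kfiles()`` performs reduces to a suffix check on the configured kernel-doc binary: the Python implementation is driven in-process, anything else (such as the legacy ``kernel-doc.pl``) runs as an external command. ``choose_backend`` below is a hypothetical reduction of that check for illustration:

```python
# Simplified model of the dispatch in setup_kfiles(): use the
# in-process Python classes only for kernel-doc.py.

def choose_backend(kerneldoc_bin):
    if kerneldoc_bin and kerneldoc_bin.endswith("kernel-doc.py"):
        return "python"
    return "external"

print(choose_backend("scripts/kernel-doc.py"))
print(choose_backend("scripts/kernel-doc.pl"))
```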
def setup(app):
app.add_config_value('kerneldoc_bin', None, 'env')
app.add_config_value('kerneldoc_srctree', None, 'env')
@@ -158,6 +305,8 @@ def setup(app):
app.add_directive('kernel-doc', KernelDocDirective)
app.connect('builder-inited', setup_kfiles)
return dict(
version = __version__,
parallel_read_safe = True,


@@ -63,7 +63,6 @@ of an out-of-bounds address, while the second call will influence
microarchitectural state dependent on this value. This may provide an
arbitrary read primitive.
====================================
Mitigating speculation side-channels
====================================


@@ -6,5 +6,13 @@ debugging of operating system timer latency.
The *timerlat* tracer outputs information in two ways. It periodically
prints the timer latency at the timer *IRQ* handler and the *Thread*
handler. It also enable the trace of the most relevant information via
handler. It also enables the trace of the most relevant information via
**osnoise:** tracepoints.
The **rtla timerlat** tool sets the options of the *timerlat* tracer
and collects and displays a summary of the results. By default,
the collection is done synchronously in kernel space using a dedicated
BPF program attached to the *timerlat* tracer. If either BPF or
the **osnoise:timerlat_sample** tracepoint it attaches to is
unavailable, the **rtla timerlat** tool falls back to using tracefs to
process the data asynchronously in user space.


@@ -16,13 +16,10 @@ DESCRIPTION
.. include:: common_timerlat_description.rst
The *timerlat* tracer outputs information in two ways. It periodically
prints the timer latency at the timer *IRQ* handler and the *Thread* handler.
It also provides information for each noise via the **osnoise:** tracepoints.
The **rtla timerlat top** mode displays a summary of the periodic output
from the *timerlat* tracer. The **rtla hist hist** mode displays a histogram
of each tracer event occurrence. For further details, please refer to the
respective man page.
from the *timerlat* tracer. The **rtla timerlat hist** mode displays
a histogram of each tracer event occurrence. For further details, please
refer to the respective man page.
MODES
=====


@@ -68,7 +68,7 @@ or from crashdump kernel using a special device file /dev/crash_tmc_xxx.
This device file is created only when there is a valid crashdata available.
General flow of trace capture and decode in case of kernel panic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Enable source and sink on all the cores using the sysfs interface.
ETR sinks should have trace buffers allocated from reserved memory,
by selecting "resrv" buffer mode from sysfs.


@@ -1,39 +1,103 @@
==========================
Linux Tracing Technologies
==========================
================================
Linux Tracing Technologies Guide
================================
Tracing in the Linux kernel is a powerful mechanism that allows
developers and system administrators to analyze and debug system
behavior. This guide provides documentation on various tracing
frameworks and tools available in the Linux kernel.
Introduction to Tracing
-----------------------
This section provides an overview of Linux tracing mechanisms
and debugging approaches.
.. toctree::
:maxdepth: 2
:maxdepth: 1
ftrace-design
debugging
tracepoints
tracepoint-analysis
ring-buffer-map
Core Tracing Frameworks
-----------------------
The following are the primary tracing frameworks integrated into
the Linux kernel.
.. toctree::
:maxdepth: 1
ftrace
ftrace-design
ftrace-uses
fprobe
kprobes
kprobetrace
uprobetracer
fprobetrace
tracepoints
fprobe
ring-buffer-design
Event Tracing and Analysis
--------------------------
A detailed explanation of event tracing mechanisms and their
applications.
.. toctree::
:maxdepth: 1
events
events-kmem
events-power
events-nmi
events-msr
mmiotrace
boottime-trace
histogram
histogram-design
boottime-trace
debugging
hwlat_detector
osnoise-tracer
timerlat-tracer
Hardware and Performance Tracing
--------------------------------
This section covers tracing features that monitor hardware
interactions and system performance.
.. toctree::
:maxdepth: 1
intel_th
ring-buffer-design
ring-buffer-map
stm
sys-t
coresight/index
user_events
rv/index
hisi-ptt
mmiotrace
hwlat_detector
osnoise-tracer
timerlat-tracer
User-Space Tracing
------------------
These tools allow tracing user-space applications and
interactions.
.. toctree::
:maxdepth: 1
user_events
uprobetracer
Additional Resources
--------------------
For more details, refer to the respective documentation of each
tracing tool and framework.
.. only:: subproject and html
Indices
=======
* :ref:`genindex`


@@ -428,13 +428,14 @@ los desarrolladores, que corren el riesgo de quedar enterrados bajo una
carga de correo electrónico, incumplir las convenciones utilizadas en las
listas de Linux, o ambas cosas.
La mayoría de las listas de correo del kernel se ejecutan en
vger.kernel.org; la lista principal se puede encontrar en:
La mayoría de las listas de correo del kernel se alojan en kernel.org; la
lista principal se puede encontrar en:
http://vger.kernel.org/vger-lists.html
https://subspace.kernel.org
Sim embargo, hay listas alojadas en otros lugares; varios de ellos se
encuentran en redhat.com/mailman/listinfo.
Sin embargo, hay listas alojadas en otros lugares; consulte el archivo
MAINTAINERS para obtener la lista relevante para cualquier subsistema en
particular.
La lista de correo principal para el desarrollo del kernel es, por
supuesto, linux-kernel. Esta lista es un lugar intimidante; el volumen


@@ -334,7 +334,7 @@ con el árbol principal, necesitan probar su integración. Para ello, existe
un repositorio especial de pruebas en el que se encuentran casi todos los
árboles de subsistema, actualizado casi a diario:
https://git.kernel.org/?p=linux/kernel/git/next/linux-next.git
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
De esta manera, linux-next ofrece una perspectiva resumida de lo que se
espera que entre en el kernel principal en el próximo período de "merge"
@@ -378,13 +378,13 @@ desarrolladores del kernel participan en la lista de correo del kernel de
Linux. Detalles sobre cómo para suscribirse y darse de baja de la lista se
pueden encontrar en:
http://vger.kernel.org/vger-lists.html#linux-kernel
https://subspace.kernel.org/subscribing.html
Existen archivos de la lista de correo en la web en muchos lugares
distintos. Utilice un motor de búsqueda para encontrar estos archivos. Por
ejemplo:
http://dir.gmane.org/gmane.linux.kernel
https://lore.kernel.org/linux-kernel/
Es muy recomendable que busque en los archivos sobre el tema que desea
tratar, antes de publicarlo en la lista. Un montón de cosas ya discutidas
@@ -398,13 +398,13 @@ los diferentes grupos.
Muchas de las listas están alojadas en kernel.org. La información sobre
estas puede ser encontrada en:
http://vger.kernel.org/vger-lists.html
https://subspace.kernel.org
Recuerde mantener buenos hábitos de comportamiento al usar las listas.
Aunque un poco cursi, la siguiente URL tiene algunas pautas simples para
interactuar con la lista (o cualquier lista):
http://www.albion.com/netiquette/
https://subspace.kernel.org/etiquette.html
Si varias personas responden a su correo, el CC (lista de destinatarios)
puede hacerse bastante grande. No elimine a nadie de la lista CC: sin una


@@ -170,9 +170,8 @@ Recursos varios
* Título: **linux-kernel mailing list archives and search engines**
:URL: http://vger.kernel.org/vger-lists.html
:URL: http://www.uwsg.indiana.edu/hypermail/linux/kernel/index.html
:URL: http://groups.google.com/group/mlist.linux.kernel
:URL: https://subspace.kernel.org
:URL: https://lore.kernel.org
:Palabras Clave: linux-kernel, archives, buscar, search, archivos.
:Descripción: Algunos de los archivadores de listas de correo del
kernel de Linux. Si usted tiene uno mejor/otro, por favor hágamelo


@@ -136,11 +136,11 @@ algo documentado en la web, referencie esto.
Cuando se vincule a archivos de listas de correo, preferiblemente use el
servicio de archivador de mensajes lore.kernel.org. Para crear la URL del
enlace, utilice el contenido del encabezado ("header") ``Message-Id`` del
enlace, utilice el contenido del encabezado ("header") ``Message-ID`` del
mensaje sin los corchetes angulares que lo rodean.
Por ejemplo::
Link: https://lore.kernel.org/r/30th.anniversary.repost@klaava.Helsinki.FI/
Link: https://lore.kernel.org/30th.anniversary.repost@klaava.Helsinki.FI
Verifique el enlace para asegurarse de que realmente funciona y apunta al
mensaje correspondiente.
@@ -257,10 +257,10 @@ archivo MAINTAINERS una lista específica de los subsistemas; su parche
probablemente recibirá más atención allí. Sin embargo, no envíe spam a
listas no relacionadas.
Muchas listas relacionadas con el kernel están alojadas en vger.kernel.org;
Muchas listas relacionadas con el kernel están alojadas en kernel.org;
puedes encontrar un listado de estas en
http://vger.kernel.org/vger-lists.html. Existen listas relacionadas con el
kernel alojadas en otros lugares, no obstante.
https://subspace.kernel.org. Existen listas relacionadas con el kernel
alojadas en otros lugares, no obstante.
¡No envíe más de 15 parches a la vez a las listas de correo de vger!
@@ -907,9 +907,6 @@ Referencias
<http://www.kroah.com/log/linux/maintainer-06.html>
NO!!!! Gente, no mas bombas enormes de parches a linux-kernel@vger.kernel.org!
<https://lore.kernel.org/r/20050711.125305.08322243.davem@davemloft.net>
Kernel Documentation/process/coding-style.rst
Email de Linus Torvalds sobre la forma canónica de los parches:


@@ -0,0 +1,459 @@
.. SPDX-License-Identifier: GPL-2.0
=========================
Linux内核中文文档翻译规范
=========================
修订记录:
- v1.0 2025年3月28日司延腾、慕冬亮共同编写了该规范。
制定规范的背景
==============
过去几年在广大社区爱好者的友好合作下Linux 内核中文文档迎来了蓬勃的发
展。在翻译的早期,一切都是混乱的,社区对译稿只有一个准确翻译的要求,以鼓
励更多的开发者参与进来这是从0到1的必然过程所以早期的中文文档目录更加
具有多样性,不过好在文档不多,维护上并没有过大的压力。
然而,世事变幻,不觉有年,现在内核中文文档在前进的道路上越走越远,很多潜
在的问题逐渐浮出水面,而且随着中文文档数量的增加,翻译更多的文档与提高中
文文档可维护性之间的矛盾愈发尖锐。由于文档翻译的特殊性,很多开发者并不会
一直更新文档,如果中文文档落后英文文档太多,文档更新的工作量会远大于重新
翻译。而且邮件列表中陆续有新的面孔出现,他们那股热情,就像燃烧的火焰,能
瞬间点燃整个空间,可是他们的补丁往往具有个性,这会给审阅带来了很大的困难,
reviewer 们只能耐心地指导他们如何与社区更好地合作,但是这项工作具有重复
性,长此以往,会渐渐浇灭 reviewer 审阅的热情。
虽然内核文档中已经有了类似的贡献指南,但是缺乏专门针对于中文翻译的,尤其
是对于新手来说,浏览大量的文档反而更加迷惑,该文档就是为了缓解这一问题而
编写,目的是为提供给新手一个快速翻译指南。
详细的贡献指南Documentation/translations/zh_CN/process/index.rst。
环境搭建
========
工欲善其事必先利其器,如果您目前对内核文档翻译满怀热情,并且会独立地安装
linux 发行版和简单地使用 linux 命令行,那么可以迅速开始了。若您尚不具备该
能力,很多网站上会有详细的手把手教程,最多一个上午,您应该就能掌握对应技
能。您需要注意的一点是,请不要使用 root 用户进行后续步骤和文档翻译。
拉取开发树
----------
中文文档翻译工作目前独立于 linux-doc 开发树开展,所以您需要拉取该开发树,
打开终端命令行执行::
git clone git://git.kernel.org/pub/scm/linux/kernel/git/alexs/linux.git
如果您遇到网络连接问题,也可以执行以下命令::
git clone https://mirrors.hust.edu.cn/git/kernel-doc-zh.git linux
这是 Alex 开发树的镜像库,每两个小时同步一次上游。如果您了解到更快的 mirror
请随时 **添加**
命令执行完毕后,您会在当前目录下得到一个 linux 目录,该目录就是您之后的工作
仓库,请把它放在一个稳妥的位置。
安装文档构建环境
----------------
内核仓库里面提供了一个半自动化脚本,执行该脚本,会检测您的发行版中需要安
装哪些软件包,请按照命令行提示进行安装,通常您只需要复制命令并执行就行。
::
cd linux
./scripts/sphinx-pre-install
以Fedora为例它的输出是这样的::
You should run:
sudo dnf install -y dejavu-sans-fonts dejavu-sans-mono-fonts dejavu-serif-fonts google-noto-sans-cjk-fonts graphviz-gd latexmk librsvg2-tools texlive-anyfontsize texlive-capt-of texlive-collection-fontsrecommended texlive-ctex texlive-eqparbox texlive-fncychap texlive-framed texlive-luatex85 texlive-multirow texlive-needspace texlive-tabulary texlive-threeparttable texlive-upquote texlive-wrapfig texlive-xecjk
Sphinx needs to be installed either:
1) via pip/pypi with:
/usr/bin/python3 -m venv sphinx_latest
. sphinx_latest/bin/activate
pip install -r ./Documentation/sphinx/requirements.txt
If you want to exit the virtualenv, you can use:
deactivate
2) As a package with:
sudo dnf install -y python3-sphinx
Please note that Sphinx >= 3.0 will currently produce false-positive
warning when the same name is used for more than one type (functions,
structs, enums,...). This is known Sphinx bug. For more details, see:
https://github.com/sphinx-doc/sphinx/pull/8313
请您按照提示复制打印的命令到命令行执行,您必须具备 root 权限才能执行 sudo
开头的命令。
如果您处于一个多用户环境中,为了避免对其他人造成影响,建议您配置单用户
sphinx 虚拟环境,即只需要执行::
/usr/bin/python3 -m venv sphinx_latest
. sphinx_latest/bin/activate
pip install -r ./Documentation/sphinx/requirements.txt
最后执行以下命令退出虚拟环境::
deactivate
您可以在任何需要的时候再次执行以下命令进入虚拟环境::
. sphinx_latest/bin/activate
进行第一次文档编译
------------------
进入开发树目录::
cd linux
这是一个标准的编译和调试流程,请每次构建时都严格执行::
. sphinx_latest/bin/activate
make cleandocs
make htmldocs
deactivate
检查编译结果
------------
编译输出在Documentation/output/目录下,请用浏览器打开该目录下对应
的文件进行检查。
git和邮箱配置
-------------
打开命令行执行::
sudo dnf install git-email
vim ~/.gitconfig
这里是我的一个配置文件示范,请根据您的邮箱域名服务商提供的手册替换到对
应的字段。
::
[user]
name = Yanteng Si # 这会出现在您的补丁头部签名栏
email = si.yanteng@linux.dev # 这会出现在您的补丁头部签名栏
[sendemail]
from = Yanteng Si <si.yanteng@linux.dev> # 这会出现在您的补丁头部
smtpencryption = ssl
smtpserver = smtp.migadu.com
smtpuser = si.yanteng@linux.dev
smtppass = <passwd> # 建议使用第三方客户端专用密码
chainreplyto = false
smtpserverport = 465
关于邮件客户端的配置请查阅Documentation/translations/zh_CN/process/email-clients.rst。
开始翻译文档
============
文档索引结构
------------
目前中文文档是在Documentation/translations/zh_CN/目录下进行,该
目录结构最终会与Documentation/结构一致,所以您只需要将您感兴趣的英文
文档文件和对应的 index.rst 复制到 zh_CN 目录下对应的位置,然后修改更
上一级的 index 即可开始您的翻译。
为了保证翻译的文档补丁被顺利合并,不建议多人同时翻译一个目录,因为这会
造成补丁之间互相依赖,往往会导致一部分补丁被合并,另一部分产生冲突。
如果实在无法避免两个人同时对一个目录进行翻译的情况,请将补丁制作进一个补
丁集。但是不推荐刚开始就这么做,因为经过实践,在没有指导的情况下,新手很
难一次处理好这个补丁集。
请执行以下命令,新建开发分支::
git checkout docs-next
git branch my-trans
git checkout my-trans
译文格式要求
------------
- 每行长度最多不超过40个字符
- 每行长度请保持一致
- 标题的下划线长度请按照一个英文一个字符、一个中文两个字符与标题对齐
- 其它的修饰符请与英文文档保持一致
此外在译文的头部,您需要插入以下内容::
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst #您需要了解该文件的路径,根
据您实际翻译的文档灵活调整
:Original: Documentation/xxx/xxx.rst #替换为您翻译的英文文档路径
:翻译:
司延腾 Yanteng Si <si.yanteng@linux.dev> #替换为您自己的联系方式
翻译技巧
--------
中文文档有每行40字符限制因为一个中文字符等于2个英文字符。但是社区并没有
那么严格,一个诀窍是将您的翻译的内容与英文原文的每行长度对齐即可,这样,
您也不必总是检查有没有超限。
如果您的英文阅读能力有限,可以考虑使用辅助翻译工具,例如 deepseek 。但是您
必须仔细地打磨,使译文达到“信达雅”的标准。
**请注意** 社区不接受纯机器翻译的文档,社区工作建立在信任的基础上,请认真对待。
编译和检查
----------
请执行::
. sphinx_latest/bin/activate
make cleandocs
make htmldocs
解决与您翻译的文档相关的 warning 和 error然后执行::
make cleandocs #该步骤不能省略,否则可能不会再次输出真实存在的警告
make htmldocs
deactivate
进入 output 目录用浏览器打开您翻译的文档,检查渲染的页面是否正常,如果正常,
继续进行后续步骤,否则请尝试解决。
制作补丁
========
提交改动
--------
执行以下命令,在弹出的交互式页面中填写必要的信息。
::
git add .
git commit -s -v
请参考以下信息进行输入::
docs/zh_CN: Add self-protection index Chinese translation
Translate .../security/self-protection.rst into Chinese.
Update the translation through commit b080e52110ea #请执行git log <您翻译的英文文档路径> 复制最顶部第一个补丁的sha值的前12位替换掉12位sha值。
("docs: update self-protection __ro_after_init status")
Signed-off-by: Yanteng Si <si.yanteng@linux.dev> #如果您前面的步骤正确执行该行会自动显示否则请检查gitconfig文件。
保存并退出。
**请注意** 以上四行,缺少任何一行,您都将会在第一轮审阅后返工,如果您需要一个更加明确的示例,请对 zh_CN 目录执行 git log。
导出补丁和制作封面
------------------
这个时候,可以导出补丁,做发送邮件列表最后的准备了。命令行执行::
git format-patch -N
然后命令行会输出类似下面的内容::
0001-docs-zh_CN-add-xxxxxxxx.patch
0002-docs-zh_CN-add-xxxxxxxx.patch
……
测试补丁
--------
内核提供了一个补丁检测脚本,请执行::
./scripts/checkpatch.pl *.patch
参考脚本输出,解决掉所有的 error 和 warning通常情况下只有下面这个
warning 不需要解决::
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
一个简单的解决方法是一次只检查一个补丁,然后打上该补丁,直接对译文进行修改,
然后执行以下命令为补丁追加更改::
git checkout docs-next
git branch test-trans
git am 0001-xxxxx.patch
./scripts/checkpatch.pl 0001-xxxxx.patch
直接修改您的翻译
git add .
git am --amend
保存退出
git am 0002-xxxxx.patch
……
重新导出再次检测,重复这个过程,直到处理完所有的补丁。
最后,如果检测时没有 warning 和 error 需要被处理或者您只有一个补丁,请跳
过下面这个步骤,否则请重新导出补丁制作封面::
git format-patch -N --cover-letter --thread=shallow #N为您的补丁数量,N一般要大于1。
然后命令行会输出类似下面的内容::
0000-cover-letter.patch
0001-docs-zh_CN-add-xxxxxxxx.patch
0002-docs-zh_CN-add-xxxxxxxx.patch
您需要用编辑器打开0号补丁修改两处内容::
vim 0000-cover-letter.patch
...
Subject: [PATCH 0/1] *** SUBJECT HERE *** #修改该字段,概括您的补丁集都做了哪些事情
*** BLURB HERE *** #修改该字段,详细描述您的补丁集做了哪些事情
Yanteng Si (1):
docs/zh_CN: add xxxxx
...
如果您只有一个补丁则可以不制作封面即0号补丁只需要执行::
git format-patch -1
把补丁提交到邮件列表
====================
恭喜您,您的文档翻译现在可以提交到邮件列表了。
获取维护者和审阅者邮箱以及邮件列表地址
--------------------------------------
内核提供了一个自动化脚本工具,请执行::
./scripts/get_maintainer.pl *.patch
将输出的邮箱地址保存下来。
将补丁提交到邮件列表
--------------------
打开上面您保存的邮件地址,执行::
git send-email *.patch --to <maintainer email addr> --cc <others addr> #一个to对应一个地址一个cc对应一个地址有几个就写几个。
执行该命令时请确保网络通畅,邮件发送成功一般会返回250。
您可以先发送给自己,尝试发出的 patch 是否可以用 'git am' 工具正常打上。
如果检查正常, 您就可以放心的发送到社区评审了。
如果该步骤被中断,您可以检查一下,继续用上条命令发送失败的补丁,一定不要再
次发送已经发送成功的补丁。
积极参与审阅过程并迭代补丁
==========================
补丁提交到邮件列表并不代表万事大吉,您还需要积极回复 maintainer 和
reviewer 的评论,做到每条都有回复,每个回复都落实到位。
如何回复评论
------------
- 请先将您的邮箱客户端信件回复修改为 **纯文本** 格式,并去除所有签名,尤其是
企业邮箱。
- 然后点击回复按钮,并将要回复的邮件带入,
- 在第一条评论行尾换行,输入您的回复
- 在第二条评论行尾换行,输入您的回复
- 直到处理完最后一条评论,换行空两行输入问候语和署名
注意,信件回复请尽量使用英文。
迭代补丁
--------
建议您每回复一条评论,就修改一处翻译。然后重新生成补丁,相信您现在已经具
备了灵活使用 git am --amend 的能力。
每次迭代一个补丁,不要一次多个::
git am <您要修改的补丁>
直接对文件进行您的修改
git add .
git commit --amend
当您将所有的评论落实到位后,导出第二版补丁,并修改封面::
git format-patch -N -v 2 --cover-letter --thread=shallow
打开0号补丁在 BLURB HERE 处编写相较于上个版本,您做了哪些改动。
然后执行::
git send-email v2* --to <maintainer email addr> --cc <others addr>
这样,新的一版补丁就又发送到邮件列表等待审阅,之后就是重复这个过程。
审阅周期
--------
因为有时邮件列表比较繁忙,您的邮件可能会被淹没,如果超过两周没有得到任何
回复,请自己回复自己,回复的内容为 Ping.
最终,如果您落实好了所有的评论,并且一段时间后没有最新的评论,您的补丁将
会先进入 Alex 的开发树,然后进入 linux-doc 开发树,最终在下个窗口打开
时合并进 mainline 仓库。
紧急处理
--------
如果您发送到邮件列表之后。发现发错了补丁集,尤其是在多个版本迭代的过程中;
自己发现了一些不妥的翻译;发送错了邮件列表……
git email默认会抄送给您一份所以您可以切换为审阅者的角色审查自己的补丁
并留下评论,描述有何不妥,将在下个版本怎么改,并付诸行动,重新提交,但是
注意频率,每天提交的次数不要超过两次。
新手任务
--------
对于首次参与 Linux 内核中文文档翻译的新手,建议您在 linux 目录中运行以下命令:
::
./scripts/checktransupdate.py -l zh_CN
该命令会列出需要翻译或更新的英文文档。
关于详细操作说明,请参考: Documentation/translations/zh_CN/doc-guide/checktransupdate.rst
进阶
----
希望您不只是单纯的翻译内核文档,在熟悉了一起与社区工作之后,您可以审阅其他
开发者的翻译,或者提出具有建设性的主张。与此同时,与文档对应的代码更加有趣,
而且需要完善的地方还有很多,勇敢地去探索,然后提交你的想法吧。
常见的问题
==========
Maintainer回复补丁不能正常apply
-------------------------------
这通常是因为您的补丁与邮件列表其他人的补丁产生了冲突,别人的补丁先被 apply 了,
您的补丁集就无法成功 apply 了,这需要您更新本地分支,在本地解决完冲突后再次提交。
请尽量避免冲突,不要多个人同时翻译一个目录。翻译之前可以通过 git log 查看您感
兴趣的目录近期有没有其他人翻译,如果有,请提前私信联系对方,请求其代为发送您
的补丁。如果对方未来一个月内没有提交新补丁的打算,您可以独自发送。
回信被邮件列表拒收
------------------
大部分情况下,是由于您发送了非纯文本格式的信件,请尽量避免使用 webmail推荐
使用邮件客户端,比如 thunderbird记得在设置中的回信配置那改为纯文本发送。
如果超过了24小时您依旧没有在<https://lore.kernel.org/linux-doc/>发现您的邮
件,请联系您的网络管理员帮忙解决。


@@ -21,18 +21,18 @@
这是中文内核文档树的顶级目录。内核文档,就像内核本身一样,在很大程度上是一
项正在进行的工作;当我们努力将许多分散的文件整合成一个连贯的整体时尤其如此。
另外随时欢迎您对内核文档进行改进如果您想提供帮助请加入vger.kernel.org
上的linux-doc邮件列表。
上的linux-doc邮件列表并按照Documentation/translations/zh_CN/how-to.rst的
指引提交补丁。提交补丁之前请确保执行"make htmldocs"后无与翻译有关的异常输出。
顺便说下,中文文档也需要遵守内核编码风格,风格中中文和英文的主要不同就是中文
的字符标点占用两个英文字符宽度所以当英文要求不要超过每行100个字符时
中文就不要超过50个字符。另外也要注意'-''='等符号与相关标题的对齐。在将
补丁提交到社区之前,一定要进行必要的 ``checkpatch.pl`` 检查和编译测试,确保
``make htmldocs/pdfdocs`` 中不增加新的告警,最后,安装检查你生成的
html/pdf 文件,确认它们看起来是正常的。
如何翻译内核文档
----------------
提交之前请确认你的补丁可以正常提交到中文文档维护库:
https://git.kernel.org/pub/scm/linux/kernel/git/alexs/linux.git/
如果你的补丁依赖于其他人的补丁, 可以与其他人商量后由某一个人合并提交。
翻译文档本身是一件很简单的事情,但是提交补丁需要注意一些细节,为了保证内核中文文档的高质量可持续发展,提供了一份翻译指南。
.. toctree::
:maxdepth: 1
how-to.rst
与Linux 内核社区一起工作
------------------------


@@ -0,0 +1,160 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst
:Original: Documentation/networking/index.rst
:翻译:
王亚鑫 Wang Yaxin <wang.yaxin@zte.com.cn>
:校译:
网络
====
有关网络设备(netdev)开发过程的详细指南,请参考 :ref:`netdev-FAQ`。
目录:
.. toctree::
   :maxdepth: 1

   msg_zerocopy
Todolist:
* af_xdp
* bareudp
* batman-adv
* can
* can_ucan_protocol
* device_drivers/index
* diagnostic/index
* dsa/index
* devlink/index
* caif/index
* ethtool-netlink
* ieee802154
* iso15765-2
* j1939
* kapi
* failover
* net_dim
* net_failover
* page_pool
* phy
* sfp-phylink
* alias
* bridge
* snmp_counter
* checksum-offloads
* segmentation-offloads
* scaling
* tls
* tls-offload
* tls-handshake
* nfc
* 6lowpan
* 6pack
* arcnet-hardware
* arcnet
* atm
* ax25
* bonding
* cdc_mbim
* dccp
* dctcp
* devmem
* dns_resolver
* driver
* eql
* fib_trie
* filter
* generic-hdlc
* generic_netlink
* netlink_spec/index
* gen_stats
* gtp
* ila
* ioam6-sysctl
* ip_dynaddr
* ipsec
* ip-sysctl
* ipv6
* ipvlan
* ipvs-sysctl
* kcm
* l2tp
* lapb-module
* mac80211-injection
* mctp
* mpls-sysctl
* mptcp
* mptcp-sysctl
* multiqueue
* multi-pf-netdev
* napi
* net_cachelines/index
* netconsole
* netdev-features
* netdevices
* netfilter-sysctl
* netif-msg
* netmem
* nexthop-group-resilient
* nf_conntrack-sysctl
* nf_flowtable
* oa-tc6-framework
* openvswitch
* operstates
* packet_mmap
* phonet
* phy-link-topology
* pktgen
* plip
* ppp_generic
* proc_net_tcp
* pse-pd/index
* radiotap-headers
* rds
* regulatory
* representors
* rxrpc
* sctp
* secid
* seg6-sysctl
* skbuff
* smc-sysctl
* sriov
* statistics
* strparser
* switchdev
* sysfs-tagging
* tc-actions-env-rules
* tc-queue-filters
* tcp_ao
* tcp-thin
* team
* timestamping
* tipc
* tproxy
* tuntap
* udplite
* vrf
* vxlan
* x25
* x25-iface
* xfrm_device
* xfrm_proc
* xfrm_sync
* xfrm_sysctl
* xdp-rx-metadata
* xsk-tx-metadata
.. only:: subproject and html

   Indices
   =======

   * :ref:`genindex`

@@ -0,0 +1,223 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst
:Original: Documentation/networking/msg_zerocopy.rst
:翻译:
王亚鑫 Wang Yaxin <wang.yaxin@zte.com.cn>
:校译:
- 徐鑫 xu xin <xu.xin16@zte.com.cn>
- 何配林 He Peilin <he.peilin@zte.com.cn>
============
MSG_ZEROCOPY
============
简介
====
MSG_ZEROCOPY 标志用于启用套接字发送调用的免拷贝功能。该功能目前适用于 TCP、UDP 和 VSOCK
(使用 virtio 传输)套接字。
机遇与注意事项
--------------
在用户进程与内核之间拷贝大型缓冲区可能会消耗大量资源。Linux 支持多种免拷贝的接口,如 sendfile
和 splice。MSG_ZEROCOPY 标志将底层的拷贝避免机制扩展到了常见的套接字发送调用中。
免拷贝并非毫无代价。在实现上,它通过页面固定(page pinning)将按字节拷贝的成本替换为页面统计
(page accounting)和完成通知的开销。因此,MSG_ZEROCOPY 通常仅在写入量超过大约 10 KB 时
才有效。
页面固定还会改变系统调用的语义。它会暂时在进程和网络堆栈之间共享缓冲区。与拷贝不同,进程在系统
调用返回后不能立即覆盖缓冲区,否则可能会修改正在传输中的数据。内核的完整性不会受到影响,但有缺
陷的程序可能会破坏自己的数据流。
当内核返回数据可以安全修改的通知时,进程才可以修改数据。因此,将现有应用程序转换为使用
MSG_ZEROCOPY 并非总是像简单地传递该标志那样容易。
更多信息
--------
本文档的大部分内容来自于 netdev 2.1 上发表的一篇长篇论文。如需更深入的信息,请参阅该论文和
演讲,或者浏览 LWN.net 上的精彩报道,也可以直接阅读源码。
论文、幻灯片、视频:
https://netdevconf.org/2.1/session.html?debruijn
LWN 文章:
https://lwn.net/Articles/726917/
补丁集:
[PATCH net-next v4 0/9] socket sendmsg MSG_ZEROCOPY
https://lore.kernel.org/netdev/20170803202945.70750-1-willemdebruijn.kernel@gmail.com
接口
====
传递 MSG_ZEROCOPY 标志是启用免拷贝功能的最明显步骤,但并非唯一的步骤。
套接字设置
----------
当应用程序向 send 系统调用传递未定义的标志时,内核通常会宽容对待。默认情况下,它会简单地忽略
这些标志。为了避免为那些偶然传递此标志的遗留进程启用免拷贝模式,进程必须首先通过设置套接字选项
来表明意图:
::
    if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
        error(1, errno, "setsockopt zerocopy");
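上面的 C 片段也可以用 Python 粗略地模拟。以下是一个示意性片段(并非内核或本文档自带的代码):其中 SO_ZEROCOPY 与 MSG_ZEROCOPY 的数值取自 Linux 的 UAPI 头文件,Python 的 socket 模块不一定导出这些常量,并假设运行在 Linux 上:

```python
import socket

# 数值取自 Linux UAPI 头文件;Python 的 socket 模块未必导出它们,
# 因此这里带回退值显式获取(仅作演示)。
SO_ZEROCOPY = getattr(socket, "SO_ZEROCOPY", 60)           # asm-generic/socket.h
MSG_ZEROCOPY = getattr(socket, "MSG_ZEROCOPY", 0x4000000)  # linux/socket.h

def enable_zerocopy(sock):
    """为套接字开启 MSG_ZEROCOPY;内核不支持时返回 False。"""
    try:
        sock.setsockopt(socket.SOL_SOCKET, SO_ZEROCOPY, 1)
        return True
    except OSError:
        # 较旧的内核(TCP 需要 >= 4.14)会拒绝该选项
        return False
```

之后的 send 调用即可按上文方式传入 MSG_ZEROCOPY 标志。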
传输
----
对 send(或 sendto、sendmsg、sendmmsg)本身的改动非常简单。只需传递新的标志即可。
::
    ret = send(fd, buf, sizeof(buf), MSG_ZEROCOPY);
如果零拷贝操作失败,将返回 -1,并设置 errno 为 ENOBUFS。这种情况可能发生在套接字超出其
optmem 限制,或者用户超出其锁定页面的 ulimit 时。
混合使用免拷贝和拷贝
~~~~~~~~~~~~~~~~~~~~
许多工作负载同时包含大型和小型缓冲区。由于对于小数据包来说,免拷贝的成本高于拷贝,因此该
功能是通过标志实现的。带有标志的调用和没有标志的调用可以安全地混合使用。
通知
----
当内核认为可以安全地重用之前传递的缓冲区时,它必须通知进程。完成通知在套接字的错误队列上
排队,类似于传输时间戳接口。
通知本身是一个简单的标量值。每个套接字都维护一个内部的无符号 32 位计数器。每次带有
MSG_ZEROCOPY 标志的 send 调用成功发送数据时,计数器都会增加。如果调用失败或长度为零,
则计数器不会增加。该计数器统计系统调用的调用次数,而不是字节数。在 UINT_MAX 次调用后,
计数器会循环。
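上述计数器语义可以用下面的 Python 片段来示意(假想代码,仅帮助理解,内核内部实现并非如此):

```python
UINT_MAX = 2**32 - 1

class NotificationCounter:
    """模拟上文描述的套接字内部 32 位计数器:
    每次成功的 MSG_ZEROCOPY 发送加一,到 UINT_MAX 后回绕。"""

    def __init__(self):
        self.next_id = 0

    def on_send(self, ok):
        """成功发送时返回本次调用对应的通知编号,否则返回 None。"""
        if not ok:
            return None                          # 调用失败或长度为零:不增加
        seq = self.next_id
        self.next_id = (self.next_id + 1) & UINT_MAX  # 模 2^32 回绕
        return seq
```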
通知接收
~~~~~~~~
下面的代码片段展示了 API 的使用。在最简单的情况下,每次 send 系统调用后,都会对错误队列
进行轮询和 recvmsg 调用。
从错误队列读取始终是一个非阻塞操作。poll 调用用于阻塞,直到出现错误。它会在其输出标志中
设置 POLLERR。该标志不需要在 events 字段中设置。错误会无条件地发出信号。
::
    pfd.fd = fd;
    pfd.events = 0;
    if (poll(&pfd, 1, -1) != 1 || (pfd.revents & POLLERR) == 0)
        error(1, errno, "poll");

    ret = recvmsg(fd, &msg, MSG_ERRQUEUE);
    if (ret == -1)
        error(1, errno, "recvmsg");

    read_notification(msg);
这个示例仅用于演示目的。在实际应用中,不等待通知,而是每隔几次 send 调用就进行一次非阻塞
读取会更高效。
零拷贝通知可以与其他套接字操作乱序处理。通常,拥有错误队列套接字会阻塞其他操作,直到错误
被读取。然而,零拷贝通知具有零错误代码,因此不会阻塞 send 和 recv 调用。
通知批处理
~~~~~~~~~~~~
可以使用 recvmmsg 调用来一次性读取多个未决的数据包。这通常不是必需的。在每条消息中,内核
返回的不是一个单一的值,而是一个范围。当错误队列上有一个通知正在等待接收时,它会将连续的通
知合并起来。
当一个新的通知即将被排队时,它会检查队列尾部的通知的范围是否可以扩展以包含新的值。如果是这
样,它会丢弃新的通知数据包,并增大未处理通知的范围上限值。
对于按顺序确认数据的协议(如 TCP),每个通知都可以合并到前一个通知中,因此在任何时候,
在等待的通知都不会超过一个。
有序交付是常见的情况,但不能保证。在重传和套接字拆除时,通知可能会乱序到达。
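上述合并规则可以用如下 Python 片段示意(假想的辅助函数,并非内核实现):

```python
def queue_notification(pending, lo, hi):
    """把闭区间 [lo, hi] 追加到待处理通知列表 pending 中;
    若与队尾区间连续,则只扩大队尾区间的上限(模拟内核丢弃新
    通知数据包、增大范围上限的做法)。"""
    if pending and pending[-1][1] + 1 == lo:
        pending[-1] = (pending[-1][0], hi)   # 丢弃新通知,扩大上限
    else:
        pending.append((lo, hi))
    return pending
```

对 TCP 这类按序确认的协议,连续的编号总能并入队尾,于是 pending 中始终至多只有一个区间。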
通知解析
~~~~~~~~
下面的代码片段演示了如何解析控制消息:前面代码片段中的 read_notification() 调用。通知
以标准错误格式 sock_extended_err 编码。
控制数据中的级别和类型字段是协议族特定的,对于 TCP 或 UDP 套接字,分别为 IP_RECVERR 或
IPV6_RECVERR。对于 VSOCK 套接字,cmsg_level 为 SOL_VSOCK,cmsg_type 为 VSOCK_RECVERR。
错误来源是新的类型 SO_EE_ORIGIN_ZEROCOPY。如前所述,ee_errno 为零,以避免在套接字上
阻塞地读取和写入系统调用。
32 位通知范围编码为 [ee_info, ee_data]。这个范围是包含边界值的。除了下面讨论的 ee_code
字段外,结构中的其他字段应被视为未定义的。
::
    struct sock_extended_err *serr;
    struct cmsghdr *cm;

    cm = CMSG_FIRSTHDR(msg);
    if (cm->cmsg_level != SOL_IP &&
        cm->cmsg_type != IP_RECVERR)
            error(1, 0, "cmsg");

    serr = (void *) CMSG_DATA(cm);
    if (serr->ee_errno != 0 ||
        serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY)
            error(1, 0, "serr");

    printf("completed: %u..%u\n", serr->ee_info, serr->ee_data);
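同样的解析逻辑也可以在 Python 中用 struct 模块完成。下面的示意片段按 linux/errqueue.h 中 struct sock_extended_err 的布局解包控制消息数据(字段布局以头文件为准):

```python
import struct

# struct sock_extended_err 的布局(linux/errqueue.h):
#   __u32 ee_errno; __u8 ee_origin; __u8 ee_type; __u8 ee_code;
#   __u8 ee_pad; __u32 ee_info; __u32 ee_data;
SEE_FMT = "=IBBBBII"
SO_EE_ORIGIN_ZEROCOPY = 5

def read_notification(cmsg_data):
    """解析通知,返回 (ee_info, ee_data) 闭区间。"""
    fields = struct.unpack(SEE_FMT, cmsg_data[:struct.calcsize(SEE_FMT)])
    ee_errno, ee_origin, _type, ee_code, _pad, ee_info, ee_data = fields
    if ee_errno != 0 or ee_origin != SO_EE_ORIGIN_ZEROCOPY:
        raise ValueError("不是零拷贝完成通知")
    return ee_info, ee_data
```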
延迟拷贝
~~~~~~~~
传递标志 MSG_ZEROCOPY 是向内核发出的一个提示,让内核采用免拷贝的策略,同时也是一种约
定,即内核会对完成通知进行排队处理。但这并不保证拷贝操作一定会被省略。
拷贝避免不总是适用的。不支持分散/聚集 I/O 的设备无法发送由内核生成的协议头加上零拷贝用户
数据组成的数据包。数据包可能需要在协议栈底层转换为一份私有数据副本,例如用于计算校验和。
在所有这些情况下,当内核释放对共享页面的持有权时,它会返回一个完成通知。该通知可能在(已
拷贝)数据完全传输之前到达。因此,零拷贝完成通知并不是传输完成通知。
如果数据不在缓存中,延迟拷贝可能会比立即在系统调用中拷贝开销更大。进程还会因通知处理而产
生成本,但却没有带来任何好处。因此,内核会在返回时通过在 ee_code 字段中设置标志
SO_EE_CODE_ZEROCOPY_COPIED 来指示数据是否以拷贝的方式完成。进程可以利用这个信号,在
同一套接字上后续的请求中停止传递 MSG_ZEROCOPY 标志。
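这一信号的用法可以示意如下(假想片段;SO_EE_CODE_ZEROCOPY_COPIED 与 MSG_ZEROCOPY 的数值取自 Linux UAPI 头文件):

```python
SO_EE_CODE_ZEROCOPY_COPIED = 1   # linux/errqueue.h
MSG_ZEROCOPY = 0x4000000         # linux/socket.h

def next_send_flags(ee_code):
    """若内核报告本次以拷贝方式完成,则后续发送不再请求零拷贝。"""
    if ee_code & SO_EE_CODE_ZEROCOPY_COPIED:
        return 0                 # 回退为普通 send
    return MSG_ZEROCOPY
```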
实现
====
环回
----
对于 TCP 和 UDP:
如果接收进程不读取其套接字,发送到本地套接字的数据可能会无限期排队。无限期的通知延迟是不
可接受的。因此,所有使用 MSG_ZEROCOPY 生成并环回到本地套接字的数据包都将产生延迟拷贝。
这包括环回到数据包套接字(例如 tcpdump)和 tun 设备。
对于 VSOCK:
发送到本地套接字的数据路径与非本地套接字相同。
测试
====
更具体的示例代码可以在内核源码的 tools/testing/selftests/net/msg_zerocopy.c 中找到。
要留意环回约束问题。该测试可以在一对主机之间进行。但如果是在本地的一对进程之间运行,例如当使用
msg_zerocopy.sh 脚本在跨命名空间的虚拟以太网(veth)对之间运行时,测试将不会显示出任何性能
提升。为了便于测试,可以通过让 skb_orphan_frags_rx 与 skb_orphan_frags 相同,来暂时放宽
环回限制。
对于 VSOCK 类型套接字的示例可以在 tools/testing/vsock/vsock_test_zerocopy.c 中找到。

@@ -28,10 +28,10 @@ or number from the table below. Because of the large number of drivers,
many drivers share a partial letter with other drivers.
If you are writing a driver for a new device and need a letter, pick an
unused block with enough room for expansion: 32 to 256 ioctl commands.
You can register the block by patching this file and submitting the
patch to Linus Torvalds. Or you can e-mail me at <mec@shout.net> and
I'll register one for you.
unused block with enough room for expansion: 32 to 256 ioctl commands
should suffice. You can register the block by patching this file and
submitting the patch through :doc:`usual patch submission process
</process/submitting-patches>`.
The second argument to _IO, _IOW, _IOR, or _IOWR is a sequence number
to distinguish ioctls from each other. The third argument to _IOW,
@@ -62,9 +62,8 @@ Following this convention is good because:
(5) When following the convention, the driver code can use generic
code to copy the parameters between user and kernel space.
This table lists ioctls visible from user land for Linux/x86. It contains
most drivers up to 2.6.31, but I know I am missing some. There has been
no attempt to list non-X86 architectures or ioctls from drivers/staging/.
This table lists ioctls visible from userland, excluding ones from
drivers/staging/.
==== ===== ======================================================= ================================================================
Code Seq# Include File Comments

@@ -5510,7 +5510,7 @@ F: Documentation/dev-tools/checkpatch.rst
CHINESE DOCUMENTATION
M: Alex Shi <alexs@kernel.org>
M: Yanteng Si <siyanteng@loongson.cn>
M: Yanteng Si <si.yanteng@linux.dev>
R: Dongliang Mu <dzm91@hust.edu.cn>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/alexs/linux.git
S: Maintained
@@ -7088,7 +7088,10 @@ T: git git://git.lwn.net/linux.git docs-next
F: Documentation/
F: scripts/check-variable-fonts.sh
F: scripts/documentation-file-ref-check
F: scripts/kernel-doc
F: scripts/get_abi.py
F: scripts/kernel-doc*
F: scripts/lib/abi/*
F: scripts/lib/kdoc/*
F: scripts/sphinx-pre-install
X: Documentation/ABI/
X: Documentation/admin-guide/media/

@@ -458,6 +458,11 @@ endif
HOSTRUSTC = rustc
HOSTPKG_CONFIG = pkg-config
# the KERNELDOC macro needs to be exported, as scripts/Makefile.build
# has a logic to call it
KERNELDOC = $(srctree)/scripts/kernel-doc.py
export KERNELDOC
KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
-O2 -fomit-frame-pointer -std=gnu11
KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)

@@ -236,7 +236,7 @@ always-$(CONFIG_DRM_HEADER_TEST) += \
quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@)
cmd_hdrtest = \
$(CC) $(c_flags) -fsyntax-only -x c /dev/null -include $< -include $<; \
$(srctree)/scripts/kernel-doc -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \
PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \
touch $@
$(obj)/%.hdrtest: $(src)/%.h FORCE

@@ -408,7 +408,7 @@ obj-$(CONFIG_DRM_I915_GVT_KVMGT) += kvmgt.o
#
# Enable locally for CONFIG_DRM_I915_WERROR=y. See also scripts/Makefile.build
ifdef CONFIG_DRM_I915_WERROR
cmd_checkdoc = $(srctree)/scripts/kernel-doc -none -Werror $<
cmd_checkdoc = PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none -Werror $<
endif
# header test

@@ -11,7 +11,7 @@ always-$(CONFIG_DRM_HEADER_TEST) += \
quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@)
cmd_hdrtest = \
$(CC) $(c_flags) -fsyntax-only -x c /dev/null -include $< -include $<; \
$(srctree)/scripts/kernel-doc -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \
PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \
touch $@
$(obj)/%.hdrtest: $(src)/%.h FORCE

@@ -83,7 +83,7 @@ else ifeq ($(KBUILD_CHECKSRC),2)
endif
ifneq ($(KBUILD_EXTRA_WARN),)
cmd_checkdoc = $(srctree)/scripts/kernel-doc -none $(KDOCFLAGS) \
cmd_checkdoc = PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none $(KDOCFLAGS) \
$(if $(findstring 2, $(KBUILD_EXTRA_WARN)), -Wall) \
$<
endif

@@ -54,7 +54,7 @@ for file in `find $1 -name '*.c'`; do
if [[ ${FILES_INCLUDED[$file]+_} ]]; then
continue;
fi
str=$(scripts/kernel-doc -export "$file" 2>/dev/null)
str=$(PYTHONDONTWRITEBYTECODE=1 scripts/kernel-doc -export "$file" 2>/dev/null)
if [[ -n "$str" ]]; then
echo "$file"
fi

File diff suppressed because it is too large

scripts/kernel-doc Symbolic link

@@ -0,0 +1 @@
kernel-doc.py

scripts/kernel-doc.pl Executable file

File diff suppressed because it is too large

scripts/kernel-doc.py Executable file

@@ -0,0 +1,315 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
#
# pylint: disable=C0103,R0915
#
# Converted from the kernel-doc script originally written in Perl
# under GPLv2, copyrighted since 1998 by the following authors:
#
# Aditya Srivastava <yashsri421@gmail.com>
# Akira Yokosawa <akiyks@gmail.com>
# Alexander A. Klimov <grandmaster@al2klimov.de>
# Alexander Lobakin <aleksander.lobakin@intel.com>
# André Almeida <andrealmeid@igalia.com>
# Andy Shevchenko <andriy.shevchenko@linux.intel.com>
# Anna-Maria Behnsen <anna-maria@linutronix.de>
# Armin Kuster <akuster@mvista.com>
# Bart Van Assche <bart.vanassche@sandisk.com>
# Ben Hutchings <ben@decadent.org.uk>
# Borislav Petkov <bbpetkov@yahoo.de>
# Chen-Yu Tsai <wenst@chromium.org>
# Coco Li <lixiaoyan@google.com>
# Conchúr Navid <conchur@web.de>
# Daniel Santos <daniel.santos@pobox.com>
# Danilo Cesar Lemes de Paula <danilo.cesar@collabora.co.uk>
# Dan Luedtke <mail@danrl.de>
# Donald Hunter <donald.hunter@gmail.com>
# Gabriel Krisman Bertazi <krisman@collabora.co.uk>
# Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# Harvey Harrison <harvey.harrison@gmail.com>
# Horia Geanta <horia.geanta@freescale.com>
# Ilya Dryomov <idryomov@gmail.com>
# Jakub Kicinski <kuba@kernel.org>
# Jani Nikula <jani.nikula@intel.com>
# Jason Baron <jbaron@redhat.com>
# Jason Gunthorpe <jgg@nvidia.com>
# Jérémy Bobbio <lunar@debian.org>
# Johannes Berg <johannes.berg@intel.com>
# Johannes Weiner <hannes@cmpxchg.org>
# Jonathan Cameron <Jonathan.Cameron@huawei.com>
# Jonathan Corbet <corbet@lwn.net>
# Jonathan Neuschäfer <j.neuschaefer@gmx.net>
# Kamil Rytarowski <n54@gmx.com>
# Kees Cook <kees@kernel.org>
# Laurent Pinchart <laurent.pinchart@ideasonboard.com>
# Levin, Alexander (Sasha Levin) <alexander.levin@verizon.com>
# Linus Torvalds <torvalds@linux-foundation.org>
# Lucas De Marchi <lucas.demarchi@profusion.mobi>
# Mark Rutland <mark.rutland@arm.com>
# Markus Heiser <markus.heiser@darmarit.de>
# Martin Waitz <tali@admingilde.org>
# Masahiro Yamada <masahiroy@kernel.org>
# Matthew Wilcox <willy@infradead.org>
# Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
# Michal Wajdeczko <michal.wajdeczko@intel.com>
# Michael Zucchi
# Mike Rapoport <rppt@linux.ibm.com>
# Niklas Söderlund <niklas.soderlund@corigine.com>
# Nishanth Menon <nm@ti.com>
# Paolo Bonzini <pbonzini@redhat.com>
# Pavan Kumar Linga <pavan.kumar.linga@intel.com>
# Pavel Pisa <pisa@cmp.felk.cvut.cz>
# Peter Maydell <peter.maydell@linaro.org>
# Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
# Randy Dunlap <rdunlap@infradead.org>
# Richard Kennedy <richard@rsk.demon.co.uk>
# Rich Walker <rw@shadow.org.uk>
# Rolf Eike Beer <eike-kernel@sf-tec.de>
# Sakari Ailus <sakari.ailus@linux.intel.com>
# Silvio Fricke <silvio.fricke@gmail.com>
# Simon Huggins
# Tim Waugh <twaugh@redhat.com>
# Tomasz Warniełło <tomasz.warniello@gmail.com>
# Utkarsh Tripathi <utripathi2002@gmail.com>
# valdis.kletnieks@vt.edu <valdis.kletnieks@vt.edu>
# Vegard Nossum <vegard.nossum@oracle.com>
# Will Deacon <will.deacon@arm.com>
# Yacine Belkadi <yacine.belkadi.1@gmail.com>
# Yujie Liu <yujie.liu@intel.com>
"""
kernel_doc
==========
Print formatted kernel documentation to stdout
Read C language source or header FILEs, extract embedded
documentation comments, and print formatted documentation
to standard output.
The documentation comments are identified by the "/**"
opening comment mark.
See Documentation/doc-guide/kernel-doc.rst for the
documentation comment syntax.
"""
import argparse
import logging
import os
import sys
# Import Python modules
LIB_DIR = "lib/kdoc"
SRC_DIR = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, os.path.join(SRC_DIR, LIB_DIR))
from kdoc_files import KernelFiles # pylint: disable=C0413
from kdoc_output import RestFormat, ManFormat # pylint: disable=C0413
DESC = """
Read C language source or header FILEs, extract embedded documentation comments,
and print formatted documentation to standard output.
The documentation comments are identified by the "/**" opening comment mark.
See Documentation/doc-guide/kernel-doc.rst for the documentation comment syntax.
"""
EXPORT_FILE_DESC = """
Specify an additional FILE in which to look for EXPORT_SYMBOL information.
May be used multiple times.
"""
EXPORT_DESC = """
Only output documentation for the symbols that have been
exported using EXPORT_SYMBOL() and related macros in any input
FILE or -export-file FILE.
"""
INTERNAL_DESC = """
Only output documentation for the symbols that have NOT been
exported using EXPORT_SYMBOL() and related macros in any input
FILE or -export-file FILE.
"""
FUNCTION_DESC = """
Only output documentation for the given function or DOC: section
title. All other functions and DOC: sections are ignored.
May be used multiple times.
"""
NOSYMBOL_DESC = """
Exclude the specified symbol from the output documentation.
May be used multiple times.
"""
FILES_DESC = """
Header and C source files to be parsed.
"""
WARN_CONTENTS_BEFORE_SECTIONS_DESC = """
Warns if there are contents before sections (deprecated).
This option is kept just for backward compatibility, but it does nothing,
either here or in the original Perl script.
"""
class MsgFormatter(logging.Formatter):
"""Helper class to format warnings in a similar way to kernel-doc.pl"""
def format(self, record):
record.levelname = record.levelname.capitalize()
return logging.Formatter.format(self, record)
def main():
"""Main program"""
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description=DESC)
# Normal arguments
parser.add_argument("-v", "-verbose", "--verbose", action="store_true",
help="Verbose output, more warnings and other information.")
parser.add_argument("-d", "-debug", "--debug", action="store_true",
help="Enable debug messages")
parser.add_argument("-M", "-modulename", "--modulename",
default="Kernel API",
help="Allow setting a module name at the output.")
parser.add_argument("-l", "-enable-lineno", "--enable_lineno",
action="store_true",
help="Enable line number output (only in ReST mode)")
# Arguments to control the warning behavior
parser.add_argument("-Wreturn", "--wreturn", action="store_true",
help="Warns about the lack of a return markup on functions.")
parser.add_argument("-Wshort-desc", "-Wshort-description", "--wshort-desc",
action="store_true",
help="Warns if initial short description is missing")
parser.add_argument("-Wcontents-before-sections",
"--wcontents-before-sections", action="store_true",
help=WARN_CONTENTS_BEFORE_SECTIONS_DESC)
parser.add_argument("-Wall", "--wall", action="store_true",
help="Enable all types of warnings")
parser.add_argument("-Werror", "--werror", action="store_true",
help="Treat warnings as errors.")
parser.add_argument("-export-file", "--export-file", action='append',
help=EXPORT_FILE_DESC)
# Output format mutually-exclusive group
out_group = parser.add_argument_group("Output format selection (mutually exclusive)")
out_fmt = out_group.add_mutually_exclusive_group()
out_fmt.add_argument("-m", "-man", "--man", action="store_true",
help="Output troff manual page format.")
out_fmt.add_argument("-r", "-rst", "--rst", action="store_true",
help="Output reStructuredText format (default).")
out_fmt.add_argument("-N", "-none", "--none", action="store_true",
help="Do not output documentation, only warnings.")
# Output selection mutually-exclusive group
sel_group = parser.add_argument_group("Output selection (mutually exclusive)")
sel_mut = sel_group.add_mutually_exclusive_group()
sel_mut.add_argument("-e", "-export", "--export", action='store_true',
help=EXPORT_DESC)
sel_mut.add_argument("-i", "-internal", "--internal", action='store_true',
help=INTERNAL_DESC)
sel_mut.add_argument("-s", "-function", "--symbol", action='append',
help=FUNCTION_DESC)
# Those are valid for all 3 types of filter
parser.add_argument("-n", "-nosymbol", "--nosymbol", action='append',
help=NOSYMBOL_DESC)
parser.add_argument("-D", "-no-doc-sections", "--no-doc-sections",
action='store_true', help="Don't output DOC sections")
parser.add_argument("files", metavar="FILE",
nargs="+", help=FILES_DESC)
args = parser.parse_args()
if args.wall:
args.wreturn = True
args.wshort_desc = True
args.wcontents_before_sections = True
logger = logging.getLogger()
if not args.debug:
logger.setLevel(logging.INFO)
else:
logger.setLevel(logging.DEBUG)
formatter = MsgFormatter('%(levelname)s: %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
if args.man:
out_style = ManFormat(modulename=args.modulename)
elif args.none:
out_style = None
else:
out_style = RestFormat()
kfiles = KernelFiles(verbose=args.verbose,
out_style=out_style, werror=args.werror,
wreturn=args.wreturn, wshort_desc=args.wshort_desc,
wcontents_before_sections=args.wcontents_before_sections)
kfiles.parse(args.files, export_file=args.export_file)
for t in kfiles.msg(enable_lineno=args.enable_lineno, export=args.export,
internal=args.internal, symbol=args.symbol,
nosymbol=args.nosymbol, export_file=args.export_file,
no_doc_sections=args.no_doc_sections):
msg = t[1]
if msg:
print(msg)
error_count = kfiles.errors
if not error_count:
sys.exit(0)
if args.werror:
print(f"{error_count} warnings as errors")
sys.exit(error_count)
if args.verbose:
print(f"{error_count} errors")
if args.none:
sys.exit(0)
sys.exit(error_count)
# Call main method
if __name__ == "__main__":
main()

@@ -0,0 +1,291 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
#
# pylint: disable=R0903,R0913,R0914,R0917
"""
Parse kernel-doc tags on multiple kernel source files.
"""
import argparse
import logging
import os
import re
from kdoc_parser import KernelDoc
from kdoc_output import OutputFormat
class GlobSourceFiles:
"""
Parse C source code file names and directories via an iterator.
"""
def __init__(self, srctree=None, valid_extensions=None):
"""
Initialize valid extensions with a tuple.
If not defined, assume default C extensions (.c and .h)
It would be possible to use python's glob function, but it is
very slow, and it is not iterative: it would read all
directories before actually doing anything, so let's use our
own implementation.
"""
if not valid_extensions:
self.extensions = (".c", ".h")
else:
self.extensions = valid_extensions
self.srctree = srctree
def _parse_dir(self, dirname):
"""Internal function to parse files recursively"""
with os.scandir(dirname) as obj:
for entry in obj:
name = os.path.join(dirname, entry.name)
if entry.is_dir():
yield from self._parse_dir(name)
if not entry.is_file():
continue
basename = os.path.basename(name)
if not basename.endswith(self.extensions):
continue
yield name
def parse_files(self, file_list, file_not_found_cb):
"""
Define an iterator to parse all source files from file_list,
handling directories if any
"""
if not file_list:
return
for fname in file_list:
if self.srctree:
f = os.path.join(self.srctree, fname)
else:
f = fname
if os.path.isdir(f):
yield from self._parse_dir(f)
elif os.path.isfile(f):
yield f
elif file_not_found_cb:
file_not_found_cb(fname)
class KernelFiles():
"""
Parse kernel-doc tags on multiple kernel source files.
There are two types of parsers defined here:
- self.parse_file(): parses both kernel-doc markups and
EXPORT_SYMBOL* macros;
- self.process_export_file(): parses only EXPORT_SYMBOL* macros.
"""
def warning(self, msg):
"""Ancillary routine to output a warning and increment error count"""
self.config.log.warning(msg)
self.errors += 1
def error(self, msg):
"""Ancillary routine to output an error and increment error count"""
self.config.log.error(msg)
self.errors += 1
def parse_file(self, fname):
"""
Parse a single Kernel source.
"""
# Prevent parsing the same file twice if results are cached
if fname in self.files:
return
doc = KernelDoc(self.config, fname)
export_table, entries = doc.parse_kdoc()
self.export_table[fname] = export_table
self.files.add(fname)
self.export_files.add(fname) # parse_kdoc() already checks exports
self.results[fname] = entries
def process_export_file(self, fname):
"""
Parses EXPORT_SYMBOL* macros from a single Kernel source file.
"""
# Prevent parsing the same file twice if results are cached
if fname in self.export_files:
return
doc = KernelDoc(self.config, fname)
export_table = doc.parse_export()
if not export_table:
self.error(f"Error: Cannot check EXPORT_SYMBOL* on {fname}")
export_table = set()
self.export_table[fname] = export_table
self.export_files.add(fname)
def file_not_found_cb(self, fname):
"""
Callback to warn if a file was not found.
"""
self.error(f"Cannot find file {fname}")
def __init__(self, verbose=False, out_style=None,
werror=False, wreturn=False, wshort_desc=False,
wcontents_before_sections=False,
logger=None):
"""
Initialize startup variables and parse all files
"""
if not verbose:
verbose = bool(os.environ.get("KBUILD_VERBOSE", 0))
if out_style is None:
out_style = OutputFormat()
if not werror:
kcflags = os.environ.get("KCFLAGS", None)
if kcflags:
match = re.search(r"(\s|^)-Werror(\s|$)", kcflags)
if match:
werror = True
# reading this variable is for backwards compat just in case
# someone was calling it with the variable from outside the
# kernel's build system
kdoc_werror = os.environ.get("KDOC_WERROR", None)
if kdoc_werror:
werror = kdoc_werror
# Some variables are global to the parser logic as a whole as they are
# used to send control configuration to KernelDoc class. As such,
# those variables are read-only inside the KernelDoc.
self.config = argparse.Namespace()
self.config.verbose = verbose
self.config.werror = werror
self.config.wreturn = wreturn
self.config.wshort_desc = wshort_desc
self.config.wcontents_before_sections = wcontents_before_sections
if not logger:
self.config.log = logging.getLogger("kernel-doc")
else:
self.config.log = logger
self.config.warning = self.warning
self.config.src_tree = os.environ.get("SRCTREE", None)
# Initialize variables that are internal to KernelFiles
self.out_style = out_style
self.errors = 0
self.results = {}
self.files = set()
self.export_files = set()
self.export_table = {}
def parse(self, file_list, export_file=None):
"""
Parse all files
"""
glob = GlobSourceFiles(srctree=self.config.src_tree)
for fname in glob.parse_files(file_list, self.file_not_found_cb):
self.parse_file(fname)
for fname in glob.parse_files(export_file, self.file_not_found_cb):
self.process_export_file(fname)
def out_msg(self, fname, name, arg):
"""
Return output messages from a file name using the output style
filtering.
If the output type was not handled by the styler, return None.
"""
# NOTE: we can add rules here to filter out unwanted parts,
# although OutputFormat.msg already does that.
return self.out_style.msg(fname, name, arg)
def msg(self, enable_lineno=False, export=False, internal=False,
symbol=None, nosymbol=None, no_doc_sections=False,
filenames=None, export_file=None):
"""
Iterates over the kernel-doc results and output messages,
returning kernel-doc markups on each iteration
"""
self.out_style.set_config(self.config)
if not filenames:
filenames = sorted(self.results.keys())
glob = GlobSourceFiles(srctree=self.config.src_tree)
for fname in filenames:
function_table = set()
if internal or export:
if not export_file:
export_file = [fname]
for f in glob.parse_files(export_file, self.file_not_found_cb):
function_table |= self.export_table[f]
if symbol:
for s in symbol:
function_table.add(s)
self.out_style.set_filter(export, internal, symbol, nosymbol,
function_table, enable_lineno,
no_doc_sections)
msg = ""
if fname not in self.results:
self.config.log.warning("No kernel-doc for file %s", fname)
continue
for name, arg in self.results[fname]:
m = self.out_msg(fname, name, arg)
if m is None:
ln = arg.get("ln", 0)
dtype = arg.get('type', "")
self.config.log.warning("%s:%d Can't handle %s",
fname, ln, dtype)
else:
msg += m
if msg:
yield fname, msg

@@ -0,0 +1,793 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
#
# pylint: disable=C0301,R0902,R0911,R0912,R0913,R0914,R0915,R0917
"""
Implement output filters to print kernel-doc documentation.
The implementation uses a virtual base class (OutputFormat) which
contains dispatchers to virtual methods, and some code to filter
out output messages.
The actual implementation is done in a separate class for each type
of output. Currently, there are output classes for ReST and man/troff.
"""
import os
import re
from datetime import datetime
from kdoc_parser import KernelDoc, type_param
from kdoc_re import KernRe
function_pointer = KernRe(r"([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)", cache=False)
# match expressions used to find embedded type information
type_constant = KernRe(r"\b``([^\`]+)``\b", cache=False)
type_constant2 = KernRe(r"\%([-_*\w]+)", cache=False)
type_func = KernRe(r"(\w+)\(\)", cache=False)
type_param_ref = KernRe(r"([\!~\*]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)", cache=False)
# Special RST handling for func ptr params
type_fp_param = KernRe(r"\@(\w+)\(\)", cache=False)
# Special RST handling for structs with func ptr params
type_fp_param2 = KernRe(r"\@(\w+->\S+)\(\)", cache=False)
type_env = KernRe(r"(\$\w+)", cache=False)
type_enum = KernRe(r"\&(enum\s*([_\w]+))", cache=False)
type_struct = KernRe(r"\&(struct\s*([_\w]+))", cache=False)
type_typedef = KernRe(r"\&(typedef\s*([_\w]+))", cache=False)
type_union = KernRe(r"\&(union\s*([_\w]+))", cache=False)
type_member = KernRe(r"\&([_\w]+)(\.|->)([_\w]+)", cache=False)
type_fallback = KernRe(r"\&([_\w]+)", cache=False)
type_member_func = type_member + KernRe(r"\(\)", cache=False)
class OutputFormat:
"""
Base class for OutputFormat. If used as-is, it means that only
warnings will be displayed.
"""
# output mode.
OUTPUT_ALL = 0 # output all symbols and doc sections
OUTPUT_INCLUDE = 1 # output only specified symbols
OUTPUT_EXPORTED = 2 # output exported symbols
OUTPUT_INTERNAL = 3 # output non-exported symbols
# Virtual member to be overridden in the inherited classes
highlights = []
def __init__(self):
"""Declare internal vars and set mode to OUTPUT_ALL"""
self.out_mode = self.OUTPUT_ALL
self.enable_lineno = None
self.nosymbol = {}
self.symbol = None
self.function_table = None
self.config = None
self.no_doc_sections = False
self.data = ""
def set_config(self, config):
"""
Setup global config variables used by both parser and output.
"""
self.config = config
def set_filter(self, export, internal, symbol, nosymbol, function_table,
enable_lineno, no_doc_sections):
"""
Initialize filter variables according to the requested mode.
Only one choice is valid between export, internal and symbol.
The nosymbol filter can be used on all modes.
"""
self.enable_lineno = enable_lineno
self.no_doc_sections = no_doc_sections
self.function_table = function_table
if symbol:
self.out_mode = self.OUTPUT_INCLUDE
elif export:
self.out_mode = self.OUTPUT_EXPORTED
elif internal:
self.out_mode = self.OUTPUT_INTERNAL
else:
self.out_mode = self.OUTPUT_ALL
if nosymbol:
self.nosymbol = set(nosymbol)
def highlight_block(self, block):
"""
Apply the RST highlights to a sub-block of text.
"""
for r, sub in self.highlights:
block = r.sub(sub, block)
return block
def out_warnings(self, args):
"""
Output warnings for identifiers that will be displayed.
"""
warnings = args.get('warnings', [])
for log_msg in warnings:
self.config.warning(log_msg)
def check_doc(self, name, args):
"""Check if DOC should be output"""
if self.no_doc_sections:
return False
if name in self.nosymbol:
return False
if self.out_mode == self.OUTPUT_ALL:
self.out_warnings(args)
return True
if self.out_mode == self.OUTPUT_INCLUDE:
if name in self.function_table:
self.out_warnings(args)
return True
return False
def check_declaration(self, dtype, name, args):
"""
Checks if a declaration should be output or not based on the
filtering criteria.
"""
if name in self.nosymbol:
return False
if self.out_mode == self.OUTPUT_ALL:
self.out_warnings(args)
return True
if self.out_mode in [self.OUTPUT_INCLUDE, self.OUTPUT_EXPORTED]:
if name in self.function_table:
return True
if self.out_mode == self.OUTPUT_INTERNAL:
if dtype != "function":
self.out_warnings(args)
return True
if name not in self.function_table:
self.out_warnings(args)
return True
return False
def msg(self, fname, name, args):
"""
Handles a single entry from kernel-doc parser
"""
self.data = ""
dtype = args.get('type', "")
if dtype == "doc":
self.out_doc(fname, name, args)
return self.data
if not self.check_declaration(dtype, name, args):
return self.data
if dtype == "function":
self.out_function(fname, name, args)
return self.data
if dtype == "enum":
self.out_enum(fname, name, args)
return self.data
if dtype == "typedef":
self.out_typedef(fname, name, args)
return self.data
if dtype in ["struct", "union"]:
self.out_struct(fname, name, args)
return self.data
# Warn if some type requires an output logic
self.config.log.warning("doesn't know how to output '%s' block",
dtype)
return None
    # Virtual methods to be overridden by inherited classes
    # At the base class, those do nothing.

    def out_doc(self, fname, name, args):
        """Outputs a DOC block"""

    def out_function(self, fname, name, args):
        """Outputs a function"""

    def out_enum(self, fname, name, args):
        """Outputs an enum"""

    def out_typedef(self, fname, name, args):
        """Outputs a typedef"""

    def out_struct(self, fname, name, args):
        """Outputs a struct"""
class RestFormat(OutputFormat):
    """Consts and functions used by ReST output"""

    highlights = [
        (type_constant, r"``\1``"),
        (type_constant2, r"``\1``"),

        # Note: need to escape () to avoid func matching later
        (type_member_func, r":c:type:`\1\2\3\\(\\) <\1>`"),
        (type_member, r":c:type:`\1\2\3 <\1>`"),
        (type_fp_param, r"**\1\\(\\)**"),
        (type_fp_param2, r"**\1\\(\\)**"),
        (type_func, r"\1()"),
        (type_enum, r":c:type:`\1 <\2>`"),
        (type_struct, r":c:type:`\1 <\2>`"),
        (type_typedef, r":c:type:`\1 <\2>`"),
        (type_union, r":c:type:`\1 <\2>`"),

        # in rst this can refer to any type
        (type_fallback, r":c:type:`\1`"),
        (type_param_ref, r"**\1\2**")
    ]
    blankline = "\n"

    sphinx_literal = KernRe(r'^[^.].*::$', cache=False)
    sphinx_cblock = KernRe(r'^\.\.\ +code-block::', cache=False)

    def __init__(self):
        """
        Creates class variables.

        Not really mandatory, but it is a good coding style and makes
        pylint happy.
        """

        super().__init__()
        self.lineprefix = ""

    def print_lineno(self, ln):
        """Outputs a line number"""

        if self.enable_lineno and ln is not None:
            ln += 1
            self.data += f".. LINENO {ln}\n"

    def output_highlight(self, args):
        """
        Outputs a C symbol that may require being converted to ReST using
        the self.highlights variable
        """

        input_text = args
        output = ""
        in_literal = False
        litprefix = ""
        block = ""

        for line in input_text.strip("\n").split("\n"):

            # If we're in a literal block, see if we should drop out of it.
            # Otherwise, pass the line straight through unmunged.
            if in_literal:
                if line.strip():  # If the line is not blank
                    # If this is the first non-blank line in a literal block,
                    # figure out the proper indent.
                    if not litprefix:
                        r = KernRe(r'^(\s*)')
                        if r.match(line):
                            litprefix = '^' + r.group(1)
                        else:
                            litprefix = ""

                        output += line + "\n"
                    elif not KernRe(litprefix).match(line):
                        in_literal = False
                    else:
                        output += line + "\n"
                else:
                    output += line + "\n"

            # Not in a literal block (or just dropped out)
            if not in_literal:
                block += line + "\n"
                if self.sphinx_literal.match(line) or self.sphinx_cblock.match(line):
                    in_literal = True
                    litprefix = ""
                    output += self.highlight_block(block)
                    block = ""

        # Handle any remaining block
        if block:
            output += self.highlight_block(block)

        # Print the output with the line prefix
        for line in output.strip("\n").split("\n"):
            self.data += self.lineprefix + line + "\n"
    def out_section(self, args, out_docblock=False):
        """
        Outputs a block section.

        This could use some work; it's used to output the DOC: sections, and
        starts by putting out the name of the doc section itself, but that
        tends to duplicate a header already in the template file.
        """

        sectionlist = args.get('sectionlist', [])
        sections = args.get('sections', {})
        section_start_lines = args.get('section_start_lines', {})

        for section in sectionlist:
            # Skip sections that are in the nosymbol_table
            if section in self.nosymbol:
                continue

            if out_docblock:
                if not self.out_mode == self.OUTPUT_INCLUDE:
                    self.data += f".. _{section}:\n\n"

                self.data += f'{self.lineprefix}**{section}**\n\n'
            else:
                self.data += f'{self.lineprefix}**{section}**\n\n'

            self.print_lineno(section_start_lines.get(section, 0))
            self.output_highlight(sections[section])
            self.data += "\n"
        self.data += "\n"

    def out_doc(self, fname, name, args):
        if not self.check_doc(name, args):
            return
        self.out_section(args, out_docblock=True)

    def out_function(self, fname, name, args):
        oldprefix = self.lineprefix
        signature = ""

        func_macro = args.get('func_macro', False)
        if func_macro:
            signature = args['function']
        else:
            if args.get('functiontype'):
                signature = args['functiontype'] + " "
            signature += args['function'] + " ("

        parameterlist = args.get('parameterlist', [])
        parameterdescs = args.get('parameterdescs', {})
        parameterdesc_start_lines = args.get('parameterdesc_start_lines', {})

        ln = args.get('declaration_start_line', 0)

        count = 0
        for parameter in parameterlist:
            if count != 0:
                signature += ", "
            count += 1
            dtype = args['parametertypes'].get(parameter, "")

            if function_pointer.search(dtype):
                signature += function_pointer.group(1) + parameter + function_pointer.group(3)
            else:
                signature += dtype

        if not func_macro:
            signature += ")"

        self.print_lineno(ln)
        if args.get('typedef') or not args.get('functiontype'):
            self.data += f".. c:macro:: {args['function']}\n\n"

            if args.get('typedef'):
                self.data += " **Typedef**: "
                self.lineprefix = ""
                self.output_highlight(args.get('purpose', ""))
                self.data += "\n\n**Syntax**\n\n"
                self.data += f" ``{signature}``\n\n"
            else:
                self.data += f"``{signature}``\n\n"
        else:
            self.data += f".. c:function:: {signature}\n\n"

        if not args.get('typedef'):
            self.print_lineno(ln)
            self.lineprefix = " "
            self.output_highlight(args.get('purpose', ""))
            self.data += "\n"

        # Put descriptive text into a container (HTML <div>) to help set
        # function prototypes apart
        self.lineprefix = " "

        if parameterlist:
            self.data += ".. container:: kernelindent\n\n"
            self.data += f"{self.lineprefix}**Parameters**\n\n"

        for parameter in parameterlist:
            parameter_name = KernRe(r'\[.*').sub('', parameter)
            dtype = args['parametertypes'].get(parameter, "")

            if dtype:
                self.data += f"{self.lineprefix}``{dtype}``\n"
            else:
                self.data += f"{self.lineprefix}``{parameter}``\n"

            self.print_lineno(parameterdesc_start_lines.get(parameter_name, 0))

            self.lineprefix = " "
            if parameter_name in parameterdescs and \
               parameterdescs[parameter_name] != KernelDoc.undescribed:

                self.output_highlight(parameterdescs[parameter_name])
                self.data += "\n"
            else:
                self.data += f"{self.lineprefix}*undescribed*\n\n"
            self.lineprefix = " "

        self.out_section(args)
        self.lineprefix = oldprefix

    def out_enum(self, fname, name, args):
        oldprefix = self.lineprefix
        name = args.get('enum', '')
        parameterlist = args.get('parameterlist', [])
        parameterdescs = args.get('parameterdescs', {})
        ln = args.get('declaration_start_line', 0)

        self.data += f"\n\n.. c:enum:: {name}\n\n"

        self.print_lineno(ln)
        self.lineprefix = " "
        self.output_highlight(args.get('purpose', ''))
        self.data += "\n"

        self.data += ".. container:: kernelindent\n\n"
        outer = self.lineprefix + " "
        self.lineprefix = outer + " "
        self.data += f"{outer}**Constants**\n\n"

        for parameter in parameterlist:
            self.data += f"{outer}``{parameter}``\n"

            if parameterdescs.get(parameter, '') != KernelDoc.undescribed:
                self.output_highlight(parameterdescs[parameter])
            else:
                self.data += f"{self.lineprefix}*undescribed*\n\n"
            self.data += "\n"

        self.lineprefix = oldprefix
        self.out_section(args)

    def out_typedef(self, fname, name, args):
        oldprefix = self.lineprefix
        name = args.get('typedef', '')
        ln = args.get('declaration_start_line', 0)

        self.data += f"\n\n.. c:type:: {name}\n\n"
        self.print_lineno(ln)
        self.lineprefix = " "
        self.output_highlight(args.get('purpose', ''))
        self.data += "\n"

        self.lineprefix = oldprefix
        self.out_section(args)

    def out_struct(self, fname, name, args):
        name = args.get('struct', "")
        purpose = args.get('purpose', "")
        declaration = args.get('definition', "")
        dtype = args.get('type', "struct")
        ln = args.get('declaration_start_line', 0)

        parameterlist = args.get('parameterlist', [])
        parameterdescs = args.get('parameterdescs', {})
        parameterdesc_start_lines = args.get('parameterdesc_start_lines', {})

        self.data += f"\n\n.. c:{dtype}:: {name}\n\n"

        self.print_lineno(ln)

        oldprefix = self.lineprefix
        self.lineprefix += " "

        self.output_highlight(purpose)
        self.data += "\n"

        self.data += ".. container:: kernelindent\n\n"
        self.data += f"{self.lineprefix}**Definition**::\n\n"

        self.lineprefix = self.lineprefix + " "

        declaration = declaration.replace("\t", self.lineprefix)

        self.data += f"{self.lineprefix}{dtype} {name}" + ' {' + "\n"
        self.data += f"{declaration}{self.lineprefix}" + "};\n\n"

        self.lineprefix = " "
        self.data += f"{self.lineprefix}**Members**\n\n"
        for parameter in parameterlist:
            if not parameter or parameter.startswith("#"):
                continue

            parameter_name = parameter.split("[", maxsplit=1)[0]

            if parameterdescs.get(parameter_name) == KernelDoc.undescribed:
                continue

            self.print_lineno(parameterdesc_start_lines.get(parameter_name, 0))

            self.data += f"{self.lineprefix}``{parameter}``\n"

            self.lineprefix = " "
            self.output_highlight(parameterdescs[parameter_name])
            self.lineprefix = " "

            self.data += "\n"

        self.data += "\n"
        self.lineprefix = oldprefix
        self.out_section(args)
class ManFormat(OutputFormat):
    """Consts and functions used by man pages output"""

    highlights = (
        (type_constant, r"\1"),
        (type_constant2, r"\1"),
        (type_func, r"\\fB\1\\fP"),
        (type_enum, r"\\fI\1\\fP"),
        (type_struct, r"\\fI\1\\fP"),
        (type_typedef, r"\\fI\1\\fP"),
        (type_union, r"\\fI\1\\fP"),
        (type_param, r"\\fI\1\\fP"),
        (type_param_ref, r"\\fI\1\2\\fP"),
        (type_member, r"\\fI\1\2\3\\fP"),
        (type_fallback, r"\\fI\1\\fP")
    )
    blankline = ""

    date_formats = [
        "%a %b %d %H:%M:%S %Z %Y",
        "%a %b %d %H:%M:%S %Y",
        "%Y-%m-%d",
        "%b %d %Y",
        "%B %d %Y",
        "%m %d %Y",
    ]

    def __init__(self, modulename):
        """
        Creates class variables.

        Not really mandatory, but it is a good coding style and makes
        pylint happy.
        """

        super().__init__()
        self.modulename = modulename

        dt = None
        tstamp = os.environ.get("KBUILD_BUILD_TIMESTAMP")
        if tstamp:
            for fmt in self.date_formats:
                try:
                    dt = datetime.strptime(tstamp, fmt)
                    break
                except ValueError:
                    pass

        if not dt:
            dt = datetime.now()

        self.man_date = dt.strftime("%B %Y")
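The fallback chain over `date_formats` can be exercised on its own. This sketch mirrors the loop above (the format list is copied from the code; the sample timestamp and the helper name are illustrative):

```python
from datetime import datetime

# Same candidate formats the ManFormat constructor tries in order
date_formats = [
    "%a %b %d %H:%M:%S %Z %Y",
    "%a %b %d %H:%M:%S %Y",
    "%Y-%m-%d",
    "%b %d %Y",
    "%B %d %Y",
    "%m %d %Y",
]

def parse_build_timestamp(tstamp):
    """Try each known format in turn; fall back to the current time."""
    for fmt in date_formats:
        try:
            return datetime.strptime(tstamp, fmt)
        except ValueError:
            pass
    return datetime.now()

dt = parse_build_timestamp("2025-05-26")   # ISO form matches "%Y-%m-%d"
print(dt.strftime("%B %Y"))                # -> May 2025
```

Honoring `KBUILD_BUILD_TIMESTAMP` this way keeps man-page dates reproducible across builds.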
    def output_highlight(self, block):
        """
        Outputs a C symbol that may require being highlighted with the
        self.highlights variable, using troff syntax
        """

        contents = self.highlight_block(block)

        if isinstance(contents, list):
            contents = "\n".join(contents)

        for line in contents.strip("\n").split("\n"):
            line = KernRe(r"^\s*").sub("", line)
            if not line:
                continue

            if line[0] == ".":
                self.data += "\\&" + line + "\n"
            else:
                self.data += line + "\n"
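The leading-dot escape matters because troff treats any line starting with `.` as a macro request. A tiny standalone sketch of the same rule (the function name is illustrative):

```python
import re

def emit_man_lines(text):
    """Strip leading whitespace and guard lines that would otherwise be
    interpreted as troff requests (those starting with a dot)."""
    out = []
    for line in text.strip("\n").split("\n"):
        line = re.sub(r"^\s*", "", line)
        if not line:
            continue
        # \& is a troff zero-width character: the '.' loses its magic
        out.append("\\&" + line if line[0] == "." else line)
    return "\n".join(out)

print(emit_man_lines("  .config is parsed first\n  then the rest\n"))
```

Without the `\&` guard, a description mentioning a dotfile such as `.config` at the start of a line would silently disappear from the rendered man page.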
    def out_doc(self, fname, name, args):
        sectionlist = args.get('sectionlist', [])
        sections = args.get('sections', {})

        if not self.check_doc(name, args):
            return

        self.data += f'.TH "{self.modulename}" 9 "{self.modulename}" "{self.man_date}" "API Manual" LINUX' + "\n"

        for section in sectionlist:
            self.data += f'.SH "{section}"' + "\n"
            self.output_highlight(sections.get(section))

    def out_function(self, fname, name, args):
        """output function in man"""

        parameterlist = args.get('parameterlist', [])
        parameterdescs = args.get('parameterdescs', {})
        sectionlist = args.get('sectionlist', [])
        sections = args.get('sections', {})

        self.data += f'.TH "{args["function"]}" 9 "{args["function"]}" "{self.man_date}" "Kernel Hacker\'s Manual" LINUX' + "\n"

        self.data += ".SH NAME\n"
        self.data += f"{args['function']} \\- {args['purpose']}\n"

        self.data += ".SH SYNOPSIS\n"
        if args.get('functiontype', ''):
            self.data += f'.B "{args["functiontype"]}" {args["function"]}' + "\n"
        else:
            self.data += f'.B "{args["function"]}' + "\n"

        count = 0
        parenth = "("
        post = ","

        for parameter in parameterlist:
            if count == len(parameterlist) - 1:
                post = ");"

            dtype = args['parametertypes'].get(parameter, "")
            if function_pointer.match(dtype):
                # Pointer-to-function
                self.data += f'.BI "{parenth}{function_pointer.group(1)}" " ") ({function_pointer.group(2)}){post}"' + "\n"
            else:
                dtype = KernRe(r'([^\*])$').sub(r'\1 ', dtype)

                self.data += f'.BI "{parenth}{dtype}" "{post}"' + "\n"
            count += 1
            parenth = ""

        if parameterlist:
            self.data += ".SH ARGUMENTS\n"

        for parameter in parameterlist:
            parameter_name = re.sub(r'\[.*', '', parameter)

            self.data += f'.IP "{parameter}" 12' + "\n"
            self.output_highlight(parameterdescs.get(parameter_name, ""))

        for section in sectionlist:
            self.data += f'.SH "{section.upper()}"' + "\n"
            self.output_highlight(sections[section])

    def out_enum(self, fname, name, args):
        name = args.get('enum', '')
        parameterlist = args.get('parameterlist', [])
        sectionlist = args.get('sectionlist', [])
        sections = args.get('sections', {})

        self.data += f'.TH "{self.modulename}" 9 "enum {args["enum"]}" "{self.man_date}" "API Manual" LINUX' + "\n"

        self.data += ".SH NAME\n"
        self.data += f"enum {args['enum']} \\- {args['purpose']}\n"

        self.data += ".SH SYNOPSIS\n"
        self.data += f"enum {args['enum']}" + " {\n"

        count = 0
        for parameter in parameterlist:
            self.data += f'.br\n.BI " {parameter}"' + "\n"
            if count == len(parameterlist) - 1:
                self.data += "\n};\n"
            else:
                self.data += ", \n.br\n"

            count += 1

        self.data += ".SH Constants\n"
        for parameter in parameterlist:
            parameter_name = KernRe(r'\[.*').sub('', parameter)

            self.data += f'.IP "{parameter}" 12' + "\n"
            self.output_highlight(args['parameterdescs'].get(parameter_name, ""))

        for section in sectionlist:
            self.data += f'.SH "{section}"' + "\n"
            self.output_highlight(sections[section])

    def out_typedef(self, fname, name, args):
        module = self.modulename
        typedef = args.get('typedef')
        purpose = args.get('purpose')
        sectionlist = args.get('sectionlist', [])
        sections = args.get('sections', {})

        self.data += f'.TH "{module}" 9 "{typedef}" "{self.man_date}" "API Manual" LINUX' + "\n"

        self.data += ".SH NAME\n"
        self.data += f"typedef {typedef} \\- {purpose}\n"

        for section in sectionlist:
            self.data += f'.SH "{section}"' + "\n"
            self.output_highlight(sections.get(section))

    def out_struct(self, fname, name, args):
        module = self.modulename
        struct_type = args.get('type')
        struct_name = args.get('struct')
        purpose = args.get('purpose')
        definition = args.get('definition')

        sectionlist = args.get('sectionlist', [])
        parameterlist = args.get('parameterlist', [])
        sections = args.get('sections', {})
        parameterdescs = args.get('parameterdescs', {})

        self.data += f'.TH "{module}" 9 "{struct_type} {struct_name}" "{self.man_date}" "API Manual" LINUX' + "\n"

        self.data += ".SH NAME\n"
        self.data += f"{struct_type} {struct_name} \\- {purpose}\n"

        # Replace tabs with two spaces and handle newlines
        declaration = definition.replace("\t", "  ")
        declaration = KernRe(r"\n").sub('"\n.br\n.BI "', declaration)

        self.data += ".SH SYNOPSIS\n"
        self.data += f"{struct_type} {struct_name} " + "{" + "\n.br\n"
        self.data += f'.BI "{declaration}\n' + "};\n.br\n\n"

        self.data += ".SH Members\n"
        for parameter in parameterlist:
            if parameter.startswith("#"):
                continue

            parameter_name = re.sub(r"\[.*", "", parameter)

            if parameterdescs.get(parameter_name) == KernelDoc.undescribed:
                continue

            self.data += f'.IP "{parameter}" 12' + "\n"
            self.output_highlight(parameterdescs.get(parameter_name))

        for section in sectionlist:
            self.data += f'.SH "{section}"' + "\n"
            self.output_highlight(sections.get(section))

[One large file diff suppressed.]

scripts/lib/kdoc/kdoc_re.py (new file, 273 lines):

#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.

"""
Regular expression ancillary classes.

Those help caching regular expressions and do matching for kernel-doc.
"""

import re

# Local cache for regular expressions
re_cache = {}


class KernRe:
    """
    Helper class to simplify regex declaration and usage. It calls
    re.compile for a given pattern. It also allows adding regular
    expressions and defining sub at class init time.

    Regular expressions can be cached via an argument, helping to speed up
    searches.
    """

    def _add_regex(self, string, flags):
        """
        Adds a new regex or re-uses it from the cache.
        """

        if string in re_cache:
            self.regex = re_cache[string]
        else:
            self.regex = re.compile(string, flags=flags)

            if self.cache:
                re_cache[string] = self.regex

    def __init__(self, string, cache=True, flags=0):
        """
        Compile a regular expression and initialize internal vars.
        """

        self.cache = cache
        self.last_match = None

        self._add_regex(string, flags)

    def __str__(self):
        """
        Return the regular expression pattern.
        """
        return self.regex.pattern

    def __add__(self, other):
        """
        Allows adding two regular expressions into one.
        """

        return KernRe(str(self) + str(other), cache=self.cache or other.cache,
                      flags=self.regex.flags | other.regex.flags)

    def match(self, string):
        """
        Handles a re.match, storing its results
        """

        self.last_match = self.regex.match(string)
        return self.last_match

    def search(self, string):
        """
        Handles a re.search, storing its results
        """

        self.last_match = self.regex.search(string)
        return self.last_match

    def findall(self, string):
        """
        Alias to re.findall
        """

        return self.regex.findall(string)

    def split(self, string):
        """
        Alias to re.split
        """

        return self.regex.split(string)

    def sub(self, sub, string, count=0):
        """
        Alias to re.sub
        """

        return self.regex.sub(sub, string, count=count)

    def group(self, num):
        """
        Returns the group results of the last match
        """

        return self.last_match.group(num)
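The caching scheme KernRe wraps can be sketched with plain `re` (the helper name and cache variable here are illustrative, not part of the kernel sources):

```python
import re

# Module-level cache: (pattern, flags) -> compiled regex (illustrative)
_re_cache = {}

def cached_compile(pattern, flags=0):
    """Compile `pattern` once and reuse the compiled object afterwards."""
    key = (pattern, flags)
    if key not in _re_cache:
        _re_cache[key] = re.compile(pattern, flags)
    return _re_cache[key]

r1 = cached_compile(r'\bstruct\s+(\w+)')
r2 = cached_compile(r'\bstruct\s+(\w+)')
assert r1 is r2    # same compiled object: no recompilation on reuse
assert r1.search("a struct foo;").group(1) == "foo"
```

kernel-doc builds and applies the same patterns for every documented symbol, so reusing compiled objects across calls is a cheap, meaningful speedup.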
class NestedMatch:
    """
    Finding nested delimiters is hard with regular expressions. It is
    even harder on Python with its normal re module, as there are several
    advanced regular expressions that are missing.

    This is the case of this pattern:

        '\\bSTRUCT_GROUP(\\(((?:(?>[^)(]+)|(?1))*)\\))[^;]*;'

    which is used to properly match open/close parentheses of the
    string search STRUCT_GROUP().

    Add a class that counts pairs of delimiters, using it to match and
    replace nested expressions.

    The original approach was suggested by:
        https://stackoverflow.com/questions/5454322/python-how-to-match-nested-parentheses-with-regex

    Although I re-implemented it to make it more generic and match three
    types of delimiters. The logic checks if delimiters are paired. If
    not, it will ignore the search string.
    """

    # TODO: make NestedMatch handle multiple match groups
    #
    # Right now, regular expressions to match it are defined only up to
    # the start delimiter, e.g.:
    #
    #     \bSTRUCT_GROUP\(
    #
    # is similar to: STRUCT_GROUP\((.*)\)
    # except that the content inside the match group is delimiter-aligned.
    #
    # The content inside parentheses is converted into a single replace
    # group (e.g. r'\1').
    #
    # It would be nice to change such a definition to support multiple
    # match groups, allowing a regex equivalent to:
    #
    #     FOO\((.*), (.*), (.*)\)
    #
    # It is probably easier to define it not as a regular expression, but
    # with some lexical definition like:
    #
    #     FOO(arg1, arg2, arg3)

    DELIMITER_PAIRS = {
        '{': '}',
        '(': ')',
        '[': ']',
    }

    RE_DELIM = re.compile(r'[\{\}\[\]\(\)]')
    def _search(self, regex, line):
        """
        Finds paired blocks for a regex that ends with a delimiter.

        The suggestion of using finditer to match pairs came from:
            https://stackoverflow.com/questions/5454322/python-how-to-match-nested-parentheses-with-regex
        but I ended up using a different implementation to align all three
        types of delimiters and seek for an initial regular expression.

        The algorithm seeks open/close paired delimiters and places them
        into a stack, yielding a start/stop position of each match when
        the stack is zeroed.

        The algorithm should work fine for properly paired lines, but will
        silently ignore end delimiters that precede a start delimiter.
        This should be OK for the kernel-doc parser, as unaligned
        delimiters would cause compilation errors. So, we don't need to
        raise exceptions to cover such issues.
        """

        stack = []

        for match_re in regex.finditer(line):
            start = match_re.start()
            offset = match_re.end()

            d = line[offset - 1]
            if d not in self.DELIMITER_PAIRS:
                continue

            end = self.DELIMITER_PAIRS[d]
            stack.append(end)

            for match in self.RE_DELIM.finditer(line[offset:]):
                pos = match.start() + offset

                d = line[pos]
                if d in self.DELIMITER_PAIRS:
                    end = self.DELIMITER_PAIRS[d]

                    stack.append(end)
                    continue

                # Does the end delimiter match what is expected?
                if stack and d == stack[-1]:
                    stack.pop()

                    if not stack:
                        yield start, offset, pos + 1
                        break
    def search(self, regex, line):
        """
        This is similar to re.search:

        It matches a regex that is followed by a delimiter, returning
        occurrences only if all delimiters are paired.
        """

        for t in self._search(regex, line):
            yield line[t[0]:t[2]]

    def sub(self, regex, sub, line, count=0):
        """
        This is similar to re.sub:

        It matches a regex that is followed by a delimiter, replacing
        occurrences only if all delimiters are paired.

        If r'\1' is used, it works just like re: it places there the
        matched paired data with the delimiters stripped.

        If count is different than zero, it will replace at most count
        items.
        """
        out = ""

        cur_pos = 0
        n = 0

        for start, end, pos in self._search(regex, line):
            out += line[cur_pos:start]

            # Value, ignoring start/end delimiters
            value = line[end:pos - 1]

            # Replace \1 in the sub string, if \1 is used there
            new_sub = sub
            new_sub = new_sub.replace(r'\1', value)

            out += new_sub

            # Drop end ';' if any
            if line[pos] == ';':
                pos += 1

            cur_pos = pos
            n += 1

            if count and n >= count:
                break

        # Append the remaining string
        out += line[cur_pos:]

        return out
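The stack-based pairing that NestedMatch implements can be shown in a minimal standalone form. This sketch (function name and inputs are illustrative, not the kernel's API) finds the first `NAME(...)`-style span whose delimiters are properly nested:

```python
import re

def find_paired_span(opener_re, text):
    """Return (start, end) of the first match of `opener_re` extended to
    its balanced closing delimiter, or None. The regex must end at an
    opening delimiter, as in the NestedMatch convention."""
    pairs = {'(': ')', '[': ']', '{': '}'}
    m = re.search(opener_re, text)
    if not m:
        return None
    # The last matched character is the opening delimiter; push its closer
    stack = [pairs[text[m.end() - 1]]]
    for i in range(m.end(), len(text)):
        c = text[i]
        if c in pairs:
            stack.append(pairs[c])       # nested opener
        elif stack and c == stack[-1]:
            stack.pop()                  # matching closer
            if not stack:
                return m.start(), i + 1  # outermost pair closed
    return None  # unbalanced input is silently ignored

text = "int x; STRUCT_GROUP((a, b), c); int y;"
span = find_paired_span(r'\bSTRUCT_GROUP\(', text)
print(text[span[0]:span[1]])   # -> STRUCT_GROUP((a, b), c)
```

A greedy regex like `STRUCT_GROUP\((.*)\)` would instead run to the last `)` on the line, which is exactly the failure mode the stack avoids.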

Documentation excerpt (RTLA README):

RTLA depends on the following libraries and tools:
 - libtraceevent
 - libcpupower (optional, for --deepest-idle-state)

For BPF sample collection support, the following extra dependencies are
required:

 - libbpf 1.0.0 or later
 - bpftool with skeleton support
 - clang with BPF CO-RE support

It also depends on python3-docutils to compile man pages.

For development, we suggest the following steps for compiling rtla: