commit 2ca0b54d (23-Mar-2026)
Author: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

    docs: c_lex.py: store logger on its data

    By having the logger stored there, any code using CTokenizer can
    log messages there.

    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Jonathan Corbet <corbet@lwn.net>
    Message-ID: <467979dc18149e4b2a7113c178e0cb07919632f2.1774256269.git.mchehab+huawei@kernel.org>
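The pattern this commit describes can be sketched as follows. This is a minimal illustration, not the real c_lex.py code: the class name `CTokenizer` comes from the log above, while the `warn()` helper and constructor signature are assumptions made for the example.

```python
import logging


class CTokenizer:
    """Sketch of a tokenizer that keeps its logger on its own data."""

    def __init__(self, logger=None):
        # Store the logger on the instance so that any code holding a
        # CTokenizer can emit messages through the same channel.
        self.logger = logger or logging.getLogger(__name__)

    def warn(self, msg):
        # Hypothetical helper: callers report tokenizer issues here
        # and the message goes wherever the stored logger points.
        self.logger.warning(msg)
```

With the logger kept as instance data, a consumer only has to replace `tok.logger` to redirect every message the tokenizer (and code using it) produces.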
commit 8c0b7c0d (18-Mar-2026)
Author: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

    docs: kdoc: add c_lex to generated documentation

    Fix the groups() description so that it can be parsed by Sphinx,
    and add c_lex to the documentation.

    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Jonathan Corbet <corbet@lwn.net>
    Message-ID: <799178cf30dd4022fdb1d029ba998a458e037b52.1773823995.git.mchehab+huawei@kernel.org>
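A docstring that Sphinx autodoc parses cleanly typically uses field lists rather than free-form layout. The sketch below is illustrative only; it shows the shape of a Sphinx-friendly `groups()` description, not the actual c_lex docstring, and the `CMatch` wrapper here is a stand-in.

```python
import re


class CMatch:
    """Illustrative wrapper around a regex match object."""

    def __init__(self, match):
        self._match = match

    def groups(self):
        """Return all captured subgroups of the match.

        :return: a tuple with one item per capture group; groups that
                 did not participate in the match are ``None``.
        :rtype: tuple
        """
        return self._match.groups()
```

Field lists such as `:return:` and `:rtype:` render as structured output in the generated documentation, whereas ad-hoc formatting often triggers Sphinx parse warnings.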
commit 024e200e (17-Mar-2026)
Author: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

    docs: c_lex: setup a logger to report tokenizer issues

    Report the file that has issues detected via CMatch and CTokenizer.
    This is done by setting up a logger that will be overridden by
    kdoc_parser when used from it.

    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
    Signed-off-by: Jonathan Corbet <corbet@lwn.net>
    Message-ID: <903ad83ae176196a50444e66177a4f5bcdef5199.1773770483.git.mchehab+huawei@kernel.org>
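The "default logger, overridden by the consumer" arrangement can be sketched like this. The logger names and the `logger_override` parameter are assumptions for the example, not the real c_lex or kdoc_parser API.

```python
import logging

# Library-side default: a module logger used when the caller does not
# supply one (the name "c_lex" here is illustrative).
logger = logging.getLogger("c_lex")


class CTokenizer:
    def __init__(self, logger_override=None):
        # Fall back to the module default unless the consumer
        # (e.g. a parser) installs its own logger.
        self.logger = logger_override or logger


# Consumer-side override, in the spirit of kdoc_parser: route
# tokenizer messages through the parser's own logger so reports
# carry the parser's context (such as the file being processed).
parser_logger = logging.getLogger("kdoc_parser")
tok = CTokenizer(logger_override=parser_logger)
```

The benefit is that standalone use of the tokenizer still logs somewhere sensible, while an embedding tool can attach file names and formatting of its own choosing.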
commit 9aaeb817 (17-Mar-2026)
Author: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

    docs: c_lex: properly implement a sub() method for CMatch

    Implement a sub() method to do what is expected, parsing backref
    arguments like \0, \1, \2, ...

    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Jonathan Corbet <corbet@lwn.net>
    Message-ID: <dbc45b86db18783289d94cfdbba4b72792c47929.1773770483.git.mchehab+huawei@kernel.org>
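A `sub()` that expands `\0`, `\1`, `\2`, ... itself might look like the sketch below. This is a guess at the technique, not the actual CMatch code; note that plain `re` templates spell the whole-match backreference `\g<0>`, so supporting a literal `\0` requires doing the expansion by hand.

```python
import re


class CMatch:
    """Sketch of a match object whose sub() expands \\N backrefs."""

    def __init__(self, match):
        self._match = match

    def sub(self, template):
        # Replace every \N in the template with the text captured by
        # group N; \0 stands for the whole match.
        def expand(m):
            return self._match.group(int(m.group(1)))
        return re.sub(r"\\(\d+)", expand, template)
```

For example, with a match of `(\w+)\s+(\w+)` against `"int foo"`, `sub(r"\2 of type \1")` yields `"foo of type int"`, and `sub(r"\0")` reproduces the whole matched text.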
commit f1cf9f7c (17-Mar-2026)
Author: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

    docs: kdoc: create a CMatch to match nested C blocks

    The NestedMatch code is complex, and will become even more complex
    if we add support for arguments there.
    Now that we have a tokenizer, we can use a better solution that is
    easier to understand.
    Yet, to improve performance, it is better to make it use previously
    tokenized code, changing its API.
    So, reimplement NestedMatch using the CTokenizer class. Once that is
    done, we can drop NestedMatch.

    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Jonathan Corbet <corbet@lwn.net>
    Message-ID: <fa818ea164216b17520b588e3f12b81499b76dd7.1773770483.git.mchehab+huawei@kernel.org>
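The reason matching nested blocks over tokens is simpler than doing it with regexes can be shown with a small sketch. This is not the real CMatch implementation; the `match_nested` helper and its token-list representation are invented for illustration. Once the code is already tokenized, finding the end of a nested `{...}` block is just a depth counter.

```python
def match_nested(tokens, start, open_tok="{", close_tok="}"):
    """Return the index just past the block opening at tokens[start].

    Regexes cannot count arbitrary nesting; over a token stream the
    same job is a trivial balance counter.
    """
    assert tokens[start] == open_tok
    depth = 0
    for i in range(start, len(tokens)):
        if tokens[i] == open_tok:
            depth += 1
        elif tokens[i] == close_tok:
            depth -= 1
            if depth == 0:
                return i + 1
    raise ValueError("unbalanced block")
```

Reusing an already-tokenized stream also explains the performance note in the commit: the tokenization cost is paid once, and every nested-block query after that is a linear scan over tokens.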
commit df50e848 (17-Mar-2026)
Author: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

    docs: add a C tokenizer to be used by kernel-doc

    Handling C code purely using regular expressions doesn't work well.
    Add a C tokenizer to help doing it the right way.
    The tokenizer was written using the tokenizer example from the
    Python re documentation as a basis:
    https://docs.python.org/3/library/re.html#writing-a-tokenizer

    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Jonathan Corbet <corbet@lwn.net>
    Message-ID: <39787bb8022e10c65df40c746077f7f66d07ffed.1773770483.git.mchehab+huawei@kernel.org>
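The technique from the linked `re` documentation example is a single alternation of named groups, dispatched via `lastgroup`. The sketch below follows that pattern with a deliberately tiny token table; the real c_lex table is certainly larger, and these token kind names are assumptions.

```python
import re
from typing import NamedTuple


class Token(NamedTuple):
    kind: str
    value: str


# Minimal spec in the spirit of the re docs' "writing a tokenizer"
# example: one named group per token kind, tried in order.
TOKEN_SPEC = [
    ("COMMENT", r"/\*.*?\*/"),       # C block comment
    ("IDENT",   r"[A-Za-z_]\w*"),    # identifiers and keywords
    ("NUMBER",  r"\d+"),             # integer literals
    ("PUNCT",   r"[{}()\[\];,*&=]"), # single-char punctuation
    ("SKIP",    r"\s+"),             # whitespace, discarded
    ("MISMATCH", r"."),              # anything else, kept for reporting
]
TOKEN_RE = re.compile(
    "|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC), re.S)


def tokenize(code):
    """Yield Token(kind, value) pairs; m.lastgroup names the kind."""
    for m in TOKEN_RE.finditer(code):
        if m.lastgroup != "SKIP":
            yield Token(m.lastgroup, m.group())
```

Because each alternative is a named group, the matcher itself decides the token kind, and adding a new kind is one more `(name, pattern)` row in the table.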