
mkdocstrings_handlers.python special ¤

This package implements a handler for the Python language.

Modules¤

debug ¤

Debugging utilities.

Classes¤

Environment dataclass ¤

Dataclass to store environment information.

Source code in python/debug.py
@dataclass
class Environment:
    """Dataclass to store environment information."""

    interpreter_name: str
    """Python interpreter name."""
    interpreter_version: str
    """Python interpreter version."""
    platform: str
    """Operating System."""
    packages: List[Package]
    """Installed packages."""
    variables: List[Variable]
    """Environment variables."""
Attributes¤
interpreter_name: str dataclass-field ¤

Python interpreter name.

interpreter_version: str dataclass-field ¤

Python interpreter version.

packages: List[mkdocstrings_handlers.python.debug.Package] dataclass-field ¤

Installed packages.

platform: str dataclass-field ¤

Operating System.

variables: List[mkdocstrings_handlers.python.debug.Variable] dataclass-field ¤

Environment variables.

Package dataclass ¤

Dataclass describing a Python package.

Source code in python/debug.py
@dataclass
class Package:
    """Dataclass describing a Python package."""

    name: str
    """Package name."""
    version: str
    """Package version."""
Attributes¤
name: str dataclass-field ¤

Package name.

version: str dataclass-field ¤

Package version.

Variable dataclass ¤

Dataclass describing an environment variable.

Source code in python/debug.py
@dataclass
class Variable:
    """Dataclass describing an environment variable."""

    name: str
    """Variable name."""
    value: str
    """Variable value."""
Attributes¤
name: str dataclass-field ¤

Variable name.

value: str dataclass-field ¤

Variable value.

Functions¤

get_debug_info() -> Environment ¤

Get debug/environment information.

Returns:

  • Environment: Environment information.

Source code in python/debug.py
def get_debug_info() -> Environment:
    """Get debug/environment information.

    Returns:
        Environment information.
    """
    py_name, py_version = _interpreter_name_version()
    packages = ["mkdocstrings-python-legacy"]
    variables = ["PYTHONPATH", *[var for var in os.environ if var.startswith("MKDOCSTRINGS_PYTHON_LEGACY")]]
    return Environment(
        interpreter_name=py_name,
        interpreter_version=py_version,
        platform=platform.platform(),
        variables=[Variable(var, val) for var in variables if (val := os.getenv(var))],
        packages=[Package(pkg, get_version(pkg)) for pkg in packages],
    )
get_version(dist: str = 'mkdocstrings-python-legacy') -> str ¤

Get version of the given distribution.

Parameters:

  • dist (str, default 'mkdocstrings-python-legacy'): A distribution name.

Returns:

  • str: A version number.

Source code in python/debug.py
def get_version(dist: str = "mkdocstrings-python-legacy") -> str:
    """Get version of the given distribution.

    Parameters:
        dist: A distribution name.

    Returns:
        A version number.
    """
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return "0.0.0"
print_debug_info() -> None ¤

Print debug/environment information.

Source code in python/debug.py
def print_debug_info() -> None:
    """Print debug/environment information."""
    info = get_debug_info()
    print(f"- __System__: {info.platform}")
    print(f"- __Python__: {info.interpreter_name} {info.interpreter_version}")
    print("- __Environment variables__:")
    for var in info.variables:
        print(f"  - `{var.name}`: `{var.value}`")
    print("- __Installed packages__:")
    for pkg in info.packages:
        print(f"  - `{pkg.name}` v{pkg.version}")

handler ¤

This module implements a handler for the Python language.

It collects data with pytkdocs.

Classes¤

PythonHandler (BaseHandler) ¤

The Python handler class.

Attributes:

  • domain (str): The cross-documentation domain/language for this handler.
  • enable_inventory (bool): Whether this handler is interested in enabling the creation of the objects.inv Sphinx inventory file.

Source code in python/handler.py
class PythonHandler(BaseHandler):
    """The Python handler class.

    Attributes:
        domain: The cross-documentation domain/language for this handler.
        enable_inventory: Whether this handler is interested in enabling the creation
            of the `objects.inv` Sphinx inventory file.
    """

    domain: str = "py"  # to match Sphinx's default domain
    enable_inventory: bool = True

    fallback_theme = "material"
    fallback_config: ClassVar[dict] = {"docstring_style": "markdown", "filters": ["!.*"]}
    """The configuration used when falling back to re-collecting an object to get its anchor.

    This configuration is used in [`Handlers.get_anchors`][mkdocstrings.handlers.base.Handlers.get_anchors].

    When trying to fix (optional) cross-references, the autorefs plugin will try to collect
    an object with every configured handler until one succeeds. It will then try to get
    an anchor for it. It's because objects can have multiple identifiers (aliases),
    for example their definition path and multiple import paths in Python.

    When re-collecting the object, we have no use for its members, or for its docstring being parsed.
    This is why the fallback configuration filters every member out, and uses the Markdown style,
    which we know will not generate any warnings.
    """

    default_config: ClassVar[dict] = {
        "filters": ["!^_[^_]"],
        "show_root_heading": False,
        "show_root_toc_entry": True,
        "show_root_full_path": True,
        "show_root_members_full_path": False,
        "show_object_full_path": False,
        "show_category_heading": False,
        "show_if_no_docstring": False,
        "show_signature": True,
        "show_signature_annotations": False,
        "show_source": True,
        "show_bases": True,
        "group_by_category": True,
        "heading_level": 2,
        "members_order": "alphabetical",
    }
    """
    **Headings options:**

    - `heading_level` (`int`): The initial heading level to use. Default: `2`.
    - `show_root_heading` (`bool`): Show the heading of the object at the root of the documentation tree
        (i.e. the object referenced by the identifier after `:::`). Default: `False`.
    - `show_root_toc_entry` (`bool`): If the root heading is not shown, at least add a ToC entry for it. Default: `True`.
    - `show_root_full_path` (`bool`): Show the full Python path for the root object heading. Default: `True`.
    - `show_root_members_full_path` (`bool`): Show the full Python path of the root members. Default: `False`.
    - `show_object_full_path` (`bool`): Show the full Python path of every object. Default: `False`.
    - `show_category_heading` (`bool`): When grouped by categories, show a heading for each category. Default: `False`.

    **Members options:**

    - `members` (`list[str] | False | None`): An explicit list of members to render. Default: `None`.
    - `members_order` (`str`): The members ordering to use. Options: `alphabetical` - order by the members names,
        `source` - order members as they appear in the source file. Default: `"alphabetical"`.
    - `filters` (`list[str] | None`): A list of filters applied to filter objects based on their name.
        A filter starting with `!` will exclude matching objects instead of including them.
        The `members` option takes precedence over `filters` (filters will still be applied recursively
        to lower members in the hierarchy). Default: `["!^_[^_]"]`.
    - `group_by_category` (`bool`): Group the object's children by categories: attributes, classes, functions, and modules. Default: `True`.

    **Docstrings options:**

    - `docstring_style` (`str`): The docstring style to use: `google`, `numpy`, `restructured-text`, or `None`. Default: `"google"`.
    - `docstring_options` (`dict`): The options for the docstring parser. See parsers under [`pytkdocs.parsers.docstrings`][].
    - `show_if_no_docstring` (`bool`): Show the object heading even if it has no docstring or children with docstrings. Default: `False`.

    **Signatures/annotations options:**

    - `show_signature` (`bool`): Show methods and functions signatures. Default: `True`.
    - `show_signature_annotations` (`bool`): Show the type annotations in methods and functions signatures. Default: `False`.

    **Additional options:**

    - `show_bases` (`bool`): Show the base classes of a class. Default: `True`.
    - `show_source` (`bool`): Show the source code of this object. Default: `True`.
    """

    def __init__(
        self,
        *args: Any,
        setup_commands: Optional[List[str]] = None,
        config_file_path: Optional[str] = None,
        paths: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> None:
        """Initialize the handler.

        When instantiating a Python handler, we open a `pytkdocs` subprocess in the background with `subprocess.Popen`.
        It will allow us to feed input to and read output from this subprocess, keeping it alive during
        the whole documentation generation. Spawning a new Python subprocess for each "autodoc" instruction would be
        too resource intensive, and would slow down `mkdocstrings` a lot.

        Parameters:
            *args: Handler name, theme and custom templates.
            setup_commands: A list of python commands as strings to be executed in the subprocess before `pytkdocs`.
            config_file_path: The MkDocs configuration file path.
            paths: A list of paths to use as search paths.
            **kwargs: Same thing, but with keyword arguments.
        """
        logger.debug("Opening 'pytkdocs' subprocess")
        env = os.environ.copy()
        env["PYTHONUNBUFFERED"] = "1"

        self._config_file_path = config_file_path
        paths = paths or []
        if not paths and config_file_path:
            paths.append(os.path.dirname(config_file_path))
        search_paths = []
        for path in paths:
            if not os.path.isabs(path) and config_file_path:
                path = os.path.abspath(os.path.join(os.path.dirname(config_file_path), path))  # noqa: PLW2901
            if path not in search_paths:
                search_paths.append(path)
        self._paths = search_paths

        commands = []

        if search_paths:
            commands.extend([f"sys.path.insert(0, {path!r})" for path in reversed(search_paths)])

        if setup_commands:
            # prevent the Python interpreter or the setup commands
            # from writing to stdout as it would break pytkdocs output
            commands.extend(
                [
                    "from io import StringIO",
                    "sys.stdout = StringIO()",  # redirect stdout to memory buffer
                    *setup_commands,
                    "sys.stdout.flush()",
                    "sys.stdout = sys.__stdout__",  # restore stdout
                ],
            )

        if commands:
            final_commands = [
                "import sys",
                *commands,
                "from pytkdocs.cli import main as pytkdocs",
                "pytkdocs(['--line-by-line'])",
            ]
            cmd = [sys.executable, "-c", "; ".join(final_commands)]
        else:
            cmd = [sys.executable, "-m", "pytkdocs", "--line-by-line"]

        self.process = Popen(  # noqa: S603
            cmd,
            universal_newlines=True,
            stdout=PIPE,
            stdin=PIPE,
            bufsize=-1,
            env=env,
        )
        super().__init__(*args, **kwargs)

    @classmethod
    def load_inventory(
        cls,
        in_file: BinaryIO,
        url: str,
        base_url: Optional[str] = None,
        **kwargs: Any,  # noqa: ARG003
    ) -> Iterator[Tuple[str, str]]:
        """Yield items and their URLs from an inventory file streamed from `in_file`.

        This implements mkdocstrings' `load_inventory` "protocol" (see plugin.py).

        Arguments:
            in_file: The binary file-like object to read the inventory from.
            url: The URL that this file is being streamed from (used to guess `base_url`).
            base_url: The URL that this inventory's sub-paths are relative to.
            **kwargs: Ignore additional arguments passed from the config.

        Yields:
            Tuples of (item identifier, item URL).
        """
        if base_url is None:
            base_url = posixpath.dirname(url)

        for item in Inventory.parse_sphinx(in_file, domain_filter=("py",)).values():
            yield item.name, posixpath.join(base_url, item.uri)

    def collect(self, identifier: str, config: MutableMapping[str, Any]) -> CollectorItem:
        """Collect the documentation tree given an identifier and selection options.

        In this method, we feed one line of JSON to the standard input of the subprocess that was opened
        during instantiation of the collector. Then we read one line of JSON on its standard output.

        We load back the JSON text into a Python dictionary.
        If there is a decoding error, we log it as an error and raise a CollectionError.

        If the dictionary contains an `error` key, we log it as an error (with the optional `traceback` value),
        and raise a CollectionError.

        If the dictionary values for keys `loading_errors` and `parsing_errors` are not empty,
        we log them as warnings.

        Then we pick up the only object within the `objects` list (there's always only one, because we collect
        them one by one), rebuild its category lists
        (see [`rebuild_category_lists()`][mkdocstrings_handlers.python.rendering.rebuild_category_lists]),
        and return it.

        Arguments:
            identifier: The dotted-path of a Python object available in the Python path.
            config: Selection options, used to alter the data collection done by `pytkdocs`.

        Raises:
            CollectionError: When there was a problem collecting the object documentation.

        Returns:
            The collected object-tree.
        """
        final_config = {}
        for option in ("filters", "members", "docstring_style", "docstring_options"):
            if option in config:
                final_config[option] = config[option]
            elif option in self.default_config:
                final_config[option] = self.default_config[option]

        logger.debug("Preparing input")
        json_input = json.dumps({"objects": [{"path": identifier, **final_config}]})

        logger.debug("Writing to process' stdin")
        self.process.stdin.write(json_input + "\n")  # type: ignore[union-attr]
        self.process.stdin.flush()  # type: ignore[union-attr]

        logger.debug("Reading process' stdout")
        stdout = self.process.stdout.readline()  # type: ignore[union-attr]

        logger.debug("Loading JSON output as Python object")
        try:
            result = json.loads(stdout)
        except json.decoder.JSONDecodeError as exception:
            error = "\n".join(("Error while loading JSON:", stdout, traceback.format_exc()))
            raise CollectionError(error) from exception

        if "error" in result:
            error = result["error"]
            if "traceback" in result:
                error += f"\n{result['traceback']}"
            raise CollectionError(error)

        for loading_error in result["loading_errors"]:
            logger.warning(loading_error)

        for errors in result["parsing_errors"].values():
            for parsing_error in errors:
                logger.warning(parsing_error)

        # We always collect only one object at a time
        result = result["objects"][0]

        logger.debug("Rebuilding categories and children lists")
        rebuild_category_lists(result)

        return result

    def teardown(self) -> None:
        """Terminate the opened subprocess, set it to `None`."""
        logger.debug("Tearing process down")
        self.process.terminate()

    def render(self, data: CollectorItem, config: Mapping[str, Any]) -> str:  # noqa: D102 (ignore missing docstring)
        final_config = ChainMap(config, self.default_config)  # type: ignore[arg-type]

        template = self.env.get_template(f"{data['category']}.html")

        # Heading level is a "state" variable, that will change at each step
        # of the rendering recursion. Therefore, it's easier to use it as a plain value
        # than as an item in a dictionary.
        heading_level = final_config["heading_level"]
        members_order = final_config["members_order"]

        if members_order == "alphabetical":
            sort_function = sort_key_alphabetical
        elif members_order == "source":
            sort_function = sort_key_source
        else:
            raise PluginError(f"Unknown members_order '{members_order}', choose between 'alphabetical' and 'source'.")

        sort_object(data, sort_function=sort_function)

        return template.render(
            **{"config": final_config, data["category"]: data, "heading_level": heading_level, "root": True},
        )

    def get_anchors(self, data: CollectorItem) -> Tuple[str, ...]:  # noqa: D102 (ignore missing docstring)
        try:
            return (data["path"],)
        except KeyError:
            return ()

    def update_env(self, md: Markdown, config: dict) -> None:  # noqa: D102 (ignore missing docstring)
        super().update_env(md, config)
        self.env.trim_blocks = True
        self.env.lstrip_blocks = True
        self.env.keep_trailing_newline = False
        self.env.filters["brief_xref"] = do_brief_xref
Attributes¤
default_config: ClassVar[dict] ¤

Headings options:

  • heading_level (int): The initial heading level to use. Default: 2.
  • show_root_heading (bool): Show the heading of the object at the root of the documentation tree (i.e. the object referenced by the identifier after :::). Default: False.
  • show_root_toc_entry (bool): If the root heading is not shown, at least add a ToC entry for it. Default: True.
  • show_root_full_path (bool): Show the full Python path for the root object heading. Default: True.
  • show_root_members_full_path (bool): Show the full Python path of the root members. Default: False.
  • show_object_full_path (bool): Show the full Python path of every object. Default: False.
  • show_category_heading (bool): When grouped by categories, show a heading for each category. Default: False.

Members options:

  • members (list[str] | False | None): An explicit list of members to render. Default: None.
  • members_order (str): The members ordering to use. Options: alphabetical - order by the members names, source - order members as they appear in the source file. Default: "alphabetical".
  • filters (list[str] | None): A list of filters applied to filter objects based on their name. A filter starting with ! will exclude matching objects instead of including them. The members option takes precedence over filters (filters will still be applied recursively to lower members in the hierarchy). Default: ["!^_[^_]"].
  • group_by_category (bool): Group the object's children by categories: attributes, classes, functions, and modules. Default: True.

Docstrings options:

  • docstring_style (str): The docstring style to use: google, numpy, restructured-text, or None. Default: "google".
  • docstring_options (dict): The options for the docstring parser. See parsers under pytkdocs.parsers.docstrings.
  • show_if_no_docstring (bool): Show the object heading even if it has no docstring or children with docstrings. Default: False.

Signatures/annotations options:

  • show_signature (bool): Show methods and functions signatures. Default: True.
  • show_signature_annotations (bool): Show the type annotations in methods and functions signatures. Default: False.

Additional options:

  • show_bases (bool): Show the base classes of a class. Default: True.
  • show_source (bool): Show the source code of this object. Default: True.
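
A minimal sketch of how per-page options layer over these defaults, mirroring the ChainMap call used in PythonHandler.render (the local_options values are illustrative):

from collections import ChainMap

from mkdocstrings_handlers.python.handler import PythonHandler

local_options = {"show_source": False, "heading_level": 3}  # hypothetical per-page options
final_config = ChainMap(local_options, PythonHandler.default_config)

assert final_config["show_source"] is False             # the local value wins
assert final_config["members_order"] == "alphabetical"  # falls back to the default
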
domain: str ¤

The handler's domain, used to register objects in the inventory, for example "py".

enable_inventory: bool ¤

Whether the inventory creation is enabled.

fallback_config: ClassVar[dict] ¤

The configuration used when falling back to re-collecting an object to get its anchor.

This configuration is used in Handlers.get_anchors.

When trying to fix (optional) cross-references, the autorefs plugin will try to collect an object with every configured handler until one succeeds. It will then try to get an anchor for it. It's because objects can have multiple identifiers (aliases), for example their definition path and multiple import paths in Python.

When re-collecting the object, we have no use for its members, or for its docstring being parsed. This is why the fallback configuration filters every member out, and uses the Markdown style, which we know will not generate any warnings.

fallback_theme: str ¤

Fallback theme to use when a template isn't found in the configured theme.

Methods¤
collect(self, identifier: str, config: MutableMapping[str, Any]) -> Any ¤

Collect the documentation tree given an identifier and selection options.

In this method, we feed one line of JSON to the standard input of the subprocess that was opened during instantiation of the collector. Then we read one line of JSON on its standard output.

We load back the JSON text into a Python dictionary. If there is a decoding error, we log it as an error and raise a CollectionError.

If the dictionary contains an error key, we log it as an error (with the optional traceback value) and raise a CollectionError.

If the dictionary values for keys loading_errors and parsing_errors are not empty, we log them as warnings.

Then we pick up the only object within the objects list (there's always only one, because we collect them one by one), rebuild its category lists (see rebuild_category_lists()), and return it.

Parameters:

  • identifier (str, required): The dotted-path of a Python object available in the Python path.
  • config (MutableMapping[str, Any], required): Selection options, used to alter the data collection done by pytkdocs.

Exceptions:

  • CollectionError: When there was a problem collecting the object documentation.

Returns:

  • Any: The collected object-tree.

Source code in python/handler.py
def collect(self, identifier: str, config: MutableMapping[str, Any]) -> CollectorItem:
    """Collect the documentation tree given an identifier and selection options.

    In this method, we feed one line of JSON to the standard input of the subprocess that was opened
    during instantiation of the collector. Then we read one line of JSON on its standard output.

    We load back the JSON text into a Python dictionary.
    If there is a decoding error, we log it as an error and raise a CollectionError.

    If the dictionary contains an `error` key, we log it as an error (with the optional `traceback` value),
    and raise a CollectionError.

    If the dictionary values for keys `loading_errors` and `parsing_errors` are not empty,
    we log them as warnings.

    Then we pick up the only object within the `objects` list (there's always only one, because we collect
    them one by one), rebuild its category lists
    (see [`rebuild_category_lists()`][mkdocstrings_handlers.python.rendering.rebuild_category_lists]),
    and return it.

    Arguments:
        identifier: The dotted-path of a Python object available in the Python path.
        config: Selection options, used to alter the data collection done by `pytkdocs`.

    Raises:
        CollectionError: When there was a problem collecting the object documentation.

    Returns:
        The collected object-tree.
    """
    final_config = {}
    for option in ("filters", "members", "docstring_style", "docstring_options"):
        if option in config:
            final_config[option] = config[option]
        elif option in self.default_config:
            final_config[option] = self.default_config[option]

    logger.debug("Preparing input")
    json_input = json.dumps({"objects": [{"path": identifier, **final_config}]})

    logger.debug("Writing to process' stdin")
    self.process.stdin.write(json_input + "\n")  # type: ignore[union-attr]
    self.process.stdin.flush()  # type: ignore[union-attr]

    logger.debug("Reading process' stdout")
    stdout = self.process.stdout.readline()  # type: ignore[union-attr]

    logger.debug("Loading JSON output as Python object")
    try:
        result = json.loads(stdout)
    except json.decoder.JSONDecodeError as exception:
        error = "\n".join(("Error while loading JSON:", stdout, traceback.format_exc()))
        raise CollectionError(error) from exception

    if "error" in result:
        error = result["error"]
        if "traceback" in result:
            error += f"\n{result['traceback']}"
        raise CollectionError(error)

    for loading_error in result["loading_errors"]:
        logger.warning(loading_error)

    for errors in result["parsing_errors"].values():
        for parsing_error in errors:
            logger.warning(parsing_error)

    # We always collect only one object at a time
    result = result["objects"][0]

    logger.debug("Rebuilding categories and children lists")
    rebuild_category_lists(result)

    return result
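
A minimal collection round-trip outside of MkDocs (a sketch only: it assumes the handler was built through get_handler, documented below, and that pytkdocs is importable in the subprocess):

from mkdocstrings_handlers.python.handler import get_handler

handler = get_handler(theme="material")
data = handler.collect("mkdocstrings_handlers.python.debug", {"docstring_style": "google"})
print(data["path"], data["category"], len(data["children"]))
handler.teardown()
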
get_anchors(self, data: Any) -> Tuple[str, ...] ¤

Return the possible identifiers (HTML anchors) for a collected item.

Parameters:

  • data (Any, required): The collected data.

Returns:

  • Tuple[str, ...]: The HTML anchors (without '#'), or an empty tuple if this item doesn't have an anchor.

Source code in python/handler.py
def get_anchors(self, data: CollectorItem) -> Tuple[str, ...]:  # noqa: D102 (ignore missing docstring)
    try:
        return (data["path"],)
    except KeyError:
        return ()
load_inventory(in_file: BinaryIO, url: str, base_url: Optional[str] = None, **kwargs: Any) -> Iterator[Tuple[str, str]] classmethod ¤

Yield items and their URLs from an inventory file streamed from in_file.

This implements mkdocstrings' load_inventory "protocol" (see plugin.py).

Parameters:

  • in_file (BinaryIO, required): The binary file-like object to read the inventory from.
  • url (str, required): The URL that this file is being streamed from (used to guess base_url).
  • base_url (Optional[str], default None): The URL that this inventory's sub-paths are relative to.
  • **kwargs (Any, default {}): Additional arguments passed from the config (ignored).

Yields:

  • Iterator[Tuple[str, str]]: Tuples of (item identifier, item URL).

Source code in python/handler.py
@classmethod
def load_inventory(
    cls,
    in_file: BinaryIO,
    url: str,
    base_url: Optional[str] = None,
    **kwargs: Any,  # noqa: ARG003
) -> Iterator[Tuple[str, str]]:
    """Yield items and their URLs from an inventory file streamed from `in_file`.

    This implements mkdocstrings' `load_inventory` "protocol" (see plugin.py).

    Arguments:
        in_file: The binary file-like object to read the inventory from.
        url: The URL that this file is being streamed from (used to guess `base_url`).
        base_url: The URL that this inventory's sub-paths are relative to.
        **kwargs: Ignore additional arguments passed from the config.

    Yields:
        Tuples of (item identifier, item URL).
    """
    if base_url is None:
        base_url = posixpath.dirname(url)

    for item in Inventory.parse_sphinx(in_file, domain_filter=("py",)).values():
        yield item.name, posixpath.join(base_url, item.uri)
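
For example, to list the Python-domain items of a published inventory (a sketch; the URL is only illustrative):

import urllib.request

from mkdocstrings_handlers.python.handler import PythonHandler

url = "https://docs.python.org/3/objects.inv"
with urllib.request.urlopen(url) as in_file:
    for identifier, item_url in PythonHandler.load_inventory(in_file, url=url):
        print(identifier, item_url)
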
render(self, data: Any, config: Mapping[str, Any]) -> str ¤

Render a template using provided data and configuration options.

Parameters:

  • data (Any, required): The collected data to render.
  • config (Mapping[str, Any], required): The handler's configuration options.

Returns:

  • str: The rendered template as HTML.

Source code in python/handler.py
def render(self, data: CollectorItem, config: Mapping[str, Any]) -> str:  # noqa: D102 (ignore missing docstring)
    final_config = ChainMap(config, self.default_config)  # type: ignore[arg-type]

    template = self.env.get_template(f"{data['category']}.html")

    # Heading level is a "state" variable, that will change at each step
    # of the rendering recursion. Therefore, it's easier to use it as a plain value
    # than as an item in a dictionary.
    heading_level = final_config["heading_level"]
    members_order = final_config["members_order"]

    if members_order == "alphabetical":
        sort_function = sort_key_alphabetical
    elif members_order == "source":
        sort_function = sort_key_source
    else:
        raise PluginError(f"Unknown members_order '{members_order}', choose between 'alphabetical' and 'source'.")

    sort_object(data, sort_function=sort_function)

    return template.render(
        **{"config": final_config, data["category"]: data, "heading_level": heading_level, "root": True},
    )
teardown(self) -> None ¤

Terminate the opened subprocess, set it to None.

Source code in python/handler.py
def teardown(self) -> None:
    """Terminate the opened subprocess, set it to `None`."""
    logger.debug("Tearing process down")
    self.process.terminate()
update_env(self, md: Markdown, config: dict) -> None ¤

Update the Jinja environment.

Parameters:

  • md (Markdown, required): The Markdown instance, useful for adding Markdown-conversion functions as environment filters.
  • config (dict, required): Configuration options for mkdocs and mkdocstrings, read from mkdocs.yml. See the source code of mkdocstrings.plugin.MkdocstringsPlugin.on_config to see what's in this dictionary.

Source code in python/handler.py
def update_env(self, md: Markdown, config: dict) -> None:  # noqa: D102 (ignore missing docstring)
    super().update_env(md, config)
    self.env.trim_blocks = True
    self.env.lstrip_blocks = True
    self.env.keep_trailing_newline = False
    self.env.filters["brief_xref"] = do_brief_xref

Functions¤

get_handler(theme: str, custom_templates: Optional[str] = None, setup_commands: Optional[List[str]] = None, config_file_path: Optional[str] = None, paths: Optional[List[str]] = None, **config: Any) -> PythonHandler ¤

Simply return an instance of PythonHandler.

Parameters:

  • theme (str, required): The theme to use when rendering contents.
  • custom_templates (Optional[str], default None): Directory containing custom templates.
  • setup_commands (Optional[List[str]], default None): A list of commands as strings to be executed in the subprocess before pytkdocs.
  • config_file_path (Optional[str], default None): The MkDocs configuration file path.
  • paths (Optional[List[str]], default None): A list of paths to use as search paths.
  • **config (Any, default {}): Configuration passed to the handler.

Returns:

  • PythonHandler: An instance of PythonHandler.

Source code in python/handler.py
def get_handler(
    theme: str,
    custom_templates: Optional[str] = None,
    setup_commands: Optional[List[str]] = None,
    config_file_path: Optional[str] = None,
    paths: Optional[List[str]] = None,
    **config: Any,  # noqa: ARG001
) -> PythonHandler:
    """Simply return an instance of `PythonHandler`.

    Arguments:
        theme: The theme to use when rendering contents.
        custom_templates: Directory containing custom templates.
        setup_commands: A list of commands as strings to be executed in the subprocess before `pytkdocs`.
        config_file_path: The MkDocs configuration file path.
        paths: A list of paths to use as search paths.
        config: Configuration passed to the handler.

    Returns:
        An instance of `PythonHandler`.
    """
    return PythonHandler(
        handler="python",
        theme=theme,
        custom_templates=custom_templates,
        setup_commands=setup_commands,
        config_file_path=config_file_path,
        paths=paths,
    )
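
A minimal sketch of building a standalone handler with a couple of the options above (the setup command and search path are illustrative):

from mkdocstrings_handlers.python.handler import get_handler

handler = get_handler(
    theme="material",
    setup_commands=["import warnings", "warnings.simplefilter('ignore')"],  # hypothetical setup
    config_file_path="mkdocs.yml",
    paths=["src"],
)
handler.teardown()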

rendering ¤

This module implements rendering utilities.

Functions¤

do_brief_xref(path: str) -> Markup ¤

Filter to create cross-reference with brief text and full identifier as hover text.

Parameters:

  • path (str, required): The path to shorten and render.

Returns:

  • Markup: A span containing the brief cross-reference and the full one on hover.

Source code in python/rendering.py
def do_brief_xref(path: str) -> Markup:
    """Filter to create cross-reference with brief text and full identifier as hover text.

    Arguments:
        path: The path to shorten and render.

    Returns:
        A span containing the brief cross-reference and the full one on hover.
    """
    brief = path.split(".")[-1]
    return Markup("<autoref identifier={path} optional hover>{brief}</autoref>").format(path=path, brief=brief)
rebuild_category_lists(obj: dict) -> None ¤

Recursively rebuild the category lists of a collected object.

Since pytkdocs dumps JSON on standard output, it must serialize the object-tree and flatten it to reduce data duplication and avoid cycle-references. Indeed, each node of the object-tree has a children list, containing all children, and another list for each category of children: attributes, classes, functions, methods and modules. It replaces the values in category lists with only the paths of the objects.

Here, we reconstruct these category lists by picking objects in the children list using their path.

For each object, we recurse on every one of its children.

Parameters:

  • obj (dict, required): The collected object, loaded back from JSON into a Python dictionary.

Source code in python/rendering.py
def rebuild_category_lists(obj: dict) -> None:
    """Recursively rebuild the category lists of a collected object.

    Since `pytkdocs` dumps JSON on standard output, it must serialize the object-tree and flatten it to reduce data
    duplication and avoid cycle-references. Indeed, each node of the object-tree has a `children` list, containing
    all children, and another list for each category of children: `attributes`, `classes`, `functions`, `methods`
    and `modules`. It replaces the values in category lists with only the paths of the objects.

    Here, we reconstruct these category lists by picking objects in the `children` list using their path.

    For each object, we recurse on every one of its children.

    Arguments:
        obj: The collected object, loaded back from JSON into a Python dictionary.
    """
    for category in ("attributes", "classes", "functions", "methods", "modules"):
        obj[category] = [obj["children"][path] for path in obj[category]]
    obj["children"] = [child for _, child in obj["children"].items()]
    for child in obj["children"]:
        rebuild_category_lists(child)
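
A minimal sketch with a hand-built, pytkdocs-style payload (the data itself is illustrative; only the keys used by the function matter):

from mkdocstrings_handlers.python.rendering import rebuild_category_lists

obj = {
    "children": {
        "pkg.mod.func": {
            "name": "func", "children": {},
            "attributes": [], "classes": [], "functions": [], "methods": [], "modules": [],
        },
    },
    "attributes": [], "classes": [], "functions": ["pkg.mod.func"], "methods": [], "modules": [],
}
rebuild_category_lists(obj)
assert obj["functions"][0]["name"] == "func"
assert obj["children"][0] is obj["functions"][0]
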
sort_key_alphabetical(item: Any) -> Any ¤

Return an item's name or the final unicode character.

Parameters:

  • item (Any, required): A collected item.

Returns:

  • Any: Name or final Unicode character.

Source code in python/rendering.py
def sort_key_alphabetical(item: CollectorItem) -> Any:
    """Return an item's name or the final unicode character.

    Arguments:
        item: A collected item.

    Returns:
        Name or final unicode character.
    """
    # chr(sys.maxunicode) is a string that contains the final unicode
    # character, so if 'name' isn't found on the object, the item will go to
    # the end of the list.
    return item.get("name", chr(sys.maxunicode))
sort_key_source(item: Any) -> Any ¤

Return an item's starting line number or -1.

Parameters:

  • item (Any, required): A collected item.

Returns:

  • Any: Starting line number or -1.

Source code in python/rendering.py
def sort_key_source(item: CollectorItem) -> Any:
    """Return an item's starting line number or -1.

    Arguments:
        item: A collected item.

    Returns:
        Starting line number or -1.
    """
    # if 'line_start' isn't found on the object, the item will go to
    # the start of the list.
    return item.get("source", {}).get("line_start", -1)
sort_object(obj: Any, sort_function: Callable[[Any], Any]) -> None ¤

Sort the collected object's children.

Sorts the object's children list, then each category separately, and then recurses into each.

Parameters:

  • obj (Any, required): The collected object, as a dict. Note that this argument is mutated.
  • sort_function (Callable[[Any], Any], required): The sort key function used to determine the order of elements.

Source code in python/rendering.py
def sort_object(obj: CollectorItem, sort_function: Callable[[CollectorItem], Any]) -> None:
    """Sort the collected object's children.

    Sorts the object's children list, then each category separately, and then recurses into each.

    Arguments:
        obj: The collected object, as a dict. Note that this argument is mutated.
        sort_function: The sort key function used to determine the order of elements.
    """
    obj["children"].sort(key=sort_function)

    for category in ("attributes", "classes", "functions", "methods", "modules"):
        obj[category].sort(key=sort_function)

    for child in obj["children"]:
        sort_object(child, sort_function=sort_function)
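
A minimal end-to-end sketch of the sorting helpers (the item dictionaries are illustrative, following the keys used by sort_key_source):

from mkdocstrings_handlers.python.rendering import sort_key_source, sort_object

obj = {
    "children": [
        {"name": "late", "source": {"line_start": 10}, "children": [],
         "attributes": [], "classes": [], "functions": [], "methods": [], "modules": []},
        {"name": "early", "source": {"line_start": 3}, "children": [],
         "attributes": [], "classes": [], "functions": [], "methods": [], "modules": []},
    ],
    "attributes": [], "classes": [], "functions": [], "methods": [], "modules": [],
}
sort_object(obj, sort_function=sort_key_source)
assert [child["name"] for child in obj["children"]] == ["early", "late"]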