# Performance improvements in Python 3.8 in RHEL 8.2
## Python 3.8
The Python interpreter shipped in RHEL 8 is version 3.6, which was released back in 2016.
While Red Hat is committed to support it for the lifetime of RHEL 8, it is becoming a bit old for some use cases.
For those who need new Python features (and can live with the inevitable compatibility-breaking changes), RHEL 8.2 comes with Python 3.8. See the [release notes for the new python38 module](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/8.2_release_notes/index#enhancement_dynamic-programming-languages-web-and-database-servers).
The `python38` package joins the other Python interpreters shipped in RHEL 8.2, the `python2` and `python3` packages, which we described in our earlier post, [Python in RHEL 8](https://developers.redhat.com/blog/2018/11/14/python-in-rhel-8/).
It is installable alongside the other Python interpreters, so it doesn't interfere with the existing Python stack.
Besides providing new features, packaging Python 3.8 allows maintainers to release performance and packaging improvements more quickly than in the rock-solid python3.
In this post, we'll focus on the performance side, leaving new features to upstream documents [What's New In Python 3.7](https://docs.python.org/3.8/whatsnew/3.7.html) and [What's New In Python 3.8](https://docs.python.org/3.8/whatsnew/3.8.html).
## Where have I seen this before?
These improvements, both in performance and in features, aren't specific to RHEL.
They originate in the upstream Python project, and trickle down through Fedora in a long process of review, integration and testing. If you see something in RHEL first, it's a security bug fix.
Writing this blog feels like taking credit for others' achievements. So let us set this straight: they *are* others' achievements.
Our role as RHEL packagers is like that of a museum curator, rather than a painter: it's not our job to create features, but to seek out the best ones and combine them in a pleasing experience.
We do have “painter” roles on the team. But just as fresh paint does not belong in an exhibition hall, original contributions go to the wider community first, and only appear in RHEL when they're well-tested—i.e. somewhat boring and obvious.
## Performance improvements
The major speed-up in our `python38` package comes from building with GCC's `-fno-semantic-interposition` flag.
It gives us up to 30% speedup with little change to the semantics. How is that possible?
### Python C API
All of Python's functionality is exposed in its extensive [C API](https://docs.python.org/3.8/c-api/index.html), which makes it possible to *extend* and *embed* Python. A large part of Python's success comes from its C API.
Extensions are modules written in a language like C, which can provide functionality to Python programs. A classic example is *NumPy*, a library written in Fortran and C that manipulates Python objects.
Embedding is using Python from within a larger application. Applications like Blender or Gimp embed Python to allow scripting.
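To make embedding concrete, here is a minimal sketch of a host program that starts an interpreter, runs one line of Python, and shuts it down. It assumes the Python 3.8 development headers are installed (for example via a `python38-devel` package); the build command in the comment is only illustrative.
```c
/* Minimal embedding sketch. Build roughly like this (adjust to your setup):
 *   gcc embed.c -o embed $(python3.8-config --cflags --ldflags --embed)
 */
#include <Python.h>

int
main(void)
{
    Py_Initialize();   /* start the embedded interpreter */
    PyRun_SimpleString("print('hello from embedded Python 3.8')");
    Py_Finalize();     /* shut the interpreter down */
    return 0;
}
```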
Python¹ itself uses this C API internally: every attribute access goes through a call to [the `PyObject_GetAttr` function](https://github.com/python/cpython/blob/b10dc3e7a11fcdb97e285882eba6da92594f90f9/Objects/object.c#L884); every addition is a call to [`PyNumber_Add`](https://github.com/python/cpython/blob/b10dc3e7a11fcdb97e285882eba6da92594f90f9/Objects/abstract.c#L999), etc.
<small>¹ more correctly, **CPython** – the reference implementation of the Python language</small>
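To make that concrete, here is a rough sketch of the C-level work behind the Python expression `2 + 3`. It is a simplified illustration, not CPython's actual code, and error handling is omitted.
```c
/* Sketch: what "2 + 3" boils down to at the C API level.
 * Error handling is omitted for brevity.
 */
#include <Python.h>
#include <stdio.h>

int
main(void)
{
    Py_Initialize();

    PyObject *a = PyLong_FromLong(2);
    PyObject *b = PyLong_FromLong(3);
    PyObject *result = PyNumber_Add(a, b);   /* the C form of "a + b" */

    printf("2 + 3 = %ld\n", PyLong_AsLong(result));

    Py_DECREF(a);
    Py_DECREF(b);
    Py_DECREF(result);
    Py_Finalize();
    return 0;
}
```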
### Python dynamic library
Python can be built in two modes: **static** where all code lives in the Python executable, or **shared** where the Python executable is linked to its dynamic library called `libpython`. In RHEL, Python is built in the **shared** mode, since applications that embed Python, like Blender, use the Python C API of `libpython`.
The `python3.8` command is a minimal example of embedding: It only calls the `Py_BytesMain()` function:
```c
#include <Python.h>

int
main(int argc, char **argv)
{
    return Py_BytesMain(argc, argv);
}
```
All the code lives in `libpython`. For example, on RHEL 8.2, the size of `/usr/bin/python3.8` is just around 8 KB, whereas the size of the `/usr/lib64/libpython3.8.so.1.0` library is around 3.6 MB.
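Another way to see that all the code lives in the library is to load `libpython` directly and look up a symbol at run time. The sketch below is only an illustration; the library name matches the `python38` package on RHEL 8.2, and you may need to adjust it on other systems.
```c
/* Sketch: resolve a symbol straight from libpython at run time.
 * Build roughly with: gcc dlopen_demo.c -o dlopen_demo -ldl
 */
#include <stdio.h>
#include <dlfcn.h>

int
main(void)
{
    void *lib = dlopen("libpython3.8.so.1.0", RTLD_NOW);
    if (lib == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Py_GetVersion() returns a static string describing the interpreter. */
    const char *(*get_version)(void);
    get_version = (const char *(*)(void))dlsym(lib, "Py_GetVersion");
    if (get_version != NULL) {
        printf("Loaded %s\n", get_version());
    }

    dlclose(lib);
    return 0;
}
```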
### Semantic interposition
When a program is executed, the dynamic loader lets you override any symbol (such as a function) of the dynamic libraries the program uses by setting the `LD_PRELOAD` environment variable. This is known as ELF symbol interposition, and GCC respects its semantics by default¹. The feature is commonly used, among other things, to trace memory allocations (by overriding the libc `malloc/free`² functions) or to change the clock of a single application (by overriding the libc `time` function).
The main drawback of building Python with the `-fno-semantic-interposition` flag is that `libpython` functions can no longer be overridden using `LD_PRELOAD`. However, the impact is limited to `libpython`: it is still possible, for example, to override `malloc/free` from `libc` to trace memory allocations.²
Python calls `libpython` functions from other `libpython` functions. To respect semantic interposition, a call to any `libpython` function has to go through what's called a Procedure Linkage Table (PLT) indirection.
<small>¹ In clang, semantic interposition is disabled by default.<br>
² By the way, Python has the [`tracemalloc` module](https://docs.python.org/dev/library/tracemalloc.html) to trace memory allocations.</small>
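To illustrate interposition itself, here is a minimal sketch of a preloadable library that overrides the libc `time()` function mentioned above. The file names and build command are just examples.
```c
/* faketime.c - a tiny LD_PRELOAD interposer (illustrative names).
 * Build and use, for example:
 *   gcc -shared -fPIC -o libfaketime.so faketime.c
 *   LD_PRELOAD=./libfaketime.so ./some_program
 */
#include <time.h>

time_t
time(time_t *tloc)
{
    time_t fixed = 0;   /* pretend it is always 1970-01-01 00:00:00 UTC */
    if (tloc != NULL) {
        *tloc = fixed;
    }
    return fixed;
}
```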
### LTO and function inlining
In recent years, GCC has enhanced Link Time Optimization (LTO) to produce even more efficient code. One common optimization is to inline function calls: to replace a function call with a copy of the code of the function. And once a function call is inlined, the compiler can go even further in terms of optimizations.
When semantic interposition is disabled (using the `-fno-semantic-interposition`¹ flag mentioned above), functions in `libpython` that call other `libpython` functions no longer have to go through the PLT indirection. LTO can then optimize `libpython` further by inlining those calls, an optimization that would be illegal under semantic interposition rules.
<small>¹ The `-fno-semantic-interposition` flag was introduced in GCC 5.3</small>
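Before looking at the real Python example below, here is a small standalone sketch of the same effect; the file name and build commands are only illustrative, and you can compare the two builds with `objdump -d`.
```c
/* demo.c - sketch of what -fno-semantic-interposition enables.
 * Both functions are exported from the same shared library, so by default
 * caller() must reach callee() through the PLT, in case callee() is
 * interposed at load time:
 *   gcc -O2 -fPIC -shared -o libdemo.so demo.c
 * With interposition disabled, the compiler may call callee() directly,
 * or inline it into caller():
 *   gcc -O2 -fPIC -shared -fno-semantic-interposition -o libdemo.so demo.c
 */
int
callee(int x)
{
    return 2 * x;
}

int
caller(int x)
{
    return callee(x) + 1;
}
```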
### Code example
To see the optimization in practice, let's take a look at the `_Py_CheckFunctionResult()` function. Python uses it to check whether a C function returned a result (a non-`NULL` pointer) or raised an exception.
Simplified C code:
```c
PyObject*
PyErr_Occurred(void)
{
    PyThreadState *tstate = _PyRuntime.gilstate.tstate_current;
    return tstate->curexc_type;
}

PyObject*
_Py_CheckFunctionResult(PyObject *callable, PyObject *result,
                        const char *where)
{
    int err_occurred = (PyErr_Occurred() != NULL);
    ...
}
```
Let's first take a look at Python 3.6 from the `python3` package in RHEL 8, which is not built with `-fno-semantic-interposition`.
Here is an extract of the assembly code (as shown by the gdb `disassemble` command):
```
Dump of assembler code for function _Py_CheckFunctionResult:
(...)
callq 0x7ffff7913d50 <PyErr_Occurred@plt>
(...)
```
As you can see, `_Py_CheckFunctionResult()` calls `PyErr_Occurred()`, and the call has to go through a PLT indirection, leading to a slowdown.
Now let's look at an extract of the assembly code after disabling semantic interposition:
```
Dump of assembler code for function _Py_CheckFunctionResult:
(...)
mov 0x40f7fe(%rip),%rcx # rcx = &_PyRuntime
mov 0x558(%rcx),%rsi # rsi = tstate = _PyRuntime.gilstate.tstate_current
(...)
mov 0x58(%rsi),%rdi # rdi = tstate->curexc_type
(...)
```
The `PyErr_Occurred()` function call was inlined by GCC! `_Py_CheckFunctionResult()` gets `tstate` directly from `_PyRuntime`, and then directly reads its member `tstate->curexc_type`. There is no function call.
Inlining `PyErr_Occurred()` matters a lot for performance, since Python calls it very often to check whether an exception was raised.
## Try it!
Grab the new `python38` package from the RHEL 8 repositories and see the speedup for yourself, along with a host of new features!