Serdar Yegulalp
Senior Writer

Get started with Python’s new native JIT

how-to | Dec 24, 2025 | 6 mins

The native just-in-time compiler in Python 3.15 can speed up some workloads by 20% or more, although it’s still experimental.


JITting, or “just-in-time” compilation, can make relatively slow interpreted languages much faster. Until recently, JITting was available for Python only in the form of specialized third-party libraries, like Numba, or alternate versions of the Python interpreter, like PyPy.

A native JIT compiler has been added to Python over its last few releases. At first it didn’t provide any significant speedup. But with Python 3.15 (still in alpha but available for use now), the core Python development team has bolstered the native JIT to the point where it’s now showing significant performance gains for certain kinds of programs.

Speedups from the JIT range widely, depending on the operation. Some programs show dramatic performance improvements; others show none at all. But the work put into the JIT is beginning to pay off, and users can start taking advantage of it if they’re willing to experiment.

Activating the Python JIT

By default, the native Python JIT is disabled. It’s still considered an experimental feature, so it has to be manually enabled.

To enable the JIT, you set the PYTHON_JIT environment variable, either for the shell session Python is running in, or persistently as part of your user environment options. When the Python interpreter starts, it checks its runtime environment for the variable PYTHON_JIT. If PYTHON_JIT is unset or set to anything but 1, the JIT is off. If it’s set to 1, the JIT is enabled.

Enabling PYTHON_JIT persistently is generally not a good idea. It might be useful in a dedicated user environment where you only ever run Python with the JIT enabled, but for the most part, you’ll want to set PYTHON_JIT manually, for instance as part of a shell script that configures the environment.
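For example, in a Unix-like shell you might toggle the variable per run or per session. (The binary name python3.15 below is an assumption; substitute whatever your interpreter is called.)

```shell
# Enable the JIT for a single invocation only
PYTHON_JIT=1 python3.15 my_script.py

# Enable it for the rest of this shell session
export PYTHON_JIT=1
python3.15 my_script.py

# Explicitly turn it back off
export PYTHON_JIT=0
```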

Verifying the JIT is working

For versions of Python with the JIT (Python 3.13 and above), the sys module in the standard library has a new namespace, sys._jit. Inside it are three utilities for inspecting the state of the JIT, all of which return either True or False. The three utilities:

  • sys._jit.is_available(): Lets you know if the current build of Python includes the JIT. Most binary builds of Python now ship with the JIT available, except the “free-threaded” or “no-GIL” builds of Python.
  • sys._jit.is_enabled(): Lets you know if the JIT is currently enabled. It does not tell you if running code is currently being JITted, however.
  • sys._jit.is_active(): Lets you know if the topmost Python stack frame is currently executing JITted code. However, this is not a reliable way to tell if your program is using the JIT, because you may end up executing this check in a “cold” (non-JITted) path. It’s best to stick to performance measurements to see if the JIT is having any effect.

For the most part, you will want to use sys._jit.is_enabled(), since it tells you whether the JIT is both present in your build and turned on.
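Putting those checks together, a defensive probe might look like the following sketch. The getattr guard is my own addition, so the check degrades gracefully on interpreters older than Python 3.13, where sys._jit does not exist.

```python
import sys

# sys._jit exists only on JIT-capable builds (Python 3.13+), so guard
# the attribute lookup rather than assuming it is present.
jit = getattr(sys, "_jit", None)

if jit is None or not jit.is_available():
    print("No JIT in this build of Python.")
elif jit.is_enabled():
    print("JIT is available and enabled.")
else:
    print("JIT is available but off; set PYTHON_JIT=1 to enable it.")
```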

Python code enhanced by the JIT

Because the JIT is in its early stages, its behavior is still somewhat opaque. There’s no end-user instrumentation for it yet, so there’s no way to gather statistics about how the JIT handles a given piece of code. The only real way to assess the JIT’s performance is to benchmark your code with and without the JIT.
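One way to run that comparison is to launch the same script in fresh child interpreters with PYTHON_JIT toggled. The harness below is a sketch of my own, not part of the article's examples; it assumes the script prints nothing you need to keep.

```python
import os
import subprocess
import sys
import time

def time_script(script_path, jit_enabled):
    # Run the script in a fresh interpreter with PYTHON_JIT set,
    # returning wall-clock time for the whole run (including startup).
    env = dict(os.environ, PYTHON_JIT="1" if jit_enabled else "0")
    start = time.perf_counter()
    subprocess.run([sys.executable, script_path],
                   capture_output=True, check=True, env=env)
    return time.perf_counter() - start

if __name__ == "__main__" and len(sys.argv) > 1:
    off = time_script(sys.argv[1], jit_enabled=False)
    on = time_script(sys.argv[1], jit_enabled=True)
    print(f"JIT off: {off:.3f}s   JIT on: {on:.3f}s")
```

Because each run is a separate process, startup overhead is included in the timing, so this approach works best for scripts that run long enough to dwarf interpreter startup.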

Here’s an example of a program that demonstrates pretty consistent speedups with the JIT enabled. It’s a rudimentary ASCII rendering of the Mandelbrot fractal:

from time import perf_counter
import sys

print("JIT enabled:", sys._jit.is_enabled())

WIDTH = 80
HEIGHT = 40
X_MIN, X_MAX = -2.0, 1.0
Y_MIN, Y_MAX = -1.0, 1.0
ITERS = 500

YM = Y_MAX - Y_MIN
XM = X_MAX - X_MIN

def in_set(c):
    # True if c stays bounded after ITERS iterations of z = z**2 + c
    z = 0j
    for _ in range(ITERS):
        if abs(z) > 2.0:
            return False
        z = z ** 2 + c
    return True

def generate():
    start = perf_counter()
    output = []

    for y in range(HEIGHT):
        cy = Y_MIN + (y / HEIGHT) * YM
        for x in range(WIDTH):
            cx = X_MIN + (x / WIDTH) * XM
            c = complex(cx, cy)
            output.append("#" if in_set(c) else ".")
        output.append("\n")
    print("Time:", perf_counter() - start)
    return output

print("".join(generate()))

When the program starts running, it lets you know if the JIT is enabled and then produces a plot of the fractal to the terminal along with the time taken to compute it.

With the JIT enabled, there’s a fairly consistent 20% speedup between runs. If the performance boost isn’t obvious, try raising the value of ITERS, which forces the program to do more work and should produce a more obvious speedup.

Here’s a negative example: a simple recursively implemented Fibonacci sequence. As of Python 3.15a3 it shows no discernible JIT speedup:

import sys
from time import perf_counter

print("JIT enabled:", sys._jit.is_enabled())

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

def main():
    start = perf_counter()
    result = fib(36)
    print("Time:", perf_counter() - start)
    print("Result:", result)

main()

Why this isn’t faster when JITted isn’t clear. For instance, you might be inclined to think recursion makes the JIT less effective, but even a non-recursive version of the algorithm shows no speedup.
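For reference, a non-recursive variant such as the one below (my own sketch, timed the same way as the recursive version) shows no clear JIT benefit either:

```python
from time import perf_counter

def fib_iter(n):
    # Iterative Fibonacci: returns the same value as the recursive fib(n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

start = perf_counter()
result = fib_iter(36)
print("Time:", perf_counter() - start)
print("Result:", result)
```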

Using the experimental Python JIT

Because the JIT is still considered experimental, it’s worth approaching it in the same spirit as the “free-threaded” or “no-GIL” builds of Python also now being shipped. You can conduct your own experiments with the JIT to see if it provides any payoff for certain tasks, but you’ll want to be careful about using it in any production scenario. What’s more, each alpha and beta revision of Python going forward may change the behavior of the JIT. What was once performant might not be in the future, or vice versa!

Serdar Yegulalp

Serdar Yegulalp is a senior writer at InfoWorld. A veteran technology journalist, Serdar has been writing about computers, operating systems, databases, programming, and other information technology topics for 30 years. Before joining InfoWorld in 2013, Serdar wrote for Windows Magazine, InformationWeek, Byte, and a slew of other publications. At InfoWorld, Serdar has covered software development, devops, containerization, machine learning, and artificial intelligence, winning several B2B journalism awards including a 2024 Neal Award and a 2025 Azbee Award for best instructional content and best how-to article, respectively. He currently focuses on software development tools and technologies and major programming languages including Python, Rust, Go, Zig, and Wasm. Tune into his weekly Dev with Serdar videos for programming tips and techniques and close looks at programming libraries and tools.
