Serdar Yegulalp
Senior Writer

PyTorch 1.0 accelerates Python machine learning with native code

news
Oct 3, 2018 | 2 mins

The PyTorch 1.0 release candidate introduces Torch Script, a Python subset that can be JIT-compiled into C++ or other high-speed code


An official release candidate of PyTorch 1.0, the Python-centric deep learning framework created by Facebook, is available for developer testing. One of the most touted features of the new release is the ability to define models by writing Python code that can be selectively accelerated—similar to how competing frameworks work.

Python’s traditional role in machine learning has been to wrap high-speed, back-end code libraries with easy-to-use, front-end syntax. Anyone who writes machine learning modules in Python quickly discovers that native Python isn’t nearly fast enough for performance-critical research work or production use.

PyTorch’s developers have introduced a feature in PyTorch 1.0, called Torch Script, that strikes a balance between Python’s accessible syntax and performant code. Torch Script is a subset of Python that PyTorch can just-in-time compile into fast native code that doesn’t rely on the Python runtime.

Torch Script works in one of two ways. New code can be written directly in the Torch Script language, which by design compiles readily to native code. Alternatively, existing Python code can be decorated with @torch.jit.trace and just-in-time compiled to native code, though tracing is not as effective as writing Torch Script directly.
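Both paths can be sketched in a few lines. This is a minimal example assuming PyTorch is installed; the function names `scaled_sum` and `plain_add` are illustrative, and it uses the torch.jit API as shipped in PyTorch 1.0 and later, where tracing is invoked as a call with example inputs:

```python
import torch

# Scripting: the function body is compiled from Torch Script source.
# It must stay within the Torch Script subset of Python (tensor ops,
# ints, floats).
@torch.jit.script
def scaled_sum(x, y):
    return x * 2.0 + y

# Tracing: run the function once on example inputs and record the
# operations that were executed into a compiled graph.
def plain_add(x, y):
    return x + y

traced_add = torch.jit.trace(plain_add, (torch.ones(3), torch.ones(3)))

a = torch.arange(3, dtype=torch.float32)
b = torch.ones(3)
print(scaled_sum(a, b))  # matches the eager-mode result
print(traced_add(a, b))
```

Either way, the compiled function can be saved and later run without the Python interpreter in the loop.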

According to the Torch Script documentation, “[Torch Script] makes it possible to train models in PyTorch using familiar tools, and then export the model to a production environment where it is not a good idea to run models as Python programs for performance and multi-threading reasons.”

Torch Script’s approach echoes some of the other methods for developing high-performance software in Python. For example, Anaconda’s Numba library compiles specified functions to native code, using either just-in-time or ahead-of-time compilation. The Numba library can be used to generate code that runs without Numba itself present, but it has runtime dependencies on NumPy and Python generally.

Another commonly used package, Cython, allows Python to be turned incrementally into C by way of custom syntax declarations. Cython can work with the full range of Python and C types, as well as all of Python’s syntax. Torch Script, by contrast, is restricted to operations on PyTorch tensors, integers, and floating-point numbers, and can’t use constructs like exceptions.


Serdar Yegulalp is a senior writer at InfoWorld. A veteran technology journalist, Serdar has been writing about computers, operating systems, databases, programming, and other information technology topics for 30 years. Before joining InfoWorld in 2013, Serdar wrote for Windows Magazine, InformationWeek, Byte, and a slew of other publications. At InfoWorld, Serdar has covered software development, devops, containerization, machine learning, and artificial intelligence, winning several B2B journalism awards including a 2024 Neal Award and a 2025 Azbee Award for best instructional content and best how-to article, respectively. He currently focuses on software development tools and technologies and major programming languages including Python, Rust, Go, Zig, and Wasm. Tune into his weekly Dev with Serdar videos for programming tips and techniques and close looks at programming libraries and tools.
