Serdar Yegulalp
Senior Writer

TensorFlow unveils MLIR for faster machine learning

news
Apr 23, 2019 · 2 mins

Sublanguage project promises faster compilation and easier hardware optimization for high-performance machine learning models


Engineers working on Google’s TensorFlow machine learning framework have revealed a subproject, MLIR, that is intended to be a common intermediate language for machine learning frameworks.

MLIR, short for Multi-Level Intermediate Representation, will allow projects using TensorFlow and other machine learning libraries to be compiled to more efficient code that takes maximum advantage of underlying hardware. What’s more, MLIR could in time be used by compilers generally, extending its optimization benefits beyond machine learning projects.

MLIR isn’t a language like C++ or Python. It represents an intermediate compilation step between those higher-level languages and machine code. The compiler framework LLVM uses an intermediate representation, or IR, of its own. One of LLVM’s originators, Chris Lattner, is a co-creator of MLIR. Making MLIR an LLVM co-project could be a way to spread its adoption.
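To give a sense of what an IR at this level looks like, here is a small, hypothetical snippet in MLIR's textual format, based on the project's public specification; the function name and the use of a TensorFlow-dialect `tf.Add` op are illustrative assumptions, not taken from the article:

```mlir
// A function operating on 4-element float tensors.
func @simple_add(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  // A TensorFlow-dialect op embedded in MLIR's generic op syntax.
  %sum = "tf.Add"(%a, %b) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %sum : tensor<4xf32>
}
```

The point of the notation is that ops from different "dialects" (TensorFlow graphs, linear algebra, low-level LLVM operations) can all be expressed and transformed in the same framework, which is what lets one IR serve multiple compilation stages.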

In a slide presentation at the EuroLLVM conference earlier this month, Lattner and fellow Googler Tatiana Shpeisman explained how TensorFlow already generates multiple IRs internally, but that these disparate IRs don’t benefit from one another. MLIR provides a single, standard IR for all of those TensorFlow subsystems. TensorFlow is currently migrating to use MLIR internally.

Another benefit MLIR may provide is parallelized compilation. MLIR is designed to allow a compiler to work on different segments of code in parallel, allowing machine learning models—and other sorts of applications—to be pushed to production more quickly.

MLIR could provide other benefits to languages and frameworks outside machine learning. For example, LLVM-based languages like Swift and Rust have had to develop their own internal IRs, because many optimizations used in those languages can’t be expressed in LLVM. MLIR could provide a standard way to express those optimizations, which could in turn be reused across other languages.

The MLIR project is open source. An official specification is available for those who want to generate MLIR.


Serdar Yegulalp is a senior writer at InfoWorld. A veteran technology journalist, Serdar has been writing about computers, operating systems, databases, programming, and other information technology topics for 30 years. Before joining InfoWorld in 2013, Serdar wrote for Windows Magazine, InformationWeek, Byte, and a slew of other publications. At InfoWorld, Serdar has covered software development, devops, containerization, machine learning, and artificial intelligence, winning several B2B journalism awards including a 2024 Neal Award and a 2025 Azbee Award for best instructional content and best how-to article, respectively. He currently focuses on software development tools and technologies and major programming languages including Python, Rust, Go, Zig, and Wasm. Tune into his weekly Dev with Serdar videos for programming tips and techniques and close looks at programming libraries and tools.
