Serdar Yegulalp
Senior Writer

IBM’s machine learning server adds TensorFlow

news
Jan 26, 2017 | 2 mins

IBM's PowerAI system for machine learning on a mix of Power8 processors and Nvidia GPUs now supports Google's deep learning framework

Credit: Thinkstock

When Intel unveiled its BigDL machine learning framework, it emphasized CPU rather than GPU power. With this move, Intel hopes to keep its present and future hardware competitive in the expanding world of machine learning.

IBM is also interested in boosting the use of its CPUs for machine learning through PowerAI, though with a different strategy. It pairs IBM’s Power8 processors with Nvidia GPUs, a combination IBM says is a prime platform for running common machine learning frameworks like Caffe, Torch, and Theano.

Now IBM has added TensorFlow to the mix. Google’s deep learning framework spans both CPUs and GPUs, and IBM is pushing PowerAI as the perfect hardware blend to make the most of TensorFlow.
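TensorFlow’s ability to span CPUs and GPUs comes from explicit device placement: any operation can be pinned to a particular processor. Here is a minimal sketch of that mechanism, using `'/CPU:0'` so it runs anywhere; on hardware like a PowerAI box with Tesla P100s, `'/GPU:0'` would be the natural target (the device names here are illustrative, not specific to IBM’s systems).

```python
import tensorflow as tf

# Enumerate the devices TensorFlow can see (CPUs, and GPUs if present).
print(tf.config.list_physical_devices())

# Pin a computation to a specific device. On a GPU-equipped machine,
# swapping in '/GPU:0' moves the same graph onto the accelerator.
with tf.device('/CPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.eye(2)            # 2x2 identity matrix
    c = tf.matmul(a, b)      # runs on the pinned device

print(c.numpy())
```

Because the code is identical either way, the framework, rather than the application, decides how work is split across processors, which is what makes a tightly coupled CPU-GPU system attractive for it.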

While the Xeon Phi chip sits at the heart of Intel’s machine learning hardware push, PowerAI uses IBM’s Power8 processor with the Nvidia Tesla Pascal P100 GPU. Nvidia’s Pascal line of GPUs is well suited to next-generation machine learning servers, thanks to an instruction set tailored to those workloads. But machine learning applications have to be written specifically to target that instruction set, and many cloud-based machine learning services (such as AWS) don’t use Pascal hardware.

IBM’s plan is to provide a whole package: CPUs, GPUs, and software, all designed to complement each other. The GPUs, for instance, use Nvidia’s custom NVLink bus to provide high-speed GPU-to-GPU and GPU-to-CPU connections, which IBM has already featured in other servers aimed at the high-performance computing market.

IBM also wants a slightly narrower focus for PowerAI compared to some of its competition. Mainly, it’ll be used for training machine learning models from raw data. This is the slow, computation-intensive phase of any machine learning project, particularly for TensorFlow-powered deep learning applications. Thus, IBM emphasizes high-end CPUs and GPUs and is building out the hardware underneath to accelerate the entire stack. IBM’s PowerAI ambitions are also likely to be carried forward by the introduction of the Power9 processor later this year.

As always with IBM, the bigger question is the size of the prospective market for this iron. IBM’s issue isn’t that customers are more readily served by commodity Intel processors; having Linux on the Power platform has helped knock down that problem. Instead, it’s that businesses interested in machine learning have the luxury of opting for the cloud to obtain cheap, flexible access to GPU-powered services where TensorFlow and other frameworks are available.


Serdar Yegulalp is a senior writer at InfoWorld. A veteran technology journalist, Serdar has been writing about computers, operating systems, databases, programming, and other information technology topics for 30 years. Before joining InfoWorld in 2013, Serdar wrote for Windows Magazine, InformationWeek, Byte, and a slew of other publications. At InfoWorld, Serdar has covered software development, devops, containerization, machine learning, and artificial intelligence, winning several B2B journalism awards including a 2024 Neal Award and a 2025 Azbee Award for best instructional content and best how-to article, respectively. He currently focuses on software development tools and technologies and major programming languages including Python, Rust, Go, Zig, and Wasm. Tune into his weekly Dev with Serdar videos for programming tips and techniques and close looks at programming libraries and tools.
