Serdar Yegulalp
Senior Writer

Intel’s BigDL deep learning framework snubs GPUs for CPUs

news analysis
Jan 17, 2017

Why create a deep learning framework that doesn't use GPU acceleration by default? For Intel, it's part of a strategy to promote next-gen CPUs for machine learning


Last week Intel unveiled BigDL, a Spark-powered framework for distributed deep learning, available as an open source project. With most major IT vendors releasing machine learning frameworks, why not the CPU giant, too?

What matters most about Intel’s project may not be what it offers people building deep learning solutions on Spark clusters, but what it says about Intel’s ambitions to promote hardware that competes with GPUs for those applications.

Thinking big

BigDL is aimed at those who want to apply machine learning to data already available in Spark or Hadoop clusters, and who have perhaps already used libraries like Caffe or Torch. BigDL’s deep learning facilities are modeled on Torch’s, and models created in Caffe or Torch can be loaded into BigDL and run against Spark programs. Spark, in turn, lets those workloads scale out efficiently across a cluster.

However, unlike other machine learning frameworks that rely on GPU acceleration for speed, BigDL works with the Intel Math Kernel Library. This package of math functions is optimized for multithreaded execution and Intel-specific processor extensions, and it also ships with Intel’s Python distribution, among other products.

Intel claims processing in BigDL is “orders of magnitude faster than out-of-box open source Caffe, Torch, or TensorFlow on a single-node Xeon (i.e., comparable with mainstream GPU).” That said, the BigDL repository doesn’t have any detailed benchmarks to support this assertion.

If GPU acceleration is becoming the standard way for machine learning libraries to boost their speed, why would Intel not include GPU support by default? At first glance, the reason might seem to be that Spark hasn’t traditionally been a GPU-accelerated product. But that has started to change: IBM has a project along these lines, and commercial Spark provider Databricks added support for GPU-accelerated Spark on its service at the end of last year. In theory it’s possible to use BigDL with GPU-accelerated Spark, but Intel’s plans appear to run in a different vein.

Hardware wars

Intel has been itching to compete head to head with GPUs in high-end computing with its Xeon Phi processor lineup. Intel packaged Xeon Phi processors in the form factor of a GPU—a PCIe add-on card—and supports software tools like OpenMP and OpenCL for parallelizing code to run fast on that hardware. (Nervana, a machine learning hardware company acquired by Intel, will likely see its hardware delivered as PCIe add-ons as well.)

All this is meant to be a boon for developers: In theory, less work is involved in making existing software run well on Xeon Phi than in porting it to a GPU architecture. It’s also meant to appeal to ops teams, since systems built around Xeon Phi plug-in cards can be upgraded or expanded by simply swapping or adding cards, rather than replacing whole racks.

In this light, BigDL can be seen as one of many possible proof-of-concept applications that support Intel’s plans. But the momentum in the industry has long been toward GPUs—even if most software used for GPU acceleration involves a de facto standard created by another hardware maker (Nvidia and CUDA). In addition, with Spark and other libraries already enjoying GPU acceleration, developers don’t need to do as much work to leverage the benefits.

Intel could use a library like BigDL to its advantage, but machine learning will likely remain primarily GPU-powered for a long time to come.

