Serdar Yegulalp
Senior Writer

Why Google’s DeepMind next-gen machine learning will stay undercover

news
Mar 9, 2017 · 3 mins

Making neural networks that teach themselves is one of the current projects at Google's AI subsidiary -- but don't expect to see the source code any time soon


Google’s DeepMind, a London-based artificial intelligence company the search-and-cloud giant acquired in 2014, has been closely associated with Google’s quest to build general AI.

Earlier this year, with little publicity, DeepMind unveiled what looks like a useful step in that direction. The DeepMind team released a paper describing a neural network approach that would allow automatic “transfer learning,” meaning a neural network could reuse what it already “knows” on new problems.

If history is any hint, this is one innovation Google will keep close to its chest.

A fast learner

Neural networks are a common type of machine learning that mimic, perhaps distantly, how neurons interrelate in the brain to pass information around. Such networks are typically trained for one specific task, but the training for that task can’t be reused for other, similar tasks; you have to start from scratch with a new network. Sometimes new learning ends up trashing the existing learning, a problem known in AI research circles as “catastrophic forgetting.”
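The problem is easy to reproduce in miniature. The sketch below is purely illustrative (toy linear tasks and plain gradient descent, nothing from DeepMind's actual setup): a model fitted to task A, then retrained from the same weights on task B, loses most of its ability on A.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# Two toy tasks: each is a different linear mapping to learn.
w_a, w_b = rng.normal(size=d), rng.normal(size=d)
X = rng.normal(size=(100, d))
y_a, y_b = X @ w_a, X @ w_b

def train(w, X, y, steps=500, lr=0.1):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

loss = lambda w, y: float(np.mean((X @ w - y) ** 2))

w = np.zeros(d)
w = train(w, X, y_a)              # learn task A
loss_a_before = loss(w, y_a)      # near zero: task A is solved

w = train(w, X, y_b)              # now learn task B with the same weights
loss_a_after = loss(w, y_a)       # task A performance collapses

print(f"task A loss: {loss_a_before:.6f} -> {loss_a_after:.3f}")
```

Because the same weights must serve both tasks, fitting task B drags them away from the task A solution. That is the failure mode PathNet is meant to sidestep.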

PathNet, the project described in the paper, is an attempt to allow a neural network to reuse what it already knows in new contexts. Embedded within the network are agents, or paths through the network that “learn how best to re-use existing parameters in the environment of the neural network by executing actions within the neural network,” according to the paper.
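The idea can be sketched in a few dozen lines. This toy version is an assumption-laden stand-in, not the paper's method: the weight matrices are random rather than trained, the naive hill-climbing search stands in for PathNet's tournament selection of pathways, and all names and sizes are made up. What it does show is the core mechanism, a path that picks one module per layer, with modules claimed by an earlier task frozen so a later task can reuse them but not overwrite them.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy PathNet-style network: L layers, each holding M candidate modules.
# A "path" picks one module per layer; only modules on the path are used.
L, M, D = 3, 4, 8
modules = rng.normal(size=(L, M, D, D))   # one weight matrix per module
frozen = np.zeros((L, M), dtype=bool)     # modules locked by earlier tasks

def forward(x, path):
    """Run input x through the modules selected by `path`."""
    for layer, m in enumerate(path):
        x = np.tanh(modules[layer, m] @ x)
    return x

def mutate(path):
    """Propose a neighboring path by changing one layer's module choice."""
    new = path.copy()
    new[rng.integers(L)] = rng.integers(M)
    return new

def evolve_path(loss_fn, generations=50):
    """Naive hill climbing over paths (a stand-in for PathNet's
    tournament selection of pathways)."""
    path = rng.integers(M, size=L)
    best = loss_fn(path)
    for _ in range(generations):
        cand = mutate(path)
        score = loss_fn(cand)
        if score < best:
            path, best = cand, score
    return path

# Task A: find a good path, then freeze its modules so task B cannot
# overwrite them -- the mechanism that avoids catastrophic forgetting.
x0 = rng.normal(size=D)
target_a = rng.normal(size=D)
loss_a = lambda p: np.sum((forward(x0, p) - target_a) ** 2)
path_a = evolve_path(loss_a)
for layer, m in enumerate(path_a):
    frozen[layer, m] = True

# Task B may route through task A's frozen modules (reuse) or pick fresh
# ones; frozen weights would only be read, never updated.
target_b = rng.normal(size=D)
loss_b = lambda p: np.sum((forward(x0, p) - target_b) ** 2)
path_b = evolve_path(loss_b)
reused = sum(frozen[layer, path_b[layer]] for layer in range(L))
print(f"task B reuses {reused} of {L} frozen modules")
```

In the real system the modules on a winning path are trained, not random, and the freezing is what lets knowledge accumulate across tasks instead of being overwritten.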

Here’s a real-world analogy: With human intelligence, if you learn to beat one video game, you come away with strategies that help you beat other video games. In fact, that’s exactly the test DeepMind’s researchers threw at PathNet: they taught it to play Atari video games, a testbed DeepMind has used before in its neural network research.

With PathNet, DeepMind found much better transfer of skills between games than with previous approaches. A PathNet network trained on, say, River Raid became a better player of other video games, such as Centipede.

Black-box blues

DeepMind’s plans for PathNet — meaning, Google’s plans as well — aren’t only about moving up from beating Atari games to winning at Battlefield. “We also wish to try PathNet on other [real life] tasks which may be more suitable for transfer learning than Atari, for example continuous robotic control problems,” said DeepMind in the conclusion for the paper.

The long game is to build a portfolio of general strategies for making machine learning more self-guiding, more automatically intuitive, and less restrained by the need for human training.

What’s not likely to happen is for DeepMind’s work to go public along the lines of TensorFlow. For evidence, look to history: Most of the publicly announced innovations DeepMind has produced for Google have stayed within Google’s walls as proprietary technology — such as the work on reducing datacenter energy use. DeepMind has made some open source releases, but they’re mostly in-house tools, not actual machine learning frameworks.

The implication is that PathNet and any related innovations won’t be the basis for an open source project with Google sponsorship. Odds are they’ll be used for competitive advantage: enhancing Google’s internal operations or being offered as a service to paying customers.

That’s Google’s prerogative. But with many machine learning advances taking place in the open source domain, future innovations provided as black-box products are at greater risk of being labeled as “AI-washing.” These days, it’s becoming less appealing to build on top of something if you don’t really know how it works.
