Zero-trust data governance needed to protect AI models from slop

news
Jan 26, 2026 | 2 mins

Gartner recommends new measures to defend against AI-generated data in the enterprise.


Organizations need to be less trustful of data given how much of it is AI-generated, according to new research from Gartner.

As more enterprises jump on board the generative AI train — a recent Gartner survey found 84% expect to spend more on it this year — the risk grows that future large language models (LLMs) will be trained on outputs from previous models, increasing the danger of so-called model collapse.

To avoid this, Gartner recommends companies make changes to manage the risk of unverified data. These include appointing an AI governance leader to work closely with data and analytics teams; improving collaboration between departments through cross-functional groups that include representatives from cybersecurity, data, and analytics; and updating existing security and data management policies to address risks from AI-generated data.

Gartner predicts that by 2028, 50% of organizations will have adopted a zero-trust posture for data governance in response to this tidal wave of unverified AI-generated data.

“Organizations can no longer implicitly trust data or assume it was human generated,” Gartner managing VP Wan Fui Chan said in a statement. “As AI-generated data becomes pervasive and indistinguishable from human-created data, a zero-trust posture establishing authentication and verification measures is essential to safeguard business and financial outcomes.”

What makes matters even trickier to handle, said Chan, is that there will be different approaches to AI from governments. “Requirements may differ significantly across geographies, with some jurisdictions seeking to enforce stricter controls on AI-generated content, while others may adopt a more flexible approach,” he said.

Perhaps the best example of how AI can cause data governance issues came when Deloitte Australia had to refund part of a government contract fee after AI-generated errors, including non-existent legal citations, were found in its final report.

This article first appeared on CIO.

Maxwell Cooter

Maxwell began writing about technology in 1984, when mainframes ruled the world. Since then he has written for just about every business computing title in the UK, and for a few in the US, covering everything from artificial intelligence to zero-day exploits and all points in between. He has also been editor-in-chief of several award-winning titles, including Network Week, Techworld, and Cloud Pro, and a regular contributor to Whatsonstage.com. In his spare time he coaches a junior rugby team.