Humans don’t use most of their granted permissions, but agents will—and the results will be disastrous.

Persistent weak layers (PWLs) have plagued my backcountry skiing for the past 10 years. They’re about to mess up the industry’s IT security, too.

For those who don’t spend their early mornings skinning up mountains in Utah’s backcountry, a persistent weak layer, or PWL, is exactly what it sounds like. It’s a fragile layer of snow, often faceted crystals that form during cold and dry spells, which gets buried by subsequent storms. That PWL lies in wait for a trigger: perhaps a skier hitting a shallow rock band, a sudden spike in spring temperatures, or a heavy snowfall. At that moment, the entire slab above it shatters, slides, and, all too often, kills people.

Enterprise access control is built on its own version of a colossal PWL. For years, we’ve piled new roles, temporary privileges, and overly broad static profiles on top of an unmanaged foundation of dormant access. The structure has held up because people are relatively gentle triggers: We’re slow, easily distracted, and generally prefer to keep our jobs. But AI agents aren’t human skiers moving carefully down a slope. They’re a massive, rapid loading event, a trigger primed to spark an “avalanche” in your data center.

OK, computer?

This is the core takeaway from new research published by Oso and Cyera, which finally puts hard numbers to a problem that’s been visible but ignored for years. Their research analyzed 2.4 million workers and 3.6 billion application permissions, and the results should concern us. According to the Oso blind spot report, corporate workers completely ignore 96% of their granted permissions. Over a 90-day window, only 4% of granted permissions were ever actually exercised.
With sensitive enterprise data, it’s even worse: Workers touch only 9% of the sensitive data they can actually reach, and nearly one-third of users have the power to modify or delete sensitive data.

Seems OK, right? I mean, the fact that they’re not exercising their rights to certain applications or data isn’t a big deal, is it? So long as they don’t use what they have access to, we’re good. Right?

Nope. Maybe this isn’t an issue in a world where people plod about, ignoring their access rights. But when we add autonomous agents to the mix, things get problematic very, very fast. As I’ve argued, the enterprise AI problem isn’t just a matter of hallucinations. It’s really about permissions.

Humans act as a natural governor on permission sprawl. A marketing employee might technically have the right to view a million customer records but will only ever look at the 30 they need to finish their campaign for the quarter. The risk (the “persistent weak layer”) remains entirely dormant.

Agents remove that governor entirely. When an AI agent inherits a human user account, it inherits the entire permission surface, not just the tiny fraction the human actually used. Because agents operate continuously, chain actions across various systems, and execute whatever privileges they possess without hesitation, they turn latent permission debt into active operational risk. If an agent is told to clean up stale records and it happens to hold the dormant permission to modify the entire database, it will attempt to do exactly that.

Fixing permissions

This aligns perfectly with a drum I’ve been beating for years. Back in 2021, I wrote that authorization was rapidly becoming the most critical unresolved challenge in modern software architecture. A year later, I argued that identity and trust must be baked into the development life cycle, not bolted on by a separate security team right before launch.
More recently, I’ve pointed out that large language models demand a totally new approach to authorization, that boring governance is the only path to real AI adoption, and that the true challenge in agentic systems is building a robust AI control plane.

The smartest players in the space are already treating this as table stakes. In its framework for trustworthy agents, Anthropic explicitly notes that systems like Claude Code default to read-only access and require human approval before modifying code or infrastructure. Microsoft offers similar guidance, warning against overprivileged applications and demanding tightly scoped service accounts. They understand that in the age of autonomous software, the old assumption that an application probably won’t use a dormant permission is foolish.

The problem won’t stay neatly confined to a single SaaS application, either. We’re already dealing with a world where nonhuman identities are proliferating rapidly. A 2024 industry report from CyberArk notes that machine identities now outnumber human identities by massive margins, often 80 to 1 or higher. A huge chunk of those machine identities have privileged or sensitive access, and most organizations completely lack identity security controls for AI.

Read-only as a default

So, how do we fix the PWL before the avalanche hits? This isn’t something you solve with a clever prompt, a larger context window, or a new foundation model. It’s an architecture problem.

Putting aside the overprovisioned humans (that’s a separate blog post), we can curtail agentic misuse of permissions by building golden paths where the default state for any new AI agent is strictly read-only. We have to stop the reckless, albeit convenient, practice of letting an agent inherit a broad employee account just to make a pilot project work faster for a sprint demo. Agents require purpose-built identities with aggressively minimal permissions.
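To make the idea concrete, here is a minimal sketch of what a purpose-built, read-only-by-default agent identity could look like. Everything here is a hypothetical assumption for illustration: the `AgentIdentity` class, the `resource:action` scope strings, and the grant names are not any vendor's actual API.

```python
# Sketch: an agent identity whose default state is strictly read-only.
# All names here are illustrative assumptions, not a real IAM product.

READ_ACTIONS = {"read", "list", "get"}

class AgentIdentity:
    def __init__(self, name, extra_grants=frozenset()):
        self.name = name
        # Destructive scopes must be granted explicitly, one by one.
        # Nothing is inherited from a human employee account.
        self.extra_grants = set(extra_grants)

    def allowed(self, resource: str, action: str) -> bool:
        if action in READ_ACTIONS:
            return True  # the golden path: reads are always permitted
        # Writes and deletes require an explicit, narrowly scoped grant.
        return f"{resource}:{action}" in self.extra_grants

# A pilot-project agent gets no write access at all by default.
pilot = AgentIdentity("report-bot")
print(pilot.allowed("customers", "read"))    # permitted by default
print(pilot.allowed("customers", "delete"))  # denied: never granted

# A production agent gets exactly the one destructive scope it needs.
curator = AgentIdentity("curator-bot", extra_grants={"records:update"})
print(curator.allowed("records", "update"))  # explicitly granted
print(curator.allowed("records", "delete"))  # still denied
```

The point of the sketch is the asymmetry: reads ride the default path, while every destructive scope is an individually enumerated exception rather than something inherited wholesale from a human account.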
If 96% of a human user’s access goes unused anyway, we can’t grant that excess access to a machine. We need environments where the ability to draft an action and the ability to execute it are entirely separate permissions. We need explicit approvals for any destructive action, and we need every single automated action logged and fully reversible.

We spend so much time debating the intelligence of these new models while we ignore the ground they walk on. AI agents aren’t creating a brand-new authorization crisis. They’re simply exposing the persistent weak layer we’ve been ignoring for years. We tolerated bloated roles and static profiles because humans were slow enough to keep the damage theoretical. Agents make it concrete. Hopefully, they’ll also make us pay attention to authorization in ways we largely haven’t.
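One last sketch of the draft/execute split described above: the agent holds only the permission to propose a destructive action, a separate approver permission is required to run it, and every step lands in an append-only log. The function names, the in-memory `AUDIT_LOG`, and the approver string are all hypothetical assumptions, not any real product's interface.

```python
# Sketch: "draft" and "execute" as entirely separate permissions,
# with an audit trail. Illustrative only; not a real authorization API.
import uuid

AUDIT_LOG = []   # in production: append-only, tamper-evident storage
PENDING = {}     # drafted actions awaiting approval

def draft(agent: str, action: str, target: str) -> str:
    """The only permission the agent holds: proposing an action."""
    draft_id = str(uuid.uuid4())
    PENDING[draft_id] = (agent, action, target)
    AUDIT_LOG.append(("draft", draft_id, agent, action, target))
    return draft_id

def execute(draft_id: str, approver: str) -> bool:
    """Execution requires a distinct, separately held approval permission."""
    if draft_id not in PENDING:
        return False  # unknown, or already executed: no replays
    agent, action, target = PENDING.pop(draft_id)
    AUDIT_LOG.append(("approved", draft_id, approver))
    AUDIT_LOG.append(("executed", draft_id, agent, action, target))
    return True

# The cleanup agent can propose the deletion, but nothing runs
# until an explicit approval arrives from a different identity.
d = draft("cleanup-agent", "delete", "stale_records")
print(execute(d, approver="alice"))  # approved once: runs
print(execute(d, approver="alice"))  # replay attempt: refused
```

Reversibility would sit on top of this: because every executed action is in the log with its target, a compensating action can be drafted and approved through the same path.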