Two Kubernetes creators are applying their expertise to agentic AI, helping it become governable, portable, observable, and ‘boring.’
I’ve been arguing for a while now that enterprise AI won’t really take off until it gets boring. Not boring in the sense of uninspired; no, I mean boring in the sense that enterprises can trust it, govern it, observe it, and hand it to rank-and-file employees without undue concern that things will go wrong.
We have no shortage of over-funded startups clamoring to be the next big thing in AI, but not nearly enough that are quietly doing the essential work to make AI safe for enterprise consumption. Enter Stacklok.
On the surface, this might look like yet another startup trying to surf the AI agent wave. It’s not. Stacklok is exciting precisely because its executive team is deeply experienced in being unexciting. Back at Google, Craig McLuckie and Joe Beda were instrumental in the creation of Kubernetes. They took the messy, chaotic world of container orchestration and built an abstraction layer that made it “boring” enough that the largest banks, telcos, and retailers in the world could rely on it with confidence. Now they’re bringing that ability to wring order out of chaos to agentic AI, and they recognize that the real problem in enterprise AI has more to do with operational accountability than model quality.
I interviewed McLuckie and Beda to better understand the opportunity to create a “Kubernetes moment” in agentic AI.
Targeting accountability
McLuckie founded Stacklok in early 2023. Beda, his Kubernetes and later Heptio counterpart, had “semi-retired” in 2022. Beda doesn’t need to make more money, and he’s not joining out of nostalgia. As he tells it, this is “an extraordinary moment in the industry,” with “an opportunity to bring deep expertise in developer platforms and enterprise-grade infrastructure” to solving key enterprise problems.
“The biggest problem,” McLuckie says, “is accountability.” He explains: “An agent, no matter how sophisticated, no matter how capable, no matter how useful, cannot be held accountable for the work it undertakes.” That’s exactly right. A large language model can write code, summarize a contract, file a ticket, or trigger a workflow, but if it mangles customer data, oversteps its permissions, or keeps running after the employee who launched it has left the company, nobody gets to shrug and blame the model. The enterprise still owns the outcome.
Even OpenAI, which has been slower to take the enterprise seriously than Anthropic, now recognizes that enterprises need AI to fit inside workflows, controls, deployment models, and day-to-day operations. It’s no longer just about raw model prowess, as Tom Krazit writes. In other words, the market is slowly rediscovering what infrastructure people have known for a long time: Enterprises may buy capability, but they deploy control.
A related issue, according to Beda, is that AI’s speed changes everything. Tasks that used to take a human days or weeks may soon be completed in minutes by an agent. That doesn’t just create productivity. It creates scale, and scale turns manageable sloppiness into operational disaster. As he puts it, “The volume dial is going to 11 across the board.” I recently said that humans don’t use most of their granted permissions, but agents will. That’s exactly why identity, authorization, and auditability suddenly stop being problems for the security team and become architecture.
This is where the Kubernetes analogy is actually useful, rather than just founder mythmaking.
AI’s Kubernetes moment
Too many people remember Kubernetes as a container story. Enterprises embraced it for a more practical reason: It gave them a common operating model across environments, plus an ecosystem of policy, security, observability, and workflow tools layered on top. The Cloud Native Computing Foundation now says 82% of container users run Kubernetes in production, and the organization explicitly frames Kubernetes as the operating system for AI. In our interview, McLuckie describes Kubernetes’ deeper contribution as “self-determination.” That is, it gave enterprises a consistent substrate on premises, at the edge, and in the cloud. That consistency is what helped an ecosystem flourish around it.
Beda goes one step further: “One of the core ideas in Kubernetes is that you describe what you want to happen, and then you have the system go make it happen.” This, he says, means that Kubernetes is essentially “control theory rendered into software.” Over time, an enterprise’s desired state moves into code, into version control, and into systems traceable back to accountable humans. Nerdy and sort of dull? Sure. But that’s the point. Enterprise AI doesn’t just need smarter models. It needs systems where humans declare intent, machines execute it, and the whole mess remains observable and auditable.
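The declarative pattern Beda describes can be sketched in a few lines. This is a deliberately toy reconciliation loop, not Kubernetes or Stacklok code; the `reconcile` function and the replica-count states are illustrative assumptions meant only to show the shape of "desired state in, corrective actions out."

```python
# Toy reconciliation loop: the core pattern behind Kubernetes controllers.
# All names and states here are illustrative, not any real API.

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Compare desired state to actual state; return corrective actions."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(f"scale up {name}: {have} -> {want}")
        elif have > want:
            actions.append(f"scale down {name}: {have} -> {want}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")  # not declared, so it goes away
    return actions

# Desired state lives in version control, traceable to an accountable human.
# The loop's only job is to converge reality toward it, over and over.
desired = {"web": 3, "worker": 2}
actual = {"web": 1, "orphan": 1}
print(reconcile(desired, actual))
```

The humans own the declaration; the loop owns the execution; the diff between the two is inherently observable. That separation is the "control theory rendered into software" that Beda is pointing at.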
This is why I keep insisting that the biggest strategic question in agentic AI isn’t whether agents are cool. They are—or at least they can be. No, the real question is who owns the control plane. Stacklok matters because it is explicitly aiming at that layer. The company’s bet is that enterprises want to run and manage Model Context Protocol–based agent infrastructure on the Kubernetes they already know. They want policy, identity, isolation, and observability built in, not bolted on afterwards.
That last part matters because MCP is important, but it isn’t enough. Anthropic introduced MCP in November 2024 as an open standard for connecting AI systems to tools and data, and later donated it to the Linux Foundation’s Agentic AI Foundation to keep it neutral and community-driven. It worked. Anthropic reports there are now more than 10,000 active public MCP servers and support across ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code.
That’s awesome, but it’s also not enough. Why? Because a protocol isn’t a platform. A protocol can help an agent talk to a tool, but it doesn’t, by itself, tell an enterprise who approved that agent, what data it can touch, how its actions are logged, or how to shut it down safely when the human who launched it has left the company.
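To make that gap concrete, here is a minimal sketch of the kind of control layer a protocol alone doesn’t give you: every tool call an agent attempts is checked against an approval list and the status of the human who launched it, and every attempt is logged. Everything here (`PolicyGate`, the tool names, the log format) is a hypothetical illustration, not MCP, Kubernetes, or Stacklok functionality.

```python
# Illustrative policy-and-audit wrapper around an agent's tool calls.
# All names are hypothetical; MCP itself defines none of this layer.
import time

class PolicyGate:
    def __init__(self, approved_tools: set, owner_active: bool):
        self.approved_tools = approved_tools  # what this agent may touch
        self.owner_active = owner_active      # is the launching human still employed?
        self.audit_log = []                   # every attempt, allowed or not

    def call(self, agent_id: str, tool: str, args: dict) -> str:
        entry = {"ts": time.time(), "agent": agent_id, "tool": tool, "args": args}
        if not self.owner_active:
            entry["decision"] = "denied: owner deprovisioned"
        elif tool not in self.approved_tools:
            entry["decision"] = "denied: tool not approved"
        else:
            entry["decision"] = "allowed"
        self.audit_log.append(entry)          # log before anything executes
        return entry["decision"]

gate = PolicyGate(approved_tools={"crm.read"}, owner_active=True)
print(gate.call("agent-42", "crm.read", {"account": "acme"}))
print(gate.call("agent-42", "crm.delete", {"account": "acme"}))
```

Note what lives outside the protocol: who approved the tool list, the tie back to a human identity, and the audit trail. That surrounding machinery is the platform work.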
Meeting users where they are
That’s where Stacklok’s self-hosted, Kubernetes-native bias starts to look smart rather than stodgy. (Though, again, “stodgy” isn’t a bad thing for risk-averse enterprises.) McLuckie is blunt: “If you’re an enterprise connecting agents to sensitive data, you are almost certainly not comfortable with that data egressing your security domain or being sent to a SaaS endpoint that a vendor controls.” We’ve seen this movie before. When your hosting, identity, tool integration, and policy layers all belong to the same vendor, “choice” starts to mean “replatform.”
No one wants that.
This is also where open source matters, though not in the simplistic sense that open source automatically wins. It doesn’t. Enterprises don’t buy ideology; they buy simplicity. But in a young market, they also value leverage. I’ve written before that open source doesn’t magically redistribute market power. What it can do is give customers options and some control over their fate. In AI, where model switching costs are still relatively low, that optionality matters. Talking with McLuckie and Beda, it’s clear they are open source true believers, but not obnoxiously so. That’s good, because enterprises don’t need a sermon on openness; they just need enough neutrality to avoid getting trapped while the market is still changing underneath them.
It’s all about meeting enterprises where they are and helping them to incrementally move to where they’d like to be. As McLuckie stresses, most enterprise AI teams are being asked to deliver more with AI while running with flat or capped headcount. They don’t need and can’t implement a grand theory of some idealized, fully autonomous enterprise. Instead, they need an accretive (golden) path from here to there using things they already understand, such as containers, isolation, OpenTelemetry, Kubernetes, existing identity systems, and existing observability stacks.
Sound boring? Good!
The opposite of “boring” in enterprise AI isn’t innovation. It’s slideware or demoware that looks great in a keynote but dies on contact with procurement, security review, compliance, and the first ugly bit of enterprise data. McLuckie captures this perfectly: “Vibe-coding a platform for two weeks can produce something plausible. It won’t produce something accurate, hardened, or enterprise-grade.”
Will Stacklok be the company that defines this layer? It’s way too early to say. Markets this young are littered with smart people who were directionally right and commercially wrong. But the company is aiming at the right problem, and that already puts it ahead of a depressingly large percentage of the AI industry.
Again, the next era of enterprise AI will be won by whoever makes agents governable, portable, observable, and boring enough to trust. Kubernetes helped do that for cloud-native infrastructure. Stacklok is betting the same playbook can work for agentic infrastructure. That’s not a nostalgic rerun of Kubernetes. It’s a recognition that enterprises still need what they’ve always needed: not more magic, but a way to control it.