by Abigail Wall

For agentic AI, other disciplines need their own Git

opinion
Jan 15, 2026 | 5 mins

Git provides the structure that makes agentic workflows in software engineering viable. Other disciplines need an equivalent backbone.

Credit: patpitchaya / Shutterstock

Software engineering didn’t adopt AI agents faster because engineers are more adventurous or because the use case was better. Engineers adopted them more quickly because they already had Git.

Long before AI arrived, software development had normalized version control, branching, structured approvals, reproducibility, and diff-based accountability. These weren’t conveniences. They were the infrastructure that made collaboration possible. When AI agents appeared, they fit naturally into a discipline that already knew how to absorb change without losing control.

Other disciplines now want similar leverage from AI agents. But they are discovering an uncomfortable truth: without a Git-equivalent backbone, AI doesn’t compound. It destabilizes.

What these disciplines need is not a literal code repository, but a shared operational substrate: a canonical artifact, fine-grained versioning, structured workflows, and an agreed-upon way to propose, review, approve, and audit changes.

Consider a simple example. Imagine a product marketing team using an AI agent to maintain competitive intelligence. The agent gathers information, synthesizes insights, and updates a master brief used by sales and leadership. This seems straightforward—until the agent edits the document.

In software, Git handles this effortlessly. Every change has a branch. Every branch produces a diff. Every diff is reviewed. Every merge is recorded. Every version is reproducible. Agents can propose changes safely because the workflow itself enforces isolation and accountability.

Life without version control

For the marketing team, no such backbone exists. If the agent overwrites a paragraph, where is the diff? If it introduces a factual error, where is the audit trail? If leadership wants to revert to last week’s version, what does that even mean? The lack of structure turns AI agents into risks.
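To make the missing backbone concrete, here is a minimal, illustrative sketch of what even rudimentary versioning for the marketing brief would provide. It is a toy built on Python’s standard difflib, not a real product; all names are invented for the example.

```python
from difflib import unified_diff

# Toy version store for a single document (illustrative only).
history = []

def commit(doc: str) -> int:
    """Record a new version of the brief and return its version number."""
    history.append(doc)
    return len(history) - 1

def diff(a: int, b: int) -> str:
    """Show exactly what changed between two recorded versions."""
    return "".join(unified_diff(
        history[a].splitlines(keepends=True),
        history[b].splitlines(keepends=True),
        fromfile=f"v{a}", tofile=f"v{b}",
    ))

def revert(v: int) -> str:
    """Recover the exact text of an earlier version."""
    return history[v]

# The agent edits the brief; every change leaves a reviewable trail.
v0 = commit("Acme leads the mid-market segment.\n")
v1 = commit("Acme trails the mid-market segment.\n")
print(diff(v0, v1))  # the diff leadership never had
print(revert(v0))    # "last week's version" now has a precise meaning
```

Even this 20-line toy answers the three questions above: the diff shows what the agent changed, the history is the audit trail, and revert makes “last week’s version” well defined.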

This is why Git matters. Not because it is clever, but because it enforces process discipline: explicit change control, durable history, isolated work, and reproducibility. It created a shared contract for collaboration that made modern software engineering possible, and made agentic workflows in software engineering viable.

Other disciplines need structures that mirror these properties.

Take architecture or urban planning. Teams want AI agents to update simulations, explore zoning scenarios, or annotate design models. But without a versioning protocol for spatial artifacts, changes become opaque. An agent that modifies a zoning scenario without a traceable change set is effectively unreviewable.

Or consider finance. Analysts want agents to maintain models, update assumptions, and draft memos. Yet many organizations lack a unified way to version models, track dependencies, and require approvals. Without that substrate, automation introduces new failure modes instead of leverage.

At this point, the Git analogy feels strong—but it has limits.

Software is unusually forgiving of mistakes. A bad commit can be reverted. A merge can be blocked. Even a production outage usually leaves behind logs and artifacts. Version management works in part because the world it governs is reversible.

Many other disciplines are not.

Pulling irreversible levers

Consider HR. Imagine an organization asking an AI agent to terminate a vendor contract with “Joe’s Plumbing.” The agent misinterprets the context and instead terminates the employment of a human employee named Joe Plummer. There is no pull request. No staging environment. No clean revert. Payroll is cut, access is revoked, and legal exposure begins immediately. Even if the error is caught minutes later, the damage is already real.

This is the critical distinction. In non-code domains, actions often escape the system boundary. They trigger emails, revoke credentials, initiate payments, or change legal status. Version history can explain what happened, but it cannot undo it.

This means a Git-style model is necessary, but insufficient.

Applying diffs, approvals, and history without respecting execution boundaries creates a false sense of safety. In these domains, agents must be constrained not just by review workflows, but by strict separation between proposal and execution. Agents should prepare actions, simulate outcomes, and surface intent—without directly pulling irreversible levers.
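One way to sketch that proposal/execution separation: the agent produces a structured proposal, and an execution gate refuses to run anything irreversible without explicit human approval. This is a hypothetical pattern for illustration; the `ProposedAction` type and approval gate are invented here, not taken from any real framework.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    intent: str          # human-readable statement of what the agent wants to do
    target: str          # the entity the action applies to
    irreversible: bool   # does this action escape the system boundary?
    approved: bool = False

def execute(action: ProposedAction, run):
    """Run an action only if its risk profile permits it."""
    if action.irreversible and not action.approved:
        raise PermissionError(
            f"Refusing irreversible action without approval: {action.intent}"
        )
    return run(action)

# The agent may *prepare* the termination, but cannot pull the lever itself.
proposal = ProposedAction(
    intent="Terminate vendor contract",
    target="Joe's Plumbing",
    irreversible=True,
)
# execute(proposal, ...) raises PermissionError at this point.
# Only after a human reviews the surfaced intent does execution proceed:
proposal.approved = True
result = execute(proposal, lambda a: f"terminated: {a.target}")
```

The design choice is that approval lives outside the agent: the agent can populate every field except `approved`, so the human review step cannot be skipped by a misinterpreted prompt.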

Several patterns from software translate cleanly: durable history creates accountability; branching protects the canonical state; structured approvals are the primary mechanism of resilience; reproducibility enables auditing and learning.

Disciplines that lack these properties will struggle to govern AI agents. Tools alone won’t fix this. They need norms, repeatable processes, and artifact structure—in short, their own Git, adapted to their risk profile.

The lesson from software is not that AI adoption is easy. It is that adoption is procedural before it is technical. Git quietly orchestrates isolation, clarity, history, and review. Every discipline that wants similar gains will need an equivalent backbone—and, where mistakes are irreversible, a way to keep the genie in the bottle.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.

Abigail Wall
Contributor

Abigail Wall is an AI product and GTM leader at Runloop.AI, where she drives development of infrastructure and tools powering next-generation AI agents. She holds an M.S. in computational data analytics from Georgia Institute of Technology and an MBA from Darden School of Business. With strategic experience at BCG and more than 16 years in ownership and founding roles across the retail and fintech industries, Wall applies her analytical expertise to solve complex, real-world problems by leveraging machine learning across diverse sectors. Her unique combination of deep data science education, strategic consulting experience, and hands-on operational leadership allows her to evaluate AI agents and agentic workflows from both technical and practical implementation perspectives.