Cloudflare’s Dynamic Workers aim to simplify how enterprises execute AI-generated code, signaling a shift toward lightweight, on-demand runtimes for agent-driven workloads.

Cloudflare has rolled out Dynamic Workers, an isolate-based runtime designed to run AI-generated code faster and more efficiently than traditional containers, as the company pushes lightweight, disposable execution environments as a foundation for enterprise AI applications.

The service enables enterprises to spin up execution environments in milliseconds, marking a move away from container-heavy architectures toward more ephemeral runtimes designed for high-volume AI agent workloads.

For many enterprises, this points to a shift in how AI systems are built and executed. Instead of orchestrating predefined tools, organizations are beginning to let models generate and execute code on demand, a change that raises new questions around security and cost.

Built on Cloudflare’s existing Workers platform, Dynamic Workers uses V8 isolates to execute code generated at runtime, often by LLMs, without requiring a full container or virtual machine.

“An isolate takes a few milliseconds to start and uses a few megabytes of memory,” Cloudflare said in a blog post. “That’s around 100x faster and 10x-100x more memory efficient than a typical container. That means that if you want to start a new isolate for every user request, on-demand, to run one snippet of code, then throw it away, you can.”

Cloudflare is pairing the runtime with its “Code Mode” approach, which encourages models to write short TypeScript functions against defined APIs instead of relying on multiple tool calls, a method the company says can reduce token usage and latency.

From an enterprise perspective, the platform includes controls such as outbound request interception for credential management, automated code scanning, and rapid rollout of V8 security patches.
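To illustrate the Code Mode idea, here is a minimal TypeScript sketch: instead of the model making several separate tool-call round trips (fetch a user, fetch their orders, aggregate), it generates one short function against a defined API surface and the runtime executes it once. The API names (`api.getUser`, `api.getOrders`) and stubbed data are illustrative assumptions, not Cloudflare's actual interfaces.

```typescript
interface Order { id: string; total: number }

// A defined API surface the model is allowed to write code against.
// Stubbed with fixed data here so the sketch is self-contained.
const api = {
  async getUser(id: string): Promise<{ id: string; name: string }> {
    return { id, name: "Ada" }; // stand-in for a real lookup
  },
  async getOrders(userId: string): Promise<Order[]> {
    return [{ id: "o1", total: 40 }, { id: "o2", total: 60 }];
  },
};

// The kind of snippet an LLM might generate at runtime: one function that
// composes several API calls locally, replacing multiple tool-call exchanges
// (and the tokens each round trip would consume).
async function orderSummary(userId: string) {
  const user = await api.getUser(userId);
  const orders = await api.getOrders(user.id);
  const total = orders.reduce((sum, o) => sum + o.total, 0);
  return { name: user.name, count: orders.length, total };
}
```

In a tool-calling loop, each of those API calls would be a separate model turn; here the intermediate results never leave the isolate.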
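The outbound request interception control can be sketched in the same spirit: AI-generated code calls an ordinary fetch-like function, and the host wraps that function so credentials are injected at the boundary, meaning the untrusted snippet never holds the secret. This is a hedged illustration of the pattern only; the wrapper shape, names, and stub "network" below are assumptions, not Cloudflare's actual mechanism.

```typescript
type Fetcher = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ url: string; headers: Record<string, string> }>;

// Stub "network" that echoes back what it was asked to send, so the
// sketch is self-contained and the injected header is observable.
const stubNetwork: Fetcher = async (url, init = {}) => ({
  url,
  headers: init.headers ?? {},
});

// The host composes this wrapper and hands only the wrapped function to
// generated code; the token lives in the host's closure, not in the
// snippet's scope.
function withInjectedCredentials(inner: Fetcher, token: string): Fetcher {
  return (url, init = {}) =>
    inner(url, {
      ...init,
      headers: { ...(init.headers ?? {}), Authorization: `Bearer ${token}` },
    });
}

const guardedFetch = withInjectedCredentials(stubNetwork, "secret-token");
```

The same interception point is also where outbound traffic can be logged or blocked, which is the governance hook the article's analysts call for.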
Cloudflare noted that isolate-based sandboxes have different security characteristics than hardware-backed environments.

Dynamic Workers are available in open beta under Cloudflare’s Workers paid plan. Pricing is set at $0.002 per unique Worker loaded per day, in addition to standard CPU and invocation charges, though the per-Worker fee is waived during the beta period.

Enterprise runtime implications

For enterprise IT teams, the move to isolate-based execution could reshape how AI workloads are architected, especially for use cases that demand high concurrency and low-latency performance.

“Cloudflare is essentially looking to redefine the application lifecycle by pivoting away from the traditional ‘build-test-deploy’ cycle on centralized servers, which often relies on high-overhead, latency-heavy containers,” said Neil Shah, VP for research at Counterpoint Research. “The move to V8 reduces startup times from around 500 ms to under 5 ms, a roughly 100x improvement, making it significant for bursts of agentic AI requests that may require cold starts.”

This shift could also have cost implications. If AI agents can generate and execute scripts locally to produce outcomes, rather than repeatedly calling LLMs, enterprises may see improvements in both efficiency and latency.

However, Shah noted that the model introduces new security considerations that enterprise leaders cannot ignore.

“Allowing AI agents to generate and execute code on the fly introduces a new attack vector and risk,” Shah said. “While Dynamic Workers are sandboxed to limit the impact of a potential compromise, the unpredictability of AI-generated logic requires a robust security framework and clear guardrails.”

Others say these risks extend beyond sandboxing and require broader governance across the AI execution lifecycle. Nitish Tyagi, principal analyst at Gartner, said that while isolate-based environments improve containment, they do not eliminate risk.
“Running an AI agent and executing code in an isolated environment may seem very safe in theory, but it doesn’t ensure complete safety,” Tyagi said.

He pointed to risks such as vulnerabilities in AI-generated code, indirect prompt-injection attacks, and supply-chain threats, in which compromised external sources could lead agents to expose sensitive data or execute harmful actions.

Tyagi also warned of operational risks, including the possibility of autonomous agents entering recursive execution loops, which can drive cost escalation and resource exhaustion.

To mitigate these risks, Tyagi said enterprises need stronger governance mechanisms, including real-time monitoring of agent behavior, tighter control over outbound traffic, and better visibility into AI supply chains and dependencies.