
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 17:50:01 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Project Think: building the next generation of AI agents on Cloudflare]]></title>
            <link>https://blog.cloudflare.com/project-think/</link>
            <pubDate>Wed, 15 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing a preview of the next edition of the Agents SDK — from lightweight primitives to a batteries-included platform for AI agents that think, act, and persist.
 ]]></description>
            <content:encoded><![CDATA[ <p>Today, we're introducing Project Think: the next generation of the <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>. Project Think is a set of new primitives for building long-running agents (durable execution, sub-agents, sandboxed code execution, persistent sessions) and an opinionated base class that wires them all together. Use the primitives to build exactly what you need, or use the base class to get started fast.</p><p>Something happened earlier this year that changed how we think about AI. Tools like <a href="https://github.com/badlogic/pi-mono"><u>Pi</u></a>, <a href="https://github.com/openclaw"><u>OpenClaw</u></a>, <a href="https://docs.anthropic.com/en/docs/agents"><u>Claude Code</u></a>, and <a href="https://openai.com/codex"><u>Codex</u></a> proved a simple but powerful idea: give an LLM the ability to read files, write code, execute it, and remember what it learned, and you get something that looks less like a developer tool and more like a general-purpose assistant.</p><p>These coding agents aren't just writing code anymore. People are using them to manage calendars, analyze datasets, negotiate purchases, file taxes, and automate entire business workflows. The pattern is always the same: the agent reads context, reasons about it, writes code to take action, observes the result, and iterates. Code is the universal medium of action.</p><p>Our team has been using these coding agents every day. And we kept running into the same walls:</p><ul><li><p><b>They only run on your laptop or an expensive VPS:</b> there's no sharing, no collaboration, no handoff between devices.</p></li><li><p><b>They're expensive when idle</b>: a fixed monthly cost whether the agent is working or not. 
Scale that to a team, or a company, and it adds up fast.</p></li><li><p><b>They require management and manual setup</b>: installing dependencies, managing updates, configuring identity and secrets.</p></li></ul><p>And there's a deeper structural issue. Traditional applications serve many users from one instance. As mentioned in our Welcome to Agents Week post, <a href="https://blog.cloudflare.com/welcome-to-agents-week/"><u>agents are one-to-one</u></a>. Each agent is a unique instance, serving one user, running one task. A restaurant has a menu and a kitchen optimized to churn out dishes at volume. An agent is more like a personal chef: different ingredients, different techniques, different tools every time.</p><p>That fundamentally changes the scaling math. If a hundred million knowledge workers each use an agentic assistant at even modest concurrency, you need capacity for tens of millions of simultaneous sessions. At current per-container costs, that's unsustainable. We need a different foundation.</p><p>That's what we've been building.</p>
    <div>
      <h2>Introducing Project Think</h2>
      <a href="#introducing-project-think">
        
      </a>
    </div>
    <p>Project Think ships a set of new primitives for the Agents SDK:</p><ul><li><p><b>Durable execution</b> with fibers: crash recovery, checkpointing, automatic keepalive</p></li><li><p><b>Sub-agents</b>: isolated child agents with their own SQLite and typed RPC</p></li><li><p><b>Persistent sessions</b>: tree-structured messages, forking, compaction, full-text search</p></li><li><p><b>Sandboxed code execution</b>: Dynamic Workers, codemode, runtime npm resolution</p></li><li><p><b>The execution ladder</b>: workspace, isolate, npm, browser, sandbox</p></li><li><p><b>Self-authored extensions</b>: agents that write their own tools at runtime</p></li></ul><p>Each of these is usable directly with the Agent base class. Build exactly what you need with the primitives, or use the Think base class to get started fast. Let's look at what each one does.</p>
    <div>
      <h2>Long-running agents</h2>
      <a href="#long-running-agents">
        
      </a>
    </div>
    <p>Agents, as they exist today, are ephemeral. They run for a session, tied to a single process or device, and then they are gone. A coding agent that dies when your laptop sleeps, that’s a tool. An agent that persists — that can wake up on demand, continue work after interruptions, and carry forward the state without depending on your local runtime — that starts to look like infrastructure. And it changes the scaling model for agents completely.</p><p>The Agents SDK builds on <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> to give every agent an identity, persistent state, and the ability to wake on message. This is the <a href="https://en.wikipedia.org/wiki/Actor_model"><u>actor model</u></a>: each agent is an addressable entity with its own SQLite database. It consumes zero compute when hibernated. When something happens (an HTTP request, a WebSocket message, a scheduled alarm, an inbound email) the platform wakes the agent, loads its state, and hands it the event. The agent does its work, then goes back to sleep.</p><table><tr><th><p>
</p></th><th><p><b>VMs / Containers</b></p></th><th><p><b>Durable Objects</b></p></th></tr><tr><td><p><b>Idle cost</b></p></td><td><p>Full compute cost, always</p></td><td><p>Zero (hibernated)</p></td></tr><tr><td><p><b>Scaling</b></p></td><td><p>Provision and manage capacity</p></td><td><p>Automatic, per-agent</p></td></tr><tr><td><p><b>State</b></p></td><td><p>External database required</p></td><td><p>Built-in SQLite</p></td></tr><tr><td><p><b>Recovery</b></p></td><td><p>You build it (process managers, health checks)</p></td><td><p>Platform restarts, state survives</p></td></tr><tr><td><p><b>Identity / routing</b></p></td><td><p>You build it (load balancers, sticky sessions)</p></td><td><p>Built-in (name → agent)</p></td></tr><tr><td><p><b>10,000 agents, each active 1% of the time</b></p></td><td><p>10,000 always-on instances</p></td><td><p>~100 active at any moment</p></td></tr></table><p>This changes the economics of running agents at scale. Instead of "one expensive agent per power user," you can build "one agent per customer" or "one agent per task" or "one agent per email thread." The marginal cost of spawning a new agent is effectively zero.</p>
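<p>The last row of the table is just expected-value arithmetic. A quick illustrative sketch (the numbers are ours, not an SDK API):</p>

```typescript
// Illustrative arithmetic for the last table row above.
// With hibernation, you only pay for agents that are awake.
function expectedActive(totalAgents: number, dutyCycle: number): number {
  return totalAgents * dutyCycle;
}

// 10,000 agents (one per customer), each busy 1% of the time:
const active = expectedActive(10_000, 0.01);
console.log(Math.round(active)); // 100 concurrently active agents

// An always-on fleet bills for all 10,000; hibernating Durable Objects
// bill for roughly the ~100 that are awake at any moment.
```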
    <div>
      <h3>Surviving crashes: durable execution with fibers</h3>
      <a href="#surviving-crashes-durable-execution-with-fibers">
        
      </a>
    </div>
    <p>A single LLM call can take 30 seconds. A multi-turn agent loop can run for much longer. At any point during that window, the execution environment can vanish: a deploy, a platform restart, hitting resource limits. The upstream connection to the model provider is severed permanently, in-memory state is lost, and connected clients see the stream stop with no explanation.</p><p><a href="https://developers.cloudflare.com/agents/api-reference/durable-execution/"><code><u>runFiber()</u></code></a> solves this. A fiber is a durable function invocation: registered in SQLite before execution begins, checkpointable at any point via <code>stash()</code>, and recoverable on restart via <code>onFiberRecovered</code>.</p>
            <pre><code>import { Agent } from "agents";

export class ResearchAgent extends Agent {
  async startResearch(topic: string) {
    void this.runFiber("research", async (ctx) =&gt; {
      const findings = [];

      for (let i = 0; i &lt; 10; i++) {
        const result = await this.callLLM(`Research step ${i}: ${topic}`);
        findings.push(result);

        // Checkpoint: if evicted, we resume from here
        ctx.stash({ findings, step: i, topic });

        this.broadcast({ type: "progress", step: i });
      }

      return { findings };
    });
  }

  async onFiberRecovered(ctx) {
    if (ctx.name === "research" &amp;&amp; ctx.snapshot) {
      const { topic } = ctx.snapshot;
      await this.startResearch(topic);
    }
  }
}
</code></pre>
            <p>The SDK keeps the agent alive automatically during fiber execution; no special configuration is needed. For work measured in minutes, <code>keepAlive()</code> / <code>keepAliveWhile()</code> prevent eviction while the agent is busy. For longer operations (CI pipelines, design reviews, video generation), the agent starts the work, persists the job ID, hibernates, and wakes on callback.</p>
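<p>To make the recovery semantics concrete, here is a toy in-memory simulation of the checkpoint-and-resume pattern. It is deliberately not the SDK implementation (the real <code>runFiber()</code> persists snapshots to the agent's SQLite database and recovery goes through <code>onFiberRecovered()</code>), but the control flow is the same:</p>

```typescript
// Toy in-memory simulation of checkpoint-and-resume; not the SDK.
type Snapshot = { findings: string[]; step: number };

class FiberSim {
  private snapshot: Snapshot | null = null;

  // Run up to `steps` steps, checkpointing after each one.
  // `crashAt` simulates the process being evicted mid-run.
  run(steps: number, crashAt?: number): Snapshot {
    const start = this.snapshot ? this.snapshot.step + 1 : 0;
    const findings = this.snapshot ? [...this.snapshot.findings] : [];

    for (let i = start; i < steps; i++) {
      if (i === crashAt) throw new Error("evicted");
      findings.push(`finding ${i}`);
      this.snapshot = { findings: [...findings], step: i }; // stash(): checkpoint
    }
    return { findings, step: steps - 1 };
  }
}

const fiber = new FiberSim();
try {
  fiber.run(10, 4); // evicted at step 4; steps 0-3 are checkpointed
} catch {
  // recovery: restart the fiber; completed steps are not redone
}
console.log(fiber.run(10).findings.length); // 10
```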
    <div>
      <h3>Delegating work: sub-agents via Facets</h3>
      <a href="#delegating-work-sub-agents-via-facets">
        
      </a>
    </div>
    <p>A single agent shouldn't do everything itself. <a href="https://developers.cloudflare.com/agents/api-reference/sub-agents/"><u>Sub-agents</u></a> are child Durable Objects colocated with the parent via <a href="https://blog.cloudflare.com/durable-object-facets-dynamic-workers/"><u>Facets</u></a>, each with their own isolated SQLite and execution context:</p>
            <pre><code>import { Agent } from "agents";

export class ResearchAgent extends Agent {
  async search(query: string) { /* ... */ }
}

export class ReviewAgent extends Agent {
  async analyze(query: string) { /* ... */ }
}

export class Orchestrator extends Agent {
  async handleTask(task: string) {
    const researcher = await this.subAgent(ResearchAgent, "research");
    const reviewer = await this.subAgent(ReviewAgent, "review");

    const [research, review] = await Promise.all([
      researcher.search(task),
      reviewer.analyze(task)
    ]);

    return this.synthesize(research, review);
  }
}
</code></pre>
    <p>Sub-agents are isolated at the storage level. Each one gets its own SQLite database, and there’s no implicit sharing of data between them. This isolation is enforced by the runtime, and because sub-agents are colocated with the parent, an RPC call between them has the latency of a function call. TypeScript catches misuse at compile time.</p>
    <div>
      <h3>Conversations that persist: the Session API</h3>
      <a href="#conversations-that-persist-the-session-api">
        
      </a>
    </div>
    <p>Agents that run for days or weeks need more than the typical flat list of messages. The experimental <a href="https://developers.cloudflare.com/agents/api-reference/sessions/"><u>Session API</u></a> models this explicitly. Available on the Agent base class, conversations are stored as trees, where each message has a parent_id. This enables forking (explore an alternative without losing the original path), non-destructive compaction (summarize older messages rather than deleting them), and full-text search across conversation history via <a href="https://www.sqlite.org/fts5.html"><u>FTS5</u></a>.</p>
            <pre><code>import { Agent } from "agents";
import { Session, SessionManager } from "agents/experimental/memory/session";

export class MyAgent extends Agent {
  sessions = SessionManager.create(this);

  async onStart() {
    const session = this.sessions.create("main");
    const history = session.getHistory();
    // Fork from an existing message id, e.g. one from the history above
    const forked = this.sessions.fork(session.id, messageId, "alternative-approach");
  }
}
</code></pre>
            <p>Session is usable directly with <code>Agent</code>, and it's the storage layer that the <code>Think</code> base class builds on.</p>
    <div>
      <h2>From tool calls to code execution</h2>
      <a href="#from-tool-calls-to-code-execution">
        
      </a>
    </div>
    <p>Conventional tool-calling has an awkward shape. The model calls a tool, pulls the result back through the context window, calls another tool, pulls that back, and so on. As the tool surface grows, this gets both expensive and clumsy. A hundred files means a hundred round-trips through the model.</p><p>But <a href="https://blog.cloudflare.com/code-mode/"><u>models are better at writing code to use a system than they are at playing the tool-calling game</u></a>. This is the insight behind <a href="https://github.com/cloudflare/agents/tree/main/packages/codemode"><u>@cloudflare/codemode</u></a>: instead of sequential tool calls, the LLM writes a single program that handles the entire task.</p>
            <pre><code>// The LLM writes this. It runs in a sandboxed Dynamic Worker.
const files = await tools.find({ pattern: "**/*.ts" });
const results = [];
for (const file of files) {
  const content = await tools.read({ path: file });
  if (content.includes("TODO")) {
    results.push({ file, todos: content.match(/\/\/ TODO:.*/g) });
  }
}
return results;
</code></pre>
            <p>Instead of 100 round-trips to the model, you just run a single program. This leads to fewer tokens used, faster execution, and better results. The <a href="https://github.com/cloudflare/mcp"><u>Cloudflare API MCP server</u></a> demonstrates this at scale. We expose only two tools (<code>search()</code> and <code>execute()</code>), which consume ~1,000 tokens, vs. ~1.17 million tokens for the naive tool-per-endpoint equivalent. This is a 99.9% reduction.</p>
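<p>The arithmetic behind that claim is straightforward; the token counts below are the approximate figures quoted above:</p>

```typescript
// The arithmetic behind the reduction claim (approximate figures from
// the Cloudflare API MCP server comparison above).
const codemodeTokens = 1_000; // search() + execute() tool definitions
const toolPerEndpointTokens = 1_170_000; // naive one-tool-per-API-endpoint

const reduction = 1 - codemodeTokens / toolPerEndpointTokens;
console.log(`${(reduction * 100).toFixed(1)}% reduction`); // "99.9% reduction"
```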
    <div>
      <h3>The missing primitive: safe sandboxes</h3>
      <a href="#the-missing-primitive-safe-sandboxes">
        
      </a>
    </div>
    <p>Once you accept that models should write code on behalf of users, the question becomes: where does that code run? Not eventually, not after a product team turns it into a roadmap item. Right now, for this user, against this system, with tightly defined permissions.</p><p><a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a> are that sandbox. A fresh V8 isolate spun up at runtime, in milliseconds, with a few megabytes of memory. That's roughly 100x faster and up to 100x more memory-efficient than a container. You can start a new one for every single request, run a snippet of code, and throw it away.</p><p>The critical design choice is the capability model. Instead of starting with a general-purpose machine and trying to constrain it, Dynamic Workers begin with almost no ambient authority (<code>globalOutbound: null</code>, no network access) and the developer grants capabilities explicitly, resource by resource, through bindings. We go from asking "how do we stop this thing from doing too much?" to "what exactly do we want this thing to be able to do?"</p><p>This is the right question for agent infrastructure.</p>
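<p>To illustrate the inversion, here is a deny-by-default capability wrapper sketched in plain TypeScript. This is not the Dynamic Workers API (in a real Dynamic Worker the runtime itself enforces <code>globalOutbound: null</code>, and capabilities arrive as bindings), but the shape of the idea is the same: code starts with no ambient authority and can only reach what it was explicitly granted.</p>

```typescript
// Deny-by-default capabilities, sketched in plain TypeScript.
// NOT the Dynamic Workers API; the real enforcement is in the runtime.
type Fetcher = (url: string) => string;

function grantOutbound(allowedHosts: string[], rawFetch: Fetcher): Fetcher {
  return (url: string) => {
    const host = new URL(url).hostname;
    if (!allowedHosts.includes(host)) {
      throw new Error(`no capability for host: ${host}`);
    }
    return rawFetch(url);
  };
}

// Stub network so the example is self-contained.
const stubFetch: Fetcher = (url) => `ok: ${url}`;
const fetchForAgent = grantOutbound(["api.github.com"], stubFetch);

console.log(fetchForAgent("https://api.github.com/repos")); // granted
try {
  fetchForAgent("https://evil.example.com/");
} catch (e) {
  console.log((e as Error).message); // "no capability for host: evil.example.com"
}
```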
    <div>
      <h3>The execution ladder</h3>
      <a href="#the-execution-ladder">
        
      </a>
    </div>
    <p>This capability model leads naturally to a spectrum of compute environments, an <b>execution ladder</b> that the agent escalates through as needed:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yokfTVcg8frH4snf7c4sp/2306d721650b4956b28e2198f7cf915d/BLOG-3200_2.png" />
          </figure><p><b>Tier 0</b> is the Workspace, a durable virtual filesystem backed by SQLite and R2. Read, write, edit, search, grep, diff. Powered by <a href="https://www.npmjs.com/package/@cloudflare/shell"><code><u>@cloudflare/shell</u></code></a>.</p><p><b>Tier 1</b> is a Dynamic Worker: LLM-generated JavaScript running in a sandboxed isolate with no network access. Powered by <a href="https://www.npmjs.com/package/@cloudflare/codemode"><code><u>@cloudflare/codemode</u></code></a>.</p><p><b>Tier 2</b> adds npm. <a href="https://github.com/cloudflare/agents/tree/main/packages/worker-bundler"><code><u>@cloudflare/worker-bundler</u></code></a> fetches packages from the registry, bundles them with esbuild, and loads the result into the Dynamic Worker. The agent writes <code>import { z } from "zod"</code> and it just works.</p><p><b>Tier 3</b> is a headless browser via <a href="https://developers.cloudflare.com/browser-rendering/"><u>Cloudflare Browser Run</u></a>. Navigate, click, extract, screenshot. Useful when a service doesn't yet support agents via MCP or APIs.</p><p><b>Tier 4</b> is a <a href="https://developers.cloudflare.com/sandbox/"><u>Cloudflare Sandbox</u></a> configured with your toolchains, repos, and dependencies: <code>git clone</code>, <code>npm test</code>, <code>cargo build</code>, synced bidirectionally with the Workspace.</p><p>The key design principle: <b>the agent should be useful at Tier 0 alone, and each tier is additive.</b> The user can grant additional capabilities as they go.</p>
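<p>One way to picture the "escalate only as needed" principle is a selector that picks the lowest tier covering what a task requires. This is a hypothetical sketch: the tier names follow the ladder above, but the selection logic is ours.</p>

```typescript
// Hypothetical sketch: pick the lowest execution tier that covers a
// task's needs. Tier names follow the ladder above; the logic is ours.
type Capability = "files" | "run-js" | "npm" | "browser" | "os";

const tiers: { tier: number; name: string; provides: Capability[] }[] = [
  { tier: 0, name: "workspace", provides: ["files"] },
  { tier: 1, name: "isolate", provides: ["files", "run-js"] },
  { tier: 2, name: "npm", provides: ["files", "run-js", "npm"] },
  { tier: 3, name: "browser", provides: ["files", "run-js", "npm", "browser"] },
  { tier: 4, name: "sandbox", provides: ["files", "run-js", "npm", "browser", "os"] },
];

function lowestTier(needs: Capability[]): string {
  // Tiers are additive, so the first match is the cheapest sufficient one.
  const found = tiers.find((x) => needs.every((n) => x.provides.includes(n)));
  if (!found) throw new Error("no tier satisfies the request");
  return found.name;
}

console.log(lowestTier(["files"]));         // "workspace"
console.log(lowestTier(["run-js", "npm"])); // "npm"
console.log(lowestTier(["os"]));            // "sandbox"
```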
    <div>
      <h3>Building blocks, not a framework</h3>
      <a href="#building-blocks-not-a-framework">
        
      </a>
    </div>
    <p>All of these primitives are available as standalone packages. <a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a>, <a href="https://github.com/cloudflare/agents/tree/main/packages/codemode"><code><u>@cloudflare/codemode</u></code></a>, <a href="https://github.com/cloudflare/agents/tree/main/packages/worker-bundler"><code><u>@cloudflare/worker-bundler</u></code></a>, and <a href="https://www.npmjs.com/package/@cloudflare/shell"><code><u>@cloudflare/shell</u></code></a> (a durable filesystem with tools) are all usable directly with the Agent base class. You can combine them to give any agent a workspace, code execution, and runtime package resolution without adopting an opinionated framework.</p>
    <div>
      <h2>The platform</h2>
      <a href="#the-platform">
        
      </a>
    </div>
    <p>Here's the complete stack for building agents on Cloudflare:</p><table><tr><th><p><b>Capability</b></p></th><th><p><b>What it does</b></p></th><th><p><b>Powered by</b></p></th></tr><tr><td><p>Per-agent isolation</p></td><td><p>Every agent is its own world</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> (DOs)</p></td></tr><tr><td><p>Zero cost when idle</p></td><td><p>$0 until the agent wakes up</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/best-practices/websockets/#websocket-hibernation-api"><u>DO Hibernation</u></a></p></td></tr><tr><td><p>Persistent state</p></td><td><p>Queryable, transactional storage</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/"><u>DO SQLite</u></a></p></td></tr><tr><td><p>Durable filesystem</p></td><td><p>Files that survive restarts</p></td><td><p>Workspace (SQLite + <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>)</p></td></tr><tr><td><p>Sandboxed code execution</p></td><td><p>Run LLM-generated code safely</p></td><td><p><a href="https://blog.cloudflare.com/dynamic-workers/"><u>Dynamic Workers</u></a> + <a href="https://github.com/cloudflare/agents/tree/main/packages/codemode"><code><u>@cloudflare/codemode</u></code></a></p></td></tr><tr><td><p>Runtime dependencies</p></td><td><p><code>import * as React from "react"</code> just works</p></td><td><p><a href="https://github.com/cloudflare/agents/tree/main/packages/worker-bundler"><code><u>@cloudflare/worker-bundler</u></code></a></p></td></tr><tr><td><p>Web automation</p></td><td><p>Browse, navigate, fill forms</p></td><td><p><a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Run</u></a></p></td></tr><tr><td><p>Full OS access</p></td><td><p>git, compilers, test runners</p></td><td><p><a href="https://developers.cloudflare.com/sandbox/"><u>Sandboxes</u></a></p></td></tr><tr><td><p>Scheduled 
execution</p></td><td><p>Proactive, not just reactive</p></td><td><p><a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>DO Alarms + Fibers</u></a></p></td></tr><tr><td><p>Real-time streaming</p></td><td><p>Token-by-token to any client</p></td><td><p>WebSockets</p></td></tr><tr><td><p>External tools</p></td><td><p>Connect to any tool server</p></td><td><p>MCP</p></td></tr><tr><td><p>Agent coordination</p></td><td><p>Typed RPC between agents</p></td><td><p>Sub-agents (<a href="https://developers.cloudflare.com/durable-objects/api/facets/"><u>Facets</u></a>)</p></td></tr><tr><td><p>Model access</p></td><td><p>Connect to an LLM to power the agent</p></td><td><p><a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> + <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> (or Bring Your Own Model)</p></td></tr></table><p>Each of these is a building block. Together, they form something new: a platform where anyone can build, deploy, and run AI agents as capable as the ones running on your local machine today, but <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>serverless</u></a>, durable, and safe by construction.</p>
    <div>
      <h2>The Think base class</h2>
      <a href="#the-think-base-class">
        
      </a>
    </div>
    <p>Now that you've seen the primitives, here's what happens when you wire them all together.</p><p><code>Think</code> is an opinionated harness that handles the full chat lifecycle: agentic loop, message persistence, streaming, tool execution, stream resumption, and extensions. You focus on what makes your agent unique.</p><p>The minimal subclass looks like this:</p>
            <pre><code>import { Think } from "@cloudflare/think";
import { createWorkersAI } from "workers-ai-provider";

export class MyAgent extends Think&lt;Env&gt; {
  getModel() {
    return createWorkersAI({ binding: this.env.AI })(
      "@cf/moonshotai/kimi-k2.5"
    );
  }
}
</code></pre>
            <p>That’s effectively all you need to have a working chat agent with streaming, persistence, abort/cancel, error handling, resumable streams, and a built-in workspace filesystem. Deploy with <code>npx wrangler deploy</code>.</p><p>Think makes decisions for you. When you need more control, you can override the ones you care about:</p><table><tr><th><p><b>Override</b></p></th><th><p><b>Purpose</b></p></th></tr><tr><td><p><code>getModel()</code></p></td><td><p>Return the <code>LanguageModel</code> to use</p></td></tr><tr><td><p><code>getSystemPrompt()</code></p></td><td><p>System prompt</p></td></tr><tr><td><p><code>getTools()</code></p></td><td><p>AI SDK-compatible <code>ToolSet</code> for the agentic loop</p></td></tr><tr><td><p><code>maxSteps</code></p></td><td><p>Max tool-call rounds per turn</p></td></tr><tr><td><p><code>configureSession()</code></p></td><td><p>Context blocks, compaction, search, skills</p></td></tr></table><p>Under the hood, Think runs the complete agentic loop on every turn: it assembles the context (base instructions + tool descriptions + skills + memory + conversation history), calls <code>streamText</code>, executes tool calls (with output truncation to prevent context blowup), appends results, and loops until the model is done or the step limit is reached. All messages are persisted after each turn.</p>
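<p>A stripped-down version of that loop, with a stub model standing in for <code>streamText</code>, looks roughly like this (illustrative only; Think's real loop streams tokens and persists messages):</p>

```typescript
// Stripped-down agentic loop; the model here is a stub, not streamText.
type Step = { toolCall?: string; done: boolean };
type Model = (history: string[]) => Step;

function runLoop(
  model: Model,
  tools: Record<string, () => string>,
  maxSteps: number
): string[] {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const out = model(history);
    if (out.toolCall) {
      // Execute the tool and append a truncated result to the context,
      // mirroring Think's output truncation to prevent context blowup.
      const result = tools[out.toolCall]().slice(0, 100);
      history.push(`tool:${out.toolCall} -> ${result}`);
    }
    if (out.done) break; // model finished, or the step limit ends the turn
  }
  return history;
}

// Stub model: call the search tool twice, then finish.
let calls = 0;
const history = runLoop(
  () => (calls++ < 2 ? { toolCall: "search", done: false } : { done: true }),
  { search: () => "3 results" },
  10
);
console.log(history.length); // 2 tool results, then the turn ends
```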
    <div>
      <h3>Lifecycle hooks</h3>
      <a href="#lifecycle-hooks">
        
      </a>
    </div>
    <p>Think gives you hooks at every stage of the chat turn, without requiring you to own the whole pipeline:</p>
            <pre><code>beforeTurn()
  → streamText()
    → beforeToolCall()
    → afterToolCall()
  → onStepFinish()
→ onChatResponse()
</code></pre>
            <p>With these hooks you can switch to a lower-cost model for follow-up turns, limit the tools the model can use, pass in client-side context on each turn, log every tool call to analytics, or automatically trigger one more follow-up turn after the model completes, all without replacing <code>onChatMessage</code>.</p>
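<p>As a minimal illustration of the ordering above, here is a toy harness that records when each hook fires during one turn. The hook names match the diagram; the turn driver is a stub, not Think's implementation:</p>

```typescript
// Toy harness recording hook order for one turn; not Think's internals.
class Hooks {
  order: string[] = [];
  beforeTurn() { this.order.push("beforeTurn"); }
  beforeToolCall() { this.order.push("beforeToolCall"); }
  afterToolCall() { this.order.push("afterToolCall"); }
  onStepFinish() { this.order.push("onStepFinish"); }
  onChatResponse() { this.order.push("onChatResponse"); }

  // Drive one turn with `toolCalls` tool invocations.
  runTurn(toolCalls: number) {
    this.beforeTurn();
    for (let i = 0; i < toolCalls; i++) {
      this.beforeToolCall();
      this.afterToolCall();
    }
    this.onStepFinish();
    this.onChatResponse();
  }
}

const h = new Hooks();
h.runTurn(1);
console.log(h.order.join(" -> "));
// beforeTurn -> beforeToolCall -> afterToolCall -> onStepFinish -> onChatResponse
```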
    <div>
      <h3>Persistent memory and long conversations</h3>
      <a href="#persistent-memory-and-long-conversations">
        
      </a>
    </div>
    <p>Think builds on the <a href="https://developers.cloudflare.com/agents/api-reference/sessions/"><u>Session API</u></a> as its storage layer, giving you tree-structured messages with branching built in.</p><p>On top of that, it adds persistent memory through <b>context blocks</b>: structured sections of the system prompt that the model can read and update over time, and that persist across hibernation. The model sees "MEMORY (Important facts, use set_context to update) [42%, 462/1100 tokens]" and can proactively remember things.</p>
            <pre><code>configureSession(session: Session) {
  return session
    .withContext("soul", {
      provider: { get: async () =&gt; "You are a helpful coding assistant." }
    })
    .withContext("memory", {
      description: "Important facts learned during conversation.",
      maxTokens: 2000
    })
    .withCachedPrompt();
}
</code></pre>
            <p>Sessions are flexible. You can run multiple conversations per agent and fork them to try a different direction without losing the original.</p><p>As context grows, Think handles limits with non-destructive compaction. Older messages are summarized instead of removed, while the full history remains stored in SQLite.</p><p>Search is built in as well. Using FTS5, you can query conversation history within a session or across all sessions. The agent can also search its own past using the <code>search_context</code> tool.</p>
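<p>The tree-structured storage underneath is simple to picture: each message records a <code>parent_id</code>, and a fork is just a new child of an existing message. A hypothetical in-memory sketch (the real Session API stores this in SQLite):</p>

```typescript
// In-memory sketch of tree-structured messages; the real Session API
// persists this model in the agent's SQLite database.
type Msg = { id: number; parentId: number | null; text: string };

class Tree {
  msgs: Msg[] = [];
  private next = 0;

  append(parentId: number | null, text: string): number {
    const id = this.next++;
    this.msgs.push({ id, parentId, text });
    return id;
  }

  // Walk parent pointers to reconstruct one linear conversation path.
  pathTo(id: number): string[] {
    const path: string[] = [];
    for (let cur: number | null = id; cur !== null; ) {
      const m = this.msgs.find((x) => x.id === cur)!;
      path.unshift(m.text);
      cur = m.parentId;
    }
    return path;
  }
}

const t = new Tree();
const a = t.append(null, "plan A");
const b = t.append(a, "refine A");
const fork = t.append(a, "try plan B"); // fork: same parent, new branch

console.log(t.pathTo(b));    // ["plan A", "refine A"]
console.log(t.pathTo(fork)); // ["plan A", "try plan B"]
```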
    <div>
      <h3>The full execution ladder, wired in</h3>
      <a href="#the-full-execution-ladder-wired-in">
        
      </a>
    </div>
    <p>Think integrates the entire execution ladder into a single <code>getTools()</code> return:</p>
            <pre><code>import { Think } from "@cloudflare/think";
import { createWorkspaceTools } from "@cloudflare/think/tools/workspace";
import { createExecuteTool } from "@cloudflare/think/tools/execute";
import { createBrowserTools } from "@cloudflare/think/tools/browser";
import { createSandboxTools } from "@cloudflare/think/tools/sandbox";
import { createExtensionTools } from "@cloudflare/think/tools/extensions";

export class MyAgent extends Think&lt;Env&gt; {
  extensionLoader = this.env.LOADER;

  getModel() {
    /* ... */
  }

  getTools() {
    return {
      execute: createExecuteTool({
        tools: createWorkspaceTools(this.workspace),
        loader: this.env.LOADER
      }),
      ...createBrowserTools(this.env.BROWSER),
      ...createSandboxTools(this.env.SANDBOX), // configured per-agent: toolchains, repos, snapshots
      ...createExtensionTools({ manager: this.extensionManager! }),
      ...this.extensionManager!.getTools()
    };
  }
}
</code></pre>
            
    <div>
      <h3>Self-authored extensions</h3>
      <a href="#self-authored-extensions">
        
      </a>
    </div>
    <p>Think takes code execution one step further. An agent can write its own extensions: TypeScript programs that run in Dynamic Workers, declaring permissions for network access and workspace operations.</p>
            <pre><code>{
  "name": "github",
  "description": "GitHub integration: PRs, issues, repos",
  "tools": ["create_pr", "list_issues", "review_pr"],
  "permissions": {
    "network": ["api.github.com"],
    "workspace": "read-write"
  }
}
</code></pre>
            <p>Think's <code>ExtensionManager</code> bundles the extension (optionally with npm deps via <code>@cloudflare/worker-bundler</code>), loads it into a Dynamic Worker, and registers the new tools. The extension persists in DO storage and survives hibernation. The next time the user asks about pull requests, the agent has a <code>github_create_pr</code> tool that didn't exist 30 seconds ago.</p><p>This is the kind of self-improvement loop that makes agents genuinely more useful over time. Not through fine-tuning or RLHF, but through code. The agent can write new capabilities for itself, all in sandboxed, auditable, and revocable TypeScript.</p>
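<p>The tool-naming step can be pictured as a small registry that namespaces each tool by extension, turning a manifest like the one above into callable names. This is a hypothetical sketch, not Think's <code>ExtensionManager</code> (which also bundles the code and loads it into a sandboxed Dynamic Worker):</p>

```typescript
// Hypothetical registry sketch: manifest tools -> namespaced tool names.
// Not Think's ExtensionManager, which also bundles and sandboxes the code.
type Manifest = { name: string; tools: string[] };

function registerTools(manifest: Manifest): Record<string, string> {
  const registered: Record<string, string> = {};
  for (const tool of manifest.tools) {
    // Tools are namespaced by extension name, e.g. "github_create_pr".
    registered[`${manifest.name}_${tool}`] = `${manifest.name}:${tool}`;
  }
  return registered;
}

const registered = registerTools({
  name: "github",
  tools: ["create_pr", "list_issues", "review_pr"],
});
console.log(Object.keys(registered));
// ["github_create_pr", "github_list_issues", "github_review_pr"]
```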
    <div>
      <h3>Sub-agent RPC</h3>
      <a href="#sub-agent-rpc">
        
      </a>
    </div>
    <p>Think also works as a sub-agent, called via <code>chat()</code> over RPC from a parent, with streaming events via callback:</p>
            <pre><code>const researcher = await this.subAgent(ResearchSession, "research");
const result = await researcher.chat(`Research this: ${task}`, streamRelay);
</code></pre>
            <p>Each child gets its own conversation tree, memory, tools, and model. The parent doesn't need to know the details.</p>
    <div>
      <h3>Getting started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Project Think is experimental. The core API surface is settling, but it will continue to evolve in the coming days and weeks. We're already using it internally to build our own background agent infrastructure, and we're sharing it early so you can build alongside us.</p>
            <pre><code>npm install @cloudflare/think agents ai @cloudflare/shell zod workers-ai-provider</code></pre>
            
            <pre><code>// src/server.ts
import { Think } from "@cloudflare/think";
import { createWorkersAI } from "workers-ai-provider";
import { routeAgentRequest } from "agents";

export class MyAgent extends Think&lt;Env&gt; {
  getModel() {
    return createWorkersAI({ binding: this.env.AI })(
      "@cf/moonshotai/kimi-k2.5"
    );
  }
}

export default {
  async fetch(request: Request, env: Env) {
    return (
      (await routeAgentRequest(request, env)) ||
      new Response("Not found", { status: 404 })
    );
  }
} satisfies ExportedHandler&lt;Env&gt;;
</code></pre>
            
            <pre><code>// src/client.tsx
import { useAgent } from "agents/react";
import { useAgentChat } from "@cloudflare/ai-chat/react";

function Chat() {
  const agent = useAgent({ agent: "MyAgent" });
  const { messages, sendMessage, status } = useAgentChat({ agent });
  // Render your chat UI
}
</code></pre>
            <p>Think speaks the same WebSocket protocol as <code>@cloudflare/ai-chat</code>, so existing UI components work out of the box. If you've built on <a href="https://developers.cloudflare.com/agents/api-reference/chat-agents/"><code><u>AIChatAgent</u></code></a>, your client code doesn't change.</p>
    <div>
      <h2>The third wave</h2>
      <a href="#the-third-wave">
        
      </a>
    </div>
    <p>We see three waves of AI agents:</p><p><b>The first wave was chatbots.</b> They were stateless, reactive, and fragile. Every conversation started from scratch with no memory, no tools, and no ability to act. This made them useful for answering questions, but limited them to only answering questions.</p><p><b>The second wave was coding agents.</b> These are stateful, tool-using, and far more capable: tools like Pi, Claude Code, OpenClaw, and Codex. These agents can read codebases, write code, execute it, and iterate. They proved that an LLM with the right tools is a general-purpose machine, but they run on your laptop, for one user, with no durability guarantees.</p><p><b>Now we are entering the third wave: agents as infrastructure.</b> Durable, distributed, structurally safe, and serverless. These are agents that run on the Internet, survive failures, cost nothing when idle, and enforce security through architecture rather than behavior. Agents that any developer can build and deploy for any number of users.</p><p>This is the direction we’re betting on.</p><p>The Agents SDK is already powering thousands of production agents. With Project Think and the primitives it introduces, we're adding the missing pieces to make those agents dramatically more capable: persistent workspaces, sandboxed code execution, durable long-running tasks, structural security, sub-agent coordination, and self-authored extensions.</p><p>It's available today in preview. We're building alongside you, and we'd genuinely love to see what you (and your coding agent) create with it.</p><hr /><p><sup><i>Think is part of the Cloudflare Agents SDK, available as @cloudflare/think. The features described in this post are in preview. APIs may change as we incorporate feedback. 
Check the </i></sup><a href="https://github.com/cloudflare/agents/blob/main/docs/think/index.md"><sup><i><u>documentation</u></i></sup></a><sup><i> and </i></sup><a href="https://github.com/cloudflare/agents/tree/main/examples/assistant"><sup><i><u>example</u></i></sup></a><sup><i> to get started.</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/161Wz7Tf8Cpzn2u2cBCH3V/37633c016734590005edd280732e89b9/BLOG-3200_3.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">3r2ykMs0LTSPwVHmVWldCy</guid>
            <dc:creator>Sunil Pai</dc:creator>
            <dc:creator>Kate Reznykova</dc:creator>
        </item>
        <item>
            <title><![CDATA[Agents have their own computers with Sandboxes GA]]></title>
            <link>https://blog.cloudflare.com/sandbox-ga/</link>
            <pubDate>Mon, 13 Apr 2026 13:08:35 GMT</pubDate>
            <description><![CDATA[ Cloudflare Sandboxes give AI agents a persistent, isolated environment: a real computer with a shell, a filesystem, and background processes that starts on demand and picks up exactly where it left off. ]]></description>
            <content:encoded><![CDATA[ <p>When we launched <a href="https://github.com/cloudflare/sandbox-sdk"><u>Cloudflare Sandboxes</u></a> last June, the premise was simple: <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agents</u></a> need to develop and run code, and they need to do it somewhere safe.</p><p>If an agent is acting like a developer, this means cloning repositories, building code in many languages, running development servers, etc. To do these things effectively, they will often need a full computer (and if they don’t, they can <a href="https://blog.cloudflare.com/dynamic-workers/"><u>reach for something lightweight</u></a>!).</p><p>Many developers are stitching together solutions using VMs or existing container solutions, but there are lots of hard problems to solve:</p><ul><li><p><b>Burstiness -</b> With each session needing its own sandbox, you often need to spin up many sandboxes quickly, but you don’t want to pay for idle compute on standby.</p></li><li><p><b>Quick state restoration</b> - Each session should start quickly and re-start quickly, resuming past state.</p></li><li><p><b>Security</b> - Agents need to access services securely, but can’t be trusted with credentials.</p></li><li><p><b>Control</b> - It needs to be simple to programmatically control sandbox lifecycle, execute commands, handle files, and more.</p></li><li><p><b>Ergonomics</b> - You need to give a simple interface for both humans and agents to do common operations.</p></li></ul><p>We’ve spent time solving these issues so you don’t have to. Since our initial launch we’ve made Sandboxes an even better place to run agents at scale. We’ve worked with our initial partners such as Figma, who run agents in containers with <a href="https://www.figma.com/make/"><u>Figma Make</u></a>:</p><blockquote><p><i>“Figma Make is built to help builders and makers of all backgrounds go from idea to production, faster. 
To deliver on that goal, we needed an infrastructure solution that could provide reliable, highly-scalable sandboxes where we could run untrusted agent- and user-authored code. Cloudflare Containers is that solution.”</i></p><p><i>- </i><b><i>Alex Mullans</i></b><i>, AI and Developer Platforms at Figma</i></p></blockquote><p>We want to bring Sandboxes to even more great organizations, so today we are excited to announce that <b>Sandboxes and Cloudflare Containers are both generally available.</b></p><p>Let’s take a look at some of the recent changes to Sandboxes:</p><ul><li><p><b>Secure credential injection </b>lets you make authenticated calls without the agent ever having credential access  </p></li><li><p><b>PTY support</b> gives you and your agent a real terminal</p></li><li><p><b>Persistent code interpreters</b> give your agent a place to execute stateful Python, JavaScript, and TypeScript out of the box</p></li><li><p><b>Background processes and live preview URLs</b> provide a simple way to interact with development servers and verify in-flight changes</p></li><li><p><b>Filesystem watching</b> improves iteration speed as agents make changes</p></li><li><p><b>Snapshots</b> let you quickly recover an agent's coding session</p></li><li><p><b>Higher limits and Active CPU Pricing</b> let you deploy a fleet of agents at scale without paying for unused CPU cycles </p></li></ul>
    <div>
      <h2>Sandboxes 101</h2>
      <a href="#sandboxes-101">
        
      </a>
    </div>
    <p>Before getting into some of the recent changes, let’s quickly look at the basics.</p><p>A Cloudflare Sandbox is a persistent, isolated environment powered by <a href="https://blog.cloudflare.com/containers-are-available-in-public-beta-for-simple-global-and-programmable/"><u>Cloudflare Containers</u></a>. You ask for a sandbox by name. If it's running, you get it. If it's not, it starts. When it's idle, it sleeps automatically and wakes when it receives a request. It’s easy to programmatically interact with the sandbox using methods like <code>exec</code>, <code>gitCheckout</code>, <code>writeFile</code> and <a href="https://developers.cloudflare.com/sandbox/api/"><u>more</u></a>.</p>
            <pre><code>import { getSandbox } from "@cloudflare/sandbox";
export { Sandbox } from "@cloudflare/sandbox";

export default {
  async fetch(request: Request, env: Env) {
    // Ask for a sandbox by name. It starts on demand.
    const sandbox = getSandbox(env.Sandbox, "agent-session-47");

    // Clone a repository into it.
    await sandbox.gitCheckout("https://github.com/org/repo", {
      targetDir: "/workspace",
      depth: 1,
    });

    // Run the test suite. Stream output back in real time.
    return sandbox.exec("npm", ["test"], { stream: true });
  },
};
</code></pre>
            <p>As long as you provide the same ID, subsequent requests can get to this same sandbox from anywhere in the world.</p>
    <div>
      <h2>Secure credential injection</h2>
      <a href="#secure-credential-injection">
        
      </a>
    </div>
    <p>One of the hardest problems in agentic workloads is authentication. You often need agents to access private services, but you can't fully trust them with raw credentials. </p><p>Sandboxes solve this by injecting credentials at the network layer using a programmable egress proxy. This means that sandbox agents never have access to credentials and you can fully customize auth logic as you see fit:</p>
            <pre><code>class OpenCodeInABox extends Sandbox {
  static outboundByHost = {
    "my-internal-vcs.dev": (request, env, ctx) =&gt; {
      const headersWithAuth = new Headers(request.headers);
      headersWithAuth.set("x-auth-token", env.SECRET);
      return fetch(request, { headers: headersWithAuth });
    }
  }
}
</code></pre>
            <p>For a deep dive into how this works — including identity-aware credential injection, dynamically modifying rules, and integrating with <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>Workers bindings</u></a> — read our recent blog post on <a href="https://blog.cloudflare.com/sandbox-auth"><u>Sandbox auth</u></a>.</p>
    <div>
      <h2>A real terminal, not a simulation</h2>
      <a href="#a-real-terminal-not-a-simulation">
        
      </a>
    </div>
    <p>Early agent systems often modeled shell access as a request-response loop: run a command, wait for output, stuff the transcript back into the prompt, repeat. It works, but it is not how developers actually use a terminal. </p><p>Humans run something, watch output stream in, interrupt it, reconnect later, and keep going. Agents benefit from that same feedback loop.</p><p>In February, we shipped PTY support: a pseudo-terminal session in a Sandbox, proxied over WebSocket, compatible with <a href="https://xtermjs.org/"><u>xterm.js</u></a>.</p><p>Just call <code>sandbox.terminal</code> to serve the backend:</p>
            <pre><code>// Worker: upgrade a WebSocket connection into a live terminal session
export default {
  async fetch(request: Request, env: Env) {
    const url = new URL(request.url);
    if (url.pathname === "/terminal") {
      const sandbox = getSandbox(env.Sandbox, "my-session");
      return sandbox.terminal(request, { cols: 80, rows: 24 });
    }
    return new Response("Not found", { status: 404 });
  },
};

</code></pre>
            <p>Then use the <code>SandboxAddon</code> for xterm.js to connect from the client:</p>
            <pre><code>// Browser: connect xterm.js to the sandbox shell
import { Terminal } from "xterm";
import { SandboxAddon } from "@cloudflare/sandbox/xterm";

const term = new Terminal();
const addon = new SandboxAddon({
  getWebSocketUrl: ({ origin }) =&gt; `${origin}/terminal`,
});

term.loadAddon(addon);
term.open(document.getElementById("terminal-container")!);
addon.connect({ sandboxId: "my-session" });
</code></pre>
            <p>This allows agents and developers to use a full PTY to debug those sessions live.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bgyxh8kg3MPfij2v1XXLE/9cff50318ad306b20c3346c3bd3554d9/BLOG-3264_2.gif" />
          </figure><p>Each terminal session gets its own isolated shell, its own working directory, its own environment. Open as many as you need, just like you would on your own machine. Output is buffered server-side, so reconnecting replays what you missed.</p>
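<p>That replay behavior can be pictured as an append-only log indexed by offset: the server keeps every chunk of output, and a reconnecting client is replayed everything after the last offset it saw. A minimal sketch of the idea (an illustration only, not the Sandbox SDK's internals):</p>

```typescript
// Sketch of a reconnect-safe output buffer: the server appends every chunk,
// and a client that reconnects is replayed everything after the last offset
// it acknowledged. (Illustrative only, not the Sandbox SDK's internals.)
class ReplayBuffer {
  private chunks: string[] = [];

  append(chunk: string): void {
    this.chunks.push(chunk);
  }

  // Chunks a reconnecting client missed, plus the next offset to acknowledge.
  replayFrom(offset: number): { chunks: string[]; nextOffset: number } {
    return { chunks: this.chunks.slice(offset), nextOffset: this.chunks.length };
  }
}

const buf = new ReplayBuffer();
buf.append("$ npm test\n");
buf.append("PASS src/index.test.ts\n");

// The client saw the first chunk before disconnecting; replay resumes at offset 1.
const { chunks, nextOffset } = buf.replayFrom(1);
console.log(chunks.join("")); // "PASS src/index.test.ts\n"
console.log(nextOffset);      // 2
```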
    <div>
      <h2>A code interpreter that remembers</h2>
      <a href="#a-code-interpreter-that-remembers">
        
      </a>
    </div>
    <p>For data analysis, scripting, and exploratory workflows, we also ship a higher-level abstraction: a persistent code execution context.</p><p>The key word is “persistent.” Many code interpreter implementations run each snippet in isolation, so state disappears between calls. You can't set a variable in one step and read it in the next.</p><p>Sandboxes allow you to create “contexts” that persist state. Variables and imports persist across calls the same way they would in a Jupyter notebook:</p>
            <pre><code>// Create a Python context. State persists for its lifetime.
const ctx = await sandbox.createCodeContext({ language: "python" });

// First execution: load data
await sandbox.runCode(`
  import pandas as pd
  df = pd.read_csv('/workspace/sales.csv')
  df['margin'] = (df['revenue'] - df['cost']) / df['revenue']
`, { context: ctx });

// Second execution: df is still there
const result = await sandbox.runCode(`
  df.groupby('region')['margin'].mean().sort_values(ascending=False)
`, { context: ctx, onStdout: (line) =&gt; console.log(line.text) });

// result contains matplotlib charts, structured json output, and Pandas tables in HTML
</code></pre>
            
    <div>
      <h2>Start a server. Get a URL. Ship it.</h2>
      <a href="#start-a-server-get-a-url-ship-it">
        
      </a>
    </div>
    <p>Agents are more useful when they can build something and show it to the user immediately. Sandboxes support background processes, readiness checks, and <a href="https://developers.cloudflare.com/sandbox/concepts/preview-urls/"><u>preview URLs</u></a>. This lets an agent start a development server and share a live link without leaving the conversation.</p>
            <pre><code>// Start a dev server as a background process
const server = await sandbox.startProcess("npm run dev", {
  cwd: "/workspace",
});

// Wait until the server is actually ready — don't just sleep and hope
await server.waitForLog(/Local:.*localhost:(\d+)/);

// Expose the running service with a public URL
const { url } = await sandbox.exposePort(3000);

// url is a live public URL the agent can share with the user
console.log(`Preview: ${url}`);
</code></pre>
            <p>With <code>waitForPort()</code> and <code>waitForLog()</code>, agents can sequence work based on real signals from the running program instead of guesswork. This is much nicer than a common alternative, which is usually some version of <code>sleep(2000)</code> followed by hope.</p>
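<p>The readiness check above is just pattern-matching on streamed log lines. Pulling the matching logic out on its own makes it clear what the regex captures (a standalone sketch; <code>waitForLog</code> itself handles the streaming and waiting):</p>

```typescript
// Extract the dev server port from a log line, using the same
// /Local:.*localhost:(\d+)/ pattern as the readiness check above.
function extractPort(logLine: string): number | null {
  const match = logLine.match(/Local:.*localhost:(\d+)/);
  return match ? Number(match[1]) : null;
}

console.log(extractPort("  Local:   http://localhost:5173/")); // 5173
console.log(extractPort("compiling..."));                      // null
```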
    <div>
      <h2>Watch the file system and react immediately</h2>
      <a href="#watch-the-file-system-and-react-immediately">
        
      </a>
    </div>
    <p>Modern development loops are event-driven. Save a file, rerun the build. Edit a config, restart the server. Change a test, rerun the suite.</p><p>We shipped <code>sandbox.watch()</code> in March. It returns an SSE stream backed by native <a href="https://man7.org/linux/man-pages/man7/inotify.7.html"><u>inotify</u></a>, the kernel mechanism Linux uses for filesystem events.</p>
            <pre><code>import { parseSSEStream, type FileWatchSSEEvent } from '@cloudflare/sandbox';

const stream = await sandbox.watch('/workspace/src', {
  recursive: true,
  include: ['*.ts', '*.tsx']
});

for await (const event of parseSSEStream&lt;FileWatchSSEEvent&gt;(stream)) {
  if (event.type === 'modify' &amp;&amp; event.path.endsWith('.ts')) {
    await sandbox.exec('npx tsc --noEmit', { cwd: '/workspace' });
  }
}
</code></pre>
            <p>This is one of those primitives that quietly changes what agents can do. An agent that can observe the filesystem in real time can participate in the same feedback loops as a human developer.</p>
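<p>One practical wrinkle when reacting to watch events: editors often emit several events for a single save, so you usually want to coalesce a burst down to one action per file before kicking off a rebuild. A small sketch of that coalescing step (a hypothetical helper, not part of the SDK):</p>

```typescript
// Coalesce a burst of file events down to the latest event per path,
// so one save triggers one rebuild instead of several.
type FileEvent = { type: string; path: string; ts: number };

function coalesce(events: FileEvent[]): FileEvent[] {
  const latest = new Map<string, FileEvent>();
  for (const event of events) latest.set(event.path, event);
  return [...latest.values()];
}

const burst: FileEvent[] = [
  { type: "modify", path: "src/a.ts", ts: 1 },
  { type: "modify", path: "src/a.ts", ts: 2 }, // duplicate event for one save
  { type: "modify", path: "src/b.ts", ts: 2 },
];
console.log(coalesce(burst).length); // 2 (one trigger per file)
```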
    <div>
      <h2>Waking up quickly with snapshots</h2>
      <a href="#waking-up-quickly-with-snapshots">
        
      </a>
    </div>
    <p>Imagine a (human) developer working on their laptop. They <code>git clone</code> a repo, run <code>npm install</code>, write code, push a PR, then close their laptop while waiting for code review. When it’s time to resume work, they just re-open the laptop and continue where they left off.</p><p>If an agent wants to replicate this workflow on a naive container platform, you run into a snag. How do you resume where you left off quickly? You could keep a sandbox running, but then you pay for idle compute. You could start fresh from the container image, but then you have to wait for a long <code>git clone</code> and <code>npm install</code>.</p><p>Our answer is snapshots, which will be rolling out in the coming weeks.</p><p>A snapshot preserves a container's full disk state, OS config, installed dependencies, modified files, data files and more. Then it lets you quickly restore it later.</p><p>You can configure a Sandbox to automatically snapshot when it goes to sleep.</p>
            <pre><code>class AgentDevEnvironment extends Sandbox {
  sleepAfter = "5m";
  persistAcrossSessions = {type: "disk"}; // you can also specify individual directories
}
</code></pre>
            <p>You can also programmatically take a snapshot and manually restore it. This is useful for checkpointing work or forking sessions. For instance, if you wanted to run four instances of an agent in parallel, you could easily boot four sandboxes from the same state.</p>
            <pre><code>class AgentDevEnvironment extends Sandbox {}

async function forkDevEnvironment(env, baseId, numberOfForks) {
  const baseInstance = getSandbox(env.Sandbox, baseId);
  const snapshotId = await baseInstance.snapshot();

  const forks = Array.from({ length: numberOfForks }, async (_, i) =&gt; {
    const newInstance = getSandbox(env.Sandbox, `${baseId}-fork-${i}`);
    return newInstance.start({ snapshot: snapshotId });
  });

  await Promise.all(forks);
}
</code></pre>
            <p>Snapshots are stored in <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> within your account, giving you durability and location-independence. R2's <a href="https://developers.cloudflare.com/cache/how-to/tiered-cache/"><u>tiered caching</u></a> system allows for fast restores across all of Region: Earth.</p><p>In future releases, live memory state will also be captured, allowing running processes to resume exactly where they left off. A terminal and an editor will reopen in the exact state they were in when last closed.</p><p>If you are interested in restoring session state before snapshots go live, you can use the <a href="https://developers.cloudflare.com/sandbox/guides/backup-restore/"><code><u>backup and restore</u></code></a> methods today. These also persist and restore directories using R2, but are not as performant as true VM-level snapshots, though they can still lead to considerable speed improvements over naively recreating session state.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/LzVucBiNvxOh3NFn0ukxj/3b8e6cd9a5ca241b6c6a7c8556c0a529/BLOG-3264_3.gif" />
          </figure><p><sup><i>Booting a sandbox, cloning ‘axios’, and running npm install takes 30 seconds. Restoring from a backup takes two seconds.</i></sup></p><p>Stay tuned for the official snapshot release.</p>
    <div>
      <h2>Higher limits and Active CPU Pricing</h2>
      <a href="#higher-limits-and-active-cpu-pricing">
        
      </a>
    </div>
    <p>Since our initial launch, we’ve been steadily increasing capacity. Users on our standard pricing plan can now run 15,000 concurrent instances of the lite instance type, 6,000 instances of basic, and over 1,000 concurrent larger instances. <a href="https://forms.gle/3vvDvXPECjy6F8v56"><u>Reach out</u></a> to run even more!</p><p>We also changed our pricing model to be more cost-effective at scale. Sandboxes now <a href="https://developers.cloudflare.com/changelog/post/2025-11-21-new-cpu-pricing/"><u>only charge for actively used CPU cycles</u></a>. This means that you aren’t paying for idle CPU while your agent is waiting for an LLM to respond.</p>
    <div>
      <h2>This is what a computer looks like </h2>
      <a href="#this-is-what-a-computer-looks-like">
        
      </a>
    </div>
    <p>Nine months ago, we shipped a sandbox that could run commands and access a filesystem. That was enough to prove the concept.</p><p>What we have now is different in kind. A Sandbox today is a full development environment: a terminal you can connect a browser to, a code interpreter with persistent state, background processes with live preview URLs, a filesystem that emits change events in real time, egress proxies for secure credential injection, and a snapshot mechanism that makes warm starts nearly instant. </p><p>When you build on this, a satisfying pattern emerges: agents that do real engineering work. Clone a repo, install it, run the tests, read the failures, edit the code, run the tests again. The kind of tight feedback loop that makes a human engineer effective — now the agent gets it too.</p><p>We're at version 0.8.9 of the SDK. You can get started today:</p><p><code>npm i @cloudflare/sandbox@latest</code></p><div>
  
</div>
<p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Containers]]></category>
            <category><![CDATA[Sandbox]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">7jXMXMjQUIpjGzJdPadO4a</guid>
            <dc:creator>Kate Reznykova</dc:creator>
            <dc:creator>Mike Nomitch</dc:creator>
            <dc:creator>Naresh Ramesh</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building agents with OpenAI and Cloudflare’s Agents SDK]]></title>
            <link>https://blog.cloudflare.com/building-agents-with-openai-and-cloudflares-agents-sdk/</link>
            <pubDate>Wed, 25 Jun 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’re building AI agents where logic and reasoning are handled by OpenAI’s Agents SDK, and execution happens across Cloudflare's global network via Cloudflare’s Agents SDK.  ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>What even <i>is</i> an Agents SDK?</h2>
      <a href="#what-even-is-an-agents-sdk">
        
      </a>
    </div>
    <p>The AI landscape is evolving at an incredible pace, and with it, the tools and platforms available to developers are becoming more powerful and interconnected than ever. Here at Cloudflare, we're genuinely passionate about empowering you to build the next generation of applications, and that absolutely includes intelligent agents that can reason, act, and interact with the world.</p><p>When we talk about "<b>Agents SDKs</b>", it can sometimes feel a bit… fuzzy. Some SDKs (software development kits) <b>described as 'agent' SDKs</b> are really about providing frameworks for tool calling and interacting with models. They're fantastic for defining an agent's "brain" – its intelligence, its ability to reason, and how it uses external tools. Here’s the thing: all these agents need a place to actually run. Then there's what we offer at Cloudflare: <a href="https://developers.cloudflare.com/agents/"><u>an SDK purpose-built to provide a seamless execution layer for agents</u></a>. While orchestration frameworks define how agents think, our SDK focuses on where they run, abstracting away infrastructure to enable persistent, scalable execution across our global network.</p><p>Think of it as the ultimate shell, the place where any agent, defined by any agent SDK (like the powerful new OpenAI Agents SDK), can truly live, persist, and run at global scale.</p><p>We’ve chosen OpenAI’s Agents SDK for this example, but the infrastructure is not specific to it. The execution layer is designed to integrate with any agent runtime.</p><p>That’s what this post is about: what we built, what we learned, and the design patterns that emerged from fusing these two pieces together.</p>
    <div>
      <h2>Why use two SDKs?</h2>
      <a href="#why-use-two-sdks">
        
      </a>
    </div>
    <p><a href="https://openai.github.io/openai-agents-js/"><u>OpenAI’s Agents SDK</u></a> gives you the <i>agent</i>: a reasoning loop, tool definitions, and memory abstraction. But it assumes you bring your own runtime and state.</p><p><a href="https://developers.cloudflare.com/agents/"><u>Cloudflare’s Agents SDK</u></a> gives you the <i>environment</i>: a persistent object on our network with identity, state, and built-in concurrency control. But it doesn’t tell you how your agent should behave.</p><p>By combining them, we get a clear split:</p><ul><li><p><b>OpenAI</b>: cognition, planning, tool orchestration</p></li><li><p><b>Cloudflare</b>: location, identity, memory, execution</p></li></ul><p>This separation of concerns let us stay focused on logic, not glue code.</p>
    <div>
      <h2>What you can build with persistent agents</h2>
      <a href="#what-you-can-build-with-persistent-agents">
        
      </a>
    </div>
    <p>Cloudflare <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> let agents go beyond simple, stateless functions. They can persist memory, coordinate across workflows, and respond in real time. Combined with the OpenAI Agents SDK, this enables systems that reason, remember, and adapt over time.</p><p>Here are three architectural patterns that show how agents can be composed, guided, and connected:</p><p><b>Multi-agent systems: </b>Divide responsibilities across specialized agents that collaborate on tasks.</p><p><b>Human-in-the-loop: </b>Let agents plan independently but wait for human input at key decision points.</p><p><b>Addressable agents: </b>Make agents reachable through real-world interfaces like phone calls or WebSockets.</p>
    <div>
      <h3>Multi-agent systems </h3>
      <a href="#multi-agent-systems">
        
      </a>
    </div>
    <p>Multi-agent systems let you break down a task into specialized agents that handle distinct responsibilities. In the example below, a triage agent routes questions to either a history or math tutor based on the query. Each agent has its own memory, logic, and instructions. With Cloudflare <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, these agents persist across sessions and can coordinate responses, making it easy to build systems that feel modular but work together intelligently.</p>
            <pre><code>// Cloudflare's Agent (the Durable Object host) and OpenAI's Agent
// (the reasoning loop) share a name, so alias the OpenAI one.
import { Agent } from "agents";
import { Agent as OpenAIAgent, run } from "@openai/agents";

export class MyAgent extends Agent {
  async onRequest() {
    const historyTutorAgent = new OpenAIAgent({
      instructions:
        "You provide assistance with historical queries. Explain important events and context clearly.",
      name: "History Tutor",
    });

    const mathTutorAgent = new OpenAIAgent({
      instructions:
        "You provide help with math problems. Explain your reasoning at each step and include examples",
      name: "Math Tutor",
    });

    const triageAgent = new OpenAIAgent({
      handoffs: [historyTutorAgent, mathTutorAgent],
      instructions:
        "You determine which agent to use based on the user's homework question",
      name: "Triage Agent",
    });

    const result = await run(triageAgent, "What is the capital of France?");
    return Response.json(result.finalOutput);
  }
}</code></pre>
            
    <div>
      <h3>Human-in-the-loop</h3>
      <a href="#human-in-the-loop">
        
      </a>
    </div>
    <p>We implemented a<a href="https://github.com/cloudflare/agents/tree/main/openai-sdk/human-in-the-loop"> <u>human-in-the-loop agent example</u></a> using these two SDKs together. The goal: run an OpenAI agent with a planning loop, allow human decisions to intercept the plan, and preserve state across invocations via Durable Objects.</p><p>The architecture looked like this:</p><ul><li><p>An OpenAI <code>Agent</code> instance runs inside a Durable Object</p></li><li><p>User submits a prompt</p></li><li><p>The agent plans multiple steps</p></li><li><p>After each step, it yields control and waits for a human to approve or intervene</p></li><li><p>State (including memory and intermediate steps) is persisted in <code>this.state</code></p></li></ul><p>It looks like this:</p>
            <pre><code>export class MyAgent extends Agent {
  // ...
  async onStart() {
    if (this.state.serialisedRunState) {
      const runState = await RunState.fromString(
        this.agent,
        this.state.serialisedRunState
      );
      this.result = await run(this.agent, runState);
    }
  }
}</code></pre>
            <p>This design lets us intercept the agent’s plan at every step and store it. The client could then:</p><ul><li><p>Fetch the pending step via another route</p></li><li><p>Review or modify it</p></li><li><p>Send approval or rejection back to the agent to resume execution</p></li></ul><p>This is only possible because the agent lives inside a Durable Object. It has persistent memory and identity, allowing multi-turn interaction even across sessions.</p>
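<p>The approve/reject cycle can be modeled as a small piece of bookkeeping independent of either SDK: the agent parks a planned step, a human decides, and only approved steps are released for execution. A sketch of that state machine (names here are illustrative, not the example's actual API):</p>

```typescript
// Minimal pending-step bookkeeping for a human-in-the-loop agent:
// the agent proposes steps, a human approves or rejects each one,
// and only approved steps are released for execution.
// (Illustrative names, not the actual example's API.)
type StepStatus = "pending" | "approved" | "rejected";

class ApprovalQueue {
  private steps = new Map<string, { description: string; status: StepStatus }>();

  propose(id: string, description: string): void {
    this.steps.set(id, { description, status: "pending" });
  }

  decide(id: string, approved: boolean): void {
    const step = this.steps.get(id);
    if (!step) throw new Error(`unknown step: ${id}`);
    step.status = approved ? "approved" : "rejected";
  }

  // Steps the agent is allowed to resume with.
  approved(): string[] {
    return [...this.steps.entries()]
      .filter(([, step]) => step.status === "approved")
      .map(([id]) => id);
  }
}

const queue = new ApprovalQueue();
queue.propose("step-1", "send the draft email");
queue.propose("step-2", "delete old records");
queue.decide("step-1", true);
queue.decide("step-2", false);
console.log(queue.approved()); // ["step-1"]
```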
    <div>
      <h3>Addressable agents: “Call my Agent”</h3>
      <a href="#addressable-agents-call-my-agent">
        
      </a>
    </div>
    <p>One of the most interesting takeaways from this pattern is that agents are not just HTTP endpoints. Yes, you can <code>fetch()</code> them via Durable Objects, but conceptually, <b>agents are addressable entities</b> — and there's no reason those addresses have to be tied to URLs.</p><p>You could imagine agents reachable by phone call, by email, or via pub/sub systems. Durable Objects give each agent a global identity that can be referenced however you want.</p><p>In this design:</p><ul><li><p>External sources of input connect to the Cloudflare network via email, HTTP, or any network interface. In this demo, we use Twilio to route a phone call to a WebSocket input on the Agent.</p></li><li><p>The call is routed through Cloudflare’s infrastructure, so latency is low and identity is preserved.</p></li><li><p>We also store the real-time state updates within the agent, so we can view them on a website (served by the agent itself). This is great for use cases like customer service and education. </p></li></ul>
            <pre><code>import { Agent, type Connection, type ConnectionContext } from "agents";
import { RealtimeAgent, RealtimeSession } from "@openai/agents/realtime";
import { TwilioRealtimeTransportLayer } from "@openai/agents-extensions";

export class MyAgent extends Agent {
  // receive phone calls via websocket
  async onConnect(connection: Connection, ctx: ConnectionContext) {
    if (ctx.request.url.includes("media-stream")) {
      const agent = new RealtimeAgent({
        instructions:
          "You are a helpful assistant that starts every conversation with a creative greeting.",
        name: "Triage Agent",
      });

      connection.send(`Welcome! You are connected with ID: ${connection.id}`);

      const twilioTransportLayer = new TwilioRealtimeTransportLayer({
        twilioWebSocket: connection,
      });

      const session = new RealtimeSession(agent, {
        transport: twilioTransportLayer,
      });

      await session.connect({
        apiKey: process.env.OPENAI_API_KEY as string,
      });

      session.on("history_updated", (history) =&gt; {
        this.setState({ history });
      });
    }
  }
}</code></pre>
            <p>This lets an agent become truly multimodal, accepting and outputting data as audio, video, text, or email. This pattern opened up exciting possibilities for modular agents and long-running workflows where each agent focuses on a specific domain.</p>
    <div>
      <h2>What we learned (and what you should know)</h2>
      <a href="#what-we-learned-and-what-you-should-know">
        
      </a>
    </div>
    
    <div>
      <h3>1. OpenAI assumes you bring your own state — Cloudflare gives you one</h3>
      <a href="#1-openai-assumes-you-bring-your-own-state-cloudflare-gives-you-one">
        
      </a>
    </div>
    <p>OpenAI’s SDK is stateless by default. You can attach memory abstractions, but the SDK doesn’t tell you where or how to persist it. Cloudflare’s Durable Objects, by contrast, <i>are</i> persistent — that’s the whole point. Every instance has a unique identity and storage API <code>(this.ctx.storage)</code>. This means we can:</p><ul><li><p>Store long-term memory across invocations</p></li><li><p>Hydrate the agent’s memory before <code>run()</code></p></li><li><p>Save any updates after <code>run()</code> completes</p></li></ul>
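<p>That hydrate-before-<code>run()</code> / persist-after-<code>run()</code> loop looks roughly like this, with a plain <code>Map</code> standing in for the Durable Object storage API and a stubbed agent run (both are stand-ins, not the real APIs):</p>

```typescript
// The hydrate -> run -> persist loop, with a Map standing in for the
// Durable Object storage API and a stub in place of the OpenAI run().
const storage = new Map<string, string[]>();

// Stand-in for run(agent, prompt): reports how much memory was in context.
async function runStub(memory: string[], prompt: string): Promise<string> {
  return `answered "${prompt}" with ${memory.length} remembered turn(s) in context`;
}

async function handlePrompt(agentId: string, prompt: string): Promise<string> {
  // Hydrate long-term memory before run()
  const memory = storage.get(agentId) ?? [];
  const output = await runStub(memory, prompt);
  // Persist the updated memory after run() completes
  storage.set(agentId, [...memory, prompt]);
  return output;
}

const first = await handlePrompt("agent-1", "hello");
const second = await handlePrompt("agent-1", "again");
console.log(first);  // answered "hello" with 0 remembered turn(s) in context
console.log(second); // answered "again" with 1 remembered turn(s) in context
```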
    <div>
      <h3>2. Durable Object routing isn’t just routing — it’s your agent factory</h3>
      <a href="#2-durable-object-routing-isnt-just-routing-its-your-agent-factory">
        
      </a>
    </div>
    <p>At first glance, <code>routeAgentRequest</code> looks like a simple dispatcher: map a request to a Durable Object based on a URL. But it plays a deeper role — it defines the identity boundary for your agents. We realized this while trying to scope agent instances per user and per task.</p><p>In Durable Objects, identity is tied to an ID. When you call <code>idFromName()</code>, you get a stable, name-based ID that always maps to the same object. This means repeated calls with the same name return the same agent instance — along with its memory and state. In contrast, calling <code>.newUniqueId()</code> creates a new, isolated object each time.</p><p>This is where routing becomes critical: it's where you decide how long an agent should live, and what it should remember.</p><p>This lets us:</p><ul><li><p>Spin up multiple agents per user (e.g. one per session or task)</p></li><li><p>Co-locate memory and logic</p></li><li><p>Avoid unintended memory sharing between conversations</p></li></ul><p><b>Gotcha:</b> If you forget to use <code>idFromName()</code> and just call <code>.newUniqueId()</code>, you’ll get a new agent each time, and your memory will never persist. This is a common early bug that silently kills statefulness.</p>
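<p>The <code>idFromName()</code> vs. <code>newUniqueId()</code> distinction is easy to see with a toy registry that mimics the semantics (an analogy for illustration, not how Durable Objects are implemented):</p>

```typescript
// Toy registry mimicking Durable Object identity semantics:
// name-based lookup always returns the same instance (and its memory);
// unique IDs return a fresh, isolated instance every time.
class AgentRegistry {
  private byName = new Map<string, { memory: string[] }>();
  private counter = 0;

  // idFromName() analogue: same name, same instance, same memory.
  fromName(name: string): { memory: string[] } {
    let agent = this.byName.get(name);
    if (!agent) {
      agent = { memory: [] };
      this.byName.set(name, agent);
    }
    return agent;
  }

  // newUniqueId() analogue: a fresh, isolated instance on every call.
  unique(): { id: number; memory: string[] } {
    this.counter += 1;
    return { id: this.counter, memory: [] };
  }
}

const registry = new AgentRegistry();
registry.fromName("user-42:session-1").memory.push("remembered");
console.log(registry.fromName("user-42:session-1").memory.length); // 1 (persists)
console.log(registry.unique().memory.length);                      // 0 (always fresh)
```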
    <div>
      <h3>3. Agents are composable — and that’s powerful</h3>
      <a href="#3-agents-are-composable-and-thats-powerful">
        
      </a>
    </div>
    <p>Agents can invoke each other using Durable Object routing, forming workflows where each agent owns its own memory and logic. This lets you build systems from specialized parts that cooperate.</p><p>The result looks less like a monolith and more like microservices — composable, stateful, and distributed.</p>
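    <p>As a rough sketch of that composition — every agent name here is invented, and in a real Worker each lookup would go through the Durable Object namespace (<code>idFromName()</code> plus a stub) rather than a local map:</p>

```typescript
type Agent = { handle(input: string): Promise<string> };

// Two specialist agents, each owning its own logic.
const researcher: Agent = {
  async handle(topic) {
    return `notes on ${topic}`; // stands in for a real research agent
  },
};

const summarizer: Agent = {
  async handle(notes) {
    return `summary of (${notes})`; // stands in for a real summarizer
  },
};

// Stand-in for Durable Object routing: resolve an agent by name.
const registry = new Map<string, Agent>([
  ["researcher", researcher],
  ["summarizer", summarizer],
]);

// The orchestrator composes the specialists into a pipeline.
const orchestrator: Agent = {
  async handle(topic) {
    const notes = await registry.get("researcher")!.handle(topic);
    return registry.get("summarizer")!.handle(notes);
  },
};
```

    <p>Because each specialist is addressed by name, swapping one out — or giving each its own memory and lifecycle — doesn’t change the orchestrator.</p>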
    <div>
      <h2>Final thoughts: building agents that think <i>and</i> live</h2>
      <a href="#final-thoughts-building-agents-that-think-and-live">
        
      </a>
    </div>
    <p>This pattern — OpenAI cognition + Cloudflare execution — worked better than we expected. It let us:</p><ul><li><p>Write agents with full planning and memory</p></li><li><p>Pause and resume them asynchronously</p></li><li><p>Avoid building orchestration from scratch</p></li><li><p>Compose multiple agents into larger systems</p></li></ul><p>The hardest parts:</p><ul><li><p>Correctly scoping agent architecture</p></li><li><p>Persisting only valid state</p></li><li><p>Debugging with good observability</p></li></ul><p>At Cloudflare, we are incredibly excited to see what <i>you</i> build with this powerful combination. The future of AI agents is intelligent, distributed, and enormously capable. Get started today by exploring the <a href="https://github.com/openai/openai-agents-js"><u>OpenAI Agents SDK</u></a> and diving into the <a href="https://developers.cloudflare.com/agents/"><u>Cloudflare Agents SDK documentation </u></a>(which leverages Cloudflare Workers and Durable Objects).</p><p>We’re just getting started, and we’d love to see what you build. Please <a href="https://discord.com/invite/cloudflaredev"><u>join our Discord</u></a>, ask questions, and tell us what you’re building.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">3RJX3pjuKNyVyxPYsPBbGg</guid>
            <dc:creator>Kate Reznykova</dc:creator>
            <dc:creator>Sunil Pai</dc:creator>
        </item>
    </channel>
</rss>