Shipping Sandboxed Workers for AI Agents

Letting users extend agents with custom code without letting their code escape.

Introduction

When you let users write code that an AI agent can call, you cross a security threshold most teams aren't ready for. The code is untrusted, the agent that calls it is partially trusted, and the data they touch is your customers'. Sandboxed worker architectures (small, isolated execution environments with sharp APIs) let you ship that capability without taking on a vulnerability disclosure programme as a side effect.

Why this matters

  • User-extensible agents are a 10x feature, but a single sandbox escape is an existential bug.
  • Tool design changes when the consumer is an LLM, not a human reading docs.
  • The same sandbox can host human-written and agent-generated code, which is a lot of attack surface.
  • Performance, security, and developer experience pull in opposite directions; you have to design for all three.

Core concepts

1. Isolation primitives

Choose your sandbox: V8 isolates (e.g. Cloudflare Workers, Deno Subhosting), WASM (Wasmtime, Wasmer), microVMs (Firecracker), or full containers. Each trades off cold-start latency, memory overhead, and escape risk differently.

2. Capability-scoped APIs

Don't expose your internal API to the sandbox; expose a curated SDK. Each capability is opt-in per worker, with an audit log per call.
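Here is what that curated surface can look like, sketched in TypeScript. Everything named below is hypothetical (the capability strings, buildWorkerSdk, the storage helpers); the point is that the worker only ever receives gated, audited wrappers, never an internal client.

```ts
type Capability = "kv.read" | "kv.write" | "http.fetch";

interface AuditLog {
  record(entry: { worker: string; capability: Capability; args: unknown }): void;
}

// Build the only API surface the sandboxed worker ever sees.
function buildWorkerSdk(workerId: string, granted: Set<Capability>, audit: AuditLog) {
  // Wrap a raw operation so it is capability-gated and audited per call.
  const gate =
    <A extends unknown[], R>(capability: Capability, op: (...args: A) => Promise<R>) =>
    async (...args: A): Promise<R> => {
      if (!granted.has(capability)) {
        throw new Error(`capability ${capability} not granted to worker ${workerId}`);
      }
      audit.record({ worker: workerId, capability, args });
      return op(...args);
    };

  return {
    kvGet: gate("kv.read", (key: string) => lookupKv(workerId, key)),
    kvPut: gate("kv.write", (key: string, value: string) => writeKv(workerId, key, value)),
  };
}

// Stand-ins for tenant-scoped storage; a real system calls its own backend here.
async function lookupKv(tenant: string, key: string): Promise<string | null> {
  return null;
}
async function writeKv(tenant: string, key: string, value: string): Promise<void> {}
```

Note that the tenant ID is baked in at construction time, so the untrusted code never passes (or forges) it.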

3. Resource quotas as defence-in-depth

Set per-worker limits on CPU, memory, wall-clock time, network egress, and request count. The sandbox enforces them as hard limits, so a runaway agent can't take down the platform.
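The exact knobs depend on your runtime, but a per-worker quota record tends to look something like this. Field names and defaults below are illustrative, not any particular platform's API.

```ts
// Illustrative per-worker quota record. The sandbox runtime enforces these
// as hard limits and kills the worker rather than letting it degrade the host.
interface WorkerQuota {
  cpuMillisPerInvocation: number; // CPU time, not wall clock
  wallClockMillis: number;        // hard deadline per invocation
  memoryBytes: number;            // heap ceiling for the isolate
  egressBytesPerDay: number;      // network budget across all fetches
  requestsPerMinute: number;      // invocation rate limit
}

const defaultQuota: WorkerQuota = {
  cpuMillisPerInvocation: 50,
  wallClockMillis: 5_000,
  memoryBytes: 128 * 1024 * 1024,
  egressBytesPerDay: 50 * 1024 * 1024,
  requestsPerMinute: 600,
};
```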

4. Tools as a product surface

Tool schemas are read more often by LLMs than by humans. Names, descriptions, and parameter schemas need to be tuned for both.
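As a concrete example, here is what a tool definition tuned for an LLM consumer might look like. The tool itself is made up; the pattern (terse name, behaviour-focused description, tightly constrained parameters) is the point.

```ts
// A hypothetical tool definition written for an LLM reader first.
// The description says when to use the tool, not just what it does,
// and every parameter is constrained so the model can't guess freely.
const searchOrdersTool = {
  name: "search_orders",
  description:
    "Search a customer's order history. Use this before answering any " +
    "question about a past order. Returns at most `limit` orders, newest first.",
  parameters: {
    type: "object",
    properties: {
      customer_id: { type: "string", description: "Opaque customer ID from the session." },
      status: { type: "string", enum: ["pending", "shipped", "delivered", "refunded"] },
      limit: { type: "integer", minimum: 1, maximum: 20, default: 5 },
    },
    required: ["customer_id"],
    additionalProperties: false,
  },
} as const;
```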

Practical patterns

Per-tenant V8 isolates

Sub-millisecond cold start, strong memory isolation, well-suited for stateless extension code.
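One way to get this in Node is the isolated-vm package, which exposes V8 isolates directly. A rough sketch, with pooling and error handling omitted and illustrative limits:

```ts
import ivm from "isolated-vm";

// Run one tenant's extension code in its own V8 isolate with a hard
// memory ceiling and a wall-clock timeout. Each tenant gets a separate
// heap, so an OOM or infinite loop is contained to that isolate.
async function runExtension(code: string, input: string): Promise<string> {
  const isolate = new ivm.Isolate({ memoryLimit: 128 }); // MB, illustrative
  try {
    const context = await isolate.createContext();
    await context.global.set("input", input); // primitives copy across the boundary
    const script = await isolate.compileScript(code);
    // The timeout is enforced by the runtime, not by the untrusted code.
    return await script.run(context, { timeout: 1_000 });
  } finally {
    isolate.dispose(); // free the isolate's heap promptly
  }
}
```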

WASM for compute-heavy ops

When users need numerics or parsing, WASM gives you portable, sandboxed execution with more language choice.
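The standard WebAssembly JS API already gives you the key property: the module can only call what you put in its import object. A minimal sketch; the env.log import and compute export are assumptions about the module, not fixed names.

```ts
// Instantiate an untrusted WASM module with an explicitly curated import
// object. The module cannot reach anything you don't list here: no
// filesystem, no network, no host objects.
async function runWasm(moduleBytes: Uint8Array, n: number): Promise<number> {
  const { instance } = await WebAssembly.instantiate(moduleBytes, {
    // Assumed import namespace and function; match your module's declarations.
    env: {
      log: (value: number) => console.log("worker:", value),
    },
  });
  // Assumed exported entry point named "compute".
  const compute = instance.exports.compute as (x: number) => number;
  return compute(n);
}
```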

Egress allow-listing

Workers can only fetch from URLs the tenant has explicitly approved; everything else is blocked by the sandbox runtime.
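Inside the runtime this is typically a wrapper around fetch that checks the tenant's allow-list before any connection opens. A simplified sketch (the allow-list shape is assumed):

```ts
// Egress gate: the only fetch the worker ever sees. Anything not on the
// tenant's explicit allow-list is rejected before a connection is opened.
function makeGatedFetch(allowedOrigins: Set<string>): typeof fetch {
  return async (input, init?) => {
    const url = new URL(
      typeof input === "string" ? input : input instanceof URL ? input.href : input.url,
    );
    if (url.protocol !== "https:" || !allowedOrigins.has(url.origin)) {
      throw new Error(`egress to ${url.origin} is not on this tenant's allow-list`);
    }
    // redirect: "error" stops an approved host from redirecting the worker
    // to an origin that was never approved.
    return fetch(input, { ...init, redirect: "error" });
  };
}

// Example: this worker may only call the tenant-approved API host.
const workerFetch = makeGatedFetch(new Set(["https://api.example.com"]));
```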

Tool-as-API contract testing

Auto-generate fixtures for every tool and run them in CI; flag tools whose schemas drift from their handlers.
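A minimal version of that CI check, using zod as the single source of truth for each tool's input schema. The registry shape and the fixture are hypothetical.

```ts
import { z } from "zod";

// Hypothetical shape: each tool pairs a zod input schema with its handler
// and an auto-generated fixture that should always be valid input.
interface ToolDef<S extends z.ZodTypeAny> {
  input: S;
  fixture: unknown;
  handler: (args: z.infer<S>) => Promise<unknown>;
}

function defineTool<S extends z.ZodTypeAny>(def: ToolDef<S>): ToolDef<S> {
  return def;
}

const searchOrders = defineTool({
  input: z.object({
    customer_id: z.string(),
    limit: z.number().int().min(1).max(20).default(5),
  }),
  fixture: { customer_id: "cus_123" },
  handler: async (args) => [], // args is fully typed from the schema
});

// CI contract test: the fixture must still parse against the schema, and the
// parsed value must be accepted by the handler without throwing. A schema
// edit the handler can't support (or vice versa) fails the build.
async function contractTest(name: string, tool: ToolDef<z.ZodTypeAny>) {
  const parsed = tool.input.safeParse(tool.fixture);
  if (!parsed.success) throw new Error(`tool ${name}: fixture drifted from schema`);
  await tool.handler(parsed.data);
}

await contractTest("search_orders", searchOrders);
```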

Pitfalls to avoid

  • Letting the sandbox share filesystem or process namespace with the host.
  • Trusting the LLM to call only the tools you intended; assume it will try every tool.
  • Designing tool errors for humans; LLMs need structured, retry-aware error shapes (see the sketch after this list).
  • Forgetting that tool schemas count against context budget; bloated tool inventories degrade the model.
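On the error-shape point, a structure the model can act on might look like this; the field names are illustrative.

```ts
// Illustrative structured tool error. The `retryable` flag and `fix` hint
// tell the model what to do next, instead of a human-oriented stack trace.
interface ToolError {
  code: "rate_limited" | "invalid_argument" | "not_found" | "upstream_unavailable";
  message: string;        // one short sentence, written for the model
  retryable: boolean;     // may the model call the same tool again?
  retryAfterMs?: number;  // if retryable, how long to wait
  fix?: string;           // concrete correction, e.g. which argument to change
}

const example: ToolError = {
  code: "invalid_argument",
  message: "limit must be between 1 and 20.",
  retryable: true,
  fix: "Call search_orders again with limit set to 20 or less.",
};
```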

Key takeaways

  1. Pick your isolation primitive deliberately; "container" is rarely the right answer for agent extensions.
  2. Treat tools as a product. Name them, document them, version them, deprecate them carefully.
  3. Defence-in-depth: capabilities, quotas, network policy, and audit logs.
  4. Test the sandbox itself; assume someone will try to break out.
