Introduction
Most AI agents in the wild are reactive chatbots — fine for one-off queries, but useless as personal assistants or as drivers of autonomous workflows. They reset, forget, and lack initiative. Persistent, proactive architectures flip the defaults: state survives sessions, the agent watches for triggers, and the human is one of many possible interlocutors rather than the only one that matters.
Why this matters
- Real assistants notice things; reactive bots only react to direct prompts.
- Persistence unlocks long-horizon tasks (multi-week projects, ongoing monitoring).
- Proactivity converts an agent from a tool into a teammate.
- These properties are hard to bolt on later — they shape the whole architecture.
Core concepts
State as a first-class citizen
Agent state (memory, goals, in-flight tasks, beliefs about the world) lives in a durable store, not in a context window. The model is stateless; the agent is not.
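A minimal sketch of what "durable store, not context window" can look like: a small SQLite-backed key-value state object that survives process restarts. The table name, keys, and `AgentState` class are illustrative, not a standard API.

```python
import json
import sqlite3

class AgentState:
    """Durable agent state: memory, goals, and in-flight tasks live in a
    database, not in the model's context window."""

    def __init__(self, path=":memory:"):
        # Pass a file path in real use; ":memory:" keeps this demo self-contained.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)"
        )

    def set(self, key, value):
        # Upsert: new keys are inserted, existing keys are overwritten.
        self.db.execute(
            "INSERT INTO state VALUES (?, ?) "
            "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
            (key, json.dumps(value)),
        )
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute(
            "SELECT value FROM state WHERE key = ?", (key,)
        ).fetchone()
        return json.loads(row[0]) if row else default

state = AgentState()
state.set("goals", ["draft report", "monitor inbox"])
```

The model remains stateless: each run reads the slice of state it needs and writes back any changes before exiting.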
Event-driven triggering
Proactive agents react to events: cron schedules, webhook deliveries, file changes, message arrivals. The "main loop" is an event router, not a chat loop.
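The event-router main loop can be sketched as a dispatch table keyed by event kind. The event kinds and handler shapes here are assumptions for illustration, not a fixed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                 # e.g. "cron", "webhook", "file_change", "message"
    payload: dict = field(default_factory=dict)

class EventRouter:
    """The agent's 'main loop': route events to handlers, not chat turns."""

    def __init__(self):
        self.handlers = {}

    def on(self, kind, handler):
        # Multiple handlers may subscribe to the same event kind.
        self.handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event):
        # Unknown event kinds fall through harmlessly with no handlers.
        return [h(event) for h in self.handlers.get(event.kind, [])]

router = EventRouter()
router.on("cron", lambda e: f"ran scheduled job {e.payload['job']}")
```

A chat message from the human is just one more event kind; it gets no privileged code path.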
Goal hierarchies
A long-running agent juggles many goals at different priorities. Explicit goal trees with success/abandon criteria prevent the agent from spinning forever or losing the thread.
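One way to make goal trees with abandon criteria concrete: each goal carries a priority and an attempt cap, and selection always returns the highest-priority open leaf. The fields and the attempt-count criterion are illustrative choices; deadlines or cost budgets work the same way.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    priority: int = 0
    attempts: int = 0
    max_attempts: int = 5     # explicit abandon criterion: give up after N tries
    done: bool = False
    subgoals: list = field(default_factory=list)

    def should_abandon(self):
        return not self.done and self.attempts >= self.max_attempts

    def leaves(self):
        if not self.subgoals:
            return [self]
        return [leaf for sub in self.subgoals for leaf in sub.leaves()]

    def next_goal(self):
        """Highest-priority unfinished, non-abandoned leaf — or None."""
        open_leaves = [g for g in self.leaves()
                       if not g.done and not g.should_abandon()]
        return max(open_leaves, key=lambda g: g.priority, default=None)

root = Goal("ship report", subgoals=[
    Goal("gather data", priority=2),
    Goal("write draft", priority=1),
])
```

When `next_goal()` returns `None`, the tree is exhausted: every leaf is either done or abandoned, which is the signal to stop rather than spin.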
Human-in-the-loop checkpoints
Proactive doesn't mean unsupervised. Define checkpoints where the agent must surface a decision, with the cost of missing the checkpoint priced in.
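"Priced in" can be made literal: every checkpoint pairs a question with a deadline and a safe default, so silence resolves to a known action rather than an indefinite stall. The field names here are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class Checkpoint:
    question: str
    default_action: str       # what the agent does if the human never answers
    deadline: float           # unix timestamp: the cost of silence, priced in
    answer: str = None

    def resolve(self, now=None):
        """A human answer always wins; past the deadline, fall back to the
        pre-agreed safe default; before it, keep waiting."""
        if self.answer is not None:
            return self.answer
        now = time.time() if now is None else now
        return self.default_action if now >= self.deadline else None

cp = Checkpoint("Send the draft to the client?",
                default_action="hold",
                deadline=time.time() + 3600)
```

The key design choice is that the default is chosen when the checkpoint is created, while the stakes are clear, not improvised by the agent at the deadline.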
Practical patterns
Two-loop architecture
Inner loop (one task, one agent run, may take minutes). Outer loop (event scheduler, runs forever, dispatches inner-loop work).
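The two loops can be sketched with a plain queue standing in for the scheduler; in production the outer loop would block on real event sources rather than drain a queue, and `inner_loop` would be a full agent run.

```python
import queue

def inner_loop(task):
    """One task, one agent run: plan, act, return a result.
    In practice this may take minutes and call the model many times."""
    return f"completed: {task}"

def outer_loop(events, max_steps=100):
    """Event scheduler: conceptually runs forever, dispatching one
    inner-loop run per event. Bounded here so the demo terminates."""
    results = []
    for _ in range(max_steps):
        try:
            task = events.get_nowait()
        except queue.Empty:
            break
        results.append(inner_loop(task))
    return results

events = queue.Queue()
events.put("summarise overnight alerts")
events.put("check calendar conflicts")
```

Separating the loops means the outer loop can restart, retry, or rate-limit inner runs without any of that logic leaking into the agent's reasoning.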
Belief–desire–intention (BDI)
Classical agent model that's very useful for proactive systems: separate facts (belief), goals (desire), and current plan (intention).
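A toy rendering of the BDI separation, assuming a simple "blocked" flag in beliefs as the feasibility check; real BDI deliberation is richer, but the three-way split is the point.

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)   # facts about the world
    desires: list = field(default_factory=list)   # goals, in priority order
    intention: str = None                         # the plan being executed now

    def deliberate(self):
        """Adopt the first desire not contradicted by current beliefs.
        Beliefs gate desires; the winner becomes the intention."""
        for goal in self.desires:
            if self.beliefs.get(f"{goal}_blocked") is not True:
                self.intention = goal
                return goal
        self.intention = None
        return None

agent = BDIAgent(beliefs={"report_blocked": True},
                 desires=["report", "inbox_triage"])
```

Keeping the three stores separate means new facts can invalidate an intention without touching the goal list, and goals can be reprioritised without corrupting the world model.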
Durable execution
Use a workflow engine (Temporal, Inngest, Restate) so multi-day tasks survive restarts and provider outages.
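The core idea those engines provide — resume after a crash instead of redoing or double-doing work — can be illustrated with a toy step-index checkpoint. Temporal and its peers do this far more robustly via event-sourced histories; this sketch only shows the principle.

```python
import json
import os
import tempfile

def run_workflow(steps, state_path):
    """Persist the index of the last completed step after each step, so a
    restarted process skips work that already finished. Toy illustration
    of durable execution, not a substitute for a workflow engine."""
    done = 0
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = json.load(f)["done"]
    results = []
    for i, step in enumerate(steps):
        if i < done:
            continue  # completed before the crash/restart — skip, don't redo
        results.append(step())
        with open(state_path, "w") as f:
            json.dump({"done": i + 1}, f)
    return results

state_file = os.path.join(tempfile.gettempdir(), "wf_demo_state.json")
if os.path.exists(state_file):
    os.remove(state_file)

first = run_workflow([lambda: "fetch", lambda: "summarise"], state_file)
second = run_workflow([lambda: "fetch", lambda: "summarise"], state_file)
```

On the second invocation every step is already checkpointed, so nothing re-runs — the property you want when a multi-day task survives a provider outage mid-flight.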
Notification budget
Cap how often the agent can interrupt the human. Forces it to batch and prioritise.
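A notification budget can be as simple as a per-window counter: over-budget messages queue, and the window boundary flushes them as one digest. The class and method names are illustrative.

```python
class NotificationBudget:
    """Cap interruptions per window; messages over budget queue for batching."""

    def __init__(self, max_per_window=3):
        self.max_per_window = max_per_window
        self.sent = 0
        self.queued = []

    def notify(self, message):
        # Under budget: interrupt the human now. Over budget: hold for the batch.
        if self.sent < self.max_per_window:
            self.sent += 1
            return ("sent", message)
        self.queued.append(message)
        return ("queued", message)

    def flush_batch(self):
        """At the window boundary: deliver one digest instead of N interruptions,
        then reset the budget for the next window."""
        batch, self.queued, self.sent = list(self.queued), [], 0
        return batch

budget = NotificationBudget(max_per_window=2)
```

The scarcity is the feature: a hard cap forces the agent to rank what is genuinely worth an interruption.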
Pitfalls to avoid
- Building proactivity without observability — runaway agents become very expensive very quickly.
- Storing the entire conversation history forever; design what to keep, not what to discard.
- Treating the LLM as the orchestrator; it should be the reasoning step inside an orchestrator you control.
- Skipping the abandon-goal logic. Agents that can't give up are agents that loop.
Key takeaways
1. Persistence is an architecture decision; bolt-ons don't scale.
2. Make the event loop explicit and observable.
3. Give the agent fewer, sharper goals — and the right to abandon them.
4. Cap the agent's interruption budget; respect the human's attention.