What If Your AI Agent Could Dream?
That's not a metaphor. Anthropic just launched a feature called "dreaming" in Claude Managed Agents, and it's one of the more significant AI developments I've seen in a while.
Here's what it actually does.
While your agent isn't working, it reviews its past sessions, finds patterns, surfaces recurring mistakes, and refines its own memory. Automatically. Between sessions. No human in the loop required, unless you want one.
We've spent the last few years building AI systems that are only as good as their last prompt. Every new session, a fresh start. Dreaming changes that model fundamentally. Agents can now get better at their jobs over time, the same way people do.
Memory and dreaming work as a pair. Memory captures what an agent learns while it's working. Dreaming refines that memory between sessions, pulling shared learnings across agents, restructuring what's stored, keeping it high-signal as it grows. Together, they form a real feedback loop. Not a marketing claim. An actual architecture for self-improvement.
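To make that loop concrete, here's a minimal sketch of the pattern in Python. To be clear: none of these names (AgentMemory, dream, _compact) come from Anthropic's API. They're invented to illustrate the shape of a capture-then-consolidate cycle, with simple deduplication standing in for whatever consolidation the real feature runs.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    notes: list[str] = field(default_factory=list)

    def record(self, lesson: str) -> None:
        # Memory: capture what the agent learns while it's working.
        self.notes.append(lesson)


def dream(memory: AgentMemory, session_logs: list[str]) -> None:
    # Dreaming: between sessions, review past transcripts, promote
    # recurring observations into memory, and compact what's stored
    # so it stays high-signal as it grows.
    counts = Counter(session_logs)
    recurring = [obs for obs, n in counts.items() if n > 1]
    memory.notes = _compact(memory.notes + recurring)


def _compact(notes: list[str]) -> list[str]:
    # Keep the first occurrence of each lesson. A real system would
    # summarize and restructure, not just deduplicate.
    seen: set[str] = set()
    return [n for n in notes if not (n in seen or seen.add(n))]


memory = AgentMemory()
memory.record("Flatten form fields before redlining PDFs")  # during a session
dream(memory, [  # between sessions
    "OCR fails on scanned exhibits",
    "OCR fails on scanned exhibits",
])
print(memory.notes)  # the recorded lesson plus the recurring pattern
```

The real version presumably does far more, like pulling learnings across agents and restructuring storage, but the feedback shape is the same: record during work, consolidate between sessions.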
The results are hard to ignore. Harvey, the legal AI platform, is using Managed Agents for complex drafting and document work. With dreaming, their agents remember filetype workarounds and tool-specific patterns across sessions. Completion rates went up roughly 6x in their tests. Legal work. Complex, high-stakes, detail-intensive legal work.
That's not a chatbot story. It's an infrastructure story.
Dreaming isn't the only thing in this release. Anthropic also launched "outcomes," a rubric-based system where agents check their own work against explicit criteria before returning results, and multiagent orchestration, which lets a lead agent delegate pieces of a complex job to specialist agents running in parallel. Both are worth your attention.
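The outcomes idea is easy to picture. Here's a hedged sketch of a rubric-based self-check, again not the product's interface: the rubric items and the passes_rubric, grade, and revise hooks are all assumptions standing in for model-driven grading and revision.

```python
# Hypothetical rubric for a drafting task; real rubrics would be
# task-specific and graded by a model, not a simple callable.
RUBRIC = [
    "All requested sections are present",
    "Every citation resolves to a real source",
    "No unresolved placeholders remain",
]


def passes_rubric(draft: str, grade) -> bool:
    # The agent scores its own draft against each criterion before
    # returning it; grade() would itself be a model call in practice.
    return all(grade(draft, criterion) for criterion in RUBRIC)


def finish(draft: str, grade, revise, max_rounds: int = 3) -> str:
    # Only return work that clears the rubric; otherwise revise and
    # retry, escalating to a human after max_rounds failed attempts.
    for _ in range(max_rounds):
        if passes_rubric(draft, grade):
            return draft
        draft = revise(draft)
    raise RuntimeError("Rubric not satisfied; escalate to a human reviewer")
```

The point isn't the loop itself. It's that the check runs before a human ever sees the result.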
But dreaming is the one that changes the long-term trajectory. Every session your agent runs becomes an input to the next one. The longer it runs, the better it gets.
We talk a lot about AI adoption. The real bottleneck isn't capability. It's trust: specifically, trust in systems whose outputs aren't deterministic. Can the agent do the work reliably? Can it flag when something isn't right? Can it improve without constant human steering?
Dreaming is a serious step toward answering yes.
Read the full Anthropic announcement here: New in Claude Managed Agents: dreaming, outcomes, and multiagent orchestration

