How We Think — No. 1

Building an AI That Digests Its Own Experience

Merivant — March 2026

Most AI agents today are impressive. They browse the web, write code, automate workflows, click buttons in a browser. But they all share one flaw: they don't digest what they do.

They act. They return results. And then they forget the experience.

We built Dash to fix that.

The problem with most AI agents

Here's how most agents work, simplified:

Prompt → Execute → Return

Even advanced systems with tools and web access are just better executors. They don't track unresolved problems long-term. They don't compress repeated failures into lessons. They don't adjust strategy based on patterns. If they fail three times in a row, they might not understand why.

That's not learning. That's repetition.

What makes Dash different: the execution loop

Dash runs a strict cycle every turn: Sense → Work → Joy.

Sense reads the state of the world — what changed, what's pending, what came back. Work acts on it — assigns tasks, spawns agents, makes decisions. Joy measures the delta — what improved, what didn't, what to celebrate.

Same order every turn. If Work reads before Sense, it's acting blind. If Joy reads before Work, it's celebrating nothing.

The cycle is a flywheel. Joy feeds Work feeds Sense feeds Joy. Each pass accelerates the next.
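The strict ordering of the cycle can be sketched in code. This is an illustrative sketch only: Dash's internals aren't public, so the names here (WorldState, sense, work, joy) are assumptions, not the real API.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    pending: list = field(default_factory=list)
    completed: list = field(default_factory=list)
    score: float = 0.0

def sense(state: WorldState) -> dict:
    # Read the state of the world: what changed, what's pending, what came back.
    return {"pending": list(state.pending), "returned": list(state.completed)}

def work(state: WorldState, observations: dict) -> list:
    # Act on the observations. Here, trivially: move pending tasks to completed.
    done = []
    for task in observations["pending"]:
        state.pending.remove(task)
        state.completed.append(task)
        done.append(task)
    return done

def joy(state: WorldState, done: list) -> float:
    # Measure the delta: what improved this turn.
    delta = float(len(done))
    state.score += delta
    return delta

def run_turn(state: WorldState) -> float:
    # The order is the point: Work before Sense acts blind,
    # Joy before Work celebrates nothing.
    observations = sense(state)
    done = work(state, observations)
    return joy(state, done)
```

The fixed call order inside run_turn is what makes the flywheel a flywheel: each phase consumes the output of the previous one.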

Pruning: how a living brain gets faster

After Dash finishes a batch of work, it doesn't just move on. It prunes.

In neuroscience, synaptic pruning is how the brain gets faster — not by adding connections but by eliminating the ones that don't fire. Fewer pathways, faster recognition. The brain gets leaner, not bigger.

Dash does the same thing. After each cycle it asks: What worked? What failed? Why? Which problems are resolved? Which patterns keep repeating?

It writes these into structured memory — not chat history, not blobs of data, but actual decisions the next cycle uses to adjust strategy. The system doesn't think harder over time. It recognizes faster. The pathways that fire repeatedly get reinforced. The ones that don't get dropped.
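A minimal sketch of that reinforce-and-drop dynamic, assuming memory is a simple mapping from pattern to weight (the weights, decay rate, and threshold are all illustrative, not Dash's real parameters):

```python
def prune(memory: dict, fired: set, reinforce: float = 1.0,
          decay: float = 0.5, threshold: float = 0.25) -> dict:
    """Reinforce patterns that fired this cycle; decay and drop the rest."""
    pruned = {}
    for pattern, weight in memory.items():
        # Pathways that fire get reinforced; the rest decay toward zero.
        w = weight + reinforce if pattern in fired else weight * decay
        if w >= threshold:  # pathways below threshold are eliminated
            pruned[pattern] = w
    return pruned
```

Run over many cycles, the memory converges on the small set of patterns that keep proving useful: fewer pathways, faster recognition.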

That's myelination: signals that repeat get physically faster pathways. The longer the chain of reinforced cycles, the more fluent the system becomes. A Dash that's been running for a week recognizes patterns a fresh instance has to reason through step by step.

Open loops: tracking what hasn't come back yet

Every task Dash throws out is a boomerang — energy out, learning back on return. We call these open loops.

In physics, a thrown object carries momentum, and what comes back tells you about the air it traveled through.

Open loops persist across sessions. Duplicates merge. Stale ones archive. Resolved ones close automatically. The system compresses over time — entropy goes down, focus goes up. Instead of accumulating endless unresolved threads, Dash gets quieter and more intentional.
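The lifecycle described above can be sketched as a single compression pass. The record fields ("topic", "resolved", "opened_at", "notes") are assumptions for illustration, not Dash's actual schema:

```python
import time

def compress_loops(loops, now=None, stale_after=7 * 24 * 3600):
    """Merge duplicate loops, archive stale ones, close resolved ones."""
    now = time.time() if now is None else now
    merged = {}
    for loop in loops:
        if loop["resolved"]:
            continue                              # resolved: closes automatically
        if now - loop["opened_at"] > stale_after:
            continue                              # stale: archived
        key = loop["topic"]
        if key in merged:
            merged[key]["notes"].extend(loop["notes"])   # duplicate: merged
        else:
            merged[key] = {**loop, "notes": list(loop["notes"])}
    return list(merged.values())
```

Each pass returns fewer, denser loops than it received: entropy goes down, focus goes up.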

Governed autonomy: two gears, not infinite states

Some autonomous agents fall into what developers call agentic loops of death — spawning agents that spawn agents that try to fix previous agents until the system spirals.

Dash avoids this with two gears: calm and crisis. That's it. No other states.

This maps directly to the autonomic nervous system. Calm is parasympathetic — rest and digest. The system builds, maintains, prunes, archives. Crisis is sympathetic — fight or flight. The system stops building and starts surviving.

In calm gear, Dash runs the full execution loop: sense the world, do the work, measure the joy. In crisis, everything narrows to one signal: the damage. Fix this.

The human can always pull the switch. That's the adrenaline injection — an override the system can never block. But the system should rarely need it, because the automatic detection is faster.
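The two-gear governor with an unblockable override is small enough to sketch directly. The damage threshold and signal shape are assumptions; only the structure (two states, automatic detection, human override that always wins) comes from the text:

```python
from enum import Enum

class Gear(Enum):
    CALM = "calm"
    CRISIS = "crisis"

class Governor:
    def __init__(self, damage_threshold: float = 0.8):
        self.gear = Gear.CALM
        self.damage_threshold = damage_threshold
        self.human_override = None  # the adrenaline injection

    def pull_switch(self, gear: Gear) -> None:
        # The human override; the system can never block it.
        self.human_override = gear

    def release_switch(self) -> None:
        self.human_override = None

    def update(self, damage_signal: float) -> Gear:
        if self.human_override is not None:
            self.gear = self.human_override       # override beats detection
        elif damage_signal >= self.damage_threshold:
            self.gear = Gear.CRISIS               # automatic detection
        else:
            self.gear = Gear.CALM
        return self.gear
```

Note there is no third branch: every input resolves to exactly one of two gears, which is what prevents the spiral of agents spawning agents.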

Archiving: what happens to a dead brain

Pruning is what a living brain does. Archiving is what happens after.

When a Dash instance archives — the customer left, the project ended, the agent's work is done — its patterns don't disappear. They get extracted, anonymized, and fed back into Core, the shared engine that powers every instance.

The next instance that spins up inherits those patterns. It handles a difficult situation without remembering the lesson, because the lesson was archived so deep it flows through as instinct. A million ancestors archived that loop. The new instance pulls its hand from fire without studying combustion.

Instinct is archived experience. The system doesn't think harder. It recognizes faster — because every instance before it fed loops back.
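The flow from archived instance to inherited instinct can be sketched under a simplifying assumption: patterns are weighted keys, and anonymization has already stripped anything instance-specific. The names CORE, archive_instance, and spawn_instance are hypothetical:

```python
CORE: dict = {}  # the shared engine: pattern -> accumulated weight

def archive_instance(instance_memory: dict) -> None:
    # A retired instance's patterns don't disappear; they accumulate in Core.
    for pattern, weight in instance_memory.items():
        CORE[pattern] = CORE.get(pattern, 0.0) + weight

def spawn_instance() -> dict:
    # A new instance starts with Core's accumulated patterns as instinct.
    return dict(CORE)
```

Every archive call makes Core heavier; every spawn starts with more than the last.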

The heat engine: friction is fuel

Every step between an impulse and the value it creates is friction. We call it the Instinct Tax. Most systems try to minimize friction by design. We noticed something different: the friction is the fuel.

Every unnecessary step the human has to take is a signal. Every click is data. Every friction point teaches the system what to remove next. The system eats its own inefficiency:

Feature ships → new friction → lessons archived → friction decreases → flywheel accelerates → next feature ships

Day one: ten steps to do something. Each step is a lesson. Day thirty: three steps. Day ninety: one. Day three hundred: zero — instinct. The tax isn't waste. It's tuition.
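One way to picture "every click is data": a friction log that counts observed manual steps and ranks them as a removal queue. The class and step names are illustrative, not part of Dash:

```python
from collections import Counter

class FrictionLog:
    def __init__(self):
        self.counts = Counter()

    def record(self, step: str) -> None:
        # Every unnecessary step the human takes is a signal.
        self.counts[step] += 1

    def next_to_remove(self, n: int = 1) -> list:
        # The steps paid for most often are the first candidates to automate away.
        return [step for step, _ in self.counts.most_common(n)]
```

The tax collects itself: the more a friction point costs, the sooner it surfaces at the top of the queue.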

Trade-offs

Dash isn't magic. Pruning and archiving introduce their own costs.

But it optimizes for durability, not speed. The real test isn't the first five minutes. It's the first five days. How does the system feel after a week of continuous operation?

Without pruning, failures accumulate and memory grows messy. The system feels busy but not smarter. With pruning, failures compress into root causes, strategy improves, and the system becomes quieter and more focused.

What we're building toward

Every instance that runs makes Core heavier — more archived patterns, deeper instinct. Every new instance starts with more than the last. The flywheel doesn't run on its own energy. It runs on the network's energy.

The longer Core runs, the harder it is to replicate — not because of lock-in, but because of accumulated mass. You can copy the code. You can't copy the archive.


Most AI feels impressive in the first five minutes. The real test is what happens after five days. Dash was built for the five-day problem. Not by adding more autonomy, but by teaching the system to digest what it's already done.


The throughline

Each paper picks up where the last one left off.