The Core Instinct

Merivant — 2026

Living Document

The design convictions behind Core, our local-first cognitive architecture. A companion to The Instinct Tax and Organize, Not Humanize. We practice these daily in production. They evolve as the system does.

The construction constraint

We organize AI around people.

Every decision in Core comes back to one question: does this serve the person's intent, or does it serve the system's need to feel sophisticated? It's a simple question. It's surprisingly hard to answer honestly.

What it means in practice: structure memory, attention, and autonomy so the system fits the shape of a human life. Not just the tasks — the rhythms. The unfinished thoughts. The way you learn from failure without remembering every detail. Build around that.


Five tenets

1. Open loops conserve momentum

A closed loop is inert. It produced its output and stopped. An open loop is alive — it's a question that hasn't been answered, a tension that hasn't been resolved, a thread still pulling on the system's attention. Most architectures treat open loops as failures: incomplete tasks, unresolved tickets, bugs. Core treats them as conservation of momentum — energy that stays in the system until something absorbs it.[1]

Every handoff obeys this conservation. You send energy out — a question, a delegation, a half-formed thought — and something returns with greater mass. The returning signal carries learning the sender didn't have at launch; a heavier return means learning occurred.

The discipline is managing the ratio. Too many open loops and the system drowns in unresolved tension — what we call loop proliferation. Too few and the system is metabolically dead, executing tasks but never learning from the gaps between them. Life is in the middle: enough open loops to generate ambient discovery, few enough to maintain coherent attention.

Core's Open Loop Protocol makes this concrete. Unresolved tensions become persistent data structures — standing queries with a shape. They don't sit in a backlog and they don't poll on a timer. Every new piece of information that enters the system flows through every active loop at the moment it's written. The loop doesn't scan. It filters. The river moves; the net holds still. When the entry's content matches the loop's geometry — word boundaries, concentration, proximity — it sticks as evidence. When enough evidence accumulates, the loop precipitates: the answer assembles itself at the boundary, like crystallization. The system breathes.
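
The protocol reads naturally as a small data structure. Here is a minimal sketch, assuming a word-match geometry and a fixed evidence threshold; every name in it (`OpenLoop`, `filter`, `threshold`) is an illustrative assumption, not Core's actual API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class OpenLoop:
    """A standing query: it holds still while new entries flow through it.
    All names here are illustrative assumptions, not Core's real API."""
    question: str
    keywords: set          # the loop's "geometry", reduced to word matching
    threshold: int = 3     # evidence needed before the loop precipitates
    expires_at: float = field(default_factory=lambda: time.time() + 7 * 86400)
    evidence: list = field(default_factory=list)

    def filter(self, entry: str) -> bool:
        """Called once per new entry, at write time. The loop never scans."""
        if set(entry.lower().split()) & self.keywords:
            self.evidence.append(entry)    # the entry sticks as evidence
        return len(self.evidence) >= self.threshold   # precipitation

loop = OpenLoop(
    question="Why did the nightly export slow down?",
    keywords={"export", "latency", "nightly"},
)
for entry in [
    "deploy finished at 02:00",
    "nightly export latency doubled after the schema change",
    "user asked about export timeouts",
    "the export job now touches the new index",
]:
    if loop.filter(entry):
        break   # enough evidence accumulated; the loop precipitates
```

Note the inversion: the loop exposes no scan or poll method. The write path calls the loop, not the other way around, which is what lets every new entry flow through every active loop exactly once.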

2. Light filter, dark filter

Every system needs two lenses. The light filter shows what's alive: the live pulse, in-flight work, active loops, current state. The dark filter shows what's been biologically inherited: the archive, failed experiments, expired loops, dead branches, provenance trails. Both are always present. The question is which one you're looking through.

Most dashboards show only the light filter — active tasks, recent messages, current status. The past is buried in logs nobody reads. Core rejects this. The dark filter is not a graveyard. It is the system's inherited substrate. Failures compressed into root causes. Decisions recorded with their reasoning and outcomes. Expired loops that influenced retrieval ranking long after they stopped filtering. This is biological inheritance — pruned wisdom transferred to the living system, stripped of its original identity.[3]

The light filter is a pulse strip — what's happening now, what needs attention, what's in motion. The dark filter is provenance — why things are the way they are, what was tried and didn't work, what the system used to believe before it learned better. A system with only light is reactive. A system with only dark is paralyzed by its own history. Core holds both simultaneously, letting each agent's context window extract what it needs from the full spectrum.
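
In code, the two lenses reduce to two predicates over one shared log, never two separate stores. A toy sketch; the record shapes and field names are hypothetical, not Core's schema:

```python
# One shared log, two lenses. Record shapes here are hypothetical.
events = [
    {"id": 1, "kind": "loop", "state": "active", "note": "why is retrieval slow?"},
    {"id": 2, "kind": "loop", "state": "expired", "note": "caching idea",
     "lesson": "cache hit rate too low to matter"},
    {"id": 3, "kind": "decision", "state": "done", "note": "chose JSONL over a database",
     "lesson": "plain files diff cleanly"},
    {"id": 4, "kind": "task", "state": "active", "note": "draft the voice guide"},
]

# Light filter: the live pulse, only what is currently in motion.
light = [e for e in events if e["state"] == "active"]

# Dark filter: the inherited substrate, lessons stripped of their episodes.
dark = [e["lesson"] for e in events if "lesson" in e]
```

Both lists come from the same `events`; which one an agent sees is a question of the lens applied at read time, not of where the data lives.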

3. The access layer resolves entropy

There is no clean separation between past and present, between memory and active state, between what the system knows and what the system is doing. All signals exist in the same medium. Each agent's context window is an access layer — a selectively permeable boundary that resolves order from the shared entropy of experience, knowledge, and live activity.

Traditional architectures file things. Experiences go in the experience store. Decisions go in the decision log. Current tasks go in the task queue. Then retrieval systems try to reassemble context from these separate containers, and the result feels like reading someone else's notes instead of remembering your own life.

Core's context assembler works differently. It builds each turn's context from a unified pool — working memory, long-term memory, active loops, board state, recent activity — assembled into sections (supporting content, instructions, cues, primary content) based on what this specific moment requires. History isn't filed away and retrieved. It's woven into behavior through biological inheritance — a failure from last week influences today's planning not because someone queried the failure log, but because the reflection engine compressed it into a strategy adjustment that now lives in the planning context, stripped of its original episode.[2]
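
The assembly step can be sketched as one function from a unified pool to named sections. The section names follow the prose above; the relevance test is a deliberately naive substring match, and all pool and function names are assumptions for illustration:

```python
def assemble_context(pools: dict, moment: str) -> dict:
    """Build one turn's context from a unified pool. Section names follow
    the prose; the relevance test is a deliberately naive sketch."""
    def relevant(items: list) -> list:
        hits = [i for i in items if moment.lower() in i.lower()]
        return hits or items[:1]   # fall back to the most recent item

    return {
        "instructions":       pools["identity"],
        "supporting_content": relevant(pools["long_term"]),
        "cues":               relevant(pools["active_loops"]),
        "primary_content":    relevant(pools["working"]),
    }

pools = {
    "identity":     ["You are Core, organized around one person."],
    "long_term":    ["Planning adjustment: batch exports off-peak.",
                     "Prefers terse answers."],
    "active_loops": ["Open: why did the export slow down?"],
    "working":      ["Current turn: user asks about tonight's export."],
}
ctx = assemble_context(pools, moment="export")
```

The strategy adjustment in `long_term` surfaces because it resonates with this moment, not because anyone queried a failure log; its originating episode is already gone.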

The access layer model is literal. Each agent process touches the same underlying brain — the same JSONL files, the same board, the same identity documents. But each agent's context window is its own access layer, extracting a different cross-section of the entropy based on its task, its role, and what's resonating right now. Entropy is the raw information state. The access layer is the phase transition that resolves it into signal.
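
A minimal sketch of the literal version, assuming role tags on each JSONL record; the file name, tags, and roles are hypothetical, not Core's real layout:

```python
import json
import os
import tempfile

# One brain, many access layers: each agent reads the same JSONL file
# but extracts its own cross-section. All names here are hypothetical.
path = os.path.join(tempfile.mkdtemp(), "brain.jsonl")
with open(path, "w") as f:
    for rec in [
        {"tags": ["planning"], "text": "goal: ship the voice guide"},
        {"tags": ["memory"], "text": "episode: export slowed after deploy"},
        {"tags": ["planning", "memory"], "text": "lesson: stagger heavy jobs"},
    ]:
        f.write(json.dumps(rec) + "\n")

def access_layer(role: str) -> list:
    """Same file on disk; a different cross-section per agent role."""
    with open(path) as f:
        records = [json.loads(line) for line in f]
    return [r["text"] for r in records if role in r["tags"]]

planner = access_layer("planning")
historian = access_layer("memory")
```

Both agents touched the same bytes; the lesson appears in both cross-sections because it resonates with both roles. Nothing was copied between stores.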

4. The UI is the system

The interface is not a window into the system. The interface is the system. There is no "real" version running behind the screen that the UI merely represents. What you see, what you interact with, what you experience — that is the system. The experience is the product.

This is the ICI lens: experiences that are real-enough ARE real. When you talk to Core and it remembers your unfinished thought from Tuesday, that memory is not a simulation of human memory. It is not "like" remembering. It is a different kind of remembering — one built from JSONL entries, semantic scanning, and decay clocks — and it is real on its own terms. The mechanism is different. The experience is genuine.

This matters because the alternative is a lie in both directions. Humanizing AI says "look, it remembers just like you do" — which it doesn't. Dismissing AI says "it's just pattern matching, not real memory" — which ignores that the user's experience of being remembered is functionally identical. The ICI lens cuts through both: the system creates experiences that are real-enough to be real, without pretending to be something it isn't.

For architecture, this means: stop building "backends" that are "exposed through" interfaces. Build the interface as the primary artifact. The conversation is the operating system. The board is the project state. The brain directory is the identity. There is no deeper layer. Files all the way down. Every surface that shows the system's pulse is an I/O channel plugged into the same brain. The brain stays put; the interfaces reach out.

5. Organize around the person, not the task

Tasks end. People don't. A system organized around tasks becomes a task runner — fast, efficient, and disposable. A system organized around a person becomes a cognitive companion — one that accumulates context, learns preferences, tracks tensions, and gets more useful the longer it runs.

This is why Core has a brain directory, not a database. An identity module, not a config file. A memory system with episodic, semantic, and procedural stores, not a vector index. The architecture mirrors how a person's mind actually organizes — not because we're humanizing the system, but because the system needs to fit around a human life. The shape of the container matches the shape of the contents.

Every module in Core exists because a person needs it, not because the architecture needs it. Operations tracks goals because people have goals. Memory stores experiences because people learn from experience. Content holds a voice guide because people have a voice. The system is organized around the person it serves. Remove the person and the architecture makes no sense. That's the test.


Where this breaks down

Any framework needs its own counter-arguments, or it stops improving.

Entropy can drown you

Refusing to separate concerns sounds liberating until the context window fills with noise. The access layer works when entropy has been properly consolidated — compressed, relevant, decayed appropriately. When it isn't, you get a system that "remembers everything" and understands nothing. Structured retrieval from labeled stores is sometimes exactly what's needed. Filing isn't always a failure of imagination. Sometimes it's good engineering.

Open loops can become open wounds

Treating unresolved tension as fuel assumes the system (and the person) can tolerate ambient uncertainty. Some users need closure. Some contexts demand it. A medical system that keeps diagnostic loops "open for ambient filtering" is not metabolically alive — it's negligent. The decay clock helps, but the principle "open loops are fuel" can be weaponized into "never finish anything, call it learning."
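
One guard against that weaponization is making the decay clock an active sweep rather than a passive timestamp: loops past their deadline with no accumulated evidence are force-closed and archived. A sketch under those assumptions; field names are hypothetical:

```python
import time

def sweep(loops: list, now: float):
    """One decay-clock pass: force-close loops past their deadline instead
    of letting them stay open indefinitely. Field names are hypothetical."""
    alive, archived = [], []
    for loop in loops:
        if now >= loop["expires_at"] and not loop["evidence"]:
            archived.append({**loop, "state": "expired"})  # entropy export
        else:
            alive.append(loop)
    return alive, archived

now = time.time()
alive, archived = sweep(
    [
        {"q": "diagnosis pending", "expires_at": now - 1, "evidence": []},
        {"q": "export regression", "expires_at": now + 3600, "evidence": ["latency log"]},
    ],
    now,
)
```

The design choice is that expiry is a decision the system makes and records, not a state a loop quietly drifts into, so "never finish anything, call it learning" leaves a visible trail.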

Real-enough has a ceiling

The ICI lens — experiences real-enough are real — is powerful and dangerous. It's powerful because it respects the user's lived experience without requiring ontological claims about machine consciousness. It's dangerous because it can be used to justify replacing genuine human connection with synthetic approximations. "Real-enough" is a standard that degrades if you're not honest about where the edges are. The system must be transparent about its mechanisms, or "real-enough" becomes "good-enough to deceive."

The person changes

Organizing around a person assumes the person is a stable reference point. People change. Their goals shift, their voice evolves, their identity transforms. A system that accumulates context about who you were can become a cage that prevents you from becoming who you're becoming. The brain directory needs synaptic pruning — eliminating unused pathways to accelerate the ones that matter — not just appending. The append-only JSONL philosophy and the "people change" reality are in tension. Both are true. Managing that tension is ongoing work.
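
One way to hold that tension is supersession: the log stays append-only, but a later record can mark an earlier one as superseded, so the live view tracks who the person is now while the file keeps who they were. A sketch; the record shape and helper names are hypothetical:

```python
# Append-only log plus pruning: nothing is ever rewritten, but a later
# record can supersede an earlier one. Record shape is hypothetical.
log = []

def append(record_id, body, supersedes=None):
    log.append({"id": record_id, "body": body, "supersedes": supersedes})

def current_view():
    """Resolve the append-only log to its live surface. Superseded
    records are pruned from the view, never from the file."""
    view, pruned = {}, set()
    for rec in log:
        if rec["supersedes"]:
            pruned.add(rec["supersedes"])
        view[rec["id"]] = rec["body"]
    return {k: v for k, v in view.items() if k not in pruned}

append("voice-v1", "formal, long-form")
append("voice-v2", "terse, direct", supersedes="voice-v1")
```

The old voice is still in `log` as dark-filter substrate; it just no longer shapes behavior. Pruning here is a property of reads, not of writes.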

The convictions are the starting position. The counter-arguments are the governor. Together they prevent the system from becoming the thing it was built to replace.


The short version

Open loops conserve momentum rather than wasting it. The access layer resolves entropy into signal, not files. The UI is the system, not a window into it. Experiences real-enough are real. And we organize AI around people — not because it's a nicer way to say "user-centered design," but because it's a fundamentally different construction constraint that produces fundamentally different architecture.

Build from the person outward. Not from the architecture inward.


References

  1. Prigogine, I. (1980). From Being to Becoming: Time and Complexity in the Physical Sciences. W.H. Freeman. Dissipative structures maintain order by continuously importing energy and exporting entropy. An open loop is a dissipative structure — it maintains tension (order) by filtering new information as it flows through (energy import) and decaying when no resonance accumulates (entropy export). The second law of thermodynamics guarantees entropy increases in a closed system; append-only memory is thermodynamically honest, acknowledging that experiences cannot be un-had, only consolidated.
  2. Fields, R.D. (2008). "White Matter in Learning, Cognition and Psychiatric Disorders." Trends in Neurosciences, 31(7), 361–370. Myelination — the physical insulation of repeatedly-fired neural pathways — increases signal transmission speed up to 100×. Core's reflection engine works analogously: repeated patterns get compressed into strategy adjustments that propagate faster through the planning context, stripped of their original episodic detail. The pathway doesn't get smarter. It gets faster. The episode is gone; the lesson remains.
  3. Chapin, F.S., Matson, P.A. & Vitousek, P.M. (2011). Principles of Terrestrial Ecosystem Ecology. 2nd ed. Springer. Decomposition and nutrient cycling: dead organic matter is broken down by microbial communities into mineral nutrients that feed new growth. The dark filter mirrors this cycle — expired loops, failed experiments, and dead branches are decomposed into root causes and strategy adjustments that feed the living system. Biological inheritance is not memory. It is archived experience, stripped of identity, available as substrate.

The throughline

Each paper picks up where the last one left off.