Interface Endpoints: The Public API Is a Feeling

Merivant — 2026

Practitioner Observation

On designing public interfaces as sensory experiences, not information displays. Builds on The Instinct Tax, The Core Instinct, and Organize, Not Humanize.

Reading the room

A good CEO walks into a building and knows in two seconds whether things are going well. Nobody hands her a report. Nobody pulls up a chart. She reads the room. The tempo of conversations, the posture of the people, the energy in the hallway. Decades of pattern recognition compressed into a feeling that arrives before any thought does.

A parent wakes up at 3 a.m. and knows something is wrong because the house is too quiet. Not because a sensor triggered. Not because an alert fired. Because the absence of the usual small noises is itself a signal, and their nervous system has been calibrated to that frequency for years.

This is how humans actually process complex systems. Not metric by metric. Not dashboard by dashboard. You read the aggregate state, and the aggregate state arrives as a feeling. Calm, tension, something's off, everything's fine. The feeling comes first. The details come when you ask for them.

The interface between a complex system and a human should be a feeling, not a dashboard.

Put operationally: the default view is aggregate state. Details appear only on deliberate request. The system's job is to compress, not to display.

We've been building the opposite. And it's worth asking why.


The dashboard trap

When a system gets more complex, the instinct is to show more. More metrics, more graphs, more real-time feeds, more tabs. The logic feels sound: the system is doing more things, so you need to see more things. But there's a cost to every piece of information you put in front of a person, and that cost compounds in ways that are easy to miss.

Every widget on a dashboard is a question the human has to answer: Is this number good or bad? Is this trend concerning? Does this chart need my attention right now? Each question costs a small amount of cognitive effort. One question is nothing. Forty questions, refreshing every few seconds, is a second job. The system was supposed to help you work. Now you're working for the system.[1]

This is the instinct tax at scale. Every interaction that requires conscious evaluation instead of instinctive recognition is a tax on the person's attention. A dashboard full of numbers forces conscious evaluation on every single element. The human becomes a monitoring service for their own tools.

The common response is better dashboards. Cleaner layouts, smarter defaults, configurable widgets. But the problem isn't the design of the dashboard. The problem is the premise. The premise that the right way to connect a human to a complex system is to show them the system's internals, just prettier. What if the system let answers precipitate and delivered them as a feeling instead of a report?

What if the right interface isn't a window into the system at all?


Three dots

Sense. Work. Joy. Three signals, three dots. Blue means calm. Green means moving. Amber means something needs your attention. That's the entire public interface.

One agent running in the background, or a thousand agents coordinating across a network. Same three dots. The number of agents, the complexity of their tasks, the volume of data they're processing: none of that changes what you see. You see three dots, and you read the room.

The dots are what we call the shared signal layer: aggregate state with no individual identities. You don't see "Agent 47 is processing your tax documents." You don't see "Memory consolidation is at 73%." You see blue, green, amber. The specifics are dissolved into the shared signal layer, the way individual conversations dissolve into the general energy of a room when you walk in.

This is a deliberate inversion of how most systems present themselves — and a direct application of what Weiser and Brown called "calm technology," systems that inform without demanding attention.[2] Most systems want you to know how hard they're working. They want credit for every process, every optimization, every background task. Three dots refuse that. The system's effort is not the user's concern. The user's experience is the user's concern.

Three signals is not a simplification. It's a compression. The same way a face compresses thousands of muscular micro-movements into an expression you read instantly, three dots compress the full state of a complex system into a feeling you absorb without thinking. The information isn't lost. It's translated into a form your perceptual system already knows how to process.
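The compression can be sketched in a few lines. Everything below is illustrative — the event shape (`channel`, `active`, `problem`) is an invented schema, not the system's actual one; the point is only that identities go in and aggregate state comes out:

```python
from enum import Enum

class Color(Enum):
    BLUE = "calm"       # nothing needs you
    GREEN = "moving"    # work in progress, on track
    AMBER = "attend"    # something wants attention

def compress(events):
    """Collapse any number of agent events into three aggregate signals.

    Each event is a dict: {"channel": "sense" | "work" | "joy",
    "active": bool, "problem": bool}. Identities are dropped on
    purpose -- only aggregate state crosses the boundary.
    """
    signals = {}
    for channel in ("sense", "work", "joy"):
        mine = [e for e in events if e["channel"] == channel]
        if any(e["problem"] for e in mine):
            signals[channel] = Color.AMBER
        elif any(e["active"] for e in mine):
            signals[channel] = Color.GREEN
        else:
            signals[channel] = Color.BLUE
    return signals
```

A thousand events or one: the return value is always three colors, and "Agent 47" never survives the trip.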


Glance, drill down, done

The primary experience is the glance. You look at three dots. They're blue. You keep working. That interaction took less than a second, and it told you everything you needed to know: things are calm, nothing needs you, carry on.

Most of the time, that's the whole experience. And that's success.

When a dot turns amber, you have a choice. You can glance and note it, knowing something wants attention but deciding to finish what you're doing first. Or you can tap. The tap opens a detail layer: what's underneath the amber, which threads are pulling for attention, what the system recommends. This is the drill-down. It's still not a dashboard. It's a focused explanation of one signal.

A concrete example: you tap Work (amber) and see "3 agents running — 1 failed: invoice reconciliation timed out after 4 retries. Recommended: restart with smaller batch." That's not a log dump. It's one sentence explaining why the dot changed color and one sentence telling you what to do about it. The drill-down earns the glance's trust. Never ship three dots without a detail layer that can explain every amber.

If you want everything, you go deeper. The full experience shows the working state of the system, the active threads, the recent history. This is where the complexity lives, and it's available whenever you want it. But you have to choose to go there. The system never pushes it on you.[3]

Three layers: feeling, detail, system. Feeling is the glance. Detail is the drill-down. System is the full picture. Most people live in the first layer most of the time, visit the second layer occasionally, and rarely touch the third. That distribution is the whole point. The architecture is designed so that the lightest interaction carries the most value.[4]
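The three layers can be sketched as three views over one underlying state. The task records and their field names (`why`, `fix`) are invented for illustration; what matters is that each deeper layer is reached only by explicit request:

```python
# Hypothetical task records; the fields are illustrative, not a real schema.
TASKS = [
    {"channel": "work", "name": "invoice reconciliation",
     "failed": True, "why": "timed out after 4 retries.",
     "fix": "restart with smaller batch."},
    {"channel": "work", "name": "inbox triage", "failed": False},
    {"channel": "sense", "name": "calendar scan", "failed": False},
]

def glance(tasks):
    """Layer 1 (feeling): one color per channel, nothing else."""
    return {
        ch: "amber" if any(t["failed"] for t in tasks if t["channel"] == ch)
        else "green" if any(t["channel"] == ch for t in tasks)
        else "blue"
        for ch in ("sense", "work", "joy")
    }

def drill_down(tasks, channel):
    """Layer 2 (detail): one sentence of why, one of what to do."""
    failed = [t for t in tasks if t["channel"] == channel and t["failed"]]
    if not failed:
        return f"{channel} is fine; nothing needs you."
    t = failed[0]
    return f"{t['name']} {t['why']} Recommended: {t['fix']}"

def system_view(tasks):
    """Layer 3 (system): everything, only on explicit request."""
    return tasks
```

The default call is `glance`; `drill_down` answers one amber; `system_view` exists but nothing ever pushes it at you.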

Compare this to a traditional dashboard, where the full system view is the default. You land on everything, and your job is to filter it down to what matters. The cognitive work flows in the wrong direction. You're doing the system's job for it.


Interface endpoints, not products

A Chrome extension isn't a product. It's an interface endpoint. A phone app isn't a product. It's an interface endpoint. A watch face isn't a product. It's an interface endpoint.

The brain is the only product. Every surface that shows you dots is the same three signals, rendered for a different form factor. The extension shows dots in your browser toolbar. The phone shows dots in a notification shade. The watch shows a single blended dot, because the screen is too small for three. A voice assistant speaks a sentence: "Things are calm." Same shared signal layer, different shape.

We call these shapes the I/O channel vocabulary: five channel types, each a different route to the senses.

A laptop bundles all five I/O channels. A watch carries glance and haptic. A smart speaker carries voice only. The device isn't the product. The device is a bundle of interface endpoints, and each bundle offers a different subset of the vocabulary. The brain addresses I/O channels, not devices. It says "send a glance signal" and the access layer figures out which surfaces are listening.

This changes how you think about building for new platforms. You don't ask "what should our Apple Watch app do?" You ask "which I/O channels does a wrist carry?" Glance and haptic. So you render one blended dot and a vibration pattern. Done. No feature negotiation, no reduced-functionality version. The I/O channel vocabulary defines the experience. The platform just provides the surface.
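The addressing model can be sketched as a registry of surfaces and the channels each carries. The registry below is illustrative and lists only the channels the text names explicitly (glance, haptic, voice), not the full five-type vocabulary:

```python
# Which I/O channels each surface carries (illustrative subset).
SURFACES = {
    "laptop": {"glance", "haptic", "voice"},
    "watch": {"glance", "haptic"},
    "smart_speaker": {"voice"},
}

def broadcast(channel):
    """The brain addresses a channel, not a device: every surface
    currently carrying that channel receives the signal."""
    return sorted(name for name, chans in SURFACES.items()
                  if channel in chans)
```

Adding a new platform is adding a row to the registry, not writing a new app: declare which channels the surface carries, and it starts listening.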


The access layer translates

Same three signals. Different shape per I/O channel. The access layer is the translation layer that sits between the brain and every interface endpoint, deciding how to render the shared signal layer for each surface.[5]

On a PC, the access layer renders three dots in a horizontal strip at the top of the screen. Each dot is color-coded and tappable. On a phone, the access layer renders three dots in the notification shade, compact enough to live alongside other notifications without demanding attention. On a watch, the access layer blends three signals into one dot, because the surface can't carry three distinct signals at that size. On a voice interface, the access layer translates the shared signal layer into a sentence.

The access layer isn't just resizing. It's translating between sensory modalities. A visual dot and a spoken sentence are fundamentally different experiences, but they carry the same information. The color blue and the word "calm" are the same signal expressed through different I/O channels. The access layer's job is to preserve the meaning while changing the medium.
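A minimal sketch of that translation, with invented severity ordering and phrasing — the real access layer would carry far more nuance, but the shape is the same: one signal in, a surface-appropriate rendering out:

```python
SEVERITY = {"blue": 0, "green": 1, "amber": 2}
SENTENCE = {"blue": "Things are calm.",
            "green": "Things are moving.",
            "amber": "Something needs your attention."}

def render(signals, surface):
    """Translate the shared signal layer for one interface endpoint.

    `signals` maps channel -> color, e.g.
    {"sense": "blue", "work": "amber", "joy": "green"}.
    """
    worst = max(signals.values(), key=SEVERITY.__getitem__)
    if surface == "pc":       # three distinct, tappable dots
        return [signals[c] for c in ("sense", "work", "joy")]
    if surface == "watch":    # one blended dot: worst color wins
        return worst
    if surface == "voice":    # same signal, different medium
        return SENTENCE[worst]
    raise ValueError(f"unknown surface: {surface}")
```

Note that blending and speaking both reduce to the same question — what is the worst color right now? — which is why the watch dot and the voice sentence never disagree.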

This is what makes the architecture behave like an integrated sensory system rather than a software platform. Your own sensory system does the same thing: the same "danger" signal arrives as a visual flash, a sound, a skin sensation, or a gut feeling depending on which channels are active. The signal is one. The experiences are many. The body's access layer translates.

For the system where the UI is the product, the access layer is what makes that possible across surfaces. There's no "real" version of the interface that the watch version is a reduced copy of. Each interface endpoint is a complete experience. The watch version is the real version, for wrists. The voice version is the real version, for ears. The access layer ensures each one is whole.


Where this breaks down

Three signals is reductive. That's the point, but it's also the risk. Some situations carry nuance that three colors can't express. "Amber" on joy could mean you've been heads-down for six hours and haven't taken a break, or it could mean a relationship you care about has gone quiet for too long. The dot is the same color. The appropriate response is completely different.

This means the drill-down layer is structurally load-bearing. If you tap an amber dot and the detail layer fails to explain why it's amber in a way that leads to a clear next step, the whole model collapses. The glance only works because the drill-down is trustworthy. The moment the detail layer feels vague or unhelpful, the user stops trusting the dot, and now they're back to wanting a dashboard. The glance layer is the promise. The detail layer is where you keep it.

Alert fatigue

If amber fires too often, it becomes noise. This is signal detection theory applied directly: every detection system has a threshold, and if the threshold is too sensitive, false alarms erode trust until the user ignores the signal entirely.[6] A dot that's always amber is the same as no dot at all. Calibration is the ongoing work. The thresholds need to be personal, because what counts as "needs attention" is different for every person. And those personal thresholds need to be revisited, because what counts as "needs attention" changes as you change.

In practice, this means a few concrete levers: per-signal sensitivity (your Work amber might fire at 2 failed tasks; mine fires at 5), learning from dismissals (if you consistently glance at amber and don't tap, the threshold should tighten), and scheduled recalibration ("Do these thresholds still fit you?" asked at regular intervals). Calibration isn't a settings page. It's the same process as onboarding and the same process as a performance review. The question is always the same: does this match how you actually feel?
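The learning-from-dismissals lever can be sketched as a small stateful threshold. The specific numbers (amber at 2 failures, loosen after 3 untapped ambers) are illustrative defaults, not the system's actual calibration:

```python
class AmberThreshold:
    """Per-signal sensitivity that learns from dismissals.

    Fire amber at `failures_to_amber` failed tasks. After three
    ambers glanced at but never tapped, assume the signal is too
    eager and raise the threshold by one.
    """
    def __init__(self, failures_to_amber=2):
        self.failures_to_amber = failures_to_amber
        self.untapped_in_a_row = 0

    def color(self, failed_tasks):
        return "amber" if failed_tasks >= self.failures_to_amber else "green"

    def record_amber(self, tapped):
        if tapped:
            self.untapped_in_a_row = 0   # the amber earned a look: keep it
            return
        self.untapped_in_a_row += 1
        if self.untapped_in_a_row >= 3:
            self.failures_to_amber += 1  # fire less often
            self.untapped_in_a_row = 0
```

A tap resets the dismissal streak: an amber that gets investigated is an amber worth keeping at its current sensitivity.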

The three-signal contract

Sense, Work, and Joy are a specific model of what matters to a person. It's a bet. And bets can be wrong. Some people's lives don't decompose neatly into those three buckets. Some seasons of life are dominated by one signal so completely that the other two feel irrelevant. The model assumes balance is the goal. Not everyone agrees. The non-negotiable contract of three signals is also a constraint that resists adaptation, and any honest accounting has to name that tension.

If the bet is wrong, the escape hatch is re-labeling, not restructuring. The three channels are fixed; what they measure is not. A founder in crisis mode might map all three to execution dimensions. A parent on leave might map them to family, health, and rest. The structure holds. The labels breathe.
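That contract — channels fixed, labels free — is small enough to state as code. A hypothetical sketch:

```python
class SignalContract:
    """Three channels, fixed. Labels, configurable."""
    CHANNELS = 3

    def __init__(self, labels=("sense", "work", "joy")):
        if len(labels) != self.CHANNELS:
            raise ValueError(
                "three channels are the contract; re-label, don't restructure")
        self.labels = tuple(labels)
```

A parent on leave instantiates `SignalContract(("family", "health", "rest"))`; asking for a fourth channel is refused at construction, because that would change the structure rather than the labels.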


The feeling at the boundary

There's a reason we called this paper "Interface Endpoints" and not "API Design Patterns." An API is a contract between systems. An interface endpoint is a place where a system meets the world. The difference matters. Contracts are negotiated, documented, versioned. Interface endpoints just feel.

You don't negotiate with your fingertips about what temperature means. You touch the surface and you know. The knowing is immediate, pre-verbal, and trusted. That's what a good public interface should feel like. Not a report you read. Not a dashboard you interpret. A surface you touch and know.

Three dots. Blue, green, amber. The brain stays put. The I/O channels plug in and out. And the feeling at the boundary is all you need to keep the person at the center.


References

  1. Few, S. (2006). Information Dashboard Design: The Effective Visual Communication of Data. O'Reilly Media. Few's central argument is that most dashboards fail because they present too much data with too little visual structure, forcing the viewer into conscious interpretation of every element. He advocates for designs that support rapid perceptual processing, which aligns with the three-dot model's emphasis on pre-attentive signal over analytical reading.
  2. Weiser, M. & Brown, J.S. (1996). "Designing Calm Technology." Xerox PARC. Weiser and Brown introduced the concept of calm technology: systems that inform without demanding attention, moving information between the periphery and the center of awareness as needed. The three-dot interface is a direct application of this principle. The dots live at the periphery until amber pulls one to the center.
  3. Nielsen, J. (2006). "Progressive Disclosure." Nielsen Norman Group. Progressive disclosure reduces complexity by showing only the information needed at each stage, with detail available on demand. The glance/drill-down/system model is progressive disclosure applied to an entire system surface rather than a single form or dialog. The key insight is that most users should never need to reach the deepest layer.
  4. Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B. & Patten, J. (1998). "Ambient Displays: Turning Architectural Space into an Interface." Proceedings of CoBuild '98. Springer. The MIT Media Lab's ambient display research demonstrated that environmental, peripheral information displays can convey system state without demanding focused attention. Their ambientROOM prototype used water ripples, sound, and light to represent information. Three dots extend this concept from architectural installations to any surface that can render color.
  5. Matthews, T., Dey, A.K., Mankoff, J., Carter, S. & Rattenbury, T. (2004). "A Toolkit for Managing User Attention in Peripheral Displays." Proceedings of UIST '04. ACM. This work formalized the design space for peripheral awareness displays, identifying key dimensions including notification level, transition type, and abstraction level. The access layer's job of translating the same signal into different sensory modalities maps directly to their framework for managing attention across display types.
  6. Green, D.M. & Swets, J.A. (1966). Signal Detection Theory and Psychophysics. Wiley. The foundational text on how humans detect signals in noise. The receiver operating characteristic (ROC) curve describes the tradeoff between hit rate and false alarm rate. For the three-dot model, this means calibration is not optional. A system that generates too many amber signals shifts the user's criterion toward ignoring them, degrading the entire interface from peripheral awareness to background noise.
