Humorphism: The Interface AI Deserves
Two weeks ago, AWS held its “What’s Next with AWS” event [2]. Among the big announcements, one thing caught my attention that wasn’t a product launch. It was a design philosophy.
Humorphism. A word I hadn’t encountered before.
The headlines will be about the OpenAI partnership [3], and rightly so. OpenAI on Bedrock, Codex, Managed Agents. That’s a major shift. But buried in the product announcements was a design philosophy that I think deserves its own conversation.
From Leather Textures to Human Dynamics
Remember skeuomorphism? Apple’s calendar app with the stitched leather texture. The notepad with the torn yellow pages. The trash can on your desktop. The idea was simple: make digital things look like physical things so people feel comfortable using them.
It worked. For a while. Then we moved on to flat design, material design, and eventually forgot why we were mimicking physical objects in the first place.
Humorphism picks up where skeuomorphism left off, but it mimics something different. Not the physics of objects. The dynamics of people. How humans negotiate, interrupt, delegate, escalate. How we build trust through small acts of alignment before we hand off work.
Hector Ouilhet, VP of Design for AWS Applied AI Solutions, put it sharply:
“Artificial intelligence has never been so intelligent — or so artificial.”
That line landed. Because it names the exact problem. We have models that can reason across million-token contexts, write production code, and hold subtle conversations. And we’re stuffing them behind sparkle buttons and bolted-on chat panels.
We keep forcing fluid intelligence into rigid containers.
The Desktop Metaphor Trap
Nobody wants to say it out loud: we’re still living inside the Desktop Metaphor. Files, folders, trash cans. Concepts from the 1980s, designed to make computers approachable by mimicking an office desk.
That metaphor was brilliant for its time. But it was built for a world where computers were tools you operated. You clicked. You dragged. You double-clicked. The computer waited.
AI doesn’t wait. AI reasons, suggests, acts, and sometimes gets it wrong. It’s not a tool you operate. It’s closer to a colleague you collaborate with. And we’re still designing interfaces as if the user is an operator and the AI is a fancy text box.
Humorphism reframes this entirely. The shift isn’t from “bad UI” to “good UI.” It’s from “users who operate tools” to “humans who cooperate with teammates.”
That’s not a design tweak. That’s a fundamentally different starting point.
The 10 Foundations
The humorphism manifesto [1] lays out 10 foundations. Not UI components. Not design tokens. Collaboration patterns that humans have refined for millennia:
| # | Foundation | What It Means |
|---|---|---|
| 1 | Notice | Observe context before acting |
| 2 | Align | Establish shared understanding |
| 3 | Delegate | Assign work with clear intent |
| 4 | Execute | Do the work |
| 5 | Decide | Make choices with appropriate authority |
| 6 | Communicate | Share status, reasoning, and results |
| 7 | Coach | Teach and improve over time |
| 8 | Verify | Check work against expectations |
| 9 | Consent | Get agreement before consequential actions |
| 10 | Escalate | Raise issues to the right level |
Look at the ordering. It tells you something about the philosophy.
Notice comes before Execute. You observe before you act. Align comes before Delegate. You establish shared understanding before you hand off work. Consent comes before Escalate. You get agreement before you raise the stakes.
Awareness and alignment precede action. Every time.
That said, this isn’t a waterfall. Real collaboration is cyclical: you start executing, realize you didn’t align properly, and loop back. The ordering reflects priority, not a rigid pipeline. Awareness before action as a default, not a gate.
Most AI interfaces today skip straight to Execute. Type a prompt, get a response. No Notice. No Align. No Consent. Just raw execution. That’s not collaboration. That’s a command line with better autocomplete.
Why This Matters If You Build AI Applications
If you’re building AI-powered products, here’s the uncomfortable truth: the interface IS the product. Not the model. Not the infrastructure. Not the fine-tuning pipeline. The interface.
A brilliant model behind a bad interface is a bad product.
Yes, enterprise buyers evaluate model capabilities, compliance certifications, and pricing first. But a tool that’s bought and never adopted because the interface is bad is still a failed product. For adoption and retention (the metrics that matter after the contract is signed), the interface is the product.
Now, the skeptic in me says: haven’t people been saying “design for collaboration, not operation” for years? Human-centered design, cooperative AI, the “AI teammate” framing. None of this is new as a general idea.
What’s new is the specificity. The 10 foundations aren’t abstract principles. They’re a concrete checklist with a deliberate ordering. Most design frameworks for AI stop at “be helpful” or “keep the human in the loop.” Humorphism says: here are the 10 things a collaboration actually requires, in the order they should happen. That’s actionable in a way that “design for humans” never was.
And right now, most AI interfaces are skeuomorphic in the worst way. Chat panels are the new sparkle buttons. We’re mimicking the form of conversation without the substance of collaboration. A chat window doesn’t make your AI a collaborator any more than a leather texture made your calendar a real notebook.
The 10 foundations give you a practical design checklist. For every AI feature you ship, ask:
- Does it Notice? Does it observe the user’s context before acting, or does it wait passively for a prompt?
- Does it Align? Does it confirm understanding before executing, or does it assume it knows what you want?
- Does it Delegate with intent? When it hands work to sub-agents or tools, is the delegation transparent?
- Does it seek Consent before consequential actions? Or does it just do things and hope for the best?
- Does it Escalate appropriately? When it’s uncertain, does it flag that uncertainty or hide it behind confident-sounding prose?
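As a thought experiment, the checklist can be encoded as a tiny audit helper. This is a hypothetical sketch, not anything the manifesto ships: the foundation names come from the manifesto, but the `FeatureAudit` structure and the example feature are my own invention.

```python
from dataclasses import dataclass, field

# The 10 foundations, in the manifesto's deliberate order.
FOUNDATIONS = [
    "Notice", "Align", "Delegate", "Execute", "Decide",
    "Communicate", "Coach", "Verify", "Consent", "Escalate",
]

@dataclass
class FeatureAudit:
    """Hypothetical audit of one AI feature against the 10 foundations."""
    name: str
    covered: set = field(default_factory=set)

    def missing(self) -> list:
        # Report gaps in the manifesto's order, not arbitrary set order.
        return [f for f in FOUNDATIONS if f not in self.covered]

# A prompt-in, response-out chat box covers Execute (and maybe Communicate)
# and little else -- the "raw execution" pattern described above.
chat_box = FeatureAudit("bare chat panel", covered={"Execute", "Communicate"})
print(chat_box.missing())
```

Running the audit on the bare chat panel surfaces eight missing foundations, with Notice and Align at the top of the list, which is exactly the point: most of the collaboration happens before and after execution.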
Look at the Amazon Connect expansion announced at the same event [4]. Four products (Customer AI, Decisions, Talent, Health) built around agents that “behave more like human workers do,” as Colleen Aubrey described it. They learn context, prioritize tasks, and proactively ask for whatever information they need. That’s humorphism in practice. Not a chatbot. Not a dashboard with an AI sidebar. An agent that Notices, Aligns, and Escalates.
The Deeper Thread: What Mead Saw a Century Ago
Here’s where it gets interesting. And personal.
Shortly after the event, my colleague Sebastian Thielke left a comment on my RAG post [7] that sent me down a research rabbit hole. He referenced George Herbert Mead’s symbolic interactionism, a sociological framework from the 1930s that I hadn’t thought about since university.
Mead’s core insight: meaning doesn’t live inside individuals. It arises between actors through interaction [6]. A gesture only becomes meaningful when another person interprets it and responds. The meaning is in the exchange, not in the sender or receiver.
Read that again in the context of humorphism.
The manifesto says we should stop treating people as “users who operate tools” and start treating them as “humans who cooperate with teammates.” That’s Mead. Meaning, and value, arises between the human and the AI through interaction. Not from the model’s capabilities alone. Not from the user’s prompt alone. From the dynamic between them.
The 10 foundations are essentially Mead’s “social acts” codified as design patterns. Notice is perception. Align is Mead’s “conversation of gestures,” the back-and-forth through which shared meaning emerges. Consent is the social contract that makes cooperation possible.
This isn’t just UX polish. It’s a philosophical shift in how we think about human-AI collaboration.
And it connects to a problem the ThoughtWorks Technology Radar Vol. 34 [5] recently named: “codebase cognitive debt,” the invisible cost when AI generates faster than humans build understanding. Humorphism’s Coach foundation is a direct response. An AI that coaches doesn’t just produce outputs. It builds the human’s mental model. It reduces cognitive debt instead of compounding it.
Concretely: an AI that explains why it chose a particular architecture pattern (“I used an event-driven approach here because your existing services already communicate asynchronously”) is coaching. An AI that just generates the code is executing. The first builds understanding you can maintain and extend. The second creates artifacts you can’t reason about when they break at 2 AM.
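The difference between coaching and executing can be made concrete as a response shape. This is a minimal sketch under my own assumptions (the `AgentOutput` schema and `is_coaching` helper are hypothetical, not part of any AWS or humorphism API): an executing agent returns only an artifact, while a coaching agent pairs the artifact with the rationale behind it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentOutput:
    """Hypothetical reply shape: the artifact plus the optional 'why'."""
    artifact: str                     # the generated code, doc, or decision
    rationale: Optional[str] = None   # the explanation that builds the human's mental model

def is_coaching(output: AgentOutput) -> bool:
    # Executing produces an artifact; coaching also explains it.
    return output.rationale is not None

executing = AgentOutput(artifact="event_bus.py")
coaching = AgentOutput(
    artifact="event_bus.py",
    rationale="Event-driven fits here: your services already communicate asynchronously.",
)
print(is_coaching(executing), is_coaching(coaching))
```

The design choice worth noting: making `rationale` a first-class field, rather than prose buried in the output, is what lets a product surface it, log it, and measure whether it actually reduces cognitive debt.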
What Would Your Product Look Like?
The question isn’t whether AI will get smarter. It will. The models will get faster, cheaper, more capable. That trajectory is set.
The real question is whether we’ll design interfaces worthy of that intelligence.
Humorphism is a bet that the answer starts with treating AI as a teammate, not a tool. With designing for cooperation, not operation. With building interfaces that Notice before they Execute, that Align before they Delegate, that seek Consent before they Escalate.
I wrote recently about the protocol we should have built for humans [8], how MCP solved for AI agents what we never solved for human developers. Humorphism is the design counterpart to that argument. MCP gave agents a protocol for connecting to tools. Humorphism gives designers a protocol for connecting AI to humans.
Both were overdue.
What would your product look like if you designed it for humans cooperating with AI teammates instead of users operating AI tools?
Sources
[1] Humorphism manifesto — https://humorphism.com/
[2] “What’s Next with AWS” event — https://aws.amazon.com/events/whats-next-with-aws/
[3] AWS and OpenAI partnership announcement — https://www.aboutamazon.com/news/aws/bedrock-openai-models
[4] Amazon Connect expansion — https://www.aboutamazon.com/news/aws/amazon-connect-ai-business-set
[5] ThoughtWorks Technology Radar Vol. 34 — https://www.thoughtworks.com/radar
[6] Mead, Mind, Self, and Society (1934)
[7] Stefan Christoph, “Is RAG Still Needed?” — https://schristoph.online/blog/is-rag-still-needed/
[8] Stefan Christoph, “The Protocol We Should Have Built for Humans” — https://schristoph.online/blog/the-protocol-we-should-have-built-for-humans/
❤️ Created with the support of AI (Kiro)
📝 Last updated: May 2, 2026 — Minor edits