Fictional Product
Name: Mnemos
Tagline: Your memory, not theirs.
Core idea:
A persistent, user-owned Context Memory Layer that sits above AI models and below apps. Any AI. Any app. Same memory. No lobotomy between sessions.
Think MCP, but opinionated, portable, and hostile to vendor lock-in.
The Conversation
User:
I’m tired of explaining myself. Every AI forgets who I am, what I’m building, what I decided yesterday.
Mnemos:
That’s because you’re renting memory from companies whose incentive is to reset it.
User:
I switch between models. GPT for reasoning. Claude for writing. Local models for private work. Each one acts like we just met.
Mnemos:
Because they did. You never moved your mind with you.
User:
So what are you?
Mnemos:
I’m not an AI. I’m your memory substrate.
What Mnemos Actually Does
Mnemos = Context OS
Not chat history. Not embeddings soup.
A structured, versioned, inspectable memory graph.
It stores:
- Long-term identity (who you are, how you think)
- Project state (what exists, what’s decided, what’s open)
- Preferences (tone, depth, constraints)
- Mental models (how you reason, not just what you said)
- Active context windows (what matters right now)
All of it:
- Model-agnostic
- App-agnostic
- User-owned
How It Works (No Marketing Lies)
1. Memory Is Externalized
Models don’t “remember.”
They request memory.
Every AI interaction becomes:
AI ↔ Mnemos ↔ App
The model asks:
- “Who is this user?”
- “What project are we in?”
- “What constraints apply?”
Mnemos answers. Same answer for every model.
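Roughly, in code. This is a sketch only; Mnemos has no published API, and every name below (MnemosBroker, ContextAnswer, resolve) is invented for illustration:

```python
# Illustrative sketch: a broker between app and model that answers context
# queries the same way no matter which model is asking. All names are invented.
from dataclasses import dataclass

@dataclass
class ContextAnswer:
    identity: str            # "Who is this user?"
    project: str             # "What project are we in?"
    constraints: list[str]   # "What constraints apply?"

class MnemosBroker:
    """Sits between the app and the model; the model requests memory, it never owns it."""
    def __init__(self, store: dict):
        self.store = store

    def resolve(self, model_name: str) -> ContextAnswer:
        # Same answer for every model; model_name only matters for filtering later.
        return ContextAnswer(
            identity=self.store["identity"],
            project=self.store["active_project"],
            constraints=self.store["constraints"],
        )

broker = MnemosBroker({
    "identity": "Solo founder, prefers terse answers",
    "active_project": "Mnemos spec v0.3",
    "constraints": ["no cloud storage for legal docs"],
})
print(broker.resolve("gpt"))     # identical context...
print(broker.resolve("claude"))  # ...for every model
```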
2. Memory Is Typed, Not Blobs
Memory objects:
- Identity
- Project
- Decision
- Assumption
- OpenQuestion
- Preference
- RedLine
- MentalModel
Each with:
- provenance (where it came from)
- confidence
- expiry
- scope
No more hallucinated permanence.
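A sketch of what one typed memory object might look like, assuming dataclass-style fields rather than any real Mnemos schema:

```python
# Sketch of a typed memory object; field names are assumptions, not a published schema.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Decision:
    statement: str                 # the remembered fact itself
    provenance: str                # where it came from (chat, doc, explicit user entry)
    confidence: float              # 0.0-1.0, how sure Mnemos is this still holds
    expires_at: Optional[datetime] # None = no automatic expiry
    scope: str                     # e.g. "project:mnemos-spec" or "global"

    def is_live(self, now: datetime) -> bool:
        return self.expires_at is None or now < self.expires_at

d = Decision(
    statement="Ship the memory graph as SQLite, not a hosted DB",
    provenance="chat:2025-06-02",
    confidence=0.9,
    expires_at=datetime.now() + timedelta(days=90),
    scope="project:mnemos-spec",
)
print(d.is_live(datetime.now()))  # True until the decision ages out
```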
3. Context Is Composable
You don’t load “everything.”
You mount context profiles:
- Founder Mode
- Legal AI – HAQQ
- Sci-Fi Writing – TARC
- One-Hour Sprint
- Private / No Cloud
Switch profile → every AI immediately behaves differently.
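One possible shape for a profile: a named filter over memory scopes plus per-model settings. Purely illustrative, nothing here is a real Mnemos construct:

```python
# Illustrative only: a context profile as a named filter over memory scopes.
PROFILES = {
    "founder-mode":    {"scopes": ["identity", "project:mnemos-spec"], "allow_cloud": True},
    "legal-haqq":      {"scopes": ["identity", "project:haqq-legal"],  "allow_cloud": False},
    "one-hour-sprint": {"scopes": ["project:current-sprint"],          "allow_cloud": True},
}

def mount(profile_name: str, memory: list[dict]) -> list[dict]:
    """Return only the memory objects this profile exposes to downstream models."""
    profile = PROFILES[profile_name]
    return [m for m in memory if m["scope"] in profile["scopes"]]

memory = [
    {"scope": "identity",            "text": "Prefers terse answers"},
    {"scope": "project:haqq-legal",  "text": "Jurisdiction: EU only"},
    {"scope": "project:mnemos-spec", "text": "Storage: SQLite"},
]
print([m["text"] for m in mount("legal-haqq", memory)])
# Switching the profile name is all it takes for every AI to see a different world.
```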
4. Chain Apps Without Losing the Plot
You start in:
- Notion → structuring
- Cursor → coding
- GPT → reasoning
- Claude → writing
- Local LLM → sensitive review
Mnemos keeps the through-line.
No re-explaining.
No copy-pasting.
No “as I said before.”
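A toy version of that through-line: every tool in the chain renders its prompt from the same mounted context instead of re-asking you. The names and wiring are made up for illustration:

```python
# Toy illustration: one mounted context, many tools, no re-explaining.
shared_context = [
    "Project: Mnemos spec v0.3",
    "Decision: storage is SQLite",
    "Constraint: legal docs stay local",
]

def prompt_for(tool: str, task: str) -> str:
    """Each tool gets the same through-line prepended; only the task differs."""
    header = "\n".join(shared_context)
    return f"{header}\n\n[{tool}] {task}"

print(prompt_for("cursor", "Implement the memory graph schema"))
print(prompt_for("claude", "Draft the README intro"))
```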
Why This Is Actually Hard (And Interesting)
Problem 1: Trust
If Mnemos lies, everything breaks.
So:
- Full inspectability
- Diffable memory changes
- Rollbacks
- “Why does the AI think this?” tracing
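A minimal sketch of what diffable, rollbackable memory could mean, assuming an append-only change log (an assumption about mechanics, not a claim about how Mnemos stores anything):

```python
# Sketch: append-only change log so every memory edit is diffable and reversible.
from dataclasses import dataclass, field

@dataclass
class MemoryLog:
    changes: list[tuple[str, str, str]] = field(default_factory=list)  # (key, old, new)
    state: dict = field(default_factory=dict)

    def write(self, key: str, new: str) -> None:
        self.changes.append((key, self.state.get(key, ""), new))
        self.state[key] = new

    def rollback(self) -> None:
        key, old, _ = self.changes.pop()
        self.state[key] = old

    def why(self, key: str) -> list[tuple[str, str, str]]:
        """'Why does the AI think this?' — the full edit history for one key."""
        return [c for c in self.changes if c[0] == key]

log = MemoryLog()
log.write("storage", "Postgres")
log.write("storage", "SQLite")
print(log.why("storage"))    # both edits, oldest first
log.rollback()
print(log.state["storage"])  # back to Postgres
```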
Problem 2: Over-memory
More memory ≠ better.
Mnemos enforces:
- Decay
- Revalidation
- Conflict detection
- Model-specific filtering
Claude doesn’t need your whole life.
A coding agent doesn’t need your tone preferences.
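One crude way to express decay and per-model filtering, offered as an assumption about the mechanics rather than a spec (half-life, thresholds, and the relevance map are all invented):

```python
# Crude sketch: confidence decays over time, and each model sees only the kinds
# of memory it actually needs. Numbers and categories are illustrative.
def decayed_confidence(conf: float, age_days: int, half_life_days: int = 30) -> float:
    """Confidence halves every `half_life_days` unless the memory is revalidated."""
    return conf * (0.5 ** (age_days / half_life_days))

def visible_to(model: str, memories: list[dict]) -> list[dict]:
    """A coding agent sees project facts, not tone preferences; stale items drop out."""
    relevant = {
        "coding-agent":  {"project", "decision"},
        "writing-model": {"identity", "preference", "project"},
    }
    return [
        m for m in memories
        if m["kind"] in relevant.get(model, set())
        and decayed_confidence(m["confidence"], m["age_days"]) > 0.3
    ]

memories = [
    {"kind": "preference", "text": "Dry humour, no emoji", "confidence": 0.9, "age_days": 10},
    {"kind": "decision",   "text": "Storage is SQLite",    "confidence": 0.9, "age_days": 200},
    {"kind": "project",    "text": "Mnemos spec v0.3",     "confidence": 1.0, "age_days": 2},
]
print([m["text"] for m in visible_to("coding-agent", memories)])   # fresh project fact only
print([m["text"] for m in visible_to("writing-model", memories)])  # adds tone preference
```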
Problem 3: Incentives
AI vendors hate this.
Mnemos makes models interchangeable.
That’s the point.
Evaluation: Is This a Real Product or Founder Delusion?
What’s strong
- Solves a universal, felt pain
- Model churn makes this inevitable
- MCP-style infra is emerging anyway
- User ownership narrative is powerful
- Becomes a switching-cost magnet
What’s risky
- Needs standards to win
- Needs ruthless scope control
- Needs to avoid becoming “second brain v2”
- Needs security that’s not hand-wavy
What kills it
- Turning it into a UI-heavy app
- Trying to be smart instead of correct
- Letting models write to memory without friction
The Real Insight (The One People Miss)
AI models are stateless labor.
Humans are stateful.
Right now, we’re letting tools own our state.
Mnemos flips that.
One Sentence Version
Mnemos is a personal memory kernel that lets you change AI brains without losing your mind.