Hi friends,
For the past year or so, I’ve spent a lot of time talking to AI.
Sometimes it’s to debug a weird edge case. Sometimes it’s to help structure an idea for documentation. Sometimes it’s just to keep track of what I’m working on across a messy day.
And over time, a pattern emerged—something just didn’t feel right.
At first, I thought that was just how it is: models have limitations. Hallucinations. Lost memory. Repetitive prompts. You know what I’m talking about.
But eventually I realized it wasn’t just about the models.
It was about context—and more specifically, how little tooling or structure we have around it.
We talk to these powerful systems as if every session is a blank slate. Or worse, we try to mimic continuity by copy-pasting snippets from previous chats, hoping the AI can guess what matters.
That’s not how we work with code.
That’s not how we write documents.
That’s not how we collaborate with people.
So why are we doing it with AI?
A growing itch
I’ll give you an example.
Let’s say I’m building a new feature and want the AI’s help structuring a few functions. I paste in some code, talk through the problem, and get a useful suggestion. Great.
A few hours later, I return with a follow-up question. I want to refer to the earlier conversation, but now it’s buried in a sea of chat threads. The AI doesn’t remember what we talked about. I’m left recreating the context manually—summarizing decisions, code changes, constraints—just to ask the next question.
It’s like talking to someone with amnesia. Smart, helpful amnesia. But still.
That mental load builds up. The friction grows. And eventually I either stop asking for help or spend more time prompting than progressing.
That’s the gap I want to close!
Not just memory
It’s tempting to think the solution is just “better memory.” Persistent threads. Smarter agents. More tokens.
But I think the problem goes deeper.
We don’t just need longer memory.
We need better structure.
Context isn’t just what happened earlier in the chat. It’s the relevant scaffolding around the task at hand:
What’s the goal?
What’s the user’s current state?
What constraints are in play?
What assumptions were made?
What language, tone, and formatting do we expect?
These aren’t things you get from a prompt alone. They’re layered. Evolving. Often implicit. And they’re scattered across different tools, documents, and mental models.
If we want AI to truly collaborate with us, we need better ways to capture, reuse, and share that context—without starting over every time.
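To make that concrete, here’s a rough sketch of what a reusable context record could look like. This is purely illustrative: the ContextRecord name and its fields are my own hypothetical mapping of the questions above, not an existing format or anything the tool actually defines yet.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ContextRecord:
    """A hypothetical, minimal shape for reusable AI-chat context.

    Each field maps to one of the questions above. None of this is a
    real or standard format, just one way the scaffolding could look.
    """
    goal: str                                             # What's the goal?
    current_state: str                                    # Where is the user right now?
    constraints: list[str] = field(default_factory=list)  # What constraints are in play?
    assumptions: list[str] = field(default_factory=list)  # What assumptions were made?
    style: dict[str, str] = field(default_factory=dict)   # Language, tone, formatting

ctx = ContextRecord(
    goal="Refactor the payment retry logic",
    current_state="Three functions drafted; tests failing on a timeout edge case",
    constraints=["No new dependencies", "Must stay backward compatible"],
    assumptions=["Retries are idempotent"],
    style={"language": "Python", "tone": "concise"},
)

# Serialize so the same context can be pasted into, or loaded by, any chat.
print(json.dumps(asdict(ctx), indent=2))
```

However it ends up stored (JSON, Markdown, whatever wins), the point is that it lives outside any single chat thread.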
Why now?
I’m not the first person to notice this, and I won’t be the last. But I’ve hit the point where it feels worth acting on. Not as a big startup. Not as a whitepaper. Just as a builder who sees an uncomfortable gap and wants to shrink it.
So I’m starting small:
A command-line utility, along with a web-based chat interface, to help manage and reuse context files across AI chats (sketched below).
Something fast and scriptable.
Something I’ll actually use.
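For flavor, here’s the kind of minimal, scriptable interface I have in mind. None of these commands exist yet; the ctx name, the subcommands, and the ~/.ctx storage folder are all placeholders for whatever the tool becomes.

```python
#!/usr/bin/env python3
"""ctx: a hypothetical sketch of the context-file CLI described above.

Nothing here is final; command names and storage layout are placeholders,
shown only to illustrate "fast and scriptable".
"""
import argparse
import pathlib
import sys

STORE = pathlib.Path.home() / ".ctx"  # assumed storage location

def save(name: str) -> None:
    """Read context from stdin and store it under a reusable name."""
    STORE.mkdir(exist_ok=True)
    (STORE / f"{name}.md").write_text(sys.stdin.read())

def show(name: str) -> None:
    """Print a stored context, ready to pipe into the next chat."""
    print((STORE / f"{name}.md").read_text(), end="")

def main() -> None:
    parser = argparse.ArgumentParser(prog="ctx")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("save").add_argument("name")
    sub.add_parser("show").add_argument("name")
    args = parser.parse_args()
    {"save": save, "show": show}[args.command](args.name)

if __name__ == "__main__":
    main()
```

The imagined workflow, under those placeholder names: pipe today’s notes in with `ctx save payment-refactor`, run `ctx show payment-refactor` tomorrow, and pick the thread back up.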
From there, I’ll see where it leads.
What’s next
This post is the first in a short series about why I’m building this and what I’m learning as I go. I’ll share:
What I mean by composable conversations
How I’m thinking about context types
Why I believe a minimal, local-first tool is the right place to begin
And what a long-term framework might look like
If you’ve ever felt the same tension—the weight of repeating yourself, the lost thread between conversations, the friction of starting from zero—you’re not alone.
And maybe, together, we can build something better.
Thanks for following along,
Adeline