Hi friends,
Before I ever sat down to write a line of code for this project, the idea of context had already started to bug me.
I was brainstorming from my phone, switching from app to app, and in those small, daily moments I found myself repeating the same things to my AI tool of choice, over and over again.
It was like having a really smart coworker who forgot everything we talked about yesterday. Or five minutes ago.
In my day-to-day work, I noticed something: the more I leaned on AI to help me move faster, the more often I found myself hitting the same walls.
Repeating a decision I’d already made.
Re-describing a feature I’d already scoped.
Re-sharing context I’d just explained in another thread.
And with every repetition, a little bit of friction crept in.
The first time, I figured: okay, no big deal. I’ll just paste a few notes from earlier. Add some code for reference. Remind the AI what we’re doing. But when this became a pattern, across projects, across tools, across days, I started to realize: this isn’t just an inconvenience. It’s a design flaw in how we work with AI.
And most of it was driving me crazy.
A few patterns
Looking back, a few patterns started to stand out:
Context Drift: Even within the same chat thread, subtle misunderstandings would pile up. A variable renamed. A goal slightly misinterpreted. A tone shift. It was like the AI was trying to help with the current step without remembering why we were walking in the first place. 🤔
Prompt Reuse: I kept writing the same things. Explaining my project, my goals, my style preferences. If you’ve ever had a document called "README for GPT" or a note titled "Things to remind ChatGPT," you know the feeling. 📃📃📃
Fractured Conversations: Because the tools don’t share memory across threads or tabs, I’d end up duplicating context just to pick up where I left off. It made switching contexts feel brittle and annoying. 😩
Mental Load: The more I tried to work around the lack of memory, the more I had to manage it myself. Copying. Pasting. Summarizing. Reframing. And with each workaround, the more I felt like I was compensating for something that should’ve been built into the workflow. 😠
These weren’t isolated bugs. They were systemic clues.
Context as a first-class citizen
These clues made me wonder: what would it look like to treat context as a first-class citizen in the way we work with AI?
The quiet friction that builds
The hardest part about this problem is that it doesn’t always scream. It whispers.
It shows up as a hesitation to ask another question because you know it’ll take five minutes just to bring the AI up to speed. It shows up as missed nuance, dropped threads, or suggestions that are technically correct but totally off-base. And it shows up in how we slowly lower our expectations.
We get used to reexplaining. We assume the AI won’t remember. We accept the limits.
But what if we didn’t?
What if we made it easy to keep that context alive? Intentionally, explicitly, and portably?
This project didn’t start with a grand vision. It started with a hunch: that the most useful tool wouldn’t be the smartest model or the flashiest interface, but something that helped reduce friction in the work I already do.
A way to say: "Here’s what matters. Remember this. Build on it."
That hunch became a prototype. Then a folder structure. Then a CLI command. And slowly, it’s becoming a system. But it started here: in those quiet moments where something felt off.
What's next
Next up: Thinking in Systems, Talking to Machines.
That’s where I’ll share how systems thinking shaped my view of context, and why I think we’ve been treating it too much like memory and not enough like structure.
Let alone as personality and character! 🎭
Thanks for following along,
Adeline