Thinking in Systems, Talking to Machines
How systems thinking reshaped how I build with AI, and why context isn’t memory, it’s structure. Building a Better Context for AI - part 3!
Hi friends,
When I started working on this project, I thought I was building a better memory.
For me. For the AI. For the work in between.
But the more I sat with it, and the more I paid attention to the friction in my day-to-day, the more I started to realize:
What I really needed wasn’t memory.
It was structure.
Let me explain.
The patterns beneath the pain
By the time I noticed how much I was repeating myself to the AI, the pattern had already taken hold. It wasn’t just that the model forgot. It was that I had no system for what it should remember.
I’d write something like “the goal of this feature is to make it easier for users to…”
…and five minutes later, I’d say it again.
And again, in a different thread.
And again, in another tool.
Each time slightly reworded. Each time slightly different.
Each time a fresh prompt, as if we’d never met.
And then came the real question:
If I’m repeating this, where should it live?
Where do goals go? Where do decisions go? Where does context go?
That’s when systems thinking kicked in.
Systems thinking, quietly applied
Systems thinking teaches you to stop chasing symptoms and start mapping flows.
What moves through this system?
What reinforces? What decays? What loops?
So I started asking:
If context drifts over time… what process keeps it grounded?
If conversations fork… what tracks the decisions that split them?
If I change direction… what updates the downstream instructions?
And just like that, the problem shape shifted.
This wasn’t a note-taking issue. It wasn’t even an AI issue.
It was a structure issue.
We treat context like loose memory: scraps of old chats, scattered notes, saved prompts.
But what we need is a system of flow.
Not memory. Not metadata. A structure.
Once I saw it that way, I couldn’t unsee it.
Context isn’t a blob of remembered facts.
It’s a map.
It’s how things connect.
It’s what tells the AI why we’re doing what we’re doing, not just what we’re doing.
So I started designing it like a system:
facts/ for reusable knowledge
goals/ to track intent over time
decisions/ with reasons, tradeoffs, context
summaries/ to keep pace as projects evolve
archives/ for full history, tucked safely away
It’s just a folder.
But now, it’s a system.
And that system gives shape to everything else.
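If you want to try the folder structure yourself, here is a minimal sketch in Python. The folder names come from the post; the `init_context` helper and the `context/` root are my own illustrative choices, not part of any tool.

```python
from pathlib import Path

# Folder names from the post; the descriptions are just reminders.
CONTEXT_DIRS = {
    "facts": "reusable knowledge",
    "goals": "intent over time",
    "decisions": "reasons, tradeoffs, context",
    "summaries": "rolling project summaries",
    "archives": "full history, tucked safely away",
}

def init_context(root: str = "context") -> Path:
    """Create the context folders if they don't exist yet."""
    base = Path(root)
    for name in CONTEXT_DIRS:
        (base / name).mkdir(parents=True, exist_ok=True)
    return base
```

Run it once at the top of a project and you have the skeleton; what turns it from a folder into a system is the discipline of putting each kind of context in its place.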
Context has a lifecycle
The more I worked on it, the more I saw: context isn’t static.
It ages. It drifts. It grows stale.
And most tools treat it like a flat note.
Permanent, or invisible. Nothing in between.
But good systems change over time.
So I added ways to:
Compress bloated context into clean summaries
Flag outdated files and prune them when needed
Separate what’s active from what’s just reference
Load exactly what matters, and nothing more
Now, context isn’t something I fight to maintain.
It’s something that evolves with me, without dragging me down.
Structure unlocks trust
Here’s what changed for me:
When I structured context like a system, not a memory, I finally started trusting it.
I stopped second-guessing whether the AI “got it.”
I stopped wasting time reestablishing what we already knew.
I stopped feeling like I was the only one holding the thread.
Because the thread was there. In the structure.
And that structure could travel with me, across tools, across models, across time.
And that’s the deeper why
This project didn’t start with a plan to reengineer AI workflows.
It started with quiet friction.
But the more I traced that friction back to its root, the more I saw a pattern:
Most tools treat context like temporary metadata.
I wanted to treat it like a living system.
So that’s what I’m building:
A system you can see. Shape. Trust.
Where context isn’t a lucky guess, but a deliberate foundation.
What’s next
The Cost of Context Switching.
Next time, I’ll zoom in on something we don’t talk about enough: how exhausting it is to juggle goals, codebases, and conversations—especially when the context keeps slipping through your fingers.
This isn’t just about memory loss in chat. It’s about the mental tax of having to constantly reorient, re-explain, and reconnect threads across your work.
Thanks for following along,
Adeline