AI Knowledge and Context

date: Aug 12, 2025
type: Post
year: 2025
slug: ai-knowledge-and-context
status: Published
tags: AI, Claude Code
summary: Whatever discoveries you make on your AI coding adventures, the AI will remember none of it.

Knowledge

Whatever discoveries you make on your AI coding adventures - the AI will remember none of it.
Its mechanical brain was formed through all the training data and then switched to read-only mode. Nothing you do with the AI will teach it anything or have any lasting impact. None of your interactions will leave a trace; the next session starts from the exact same point as the last one. All it’s got is its built-in knowledge, the prompt, and nothing else. It makes no new memories. It remembers nothing. In fact, it doesn’t even remember the previous chat messages of the same session: every new message you send packs in the entire session’s chat history, and the AI goes through it from scratch.
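To make that concrete, here is a minimal sketch of what a chat session looks like at the API level, using the Anthropic Python SDK (the model name is just an example; use whichever model you actually use):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = []  # the only "memory" there is: a list that WE maintain

def send(user_message: str) -> str:
    # Every single turn re-sends the ENTIRE history.
    # The model itself stores nothing between calls.
    history.append({"role": "user", "content": user_message})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=1024,
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply
```

Drop the `history` list and the “session” is gone; the model never knew it existed.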
Now, what that means in effect is that the AI knows nothing about anything that happened after it was trained, so it’s a good idea to be aware of the training-data cutoff date of the model you’re using. The cutoff date for Gemini 2.5 Pro was January 2025.
The Tailwind CSS 4 framework was also released in January 2025. So whenever I try to use Tailwind 4 in a project, the AI happily installs the latest version, then tries to use it just like you would use version 3 - completely oblivious to the many breaking changes that came with version 4 - and is then utterly surprised when it doesn’t work.
And it gives up before it thinks about consulting the docs.
The only way to get it to fix the problems it creates that way is to tell it to go read the Tailwind docs. It ingests the information and is then able to do stuff that actually works. But none of that knowledge sticks! As soon as you start a new session, it will happily go and break the entire project without blinking an eye. Claude Sonnet 4’s cutoff date was March 2025, two months later, but it still has the exact same problem.
It’s not unlike what Leonard Shelby goes through in the movie Memento. He wakes up in a motel room with only the memories from before his injury, but no idea where he is or how he got there. He gathers what info he can from his tattoos and the Polaroids in his pockets, and then he goes and pretends to know what he’s doing.
So to help my Leonard-Shelby-like AI agent, I get it to write up instructions whenever I see that it’s finally getting something right. I tell it to save those in a Markdown file; then I can include that file in future prompts and have this highly relevant information stuffed into its context from the start. That’s so much more convenient than going through the same troubles again and again.
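For illustration, such a file might look something like this - the file name and every line in it are just examples from my Tailwind case:

```markdown
# TAILWIND4-NOTES.md - include this in prompts for this project

- This project uses Tailwind CSS v4, NOT v3.
- Styles are imported with a single `@import "tailwindcss";`,
  not the three v3 `@tailwind` directives.
- Configuration is CSS-first (`@theme` in the main CSS file);
  do not generate a tailwind.config.js.
- When in doubt, read the official v4 upgrade guide before touching styles.
```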

Context

So why not just stuff as much info as we can into the context? - Because too much context is bad. Yes, this is counter-intuitive. We humans learn more the more mistakes we make, but the AI doesn’t learn from its mistakes the way we do. As mentioned above, every time you submit a new command, the entire chat history is sent along and lands in a fresh AI brain.
Imagine chatting with someone online. Except it’s not one person - instead, every time you submit a message, a new person receives the full chat history, ending with your newest message. Imagine a fresh clone, if you want - stay with me, I’m trying to paint a picture here - a fresh clone who has to make sense of 200 pages of back and forth about failed attempts to fix an issue. You can see how keeping that whole history attached can hurt more than it helps.
So /clear and start fresh, right? Yes, but not so fast! Even if we’ve given up hope that anything useful will come out of a specific chat session, we can still get some value out of it! Have the AI define the problem! Have it write a Markdown file where it explains what’s wrong and how it thinks it could be fixed. Then /clear and start the new session by having the AI ingest that file! This way it can hit the ground running, and it saves you from having to explain it all, all over again. You might want to review that file first, though. Just in case.
I’ve had good success with this method.
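In API terms - same sketch as above - the handoff is nothing more than seeding the fresh history with the file’s contents (the file name is hypothetical):

```python
# After /clear the history is empty; the only carry-over is what we feed back in.
with open("HANDOFF.md") as f:  # the write-up from the dead session
    handoff = f.read()

history = [{
    "role": "user",
    "content": "Notes from a previous session describing the bug and a "
               "suspected fix:\n\n" + handoff + "\n\nPick up from there.",
}]
```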

So, TL;DR:

  • The AI doesn’t learn and doesn’t remember. Whatever it doesn’t know, you have to stuff into its context with every new session.
  • Too much context is bad. Start every new task with /clear.
  • If you see the AI auto-compacting the context, you’ve gone too far. Only pain and suffering lie beyond this point.
 
Good luck out there!
 
