- can write code
- tireless
- have no aspirations
- have no stylistic or architectural preferences
- have a massive, but at the same time not well defined, body of knowledge
- have no intrinsic memory of past interactions
- change in unexpected ways when underlying models change
- ...
Edit: Drones? Drains?
- don't have career growth that you can feel good about having contributed to
- don't have a genuine interest in accomplishment or team goals
- have no past and no future. When you change companies, they won't recognize you in the hall.
- no ownership over results. If they make a mistake, they won't suffer.
Whenever I have a model fix something new I ask it to update the markdown implementation guides I have in the docs folder in my projects. I add these files to context as needed. I have one for implementing routes and one for implementing backend tests and so on.
They then know how to do stuff in the future in my projects.
That sounds a lot like '50 First Dates' but for programming.
Yes, this is something people using LLMs for coding probably pick up on the first day. They're not "learning" as humans do, obviously. Instead, the process is that you figure out what was missing from the first message you sent where they got something wrong, change it, and restart from the beginning. The "learning" is you keeping track of what you need to include in the context; exactly how that process works is up to you. For some it's very automatic, and you don't add/remove things yourself; for others it's a text file they keep around and copy-paste into a chat UI.
This is what people mean when they say "you can kind of do "learning" (not literally) for LLMs"
It's functionally working the same as learning.
If you look at it like a black box, then you can't tell the difference from the input and output.
Key words are these.
> They then know how to do stuff in the future in my projects.
No. No, they don't. Every new session is a blank slate, and you have to feed those markdown files manually to their context.
AGENTS.md exists, Codex and Crush support it directly. Copilot, Gemini and Claude have their own variants and their /init commands look at AGENTS.md automatically to initialise the project.
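For context, an AGENTS.md is just a markdown file of project instructions that agents read on startup. A minimal sketch of what one might contain, pointing at per-topic guides like the ones described above (the file names and rules here are hypothetical examples, not from any specific project):

```markdown
# AGENTS.md (hypothetical example)

## Project conventions
- Routes: follow docs/implementing-routes.md before adding any endpoint.
- Backend tests: follow docs/backend-tests.md; run the test suite before finishing.

## Things that look like improvements but aren't
- Do not refactor shared helpers into per-feature copies; check the
  internal common library first.
```

The point is that supporting tools pick this file up automatically, so the "feeding context manually" step disappears.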
Nobody is feeding anything "manually" to agents. Only people who think "AI" is a web page do that.
It's a tool, not an intelligent being
We'll fix that, eventually.
- don't have career growth that you can feel good about having contributed to
Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
- don't have a genuine interest in accomplishment or team goals
Easy to train for, if it turns out to be necessary. I'd always assumed that a competitive drive would be necessary in order to achieve or at least simulate human-level intelligence, but things don't seem to be playing out that way.
- have no past and no future. When you change companies, they won't recognize you in the hall.
Or on the picket line.
- no ownership over results. If they make a mistake, they won't suffer.
Good deal. Less human suffering is usually worth striving for.
> Humans are on the verge of building machines that are smarter than we are.
You're not describing a system that exists. You're describing a system that might exist in some sci-fi fantasy future. You might as well be saying "there's no point learning to code because soon the rapture will come".
It's also the premise of The Matrix. I feel pretty goddamned uneasy about that.
Why?
Coincidentally, the hippocampus looks like a seahorse (emoji). It's all connected.
Not to mention, hippocampus literally means "seahorse" in Greek. I knew neither of those things before today, thanks!
- constantly give wrong answers, with surprising confidence
- constantly apologize, then make the same mistake again immediately
- constantly forget what you just told them
- ...
They can usually write code, but not that well. They have lots of energy and little to say about architecture and style. They don't have a well defined body of knowledge and have no experience. Individual juniors don't change, but the cast members of your junior cohort regularly do.
But they don't have a grasp of the project's architecture and will reinvent the wheel for feature X even when feature Y has it or there is an internal common library that does it. This is why you need to be the "manager of agents" and stay on top of their work.
Sometimes it's just about hitting ESC and going "waitaminute, why'd you do that?" and sometimes it's about updating the project documentation (AGENTS.md, docs/) with extra information.
Example: I have a project with a system that builds "rules" using a specific interpreter. Every LLM wants to "optimise" it by using a pattern that looks correct, but will in fact break immediately when there's more than one simultaneous user - and I have a unit test that catches it.
I got tired of LLMs trying to "optimise" that bit the wrong way, so I added a specific instruction, with reasoning for why it shouldn't be attempted and a note that it has been tried and failed multiple times. And now they've stopped doing it =)
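A minimal sketch of the kind of unit test that catches this sort of "optimisation" (the interpreter, rule, and names below are all hypothetical, not the actual project's code): two simulated users hammer the same interpreter concurrently, so any version that caches per-call inputs on shared state will leak one user's inputs into the other's result.

```python
import threading

# Hypothetical sketch: a trivial "rules interpreter". The tempting
# "optimisation" is to cache threshold/value on self between calls;
# that races under concurrent users, while call-local state doesn't.
class RuleInterpreter:
    def evaluate(self, threshold, value):
        # Correct version: all per-request data stays in local variables.
        return value > threshold

def test_concurrent_users_get_isolated_results():
    interp = RuleInterpreter()
    results = {}

    def run(name, threshold, value):
        # Hammer the interpreter so a shared-state race would surface.
        for _ in range(10_000):
            results[name] = interp.evaluate(threshold, value)

    t1 = threading.Thread(target=run, args=("a", 5, 10))
    t2 = threading.Thread(target=run, args=("b", 5, 1))
    t1.start(); t2.start()
    t1.join(); t2.join()

    # With self-cached inputs, one thread's threshold/value can bleed
    # into the other's evaluation and flip these results.
    assert results["a"] is True   # 10 > 5
    assert results["b"] is False  # 1 > 5

test_concurrent_users_get_isolated_results()
```

Against the correct, stateless implementation the test passes; against the "optimised" shared-state variant it flakes almost immediately, which is exactly what you want a guard-rail test to do.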