After a 10-hour 'flow state' deep coding session, you had this buzzing but not unpleasant feeling of needing your brain to fold back into reality. After a 3-hour frantic agentic coding stint, you are just mentally exhausted from the sheer speed and volume of actions and decisions taken.
On top of that, the perceived 'opportunity cost' of non-productive hours has skyrocketed, and everyone who cares feels like they are constantly three steps behind where they want to be in keeping up with the latest changes (and those are very real steps, not like the past hype-framework shifts or language and tool fads).
This is going to end in lots of burnout and substance abuse along the way.
But whereas that's like reading a book, the way people are using agents is like scrolling TikTok. If you have six or seven agents going and you want to keep the pipeline full, you are constantly context switching. Your brain never gets a chance to enter deep-focus mode. Your attention is constantly being yanked from one thing to the next.
The same thing happens to me if I let myself get distracted at work and try to focus on too many things at once (email, Teams, Jira, actual work...). My brain feels like a fractured mess.
There are some good books on the subject if you're interested:
The Shallows: What the Internet Is Doing to Our Brains — by Nicholas Carr
Amusing Ourselves to Death — by Neil Postman
Dopamine Nation — by Anna Lembke
1. cook up a design
2. write the code
3. compile and run it
4. view the logs, figure out what broke
5. back to 2
AI just makes 2 and 4 happen faster. Which frankly just makes things easier. I don't have to worry about how "modern" my C++ is because it already does it for me. And for debugging, it just does the cmd+clicking through the codebase for me.
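Step 4 is the part that lends itself most to automation: a mechanical scan of the run output for the first failure. A minimal sketch in Python of that scan (the "ERROR"/"FAILED" markers are assumed, not any real log format):

```python
def first_failure(log_lines):
    # Step 4 of the loop above: find the first failing line in the run
    # output, so you know where to jump back to in step 2.
    # "ERROR"/"FAILED" are assumed markers; real log formats vary.
    for lineno, line in enumerate(log_lines, start=1):
        if "ERROR" in line or "FAILED" in line:
            return lineno, line
    return None  # clean run

log = [
    "starting service",
    "config loaded",
    "ERROR: connection refused",
    "retrying",
]
print(first_failure(log))  # -> (3, 'ERROR: connection refused')
```

An agent doing this scan (and the cmd+click style codebase navigation that follows it) is exactly the mechanical work that's worth offloading.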
So it really just makes it less fatiguing, especially for moving around large bodies of code and renaming stuff. I can spend more time on the essential problem.
I think having multiple of these loops running at once (or having an agent just iterating on its own) is kind of dumb tbh and I don't use them that way. I think having 100 agents running at once or whatever the fuck these people are saying is bullshit. Just using it to speed up 2 and 4 is good enough for me (and also using it to explain what the code is doing, for building my mental model).
Usually step 3 takes a long time as well, so if Claude alone is less accurate than Claude plus me, that inaccuracy gets amplified. Another reason I don't like to let it run all by itself.
I started writing and using skills for these tasks extensively: PR review, deployment, frontend design, that kind of thing. The agent follows a tested process instead of guessing, which leads to fewer surprises.
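For anyone who hasn't tried them: a skill is essentially just a markdown file of instructions the agent loads on demand. A minimal sketch (the name and checklist here are hypothetical, not from any real setup):

```markdown
---
name: pr-review
description: Review a pull request against the team checklist before approving.
---

# PR review

1. Read the diff and the linked issue first; summarize the intent in one sentence.
2. Check for missing tests on any changed public function.
3. Flag any new dependency or config change explicitly.
4. Post findings as a draft comment; never approve automatically.
```

The point is that the process is written down once and reviewed, instead of re-improvised in every conversation.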
The tiredness is real. But running every agent interaction as a fresh conversation when you could reuse a workflow makes it worse than it needs to be.
I work on a social platform and the frontier we're hitting isn't productivity, it's trust. Can an agent help two strangers build enough trust to have a real conversation? That requires progressive permission escalation, not task completion.
The interesting design constraint: the agent should never be able to reveal your identity to someone without your explicit approval. Unlike coding agents where more autonomy = more productivity, social agents need less autonomy and more consent checkpoints. Almost the opposite of what this thread is debating.
Things are too weird now. Almost makes me want to just teach music lessons again for way less money. Sigh.