Here's the thing -- I don't care about "getting stronger." I want to make things, and now I can make bigger things WAY faster because I have a mech suit.
edit: and to stretch the analogy, I don't believe much is lost "intellectually" by my use of a mech suit, as long as I observe carefully. Me doing things by hand is probably overrated.
I’ve worked with plenty of self-taught programmers over the years. Lots of smart people. But there are always blind spots in how they approach problems. Many fixate on tools and approaches without really seeing how those tools fit into a wider ecosystem. Some just have no idea how to make software reliable.
I’m sure this stuff can be learned. But there is a certain kind of deep, slow understanding you just don’t get from watching back-to-back 15 minute YouTube videos on a topic.
1. contacts - these come in the form of peers who are interested in the same things and in the form of experts in their fields of study. Talking to these people and developing relationships will help you learn faster, and teach you how to have professional collegial relationships. These people can open doors for you long after graduation.
2. facilities - ever want to play with an electron microscope or work with dangerous chemicals safely? Different schools have different facilities available for students in different fields. If you want to study nuclear physics, you might want to go to a school with a research reactor; it's not a good idea to build your own.
If you weren't even "clever enough" to write the program yourself (or, more precisely, if you never cultivated a sufficiently deep knowledge of the tools & domain you were working with), how do you expect to fix it when things go wrong? Chatbots can do a lot, but they're ultimately just bots, and they get stuck & give up in ways that professionals cannot afford to. You do still need to develop domain knowledge and "get stronger" to keep pace with your product.
Big codebases decay and become difficult to work with very easily. In the hands-off vibe-coded projects I've seen, that rate of decay was extremely accelerated. I think it will prove easy for people to get over their skis with coding agents in the long run.
That's kinda how I see vibe coding. It's extremely easy to get stuff done, but also extremely easy to write slop. Except now 10x more code is being generated, and thus 10x more slop.
Learning how to get quality, robust code is part of the learning curve of AI. It really is an emerging field, changing every day.
The key difference with LLMs is that React was written very intentionally by smart engineers who provided a wealth of documentation to help people who need to peek under the hood of their framework. If your LLM has written something you don't understand, though, chances are nobody does, and there's nowhere you can turn to.
If (as Peter Naur famously argued) programming is theory building, then an abstraction like a framework lets you borrow someone else's theory. You skip developing an understanding of the underlying code and hope that you'll either never need to touch the underlying code or that, if you do, you can internalize the required theory later, as needed. LLM-generated code has no theory; you either need to supervise it closely enough to impose your own, or treat it as disposable.
There are other fictional variants: the giant mech with the enormous support team, or Heinlein's "mobile infantry." And virtually every variation on the Heinlein trope has a scene of drop commandos doing extensive pre-drop checks on their armor.
The actual reality is that it isn't too hard for a competent engineer to pair with Claude Code, if they're willing to read the diffs. But if you try to increase the ratio of agents to humans, dealing with their current limitations quickly starts to feel like you need to be Tony Stark.
With all respect, that's nonsense.
Absolutely no one gains more than a superficial grasp of a skill just by observing.
And even with a good grasp of skills, human boredom is going to atrophy any ability you have to intervene.
It's why the SDCs (Tesla's, I think) that required the driver to stay alert to take control while the car drove itself were such a danger - after 20+ hours of not having to do anything, the very first time a normal reaction time to an emergency is required, the driver is too slow to take over.
If you think you are learning something by reviewing the LLM agent's output, try this: choose a new project in a language and framework you have never used, do your usual workflow of reviewing the LLM's PRs, and then the next day try to do a simple project in that new language and framework (that's the test of how much you learned).
Compare that result to doing a small project in a new language, and then the next day doing a different small project in that same language.
If you're at all honest with yourself, or care whether you atrophy, you'll actually run that experiment and its control, and judge the results objectively.
I'd agree, if my goal was "to be a great and complete coder."
I don't. I want just enough to build cool things.
Now, that's just me.
That being said, I'd also venture to say that your attitude here might be a tad dinosaurish. I like it too, but know that to a large extent, especially in the market, this "quality" you're striving for may just not happen.
> I don't. I want just enough to build cool things.
That's great, but that wasn't the point I was responding to. I was addressing your specific claim:
>>> I don't believe much is lost "intellectually" by my use of a mech suit, as long as I observe carefully.
I still think that's nonsense; no one learns much by observing.
Paraphrasing the old joke, you aren't going to get to Carnegie Hall by observing violinists.
Let's not mince words here, what you mean is that you don't care to learn about a craft. You just want to get to the end result, and you are using the shiny new tool that promises to take you from 0 to 100% with little to no effort.
In this way, I'd argue what you are doing is not "creating", but engaging in a new form of consumption. It used to be you relied on algorithms to present to you content that you found fun, but the problem was that algorithm required other humans to create that content for you to later consume. Now with LLMs, you remove the other humans from the loop, and you can prompt the AI directly with exactly what you wish to see in that moment, down to the fine grained details of the thing, and after enough prompts, the AI gives you something that might be what you asked for.
You are rotting your brain.
In true HN fashion of trading analogies, it’s like starting out full powered in a game and then having it all taken away after the tutorial. You get full power again at the end, but only after being challenged along the way.
This makes the mech suit attractive to newcomers and non-programmers, but only because they see the product in massively simplified terms. Because they don’t know what they don’t know.
You need to be strong to do so, at least to make things of any quality or value.
Thinking through the issue, instead of having the solution presented to you, is the part where you exercise your mental muscles. A good parallel is martial arts.
You can watch it all you want, but you'll never be skilled unless you actually do it.
Or "An [electric] bicycle for the mind." Steve Jobs/simonw