Though, I guess I do treat LLMs as a last-resort long shot for when other documentation is failing me.
I don't think current LLMs take you all the way, but a powerful code generator is a useful thing; just assemble guardrails and keep an eye on it.
You can absolutely learn from an LLM. Sometimes documentation sucks, and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate.
And with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front-end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back-end deep-dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC 2 compliant and OWASP-proof, nor does it need ISO 27001 compliance testing, because after all this is just for fun, for me.
The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT, even though back then you might have to repeatedly say "continue" because it would stop in the middle of writing a function.
Also, it's actually fun watching LLMs debug. They're reasonably similar to devs while investigating, but they have a data bank the size of the internet, so they can pull hints that sometimes surprise even experienced devs.
I think hard-earned knowledge coming from actual coding is still useful for staying sharp, but it might turn out the balance is something like 25% handmade, 75% LLM-made.
Instead you’d learn it, remember it, and it would be useful next time. But it’s not.
Some people will try to vibe out everything with LLMs, but other people will use them to engage with their coding more directly and to better understand what's happening, not worse.
I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I've googled yesterday.
Instead of manual coding, your time is better invested in learning to channel coding agents: how to test code to our satisfaction, how to know whether what the AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.
How do we automate our human in the loop vibe reactions?
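One way to make that concrete: encode the judgment you'd otherwise make by eye as an executable test anyone can re-run. A minimal sketch (the `slugify` helper and its expected behaviour are hypothetical, invented for illustration, not something from this thread):

```python
# Hypothetical AI-generated helper: a URL slug function.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# The "review" as a reproducible artefact: each assertion is a
# judgment you can re-run later, unlike a one-off visual "LGTM".
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"
    assert slugify("") == ""

test_slugify()
print("checks passed")
```

The point is not this particular function; it's that the assertions survive a model update and a six-month gap in a way a visual inspection doesn't.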
This is funny in the sense that, in a properly built urban environment, bicycles are one of the best ways to add some physical activity to a time-constrained schedule, as we're discovering.
All channelling breaks when the model is updated; being knowledgeable about the foibles of a particular model release is a waste of time.
> how to test code to our satisfaction
Sure, testing has value.
> how to know if what AI did was any good
This is what code review is for.
> Testing without manual review, because manual review is just vibes
Calling manual review vibes is utterly ridiculous. It's not vibes to point out an O(n!) structure. It's not vibes to point out missing cases.
If your code reviews are "vibes", you're bad at code review.
> If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.
To fix the analogy: you're not reviewing the motorcycle, you're reviewing the motorcycle's behaviour during the lap.
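To make the O(n!) point concrete, here's a hypothetical before/after of the kind a reviewer might flag (both functions are invented for illustration): each answers "do any two numbers sum to the target?", but one does it by enumerating every permutation of the input.

```python
import itertools

def has_pair_slow(nums, target):
    # O(n!) — walks every ordering of the list just to inspect the
    # first two elements; exactly the kind of structure a reviewer
    # can point at with no vibes involved.
    for perm in itertools.permutations(nums):
        if perm[0] + perm[1] == target:
            return True
    return False

def has_pair(nums, target):
    # O(n) — the fix: one pass, remembering values seen so far.
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

# Same answers, wildly different cost as the list grows.
assert has_pair_slow([3, 9, 1, 4], 13) and has_pair([3, 9, 1, 4], 13)
assert not has_pair_slow([3, 9, 1, 4], 2) and not has_pair([3, 9, 1, 4], 2)
```

Spotting the factorial version is a concrete, repeatable review observation; whether it matters for a four-element list is where judgment (and, yes, some vibes) comes back in.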
My point is that visual inspection of code is just "vibe testing", and you can't reproduce it. Even you yourself, 6 months later, can't fully repeat the vibe check "LGTM" signal. That is why the proper form is a code test.
Yes, I reckon coding is dead.
No, that doesn't mean there's nothing to learn.
People like to make comparisons to calculators rendering mental arithmetic obsolete, so here's an anecdote. In my first year of university, I went to a local store and picked up three items, each costing less than £1; the cashier rang up a total of more than £3. (I'd calculated the exact total and pre-prepared the change before reaching the head of the queue, but the exact price of 3 items isn't important enough to remember 20+ years later.) The till itself was undoubtedly perfectly executing whatever maths it had been given; I assume the cashier mistyped or double-scanned. As I said, I had the exact total, and the fact that I had to explain to the cashier that three items costing less than £1 each cannot add up to more than £3 shows that even this trivial level of mental arithmetic is not universal.
I now code with LLMs. They are so much faster than doing it by hand. But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). I've experimented with that to see what the result is, and the result is technical debt building up. I know what to do about that because of my experience with it in the past, and I can guide the LLM through that process, but if I didn't have that experience, the LLM would pile up more and more technical debt and grind the metaphorical motorbike's metaphorical wheels into the metaphorical mud.
Code review done visually is "just vibe testing" in my book. It is not something you can reproduce; it depends on the context in your head at this moment. So we need actual code tests. Relying on "Looks Good To Me" is hand-waving, code-smell-level testing.
We are discussing vibe coding, but the problem is actually vibe testing. You don't even need to be in the AI age to vibe test; it's how we always did it when manually reviewing code. And in this age it means "walking your motorcycle" speed, so we need to automate this with more extensive code tests.
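One way to automate a "vibe reaction" is to turn the property you would have checked by eye into a randomized round-trip test. A minimal sketch (the `encode`/`decode` pair is a hypothetical stand-in for whatever the AI wrote):

```python
import random

def encode(data: bytes) -> str:
    # Stand-in for an AI-written serializer.
    return data.hex()

def decode(text: str) -> bytes:
    # Stand-in for its inverse.
    return bytes.fromhex(text)

# The property that replaces "looks good to me": decoding must
# invert encoding for arbitrary payloads, every run, reproducibly.
random.seed(0)
for _ in range(1000):
    payload = bytes(random.randrange(256) for _ in range(random.randrange(64)))
    assert decode(encode(payload)) == payload

print("1000 round-trips held")
```

Unlike a visual LGTM, the fixed seed makes the exact same check repeatable six months later, by you or by anyone else.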