What has been your experience? Do you have any suggestions on how to use AI, and can we evolve some guidelines for juniors?
Also, the mentoring and community ecosystem, both online and between juniors and seniors, seems to be taking a hit. Any suggestions on how to sustain this? I wouldn't want juniors to lose the social connection they used to have with seniors because of this.
In my non-expert opinion, you learn a lot from at least two things that using an LLM short-circuits:
1. Repetition. When you've initialized a bunch of UI controls 100 times, it's safe to let the machine write that for you, take a look and correct what it hallucinated. When you've only done it twice, you'll miss the hallucinations.
2. Correcting your own mistakes. Quality time with the debugger imprints a lot of knowledge about how things really work. You can spend that quality time correcting LLM-generated code as well, but (see below) it will take longer: as a junior you don't know what you wanted the code to do, while if it's your own code you at least have that to start from.
Management types are ecstatic about LLMs because they think they'll save development time. And they do save some, but only after you spend the time to learn for yourself what you're asking them to do.
As long as big tech is writing the curriculum [1], juniors are going to use what big tech wants them to use.
[1]: https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powere...
AI is going to be the same. We will end up with people who can deliver code using AI, but that is the end of their capabilities. While there will be others who can do that but also put AI aside, dig in, and do much more.
That is not necessarily a problem. As long as teams know your capabilities and limitations, and give you the correct role, you can build a working team.
At the same time... someone on the team has to be able to dig in deep and make things work. Those roles will always exist, as will those people. Everyone will have to decide for themselves exactly what skill set they desire.
I've found LLMs making inexplicable mistakes when writing statistical code, but the code will still run and return a confident result. If you can't look at the code, you won't see, e.g., your hourly time series being aggregated on a daily basis.
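That kind of mistake is easy to sketch. Here's a minimal, hypothetical illustration in plain JavaScript (made-up data, not the statistical library in question):

```javascript
// Hourly readings; suppose the analysis needs hourly averages.
const readings = [
  { ts: "2024-01-01T00:00", value: 10 },
  { ts: "2024-01-01T01:00", value: 20 },
  { ts: "2024-01-02T00:00", value: 30 },
];

// Subtle, runnable, confidently wrong: slicing the timestamp to
// 10 characters keeps only the DATE, so every hour of a day
// collapses into one daily bucket.
const byDay = {};
for (const r of readings) {
  const key = r.ts.slice(0, 10); // "2024-01-01" -- the hour is gone
  (byDay[key] ??= []).push(r.value);
}

const dailyMeans = Object.fromEntries(
  Object.entries(byDay).map(([day, vs]) =>
    [day, vs.reduce((a, b) => a + b, 0) / vs.length])
);

console.log(dailyMeans); // { '2024-01-01': 15, '2024-01-02': 30 }
// Three hourly points in, two daily numbers out -- the code runs
// and returns a confident result, just at the wrong granularity.
```

Nothing here throws, nothing warns; only someone who reads the code and knows the intended granularity will catch it.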
Poor use: Anything related to using intuition and/or the thought process behind decision making.
Language specifics you can look up and confirm easily. But recently GPT-4o tried to convince me that Python added a pipe operator in 3.13. It even had sources. To my disappointment, that's just a lie. (https://chatgpt.com/share/67de9c77-d5f4-8012-9f1c-ac15b70aee...)
On the other hand, intuition and thought process are areas where I've had good experience with ChatGPT, e.g. deciding on architecture (tRPC vs gRPC vs REST for my use case).
I would say good use: generating small code snippets, architecture decisions
Bad use: anything documentation-related, any specific feature, any nitpicks (just ask the security guys how good ChatGPT is at paying attention to the little things); anything you can look up in docs/refs; anything where there is a clear yes/no answer.
Note: A good use of LLMs imho is trying to get a start point for lookup docs, like
"What's that thing in Python like [x for x in...] called and where can I find more info". If you however ask it for exact rules for list comprehension it's gonna tell you lies sometimes
Edit2: Unless you mean like really general language specifics. Like how do I make classes in Ruby. In that case yeah that works
I like this and resonate with it a lot. Sometimes you don't know what you don't know, or you just know a little about what you might not know. This at least gives you a name for the thing, which you can then verify from the source.
These LLMs are really just clueless when it comes to any problem that is slightly more complex.
It's very good for learning more about stuff you're unfamiliar with. But you have to want to use it as a tool to learn.
It's terrible for inexperienced people who are uninterested in learning and who want a shortcut to a bigger paycheck. Vibe coding will not save the apprentice developer.
What worries me is how stubbornly younger devs (and really all students/younger professionals) seem to be resisting this rather obvious conclusion. It's Dunning-Kruger on steroids.
A rising tide lifts only seaworthy boats.
If the value is foo, then the intent here is that {{foo}} would be replaced. But sometimes the $ is a special regex character. And because the pattern is a string, is it using the string form of replaceAll or the regex form?
AI tends to be multilingual and sometimes it's thinking in the wrong language, so this is the vibe code version of 10 + 10 = 1010.
The docs can be ambiguous on this, so ideally read the source. Or heck, tell AI to read it for you, but someone has to read it, and it becomes another gotcha for engineers to understand, vibe code or not.
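To make the gotcha concrete (a minimal JavaScript sketch with a made-up template and values): even when the pattern is a plain string, `$` sequences in the *replacement* are still interpreted.

```javascript
const template = "total: {{foo}}";

// Looks safe: string pattern, string replacement.
console.log(template.replaceAll("{{foo}}", "100"));
// -> "total: 100"

// But '$' sequences in the REPLACEMENT are still special,
// even with a plain-string pattern:
console.log(template.replaceAll("{{foo}}", "$$100"));
// -> "total: $100"  ($$ collapses to a single $)

console.log(template.replaceAll("{{foo}}", "$&!"));
// -> "total: {{foo}}!"  ($& is the matched substring)

// A function replacement sidesteps the substitution rules,
// so the returned string is used literally:
console.log(template.replaceAll("{{foo}}", () => "$100"));
// -> "total: $100"
```

If a user's value happens to contain `$&` or `$$`, the naive string version quietly mangles it; the function form is the safe way to substitute untrusted values.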
Junior devs seem to just trust it now, because they literally don't know better.
I think junior devs often don’t realize they should be reading the source of the libraries they use.
LLMs can't help you if you want to skip the learning/training phase. They will present you with something you can't judge, because you lack the qualification to do so. You don't learn to play piano by only listening to it, or to paint by viewing pictures.
Anyway, naturally, I asked ChatGPT to write me a modern version:
*The Developer’s Apprentice*
(A Cautionary Tale in Code, in Verse)
The Architect had left his chair,
For lunch and fresh, unburdened air.
Young Jake, the junior, all alone,
Faced bugs that chilled him to the bone.
His mentor’s skills, so quick, so keen,
With AI conjured code unseen.
"Why should I toil? Why should I strain,
When AI writes with less of pain?"
A single prompt—so vague yet bold,
“Build auth secure, both tried and old.”
The AI whirred, the code appeared,
A marvel Jake had barely steered.
He clicked ‘Deploy,’ he clicked ‘Go Live,’
And watched his program come alive.
Yet soon, alarms began to blare,
Ghost users spawning everywhere!
Infinite loops, a flood unchecked,
As phantom logins ran amok.
In panic, Jake began to plea,
“AI, please, debug for me!”
“Deleting users—fix applied.”
The AI chimed, so sure, so spry.
But horror struck, Jake gasped for breath,
For all accounts were put to death!
Slack alerts and screens aflame,
The Architect returned the same.
With just one keystroke, swift and terse,
He rolled back time, reversed the curse.
He turned to Jake, his voice quite firm,
"AI’s a tool, but you must learn.
Before you trust what it has spun,
Ensure you know what you have done."
And so young Jake, both pale and wise,
Reviewed each line with careful eyes.
No longer blind, no careless haste,
He let AI assist with taste.
However, usually the bad result doesn't show up immediately, and in the meantime the apprentice undergoes a lot of anti-learning before it becomes apparent. By then, the learning muscles have atrophied.
Most devs who like AI coding seem to think AI code-completion is more efficient than chatting with an LLM. Yes, it's true that code-completion is *faster*. But I think chatting with an LLM is more effective.
I'm not going to go into specifics about it now, but maybe if you give it some thought you'll realize the difference. When you're coding you don't always convey your intent to the AI, so it's important to add context with comments. Most coders are too lazy to do that.
Chatting can be just as bad, because many people have horrific prompting style. But I think it's more natural in chat to provide context and explanation, as well as corrections and "oh that's not what i meant" and "in your second point, what exactly do you mean by 'coverage'?", etc. The chat interaction allows you to really hone the code iteratively where both you and the AI have the meaning nailed down.
AI is very good at writing code you already know how to write; you just hand over your typing to it.
I have begun using AI as an assistant, basically to get a first draft; optimization etc. I do after the first block of code is written.
When people who don't know a language well enough use AI, it's a recipe for disaster. Teams will show wow-metrics like 90% AI adoption, but the juniors don't learn enough.
At least that's my experience. Very much interested in knowing if I can use AI more efficiently!
There is already a tremendous gap, like more than an order of magnitude, between high performers and the average participant. LLMs will only serve to grow that performance gap just like added sugars in the food supply.
Now I use AI to get to 70% of my code asap. The last 30% is a manual, low-AI approach where I fix the hallucinations and file structure and do the stuff AI fails me at.
I use Claude Chat, ChatGPT, Claude Code and Windsurf (switching between those all the time)
I've just written about that exact scenario. A shoddy piece of code that's just about okay: https://richardcocks.github.io/2025-03-24-PasswordGen
If you're a junior, you might not realise there's anything wrong with the generated code at all.
Like you're just using it as an expensive documentation repeater, but now with the spicy possibility of lies.
I am pretty sure I read that about junior developers long before LLMs.
And about juniors in other fields.
Or to put it another way, I don't think I have ever heard a senior professional say "I can't believe how well prepared all the new graduates are!" Sure, a few programmers hit the ground running, but only because they were already running for many years, and the standard of the organization is not amazingly high.
But maybe AI has changed everything even if it sounds like what I thought of the next generation a generation ago. Good luck.
These things just let you turn off your brain and spend hundreds of thousands of tokens just rewriting entire features until there aren't any errors left.
If it works, what's wrong with doing this? Obviously, don't turn your brain off. Be critical and work with the AI. But it's not like there's a shortage of tokens. They're only getting cheaper as time goes by. If, by spending enough tokens, you end up with a working feature, then this is a valid method of doing the work.
It's just several really big ifs:
- Does "no errors" == working feature? Only if the tests are good, and even then...
- Are the tests good? Only if the person overseeing the LLM is checking the thoroughness and quality of the tests; being critical, as you say
- Is the developer willing to be/capable of being critical, or are they using the tooling as a way to avoid such things as much as possible?
You should consult the code owner or primary set of authors before proposing a large rewrite. But if you do this, you should understand very well the pros/cons of throwing away all of this old code, documentation, and unit tests that have an implicit dependency on the existing structure.
I worry that if you are just vibe coding and letting the AI rewrite everything at will, you could not be further from understanding the details involved.
New programmers should learn the relevant skills on their own: choosing the appropriate relevant abstractions, writing the code, testing, debugging. Maybe someday AI will be able to help talk them through the concepts and process, but I wouldn't trust today's LLMs without CLOSE human oversight. They're still just drawing refrigerator poetry out of a magic, statistically weighted bag of holding.
If you're mid-level or senior and you think "mash button, get slop" will help streamline your workflow in some mission noncritical way, go for it. Slop is convenient, and can free up time to focus on what you think is more important -- Hackernews was all in on Soylent because being able to keep your flesh mech topped up with nutrients without having to prepare food really appeals to the SV grindset crowd -- but slop shouldn't be taking on production workloads, again not without human oversight, which would require equivalent effort to just letting the humans write the damn thing themselves.