The person you're replying to is a senior candidate, not a junior one.
For junior devs who are still learning, LLMs are a great force multiplier that help them understand code faster and integrate new things.
For senior devs, LLMs are a maybe-optional tool that might save a couple of hours per week, on a good week. I would consider extremely heavy LLM use a much larger red flag for a senior-level position than not using them at all.
As an experienced engineer, I know how to describe what I want, which is 90% of getting the right implementation.
Secondly, because I know what I want and how it should work, I tend to know it when I see it. Often it only takes a nudge to get to a solution similar to what I already would have done. Usually it is just a quick comment like: "Do it in a functional style" or "This needs to have double-checked locking around {something}."
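To illustrate the kind of nudge that comment implies, here is a minimal sketch of double-checked locking in Python, using a hypothetical lazily initialized `Config` singleton (the class name and structure are made up for the example):

```python
import threading

class Config:
    """Lazily initialized shared resource using double-checked locking."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get(cls):
        # First check without the lock: the fast path once initialized.
        if cls._instance is None:
            with cls._lock:
                # Second check inside the lock: another thread may have
                # initialized the instance while we were waiting.
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance
```

The point of the pattern is that the common case (already initialized) skips lock acquisition entirely, while the re-check inside the lock prevents two threads from both constructing the instance.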
When I am working in the edge of my knowledge I can also lean on the model, but I know when I need to validate approaches that I am not sure satisfy my constraints.
A junior engineer doesn't know what they need most of the time and they usually don't understand which are the important constraints to communicate to the model.
I use an LLM to generate probably 50-60% of my code. Certainly it isn't ALWAYS strictly faster, but sometimes it is way, way faster. Another advantage is that it requires less detailed thinking at the inception phase, which lets me fire off something to build a class or make a change while I'm in a context where I can't devote 100% of my attention to it, and then review all the code later, still saving a bunch of time.
See here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566
Worse/less experienced developers see a much greater increase in output, and better and more experienced developers see much less improvement. AI is great at generating junior-level work en masse, but its output generally is not up to the quality and functionality standards of a more senior level. This is both what I've personally observed and what my peers have said as well.
Out of curiosity, which LLM code tool do you use?
Somewhat related, I have a good idea of what I can and cannot ask ChatGPT for, i.e. when it will and won't help. That is partially usage-related and partially dev-experience-related. I usually ask it not to generate full examples, only minimal snippets, which helps quite a bit.
For the second use case, I can easily see how effectively prompting a model can boost productivity. A few months ago, I had to work on implementing a Docker registry client and I had no idea where to begin, but prompting a model and then reviewing its code, and asking for corrections (such as missing pagination or parameters) allowed me to get said task done in an hour.
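The pagination piece of that registry client is a good example of what a model can scaffold quickly. Here is a rough sketch of walking the Docker Registry V2 `/v2/_catalog` endpoint, which paginates via a `Link` header with a `last` query parameter; the page-fetching function is injected so the pagination logic itself stays testable, and `fetch_page` is a hypothetical callback, not part of any library:

```python
import re

def next_last(link_header):
    """Extract the 'last' query parameter from a registry Link header,
    e.g. '</v2/_catalog?last=foo&n=100>; rel="next"'.
    Returns None when there is no next page."""
    if not link_header:
        return None
    m = re.search(r'[?&]last=([^&>]+)', link_header)
    return m.group(1) if m else None

def list_repositories(fetch_page, page_size=100):
    """Walk the paginated catalog. fetch_page(last, n) should perform
    one HTTP request and return (repositories, link_header)."""
    repos, last = [], None
    while True:
        page, link = fetch_page(last, page_size)
        repos.extend(page)
        last = next_last(link)
        if last is None:
            return repos
```

In a real client, `fetch_page` would issue `GET /v2/_catalog?n=<page_size>&last=<last>` with authentication; the sketch deliberately leaves that out.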
For example, I needed to create a starting point for 4 LangChain tools that would use different prompts. They are initially similar, but I'll be diverging them. I would do something like copy the file of one, select all, then use the inline chat to ask o1 to rename the file, rip out some stuff, and make sure the naming was internally consistent. Then I might attach an additional output schema file, and maybe something else I want it to integrate with, and tell it to go to town. About 90% of the work is done right; then I just have to touch up. (This specific use case is not typical, but it is an example where it saved me time: I had them scaffolded out and functional while listening to a keynote and in between meetings, then later I validated them. There were a handful of misses that I needed to clean up.)
This is mostly because if I don't know that I'm asking for the wrong thing, the LLM won't correct me; it will provide code that answers the wrong question, and make things up to do that if needed.
Sure, I learn by debugging the LLM's nonsensical code too, and it solves my "don't want to watch a 2-hour tutorial, because if I just watch the 10 minutes that explain what I want to learn, I don't understand any of the context" problem. But it's not much faster with the LLM, since I need to google things anyway to check whether it is gaslighting me.
It does help with understanding errors I'm unfamiliar with, and the most value I've found is pasting in my own code and asking it to explain what the code should do, so I can find errors in my logic when it compiles but doesn't have the desired effect. And it will mention concepts I'm lacking so I can look them up (it won't explain them clearly, but at least it's flagging them to me) in a way YouTubers rarely do.
I still haven't made up my mind whether it is a net positive, as it often gets on my nerves to wait 10 minutes for a fluff intro before it gets to the answer. Better than a 20-minute fluff video intro on YouTube, maybe?
For unit tests, it's a godsend, particularly when you write one unit test yourself and it can then write others in the style you wrote.
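That pattern might look something like the sketch below, where the first test is hand-written and the others follow its shape (the `slugify` function and all test names here are hypothetical, just to show the idea):

```python
def slugify(title):
    """Hypothetical function under test: lowercase, spaces become hyphens."""
    return "-".join(title.lower().split())

# The hand-written test establishes the style and assertions...
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

# ...and the model can generate further cases that mirror it.
def test_slugify_extra_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"

def test_slugify_single_word():
    assert slugify("Hello") == "hello"
```

Because the model has a concrete exemplar, it tends to reuse your naming convention, fixture setup, and assertion style rather than inventing its own.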
I don't know where you work where code is written once and is never changed again, but enjoy it while it lasts...
E.g. suddenly some fresh out of college know-it-all sent crap into your function that you weren't expecting. Then he went to management to blame you for writing such shitty code.
Thing is, you wrote unit tests around that code, and the shitty know-it-all deleted them rather than changing them when he modified the code.
This is why management needs to understand code.
Of course, there are overzealous managers and their brown-nosing underlings who will say that the LLM can do everything from writing the code itself to the unit tests, end to end, but that is usually because they see more value in toeing the line and following the narratives being pushed from the C-level.
This is a hot take that I'm 100% not on board with.
Meanwhile, a senior with an LLM is a straight-up superpower!
I'm an industrial engineer who writes software, and admittedly not a "senior dev", I guess, but LLMs help me save much more than just a few hours a week when crapping out a bunch of Qt/Python code that would cause my eyes to glaze over if I had to plod through it.
Someone with experience can first think through the problem. Maybe use ChatGPT for some research and to refresh your memory first.
Then you can break up the problem and let ChatGPT implement the pieces instead of typing everything yourself. Since you are smart and experienced, you know which chunks of code it can write (basically nothing new, only stuff you could have copy-pasted before if you somehow had access to all the code on the internet yourself).
TLDR: It is way faster to use it. Especially for experienced programmers. Everything else is just ignorant.