Part of the problem right now is the taboo around using AI coding tools in undergrad CS programs. And I don't know the answer. But someone will find the right way to teach new/better ways of working with and without generative AI. It may just become second nature to everyone.
I just want devs who actually read my PR comments instead of feeding them straight into an LLM and resubmitting the PR.
Pretty sure it's a self-destructive move for a CS or software engineering student to pass foundational courses like discrete math, intro to programming, and algorithms & data structures using an LLM. You can't learn how to write if all you do is read. The LLM will one-shot the homework, and the student just passively reads the code.
On more difficult and open-ended coursework, LLMs seem to work pretty well at assisting students. For example, in the OS course I teach, I usually give students a semester-long project on writing an x86 32-bit kernel from scratch with simple preemptive multitasking. An LLM definitely makes difficult things much more approachable; students can ask it "dumb basic questions" (what is a pointer? an interrupt? a page fault?) without fear of judgement.
But due to the novelty & open-ended nature of the requirements ("toy" file system, no DMA, etc.), playing slot machine with the LLM just won't cut it. Students need to actually understand what they're trying to achieve, and at that point they can just write the code themselves.
Kind of like that meme about how two AIs talking to each other spontaneously develop their own encoding for communication. The human trappings become extra baggage.
But if they're not hired...?
That's not true anymore in the smartphone/tablet era.
5-10 years ago my wife had a gig working with college kids, and back then they were already unable to forward e-mails and didn't really understand the concept of "files" on a computer. They just sent screenshots, and sometimes they just lost (like, almost literally) a document they had been working on because they couldn't figure out how to open it back up. I can't imagine it has improved.