The picture of software development also looks completely different. Code that used to be readable in a few lines becomes 100 lines—overblown because, well, code is cheap. Now, I could argue that it makes things unreadable and so on, but honestly, who cares? Right? The AI can fix it if it breaks...
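To make the "few lines becomes 100 lines" complaint concrete, here's a toy illustration of my own (not from any real codebase): the same trivial task written tersely and then in the inflated, comment-everything style LLMs often default to.

```python
# A few readable lines:
def evens(nums):
    return [n for n in nums if n % 2 == 0]

# The inflated version an LLM might emit (toy illustration):
def get_even_numbers_from_list(input_numbers):
    """Retrieve all even numbers from the provided list of numbers."""
    # Initialize an empty list to store the results
    result_list = []
    # Iterate over every number in the input
    for current_number in input_numbers:
        # Check whether the current number is even
        if current_number % 2 == 0:
            # Append the even number to the result list
            result_list.append(current_number)
    # Return the accumulated list of even numbers
    return result_list
```

Both behave identically; the second just costs five times the reading.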
So what do you guys think? Is this the future? Maybe the skill to focus on is orchestrating AI, and if you don’t do that, you become a legacy developer—someone with COBOL-like skills—still needed, but from the past millennium.
Juniors don't have that skillset yet, but they're being pushed to use AI because their peers are using it. Where do you draw the line?
What will happen when the current senior developers start retiring? What will happen when a new technology shows up that LLMs don't have human-written code to train on? Will pure LLM reasoning and generated agent skills be enough to bridge the gap?
These are all very interesting questions about the future of the development process.
1. AI gets good enough, fast enough, that by the time the senior people are retiring, it won't matter anyway
2. Software becomes mostly unreadable and nobody really understands how it works, but the AI is good enough that this is ok
Both are hard for me to imagine right now, but if you'd asked me five years ago whether AI would ever be good enough to commit to my codebase, I would have said, "I really doubt it". Yet here we are: AI code is sometimes better than handwritten code (depending on the person, of course).
Would love to hear others' thoughts on these as well.
AI systems look at code on the internet that was written by humans. This is smart, clean code. And they learn from it. What they produce — unreadable spaghetti code — is the maximum they can squeeze out of the best code written by humans.
In the near future, AI-generated code will flood the internet, and AI will start training on its own code. On the other hand, juniors will forget how to write good code.
And when these two factors come together in the near future, I honestly don’t know what will happen to the industry.
We need to remember that the core of what “logic” is can be understood by every human mind, and that it’s our individual responsibility to endeavor to build this understanding, not delegate or hand-wave it. For all of human history, delegating/hand-waving away basic logic that can be understood by actuarial/engineering types has never gone well in the long term.
Even at the presidential level, today:
>RFK Jr claims basic math rules don’t apply to White House https://www.independent.co.uk/tv/news/rfk-jr-math-percentage...
I agree that AI-generated code will really start to piss in the pool, so to speak. I'm not sure the models will get better without a lot of hand curation and signals of what is good vs. bad vs. popular code. Those are emphatically not the same thing.
We had a couple of decades of brilliant engineers working for FAANG. What did we get as a result? Just crap: Twitter, Instagram, YouTube, Facebook. Imagine all those brilliant minds working on something meaningful instead.
Same goes for LLMs
I've found experienced developers leverage AI as a force multiplier because they can scrutinize the output, unlike juniors who often just paste and move on. The real skill is becoming an AI orchestrator, prompting effectively, and critically validating the output. Otherwise, if you're just a wrapper for AI, then yes, you become the "legacy developer" you mention because you're adding no critical thinking or value.
I can't imagine the people using many agents in parallel are actually even checking the fitness of the output they are generating, let alone the design, structure and quality of the code itself.
There's always been the need to verify that the code matches the business requirement, right? It used to be that when you asked someone why they wrote the code the way they did, they'd tell you they thought it was the right way because of X or Y. But with AI, they can respond that they actually don't know why they wrote it a certain way. That's just what ChatGPT or Claude told them to do. So that's the nightmare part people are experiencing.
Code reviews are important and software architecture skills are just as important now.
How do you get it to not add so much unnecessarily defensive code?
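For anyone who hasn't hit this, here's the pattern I assume the parent means (my own toy sketch): layers of checks that silently swallow caller bugs instead of letting them surface.

```python
# Plain version: callers pass a well-formed dict; a missing key is a caller bug
# and should fail loudly.
def discount(order):
    return order["price"] * order["rate"]

# The unnecessarily defensive version (illustrative): every failure mode is
# caught and converted into a silent 0.0, hiding bugs from the caller.
def calculate_discount(order):
    if order is None:
        return 0.0
    if not isinstance(order, dict):
        return 0.0
    price = order.get("price")
    rate = order.get("rate")
    if price is None or rate is None:
        return 0.0
    try:
        return float(price) * float(rate)
    except (TypeError, ValueError):
        return 0.0
```

In my experience you mostly have to say so explicitly in the prompt or project instructions ("assume valid inputs, fail fast, no silent fallbacks"), and reject it in review when it shows up anyway.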
Everyone seriously doing it has a bunch of agents in a corporate-like structure doing code reviews. The bad AI code comes from someone using a single instance of Claude or ChatGPT; when you have 50 agents competing to write the best code from a single prompt, it hits differently.
Nowadays, everybody is doing everything with AI, young and old alike. It's very hard to justify not doing it. That being said, you can produce good code with AI, if you know what it should look like and spend the time to prompt and iterate.
PS: Compare Assembly with Python: the ratio is surely more than 10x. Still, we need far more devs than in the early days. For me, the question is what future software development looks like (if the job still exists).
Yes. The future is quickly produced slop. Future LLMs will train on it too, getting even sloppier. And "fresh out of uni" juniors and "outsourced my work to AI" seniors won't know any better.
P.S. I'm reminded of the short story "My Father's Singularity" (https://clarkesworldmagazine.com/cooper_06_10/) regarding the gradual way change accumulates until a significant gap has grown.
Is this because the guys claiming success are working in popular, well-known, more limited areas like JavaScript in web pages, while the people outside those, with more complex systems, don't get the same results?
I also note that most of the "Don't code any more" guys have AI tools of their own to promote...
I'm not sure what the difference is between your situation and mine. Many vibers try to say you're not doing it right, but I think people forget what an incredible diversity of use-cases are out there, and how many are bound to fall outside the norm. The "blame the user" attitude is irksome IMO.
I do a lot of greenfield coding. AI is easier to use in that situation - code bases are smaller and you can orient the project from the ground up. So that could be a difference.
When I take over an existing project, I have a bootstrap phase to get it working well with AI. I have the AI write a lot of documentation, and at the end of every coding session I have it update the documentation based on the code diff. When using AI to code, documentation is code to the AI, so it's important to keep it up to date, but thankfully AI can take care of that.
In the case of a bug that takes repeated attempts to get right like the sizing bug you describe, I stop and assess - ask the AI to describe the algorithm, point out that it's error prone and ask why, ask it to come up with ideas for simplifying the code. AI is amazing at insights, not so much at decision making, so I try to have it summarize and feed me information and direct it from there.
Also, feedback loops are incredibly important: the ability for the AI to self-test its work is paramount. Without that, it just stabs at hard problems randomly, hoping the fix works. Nearly every difficult "AI can't do this" problem I've worked through has involved setting up a mechanism for it to self-test.
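As a concrete sketch of what I mean by a self-test mechanism (the wiring and names here are my own, not any particular tool's API): give the agent a single command that returns an unambiguous pass/fail after every change, rather than letting it eyeball its own output.

```python
import subprocess

def self_test(cmd):
    """Run the project's test command and return (passed, combined output).

    An agent loop can call this after each edit, feed the output back into
    the next prompt on failure, and only move on once it passes.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr
```

The specifics don't matter much; what matters is that "did it work?" is answered by the exit code of a real test run, not by the model's own optimism.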
Sometimes I do have to crack open the code to look at it, but that's increasingly rare. I can't remember the last time, TBH.
I gave up on VSC and Copilot long ago, FWIW. Claude Code on the command line in a sandbox, and let it rip. I don't think it's possible to get the same level of automation in the VSC environment.
Dunno if this is helpful at all, and I'm not pushing you to use AI. I'll just say that my experience with it has been amazing and it's an incredible time-saver. It keeps me focused on product-level issues rather than micro-managing code issues.
(And, to be clear, I have no AI tools that I try to promote. Just a contractor doing work for other people.)
At the same time, I see the future being brighter with the help of these coding LLMs. I personally was not building software for years, focusing on management-like work. Serious coding during "free time" was just too heavy a lift: you need time to sleep, eat, and do some IRL things too...
Now, having experience building software and caring about what I create and why, I can do this far more quickly with LLMs, and it opens possibilities I could only dream of before. What used to mean getting a few spare millions and hiring a team to build something now means paying $20 to Cursor/Claude and spending a few days guiding it as if it were a team of junior outsourced devs. It's painful sometimes, but if you really know what you're doing and why, it works. And no one stops you from tweaking pixels once the majority of the work is done; you'll even have the will to, as opposed to writing it all yourself and spending all your mental energy on routine stuff.
So... if people learn to use this hammer properly, I suppose the future might be brighter than the past. And those who actually care but never had the time can now build the things they're passionate about on their own.
Nope, because this is all I do, and the AI doesn't do it right either.