This is a timeless post. We'll see it today. We'll see it in five years. We'll see it in ten years.
My oldest son doesn't even use tutorials; he uses LLMs. Only time will tell if his way is worse than mine. And right now, I don't think it really matters _how_ he learns to write software. It matters more that he doesn't stop doing it.
I never liked learning from books. I preferred to play with code myself. In the long term I think this had a negative effect: I didn't learn everything there was to learn, and instead reused the same common solutions over and over.
I like writing with LLMs, as they sometimes show a pattern I never could have thought of. This also teaches me new ways to solve a problem or write code.
For us people 40+, even "Stack Overflow" was the easy/lazy way to get knowledge. There was something called experts-exchange.com at some point in the 2000s (though it became pay-walled eventually). But generally, downloading PDFs from eMule or going to the library was THE way to learn.
Fortunately, nowadays we have LLMs and tools that are way better. No regrets, and I am so happy to live in this era.
[1] Beginning Linux Programming (Programmer to Programmer), 2nd edition, by Richard Stones and Neil Matthew (2000), paperback.
AI coding is even more impactful considering that most coding-oriented AIs will explain what the generated code does. My offline combo of Ollama, Qwen Coder 14b, and the Continue.dev VS Code extension always explains what it did at the end of each chat message in understandable English. And if I'm still confused, I can literally type in "I don't understand these changes, could you walk me through them?" and it will walk me through the code changes with the whole codebase as context/RAG material. All running on-device with no token limits or subscription fees (it runs really slow, but still $0), limited only by the computer hardware itself.
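For anyone curious what talking to a local model actually involves: a request to a local Ollama server is just JSON over HTTP. Here's a rough sketch (the endpoint and field names follow Ollama's documented /api/generate API; the model tag is an example of a locally pulled Qwen coder model, so substitute whatever you've pulled):

```python
import json

def build_ollama_request(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,      # e.g. a locally pulled Qwen coder model
        "prompt": prompt,
        "stream": False,     # one complete response instead of chunks
    }
    return json.dumps(payload)

# To actually send it against a local server (not executed here):
#   curl http://localhost:11434/api/generate -d "$(...)"
body = build_ollama_request("qwen2.5-coder:14b",
                            "Explain what this code change does.")
```

This is roughly the request shape that extensions like Continue.dev send under the hood; everything stays on localhost, which is why there are no per-token fees.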
In fact, I credit my AI stack with removing a huge coding "writer's block" I've had since recovering from a mental health crisis right before the start of the COVID lockdown, and has made me fall in love with building software all over again.
And it's only going to get even better the more open/shared source on-device stuff gets released. Forget multimodal models, these specialized tools are where the real magic is happening!
I'm guessing it's only a matter of time before we see new programming languages invented specifically for the LLM era. So the same old process continues as ever before: you need to understand things in a fundamental way, or you won't have a clue what's going on.
You could say you still need to have done a fair amount of code work without LLMs to work through difficult-to-find-and-fix bugs.
This is a mindset, and I don't think AI code is changing the number of people with this mindset. What it may be doing though, is letting them get away with it for longer.
It is SIMILAR - it is not the same. There's a minimal element of interaction by virtue of the fact that SO code is usually not completely bespoke for the developer's requirements. They'll need to do things like change variable names, re-arrange some parts of it, etc.
The junior dev using an integrated LLM (like with Cursor) has to do NONE of this. They simply hit the Tab key to accept and move on with their day.
There's a far larger danger of induced passivity.
1. Constructing an algorithm yourself from first principles, then implementing it. Let's call this "architect level"
2. Reading someone else's description of an algorithm (from a textbook, a blog post, etc.) and implementing it yourself. "Senior dev level"
3. Starting with an existing implementation, treating certain parts of that implementation as a blackbox, but adapting other parts. (e.g. a StackOverflow solution doesn't have a cache, but you need one for performance and add one yourself) "Junior dev level"
4. Copying/pasting with minimal modification. (e.g. ChatGPT gives you a solution that doesn't have a cache. You reprompt it, asking it to add a caching strategy. Your contribution is correcting some names or parameter order and adding some glue code. The main insight you gain here is how to drive ChatGPT, not how the code itself functions.)
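To make rung 3 concrete, here's a minimal Python sketch (the function names and the "expensive" operation are invented for illustration): you treat the found implementation as a black box and bolt the cache onto it yourself.

```python
from functools import lru_cache

# The "existing implementation" you found, treated as a black box
# (stand-in for an expensive computation or network call):
def slow_lookup(key: str) -> str:
    return key.upper() * 2

# The rung-3 adaptation: you don't rewrite the internals, you just
# wrap them in a cache because your use case needs the performance.
@lru_cache(maxsize=128)
def cached_lookup(key: str) -> str:
    return slow_lookup(key)
```

Repeated calls with the same key now hit the cache instead of re-running the black box, and crucially, you had to understand the call boundary well enough to know the wrap was safe.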
Can today's new devs climb from rung 4 to rung 3? If the answer is yes, then maybe nothing has fundamentally changed. If it's a no, then we may be in for trouble 10 to 15 years down the road.
Almost always, the instant answers are toolchain-specific, like why my C# DLL isn’t using relative paths in the CSProj file (answer: because different VS versions process DLLs differently).
Because of where the HN community works and hires, things look a bit different from here; in the real world, senior programmers (people who were hired in that role and have made money at it for a decade or more, not whatever your feeling of what "senior" should mean is) are not very different either. Very many don't know what they are doing either; they just deliver by trial and error and got their years and stripes in while still barely understanding their work. LLMs have now made this easier for them too, and it's why I, unlike many people on HN, am bearish on programmer jobs: by far most programmers outside the HN bubble are, and always were, terrible, and can be readily replaced by LLMs now or soon. The ones who do understand what they are doing and can architect, write, and read complex software won't be replaced by the current or next generation. But when we read that companies are going to lay off programmers in favour of LLMs, they mean the people I have to work with daily (we go into large companies and do emergency repairs; there was an article yesterday somewhere saying that all companies have outages all the time, and sometimes we get called in for those): massive teams of people who cannot write anything sensible. What they produce may be useful for the problem, but reading the code or looking at how it's done makes you cry; clearly there was no real understanding to begin with. Most commonly, and this wasn't so common when we started out, an external library (or rather thousands of them now) was used, the way it was supposed to be used wasn't fully understood, and so a bunch of brittle code was produced to make it work the way the author believed it should, breaking in a myriad of edge cases that are discovered (often by outage) years or decades later.
I am thinking that maybe LLMs are better in these cases; sure, they 'understand' about the same nothing, but at least, once it works, they can clean up the code without effort, so it might not end up as that crust of misunderstood pain plastered on top to hold things together.
Yup. I had to deal with that last year when some senior Microsoft devs tried to shove serverless Azure stuff into something that was supposed to be for a Seattle community group full of non-technical people. The group lead was totally oblivious to how serverless on-demand pricing worked and wanted a fixed monthly cost. That whole project ended up getting scrapped and replaced with an Excel spreadsheet.
I get where that’s coming from. However, I don’t think these complaints are the same.
Let’s not approach this from the youth, but from the technology that’s supposedly corrupting the youth.
Stack Overflow, C compilers, and Python have all been cited as previous examples of technologies that were supposedly making people bad developers. And while there's some truth to that, none of them was hailed as a genuine game changer the way AI is. And why is AI hailed as a game changer? Precisely because AI takes the thinking out of the achieving. It does the thinking for you (it's right there in the name: artificial INTELLIGENCE).
None of those other technologies pretended to take the thinking out of the achieving.
Now it may turn out that AI is overhyped and isn't actually able to think as well as humans beyond a certain point. But the point still stands that AI, if it truly exists, is fundamentally different from other technologies and can genuinely have some of the concerning effects on developers that those other techs did not.
It's also not as different as you make it out to be. A compiler takes the thinking away from targeting hardware (that was the promise; in reality you still have to target hardware (and software), but you can write larger projects). Likely AI will just become superhuman in various fields, subhuman in many others, and won't be AGI for the foreseeable future (barring some kind of massive emergence in VLLMs).
So, like WYSIWYG designers and RAD tools?
With RAD tools, you could make pretty good GUI programs quickly that did their job, and did it well. But they would all look very similar. And if you needed a complex interface that required an unsupported workflow, it might not work at all.
Coding with AI can be pretty similar. It will work a lot of the time, and you can have something usable quickly. But if you don't understand why it works, and something breaks or malfunctions, you're stuck. You'll be left trying to figure out a system without the benefit of knowing how or why it worked in the first place.
I’ve seen this with junior developers who don’t understand the languages and tools they use. If they just plug things into AI and hit a wall, they don’t understand the data flow to be able to fix the problem. On the other hand, I’ve also worked with junior devs who have a solid programming background who are able to work faster with AI and still understand/troubleshoot the system. At the end of the day, AI is still a tool (for now) that needs to be used. Some people will use it well…
Exactly. One of the killer features of Copilot is, of all things, tab completion.
Are template engines now bad?
I don't think it's all junior engineers' laziness to get things out the door quickly. I recently worked with several, and many prefer to "do it the right way". However, there are bad managers who want things done quickly so they can climb the ladder. One junior told me that in a 1-on-1 his manager told him, "don't think too much and get it done". The current market is tough for junior devs, so they'll do whatever it takes to please their manager. What choice do they have? Would they rather get PIP'd?
And it's funny how the current market is considered "bad", because software companies' earnings are clearly not that bad. Perhaps our greed has peaked, and the only way to pocket more money is to squeeze every drop of juice out of employees to get the product out ASAP.
I feel like every 15 years or so you can just find-and-replace the name of the tech we decide is only for Not Real Programmers.
God knows what he thought about me
Around a decade back, I was doing lots of work on 8-bit microcontrollers, and a fairly old programmer taught me how it was done. And I learned a lot from the approach.
Honestly speaking, I had to do a lot of work on paper: lots of incremental thinking, testing the ideas along the way.
I'm guessing if you didn't have the print statement or a web page as an output, this is just how you work anyway.
The code did come out insanely efficient and bug-free. It's not for web dev, but it has its use cases.
The thing with printed books is that you have to type in those snippets yourself, and the act of typing out code reinforces knowledge. I only used the cookbook for each new problem a few times, after which I have committed the relevant bits to memory.
The act of copy&pasting from Stack Overflow might have the same reinforcement effect, but perhaps not as much because it didn't cost as much effort. The act of having a bot generate code probably doesn't do much reinforcement at all, although perhaps these new developers will be better at asking questions or creating prompts.
Meanwhile, boss wants the fix now...
That being said, any large company I've worked at seems to have followed the mantra you are putting down.
AI struggles with knowledge from after its training date (so it can't help very well for anything relating to new versions of libraries) and often just generally gets things wrong or comes up with suboptimal answers. It's still only optimized to create answers that look correct, after all.
With these problems, someone on the team still needs to understand or be able to figure out what's going on. And dangit if it isn't getting hard to hire for that.
And the day that AI can actually replace the work of junior devs is just going to cause more complications for the software industry. Who will get the experience to become senior devs? Who will direct them? And even if those people also get replaced eventually, we will still probably have more awkward inbetween times with their own problems.
Can't say it's not convenient, but no use pretending the challenges don't exist.
The Clouds, by Aristophanes, written 419 B.C.E.
Being lazy and never developing foundational knowledge is different.
New devs are expected to have experience using AI coding tools. If they’re expected to have that, why wouldn’t they trust it?
If you trust a code generator, why dig deeper?
Turns out that we still have people who know how to write good code, even good assembly code.
Totally agree that it should be treated as a learning tool just as much as a "give me something that works" tool. If junior devs aren't taking advantage of that side of it instinctively, out of their own curiosity and interest, well, maybe they were never going to be good developers in the first place, even without AI.
What I can say is that for me, as a senior dev with 22 years of experience who has been using these tools daily for about a year now, it has been a huge win with no downsides.
I am so much more efficient at unblocking myself and others with all the minor "how do I do X in Y" and "what is causing this error" type questions. I know exactly what I want to do, how to ask it, but only partially what the answer should be... and AI takes away the tedious part of bridging that knowledge gap for me.
Maybe even more significantly, I have learned new things at a much faster rate. When AI suggests solutions I am often exposed to different ways to do things I already knew, features I didn't know existed, etc. I feel good that I found a solution to my problem, but often I feel even better having learned something along the way that wasn't even the original goal and it didn't really take any extra dedicated effort.
The best part is that it has made side projects a lot more fun, and I stick with them a lot longer because I get something working sooner and spend less time fighting problems. I also find myself taking on new types of projects that are outside my comfort and experience zone.
Love the discussion on HN as always, great to see various perspectives on the issue.
Do you think that in the future, new programmers will not ever need to learn syntax/algorithms and will just be able to rely on AI?
Code is currently the easiest and most convenient encoding for lots of folks to express such logic. So they’ll need to learn to read the syntax even if they write less of it.
So I think people will be able to put together lots of code with AI and not much programming experience, but there will still be a need to ensure that it does the right thing. Now, eventually the AI will create fewer bugs, infer intent better, automatically write tests, etc., but even then someone needs to, e.g., check that the test set is correct.
One of my pet theories is that this is what programming evolves to in a few decades. Programmers write formal specifications for what should happen in some very high level language. AI + Compilers + ... take that specification, implement it, and prove that the implementation is correct and performant.
Think "SQL but not just for databases".
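A crude way to picture that "specification in, verified implementation out" workflow with today's tools (all names invented here, and exhaustive checking over small inputs stands in for a real proof): write an obviously correct but slow version as the specification, and check the fast implementation against it.

```python
import itertools

# The "specification": obviously correct maximum-subarray sum, O(n^2).
def spec_max_subarray(xs):
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

# The "implementation" an AI or compiler might hand you:
# Kadane's algorithm, O(n).
def impl_max_subarray(xs):
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# The "proof" step, crudely approximated by checking every small
# input rather than by an actual machine-checked proof:
for xs in itertools.product([-2, -1, 1, 2], repeat=4):
    assert impl_max_subarray(list(xs)) == spec_max_subarray(list(xs))
```

In the imagined future, the slow spec is all a programmer writes; the toolchain derives, optimizes, and verifies the fast version.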
Before we had computers and machines, humans did all the work.
And you can't really reliably "code" humans. They misunderstand instructions. They disobey rules and regulations. They make mistakes.
But society still thrived with these unreliable humans, who offered no "proof" whatsoever that they'd do the job properly.
In fact, these days we still often trust humans more than code, at least in areas where the stakes are high. For example, we trust human surgeons over programs that perform surgery. We trust humans to run the government rather than programs.
It's entirely plausible that future generations growing up with AI won't see the point of requiring "proof" of correctness when deploying automation. If an AI model does the job correctly in 99.99% of cases, isn't that sufficient "proof"? That's better than a human, for sure.
Yeah that sounds dystopian but I don't see why it can't happen.
I assume that 20-30 years ago when juniors were using either ide-provided auto-completion or refactoring or gui designers some old graybeard developer had a similar reaction.
Nothing new under the sun.
On a different layer of thinking, it makes perfect sense. The more the computing industry progresses, the more it abstracts away from how the thing actually works.
I don't know the author of this post, but as a systems engineer who works closely with many software engineers, there are so many of them who yap left and right about the code they wrote or the ecosystem around their main programming language but are completely hopeless at anything outside that scope. I've seen so many reimplement the wheel because they don't know about the facilities provided by the operating system (let alone how to interface with and make use of them).
There's so much functionality handled by the kernel (Linux) that could be reused if somebody were able to dive into FFI and write the adequate wrappers. And that's just one example.
One might argue that junior developers are just starting at a higher level of abstraction.
You assume wrong. Source: was there at the time.
This time is genuinely different.
It is unbelievably grating to have colleagues who are plainly over-reliant on LLMs, especially if they're less experienced. Hopefully the cultural norm around their use gets set quickly. I can't handle many more PRs where juniors plug my feedback into an LLM and paste the response.
That philosophy never died and will probably keep living on eternally, perhaps until it someday becomes a realistic option. Who knows?
Many companies see it as the utopia, and won't strongly object to programmers sharing that ideal.
I mean… as a senior dev now, I’m not complaining, but it can’t be good for the industry at large.
Because he used the words 'Google Worksheet' in his prompting, the LLM spit out some code using the Google Sheets API. This was less than helpful, and it took some time to realize why he was struggling.
If the author is suggesting that more junior devs today than in the past are only interested in getting tasks done and collecting the paycheck, then this really has nothing to do with AI.
There will always be people who know how to do things better, who understand what code style yields the fastest and/or smallest assembly instructions, and in the future this bottom line will just move much further up.
We are probably approaching something similar to the 1980s game industry crash, but for engineers, or we may still avoid it.
New devs may not care for deep technical knowledge; they optimize for opportunities. Give them tools at their disposal that optimize for opportunities, and they will grab them.
The reality, though, is that for most CRUD / code scaffolding, what you need most is good knowledge of the problem space and a solid enough foundation to ask LLMs for solutions. Because, let's not forget, people with no coding knowledge whatsoever can't get functional stuff out of LLMs as fast as developers can.
We need to get used to a world where augmentation means "getting rid of the boring stuff at the margins". There's no heroics in doing that stuff the old way.
These new tools can be very valuable but I worry about people not using the tools to maximum effect long-term, in favor of short term success of a certain kind.
A large majority of people would be very, very happy with this. I don't have to know how to fix my car to drive it and thank God for that!
I think it's wonderful that LLMs enable someone to create useful things without understanding what's happening under the hood.
I also hope those people don't claim to be / won't be hired as (senior) software engineers, though.
No it isn't. Stack Overflow consists of people saying "I don't know what you are trying to do" and then answering anyway without waiting for additional information or scolding people for not asking their question correctly. If you post your real code, you get told to provide a minimal reproducible example. If you just post the minimal reproducible code, you get told this doesn't correspond to a real problem and that the question itself is contrived.
0. A strong desire to solve the user's problem and the organization's problem.
1. Knowledge of a major programming language like JavaScript, Java, Python, C/C++, C#, etc.
2. Knowledge of how to use an SQL database, or maybe a NoSQL database.
3. Knowledge of how to debug the build process and write scripts in Bash, PowerShell, etc.
4. Knowledge of at least 1 major framework.
5. Knowledge of Linux, macOS, or Windows.
6. An ability to read documentation and learn.
7. An ability to debug large programs and fix bugs without introducing more bugs.
8. A desire to think critically and choose the appropriate technology for the problem (very hard, takes a lot of experience).
9. An ability to write clear code which others will understand.
10. The ability to write, argue, and persuade others.
11. Being a good person who works well with others, puts the product before themselves, and is honest.
Almost all of these things are not taught to computer science majors. At best, a person will learn 1 to 2 languages and maybe Linux. Expecting computer science programs to produce good software engineers is crazy because software engineering and computer science are two different things.
I once got Raymond Chen himself to answer My Stack Overflow question. I do not think my programming career will reach that height ever again.
I know we're getting fewer "traditional" junior devs, but I'm seeing more and more designers and product managers contributing, at a frequency which was much harder pre-GPT.
In my roles as a head of product/eng, I've always encouraged non-technical team members to either learn coding or data to get better results, but always had limited success, as folks were scared of it (a) being hard, (b) taking too much time to get proficient.
But that's changing now, and fast - and these junior devs are becoming proficient much faster and driving much better business and customer outcomes.
I'm seeing more realistic goals, sprints, etc. I'm seeing fewer front-end/back-end bottlenecks, and lastly I'm seeing fewer pixel-moving requests going to senior engineers.
As others have mentioned, juniors were often unable to code prior to LLMs, and what made them better was code reviews, bugs, and general mentorship. Those tools are still available to us.
- In the before times, writing software usually meant having a logical mindset, perhaps one trained in math or physics, or an insatiable curiosity. Most of my older CS college professors had actually majored in virtually anything other than CS, because the field didn't exist when they were in school.
- Lessons learned in how to convert higher-level languages into lower-level ones were captured in compilers, linkers, debuggers, and other tools. The next generation really didn't need to learn machine code, assembler, or any of that other business. They were operating in COBOL, Fortran or maybe later C most likely. They entered the workforce ready to figure out complex algorithms rather than figure out how to make the machine perform basic functions at scale -- that knowledge was captured.
- By the time I went to school, there was a strong emphasis on algorithms, established operating system concepts, and multi-threading and processors; very little at the machine level (almost no GPUs existed outside of the Silicon Graphics workstations in a little lab); some cursory but revolutionary concepts about VMs, as in the Java VM; and a new thing called "agile" that was showing up in some places. There was a very active department researching ways to distribute work across networked computers. Templates in programming languages hadn't really shown up anywhere, and it wasn't uncommon to work in a version of C++ without a standard String type. Perl was common, and Unicode was being looked at, but the world was ASCII. I could sit down and write my own optimal self-balancing trees and reliably put them behind production software. My first programming gig was at a company that wasn't focused at all on hard CS algorithms, yet we wrote our own String types, search engines, compression algorithms, and virtual-memory-mapped libraries just to get a product out the door. But we weren't trying to write boot loaders, firmware, graphics windowing systems, or databases; that stuff already existed - the knowledge was captured.
- Templates, the Enterprise Java "Culture", better IDEs, and early internet tech seemed to drive lots of new students away from figuring out how to write a B-tree, and into things that were more valuable in the marketplace. Early CRUD apps, e-commerce systems, sales-tax libraries, inventory control software. A developer didn't need to figure out the algorithms, because the knowledge had been generically captured behind a few Template libraries. I remember buying a C++ Template library from a software house that had a good String type, regex engine and early support for UTF-8 Strings, along with some interesting pointer types. UML and code generation was the new hotness in places that thought COBOL was still radical.
- Today, CRUD/e-commerce/etc are so common, you start with a framework, plug in your business logic and point it at your database and service resources and off it goes. The knowledge for how to build that stuff has been captured. The movement to the front-end has been tremendous, getting rid of Java and flash from the browser and all the work in Javascript-land has made good front-end developers worth their weight in gold. But lots of that knowledge is captured too, and you can just React.js your way into a reasonable front-end pretty quick that works hard not to look like a developer-made GUI. Even design aesthetics have been captured.
So what's next? The knowledge for how to build all this stuff, the decades of hard fought knowledge creation was captured in code, and on forums, and blog posts. StackOverflow worked because it was "central" to lots of searching, but there's still random stuff buried away as captured knowledge in code repositories, personal sites, weird vendor examples, and so on.
Ignoring AI, what one might want to be really functional today as developer is something that taps into this wealth, maybe auto searches for things in all of these places and compiles a summary, or an example. Maybe this searching happens in the IDE for you and the result is dumped right into your code for you to tweak. But how to compile all these results? How to summarize into something logical? I guess we need something kinda smart to do it, maybe we need AI for it.
The real question is what's the next layer that needs to be hard fought to create new knowledge? If AI is the new Generics/Template library, what are we building next?
Grumpy old devs often lament that their technology layer is being ignored by newer devs, but often forget that computing has a long history and many layers are built on top of each other. Nobody really understands all the layers from the top down to the semiconductor tech.
If your goal is to learn the tech at any layer, you can still do it; but most people just want to create products and technical knowledge is just a means to an end. Both are valid goals, and I think there's no reason for the former to be snobbish against the latter.
I've been reliant on GPS on my phone for probably over a decade now. I'm probably lost without it. But... I explore way more of the world now! I'll quite happily bounce around to random parts of a city that's completely foreign to me, safe in the knowledge that I'll always be able to find my way back again.
(I don't know if this analogy will hold up for AI-assisted programming - I still think programmers who actually understand the details will be a whole lot more useful in the long run than programmers who don't - but I'm definitely not going to ditch my GPS in an attempt to improve my sans-GPS navigation skills.)
When trying to get from point A to point B, there is a clear, well defined goal.
Streets (paths) are also clear and well defined (within reason on most popular GPS direction software).
An expert in city streets may get you there a little faster, but the end result of both GPS directions and city street experts is the same; getting you to B. What’s more, once you’re at B, the route you took probably no longer matters.
The only side effect of taking different routes is the time involved.
Coding is different. There's a combinatorial explosion of different paths to get to any step of any problem. Each path has consequences that can be very difficult to understand at the time. (For example, the classic off-by-one error always causes some strange behavior.)
Also, the code is an artifact that would need to be understood, built upon, and changed in the future.
It’s just a different beast than getting directions.
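The off-by-one error mentioned above is worth spelling out, because it shows why code paths, unlike driving routes, leave lasting artifacts: both versions below run without complaint, but one quietly returns the wrong answer (a contrived Python sketch; the names are made up):

```python
# Buggy: means to sum the first n items, but the loop stops one short.
def sum_first_n_buggy(xs, n):
    total = 0
    for i in range(n - 1):   # off by one: should be range(n)
        total += xs[i]
    return total

# Fixed: a slice makes the intended boundary explicit.
def sum_first_n_fixed(xs, n):
    return sum(xs[:n])
```

Nothing crashes in the buggy version; the wrong path only shows up later as strange behavior downstream, which is exactly the kind of consequence a turn-by-turn route never has.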
Maybe you should, if Eleanor Maguire's research on "The Knowledge" and the hippocampus has any bearing.
https://www.nytimes.com/2025/02/14/science/eleanor-maguire-d...
The author didn't say stop using AI. He offered his solutions.
It was honestly frightening how quickly I finished a side project this weekend, one that I had previously been struggling with on and off for a few weeks now. The scary part was that the user experience of prompting for feature requests or bugs and then seeing the code changed and the app be hot reloaded (I use Flutter which has this functionality) was so seamless that it didn't feel like a Copilot, it felt like an Autopilot. I literally felt myself losing brain cells as I could, yes, ostensibly review the code between every single prompt and change cycle, but realistically I clicked apply and checked out the UI.
However, all good things must come to an end, it seems, as I burned through all 150 credits of the free trial, but more importantly, the problem of hallucinations is still ever-present and oftentimes I'd ask it to fix an issue and it'd change some other part of the codebase in subtle ways, such that I'd notice bugs popping up that had been fixed in previous iterations, from minor to truly application-breaking. The issue now was that since I didn't write the code, it took me quite a bit longer to even understand what had been written; granted, it was less total time than if I had written everything from scratch, and it could be argued that reading it is no different than reading a coworker's (or one's own older) code, and I still had an LLM to guide me through it (as Cursor's chat is unlimited while their composer feature, the AI prompt and apply loop, is not), but I understand the author's point much better now.
While others in this thread and elsewhere might say it's no different from reading Stack Overflow or books of yore, the automaticity of the AI and the human together feels fundamentally different from what came before. Truly, I felt much more like a product manager, filing feature requests and bugs, than I ever did as an actual developer during this loop. Only this time, our knowledge and experience will be so eroded that we won't be able to fix novel problems in the future, and we'll fall back on learned helplessness, asking the AI to fix it, as I increasingly felt the easier this loop got.
I'm sorry, but for many of us socially awkward folk, having to rely on begging to solve coding tasks felt like chewing glass.
"This is the worst code I've ever run."
"But it does run."
> I recently realized that there’s a whole generation of new programmers who don’t even know what StackOverflow is.
I also did not know about stack overflow when I started programming. Or even when I finished college and entered the workforce.
Because it didn't exist yet.
> Back when “Claude” was not a chatbot but the man who invented the field of information entropy, there was a different way to debug programming problems.
First, search on Google.
Google also did not exist until '98, by which point I'd already learned enough C++ from a physical book to write toys like a Space Invaders kind of game.
> Junior devs these days have it easy. They just go to chat.com and copy-paste whatever errors they see.
Juniors blindly copying code from StackOverflow used to be a standard complaint.