"Wow, show it to me!"
"OK here it is. We call it COBOL."
"Before 1954, almost all programming was done in machine language or assembly language. Programmers rightly regarded their work as a complex, creative art that required human inventiveness to produce an efficient program."
-John Backus, "The History of Fortran I, II, and III", https://dl.acm.org/doi/10.1145/800025.1198345
"The IBM Mathematical Formula Translating System or briefly, FORTRAN, will comprise a large set of programs to enable the IBM 704 to accept a concise formulation of a problem in terms of a mathematical notation and to produce automatically a high speed 704 program for the solution of the problem."
-IBM, "Specifications for the IBM Mathematical FORmula TRANslating System, FORTRAN", http://archive.computerhistory.org/resources/text/Fortran/10...
"FORTRAN should virtually eliminate coding and debugging" https://news.ycombinator.com/item?id=3970011
They are very much the exception that proves the rule though.
(I'd love for someone to substantiate or debunk this for me.)
Most people miss the fact that technical improvements increase the pie in ways that were not possible before.
When digital cameras became popular, everybody became a photographer. That only made the world better, and we got so many more good photographers. Same with YouTube & creativity.
And same with coding & LLMs. The world will have lots more apps, and more programmers.
I disagree with the "only" part here. Imagine a distribution curve of photos with shitty photos on the left and masterpieces on the right, where the height of the curve is how many photos there are to be seen at that quality.
The digital camera transition massively increased the height of the curve at all points. And thanks to things like better autofocus, better low-light performance, and a radically faster iteration loop, it probably shifted the low and middle ends to the right.
It almost certainly increased the number of breathtaking, life-changing photos out there. Digital cameras are game-changers for photojournalists traveling in difficult locations.
However... the curve is so high now, the sheer volume of tolerably good photos so overwhelming, that I suspect the average person actually sees fewer great photos than they did twenty years ago. We all spend hours scrolling past nice-but-forgettable sunset shots on Instagram and miss out on the amazing stuff.
We are drowning in a sea of "pretty good". It is possible for there to be too much media. Ultimately, we all have a finite amount of attention to spend before we die.
Did it?
people now stand around on dance floors taking photos and videos of themselves instead of getting on with dancing and enjoying the music. to the point where clubs put stickers on phones to stop people from doing it.
people taking their phone out and videoing / photographing something awful happening, instead of doing something helpful.
people travel to remote areas where the population has been isolated from the rest of humanity and do stupid things like leave a can of coke there, for view count.
it’s not made things better, it just made things different. whether that’s better or worse depends on your individual perspective for a given example.
so, i disagree. it hasn’t only made things better. it made some things easier. some things better. some things worse. some things harder.
someone always loses, something is always lost. would be good if more people in tech remembered that progress comes at a cost.
This is actually bad for existing programmers though?
Do you not see how this devalues your skills?
Not sure I agree. I haven't seen much evidence of "better photography" now that it's digital instead of film. There are a million more photos taken, yes, because the cost is zero. But quantity != quality or "better", and if you're an average person, 90% of those photos are in some cloud storage and rarely looked at again.
You could argue that drones have made photography better because it's enabled shots that were impossible or extremely difficult before (like certain wildlife/nature shots).
One thing digital photography did do is decimate the photographer profession because there is so much abundance of "good enough" photos - why pay someone to take good ones? (This may be a lesson for software development too.)
I think you really missed the point of what these technologies and innovations actually did for society and how it applies to today, underneath the snark.
In the 1970's, if you got gifted a camera, and were willing to put in the work to figure out how to use it, you learned a skill that immediately put you in rare company.
With enough practice of that skill you could be a professional photographer, which would be a good, reliable, well-paid job. Now, the barrier to entry is nothing, so it's extremely competitive to be a professional photographer, and even the ones that succeed just scrape by. And you have to stand out on things other than the technical ability to operate a camera.
That's...what's about to happen (if it hasn't already) with software developers.
But consider this— back in the day, how many mainframe devs ( plus all important systems programmer! ) would it take to conjure up a CRUD application?
Did you forget the VSAM SME or the DBA? The CICS programming?
Today, one person can do that in a jiffy. Much, much less manpower.
That might be what AI does.
No, wait it was called natural language coding, now anyone can code.
No, wait it was called run anything self fixing code. No wait, simplified domain specific language.
No, wait it was uml based coding.
No, wait it was Excel macros.
No, wait it's node-based drag and drop.
No, wait it's LLMs.
The no-code delusion is strong with the deciding caste; every reincarnation must be taxed.
Presumably, the LLM user will have sufficient brain capacity to verify that the result works as they have imagined (however incomplete the mental picture might be). They then have an opportunity to tweak, in real time (of sorts), to make the output closer to what they want. Repeat this as many times as needed/time available, and the output gets to be quite sufficient for purpose.
This is how traditional, bespoke software development would've worked with contractor developers. Except with LLM, the turnaround time is in minutes, rather than in days or weeks.
today:
s/COBOL/SQL
and the statement is still true, except that many devs nowadays are JS-only, and are too scared or too lazy to learn another, relatively simple language like SQL. ("it's too much work". wtf do you think a job is. it's another name for work.)
because, you know, "we have to ship yesterday" (which funnily enough, is always true, like "tomorrow never comes").
the EXPLAINs are not nearly as straightforward to read, and the process of writing SQL is to write the explain yourself in your head, then try to coax the database into turning the SQL you write into that explain. it's a much less pleasant LLM chat experience
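That loop of "write the query, read the plan, coax the planner" can be seen with any database that exposes its plans. Here is a minimal sketch using Python's built-in sqlite3; the table, query, and index names are made up for illustration:

```python
import sqlite3

# Toy schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable step description.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT id FROM users WHERE email = 'a@example.com'"

before = plan(query)  # without an index, SQLite reports a full scan of users
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = plan(query)   # with the index, the plan becomes a search via idx_users_email

print(before)
print(after)
```

The workflow the comment describes is exactly this: you decide you want the second plan, then rearrange your SQL (or your indexes) until the database agrees.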
Pushes never come from the LLM, which can be easily seen by feeding the output of two LLMs into each other. The conversation collapses completely.
Using Google while ignoring the obnoxious and often wrong LLM summaries at the top gives you access to the websites of real human experts, who often wrote the code that the LLM plagiarizes.
Then they'll change their mind to their original answer when you tell them "I wasn't disagreeing with you". Honestly, it's amusing, but draining at the same time.
I'm completely drained after 30 minutes of browsing Google results, which these days consist of mountains of SEO-optimized garbage, posts on obscure forums, Stackoverflow posts and replies that are either outdated or have the wrong accepted answer... the list goes on.
Second, it doesn't do well at all if you give it negative instructions. For example, if you tell it "Don't use let! in RSpec", it will create a test with "let!" all over the place.
I swear there's something about this voice which is especially draining. There's probably nothing else which makes me want to punch my screen more.
PS: Both humans and LLMs are hard to align. But I do have to discuss with humans, and I find that exhausting. LLMs I just nudge or tell what to do.
"I went to work early that day and noticed my monitor was on, and code was being written without anyone pressing any keys. Something had logged into my machine and was writing code. I ran to my boss and told him my computer had been hacked. He looked at me, concerned, and said I was hallucinating. It's not a hacker, he said. It's our new agent. While you were sleeping, it built the app we needed. Remember that promotion you always wanted? Well, good news buddy! I'm promoting you to Prompt Manager. It's half the money, but you get to watch TikTok videos all day long!"
Hard to find any real reassurance in that story.
Prompt engineering is like singing: sure thing everyone can physically sing… now whether it’s pleasant listening to them is another topic.
it would have been funnier if the story took a turn and ended with the AI complaining about a human writing code instead of it.
I fear this will be more and more of a problem with the TikTok / instant-gratification / ten-second-attention-span generation. Deep thinking has great value in many situations.
"Funnily" enough, I see management more and more reward this behavior. Speed is treated as vastly more important than driving in the right direction or long-term thinking. Quarterly reports, etc.
Yes, it's supportive and helps you stay locked in. But it also serves as a great frustration lightning rod. I enjoy being an unsavory person to the LLM when it behaves like a buffoon.
Sometimes you need a pressure release valve. Better an LLM than a person.
P.S: Skynet will not be kind to me.
Don't people realize it's a machine "pretending" to be human?
I’ll stick to human emotional support.
With an LLM the main benefit is speed: seconds, rather than the minutes or hours you'd wait on Stack Overflow.
There have been several personal projects that have been on the back-burner for a few years now that I would implement about 20% of, get stuck and frustrated, and give up on because I'm not being paid for it anyway.
With ChatGPT, being able to bounce back and forth with it is enough to unblock me a lot of the time, and I have gotten all my projects over the finish line. Am I learning as much as I would if I had powered through it without AI? Probably not, but I'm almost certainly learning more than I would had I given up on the project like I usually do.
To me, ChatGPT is an "intelligent rubber duck". It's not perfect, in fact a lot of the time the suggestions are flat-out wrong, but just being able to communicate with something that gives some input seems to really help me progress.
And by that, I mean corps will make poor decisions that will be negative for thought workers while never really threatening executive compensation.
I see this latest one somewhat like TFA author: this is a HUGE opportunity for intelligent, motivated builders. If our jobs are at risk now or have already been lost, then we might as well take this time to make some of the things we have thought about making before but were too busy to do (or too fatigued).
In the process, we may not only develop nice incomes that are independent of PHB decisions, but some will even build things that these same companies will later want to buy for $$$.
I've been recording to myself voice notes for years. Until now they've seemingly been near-read-only. The friction for recording them is often low (in settings where I can speak freely) but getting the information out of them has been difficult.
I'm now writing software to help me quickly get information out of the voice notes. So they'll be useful to me too, not just to future historians who happen upon my hard drive. I would not be able to devote the time to this without AI, even though most of the code and all the architecture is my own.
On occasion I've used Otter or Whisper with some success.
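The retrieval side of a tool like this can start very small once transcripts exist. A hypothetical sketch (not the commenter's actual tool), assuming each voice note has already been transcribed to plain text, e.g. by Whisper or Otter:

```python
import re
from collections import defaultdict

def build_index(transcripts):
    """Map each word to the set of note IDs it appears in (an inverted index)."""
    index = defaultdict(set)
    for note_id, text in transcripts.items():
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].add(note_id)
    return index

def search(index, query):
    """Return the note IDs that contain every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Made-up example notes keyed by filename.
notes = {
    "2024-01-03.txt": "remember to renew the passport before the trip",
    "2024-02-11.txt": "idea for the trip itinerary and a packing list",
}
idx = build_index(notes)
print(search(idx, "trip"))
print(search(idx, "passport"))
```

A real version would want stemming, fuzzy matching, or embeddings, but even this turns "near-read-only" recordings into something searchable.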
Please let me know if you open source any of your work.
Do what you think is best of course, but this is a very bad recommendation for those who have lost their jobs and are unlikely to find another in software any time soon (if ever).
I said a few years ago, when people were still saying I was overreacting and AI wouldn't take jobs, that people need to reskill ASAP. If you've lost your job, learn how to paint walls or lay carpet before your emergency fund runs out. In the unlikely event you find another software job while you're retraining, then great; if not, you have a fallback.
Remember you're highly unlikely to make any serious money out of a bootstrapped startup. Statistically we know significantly fewer than 1% of bootstrapped startups make money, let alone become viable replacements for a full-time income.
Don't be stupid – especially if you have a family depending on you.
During the rise of the net, there were unexplored green fields everywhere. You could make easy bank from ads. You didn't need an office or a factory to start a company (which was more or less a requirement previously). So the idea of a bootstrapped startup was new, but seemed somewhat obvious if you were paying attention.
Now? Everyone has LLMs and can see a bit into the future. Lots of these companies will bubble up and either fold or get acquired. A few will unicorn. But the key point remains: if you are unemployed or have some time and build something functional on this new stack, your value as an employee will be much higher in the future.
Don't sacrifice what you can't, but I think there may be a softer landing for failed AI founders in the near future.
But I thought this might be worth blogifying just for the sake of adding some counter-narrative to the doomerism I see a lot regarding the value of software developers. Feel free to tear it apart :)
I would say that when the fundamentals are easier to learn, it becomes a great time to learn anything. I remember spending so much of my software development degree trying to fix bugs and having things explained by trawling through online forums, like many of us have. Looking for different ways of having concepts explained to me and how to apply them.
LLMs give us a fairly powerful tool to act as a sort of tutor: asking questions, getting feedback on code blocks, understanding concepts, seeing where my code went wrong, etc. Asking it all of the dumb questions we used to go trawling for.
But I can't speak to how this translates when you're a more intermediate developer.
The open questions right now are how much of a demand is there for more software, and where do AI capabilities plateau.
But, there is a key distinction that we would be remiss to not take note of: By definition, farmers are the owners of the business. Most software developers aren't owners, just lowly employees. If history is to repeat, it is likely that, as usual, the owners are those who will prosper from the advancement.
In the long term, food demand is elastic in that populations tend to grow.
fruits and all non-essential food items are famously very elastic, and constitute a large share of spending.
for example: if cheap cereal becomes abundant, it is only at the cost of poor quality, so demand for high quality cereal will increase.
LLM-driven software engineering will continuously raise the bar for quality and the demand for high-quality software
Maybe, but also "Excel with VBA macros" has generated an unimaginable amount of value for businesses in that time as well. There's going to be room for both.
So much room left, as I doubt every developer will double-check things every time by asking.
However, I do agree that the premium shifts from mere "coding" ability -- we already had a big look into this with the offshoring wave two decades ago -- to domain expertise, comprehension of the business logic, ability to translate fluidly between different kinds of technical and nontechnical stakeholders, and original problem-solving ability.
I'm also a bit tired of running into people that are 'starting a contracting firm' and have 0 clients or direction yet and just want to waste your time.
The more awkward truth is that most of what developers have been paid to do in the 21st century was, from the larger perspective, wasted. We mostly spent a lot of developer time in harvesting attention, not in actually making anything truly useful.
Most organizations do derive net benefit from laying off the below average and hiring the above average for a given compensation range, as long as the turnover is not too high.
And this delta increases when the above average can augment themselves more effectively, so it seems we should expect an even more intense sorting.
I've heard that before: Borland Delphi, Microsoft FrontPage, Macromedia Flash, and so on. Each time I learned how, in 5 years or so, these new technologies would dominate everything.
Then I learned that two scenarios exist. One of them is "being replaced by a tool", the other is "being orphaned by a tool". You need to be prepared for both.
That said, even if the specific products like Cursor or ChatGPT are not here in 5 years, I am confident we are not going to collectively dismiss the utility of LLMs.
> mechanized farm equipment
Sure, that could be a valid analogy.
Or maybe we invented CAD software for mechanical engineering, where we were making engineering drawings by hand before?
And that doesn't quite ring the same way in terms of obsoleting engineers…
And if I were to jump into instruction-level programming today I would start by asking an LLM where to begin...
I browse the web. Eventually, I review the agent code and more often than not, I rewrite it.
Unfortunately, that's many businesses already, even before AI. It's all just one big factory line. Always has been (to those at the top).
That said, I still try to figure out the logic myself first, then let AI help polish or improve it. It is a bit slower, but when something breaks, at least I know why.
AI has definitely lowered the barrier. But whether you can actually walk through the door still depends on you.
We might think, "Yeah, but so many of these dumb AI corpo-initiatives are doomed to fail!" and that's correct but the success/fail metric is not based on whether the initiatives' advertised purpose is effective. If investors respond positively in the near term, that's success. This is likely why Logitech embedded AI in their mouse software (Check and see if Logi+ AI Agent is in your task manager) https://news.logitech.com/press-releases/news-details/2024/N...
The near term crash (which will happen) in AI stuff will be because of this dynamic. All it means is that phase one of the grift cycle is completing. In the midst of this totally predictable, repeatable process, a dev's job is to get gud at whatever is truly emerging as useful. There are devs who are showing huge productivity gains through thoughtful use of AI. There are apps that leverage AI to do new and exciting things. None of this stuff happens without the human. Be that human!
There actually is a ChadGPT but I assume the OP meant ChatGPT
I... assume that was meant sarcastically, but it's not at all clear from context I think.
Damn straight we are.
Just, don’t skip out on learning the fundamentals. There’s no royal road to knowledge and skill. No shortcuts. No speed running, downloading kung fu, no passing go.
Why?
Because the only thing LLMs do is hallucinate. Often what they generate is what you’re looking for. It’s the right answer!
But if you don’t know what an L1 cache is or how to lay out data for SIMD, no amount of yelling at the bot is going to fix the poor performance, the security errors, and the logic errors. If you don’t know what to ask, you won’t know what you’re looking at. And you won’t know how to fix it.
So just remember to learn the fundamentals while you’re out there herding the combine space harvesters… or whatever it is kids do these days.
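For anyone who hasn't met the data-layout fundamental the parent mentions, here is a toy contrast between array-of-structs and struct-of-arrays in plain Python. This is illustrative only: real SIMD and cache wins need a compiled language or numpy, but the layout idea is the same.

```python
from array import array

N = 1000

# Array-of-structs: each point is its own object; the "x" fields are
# scattered across the heap, which is hostile to caches and SIMD.
aos = [{"x": float(i), "y": float(2 * i)} for i in range(N)]

# Struct-of-arrays: each field is one contiguous buffer of doubles --
# the layout that prefetchers like and that vector instructions can
# load in bulk.
soa = {
    "x": array("d", (float(i) for i in range(N))),
    "y": array("d", (float(2 * i) for i in range(N))),
}

# Both layouts compute the same answer; only the memory traffic differs.
sum_aos = sum(p["x"] for p in aos)
sum_soa = sum(soa["x"])
print(sum_aos == sum_soa)
```

Knowing which layout your hot loop wants is exactly the kind of question an LLM won't fix for you unless you know to ask it.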
Regardless of the true number, you're right that no amount of reasoning on paper "why" we should be employed matters if the reality is different; which it clearly is for a lot of people. Reality decides in the end.
A more accurate title might have been "Why AI is a reason to become a software developer" - since the topic I discuss is entirely AI and its effects on the field, and there might be entirely non-AI reasons for not going into software.
1. I use AI to find my way in a sprawling micro(service|frontend) system that I am new to. This helps cut down massively on the “I know what to do, I just can’t figure out where.” I started a new job where everyone has years of context as to how things fit together and I have none. I feel strongly that I need to give an honest effort at finding things on my own before asking for help, and AI certainly helps there.
2. Anything I stumble upon in a dev/deployment process that leans too heavily into the “good behavior/hygiene,” I try to automate immediately for myself and then clean up to share with the team. In the past, I might have tried to adopt the common practice, but now it’s less effort to simply automate it away.
3. There is value in using AI in the same manner as I use vim macros: I use the planning mode heavily and iterate like crazy until I’m satisfied with the flow. If the task has a lot of repetition, I typically do the first one myself then let the AI take a whack at one or two. If I don’t like the process, I update the plan. Once I see things going smoothly, I give the AI the ok to finish the rest (making atomic commits so that it’s not just one big ball of wax). This is pretty similar to how I record macros (make one change yourself, record the macro on the next line, test it out for a line or 2, re-record if necessary, test again, plow through the rest).
4. When I come across something that needs to be fixed/could be improved but isn’t related to my task at hand, I do a few minutes of research and planning with the AI, and instead of coding a solution, we create a todo document or an issue in a tracking system. This wasn’t happening before because of the context switching required to write good documentation for later. Now it’s more of the same thing but akin to a dry run of a script.
5. I can quickly generate clear and easy to read reports to allow other teammates to give me feedback on work in flight. Think about a doc with before and after screenshots of changes throughout an app produced by a playwright script and a report generator that I can rerun in under a minute whenever I want.
I’m finding that I really enjoy skipping the tedious stuff, and I’m also producing higher quality work because I have more bandwidth. It helps me collaborate more with my non-dev peers because it lowers the barrier to sharing.
Important to note that in my experimenting, I haven’t had great luck with winding it up and setting it loose on a task. Too often it felt like being a junior engineer again, doomed to throw spaghetti at the wall. Once I started using AI as an assistant, I felt things really started to click. Software development is about writing code, but it’s about a lot of other things too. It’s nice when the AI can help write code, but it’s fantastic when it helps you accomplish the other things.
Yeah there is real labor involved with trying to set up the right context and explain your goals and ensure it has the right tools, etc. etc. Sometimes the payoff is massive, other times it's like "I could have done this myself faster". It takes time to build an intuition for which is which, and I'm constantly having to tune that heuristic as new models and tools come out.
It may be in the end that software developers make less money even as more jobs become available.
I thought that AI could help me learn it and have tried a variety of approaches. I have found that it is just absolute crap. It is worse than hype; it lies about what you are able to do. I spent a day with ChatGPT trying to figure out how to resize a browser window in a feature test until, after hours of confident explanation, it finally told me that it is impossible to do at the moment. Um, that would have been great information to have 8 hours ago!
Not only does it lie to you, but AI slop is literally destroying the Internet. Please do not try to learn software development from AI. There are plenty of great ways to learn. Maybe your experience is or will be different, but I value my time and AI does nothing but waste mine.
Lol
Always has been.
Heck, I'm so tired of statements like this. "Many" who? It's already a lot that an LLM automates and helps with the boring/tedious parts of my job; I have yet to see it take over the work of 2, 5, or 10 of my colleagues. Just knowing what an awful lot these tireless folks do, I couldn't imagine doing their jobs on top of mine. IMO such statements have a very short shelf life.