It can do almost everything I can do, a bit better, and I have years and years of domain knowledge, keep on top of RFC changes, new languages, C++ standards, etc., plus side projects and even occasional leetcode.
Oh well, this gold rush did run on long enough. I'm glad I made a bit of money from the industry, but I think all these students going into CS are in for a rude awakening, and we're in for a huge shift in this industry.
I'm a Pascal programmer, not a Python programmer... but I'm hoping that I can leverage Copilot to help me navigate the nitty-gritty boilerplate that would otherwise take days to sort through, and get to the heart of the refactoring/patching necessary to get WikidPad up to date and fix the breakage.
I see GPT4 and kin as tools to allow more freedom of action, and less worry about the stuff I always hated anyway, the minutiae of coding.
--
>years of domain knowledge
Usually the term "domain knowledge" applies to real world non-programming knowledge such as chemistry, manufacturing, etc. This is the first time I've seen it applied to programming. Programming is just a means to an end. I've never considered programming to be an industry. We produce a product with zero marginal cost.
I suspect you are in the same emotional place that accountants were, the first time they saw spreadsheets in use. It must have seemed like the end of the world to them, but it wasn't.
>all these students going into CS are in for a rude awakening
As long as they know that computers are a tool, not the end result, they'll be fine
C++ versus Pascal mindset. :)
> This is the first time I've seen it applied to programming. Programming is just a means to an end. I've never considered programming to be an industry.
Yes and no. It did not replace the good accountants who actually orchestrated the whole department, but it got rid of a lot of the low-level grunt work.
AI-supported coding autopilots seem to go in the same direction: The junior devs whose whole job it is to translate architectural and design specifications into excessively verbose boilerplate code will struggle to survive, but the software architects above them will find a new means through which to express the analytical thinking, process planning and understanding of complex dependencies that they're paid for.
Which will, in turn, further drive shortages of good developer talent long term. The industry thinks there's a tech labor shortage now? Just wait until the ladder has been completely kicked over for young folks trying to find their career footing in tech.
We were all once juniors, doing work that could primarily/completely be replaced by AI. We had to do that grunt work to learn the lessons that make us architects today.
GPT-4 can do all of that stuff. Probably as well or better than you. If you think you still are better, can you do it literally for 24 hours a day? How about 3-6 months from now, are you going to become 50-500% faster/better at your job?
I would love to see a few samples of some real world "architecting" that you have done in the last few months along with the calendar schedule you did that on and then compare you head-to-head with GPT4 and a junior dev. And then we show your executives how that went. Sound good?
How well did that play out?
It didn't. At all. In fact quite the opposite happened. But I would be lying if I said development wasn't transformed. A lot of menial labor, labor many here on HN have never dealt with, was no longer done.
I see the same thing happening with AI. What you need to be thinking about is what does an "AI app" look like? What does an app accelerated with "AI" look like? How are we going to use this technology to better serve our customer's needs? How will applications be integrated with AI? What even is an application in this new world?
Paradigm shifts are exciting times! Enjoy it!
Just because people didn't become obsolete then doesn't mean that will always be the case.
Especially when obsolescence through technology more generally isn't even a rare thing.
I’ve been programming with the assistance of Copilot for a year now, and using ChatGPT as well since 3.5. These tools are amazing; I’m never going back. But to me they’ve only increased the value of the senior engineer with deep domain knowledge. It makes me more productive, but I have to come up with the requirements, correct the AI’s code, and do all the plumbing that it can’t do efficiently. A lot of the code it comes up with is pure garbage; I have to know enough to know the difference. It does kind of replace a junior developer a little, if you were giving them grunt-work tasks, which could be an issue when starting out in this field.
There is so much more that software engineers do than just writing code. That’s really a minority of the time each day for me. If you’re familiar with Amdahl’s law, then you know there’s a mathematical limit to the productivity improvement here.
One example is its ability to do problem-solving when diagnosing a complex issue:
1. It repeatedly ignores described constraints and offers suggestions that logically cannot be solutions.
2. When I give it the explanation, it apologizes and... what do you know, two messages later it's back to the same suggestion.
3. When I sigh, try the suggestion anyway, and then go back to tell it that it didn't work, it will suggest something else, and when that doesn't work either, suggest the first thing again!
I'm not saying it's never going to happen... but it's really not there yet.
1. You're not far from beginner level in your C++/networking enthusiasm, or
2. The types of products/codebases you deal with were overengineered, outdated, dumb, or poorly designed in the first place.
Unfortunately there are still piles and piles of garbage code and so-called "architecture" being produced every day. That is low-hanging fruit for AI, and rightfully so.
Personally, I think we will do more and more complicated things instead of just being done with programming.
Agreed. GPT-3 and GPT-3.5 commonly hallucinate. GPT-4 can certainly be made to behave badly, but on real questions I've put to GPT-4 it has a 0% hallucination rate. The few wrong answers it has given have been "sensibly wrong" in that it's highly likely an experienced human programmer would have made the same mistake (eg lots of Stack Overflow answers are wrong in the same way), and even its wrong answers have been helpful in guiding me towards the correct solution.
These occasional, "sensibly wrong" GPT-4 answers are fundamentally different from the hallucinated "answers" I've received from GPT-3 and GPT-3.5: correctly formatted academic bibliography citations for technical papers that never existed, by authors that never existed, in journals that never existed.
Try a few times and ask for a more complex example.
Will we ever get to the point where it does everything independently? No idea. But right now your reaction is premature.
Obviously, ChatGPT can't do this today, but I don't see any fundamental reason AI won't be able to do it in the next decade or so.
Even in this context I don't know that there won't be demand for humans that have the mental rigor to program computers. I do think we will need to adapt.
This abstraction is compiled away into machine code that does one thing. Compilers also tend to take all kinds of shortcuts to make programs a lot faster, and therefore changing them might not be as straightforward as you think.
If you think I'm wrong, I'd like to see what "fundamental reasons" you've considered and how you've reasoned that they aren't an issue for this sort of system.
People expecting that their jobs never change would have a rough time surviving for decades in IT anyway. Automate all the drudgery away, new drudgery always comes back in its place.
Almost all of the difficult problems I face when developing software are not from the code-writing part.
It's things like deciphering what the customer needs, picking a good architecture for a new module, selecting an appropriate algorithm depending on various trade-offs, and figuring out how to integrate new features into existing in-production code with minimal disruption.
While there certainly are exceptions, and if you're in a C++ shop chances are you're one of them, I'm pretty sure my job is quite typical in that sense.
As such, I feel my job disproportionately involves writing code. Some is superfluous, due to tool limitations. Some is not, it encodes intent and restrictions, but might still be trivial to write.
So I've long been wanting to write less code, to focus more on the difficult parts.
Such a good description/summary. I’d love to use ChatGPT but the problems I solve on a day to day basis are almost never of the form “write some simple code to solve this well defined problem”
This requires subject matter experts, like yourself, to use and implement.
These LLM are tools. They are not sentient.
> If you are an expert in some area and suddenly everyone else catches up almost, it will surely impact your wages.
Find something that compels you and don't be a wage slave. That used to be what programmers did.
It's the same as IKEA, it's not as good quality as a handcrafted table from the 1800s. But it works well for most situations and most people.
A calculator didn't make mathematicians obsolete. It aided in the creation of more complex mathematicians.
However, as far as agency, these language models have none.
To those saying it will enable them to solve more problems, yes that is correct. It will give everyone "wings", but once everyone has wings the industry will be so different in terms of wage and employment.
To people saying GPT gives incorrect code, please try GPT4.
If your age and circumstances allow, you should think about whether a career change is possible. Not a hard change right now, but at least explore what options might be available. I am exploring the same myself.
To those talking of chess, that is not a correct comparison, since people want to watch (and connect with) human players playing chess (thus the pro scene survives), and play it for their own joy. Due to tools like Stockfish, it has become far easier for people to explore moves. If the aim in chess was to finish more and more games from random given positions, and people were paid per game (and some value was created by finishing it), Stockfish would easily drive wages to 0. Chess survives not because humans do better than AI, but because nobody is interested in playing against AI or watching Stockfish v Stockfish (by nobody I mean a very small number). Most people want to play against real people and watch real people play.
I’ve been doing this for years, and I have the least confidence I’ve ever had that I’ll ever pull it off. Credentials are expensive as hell and time consuming to obtain, and if massive wage loss is a concern, there aren’t many careers you’ll be safe starting over in. Anything “interesting”, but low paying will likely be affected in the same way software is.
TL;DR: I think your job is safe.
> What you're seeing now is as good as it gets (in terms of big breakthroughs, there will still be lots of small and medium ones).
Unless you have a strong source for this, I find it hard to believe. Also, GPT-3 didn't have many big breakthroughs over GPT-2, other than massive parameter size.
> Unless you have a strong source for this, I find it hard to believe
Like I said, this is just my opinion. I do have a world-expert level understanding of the technology, but at the same time I'm a strong believer that even experts are bad at predicting the future, so make of this what you will. Also, my impression of what constitutes a breakthrough and what doesn't might vary a lot from yours.
So far, I find it a pretty extreme statement, but it appears true here in the Netherlands. Most employees that I talk to at various industries can immediately point out one or two things which they would like to see automated. In most cases, software would replace data transfers which now occur via spreadsheets or paper.
But in my experience, there's some hard to solve bottlenecks for those sorts of organizations that still insist on largely paper-based workflows:
1. Their workflows are such a byzantine mess that either nobody understands them sufficiently to explain them (be it to a programmer, or to a no-code platform, or to an AI), or they're fundamentally broken and only fudged along by people who shouldn't be allowed to do their job, if the process was implemented "correctly"… or both.
2. It takes a special breed of people who have the necessary analytical skills to really pull off architecting automated workflows that work in practice. It comes naturally to some managers and consultants, a lot of programmers, and a bunch of others, but it's not a widespread skill, and those individuals know their worth. People who already can't conceptualize the underlying problems won't be able to do so with the help of AI any time soon.
3. It still takes budget to implement such workflows, AI or not, and the affected orgs usually don't want to spend any money on improving themselves.
Interesting. I agree it is a relatively decent boilerplate generator, but it is quite useless for anything else I've tried. How did you measure its performance?
And yes, programming has a lot more tasks, but essentially they have a lot in common with one another. I don't know. My job is obviously in danger right now and I don't see an easy way out. What am I going to do? Shift to another junior role that GPT may take over this version or next version? Or be a product guy or a marketing guy, which I HATE and AVOID being for my life? And how are those guys safe? Maybe I should go back to school and study general relativity -- at least AI is pretty weak in abstract math and physics. I don't see a way that we can be sure anything is orthogonal to what AI is capable of. The best thing I, no, you can say is: OK, AI might be able to take 80% of my job away, but my company still needs me to modify the code.
But what fun is in that? If AI can do, say, 50% of the task in a split second, why on earth would your employer EVER pay you to initiate a piece of code? It will pay you to debug and give it more prompts, but is that what you want to do?
But I'm probably paranoid. We will all be fine. After all every technological advance added jobs, right? We simply need to adapt then everything will be fine.
And you know what? I thought about something funny and almost LMAO'd -- all of us programming guys have been working so hard to automate ourselves away. But schools, hospitals, governments and pretty much everything else that we think is as slow as dinosaurs will stay dinosaurs.
While job losses are a concern, I think the more significant impact will be on the way companies operate. As firms exist to economize on the cost of coordinating economic activity, the streamlining and reduction of coordination needs brought about by LLMs will challenge the very foundations of many businesses. In this new landscape, individuals and small teams might outcompete larger organizations.
Freelancers and solo entrepreneurs could find themselves better positioned to compete in the market, driving the rise of smaller, agile businesses that can innovate rapidly and cater to niche markets. This shift will also change the skills needed for success in the field.
Overall, it's an exciting time to be part of this industry. Far from being a time to quit, it's an opportunity to adapt, grow, and harness the power of LLMs to reshape the world of work.
Will that come along with a 10-100x pay increase?
Why should I care about being so much more productive, when it won't come with any pay increase (and will likely come with a pay decrease)?
When our tools get so good that our employers don't need us, we won't need them. In a market where the engineers are competing with the manager class whose core skill is politics and bullshit, who would you bet on long term?
Thinking about switching to a better paid job and / or starting a side-project that would be able to generate semi-passive income.
I'm surprised the majority of devs don't see this as a threat. Check out r/cscareerquestions or r/programming; the general mood is people ridiculing the prospect of AI having an impact on jobs / wages.
Are you sure that's possible? If so, why haven't you done so already? Were you deliberately earning less money than you could?
It looks like the most interesting part of programming (for me) has been automated, while the parts I hate remain (at least for now?): gathering requirements, talking to people, understanding business, etc.
I hoped to earn enough money through commercial programming to live off it, and switch to programming languages/compilers and work in that area for small money but big fun.
It seems neither of these things are going to happen, and for technology/logic aligned people that are mediocre in their performance, and don't like working with people, the only place to go is trades. Maybe I'll still have a "programming" job, as an intermediary between AI and product people, but I feel the competition is not gonna be in my favor.
If AI truly replaces all creative work, maybe a good way to go would be acknowledging your inferiority to a superior species, buying some land in the country, and trying to live a quiet life off farming?
I've seen repeated suggestions to this effect. Meanwhile, housing prices in the US are 40% above what they were pre-pandemic.
But most developers do and know much more than that. They have domain knowledge, understand the relation between different systems and understand the codebase as a whole, not just a specific file or function that does one specific thing.
Don't give up. Try to use this new tool to improve your knowledge and use it to your advantage.
There will still be a source of income in fixing the AI generated code. It won't be as fun though.
Meanwhile Stable Diffusion managed to motivate me more than anything to learn drawing. I always gave up in the past because it takes so much practice to get good results. Now I can draw something, throw it into Stable Diffusion as input (the only way to semi-reliably get what I want) and get a more satisfying result, and it's still bad/inconsistent enough that I'm motivated to do it better.
But I never started coding to write code - I started coding to solve problems and make things, and I've been doing it for 40 years and 25 professionally. Code is the medium for me, not the message. I always thought that made me a bad coder, but maybe in this new era it puts me ahead of the game somehow. I dunno.
Personal development should be about what you like to do. I like solving problems and building the solution myself, for which I need to learn new things, which is also enjoyable.
There were always other c++ devs out there, why did you choose to do it anyway?
Is this everyone's experience?
But with the new stuff, who will write the answers for them to learn from first? Or will we be stuck with this level of knowledge forever now, in the case that no one produces sufficient training data anymore to be fed into them...
Maybe the future isn't so bleak. Or we just need to find suitable niches which won't be filled. Or where someone wants a human in loop.
ChatGPT's answer:
> While AI can enhance our productivity and efficiency, it cannot replace our ability to learn, grow, and adapt to new challenges.
> Therefore, it is important to view AI as a complement to our skills and knowledge
As an AI language model, I do not have personal opinions or emotions. However, I can provide some insights on the topic of AI and its impact on the tech industry.
It is understandable to feel discouraged or demotivated when faced with the capabilities of advanced AI models such as GPT-4. These models have demonstrated remarkable skills in various domains, including natural language processing, computer vision, and even software development. However, it is important to note that AI technology is still evolving and has its limitations. While AI can assist in automating certain tasks, it cannot replace human creativity, intuition, and problem-solving skills.
Moreover, AI technology is not a substitute for continuous learning and personal development in the tech industry. The rapid pace of technological innovation requires professionals to stay up-to-date with the latest trends, standards, and best practices. While AI can enhance our productivity and efficiency, it cannot replace our ability to learn, grow, and adapt to new challenges.
Therefore, it is important to view AI as a complement to our skills and knowledge, rather than a threat or a replacement. As the tech industry evolves, it will continue to create new opportunities and challenges for professionals to thrive and make a meaningful impact.
How long ago was this? Is this a long term change, or just a short term aberration that you're assuming will last forever?
That's all these AI tools are: better Stack Overflow searches. They have no ability to know what is correct or what is wrong; they lack judgement, which is one of the most important skills to have as a software engineer.
Engineering is about solving problems; these tools can't solve problems. They can regurgitate solutions to problems they have been trained on, oftentimes confidently incorrect, which is much worse than saying "I don't know".
They can't extract requirements from the client to find out what they really want.
They completely fail at moderately hard problems, or novel problems.
I think these tools may be worse for the industry because people will have less opportunity to learn problem-solving skills, since the AI will handle the easy stuff, and when the hard stuff comes along, people won't have the skills to solve it.
For those with good problem-solving skills, AI isn't a threat. There will always be work in solving hard problems, making judgements and trade-offs, doing actual thinking.
I keep seeing this idea, but have you actually used it? It clearly has the ability to problem solve. It's not just copying and pasting solutions.
Ok granted it's not especially good at it yet and the bullshitting problem is a real issue, but how long do you think that will remain unsolved?
I think where it will continue to struggle is niche domains that aren't on the internet a lot, e.g. hardware design. But if you're writing CRUD apps all day you should be worried!
Or at least brace for your job description to change from "Software Developer" to "Prompt Developer and AI Output Verification".
I'm curious why some people seem to think it's going to be solved imminently. The last 1% is always the hardest (by far)!
That's why I always tend to laugh at the self-awarded "Engineer". Most of you aren't. Not by a long shot. Engineers in the classical sense need to think in many more dimensions. Try developing modern electronics: analog and digital electronics, high-speed and high-frequency electronics, embedded development while debugging both software and hardware in tandem, the odd EMC problem backfiring into your software, plus mechanical engineering and thermal management, notwithstanding supply chain management.
Nothing I have seen from an AI comes even close. Hell, I haven't even seen an AI autorouter that's worth a second look. And that's already working in a constrained solution space.
TLDR: If you're not a one-trick-code-vomiter, you'll be fine.
That's clearly a skill you need.
We are just moving up the abstraction hierarchy.
Can you imagine assembly coders and their fears of C making them obsolete?
There is no end to the demand for software. And no end to the demand for smart and competent people to create solutions with the tools at hand.
Move along folks, nothing to see here ;)
At this particular point in time, I would not worry too much about getting replaced by an AI.
To wit:
> Please write me a C++ program to compute the 1000000-th decimal of pi
GPT-4 answers:
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
int main() {
    int n = 1000000;
    int k, j;
    double pi = 0.0;
    double denom;
    for (k = 0; k < n; k++) {
        denom = 8.0 * k + 1.0;
        pi += (1.0 / pow(16.0, k)) * (
            4.0 / denom -
            2.0 / (denom + 2.0) -
            1.0 / (denom + 3.0) -
            1.0 / (denom + 4.0)
        );
    }
    pi = pi - floor(pi);
    pi *= pow(10.0, 6);
    cout << fixed << setprecision(0) << pi << endl;
    return 0;
}
The code is totally broken: GPT-4 entirely fails to understand that floats have limited precision. It's cool (and very helpful) that it can regurgitate Plouffe's algorithm (although I haven't checked that the algo in the answer is correct), but to me, this example clearly proves that GPT-4 is simply a parrot with a very large memory that does not understand anything it does.
A glorified hash table with a very sophisticated compression and retrieval algorithm, but a very far cry from AGI.
It's going to allow us to build a much better search engine, a great coding assistant, it will enhance your productivity as a programmer, but it is - so far - altogether incapable of replacing you: it does not understand what it's doing.
[EDIT 1]: I checked Plouffe's formulas and GPT-4 gets it wrong: the key feature of Plouffe's formula is that it yields binary or decimal digits of pi directly [1] (no iteration required), so... why does GPT-4 loop a million times to produce the answer, ... meh.
[1] http://plouffe.fr/NEW/A%20formula%20for%20the%20n%27th%20dig...
[EDIT 2]: oh, and, I just tried the same example on Bard; it fares way, way worse than GPT-4:
#include <iostream>
#include <iomanip>
using namespace std;
int main() {
    // Calculate the first 1000000 decimal places of pi.
    double pi = 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679;
    // Write the first 1000000 decimal places of pi to the console.
    for (int i = 0; i < 1000000; i++) {
        cout << setprecision(10) << pi << " ";
    }
    return 0;
}
These things are impressive, but they have a very long way to go. Worst of all: they seem at this point incapable of verifying the consistency of their answers, correcting and iterating until they at least output something vaguely coherent.
I've met a lot of people in my professional career that are deemed "experts" because they have the exact same skill set as LLMs : huge memory and a gift for crafting BS.
But in both cases, there is no actual thinking involved.
In particular, if the answer produced does not actually solve the problem at hand, there is no "check that my solution works, correct, and iterate towards an actually working solution", something most humans do very instinctively and naturally.
So, TL;DR: if you are an "expert" in the same way LLMs are "experts", i.e. you just regurgitate knowledge and fudge it to make it look like it makes sense, then YES: you will get replaced, and by the way: thank god for that.
If, on the other hand, you're an actual "expert" in that you are capable of leveraging your vast encyclopedic knowledge of a subject to guide you towards an actual working solution to a problem, then you're very likely safe for quite a while longer.