People don’t need to feel threatened, but it is already replacing much of the mundane programming and writing work we used to do. People who can only do that type of mundane work (CRUD, which ChatGPT can do in any language or framework, plus integration, transformation, plumbing, and so on) are already gone.
A lot of our (very well paid) work is taking data, transforming it, sending it to some API, getting the result, transforming that, and moving to the next step. A few months ago this was just boring human work; now you copy-paste the spec and out it falls. Sure, you might need some fixes (as the article says), but not many, and it learns (you add new knowledge to the prompt ‘memory’). I have been working with my own custom client on top of ChatGPT for months now; it has a lot of custom prompting and effort behind it to make sure it does as well as it can. I can throw all of that away in a few months when improvements come from their side.
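For what it's worth, the "take data, transform it, pass it along" work described here is essentially a chain of small transform steps. A minimal sketch (the field names and steps are made up purely for illustration):

```python
def transform(record: dict) -> dict:
    """One illustrative step: derive a total from made-up input fields."""
    return {"id": record["id"], "total": record["qty"] * record["unit_price"]}

def tag_sent(record: dict) -> dict:
    """A second step, standing in for 'call an API, reshape the reply'."""
    return {**record, "status": "sent"}

def run_pipeline(records: list[dict], steps) -> list[dict]:
    """Apply each transform step to every record, in order."""
    for step in steps:
        records = [step(r) for r in records]
    return records
```

The point being: each step is trivially specifiable, which is exactly why a spec pasted into a model tends to produce workable versions of them.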
It's something I could do with Swagger in 5 minutes as well; you don't need an AI to generate boilerplate code.
The difference with using swagger would be, I know the code is correct.
The actual problem might be that you're so rusty you don't actually know what the job entails or what it's worth? I mean, you have clients, and you're pasting code from ChatGPT into source control, and people are paying you for this?
(1) Make structured requests to ChatGPT based on an initial prompt. That request asks ChatGPT for all of the files associated with solving the problem.
(2) ChatGPT produces each file in a structured fashion. These could range from source files to project files like package.json or a Dockerfile.
(3) Take the code output and spin up a corresponding StackBlitz or other online resource (assuming those have APIs).
I’m shocked and bummed that there’s not more effort to make this a reality yet. We’ve had ChatGPT for a month and this is low hanging fruit (or so it would seem).
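The three steps above could be sketched roughly like this. The `ask_model` function is a stand-in for a real chat-completion API call, and its canned JSON reply is purely illustrative:

```python
import json
import pathlib

def ask_model(prompt: str) -> str:
    """Stand-in for a real chat-completion API call; returns canned JSON here."""
    return json.dumps({
        "package.json": '{"name": "demo", "version": "0.1.0"}',
        "Dockerfile": "FROM node:18-alpine\nCOPY . .\n",
    })

def generate_project(spec: str, out_dir: str = "generated") -> list[str]:
    """Steps 1-3: request a filename -> contents map, then write each file out."""
    reply = ask_model(
        "Return a JSON object mapping file paths to file contents "
        f"for a project that satisfies this spec:\n{spec}"
    )
    files = json.loads(reply)
    root = pathlib.Path(out_dir)
    root.mkdir(parents=True, exist_ok=True)
    written = []
    for name, contents in files.items():
        path = root / name
        path.write_text(contents)
        written.append(str(path))
    return written
```

Step 3 would then point the hosting service at `out_dir`; requesting a strict JSON map of files is what makes the output machine-consumable at all.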
Programming is not just taking product requirements and spitting out the correct algorithms.
I have no doubt that machines can write a lot of the boilerplate code going forward. But that's just the evolution of tools. Actually building systems; composing them and keeping them running... this is a whole other art.
A lot of people haven't really played with chatGPT in depth. This thing is next level. It's not just spitting out generic things, there is genuine understanding here and genuine creativity.
Here's a snippet from a chatGPT conversation I had based on your post:
ME: Give more examples of squishy stuff that software developers need to handle.
chatGPT:
1. Ethical considerations: As technology advances, software developers may need to consider ethical implications of the products they create. For example, a developer working on a facial recognition system may need to consider issues of privacy and bias.
2. User experience (UX) design: Creating a software that is easy to use and intuitive for the user is an important aspect of software development. Developers may need to consider factors such as layout, navigation, and accessibility to ensure that the software is user-friendly.
3. Business strategy: Developers may need to consider the overall business strategy of the company they work for, and how their work contributes to achieving that strategy. This may involve working with other departments, such as sales or marketing, to ensure that the software is aligned with the company's goals.
4. Security: Developers should consider security in their software development process to protect the software from hacking attempts, data breaches, and other security risks.
5. Cultural and linguistic diversity: Developers must consider the cultural and linguistic diversity of their users, in order to create software that is inclusive, accessible and understandable for all users.
6. Communication and collaboration: Developers often work in teams, and must be able to communicate effectively with other team members and stakeholders to ensure that the software is developed as intended.
ME: Give example python code that serves some random customer in some random sector. Then explain how you would change that code to better serve the "squishy stuff" needed by the customer.
chatGPT: Here's an example of Python code that serves a random customer in the retail sector by calculating the total cost of a purchase including sales tax:
def calculate_total_cost(price, sales_tax):
    total_cost = price + (price * sales_tax)
    return total_cost

purchase_price = 100
sales_tax_rate = 0.08
total_cost = calculate_total_cost(purchase_price, sales_tax_rate)
print(total_cost)
To better serve the "squishy stuff" needed by the customer, we could make the following changes:
1. Add a function to calculate discounts based on customer loyalty or promotional offers, for example by adding a function that uses customer's ID as an argument.
2. Add a function to handle different tax rates for different states or countries, for example by adding a function that uses the customer's address as an argument.
3. Add a function to handle different currencies, for example by adding a function that uses the customer's preferred currency as an argument.
4. Add a function to handle different languages, for example by adding a function that uses the customer's preferred language as an argument.
Here is an example of how the code would look after making these changes:
def calculate_total_cost(price, sales_tax, discount, customer_id):
    total_cost = price + (price * sales_tax)
    if customer_id in loyalty_customers:
        total_cost = total_cost - (total_cost * discount)
    return total_cost

def tax_rate(customer_address):
    if customer_address in country_1:
        return 0.08
    elif customer_address in country_2:
        return 0.09
    else:
        return 0.07

def currency_convert(price, customer_currency):
    if customer_currency == "USD":
        return price
    elif customer_currency == "EUR":
        return price * 0.8
    else:
        return price * 0.9

def language_change(text, customer_language):
    if customer_language == "en":
        return text
    elif customer_language == "fr":
        return translate_to_french(text)
    else:
        return translate_to_spanish(text)

purchase_price = 100
customer_id = 123
customer_address = "US"
customer_currency = "USD"
customer_language = "en"
sales_tax_rate = tax_rate(customer_address)
purchase_price = currency_convert(purchase_price, customer_currency)
total_cost = calculate_total_cost(purchase_price, sales_tax_rate, 0.1, customer_id)
text = f"Your total cost is: {total_cost} {customer_currency}"
print(language_change(text, customer_language))
This code now considers the customer's address, currency, language and ID, to provide the customer with a more personalized experience.
Honestly, there's a good number of people who aren't getting how revolutionary ChatGPT is because of all the other inconsequential AI buzz. You really need to play with this thing in depth. It is creative. It can create its own specs and implement those specs, like I asked it to. It won't replace you, but it's definitely the precursor to something that will.

Earlier today I used ChatGPT to explain concepts I lacked understanding of, which allowed me to accurately interpret the statistical information in a scientific paper (I'm a reporter and got through a humanities degree with only an introductory stats class, like 10 years ago).
When I then asked it to copyedit the draft of my article, however, it failed, making it worse and introducing numerous inaccuracies.
> It's not just spitting out generic things, there is genuine understanding here and genuine creativity.
Seriously? I really can't wrap my head around where, specifically, you found "understanding" or "creativity".
The "squishy stuff" is super boring SEO-like text you'd find in some sales-person blog, who needs to run their mouth, but have zero in-depth understanding or appreciation of the hard domain problems. How is any of this non-generic? There is absolutely no substance here!
Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.
The code examples are also super bad, incorrect, and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely two hardcoded currencies and languages and routing them with "if", while also providing an "else" fallback that you never implied in your requirements.
This is basically throwaway random code, only thematically connected to your requirements, and it could never under any circumstances run in any production scenario.
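For contrast, the data-driven alternative this critique is gesturing at might look like the following. A minimal sketch only: the rates, country codes, and currency conversions are entirely made up for illustration, not real tax or FX figures:

```python
# Illustrative lookup tables; values are invented for the sketch.
TAX_RATES = {"US": 0.08, "DE": 0.19, "JP": 0.10}
FX_RATES = {"USD": 1.0, "EUR": 0.92, "GBP": 0.79}

def total_cost(price_usd: float, country: str, currency: str,
               default_tax: float = 0.07) -> float:
    """Price with tax applied, converted to the customer's currency."""
    tax = TAX_RATES.get(country, default_tax)
    fx = FX_RATES[currency]  # fail loudly on an unknown currency, don't guess
    return round(price_usd * (1 + tax) * fx, 2)
```

Extending support to a new country or currency then means adding a table row, not another `elif` branch, which is the kind of structural judgment the generated code lacked.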
> Honestly there's a good number of people who aren't getting how revolutionary chatGPT
Honestly, there's a good number of people who don't understand the objective limits and properties of ChatGPT, despite it actually being totally revolutionary.
All in all, ChatGPT output seems more like the work of some total but productive idiot left alone with a task and Google: simplistic, totally naive, zero understanding or creativity.
It's good for fun rhetorical exercises, very useful for things that you don't know anything about, but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".
That's an absolutely massive claim but I think it's a false one.
I've seen it be creative, but how good or useful that creativity is seems to be quite the roll of the dice, as opposed to someone or something being intentionally creative.
This is not to knock the system, but depending on the "seed" you might get something brilliant or silly. I think there's a major difference between that and your claim that it's "genuinely" creative. Genuine creativity would imply that 100% of the time I get something new and relevant, which isn't what happens for me.
For example, I re-asked it your original question and it says this is what developers do:
User input: Developers need to anticipate and handle unexpected or invalid input from users, such as special characters, empty fields, or out-of-range values.
Randomness: Developers may need to incorporate randomness into their programs, such as generating random numbers or shuffling a deck of cards.
Concurrency: Developers may need to handle concurrent access to shared resources, such as preventing race conditions or deadlocks.
Network communication: Developers may need to handle network communication, such as dealing with lost or delayed packets, or handling different types of connections (e.g. HTTP, WebSockets).
Time: Developers may need to handle time-related issues, such as scheduling tasks, handling time zones, or dealing with clock drift.
Human language: Developers may need to handle natural language processing tasks, such as understanding speech, identifying sentiment, or handling different languages.
Image and video processing: Developers may need to handle image and video processing tasks, such as object detection, image recognition, and video compression.
I wouldn't consider a developer's job to be video processing tasks or object detection, nor would I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact, I couldn't really imagine a more generic attempt at a response to that question.

It's for sure impressive, but I feel the more "creative" I ask it to be (the more entropy I inject), the more likely it is to come up with incorrect answers. However, I'll concede that an "incorrect" answer could be used as inspiration for some new ways of thinking about solving problems.
I feel personally, the more I play with these systems, the more predictable they become, and this isn't something "truly creative" would be.
Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain". When really nothing is perfect, especially the data it's being trained on.
Folks, let's get real.
I kind of snarked at this, but then I realized it's written by Bertrand Meyer and the conclusion is spot on: I agree that, should these tools proliferate, they will highlight the importance of formal methods and verification.
Even if ChatGPT-like systems get faster and gain deeper models of computer syntax and structure, I suspect the one problem that will be difficult to solve is elegance and abstraction. Often the abstractions we choose are based on laws and ways of thinking that help us manage complex phenomena, with notation that makes it easier for us to reason about them.
And even if that does get solved somehow, we're going to have to understand the results somehow. A stronger emphasis on proofs and model checking will be useful to anyone who wants to be sure that the program FutureGPT produced isn't simply "somewhat right" but is actually right.
The fact that this industry was able to sustain itself for so long without actually getting this is a miracle in itself. This is what ruined OOP when it became just management's lingo, and it'll probably happen again with tools like ChatGPT: we are just going to vomit code in a faster (and cheaper) way.
A few rebellious souls will soon realize that domain modelling and writing correct code is hard, and they'll end up in the usual places that already exist today, working at some niche company that applies pure FP or proving theorems in Lean/Coq/Agda.
A generalist SE (side note: which is how I assume HN's readership leans) will be the one to benefit, as a good generalist SE can combine networking understanding (CCNA), Ops/DevOps (cloud), SE skills (backend, frontend, mobile), and more (embedded programming is perhaps where the line gets drawn, but ChatGPT is good at explaining hardware concepts too!) to thrive while depending on ChatGPT or an equivalent system for discernible assistance.
(Minor: I even tweeted my opinion with my personal note about it too: https://twitter.com/raj_nathani/status/1615709768487948292?s...)
I.e. maybe you will be able to use AI to solve a problem like "write a lambda to accept JSON in this format and put it in a database with this schema", or even potentially "optimize this function to run as fast as possible on a multi-core processor", but it seems we're quite far from something like "write an application with a GUI that looks like this and has the following functions".
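That first kind of task really is mechanical. A hedged sketch of such a handler, using SQLite as a stand-in for "a database"; the `events` table schema and the required `id` field are assumptions made up for the example:

```python
import json
import sqlite3

# Hypothetical schema; a real task spec would dictate this.
SCHEMA = "CREATE TABLE IF NOT EXISTS events (id TEXT PRIMARY KEY, payload TEXT)"

def handler(event_json: str, db_path: str = "events.db") -> bool:
    """Validate incoming JSON and store it; return False on rejected records."""
    record = json.loads(event_json)  # raises on malformed JSON
    if "id" not in record:
        return False                 # reject records missing the key field
    with sqlite3.connect(db_path) as conn:
        conn.execute(SCHEMA)
        conn.execute(
            "INSERT OR REPLACE INTO events (id, payload) VALUES (?, ?)",
            (record["id"], json.dumps(record)),
        )
    return True
```

A language model can plausibly write this because the spec fully determines the code; the GUI-application case fails precisely because the spec doesn't.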
https://aidev.codes. By default it is like the OpenAI JavaScript playground, except it immediately hosts the results in a web page and allows you to edit the accumulated prompt.
Also can edit specific files. Many improvements planned. Tonight I got a lot of stuff done related to creating and searching knowledgebases using embeddings.
If anyone tries it, please consider writing any feedback in the Discord.
Perhaps I can try to say, "Please decide what browser features I should use to maximize profit.", and it will actually do some proper research and contract out studies and all that... but I don't see that happening yet. Even still, the point of validation remains.
I can't help but draw a parallel with automated theorem provers. Sure they resolve "true", but then who validates the validators? It's a never ending cycle.
At the end of the day, you must learn to love and trust your tools and then stand by the joint creations.
But even something straightforward: "write a hash function whose output evenly mixes the input" or "write a function to convert an RGB image to a dithered B&W image" is harder than it seems. How do I know the dithering algorithm chosen is good for my data? How do I know the hash function actually does what I asked and doesn't have some funnel somewhere? I'll have to write some tests, but I can't get the AI to write the tests, because how do I know the tests will be right?
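For the hash-function worry specifically, one crude check is to hash many random keys and verify the bucket counts come out roughly uniform. A sketch only: the key count, bucket count, and tolerance are arbitrary choices, and passing such a check is necessary but nowhere near sufficient to call a hash good:

```python
import hashlib
import random

def bucket_counts(hash_fn, n_keys: int = 10_000, n_buckets: int = 64) -> list[int]:
    """Hash n_keys random keys and count how many land in each bucket."""
    counts = [0] * n_buckets
    rng = random.Random(0)  # fixed seed so the check is reproducible
    for _ in range(n_keys):
        counts[hash_fn(rng.randbytes(16)) % n_buckets] += 1
    return counts

def looks_uniform(counts: list[int], tolerance: float = 0.5) -> bool:
    """Crude check: every bucket within +/- (tolerance * mean) of the mean."""
    mean = sum(counts) / len(counts)
    return all(abs(c - mean) <= tolerance * mean for c in counts)

def sha_bucket(key: bytes) -> int:
    """A known-good reference hash: first 8 bytes of SHA-256 as an integer."""
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
```

Which circles back to the commenter's point: you still had to know that uniformity is the property to test, and a more rigorous version (a proper chi-squared test, avalanche tests) takes exactly the expertise the AI was supposed to replace.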
Like here is what it generates for the 2016 day 1 problem:
def find_distance(instructions):
    x, y = 0, 0
    direction = 0  # 0: North, 1: East, 2: South, 3: West
    visited = set()
    visited.add((0, 0))
    instructions = instructions.split(", ")
    for instruction in instructions:
        turn = instruction[0]
        distance = int(instruction[1:])
        if turn == "R":
            direction = (direction + 1) % 4
        else:
            direction = (direction - 1) % 4
        for _ in range(distance):
            if direction == 0:
                y += 1
            elif direction == 1:
                x += 1
            elif direction == 2:
                y -= 1
            else:
                x -= 1
            if (x, y) in visited:
                return abs(x) + abs(y)
            visited.add((x, y))
    return abs(x) + abs(y)
This function returns 113 from my input for that day, which is actually the answer for part 2... For part 1 it should be 234.

When I tried in Rust, the solution didn't even compile, which is business as usual as far as my experience goes when trying to get ChatGPT to write anything practical (not a 'toy' example) in Rust.
I gave it another chance with day 2 in python and it failed at that as well. These are VERY simple tasks, CHILDREN can solve the initial couple days of advent of code.
In this article they give an example of a square root function. Maybe the authors could consider trying some more realistic tasks? So silly...
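For reference, the part 1 variant of that day 1 problem just needs the final Manhattan distance, with no early return on revisited positions, which is exactly the behavior the generated function got wrong. A sketch, checked against the puzzle's published examples:

```python
def part1_distance(instructions: str) -> int:
    """AoC 2016 day 1, part 1: Manhattan distance to the final position."""
    x, y = 0, 0
    direction = 0  # 0: North, 1: East, 2: South, 3: West
    for instruction in instructions.split(", "):
        turn, distance = instruction[0], int(instruction[1:])
        direction = (direction + (1 if turn == "R" else -1)) % 4
        dx, dy = [(0, 1), (1, 0), (0, -1), (-1, 0)][direction]
        x, y = x + dx * distance, y + dy * distance
    return abs(x) + abs(y)
```

The puzzle statement gives "R2, L3" -> 5, "R2, R2, R2" -> 2, and "R5, L5, R5, R3" -> 12 as examples.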
Because it's a nascent technology that hasn't been optimized for solving advent of code problems. It can, however, do a lot of other cool stuff.
It often starts hallucinating the input in the exact format advent of code gives, so I'm like 99.9% certain it has been trained on a large corpus of advent of code.
Personally I use codex sometimes for debugging help so I agree it can do cool stuff. I just disagree it is "almost" right at solving problems -- it almost never generates code that even compiles for me when prompted to do anything not super trivial like advent of code brain teasers.
https://github.com/openai/openai-cookbook/blob/main/techniqu...
Yep, that seems to be the key, and some realized that already: https://news.ycombinator.com/item?id=34463061
For me, it solved part (a) perfectly when I told it: "To solve this, write a Python 3 function that takes a string like `"R4, R3, R5, L3, ..."` and outputs the number of blocks to Easter Bunny HQ." The original question on its own was a bit ambiguous in my opinion because it doesn't explicitly contain the input which the user reads on a second page.
In any case, neither is strong evidence for or against its ability to solve problems like these. First, it's N=1. Second, it's a problem from its training set.
For me, Copilot/ChatGPT adds value not by replacing my programming but by (a) writing simple code for me and (b) answering my questions about things I don't understand. I operate in a supervisory role where I have to double check everything it says. But, critically, it's faster for me to double check its work than to do everything myself.
It genuinely fails 100% of the time at coding anything non-trivial for me, and about half the time for simple stuff. Glad you've been having success though, maybe some people are just better at getting it to work, or it has certain domains it excels in, or your tasks are fairly simple.
As far as I can tell, until it actually understands what it is doing, it's just kind of "blending" what it thinks the most common response is, based on thousands of other similar responses to similar questions.
I can imagine people tweaking it down to be more "right" in some cases, but then won't it just become more wrong in other cases?
I'm actually starting to understand why AI is good at generating pictures: statistically, it's just flipping bits to look like other bits it's seen, relative to the input specified. Code, on the other hand, is something which needs to be far more precise.
There's also the fact that the more people lean on this tech, the more mistakes will be perpetuated into the system and the fewer samples it will have available to learn from, as people are no longer feeding it new answers.
I guess, like how DeepMind trained AlphaGo, it could code against itself to learn, but I do imagine the problem space for it to "play itself" in is practically infinite, even compared to Go, which is itself a huge space.
I'm a software person, not an AI person, but I love thinking about it.
Is there domain limitation to this growth and performance? Medicine, theoretical physics, art, engineering, pure/applied maths, etc.?
I don't see how you guys are getting this from the current tech? Maybe there is an educational resource someone can suggest?
The fact that it's often right is a horrifying omen of the future.
chatGPT will not replace you. It is the precursor to the thing that will replace you.
Cool, man. So why don't you get chatGPT to start writing you some software? Or optimize an algorithm? Hey, maybe it'll tackle the travelling salesman problem in polynomial time!
There are SO many economic and scientific opportunities that would make you wealthy and famous if it's as capable as you claim (e.g. if it doesn't just solve elementary problems by regurgitating shitty code).
For software engineering, I look forward to no longer writing the majority of my code, and instead managing this tool and helping guide it to create apps and websites faster than I can, including outside the areas of my expertise.
In addition, it can help individuals and teams learn/debug/ship things quicker - which is unfortunately/fortunately something that every company wants and needs.
If it does what some think it will, and it may, UBI may well become necessary, especially if you also consider diffusion tools like DALL-E and Stable Diffusion.
I'm not saying that it will never happen, but if your job is threatened by ChatGPT right now, you were not really doing real software engineering anyway.
Could actually be the end of IT as we know it.
So, nothing, I guess.