> It's not just spitting out generic things, there is genuine understanding here and genuine creativity.
Srsly? I really can't wrap my head around where specifically you found "understanding" or "creativity".
The "squishy stuff" is super boring, SEO-like text you'd find on some salesperson's blog: someone who needs to run their mouth but has zero in-depth understanding or appreciation of the hard domain problems. How is any of this non-generic? There is absolutely no substance here!
Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.
The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.
This is basically throwaway code, only thematically connected to your requirements, that could never under any circumstances run in any production scenario.
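To make the complaint concrete, here is a hypothetical reconstruction of the criticized shape next to a table-driven shape you'd expect in anything production-bound. Function names, currencies and formats here are illustrative assumptions, NOT the actual GPT output being discussed:

```python
# Hypothetical reconstruction; currencies, formats and names are
# illustrative assumptions, not the actual GPT output.

# The criticized shape: two hardcoded currencies routed with if/else,
# plus a silent "else" fallback nobody asked for.
def format_price_bad(amount, country):
    if country == "US":
        return f"${amount:.2f}"
    elif country == "DE":
        return f"{amount:.2f} EUR"
    else:
        return str(amount)  # unrequested fallback that hides errors

# A production-minded shape: formatting rules live in data, unsupported
# input fails loudly, and adding a currency is a one-line change.
CURRENCY_FORMATS = {
    "US": lambda a: f"${a:.2f}",
    "DE": lambda a: f"{a:.2f} EUR",
    "JP": lambda a: f"{a:.0f} JPY",
}

def format_price(amount: float, country: str) -> str:
    try:
        return CURRENCY_FORMATS[country](amount)
    except KeyError:
        raise ValueError(f"unsupported country: {country!r}") from None

print(format_price(12.5, "US"))  # -> $12.50
```

The point of the second shape is exactly the "squishy stuff" above: unspecified inputs surface as explicit errors instead of silently flowing downstream, and the supported set is data, not control flow.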
> Honestly there's a good number of people who aren't getting how revolutionary chatGPT
Honestly, there's a good number of people who don't understand the objective limits and properties of chatGPT, despite it actually being totally revolutionary.
All in all, chatGPT's output seems more like the work of some total but productive idiot left alone with a task and Google: simplistic, totally naive, zero understanding or creativity.
It's good for fun rhetorical exercises, very useful for things that you don't know anything about, but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".
You'll be impressed once the successor of chatGPT takes your job. You realize that chatGPT wasn't trained to be a programmer, right? They did virtually nothing to make it a good programmer. It learned programming as a side effect. Wait till they make the thing targeted towards programming.
>The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.
All of what you said is true, yet you are blind if you can't see why it's revolutionary. In fact, it can do better. You can specify all the requirements you want: no use of undefined variables, no hardcoded currencies, more flexibility, more features, no routing with if statements. It will do all of what you ask of it. And this is a tool that only learned how to program as a side effect. If you're not impressed, you're in denial.
>Honestly, there's a good number of people who don't understand the objective limits and properties of chatGPT, despite it actually being totally revolutionary.
No. I'm sorry, but you're biased. Artists are ALREADY starting lawsuits because the art is on par with what they create. I don't know if you realize this, but drawing something photorealistically is actually significantly HARDER than programming. There's literally no bootcamp that can produce a fine artist in a year, because it's brutally hard to do. If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.
>but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".
Of course. EVEN with supervision, chatGPT isn't useful enough to be used in a professional environment. I AGREE. The successor to chatGPT, however, will be. In fact, one of the successors to chatGPT will be able to replace a person's job. Nobody is afraid of chatGPT taking their job. They are afraid of chatGPT because it is a herald of the future AI that WILL.
>Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.
chatGPT says:
Handling Personal Data: A software engineer at a healthcare company is tasked with implementing a system to store and process patient data. They use the programming language Python and the Django web framework to build the system. To ensure the privacy and security of personal data, the engineer implements industry-standard encryption techniques for data at rest and in transit, as well as implementing user authentication and authorization mechanisms. The engineer also conducts regular security audits and penetration testing to identify and address potential vulnerabilities.
Ensuring Verifiability and Correctness: An engineer at a financial services company is responsible for building and maintaining a system that processes financial transactions. They use the programming language Java and the Spring framework to build the system. To ensure the verifiability and correctness of the system, the engineer implements automated unit tests, integration tests, and end-to-end tests to verify that the system behaves as expected. They also use code review tools like SonarQube and Crucible to ensure that the code is maintainable, readable and adheres to company coding standards.
Productive Quality Assurance: An engineer at a e-commerce company is tasked with building and maintaining a system that processes online orders. They use the programming language Ruby and the Rails framework to build the system. To ensure the quality of the system, the engineer works closely with the QA team to establish a test-driven development process and use tools like Selenium and Cucumber to automate functional and acceptance testing. They also use performance testing tools like Apache JMeter to ensure the system can handle high traffic loads.
Robust and Scalable Systems Architecture: An engineer at a social media company is responsible for building and maintaining a system that can handle millions of users and handle high traffic loads. They use the programming language Go and the Microservices architecture to build the system. To ensure the system is robust and scalable, the engineer designs the system to be highly available and fault-tolerant. They use load balancers like HAProxy and Nginx to distribute traffic across multiple servers and use service discovery tools like Consul and Etcd to automatically route requests to the appropriate service.
Managing Complexity: An engineer at a logistics company is tasked with building and maintaining a system that tracks packages and shipments as they move through the supply chain. They use the programming language Python and the Flask web framework to build the system. To manage complexity, the engineer uses a microservices architecture to break the system into smaller, more manageable pieces. They also use containerization technology like Docker to package and deploy the system, which makes it easier to manage and scale. They also use monitoring tools like Prometheus and Grafana to track the system performance and identify potential issues.
Observability: An engineer at a SaaS company is responsible for building and maintaining a system that provides analytics and reporting features to customers. They use the programming language Node.js and the Express.js framework to build the system. To ensure observability, the engineer uses logging and monitoring tools like ELK stack, Grafana and Prometheus to track system performance and identify potential issues. They also use distributed tracing tools like Zipkin and Jaeger to track request and response times across multiple services.
Productive documentation and self-documenting approaches to work: An engineer at a cloud service provider is tasked with building and maintaining a system that automates the provisioning and management of cloud resources. They use the programming language Python and the Flask web framework to build the system. To ensure productive documentation, the engineer uses a documentation-as-code approach and stores all documentation in the same repository as the code. They use tools like Sphinx and Doxygen to automatically generate API documentation from code comments. They also automatically generate documentation from code comments and maintain the documentation up-to-date. They also use self-documenting approaches like using clear and descriptive variable names, function names and comments in the code, making it easy for other developers to understand the codebase. They also use a wiki system like Confluence to store additional information, such as design decisions, meeting notes, and user guides.
Literally. You want to drill down on more squishy stuff? The squishy stuff is what's easy for chatGPT, because chatGPT is SPECIFICALLY trained on that stuff. It's the coding and math that it has more trouble with, because it's NOT trained specifically on code and math.

> You'll be impressed once the successor of chatGPT takes your job.

Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for the software actually working correctly as the business expects it to?
It would totally make the coding-in part faster (just as IDE suggestion do), but this was always the brainless tedious manual labour part.
> You realize that chatGPT wasn't trained to be a programmer right?
I realize that neural networks are, by design, unable to generate correct formal descriptions (where each minor detail has a specific and important meaning).
Neural networks are great for tasks where minor details are largely unimportant compared to the overall "impression": generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though they can be used to draw inspiration for the overall image), etc.
> All of what you said is true yet you are blind if you can't see why it's revolutionary
I never said it's not revolutionary. I merely point out its hard limits.
> In fact it can do better. You can specify all the requirements you want. No use of undefined variables, no hardcoded currencies. More flexibility more features no routing with if statements.
Sure, you can specify every minor detail: how the data should flow, which patterns should be used, which things should be pulled from configs, how the interfaces should be structured, plus a shitload of negative prompts. But those are details that only a domain expert would know. And again, there are no guarantees that the result would actually be correct: the expert will have to review all of this extra-attentively, because there is no chance that the expert's assumptions are the same as the NN's "assumptions".
So you basically still need a domain expert, who now has to do extra (guess)work, instead of just writing a formal description directly in code. What's the profit then?
> Artists are ALREADY starting lawsuits because the art is on par with what they create
Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.
AI artists actually typically do a shitload of prompt-engineering, pipe different parts of the image through different NNs (appropriate to the specific situation) and do a lot of manual post-processing so the result looks good.
> I don't know if you realize this but drawing something photorealistically is actually significantly HARDER than programming
These are two completely different tasks. You are comparing apples and oranges that can't really be put on the same scale, unless by "HARDER" you specifically mean the amount of brainless tedious work required to complete the job.
Also, in practice artists just use and process real photos when they aim for "photorealistic": no one actually draws photorealistically from scratch, normally (though one can obviously invent any kind of challenge for themselves if they want to).
> There's literally no Bootcamp to produce a fine artist in a year because it's brutally hard to do
Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.
> If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.
Don't kid yourself thinking that these two are similar or comparable sets of tasks.
> chatGPT isn't even useful enough to be used in a professional environment. I AGREE.
That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.
> The successor to chatGPT, however will be.

> They are afraid of chatGPT because it is a herald about the AI in the future that WILL.
The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.
> chatGPT says:
"Handling Personal Data" - somewhat scratches the surface, but it doesn't mention the actual problem: that first and foremost it's a regulatory matter, and all the specifics stem directly from it.
"Ensuring Verifiability and Correctness" - clearly confuses runtime and compiler properties with quality assurance, way off.
"Productive Quality Assurance" - didn't understand the productivity issue (to test or not to test), and even if we drop the "productive" part, the process it describes is also incorrect: engineers don't really ever work with the QA team in order to establish TDD.
"Robust and Scalable Systems Architecture" - way off. While you'll often see service discovery, nginx, HAProxy, etc. in scalable systems, that's not what makes scalability; properly managing state and persistence in the appropriate places does.
"Managing Complexity" - way off. I don't suppose this one even requires an explanation, total gibberish.
"Observability" - as expected, this is a rather good one. Unlike the other points (which are concepts/problems), this one is a rather well-defined term.
"Productive documentation and self-documenting approaches to work" - totally ignored the "productive" part and just gave a definition of "self-documenting" along with some rhetoric about the fact that people document stuff in general.
Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.
Basically, even if you ignore "brain-farts" (which is a good example of "minor" incorrect details that make NNs inappropriate tool for complex formal stuff) it only really got - AT BEST - 2-3/7 right. Now, imagine it's a real world and you are betting millions on it, without having an expert-overseer to tell you when it brain-farts or if the output is even remotely correct.
Actually, what was the prompt? Seems like you just asked it to describe the list I gave you, which essentially means you just used my own expertise, understanding and creativity, not GPT's, as it didn't even give you a list of concrete problems.
> The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff.
Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass, non-specific text corpus.
> It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.
Nope, that is merely a property and limitation of NNs. At best, you can use them to build up "intuition" to brute-force problems (like AlphaFold for protein folding), but obviously that only works for simple-enough stuff that can actually be brute-forced, where the output can be easily and formally verified fast enough.
All of it. Only one human leader to write queries. Everything else designed by an AI.
>Neural networks are great for tasks where minor details are largely unimportant compared to the overall "impression" - generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though they can be used to draw inspiration for the overall image), etc.
You're just regurgitating a trope that's categorically false. You're an NN, did you realize that?
>I never said it's not revolutionary. I merely point out its hard limits.
And you're wrong. You have thoroughly overstated the limitations, and you are mistaken about this.
>Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.
No. Corps and AIs and bots have been scraping pics off the internet for years; Google is one. No lawsuit of this nature was filed until AI came out. Artists are threatened and are reacting as such; that's why the lawsuit is being filed now instead of before.
https://futurism.com/the-byte/artist-banned-looked-ai-human <- artist banned because they thought his work was by an AI.
>These are two completely different tasks. You are comparing apples and oranges, that can't really be put on the same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.
No. ENGLISH is written with tokens of symbols. The other, PICTURES, is written in tokens as well: a pixel is 3 numbers (RGB), and in the computer it is represented in a format before translation onto your monitor. It is a translation problem, and it is treated the same way by experts. Both DALL-E and chatGPT utilize very similar generative models, translating English to English in the case of chatGPT, and English to numbers (which can be further translated to pixels) for DALL-E.
>Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealistics from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to)
Not true. A good number of them do.
>Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.
There are many bootcamps that make that claim, and there are PLENTY of people who live up to it. But NONE for artistry.
>Don't kid yourself thinking that these two are similar or comparable sets of tasks.
Kid myself? It is literally the same type of neural network. There's no kidding here. It's not a coincidence that chatGPT and DALL-E came out back to back. These models are called generative models. It's a single new technology that's responsible for this.
>That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.
No, it's not. There are no guardrails; users can ask it anything and take it anywhere. It can't stay within a defined task. It's also wrong often enough that it can't be used in prod for virtually any task.
>The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.
No, they weren't heralds. Text generators have always been around; this one just got better. But they never displayed signs of true understanding or even self-awareness as it does now. Literal self-awareness.
>Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.
I told it to do that. So that the responses wouldn't be generic. chatGPT is following my instructions.
>Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.
It is ALSO trained using humans to pick and choose good and bad answers. This training was non-specific, and they used just regular people. If they used programmers and had programmers pick the good answers to programming questions, chatGPT would begin outputting really accurate code.
>Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.
You are categorically wrong about this. Three neurons can be trained to become a NAND gate, which can then be used to simulate any computational network or mathematical equation that doesn't have a feedback loop. It can model anything with just an input and an output. This has also been demonstrated in practice and proven theoretically.
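The NAND claim is easy to make concrete. A minimal sketch with hand-picked weights (a trained network would converge to something equivalent; in fact a single step-activation unit already suffices, not three):

```python
# A single step-activation perceptron computing NAND. Weights are
# hand-picked for illustration; a trained unit would be equivalent.

def nand_perceptron(a: int, b: int) -> int:
    """Fires (returns 1) unless both inputs are 1."""
    w1, w2, bias = -2, -2, 3
    return 1 if w1 * a + w2 * b + bias > 0 else 0

# NAND is functionally complete, so any feed-forward (loop-free)
# boolean circuit can be composed from this one unit:
def not_gate(x: int) -> int:
    return nand_perceptron(x, x)

def and_gate(a: int, b: int) -> int:
    return not_gate(nand_perceptron(a, b))

def or_gate(a: int, b: int) -> int:
    return nand_perceptron(not_gate(a), not_gate(b))

print([nand_perceptron(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [1, 1, 1, 0]
```

Note this only establishes what the line above already concedes: expressiveness over loop-free input-to-output functions, not anything about stateful or feedback systems.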