The funny thing is I had tried just pasting in code and saying "find the bug" and it wasn't helpful at all, but when I posted in a portion and asked it to explain what the code was doing I was able to work backwards and solve the issue.
It's a nice anecdote where the AI felt additive instead of existentially destructive, which has been an overbearing anxiety for me this last month.
You surrendered the need to think to the machine. You are lesser for it. I don't think these AIs are just removing drudgery, like, say, a calculator. They actually do the work. Or more correctly, they produce something that will pass for the work.
Wholesale embracing of this sort of technology is bad for us.
I don't think the average person wants to be doing the menial work, as opposed to architecting a grander vision, i.e. the purpose for the work.
Coincidence? ;)
"the brain shrinkage parallels the expansion of collective intelligence in human societies."
It looks like we are due for another brain shrinkage.
- - - -
This is all well and good for existing minds, but I think a lot of people will let these machines raise their children, and that might cripple them. Like letting your kids use a wheelchair instead of learning to walk? It's an imperfect metaphor.
Or back up cameras in a car.
I'm sure the buggy whip makers had pride in their work as well.
And as everyone knows, meetings are what get real programmers excited.
Or even just "asking for a second opinion"?
Iterate a few more versions from here, so that the models are stronger at producing the correct structured data, and the impact on every office job will be profound.
I.e. instead of training a generative model on text from the internet, train it on every single Excel file, SQL database, Word document and email your company stores. Then query this model, asking it to generate Report X showing Y and Z.
When you step back and consider it, 99% of office jobs are about producing structured data from unstructured data sources. The implications of this are being hugely underestimated.
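A minimal sketch of that unstructured-to-structured step, with `extract_record` and the `complete` callable standing in for whatever LLM API you'd actually use (the prompt, field names, and fake model are all illustrative, not any real vendor's interface):

```python
import json

def extract_record(text, fields, complete):
    """Ask a language model to pull named fields out of free text.

    `complete` is a stand-in for any text-completion API; here it is
    just a callable that takes a prompt string and returns a string.
    """
    prompt = (
        "Extract the following fields from the text below as a JSON "
        f"object with keys {fields}. Text:\n{text}"
    )
    raw = complete(prompt)
    record = json.loads(raw)  # fails loudly if the model didn't return JSON
    missing = [f for f in fields if f not in record]
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return record

# Fake "model" for demonstration only; a real one would read the prompt.
fake_model = lambda prompt: '{"customer": "Acme", "amount": 1200}'
print(extract_record("Invoice from Acme for $1200",
                     ["customer", "amount"], fake_model))
# {'customer': 'Acme', 'amount': 1200}
```

The validation steps are the interesting part: the value of "stronger at producing the correct structured data" is precisely that the `json.loads` and missing-field checks stop firing.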
We're moving towards a world of chair-fillers at one end, and maestros at the other. The clearest difference between labor in 2022 and 2026 will be the hollowing-out of the middle.
The value of a human is in reacting to changing requirements, considering context and in understanding other humans. AI cannot do any of that reliably.
Some office tasks can be automated and those that can don't need AI anyway - they need properly labelled data, databases and some coding.
AI will be very good at creating the illusion of competence. AI cannot actually ensure competence or verify it. That will remain the domain of humans.
This has already been possible for decades using old-fashioned automation (Python scripts etc.), assuming the data entry is designed for this.
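For example, the classic script of that sort, assuming the data entry really did land in a clean, consistently-columned CSV (the file contents and column names here are made up):

```python
import csv
import io

# A monthly sales dump: the kind of input old-fashioned automation
# handles fine as long as the columns are consistent. Normally this
# would be open("sales.csv"); an inline string keeps it self-contained.
SALES_CSV = """region,amount
North,1200
South,800
North,300
"""

def totals_by_region(fileobj):
    """Aggregate the 'amount' column per region: structured report out,
    semi-structured rows in."""
    totals = {}
    for row in csv.DictReader(fileobj):
        totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])
    return totals

print(totals_by_region(io.StringIO(SALES_CSV)))
# {'North': 1500, 'South': 800}
```

The "assuming the data entry is designed for this" caveat is exactly where scripts like this break: one free-text cell and the `int()` call throws.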
Honestly, I think the reason managers have teams of people reporting to them is not just to give them unbiased information.
Part of it is probably ego stroking, but I suspect the humans in the loop are doing some sort of analysis too, and reporting qualitative patterns that an AI might not pick up on.
I'm no luddite, but I've seen enough rocky digital transformations to know that human beings don't operate like manufacturing pipelines. Automation and AI assisted automation will be harder to generally implement.
But what I do feel confident about is that there will be a large mass of consultants who'll sell an expensive dream to a lot of mid-tier businesses. The next big flex in business IT will be having a notch on your belt for a failed AI automation project.
I largely agree with this article, but I feel like you have to be careful with these general predictions. Many technologies have been touted as this "business lubricant" tech (ever since the spreadsheet), but the actual number of novel spreadsheet applications remains small. It feels like the same can be said for generative AI, too. Almost every day I feel the need to explain that "generation" and "abstract thought" are distinct concepts, because conflating the two leads to so much misconception around AI. Stable Diffusion has no concept of artistic significance, just art. Similarly, ChatGPT can only predict what happens next, which doesn't bestow it with heuristic thought. Our collective awe has blinded us to the fact that AI generation is, generally speaking, hollow and indirect.
AI will certainly change the future, and along with it the future of work, but we've all heard idyllic interpretations of benign tech before. Framing the topic around content rather than capability is a good start, but you easily get lost in the weeds again when you start claiming it will change everything.
That's not my experience; I am continuously amazed by the number of tasks worker bees manage to do in Excel.
I kind of wish MS Access were more of a thing, because when the spreadsheet eventually doesn't scale and you need a "proper" system, it takes a rewrite.
My larger point, though, is that most people end up using spreadsheets to do the same thing. It's fun to imagine novel uses for a spreadsheet, like a DAW or video game, but ultimately it's not very useful for that. Similarly, ChatGPT is great for writing convincing text - that's what everyone uses it for. Can it solve math though? Not very well. Future applications of the tech are more likely to be specialized, in that sense.
Mostly, I'm a curmudgeon and I despise these "flying car of the future" articles. Popular Mechanics printed them for decades, and half a century later nothing has changed (not even the culture writing them).
I agree though, ChatGPT isn't a real flying car. Imagine if someone revolutionized the paper clip. The day-to-day of millions would be forever and irrevocably changed; and almost nothing would happen.
When you understand how the sausage is made, it is hard to be overly excited. I fail to be "mind blown" by ChatGPT because every time someone claims it can do task X, it turns out they only managed to scrape by within its significant limitations.
If you want superhuman intelligence, you are going to need to break through the short-term memory limitation of humans. If the AI could memorize a 1-million-line code base, it would be practical; but everyone is either working with small code snippets or generating the entire program from scratch, and then extrapolating that to a million-line codebase even though that isn't possible. That is the height of impracticality.
And before anyone accuses me of moving goalposts: I'm not the one moving them. It's the people telling me it can make manual programming obsolete. Why not just stick with what it can do instead of making these claims?
It's that the business will also accept that it needs a rewrite. As opposed to the current status quo, where they'll ask what's wrong with continuing to use $Slick_and_Fancy_Tool (then act surprised when it stops scaling with regard to whatever business, performance, or compliance barriers you've reached).
This totally resonates with me. This is absolutely correct. Thinking about the future of work, there's much of what I do every day in my job that is hollow and indirect. And I would be totally okay if I could have something like ChatGPT do it for me.
I can't wait for Wall-E!
https://www.thelist.com/img/gallery/things-only-adults-notic...
The key to the power of GPT-3 is that it has billions of parameters, AND those parameters are well-justified because it was trained on billions of documents. So the term should be something like "gigaparam AI", or maybe GIGAI as a parallel to GOFAI. If you could somehow build a gigaparam discriminative model, you would get better performance on the task it was trained on than GPT-3.
I do not think that the world is changing because of large language models. That seems to be a controversial opinion so I won't get into it here. But these are powerful new tools, no question. The way I work has changed and I'm very glad to have ChatGPT.
I do believe that in the coming years knowing how to use ChatGPT or similar products will be as important as knowing how to use Google is now. People that know how to leverage LLMs going forward will simply have an advantage over those who don't. It won't be long before it isn't optional for executives and knowledge workers. This will be a big change for many people. But we adapted to Google in the early 2000s and people will adapt to this as well.
or wildly inaccurate, particularly in fields such as programming
It's the same sort of problem as with self-driving cars: they are often correct, but not often enough, and staying alert to correct the AI is, paradoxically, more work than just driving yourself.
AI might manage to push through these barriers, but I remain skeptical with the technology in the current state: statistical machines that are good in the common cases but sketchy at the edges.
Rather than meticulously correcting the works of a subpar programmer it is much more efficient to let a proficient developer produce the code. Or even better, engage a 10x developer.
If an inefficient programmer can solve a real-world business problem in a fraction of the time, but it requires slightly more computing resources, I'd still pay for it. Efficiency can be measured along a lot of dimensions, and oftentimes spending time and money writing optimized code is inefficient as well.
This is true of human-generated code as well. Trust me: Reviewing other people's code is my day job.
It's exceptionally rare that a malicious actor is trying to sneak something into the code. The common scenario is the developer who's new to the project not fully understanding how everything works so they copy & paste something they think is necessary but ultimately isn't and could in fact be very wrong. Just like how ChatGPT works.
You can iterate from there by taking advantage of the last 50 years of software engineering wisdom.
Can it be that programming itself is so easily predicted in a generative way, while other problems require more ingenuity and a real-world model to be solved?
In that case I would happily offload programming to a GPT/LLM AI, while my job would simply be to specify the business case at a high level.
Forgive me, but isn't this kind of moving the goalposts? Information is the surprise value from the recipient's point of view, which means the recipient's Bayesian prior probability defines what is "expected". Saying "these "AI" systems put out the most expected subset in each instance" assumes that the recipient's priors exactly equal those of the model, which would only be the case when the model is talking to itself (or, I suppose, to an even more complex model with perfect knowledge of ChatGPT's weights).
The fact that no information is transferred when the model talks to itself should not be surprising and would apply to any AI. (even including a superhuman post-singularity god-like AI)
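That point can be made concrete: surprisal is -log2(p) under the *recipient's* prior, so the average surprise only collapses to the model's own entropy when the recipient's distribution matches the model's. A toy calculation (the two-symbol distributions are made up for illustration):

```python
import math

def surprisal_bits(p):
    """Information content, in bits, of an event the recipient
    assigned probability p."""
    return -math.log2(p)

def expected_bits(sender, recipient):
    """Average surprisal when the sender's outputs are scored against
    the recipient's prior (the cross-entropy of sender vs recipient)."""
    return sum(ps * surprisal_bits(recipient[x]) for x, ps in sender.items())

model = {"a": 0.5, "b": 0.5}        # the model's output distribution
same_prior = dict(model)            # a recipient with identical priors
different = {"a": 0.9, "b": 0.1}    # a recipient expecting mostly "a"

print(expected_bits(model, same_prior))  # 1.0 bit: the model's own entropy
print(expected_bits(model, different))   # ~1.74 bits: a mismatched prior is surprised more
```

Gibbs' inequality guarantees the cross-entropy is never below the entropy, which is the formal version of "the model talking to itself transfers the least information."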
This does not mean anything more than that the AI has a greater breadth of training background, which is likely.
We get the output most likely expected from any of (or the average of) the humans whose writing/drawing/whatever was included in the input set.
What we will not be getting from the AIs is any creative output based on unique understanding, as we would from an intelligent, creative human. Many of the humans in the input set would see the same prompt and produce an actually novel and meaningful output, not simply a cut-and-paste from prior works. (And yes, some novel output may come from some randomizing algo, but if it is correct, it is no more correct than the broken clock that is right twice a day.)
Or, another example: I was involved in a legal deposition where an "AI" transcription system was used instead of a skilled court reporter. The output LOOKED fantastic, until I actually read it, and it was absolute garbage. The standard errata sheet has room for the deponent to put in about a dozen corrections, and most lists run to less than a handful. My errata list was multiple pages. These errors often reversed the meaning of sentences, substituting "I have ..." for "You have ...", dropping or adding "not", or substituting common names for unusual ones (e.g., "Jack Kennedy" for "John Kemeny"; note that human transcribers always ask for correct spellings of names at the next break, while this crap just inserted something like it had a clue).
So, even though the total "experience" or training set of the AI may go beyond the experience of the reader, so that some of the output is surprising, this is no more surprise than a search engine produces. In fact, I think this is the best use of the AIs: train them on an enormous data set and have them provide possibly better results, defined as more on-point, but likely less thorough.
(It is a broad generalization to assume that these traits are mutually exclusive. They do, of course, co-exist in many people. However, it seems to me that the number of people in whom they coexist robustly is small, and it is from these few that true once-in-a-generation geniuses come.)
I distractedly typed it out while doing something else and didn't realize how unintelligible I made it sound until it was too late to edit!
these projects have direct commercial applications right now.
What you say about AI is true; however, from where I'm standing, it seems there is still too much greed-driven, mindless "no problem; we'll simply brute-force the problem of inventing a machine that does human-level cognition, which is a thing we admit we do not understand at all"-type enthusiasm, and not enough openness, humility, and critical thinking in the field.
I wouldn't let it write a whole article, but it can really save time on research. It just needs a bit of fact-checking at the end.
I talk to the AI as if I were interviewing an expert on the subject matter.
This usually gives a good starting point for an article, if the subject is general enough, and not too new.
It's also good at structuring and rewriting texts. If you already have all the correct data, you can use it to write an outline or something like that.
The problems I saw were that it can't follow a coherent thought for more than a few paragraphs, and the writing style is generally a bit boring.
Also, because the system samples its outputs to sound more interesting and to prevent overfitting, it regularly tells you crap. One time you get a good answer; then you change one word in your prompt and the result isn't accurate anymore.
But I worked for years as a developer, so I usually notice when things are off, and I also fact check manually with Google when I want to be sure.
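The sampling behavior described above is easy to illustrate. Here is a toy next-token sampler with a temperature knob (the vocabulary and scores are made up; real models sample over tens of thousands of tokens, but the mechanism is the same):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token from model scores via softmax. Low temperature
    concentrates probability on the top token; higher temperatures
    flatten the distribution, which sounds more interesting but picks
    unlikely tokens more often."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r <= cum:
            return tok
    return tok  # guard against floating-point rounding in the cumsum

# Made-up scores: "correct" is the model's best guess, the rest are wrong.
logits = {"correct": 2.0, "plausible-but-wrong": 1.0, "nonsense": 0.0}
print([sample_token(logits, temperature=1.5) for _ in range(5)])
```

At temperature 1.5 the top token here only gets about 56% of the probability mass, which is why two runs of the same prompt can land on a good answer one time and crap the next.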
I was already worried about ChatGPT-like systems generating mass-produced nonsense and polluting the internet, but if people are also going to edit ChatGPT output just enough to make it seem right (a mechanism I hadn't thought of so far), that might make the nonsense a lot harder to detect.
I totally understand the reasoning though, it sounds like a productive workflow.
Using ChatGPT to fill in knowledge for a technical article sounds bad. If I'm reading an article about security, I want it written by a security expert, not a semi-layman plus a ChatGPT model.
I mean, it does give good completions sometimes, but the time saved isn't that great IMHO. Maybe ChatGPT is better, but it feels like AI still has some way to go before it's actually so useful that you would be less successful without it.
Maybe something like this exists? Please no DEVONThink suggestions :)
Trained AIs are in something like the early digital-streaming days, when there was only one provider in town, so that provider aggregated All The Content. Over the following decade we saw the content owners claw their content back from Netflix and onto competitor platforms, which takes us to where we are today: Netflix's third-party content has dwindled, forcing them to focus on creating their own first-party content, which cannot be clawed away.
When these generative AIs start to produce income, it will be at the expense of the artists whose art was in the training dataset nonconsensually. This triggers the same content clawback we saw in digital streaming. Training datasets will be heavily scrutinized and monetized because the algorithms powering generative AIs aren't actually carrying much water. What is DALL-E without its dataset? Content is King.