I haven’t seen a company convincingly demonstrate that this affects them at all. Lots of fluff but nothing compelling. But I have seen many examples by individuals, including myself.
For years I’ve loved poking at video game dev for fun. The main problem has always been art assets. I’m terrible at art and I have a budget of about $0. So I get asset packs off Itch.io and they generally drive the direction of my games because I get what I get (and I don’t get upset). But that’s changed dramatically this year. I’ll spend an hour working through graphics design and generation and then I’ll have what I need. I tweak as I go. So now I can have assets for whatever game I’m thinking of.
Mind you, this is about the barrier to entry. These are shovelware-quality assets and I’m not running a business. But now I’m some guy on the internet who can fulfil a hobby and develop a skill. Who knows, maybe one day I’ll hit a goldmine idea, commit some real money to it, and get a real artist to help!
It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos accessible to people who didn’t go to school for any of that, let alone own complex equipment or expensive licenses to Adobe Thisandthat.
Ironically though, having lots of people found startups is not good for startup founders, because it means more competition and a much harder time getting noticed. So it’s unclear that prosumers and startup founders will be the eventual beneficiaries here either.
It would be ironic if AI actually ended up destroying economic activity because tasks that were frequently large-dollar-value transactions now become a consumer asking their $20/month AI to do it for them.
That's not destroying economic activity - it's removing a less efficient activity and replacing it with a more efficient version. This produces economic surplus.
Imagine saying this about someone digging a hole: that if they used a mechanical digger instead of a hand shovel, they'd destroy economic activity since it now costs less to dig that hole!
You are missing the other side of the story. All those customers those AI-boosted startups want to attract also have access to AI, and so, rather than engage the services of those startups, they will find that AI does a good enough job. So those startups lose most of their customers; incoming layoffs :)
For a large chunk of my life, I would start a personal project, get stuck on some annoying detail (e.g. the server gives some arcane error), get annoyed, and abandon the project. I'm not being paid for this, and for unpaid work I have a pretty finite amount of patience.
With ChatGPT, a lot of the time I can simply copypaste the error and get it to give me ideas on paths forward. Sometimes it's right on the first try, often it's not, but it gives me something to do, and once I'm far enough along in the project I've developed enough momentum to stay inspired.
It still requires a lot of work on my end to do these projects, AI just helps with some of the initial hurdles.
I am the same way. I did Computer Science because it was a combination of philosophy and meta thinking. Then when I got out, it was mainly just low level errors, dependencies, and language nuance.
Even amateurish art can be tasteful, and it can be its own intentional vibe. A lot of indie games go with a style that doesn't take much work to pull off decently. Sure, it may look amateurish, but it will have character and humanity behind it. Whereas AI art will look amateurish in a soul-deadening way.
Look at the game Baba Is You. It's a dead simple style that anyone can pull off, and it looks good. To be fair, even though it looks easy, it still takes a good artist/designer to come up with a seemingly simple style like that. But you can at least emulate their styles instead of coming up with something totally new, and in the process you'll better develop your aesthetic senses, which honestly will improve your journey as a game developer so much more than not having to "worry" about art.
The only difference is you spend less on art but will spend the same in other areas.
Literally nothing changed
Imagery
AI does not produce art.
Not that it matters to anyone but artists and art enjoyers.
It’s worth reading William Deresiewicz‘ The Death of the Artist. I’m not entirely convinced that marketing that everyone can create art/games/whatever is actually a net positive result for those disciplines.
This is an argument based in Luddism.
Looms were not a net positive for the craftsmen who were making fabrics at the time.
With that said, looms were not the killing blow; instead, an economic system that led them to starve in the streets was.
There are going to be a million other things that move the economics away from scarcity and take away the profitability. The question is: are we going to hold on to economic systems that don't work under that regime?
My contribution to this scam
Not sure why digital artists get mad when I ask. They’re no Michelangelo.
If you ask an LLM to generate some imagery, in what way have you entered visual arts?
If you ask an LLM to generate some music, in what way have you entered being a musician?
If you ask an LLM to generate some text, in what way have you entered writing?
If you're just generating images using AI, you only get 80% there. You need at least to be able to touch up those images to get something outstanding.
Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?
Also, "democratizing"? Please. We're just entrenching more power into the small handful of companies who have been able to raise and set fire to unfathomable amounts of capital. Many of these tools may be free or cheap to use today, but there is nothing for the commons here.
For transparency I just ask for a bright green or blue background then use GIMP.
For animations I get one frame I like and then ask for it to generate a walking cycle or whatnot. But usually I go for like… 3 frame cycles or 2 frame attacks and such. Because I’m not over reaching, hoping to make some salable end product. Just prototypes and toys, really.
Use Google Nano Banana to generate your sprite with a magenta background, then ask it to generate the final frame of the animation you want to create.
Then use Google Flow to create an animation between the two frames with Veo3
It's astoundingly effective, but still rather laborious and lacking in ergonomics. For example, the video aspect ratio has to be fixed, and you need to manually fill the correct shade of magenta for transparency keying since the Imagen model does not do this perfectly.
IMO Veo3 is good enough to make sprites and animations for a 2000s 2D RTS game in seconds from a basic image sketch and description. It just needs a purpose-built UI for gamedev workflows.
If I was not super busy with family and work, I'd build a wrapper around these tools
Otherwise I have to touch up a hundred or so images manually for each different character style… probably not worth it
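For what it's worth, the magenta keying step described above can be scripted instead of done by hand in GIMP. Here's a minimal sketch using Pillow; the tolerance value is an assumption (generated frames are rarely pure magenta, so exact matching usually misses pixels):

```python
from PIL import Image

MAGENTA = (255, 0, 255)
TOLERANCE = 40  # assumed per-channel distance from pure magenta

def key_magenta(path_in: str, path_out: str) -> None:
    """Turn any near-magenta pixel fully transparent and save as PNG."""
    img = Image.open(path_in).convert("RGBA")
    pixels = img.load()
    width, height = img.size
    for y in range(height):
        for x in range(width):
            r, g, b, _ = pixels[x, y]
            if (abs(r - MAGENTA[0]) < TOLERANCE
                    and abs(g - MAGENTA[1]) < TOLERANCE
                    and abs(b - MAGENTA[2]) < TOLERANCE):
                pixels[x, y] = (0, 0, 0, 0)
    img.save(path_out)
```

Running this over a folder of generated frames beats retouching a hundred images by hand, though a fuzzier distance metric may be needed if the model anti-aliases the sprite edges into the background.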
https://www.totallyhuman.io/blog/the-surprising-new-number-o...
I mainly use AI for selfhosting/homelab stuff and the leverage there is absolutely wild - basically knows "everything".
https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...
For you the example of "extract object and create iso model" should be relevant :)
Generally I have an idea I’ve written down some time ago, usually from a bad pun like Escape Goat (CEO wants to blame it all on you. Get out of the office without getting caught! Also you’re a goat) or Holmes on Homes Deck Building Deck Building Game (where you build a deck of tools and lumber and play hazards to be the first to build a deck). Then I come up with a list of card ideas. I iterate with GPT to make the card images. I prototype out the game. I put it all together and through that process figure out more cards and change things. A style starts to emerge so I replace some with new ones of that style.
I use GIMP to resize and crop and flip and whatnot. I usually ask GPT how to do these tasks as photoshop like apps always escape me.
The end result ends up online and I share them with friends for a laugh or two and usually move on.
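The resize/crop/flip chores can also be batched with a small script rather than repeated GIMP sessions. A sketch with Pillow, where the card dimensions and folder layout are assumptions for illustration:

```python
from pathlib import Path

from PIL import Image

CARD_SIZE = (256, 384)  # assumed target card dimensions

def process_folder(src: str, dst: str) -> None:
    """Resize every PNG in src to CARD_SIZE and also emit a mirrored copy."""
    out_dir = Path(dst)
    out_dir.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src).glob("*.png")):
        img = Image.open(path).convert("RGBA")
        img = img.resize(CARD_SIZE, Image.LANCZOS)
        img.save(out_dir / path.name)
        # Mirrored variant, e.g. for a left-facing version of the same art
        img.transpose(Image.FLIP_LEFT_RIGHT).save(
            out_dir / f"flipped_{path.name}")
```

A one-off script like this is easier to rerun every time the card list changes than clicking through a photo editor.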
Easier to start, harder to stand out. More competition, a more effective "sort" (a la patio11).
If we take high-level creativity and deform it, really horizontalize the forms, it comes at a much higher cost, as the experience becomes generic.
AI was a complete failure of imagination.
But in the case of video games there's been similar things already happening; tooling, accessible and free game engines, online tutorials, ready-made assets etc have lowered the barrier to building games, and the internet, Steam, itch.io, etcetera have lowered the barrier to publishing them.
Compare that to when Doom was made (as an example because it's well documented): Carmack had to learn 3D rendering, and how to make it run fast, from scientific textbooks; they needed a publisher to invest in them so they could actually start working on it full-time; and they needed to have diskettes with the game or its shareware version manufactured and distributed. And that was when part of the distribution was already going through BBSes.
I am also starting to get a feel for generating animated video and am planning to release a children’s series. It’s actually quite difficult to write a prompt that gets you exactly what you want. Hopefully that improves.
This collapses an important distinction. The containerization pioneers weren’t made rich - that’s correct, Malcolm McLean, the shipping magnate who pioneered containerization didn’t die a billionaire. It did however generate enormous wealth through downstream effects by underpinning the rise of East Asian export economies, offshoring, and the retail models of Walmart, Amazon and the like. Most of us are much more likely to benefit from downstream structural shifts of AI rather than owning actual AI infrastructure.
This matters because building the models, training infrastructure, and data centres is capital-intensive, brutally competitive, and may yield thin margins in the long run. The real fortunes are likely to flow to those who can reconfigure industries around the new cost curve.
Hopefully the boom will slow down and we'll all slowly move away from Holy Shit Hype things and implement more boring, practical things. (Although I feel like the world has shunned boring, practical things for quite a while already.)
Not that I don't recognize the inherent limits of LLMs, but there are as many edge cases covered as are found in the training sets. (More or less.)
In the time it would take to keep retrying until it makes one that fits, then reshaping it to fit into 16x16 nicely I could have just drawn one myself.
There will be millions of factories all benefiting from it, and a relatively small number of companies providing the automation components (conveyor belt systems, vision/handling systems, industrial robots, etc).
The technology providers are not going to become fabulously rich though as long as there is competition. Early adopters will have to pay up, but it seems LLMs are shaping up to be a commodity where inference cost will be the most important differentiator, and future generations of AI are likely to be the same.
Right now the big AI companies pumping billions into advancing the bleeding edge necessarily have the most advanced products, but the open source and free-weight competition are continually nipping at their heels, and it seems the current area where most progress is happening is agents and reasoning/research systems, not the LLMs themselves, where it's more about engineering than who has the largest training cluster.
We're still in the first innings of AI though - the LLM era, which I don't think is going to last for that long. New architectures and incremental learning algorithms for AGI will come next. It may take a few generations of advance to get to AGI, and the next generation (e.g. what DeepMind are planning in 5-10 year time frame) may still include a pre-trained LLM as a component, but it seems that it'll be whatever is built around the LLM, to take us to that next level of capability, that will become the focus.
-AI is leading to cost optimizations for running existing companies; this will lead to less employment and potentially cheaper products. Fewer people employed, even temporarily, will change demand-side economics, while cheaper operating costs will reduce the supply/cost side.
-The focus should not just be on LLMs (like in the article). I think LLMs have shown what artificial neural networks are capable of, from material discovery, biological simulation, protein discovery, video generation, image generation, etc. This isn't just creating a cheaper, more efficient way of shipping goods around the world; it's creating new classifications of products, like the invention of the microcontroller did.
-The barrier to starting businesses is lower. A programmer not good at making art can use genAI to make a game. More temporary unemployment from existing companies reducing cost by automating existing workflows may mean that more people will start their own businesses. There will be more diverse products available, but will demand be able to sustain the cost of living of these new founders? Human attention, time, etc. are limited, and there may be less money around with less employment, but the products themselves should cost less.
-I think people still underestimate what last year's LLMs and AI models are capable of and what opportunities they open up. Open source models (even if not as good as the latest gen), plus hardware able to run them becoming cheaper and more capable, mean many opportunities to tinker with models and create new products in new categories, independent of the latest-gen model providers. Much like people tinkering with microcontrollers in the garage in the early days, as the article mentioned.
Based on the points above alone, while certain industries (think phone call centers) will be in the red queen race scenario like the OP stated, new industries as yet unthought of will open up, creating new wealth for many people.
There's zero chance that cost optimizations for existing companies will lead to cheaper products. It will only result in higher profits, while companies continue to charge as much as they possibly can for their products while delivering as little as they can possibly get away with.
On the one hand, there are a lot of fields that this form of AI can and will either replace or significantly reduce the number of jobs in. Entry level web development and software engineering is at serious risk, as is copywriting, design and art for corporate clients, research assistant roles and a lot of grunt work in various creative fields. If the output of your work is heavily represented in these models, or the quality of the output matters less than having something, ANYTHING to fill a gap on a page/in an app, then you're probably in trouble. If your work involves collating a bunch of existing resources, then you're probably in trouble.
At the same time, it's not going to be anywhere near as powerful as certain companies think. AI can help software engineers in generating boilerplate code or setup things that others have done millions of times before, but the quality of its output for new tasks is questionable at best, especially when the language or framework isn't heavily represented in the model. And any attempts to replace things like lawyers, doctors or other such professions with AI alone are probably doomed to fail, at least for the moment. If getting things wrong is a dealbreaker that will result in severe legal consequences, AI will never be able to entirely replace humans in that field.
Basically, AI is great for grunt work, and fields where the actual result doesn't need to be perfect (or even good). It's not a good option for anything with actual consequences for screwing up, or where the knowledge needed is specialist enough that the model won't contain it.
This is what happens when users gain value which they themselves capture, and the AI companies only get the nominal $20/month or whatever. In those cases it's a net gain for the economy as a whole if valuable work was done at low cost.
The inverse of the broken window fallacy.
It will not remain cheap as soon as the competition is dead, which is simply a case of who's got the biggest VC supplied war chest.
I'm not sure it is very predictable.
We have people saying AI is LLMs and they won't be much use and there'll be another AI winter (Ed Zitron), and people saying we'll have AGI and superintelligence shortly (Musk/Altman), and if we do get superintelligence it's kind of hard to know how that will play out.
And then there's John von Neumann (1958):
>[the] accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
which is what kicked off the misuse of a perfectly good mathematical term for all that stuff. Compared to the other five revolutions listed - industrial, rail, electricity, cars and IT - I think AI is a fair bit less predictable.
The AI revolution has only just got started. We've barely worked out basic uses for it. No-one has yet worked out revolutionary new things that are made possible only by AI - mostly we are just shoveling in our existing world view.
I think AI value will mostly be spread. Open AI will be more like Godaddy than Apple. Trying to reduce prices and advertise (with a nice bit of dark patterns). It will make billions, but ultimately by competing its ass off rather than enjoying a moat.
The real moats might be in mineral mining, fabrication of chips etc. This may lead to strained relations between countries.
Having the cutting edge best model won't matter either since 99.9% of people aren't trying to solve new math problems, they are just generating adverts and talking to virtual girlfriends.
IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich. I can't really buy into the idea that his team is going to fail at this but a bunch of random smaller companies will manage to succeed somehow.
And if modern AI turns into a cash cow for you, unless you're self-hosting your own models, the cloud provider running your AI can hike prices or cut off your access and knock your business over at the drop of a hat. If you're successful enough, it'll be a no-brainer to do it and then offer their own competitor.
If they actually reach AGI they will be rich enough. Maybe they can solve world happiness or hunger instead?
There are still lots of currently known problems that could be solved with the help of AI that could make a lot of money - what is the weather going to be when I want to fly to <destination> in n weeks/months time, currently we can only say "the destination will be in <season> which is typically <wet/dry/hot/cold/etc>"
What crops yield the best return next season? (This is a weather as well as a supply and demand problem)
How can we best identify pathways for people whose lifestyles/behaviours are in a context that is causing them and/or society harm? (I'm a firm believer that there's no such thing as good/bad, and the real trick to life is figuring out which context a certain behaviour belongs in, and identifying which context a person is in at any given point in time - we know that psychopathic behaviour is rewarded in business contexts but punished in social contexts, for example.)
innovator's dilemma
Absolutely, with 150% certainty, yes - and probably many. The www started April 30, 1993; Facebook started February 4, 2004 - more than ten years until someone really worked out how to use the web as a social connection machine, an idea now so obvious in hindsight that everyone probably assumes we always knew it. That idea was simply left lying around for anyone to pick up and implement, really from day one of the WWW. Innovation isn't obvious until it arrives. So yes, absolutely there are many glaring opportunities in modern capitalism upon which great fortunes are yet to be made, and in many cases by little people, not big companies.
>> if so, is a random startup founder or 'little guy' going to be the one to discover and exploit it somehow? If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?
I don't agree with your suggestion that the existing big guys always make the innovations and collect the treasure.
Why did Zuckerberg make facebook, not Microsoft or Google?
Why did Gates make Microsoft, not IBM?
Why did Steve and Steve make Apple, not Hewlett Packard?
Why did Brin and Page make Google - the world's biggest advertising machine - not Murdoch?
In that scenario, everyone makes money: OpenAI, Google (maybe Anthropic, maybe Meta) make money on the platform, but there are thousands of companies that sell solutions on top.
Maybe, however, LLMs get commoditized and open-source models replace OpenAI, etc. In that case, maybe only NVIDIA makes money, but there will still be thousands of companies (and founders/investors) making lots of money on AI everything.
Every use case I have for LLMs is satisfied with copilot, but even then if it costs like $5 a month to access someday, I’d just as soon not have it. Let alone the subsequent spending.
What LLMs are absolutely not useful for, in my opinion, is answering questions or writing code, or summarising things, or being factual in any sense at all.
That’s kinda happening: small local models, huggingface communities, Civitai and image models. Lots of hobby builders trying to make use of generative text and images. It’s just that there’s not really anything innovative about text generation, since anyone with a pen and paper can generate text and images.
>The article "AI Will Not Make You Rich" argues that generative AI is unlikely to create widespread wealth for investors and entrepreneurs. The author, Jerry Neumann, compares AI to past technological revolutions, suggesting it's more like shipping containerization than the microprocessor. He posits that while containerization was a transformative technology, its value was spread so thinly that few profited, with the primary beneficiaries being customers.
>The article highlights that AI is already a well-known and scrutinized technology, unlike the early days of the personal computer, which began as an obscure hobbyist project. The author suggests that the real opportunities for profit will come from "fishing downstream" by investing in sectors that use AI to increase productivity, such as professional services, healthcare, and education, rather than investing in the AI infrastructure and model builders themselves.
I used to be the biggest AI hater around, but I’m finding it actually useful these days and another tool in the toolbox.
I think we'll see a ton of games produced by AI or aided heavily by AI but there will still be people "hand crafting" games: the story, the graphics, etc. A subset of these games will have mass appeal and do well. Others will have smaller groups of fans.
It's been some time since I've read it, but these conversation remind me of Walter Benjamin's essay, "The Work of Art in the Age of Mechanical Reproduction".
Is it a large market though?
That is fairly insignificant segment of the market.
I remember back in 2004, my first project was testing a teleconferencing system. We set up a huge screen with cameras at one of our subsidiaries and another at the HQ, and I had a phone on my desk with a built-in camera and screen. Did the company roll out the system? No, it didn’t. It was just too expensive. Did they make a fortune from that experience? No, they didn’t. But I’m pretty sure all companies in the knowledge industry that didn’t enable video calls and screen sharing for their employees went out of business years ago...
Maybe the question isn’t “Will AI take jobs?” but “How do we redesign pathways so humans still get the training ground they need—while AI handles the repetitive load?”
Example
https://specinnovations.com/blog/ai-tools-to-support-require...
If anything, I'd think that crypto in 2010 had all the hallmarks of a new wave as described. The concept was open to anyone who wanted to tinker with it. It had to be sold to skeptical consumers by wildcat startups. It certainly had the potential to upend the financial industry, but no incumbent would touch it. Yet it did end up more or less being sucked into the gravity well of the incumbents, although in the case of crypto we very much need to consider governments which control the money supply to be the heavyweights, even more so than banks and lenders.
Maybe I'm pessimistic, but I'm not sure any new innovation now can escape the gravitational nexus of the duopoly of government and incumbent tech, in a way that would lead to the kind of wild growth and experimentation we had with microprocessors in the 70s.
We had some guy named Satoshi write a paper that basically handed the keys to anyone who wanted to experiment - and 17 years later, after a significant bubble, that wave has done very little to change the status quo.
I suppose if someone released a DIY genome editor or protein folding was solved or, like, a working Mr. Fusion device showed up on Kickstarter, or a "feed"/"seed" a la the Diamond Age made it possible to turn dirt into anything you wanted, or a FTL drive came out of someone's garage or something... yeah. That would seriously upset the incumbents. But even with sci-fi stuff like that, what's the moat once you make your findings public anymore? This article suggests that the only serious moat has ever been that large companies are slowed down by inertia and take some time to spin up once their internal cultures go from deriding something to deciding it's essential.
How do we know if we have stalled to the point that no one will come along and be the next Amazon or Google, until 50 years from now we see that no one did?
Self-driving cars are not going to create generational wealth through invention like microprocessors did.
Gen AI is not nearly powerful enough to justify current investments. A lot of money is going to go up in smoke.
Looking around, can find curious things current AI can't do but likely can find important things it can do. Uh, there's "a lot of money", can't be sure AI won't make big progress, and even on a national scale no one wants to fall behind. Looking around, it's scary about the growth -- Page and Brin in a garage, Bezos in a garage, Zuckerberg in school and "Hot or Not", Huang and graphics cards, .... One or two guys, ... and in a few years change the world and $trillions in company value??? Smoking funny stuff?
Yes, AI can be better than a library card catalog subject index and/or a dictionary/encyclopedia. But a step or two forward and, remembering 100s of soldiers going "over the top" in WWI, asking why some AI robots won't be able to do the same?
Within 10 years, what work can we be sure AI won't be able to do?
So people will keep trying with ASML, TSMC, AMD, Intel, etc. -- for a yacht bigger than the one Bezos got or for national security, etc.
While waiting for AI to do everything, starting now it can do SOME things and is improving.
Hmm, a SciFi movie about Junior fooling around with electronics in the basement, first doing his little sister Mary's 4th grade homework, then in the 10th grade a published Web site book on the rise and fall of the Eastern Empire, Valedictorian, new frontiers in mRNA vaccines, ...?
And what do people want? How 'bout food, clothing, shelter, transportation, health, accomplishment, belonging, security, love, home, family? So, with a capable robot (funded by a16z?), it builds two more like itself, each of those ..., and presto-bingo everyone gets what they want?
"Robby, does P = NP?"
"Is Schrödinger's equation correct?"
"How and when can we travel faster than the speed of light?"
"Where is everybody?"
1. The tech revolutions of the past were helped by the winds of global context. There were many factors that propelled those successful technologies on their trajectories. The article seems to ignore these contextual forces completely.
2. There were many failed tech revolutions as well. Success rates varied from very low to very high. Again, the overall context (social, political, economic, global) decides the matter, not the technology itself.
3. In the overall context, any success is a zero-sum game. You may just be ignoring what you lost and highlighting your gains as success.
4. A reverse trend might pick up - against technology, globalization, liberalism, energy consumption, etc.
1990 is when the real outsourcing mania started, which led to the destruction of most Western manufacturing. Apart from cheap Chinese trinkets the quality of life and real incomes have gotten worse in the West while the rich became richer.
So this is an excellent analogy for "AI": Finding a new and malicious application can revive the mania after an initial bubble pop while making societies worse. If we allow it, which does not have to be the case.
[As usual, under the assumption that "AI" works, of which there is little sign apart from summarizing scraped web pages.]
AI is largely capable of running on-device. In a few years, it's likely that most tasks people want AI for will be possible with a tiny model living in their phone. Open source models are plentiful, functional, and only becoming more so.
But you can't monetize that. We're currently dumping billions of dollars into datacenter moats that are just gonna evaporate inside the decade.
For the average user doing their daily "who was that actor in that movie" query, no, you absolutely cannot monetize AI because all of your local devices can run the model for free with enough quality that no one will know or care that there's a difference.
For enterprise scale building a trillion dollar datacenter and 15 nuclear reactors to replace a hundred developers... also no. LLMs are not capable of that, and likely won't be in the foreseeable future. It's also extremely unclear that one could ever get an ROI on in-house AI like this. It might be more plausible if it were a commodity technology you can just buy, but then you can't make a moat.
The only hypothetical fortune to be found is by whoever is selling AI to people who think they need to buy AI. Just like bitcoin or NFTs.
The good news is that this has two possible outcomes: capitalist AI vendors will want to remove AI from individual access so they can sell it to you: everyone gets less AI. Capitalists realize they can never monetize AI when it's free and open source, and give up: everyone gets less AI. Win-win-win, in my book.
People using it get dumber.
What is being produced is slop and discardable PoC-like trash.
The environmental costs of building and training LLMs are huge. That compute and water could have been useful for something.
Even the companies building and peddling AI are losers. They are not profitable, needing constant billions of dollars of financial help to even stay afloat and pay their compute debt.
The worst part is that even bigger losers will be the general population. Not only are our kids gonna be dumber than us thanks to never having to think for themselves, but our pensions are tied to a stock market that will inevitably collapse when the realization hits that the top 30% of companies by value are just dominoes waiting to fall.
But the biggest loser of all is Elon Musk. Just because of who he is.
This looks certain. Few technologies have had as much adoption by so many individuals as quickly as AI models.
(Not saying everything people are doing has economic value. But some does, and a lot of people are already getting enough informal and personal value that language models are clearly mainstreaming.)
The biggest losses I see come from successive waves of disruption to non-physical labor.
As AI capabilities accrue relatively smoothly (perhaps), labor impact will be highly unpredictable as successive non-obvious thresholds are crossed.
The clear winners are the arms dealers. The compute sellers and providers. High capex, incredible market growth.
Nobody had to spend $10 or $100 billion to start making containers.
But I think the benefits of AI usage will accumulate with the person doing the prompting and their employers. Every AI usage is contextualized, every benefit or loss is also manifested in the local context of usage. Not at the AI provider.
If I take a photo of my skin sore and put it on ChatGPT for advice, it is not OpenAI that gets its skin cured. They get a few cents per million tokens. So AI providers are just utilities; the benefits depend on who writes the prompts and how skillfully they do it. The risks also go to the user, since OpenAI assumes no liability.
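To make the "few cents per million tokens" point concrete, here is a back-of-the-envelope calculation. The per-token rates below are hypothetical placeholders for illustration, not any provider's actual pricing:

```python
# Rough illustration of why per-token revenue is tiny relative to the
# value of an answer. The rates below are assumed, not real pricing.
PRICE_PER_MILLION_INPUT = 0.50   # dollars per million input tokens (assumed)
PRICE_PER_MILLION_OUTPUT = 1.50  # dollars per million output tokens (assumed)

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Provider revenue for a single query at the assumed rates."""
    return (input_tokens * PRICE_PER_MILLION_INPUT
            + output_tokens * PRICE_PER_MILLION_OUTPUT) / 1_000_000

# A photo-plus-question query: say ~2,000 tokens in, ~500 tokens out.
print(f"${query_cost(2_000, 500):.4f}")  # a fraction of a cent
```

Whatever the real rates are, the asymmetry holds: the provider captures a fraction of a cent while the user captures (or loses) whatever the answer was worth to them.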
Users are like investors: they take on the cost and absorb the outcomes, good or bad. The AI company is like an employee: it doesn't really share in the profit, it only gets a fixed salary for its work.
The remaining 99% has become a significant challenge to the greatest human achievement in the distribution of knowledge.
If people used LLMs knowing that all output is statistical garbage made to seem plausible (i.e. "hallucinations"), and that it just sometimes overlaps with reality, it would be a lot less dangerous.
There is not a single case of LLM use that has led to a news story which isn't handily explained by conflating a BS generator with a fact machine.
Does this sound like I'm saying LLMs are bad? Well, in every single case where you need factual information, it's not only bad, it's dangerous and likely irresponsible.
But there are a lot of great uses where you don't need facts, or where simply knowing it isn't producing facts makes it useful. In most of these cases, you know the facts yourself, and the LLM is making the draft: the mundane, statistically inferable glue and structure. So, what are these cases?
- Directing attention in chaos: suggesting where a human expert's focus is needed (useful in a lot of areas: medicine, software development).
- Media content: music, audio (FX, speech), 3D/2D art and assets, and operations on them.
- Text processing: drafting, contextual transformation, etc.
Don't trust AI on whether the mushroom you picked is safe to eat. But use its 100%-confident-sounding answer about which mushroom it is as a starting point to look up the information. Just make sure the book about mushrooms was written before LLMs took off...
"Bare" LLMs are rare today. Almost all of them are hooked to search engines, APIs, and code execution. So "closed book" fact retention is not an issue, we're not even trying to do that anymore.
Nearly everyone uses pens daily, but almost no one really cares about them or says their company runs on pens. You might grumble when the pens work keeps in the stationery cupboard are shit, perhaps.
I imagine eventually "AI" services will be commoditised the same way pens are now: loads of functional but fairly low-quality stuff, some fairly nice but affordable stuff, and some stratospheric gold-plated bricks for the military and enthusiasts.
In the middle is a large ecosystem of ink manufacturers, lathe makers, laser engravers, packaging companies and logistics and so on and on that are involved.
The explosive, exponential winner-takes-all scenario, where OpenAI and its investors literally ascend to godhood and the rest of humanity lives forever under their divine bootheels, doesn't seem to be the trajectory we're on.
How many of us know how to use machine code? And we call ourselves software engineers.
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.