No.
The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things: regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads, in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.
One place where AI is being disruptive is research, where researchers are applying models in novel ways and making legitimate advances in math, medicine, and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on, it'll be writers, actors, etc.
> One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. The perfect example, from Wikipedia:

> In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK they are talking about DeepMind's AlphaFold. Related (also from Wikipedia):
> Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.

Yes, it's an example of ML used in science (other examples include NN-based force fields for molecular dynamics simulations, and meteorological models) - but a biologist or meteorologist usually cares little about how the software package they're using works (beyond knowing the different limitations of numerical vs. statistical models).
The whole "but look, AI in science" thing seems to me like a motte-and-bailey argument meant to imply the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.
Can you give an example, say in medicine, where AI made a significant advancement? That is, we're talking about neural networks and up (i.e., LLMs), not some local optimization.
"Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning"
In the scenario being discussed: if a bunch of companies hired a whole bunch of lawyers, marketers, etc., that might make salaries go up due to increased demand (though probably not by a huge amount, as tech isn't the only industry in the world). But that effect first requires companies to actually be hiring more of these types of people, so we should still see some of the increased output even if there's a limiting factor. We would also notice the salaries of those professions going up, which so far hasn't happened.
The tech is going to have to be absolutely flawless; otherwise the uncanny-valley nature of AI "actors" in a movie will be as annoying as when the audio and video aren't perfectly synced in a stream. At least that's how I see it.
For most of them I'm not seeing any of those issues.
A couple of years ago, we thought the trend was without limits - a five-second video would turn into a five-minute video, and keep going from there. But now I wonder whether there are built-in limits to how far things can go without a data center with a billion Nvidia cards and a dozen nuclear reactors serving it power.
Again, I don't know the limits, but we've seen in the last year some sudden walls pop up that change our sense of the trajectory down to something less "the future is just ten months away."
LLMs only exist because the companies developing them are so ridiculously powerful that they can completely ignore the rule of law, or if necessary even change it (as they are currently trying to do here in Europe).
Remember we are talking about a technology created by torrenting 82 TB of pirated books, and that's just one single example.
"Steal all the users, steal all the music" and then lawyer up, as Eric Schmidt said at Stanford a few months ago.
They want to ban states from imposing their own regulations on AI.
Let's take operating systems as an example. If there are great productivity gains from LLMs, why aren't companies like Apple, Google, and MS shipping operating systems with vastly fewer bugs and clearing out backlogged user feature requests?
They have trouble debugging obvious bugs, though.