10 years into "we'll have self-driving cars next year"
We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"
Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?
I'm not at all saying it's impossible that some improvement will be discovered in the future that allows AI progress to continue at breakneck speed. But I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.
> it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling
What's annoying is that plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

I don't know about the rest, but I spoke up because I didn't want to hit a brick wall, I want to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let the con men come in, along with people so excited by success that they become blind to the pitfalls.
And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people who made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...
You need to model the business world and management more like a flock of sheep being herded by forces that mostly have nothing to do with what is actually going to happen in the future. It makes a lot more sense.
Those people always do that. Shouting about cryptocurrencies and NFTs from the rooftops 3-4 years ago, now completely gone.
I suspect they're the same people, basically get rich quick schemers.
But if you had been wrong and we now had superintelligence, the upside for its owners would presumably have been great.
... Or at least that's the hypothesis. As a matter of fact, intelligence is only somewhat useful in the real world :-)
This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI level.
A year ago I expected a golden age of local model intelligence integrated into most software tools, and more powerful commercial tools like Google Jules to be something used perhaps 2 or 3 times a week for specific difficult tasks.
That said, my view of the future is probably now wrong, I am just saying what I expected.
Realistically, we're 2.5 years into it at most.
I admit they don't operate everywhere - only on certain routes. Still, they are undoubtedly cars that drive themselves.
I imagine it'll be the same with AGI. We'll have robots / AIs that are much smarter than the average human and people will be saying they don't count because humans win X Factor or something.
The argument that self-driving cars should be allowed on public roads as long as they are statistically as safe as human drivers (on average) seems valid, but of course none of these cars have AGI... they perform well in the anticipated conditions they were trained on in simulation (as long as they have the necessary sensors, e.g. Waymo's lidar, to read the environment reliably), but will not perform well in emergency/unanticipated conditions they were not trained on. Even outside of emergencies, Waymos still sometimes need to "phone home" for remote assistance in knowing what to do.
So, yes, they are out there, perhaps as safe on average as a human (I'd be interested to see a breakdown of the stats), but I'd not personally be comfortable riding in one since I'm not senile, drunk, a teenager, a hothead, or distracted (using a phone while driving), etc. - not part of the class that is dragging the human safety stats down. I'd also not trust a Tesla, where penny-pinching, or just arrogant stupidity, has resulted in a sensor-poor design liable to failure modes like running into parked trucks.
> I'd not personally be comfortable riding in one since I'm not senile, drunk, a teenager, a hothead, or distracted (using a phone while driving), etc. - not part of the class that is dragging the human safety stats down.
The challenge is that most people think they're better-than-average drivers.

That's the main difference with a human driver: if I take an Uber and we crash, that driver is liable. Waymo would fight tooth and nail to blame anything else.
I don't care about SF. I care about what I can buy as a typical American, not as an enthusiast in one of the most technologically advanced cities on the planet.
And it took, what, two decades to get there? So no, we don't have self-driving, not even close. Those examples look more like hard-coded solutions for custom test cases.
They have failed in SF, Phoenix, and other cities that rolled out the red carpet for them.
I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.
It really hasn't.
The problem is that a GenAI system needs to understand not only the large codebase but also the latest stable version of every transitive dependency, of which there are typically hundreds or thousands.
Having it build a component with 10-year-old, deprecated, CVE-riddled libraries is of limited use, especially since libraries tend to be upgraded in interconnected waves - so that component will likely not even work anyway.
I was assured that MCP was going to solve all of this but nope.
MCP would allow it to instead get this information at run time from language servers, dependency repositories, etc. But it hasn't proven to be effective.
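To be concrete about the kind of plumbing MCP promises here, a minimal sketch (assuming the official MCP Python SDK's FastMCP helper; the server name and tool are hypothetical, with PyPI's public JSON API standing in for a dependency repository):

```python
# Hypothetical MCP server exposing a "latest_version" tool, so a model can
# look up current dependency versions at run time instead of relying on
# whatever (possibly deprecated) versions were in its training data.
import json
import urllib.request

from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("dependency-info")  # hypothetical server name

@mcp.tool()
def latest_version(package: str) -> str:
    """Return the latest published version of a PyPI package."""
    url = f"https://pypi.org/pypi/{package}/json"  # PyPI's public JSON API
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP client/agent to call
```

Whether the model actually calls such tools at the right moments is, of course, the part that hasn't proven out.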
I can't. GPT-4 was useless for me for software development. Claude 4 is not.
But we are going to see a huge explosion in how those models are integrated into the rest of the tech ecosystem. Things that a current model could do right now, if only your car/watch/videogame/heart monitor/stuffed animal had a good working interface into an AI.
Not necessarily looking forward to that, but that's where the growth will come.
And each successive model that has been released has done nothing to fundamentally change the use cases the technology can be applied to, i.e. those tolerant of a large percentage of incoherent mistakes - which aren't all that many.
So you can keep your 10x-better and 100x-cheaper models, because they are of limited usefulness, let alone a turning point for anything.
The explosion of funding, awareness, etc. only happened after the GPT-3 launch.
Around 2010 when I was at university, a friend did their undergraduate thesis on neural networks. Among our cohort it was seen as a weird choice and a bit of a dead-end from the last AI winter.
Nonetheless, it took OpenAI until Nov 2022 to reach 1 million users.
The overall awareness and breakthrough probably did not come in 2020.
Basically: what if GenAI is the Minitel, and what we want is the internet?
Human brains seem like an existence proof for what’s possible, but it would be surprising if humans also represent the farthest physical limits of what’s technologically possible without the constraints of biology (hip size, energy budget etc).
We’ve been building actuators for hundreds of years and we still haven’t got anything comparable to a muscle. And even if you build a better hydraulic ram or brushless-motor-driven linear actuator, you will still never achieve the same kind of behaviour, because the technologies are fundamentally different.
I don’t know where the ceiling of LLM performance will be, but as the building blocks are fundamentally different to those of biological computers, it seems unlikely that the limits will be in any way linked to those of the human brain. In much the same way, the best hydraulic ram has completely different qualities to a human arm: in some dimensions it’s many orders of magnitude better, but in others it’s much, much worse.
For text generation, it seems like the fast progress was mainly due to feeding the models exponentially more data and exponentially more compute power. But we know that the growth in data is over. The growth in compute has shifted from a steep curve (just buy more chips) to a slow curve (we have to build exponentially more factories if we want exponentially more chips).
I'm sure we will have big improvements in efficiency. I'm sure nearly everyone will use good LLMs to support them in their work, and they may even be able to do all they need to do on-device. But that doesn't make the models significantly smarter.
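To put rough numbers on the data side: Chinchilla-style scaling laws (Hoffmann et al., 2022) model loss as L(N, D) = E + A/N^α + B/D^β. A back-of-the-envelope sketch with their published fit constants (illustrative only, not a claim about any particular production model):

```python
# Chinchilla-style scaling law L(N, D) = E + A/N^a + B/D^b, with the fit
# constants published in Hoffmann et al. (2022). N = parameters, D = tokens.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N: float, D: float) -> float:
    return E + A / N**alpha + B / D**beta

N = 70e9  # hold model size fixed at 70B parameters
for D in (1e12, 2e12, 4e12, 8e12):  # keep doubling the training tokens
    print(f"D = {D:.0e} tokens -> predicted loss {loss(N, D):.4f}")
```

Each doubling of data shaves a smaller absolute slice off the loss, which is the whole problem once the easy doublings are gone.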
The thing about the latter 1/3rd of a sigmoid curve is, you're still making good progress, it's just not easy any more. The returns have begun to diminish, and I do think you could argue that's already happening for LLMs.
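A quick illustration of that tail (purely a toy logistic curve, no claim about where LLMs actually sit on it):

```python
# Toy example: step along a logistic (sigmoid) curve and print the marginal
# gain per step. Gains stay positive throughout but shrink past the midpoint.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

prev = sigmoid(-6)
for x in range(-5, 7):
    cur = sigmoid(x)
    print(f"x = {x:+d}  value = {cur:.3f}  gain = {cur - prev:.3f}")
    prev = cur
```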
There is a lag in how humans are reacting to AI which is probably a reflexive aspect of human nature. There are so many strategies being employed to minimize progress in a technology which 3 years ago did not exist and now represents a frontier of countless individual disciplines.
If you took a Tesla or a Waymo and dropped it into a tier-2 city in India, it would stop moving.
Driving data is cultural data, not data about pure physics.
You will never get to full self-driving, even with more processing power, because the underlying assumptions are incorrect. Doing more of the same thing will not achieve the stated goal of full self-driving.
You would need to have something like networked driving, or government-supported networks of driving information, to deal with the cultural factor.
Same with GenAI - the tooling factor will not magically solve the people, process, power and economic factors.
Or actual intelligence: something that observes its surroundings and learns what's going on, that can solve generic problems - which is the definition of intelligence. That's one of the obvious proofs that what everybody is calling "AI" is fundamentally not intelligent, so it's a blatant misnomer.
Absolutely, driving is cultural (all things people do are cultural), but given the tens of millions of miles driven by Waymo, it has clearly managed the cultural factor in the places where it has been deployed. Modern autonomous driving is about how people drive far more than about the rules of the road, even on the highly regulated streets of Western countries. Absolutely, the constraints of driving in Chennai are different, but what is fundamentally different? What demands an impossible leap in processing power to operate there?
Do you really think Waymos in SF operate solely on physics? There are volumes of data on driver behavior, when to pass, change lanes, react to aggressive drivers, etc.
Lol. If you dropped the average Westerner into Chennai, they would either: a) stop moving, or b) kill someone.
Decades of machine learning research would like to have a word.
3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.
It's the same thing with AI. You'd be blind or lying if you said it hasn't advanced a lot. People aren't denying that. But people are fed up with constantly being promised the moon and getting a cheap plastic replica instead.
The tech is rapidly advancing and doing good. But it just can't keep up with the bubble of hype. That's the problem. The hype, not the tech.
Frankly, the hype harms the tech too. We can't solve problems with the tech if we're just throwing most of our money at vaporware. I'm upset with the hype BECAUSE I like the tech.
So don't confuse the two. Make sure you understand what you're arguing against, because it sounds like we should be on the same team, not arguing against one another. That just helps the people selling vaporware.
This was never the case, and that is obvious to anyone who has ever been to a factory doing mass-produced plastic.
>Or self-driving, that is "just around the corner" for a decade now.
But it really is around the corner; all that remains is to accept it. That is, to start building and modifying road infrastructure and changing traffic rules to enable the effective integration of self-driving cars into road traffic.
Programmers who don't use AI will get replaced by those who do (not just by mandate, but by performance).
> 10 years into "we'll have self driving cars next year"
They're here now. Waymo does 250K paid rides/week.
Why don't you bring it up, then?
> There will be a turning point but it’s not happened yet.
Do you know something that the rest of us don't?