>Actually that is done by bolting more "fact checking" layers on top. Even that does not fix it very well..
Reinforcement training is done as well, and it fixed the problem well enough that we use these models on a daily basis now.
>So at a fundamental level, LLMs have not really progressed. On a superficial level, they have,
No, those fixes aren't superficial. They're the same fixes you have in your brain. You fact-check too; people also hallucinate, and people with brain damage hallucinate even more. You can bypass the mechanisms in your brain that prevent hallucination by taking drugs.
Essentially the brain is a big hallucination machine with mechanisms, both low-level and high-level, to prevent it. We even consciously fact-check ourselves and double-check our own work. Is that superficial? No.
You measure progress by looking at how LLMs are used. At first they were chatbots. Then they became autocomplete. Now most people barely write code by hand anymore; they use LLMs as agents. That is the most disruptive thing ever to happen to programming. This isn't an investor thing. This is REALITY.
>So at a fundamental level, LLMs have not really progressed. On a superficial level, they have, but that is only because marketing wanted to show the "progress" over a short amount of time, so that the "uninitiated" will extrapolate that to mean some god like AI in near future, raking in all the investor money...
This is you hallucinating. Investor money is pouring in because they are closer than ever to creating AI that can replace developers, and companies will pay top dollar for that. That's why AI is making money. Very few investors are speculating on a god AI... but a few are, and those are the people throwing money at Yann's AMI venture, which is a huge gamble that could end with that money in the trash.
But LLM technology? We use it every day. It's already a validated technology.
>Smart move though. It is working very well....
I can ask an LLM, "hey, human society is changing before our very eyes. Nobody programs directly anymore." The LLM is not so stupid as to call that "superficial" progress. That's a smarter answer than a lot of the people here would give.
Six months ago, a post like this would get me 5 or 6 detractors. This thread got 2, plus a bunch of downvotes. People are realizing they're embarrassingly wrong. It'll hit you eventually, whether in the next couple of months or the next couple of years, simply because humanity is pouring so much research into this area that there is no way it won't progress.
For a good analogy, look at self-driving cars. HN used to be loaded with people saying it was a shit venture, totally useless, and that no progress had been made... well, now I regularly take Waymo cars everywhere. Investors were wrong about crypto, but they weren't wrong about self-driving.
I would say the HN crowd is just as stupid as investors, if not more so.