(Arguably, it is the other way around: they aren’t focused on appealing to those biases, but driven by them, in the sense that the perception of language modeling as a road to real general reasoning is a manifestation of the same bias that makes language capacity seem magical.)
Not to mention, your comment doesn't track at all with the most basic finding they've shared: that adding new modalities increases performance across the board.
They shared that with GPT-4 vs GPT-4V, and the fact that this is a faster model than GPT-4V while rivaling its performance seems like further confirmation.
-
It seems like you're assigning emotional biases of your own to pretty straightforward science.
The GP comment we're all replying to outlines a non-exhaustive list of very good reasons to be highly dismissive of LLMs. (No, I'm not calling it AI; it is not fucking AI.)
It is utterly laughable and infuriating that you're dismissing legitimate skepticism about this technology as an emotional bias. Fucking ridiculous. We're now almost a full year into the full-bore open hype cycle of LLMs. Where are all the LLM products? Where's the market penetration? Businesses can't use it because it has a nasty tendency to make shit up when it's talking. Various companies and individuals are being sued because generative art is stealing from artists. Code generators are hitting walls of usability so steep that you're better off just writing the damn code yourself.
We keep hearing this "it will do it!" "it's coming!" "just think of what it can do soon!" on and on and on, and it just keeps... not doing any of it. It keeps hallucinating untrue facts, it keeps getting the basics of its tasks wrong; for fuck's sake, AI Dungeon can't even remember if I'm in Hyrule or Night City. Advances seem fewer and farther between, with most of them amounting to getting the compute cost down, because NO business currently using LLMs extensively could be profitable without generous donations of compute from large corporations like Microsoft.
It's not an especially insightful or sound argument, IMO, and neither are random complaints about the capabilities of systems that millions of people use daily, your own claims notwithstanding.
And for the record:
> because NO business currently using LLM extensively could be profitable without generous donation of compute from large corporations like Microsoft
OpenAI isn't the only provider of LLMs. Plenty of businesses are using providers that run their services profitably, and I'm not convinced that OpenAI itself is subsidising these capabilities as heavily as it once did.
The fact that you don’t see utility doesn’t mean it is not helpful to others.
A recent example: I used Grok to write me an outline of a paper on military and civilian emergency response as part of a refresher class.
To test it out, we fed it scenario questions and compared its answers to our classmates' responses, all people with decades of emergency management experience.
The results were shocking. It was able to successfully navigate a large scale emergency management problem and get it (mostly) right.
I could see a not-so-distant future where we become QA checkers for our AI overlords.