Edit: Apparently not, based on your clarification; instead the researchers don't know any better than to march into a local maximum because they're only human and seek to replicate themselves. I assumed too much good faith.
(Arguably, it is the other way around: they aren’t focused on appealing to those biases, but driven by them, in that the perception of language modeling as a road to real general reasoning is a manifestation of the same bias which makes language capacity be perceived as magical.)
Not to mention your comment doesn't track at all with the most basic findings they've shared: that adding new modalities increases performance across the board.
They shared that with GPT-4 vs GPT-4V, and the fact that this is a faster model than GPT-4V while rivaling its performance seems like further confirmation.
-
It seems like you're assigning emotional biases of your own to pretty straightforward science.
The GP comment we're all replying to outlines a non-exhaustive list of very good reasons to be highly dismissive of LLMs. (No, I'm not calling it AI; it is not fucking AI.)
It is utterly laughable and infuriating that you're writing off legitimate skepticism about this technology as an emotional bias. Fucking ridiculous. We're now almost a full year into the full-bore open hype cycle of LLMs. Where are all the LLM products? Where's the market penetration? Businesses can't use it because it has a nasty tendency to make shit up when it's talking. Various companies and individuals are being sued because generative art is stealing from artists. Code generators are hitting walls of usability so steep, you're better off just writing the damn code yourself.
We keep hearing this "it will do!" "it's coming!" "just think of what it can do soon!" on and on and on, and it just keeps... not doing any of it. It keeps hallucinating untrue facts, it keeps getting the basics of its tasks wrong; for fuck's sake, AI Dungeon can't even remember if I'm in Hyrule or Night City. Advances seem fewer and farther between, with most of them being just getting the compute cost down, because NO business currently using LLMs extensively could be profitable without generous donations of compute from large corporations like Microsoft.
>they aren’t focused on appealing to those biases, but driven by them, in that the perception of language modeling...
So yes, in effect that is their point, except they find the scientists are actually compelled by what markets well, rather than intentionally going after what markets well... which is frankly even less flattering. As if the researchers who enabled this just didn't know better than to be seduced by some underlying human bias into a local maximum.
We all have biases in how we determine intelligence, capability, and accuracy. Our biases color our trust and ability to retain information. There's a wealth of research around it. We're all susceptible to these biases. Being a researcher doesn't exclude you from the experience of being human.
Our biases influence how we measure things, which in turn influences how things behave. I don't see why you're so upset by that pretty obvious observation.