Very shallow assessment. First of all, it's not a generalist at all; it has zero concept of what it's talking about. Secondly, it gets confused easily unless you order it to keep context in memory. And thirdly, it can't perform if it doesn't regularly swallow petabytes of human text.
I get your optimism, but it's uninformed.
> To be fair, I’ve talked to a lot of people who cannot consistently perform at the mistral-12b level.
I can find you an old-school bot that performs better than uneducated members of marginalized and super poor communities. What is your example even supposed to prove?
> it’s hard to argue that HJ isn’t an intelligent entity.
What's HJ? If it's not a human, then it's extremely easy to argue that it's not an intelligent entity. We don't have intelligent machine entities; we have stochastic parrots, and it's weird to pretend otherwise when the algorithms are well-known. It's very visible there's no self-optimization in there and no actual learning, only adjusting weights (which is not what our actual neurons do, by the way). There's no motivation or self-drive to continue learning. Something has barely been "taught" to combine segments of human speech, and somehow that's a huge achievement. Sure.
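To make the "adjusting weights" point concrete, here's a minimal sketch of what a gradient update actually is. This is my own toy example (a hypothetical one-parameter model, not any real LLM's training loop), but the core operation is the same shape:

```python
# Toy illustration: the "learning" here is just this arithmetic,
# repeated over and over: nudge a weight against the loss gradient.
# Hypothetical single-parameter model y = w * x with squared-error loss.

def sgd_step(w, x, target, lr=0.1):
    pred = w * x
    grad = 2 * (pred - target) * x  # d/dw of (w*x - target)^2
    return w - lr * grad            # the entire "weight adjustment"

w = 0.0
for _ in range(50):
    w = sgd_step(w, x=1.0, target=3.0)
print(round(w, 4))  # converges toward 3.0
```

Scale that up to billions of parameters and you have the training procedure; nothing in it decides what to learn next or why.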
> It’s not that we aren’t on the cusp of general intelligence, it’s that we have a distorted idea of how useful that should be.
Nah, we are on no cusp of AGI at all. We're not even at 1%. Don't know about you, but I have a very clear idea of what AGI would look like, and LLMs are nowhere near it. Not even in the same ballpark.
It helps that I am not in the field and don't feel the need to pat myself on the back for having achieved the next AI plateau, one the field will not soon recover from.
Bookmark this comment and tell me I am wrong in 10 years, I dare you.