In my opinion, it was not a very strong position because the allure of money and trying to be the biggest is too strong (as we're seeing now), but I think it was at least coherent.
They could easily lose any power they had to guide the industry; it was a huge gamble. I remember reading a Harvard Business School study showing that first-mover advantage repeatedly turned out to be ineffective in the tech industry, as there is a long series of early winners dying out to later market entrants: Friendster to Facebook, Google outcompeting earlier search engines, the dotcom-era e-commerce companies that predated Amazon, etc.
They need full industry and societal buy-in - at an ideological level - to win this battle; they won't win through backroom dealing in a boardroom while losing 90% of their own staff.
The intuition for GPT-4 being on the way towards superhuman intelligence or persuasion abilities is that the only significant difference between it and GPT-3 appears to be the amount of compute power applied to training.
The previous US President was much less than superhuman, and he still managed to seriously threaten democracy in the US, and by extension cause a minor existential risk to civilization. Imagine what a more intelligent agent with enough attention to be personally persuasive instead of generically persuasive could achieve for itself, given a goal. (If you think that LLMs can't express goals or preferences, read some of the Bing Sydney transcripts.)
The biggest risks I see from AI over the next 2 decades are:
1. A complete devolution in what is true (because creating convincing fakes will be trivial). From politics to the way that people interact with each other on a daily basis, this seems like it has the potential to sow division.
2. The societal turmoil/upheaval arising from the mass unemployment (and subsequent unemployability) of millions of people without social safety nets in place to handle it. When people are suddenly unable to work and support their families, and those who have consolidated power have no interest in sharing the wealth, there will inevitably be pushback.
We currently almost have scifi-level AI we can talk to on our smartphones, which would have been fantastical only a few years ago.
What exactly are you looking for here, proof of concept? This stuff is inherently speculative - because if it wasn't speculative, we'd probably be dead.
OpenAI is (hypothetically) progressing toward AGI, therefore bad. Whether GPT specifically is half a foot into a 1000 km journey or potentially insignificant doesn't matter to them; they've pre-judged any potential for progress as all bad.
[1] If you search "Helen Toner" on YouTube, it's just a collection of videos of her talking at EA conferences. So she's a believer.