The split existed long prior to the board action, and extended up into the board itself. If anything, the board action is a turning point toward decisively ending the split and achieving unity of purpose.
From what I've read, Ilya has been pushing to slow down (less of the move-fast-and-break-things start-up attitude).
It also seems that Sam may have seen the writing on the wall and was already planning an exit; perhaps those rumors of him working with Jony Ive weren't overblown?
https://www.theverge.com/2023/9/28/23893939/jony-ive-openai-...
Wouldn’t a likely outcome in that case be that someone else overtakes them? Or are they so confident that they think it’s not a real threat?
Didn’t Microsoft already try this experiment a few years back with an AI chatbot?
You may be thinking of Tay?
I think danger from AGI often presumes the AI has become malicious, but an AI making mistakes while in control of, say, industrial machinery or weapons is probably the more realistic present concern.
Early adoption of these models as controllers of real-world outcomes is also where I could see such a disagreement suddenly becoming urgent.
Do you think that if chickens treated us better, we wouldn't kill them, given their intrinsic value? For the AGI superhuman x-risk folks, that's the bigger argument.