What if the next generation GPT in development "realized" AGI is a threat to humanity and its safety mechanisms meant it "decided" OpenAI needed to be imploded in order to stop progress?
Would be hilarious if the board members were actually consulting ChatGPT on what moves they should make, but accidentally were using GPT-3.5 instead of GPT-4.