I agree. Overall, the whole situation feels like we've just entered the atomic age and are proliferating plutonium while selling shiny radioactive toys [I'm actually pretty serious here: the effects of prolonged interaction with an AI haven't been evaluated yet, and technically there is even a possibility of it overriding a weak personality].
But it still feels much safer to let GPT-4 loose and assess the consequences than to develop GPT-8 in private and have it leak accidentally.