This may sound ludicrous, but consider that GPT-3 doesn't
actually understand the text it's outputting, so it's a bit of a mystery as to
why it outputs a given bit of text (other than blaming it on the model). The problem isn't just dangerous knowledge, but wrong knowledge and liability. If you were using the model to give out, say, medical advice, and it's wrong, and someone takes the wrong dose of a medication or gets bad guidance on what to do, who is at fault? The patient? The company running the program? OpenAI?
Either way, OpenAI isn't willing to bear the cost of someone getting injured.