They only allow GPT-3 chatbots if the chatbot is designed to speak only about a specific subject and literally never says anything bad/negative (and we have to keep logs to make sure this is the case). Which is insane. Their reasoning to me was literally a 'what if' the chatbot "advised on who to vote for in the election". As if a chatbot in the context of a video game saying who to vote for were somehow dangerous.
I understand the need to keep GPT-3 private. There is a lot of potential for deception using it. But they are so scared of their chatbot saying a bad thing, and the PR fallout from that, that they've removed the possibility of doing anything useful with it. They need to take context more into account - a clearly labeled chatbot in a video game is different from a Twitter bot.