Big models also mean lots of data, including plenty of unfiltered garbage in the training set. Nobody can manually review that much data; at this scale, automated filtering is all anyone can do. This gives the model a large attack surface: put it in front of critics determined to find those gaps, and it will eventually be made to produce something bad and embarrass itself.
Over the last few months we have seen attacks from the PC crowd on Google Translate, GPT-3, and other language models, including the famous AI Ethics firings. It is simply tricky to show off such models in this climate.
The PC crowd doesn't believe that language is fair or that concepts are neutral; instead, they see both as expressions of systems of power. Language models are therefore a natural target for them, since these models could amplify biases against their identity groups.
I find this critique hasty, especially because big language models are a nascent technology. We shouldn't throw the baby out with the bathwater!