I wonder if it has to do with Meta recently joining the "Frontier Model Forum" industry group alongside Microsoft, Google, OpenAI, and Anthropic. AKA the group for regulatory capture via playing up "trust and safety". They're the ones pushing for regulations that could make it illegal to build models that are open or uncensored.
https://www.theguardian.com/technology/2023/jul/26/google-mi...
https://www.frontiermodelforum.org/updates/amazon-and-meta-j...
This whole group has a dystopian vibe to it, with mandatory positions its members must adopt:
“Member firms must publicly acknowledge that frontier AI models pose both public safety and societal risks, and publicly disclose guidelines for evaluating and mitigating those risks.”
In other words, every member must amplify the same safety tropes in order to force regulation on the rest of us.