While Mark claims his Open Source AI is safer because it is fully transparent and many eyes make all bugs shallow, the latest technical report mentions an internal, secret benchmark that had to be developed because available benchmarks did not suffice at that level of capability. For child abuse material generation, it only notes that this was investigated, with no results from those tests or the conditions under which the model may have failed. They shove all that liability onto the developer while claiming any positive goodwill generated.
Their motivation to care about AI safety and ethics disappears entirely if fines punish not them, but those who used the library to build.
Reasonable for Meta? Yes. Reasonable for us to nod along when they misuse open source to accomplish this? No.