I think it’s hard to say. We simply don’t know much from the outside. Microsoft has had some pretty bad security lapses, for example around guarding access to Windows source code. I don’t think we’ve seen a bad security break-in at Google in quite a few years? It would surprise me if Anthropic and OpenAI had good security since they’re pretty new, and fast-growing startups have a lot of organizational challenges.
It seems safe to assume that not all the companies doing leading-edge LLMs have good security, and that the industry as a whole isn’t set up to keep secrets for long. Things aren’t locked down to the level of classified research. And it sounds like Zuckerberg doesn’t want to play the game that way.
At the state level, China has independent AI research efforts and will figure out whatever secrets exist on its own. It’s largely a matter of timing, which could matter a lot.
There’s still an argument to be made against making proliferation too easy. Just because states have powerful weapons doesn’t mean you want them in the hands of people on the street.