If anyone releases all the weights of a model that does everything perfectly (or at least can use the right tools, which I suspect is much easier), that model is far too valuable to make disappear, and dangerous enough to do all the things people worry about.
The only way to prevent that is to have a culture of "don't release unless we're sure it's safe" well before reaching that threshold.
I'm happy with the imperfections of GPT-3.5 and 4, both for this reason and for my own job security. But ChatGPT hasn't even reached its first birthday yet; it's very early days for this.
https://www.lesswrong.com/posts/qy5dF7bQcFjSKaW58/bad-at-ari...