> What is stopping our adversaries from developing malicious AI models and unleashing them on us?
That fear is a big part of OpenAI’s reasoning for not open sourcing their models. So in the immediate term, I’d say malicious uses are limited by the models’ locked-down nature. Of course, that will eventually end. The key research that makes these models possible is open, and access will eventually be democratized.
My personal take, which I know is controversial, is that by locking down these models while still making them available over a GUI/API, the world can better prepare itself for the eventual AI onslaught. Just raising awareness that the tech has reached this level is helpful. I’m still not sure how we’ll deal with it when the bad actors come, though.