this is hyperbolic nonsense/fantasy
Today you can.
I don't think it's a stretch to imagine that within another six months financial institutions could be giving other institutions API access through ChatGPT, and all it takes is one stupid access-control hole or bug for my sentence above to ring true.
Look how simple and exploitable various access-token breaches in various APIs have been over the last few years, or even simple stupid things like last week's aCropalypse "bug" (not even really a bug, just a bad change to a function call, with the resulting misuse spreading unnoticed).
It sounds like you're weaving science-fiction ideas about AGI into your comment. There's no safety issue here unless you think ChatGPT will use API access to pursue its own goals and intentions.
I'm not saying that particular disaster is likely, but if lots of people give power to something that can be neither trusted nor understood, it doesn't seem good.
Not with ChatGPT specifically, but plenty of people have been doing this with the OpenAI (and other) models for a while now. LangChain, for instance, lets you use the GPT models to query databases for intermediate results, issue Google searches, or generate and evaluate Python code based on a user's query.
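The pattern those libraries implement is simple enough to sketch in a few lines: the model's text output is parsed as a tool request, the tool actually runs (this is where real-world side effects happen), and the result is fed back to the model. This is a minimal illustrative sketch, not LangChain's actual API; `fake_llm`, `run_sql`, and the `TOOL:`/`FINAL:` protocol are all made-up stand-ins.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a GPT API call. A real agent would send the prompt
    # to the OpenAI API and get back free-form text to parse.
    if "SQL result" not in prompt:
        return 'TOOL: sql("SELECT balance FROM accounts WHERE id=42")'
    return "FINAL: the balance is 1300"

def run_sql(query: str) -> str:
    # Stand-in for a real database call. This is the step where the
    # model's text output becomes an action with actual consequences,
    # which is exactly why access control around it matters.
    return "1300"

def agent_loop(user_query: str, max_steps: int = 3) -> str:
    prompt = user_query
    for _ in range(max_steps):
        out = fake_llm(prompt)
        if out.startswith("FINAL:"):
            return out[len("FINAL:"):].strip()
        if out.startswith('TOOL: sql("'):
            query = out[len('TOOL: sql("'):-len('")')]
            # Append the tool result so the model can use it next turn.
            prompt += f"\nSQL result: {run_sql(query)}"
    return "gave up"
```

The point being: nothing in that loop asks whether the generated SQL (or shell command, or trade order) should be allowed. Whatever guardrails exist live entirely in the tool layer, which is why a single sloppy access-control check is all it takes.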
Most of the really bad actors have skills approximately at or below those displayed by GPT-4.
I think that's exactly right, but the point isn't that LLMs are going to go rogue (OK, maybe that's someone's point, but I don't think it's particularly likely just yet) so much as that they will help humans go rogue at much higher rates. Presumably in a few years your grandma could get ChatGPT to start executing trades on the market.