this is hyperbolic nonsense/fantasy
Today you can.
I don't think it is a stretch to think that in another six months there could be financial institutions giving other institutions API access through ChatGPT, and all it takes is one stupid access-control hole or bug for my sentence above to ring true.
Look at how simple and exploitable the various access-token breaches across APIs have been in the last few years, or even stupid-simple things like the aCropalypse "bug" from last week (it wasn't even a bug, really, just someone making a bad change to a function call's behavior, so existing callers silently misused it and nobody noticed).
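For the curious, the failure mode was roughly this. A minimal Python sketch of the pattern (illustrative only, not the actual Android code, and the filename is made up):

    # Sketch of the aCropalypse failure mode: the file gets re-opened in a
    # write mode that no longer truncates, so writing a smaller cropped
    # image leaves the tail of the original bytes recoverable in the file.
    original = b"A" * 100                 # stand-in for the full screenshot
    with open("shot.png", "wb") as f:
        f.write(original)

    cropped = b"B" * 10                   # the smaller, cropped version
    with open("shot.png", "r+b") as f:    # read/write, NO truncation
        f.write(cropped)

    leftover = open("shot.png", "rb").read()
    print(len(leftover))                  # 100, not 10: 90 stale bytes remain

One changed default (truncate vs. don't truncate) and every caller that assumed the old behavior started leaking data.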
It sounds like you're weaving science-fiction ideas about AGI into your comment. There's no safety issue here unless you think that ChatGPT will use API access to pursue its own goals and intentions.
I'm not saying that particular disaster is likely, but if lots of people give power to something that can be neither trusted nor understood, it doesn't seem good.
The thing about "it has no goals and intents" is that it's not true. It has them - you just don't know what they are.
Remember the Koan?
In the days when Sussman was a novice, Minsky once came to him
as he sat hacking at the PDP-6.
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.

Not with ChatGPT, but plenty of people have been doing this with the OpenAI (and other) models for a while now. LangChain, for instance, lets you use the GPT models to query databases and retrieve intermediate results, issue Google searches, or generate and evaluate Python code based on a user's query...
I guess the mundane aspect of "specialness" is just that, before, you'd have to explicitly code a program to do weird stuff with APIs, which is a task complex enough that nobody really bothered. Now, LLMs seem on the verge of being able to self-program.
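The pattern behind those agent setups is basically just a loop: show the model the conversation plus a tool menu, parse an "action" out of its reply, execute it, and feed the result back. A hand-rolled sketch of that loop (this is not LangChain's actual API; call_llm() and both tools are hypothetical stand-ins):

    # Hand-rolled agent loop in the LangChain style (illustrative only).
    def call_llm(prompt: str) -> str:
        """Stand-in for an API call to a GPT-style model."""
        raise NotImplementedError

    TOOLS = {
        "search": lambda q: f"(top web results for {q!r})",
        "sql":    lambda q: f"(rows returned by {q!r})",
    }

    def run_agent(question: str, max_steps: int = 5) -> str:
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            reply = call_llm(transcript +
                "Reply with 'TOOL <name>: <input>' or 'ANSWER: <text>'.")
            if reply.startswith("ANSWER:"):
                return reply[len("ANSWER:"):].strip()
            # Parse the requested tool, run it, feed the observation back.
            name, _, arg = reply.removeprefix("TOOL ").partition(":")
            tool = TOOLS.get(name.strip(), lambda _: "unknown tool")
            transcript += f"{reply}\nObservation: {tool(arg.strip())}\n"
        return "(gave up after max_steps)"

Twenty-odd lines of glue, which is the point: the hard part used to be the glue, and now the model supplies most of it.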
Dunbar's number is thought to be about as many relationships as a human can track. Beyond that, the network costs of communication get very high and organizations can end up in internal fights. At least that's my take on it.
We are developing a technology that currently has a small context window, but no one I know has seriously defined the limits of how much an AI could pay attention to in a short period of time. Now imagine a contextual pattern-matching machine that understands human behaviors and motivations. Imagine if millions of people every day told the machine how they were feeling. What secrets could it gather from them and keep? And if given motivation, what havoc could it wreak if it could loose that knowledge on the internet all at once?
I guess people think that next step with LLMs shouldn't be taken, but we know you can't put the brakes on stuff like this. Someone somewhere would add that capability eventually.
Most of the really bad actors have skills approximately at or below those displayed by GPT-4.
I think that's exactly right, but the point isn't that LLMs are going to go rogue (OK, maybe that's someone's point, but I don't think it's particularly likely just yet) so much as that they will enable humans to go rogue at much higher rates. Presumably in a few years your grandma could get ChatGPT to start executing trades on the market.