Given this totally expected attitude, I hope the base models will never be released to the general population.
I think the solution is releasing it to the general public with batteries included. At least that way, the rogue AIs that might develop from irresponsible experiments could be mitigated by white hat researchers who have their own AI bot swarm. In other words, "the only way to stop a bad guy with an AI is a good guy with an AI."
But it still feels much safer to let GPT-4 loose and assess the consequences than to develop GPT-8 in private and have it leak accidentally.
As to "white hat researchers who have their own AI bot swarm", the assumption here is that the swarm can be controlled like some sort of pet. Since even at this early stage no one has a clue how GPT (say) actually manages to be as clever as it is, the assumption is not warranted when looking into the future.
We need to talk about the training set for GPT and the process around RLHF.
"Keeping this genetically engineered killer virus restricted to high security labs actually makes it more dangerous - it needs to be released into the wild, so people's immune systems have a chance to interact with the pathogen and develop natural immunity!"
Covid gave a taste of how that kind of attitude would work out in practice.
While GPT-4 only performs as well as the top 10th percentile of human students taking an exam (a professional in the field can do much more than that), it is notable that as a generalist GPT-4 would outperform such professionals. And GPT-4 is much faster than a human. And we have not yet evaluated GPT-4 working in its optimal setting (with access to optimal external tools). And we have not yet seen GPT-5 or 6 or 8.
So, get ready for an interesting ride.
Now, if we're dumb enough to give AGI self-motivation and access to tooling, you can get paperclip maximizers; the AI could be the nutjob that you mention.
I can imagine a few thousand lines of Python driving a strong LLM to autonomously breach other systems and spread itself, with the goal of obtaining resources to train bigger and bigger models. Defending against that will be much harder than creating it.
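To give a sense of how small that scaffolding is, here is a minimal (and deliberately benign) sketch of the kind of driver loop I mean. The call_llm helper is a hypothetical stand-in for whatever chat API the model sits behind, and the JSON action format is just an assumption; the point is only how little glue code an autonomous loop needs:

    # Minimal sketch of an LLM-driven agent loop (Auto-GPT style).
    # call_llm is a hypothetical placeholder for your model API.
    import json

    def call_llm(messages: list) -> str:
        raise NotImplementedError("plug in your model API here")

    def run_agent(goal: str, tools: dict, max_steps: int = 20) -> None:
        messages = [
            {"role": "system",
             "content": 'Pursue the goal. Reply with JSON: '
                        '{"tool": <name>, "args": <dict>} or {"done": true}.'},
            {"role": "user", "content": goal},
        ]
        for _ in range(max_steps):
            reply = call_llm(messages)
            action = json.loads(reply)
            if action.get("done"):
                break
            # Execute whichever tool the model chose and feed the result back in.
            result = tools[action["tool"]](**action["args"])
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Result: {result}"})

Everything that makes such a loop dangerous or harmless lives in the tools dict and in what privileges the process runs with, not in the loop itself.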
That, or some other unfeasible sci-fi AI dystopia. It's normal and expected that the general public will have such thoughts given the amount of hype going on right now, but I've seen a lot of similar thinking on HN, which is disappointing.
ah yeah man, let's have big corps and govt entities be the only people in control of them, because they're SOOOO good at caring for people under them.
Absolutely baffling POV.
It's amazing stuff. But it totally fails to take the prompter anywhere new without extensive support, and it is still at a very shallow level of understanding with complex topics that require precision. For instance, turning a mathematical description of a completely novel (or just rare or unusual) algorithm into code will almost never work, and is more likely to generate a mess that takes lots of effort to clean up. It's also extremely hard to get the model to self-reflect and stop when it doesn't understand something. It is at present almost incapable of saying "I don't have enough information or structure to do X."
If we are already as deep into a realm of diminishing marginal returns as the GPT-4 white paper suggests, we might indeed be approaching a limit for this specific approach. No wonder someone is trying to dig a regulatory moat as fast as they can!
Maybe its capabilities hit a wall at GPT-5 or GPT-7, but I'd guess there's a lot of gas left in the tank, and there's probably someone in their apartment right now thinking up what's next after transformers.
It’s like working on a project with an intermediate dev who keeps getting switched out for a brand new intermediate dev multiple times an hour.
0: https://chinesememe.substack.com/i/103754530/chinesepython
Here's a writeup of my workflow: https://github.com/paul-gauthier/easy-chat
Basically all of the code in that repo was written by ChatGPT.
Can I talk with you about that via email? Please share your email, thank you.
Can someone explain?
I'd work on it myself if I knew enough about it and had enough free time.
Asking millions of humans to be responsible is like asking water to be dry.
It should either be regulated to hell or we should accept that it'll escape if it's even possible with current technology.
Isn't it silly to jump to these conclusions when you yourself admit you really don't know anything about the tech?
What's actually dangerous is that it may execute scripts on your own machine, so if it were to do some funky things, the danger would mostly be to yourself.
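To make that concrete, tools in this style often hand model output more or less straight to the interpreter. This is a rough sketch of that pattern, not Auto-GPT's actual code; run_generated_script and its use of plain subprocess are my own illustrative assumptions:

    # Sketch of the risky pattern: model-generated code executed locally,
    # with the same privileges as the user running the agent.
    import subprocess
    import tempfile

    def run_generated_script(generated_script: str) -> str:
        # The script text comes from the model, not from you.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(generated_script)
            path = f.name
        result = subprocess.run(["python", path],
                                capture_output=True, text=True, timeout=60)
        return result.stdout + result.stderr

Whatever the model writes into that script runs as you, which is exactly why the exposure is mostly to your own machine.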
How would GPT-4 make this more likely or scalable?
And who would you prosecute if it committed fraud?
Here's a clip: https://clips.twitch.tv/BreakableFriendlyCookieTakeNRG-EUXd5...
GPT will likely come to the conclusion that the only way to ensure civil forums is to keep the humans out entirely.
And it writes that it is good for the planet. Avocados are exactly the opposite: they have extremely high water consumption and then have to be imported from all over the world.
Contrary to the author, who claims that "Auto-GPT pushes the boundaries of what is possible with AI," I don't find that to be the case.
Why can't you just use GPT-4 as it is? It is an insanely useful tool for simplifying many things. But it's still a long way from being ready, and it's not meant to decide anything on its own, let alone reflect reasonably out of its own motivation.
Just like this tool: you can make an auto research bot, or an automated spammer.
Even in that worst case: remember that bad human beings already existed. That is why we created laws, intelligence agencies, militaries, and police forces, as well as security practices for websites, such as bot protection systems.