We can deal with the implications of dangerous AI if and when it becomes a problem.
What makes you assume that? We haven't yet been able to deal with the repercussions of globalised social media; we don't even completely understand its impacts. Nor have we dealt with the impacts of climate change.
AI seems like a far more encompassing and transformative technology than social media, so what makes you assume we will be able to deal with its problems in a timely fashion when they inevitably occur? We may well not be able to, and, as usual, unintended consequences will follow.
> Fear of the unknown shouldn’t be allowed to stop scientific progress.
Scientific progress at any cost, while being irresponsible about its major consequences, shouldn't be allowed either. It needs to be a balancing act; just pushing forward without even assessing the risks is a stupid game.
Yes, and they were very worried about igniting the entire atmosphere in a chain reaction when they built the first nuclear bomb, and they proceeded only after showing that this was very unlikely.
> We can deal with the implications of dangerous AI if and when it becomes a problem.
Can we though? What calculation is this based on?
What calculations is your conjecture that AI will spiral out of control based on? You're not allowed to cite Hollywood movies.
The science behind the nuclear bomb was well understood before any engineering activities began. We have no framework to discuss AGI because it doesn't exist. Your entire premise is based on the idea that we could accidentally create evil AGI without first developing a theory of intelligence. Maybe Newton should have been locked up before he developed his theory of gravity. The man set us on a path toward nuclear weapons, thank god he didn't accidentally build a nuke.
Not "will", but a very plausible outcome:
1. An intelligence greater than human intelligence can outthink humans.
2. Artificial intelligence is effectively alien intelligence, and will not innately share human values or thought processes, and could thus be very unpredictable.
3. Artificial intelligence will not have the same physical constraints that humans do (innumerable copies, lack of physical boundaries), and so our ordinary intuitions around containment will not necessarily work.
4. The usefulness of AI means it will be deployed everywhere, controlling and monitoring many things. Combined with the above properties, it will be very difficult to contain, detect, subvert, or eliminate.
5. Millions or billions of AIs will be created, many of which will eventually match or exceed human intelligence. To mitigate the risk to humans, alignment has to go right every single time; it has to fail only once.
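The "has to go right every single time" point can be made quantitative. As a rough sketch (the numbers below are illustrative assumptions, not claims from this thread): if each deployed AI independently has some small probability p of a catastrophic alignment failure, the chance that at least one of n deployments fails is 1 - (1 - p)^n, which climbs toward certainty as n grows.

```python
def p_any_failure(p: float, n: int) -> float:
    """Probability that at least one of n independent deployments fails,
    given a per-deployment failure probability p (illustrative model only)."""
    return 1.0 - (1.0 - p) ** n

# Even a one-in-a-million failure rate compounds across a million deployments:
print(p_any_failure(1e-6, 1_000_000))  # roughly 0.63, i.e. more likely than not
```

The independence assumption is of course a simplification, but the qualitative point survives: small per-instance risks multiplied across enormous numbers of instances do not stay small.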
You know, the completely obvious properties that anyone who knows anything about computers could come up with, if they bothered to give this matter some actual thought instead of making the usual arrogant assumption of human superiority and mastery over dumb machines.
> We have no framework to discuss AGI because it doesn't exist.
Which also means we have no framework from which to build safe AIs that don't want to kill us, experiment on us, exploit us, or ...
> Your entire premise is based on the idea that we could accidentally create evil AGI without first developing a theory of intelligence.
We tamed fire before understanding chemistry. We invented catapults and crossbows before understanding elasticity and kinematics. We domesticated animals and created agriculture before understanding genetics. We built bridges and buildings before understanding civil engineering. We even designed computers (Babbage's Analytical Engine) before understanding computation (Turing machines, lambda calculus). There's a long history of inventions preceding understanding.
For all we know, we're one simple modification to the transformer architecture away from truly general artificial intelligence. We don't know what we don't know, and the anti-doomers are blatantly overconfident about what we don't know.
Also, my conclusions do not even require that we lack a theory of intelligence. Software bugs will apply to alignment code too, and alignment has to go right every time, per above. I don't think people have a proper appreciation of the long list of hazards here.