In general, the liberal position that progress = good is wrong in many cases, and I'll be thankful to see AI get neutered. If anything, treat it like nuclear arms and have the world come up with heavy regulation.
Not even touching the fact that it is quite literally copyright laundering and a massive wealth transfer to the top (two things we often pass laws to protect against), the danger it poses to society is worth a blanket ban. The upsides aren't there.
We can walk and chew gum at the same time, and regulate two things.
Only because we know the risks and issues with them.
OP is talking about furthering technology, which is quite literally "discovering new things". Regulations on furthering technology (outside of literal nuclear weapons) would have to be along the lines of "you must submit your idea for approval to the US government before using it in a non-academic context if it could be interpreted as industry-changing or inventive", which means anyone with ideas will just move to a country that doesn't hinder its own technological progress.
ha, the big difference is that this whole list can actually affect the ultra wealthy. AI has the power to make them entirely untouchable one day, so good luck seeing any kind of regulation happen here.
As technology advances, such prohibitions are going to become less and less effective.
Tech is constantly getting smaller, cheaper and easier for a random person or group of people to acquire, no matter what the laws say.
Add in the nearly infinite profit and power motive to get hold of strong AI, and it'll be almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.
Make laws against it in one place, and your competitor in another part of the world, without such laws or their effective enforcement, will dominate you before long.
I wouldn't say that this is an additional reason.
I would say that this is the primary reason that overrides the reasonable concerns that people have for AI. We are human after all.
There's lots of evidence of our ability to control the development, use and proliferation of technology.
Both have happened at a rampant pace once the technology to copy music and copyrighted content became easily available and virtually free.
The same is likely to happen with every technology that becomes cheap enough to make and easy enough to use -- which is the direction technology as a whole is trending.
Laws against technology manufacture/use are only effective while the barrier to entry remains high.
All those examples put us in physical danger to the point of death.
See airlines, traffic control, medical equipment, government services, but we also regulate ads, TV, financial services, crypto. I mean, we regulate so many "tech" things for the benefit of society that this is a losing argument to take. There's plenty of room to argue the specifics elsewhere, but the idea that we don't regulate tech unless it's an immediate physical danger is crazy. Even global warming is a huge one, down to housing codes and cars etc. It's a potential physical danger hundreds of years out, and we're freaking out about it. Yet AI has the chance to do much more damage within a much shorter time frame.
We also regulate soft social-stability things all over, be it nudity, noise, etc.
I just think that comparing AI to nuclear weapons seems like hyperbole.
If only.