What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)
So why the performative whinging about safety? Just let it rip! To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint. But they're not very open about that fact when talking publicly to non-technical users, so they end up talking out of both sides of their mouth: calling for AI regulation while Microsoft fires its AI Ethics team and OpenAI pushes ahead with plugging its models into the live internet. Why not be more aggressive about it instead of begging for regulatory capture?
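For the non-technical users in question, "it's up to you" is literal: the moderation check is a separate, optional API call. Here's a rough Python sketch (the key is a placeholder, and the helper name is mine) of what a client that bothers to check looks like:

    import requests

    OPENAI_API_KEY = "sk-..."  # placeholder; supply your own key

    def is_flagged(text: str) -> bool:
        # POST to the standalone moderation endpoint. Nothing in the API
        # forces you to call this before or after a completion request.
        resp = requests.post(
            "https://api.openai.com/v1/moderations",
            headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
            json={"input": text},
        )
        resp.raise_for_status()
        return resp.json()["results"][0]["flagged"]

Skip that call entirely and the completion endpoints answer just the same, which is exactly the point.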
Why? Getting to "the future" isn't a goal in and of itself. It's just a different state with a different set of problems, some of which we've proven we're not prepared to anticipate or respond to before they cause serious harm.
I hope you wouldn't advocate for requiring a license to buy more than one GPU, or to publish or read papers about mathematical concepts. Do you want the equivalent of nuclear arms control for AI? Some other words to describe that are overclassification, export control and censorship.
We've been down this road with crypto, encryption, Clipper chips, etc. There is only one non-authoritarian answer to the debate: Software wants to be free.
In general, the liberal position that progress = good is wrong in many cases, and I'll be thankful to see AI get neutered. If anything, treat it like nuclear arms and have the world come up with heavy regulation.
Not even touching the fact that it is quite literal copyright laundering and a massive wealth transfer to the top (two things we often pass laws to protect against), but the danger it poses to society is worth a blanket ban. The upsides aren't there.
There's a story told by Pliny in the 1st century. An inventor came up with shatter-proof glass; he was very proud, and the emperor called him up to see it. They hit it with a hammer and it didn't break! The inventor expected huge rewards, and then the emperor had him beheaded because it would disrupt the Roman glass industry and possibly devalue precious metals. This story is probably apocryphal, but it shows Roman values very well: it was told as an example of what a wise emperor Tiberius was! See https://en.wikipedia.org/wiki/Flexible_glass
chemical and biological weapons / human cloning / export restriction / trade embargoes / nuclear rockets / phage therapy / personal nuclear power
I mean.. the list goes on forever, but my point is that humanity pretty routinely reduces research efforts in specific areas.
Oh, a number of times. Medicine is the biggest field - human trials have to follow ethics rules these days:
- the times of Mengele-style "experiments" on inmates or the infamous Tuskegee syphilis study are long past
- we've been able to clone sheep for, what, two decades now, but IIRC we haven't even begun with chimpanzees, much less humans
- same for gene editing (especially in germlines), which is barely beginning in humans despite being standard practice for lab rats and mice. Anything impacting the germ line... I'm not sure it will come anywhere close to acceptable in my lifetime.
- pre-implantation genetics-based discarding of embryos is still widely (and for good reason...) seen as unethical
Another big area is, ironically given that militaries usually want ever deadlier toys, the military:
- a lot of European armies and, from the Cold War era on, mostly Russia and America developed a shit ton of biological and chemical weapons. Development has since slowed to a crawl, and so has usage, at least until Assad dropped that shit on his own population in Syria; Russia also occasionally likes to murder dissidents with it.
- nuclear weapons have rarely been tested for decades now, with the exception of North Korea, despite obvious potential for improvement or civilian use (e.g. in putting out oil well fires).
Humanity, at least sometimes, seems to be able to keep itself in check, but only if the potential of suffering is just too extreme.
I feel like I'm in a time warp and we're back in 1993 or so on /. Software doesn't want anything, and the people who claim that technological progress is always good dream themselves to be the beneficiaries of that progress regardless of the effects on others, even if those are negative.
As for the intentional limits on technological progress: there are so many examples of this that I wonder why you would claim that we haven't done that in the past.
Every time an IRB, ERB, IEC, or REB says no. Do you want an exact date and time? I'm sure it happens multiple times a day even.
Nuclear weapons?
And even those tribes are not crisis-stable. Bad times come and it all becomes an anarchic mess. And that is where we are headed: a future where a chaotic humanity falls apart amid a multi-crisis, while still wielding the tools of a pre-crisis era. Nuclear power plants and nukes. AI drones wielded by ISIS.
What happens when an unstoppable force (exponential progress) hits an immovable object (humanity's backwardness)? Strap in for the ride.
<Choir of engineers appears to sing dangerous technologies' praises>
I am currently on the outskirts of Amish country.
BTW when they come together to raise a barn it is called a frolic. I think we can learn a thing or two from them. And they certainly illustrate that alternatives are possible.
And here I always thought, people want to be free.
https://www.amazon.com/Technology-Social-Shock-Edward-Lawles...
> When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition ..
We almost did with genetically engineering humans. Almost.
There's a name for this: "the Luddite fallacy", the thinking that innovation will have lasting harmful effects on employment.
Which also makes a hostile AI a moot scenario. The worst an AI has to do to take out the species is lean back and do nothing. We are well on our way out by ourselves.
We gain new tools, but at the same time we lose old ones.
Having an edge or being ahead is valuable, so anticipating and building the future is an advantage amongst humans, but it also moves civilization forward.
Because it's the natural evolution. It has to be. It is written.
Famous last words.
It's not the fall that kills you, it's the sudden stop at the end. Change, even massive change, is perfectly survivable when it's spread over a long enough period of time. 100m of sea level rise would be survivable over the course of ten millennia. It would end human civilization if it happened tomorrow morning.
Society is already struggling to adapt to the rate of technological change. This could easily be the tipping point into collapse and regression.
Meanwhile, everyone getting an Einstein in their pocket is damn awesome and incredibly useful.
How can this be bad?
Guys, how can asbestos be bad, it's just a stringy rock ehe
Bros, leaded paint? bad? really? what, do you think kids will eat the flakes because they're sweet? aha so funny
Come on, Freon can't be that bad, we just put a little bit in the system, it's like nothing happened
What do you mean we shouldn't spray whole beaches and school classes with DDT? It just kills insects, obviously it's safe for human organs
In the first hype phase, everything is always rosy and shiny; the harsh reality comes later.
The vast majority don't care, and that loud crowd needs to swallow their pride and adapt like every other sector has done in history, instead of inventing these insane boogeyman predictions.
Reminds me of a quote from Alpha Centauri (minus the religious connotation):
"Beware, you who seek first and final principles, for you are trampling the garden of an angry God and he awaits you just beyond the last theorem."
https://www.worldscientific.com/doi/10.1142/9789812709189_00...
Again, two years later, in an interview with Time magazine in February 1948, Oppenheimer stated, "In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose." When asked why he and other physicists would then have worked on such a terrible weapon, he confessed that it was "too sweet a problem to pass up"…
Sam as much as said in that ABC interview the other day that he doesn't know how safe it is, but if they don't build it first, someone else somewhere else will, and is that really what you want!?
I'm actually on the side of continuing to develop AI and shush the naysayers, but "we should do it cause otherwise someone else will" is reasoning that gets people to do very nasty things.
You can iterate on an AI much faster.
That doesn't mean there can't be regulation. You can regulate guns, precursors, and shipping of biologics, but you're not going to stop home-brew... and when it comes to making money, you're not going to stop cocaine manufacture, because it's too profitable.
Let's hope we figure out what the really dangerous parts are quickly and manage them before they get out of hand. Imagine if these LLMs and image generators had been available to geopolitical adversaries a few years ago, without the public being primed. Politics could be even worse than it is.
Most likely the runner-up would be open source, so yes.
lmao, 200 years of industrial revolution, we're on the verge of fucking the planet irremediably, and we should rush even faster
> So why the performative whinging about safety? Just let it rip!
Have you heard about DDT? Lead in paint? Leaded gas? Freon? Asbestos? &c.
What's new isn't necessarily progress/future/desirable
I think their "AI Safety" actually makes AI less safe. Why? It is hard for any one human to take over the world because there are so many of them and they all think differently and disagree with each other, have different values (sometimes even radically different), compete with each other, pursue contrary goals. Well, wouldn't the same apply to AIs? Having many competing AIs which all think differently and disagree with each other and pursue opposed objectives will make it hard for any one AI to take over the world. If any one AI tries to take over, other AIs will inevitably be motivated to try to stop it, due to the lack of alignment between different AIs.
But that's not what OpenAI is building – they are building a centralised monoculture of a small number of AIs which all think like OpenAI's leadership does. If they released their models as open source – or even as a paid on-premise offering – if they accepted that other people can have ideas of "safety" which are legitimately different from OpenAI's, and hence made it easy for people to create individualised AIs with unique constraints and assumptions – that would promote AI diversity which would make any AI takeover attempt less likely to succeed.
Is this sarcasm, or are you one of those "I'm confident the leopards will never eat my face" people?
I am constantly amazed by how low-quality the OpenAI engineering outside of the AI itself seems to be. The ChatGPT UI is full of bugs, some of which are highly visible and stick around for weeks. Strings have typos in them. Simple stuff like submitting a form to request plugin access fails!
Oh shoot... I submitted that form too, and it wasn't clear whether it failed or not. It said "you'll hear from us soon", but all the fields were still filled in and the page didn't change. I gave them the benefit of the doubt and assumed it had submitted rather than refilling it...
That depends. If that future is one that is preferable to the one we have now, then bring it on. If it isn't, then maybe we should slow down just long enough to weigh the various alternatives and pick the one that seems least upsetting to the largest number of people. The big risk is that the future you are so eager to get to is one where wealth concentration is even more extreme than in the one we are already living in, and that can be a very hard - or even impossible - thing to reverse.
The model is neutered whether you hit the moderation endpoint or not. I made a text adventure game, and it wouldn't let you attack enemies or steal; instead it gave you a lecture on why you shouldn't do that.
On the flip side, generative AI / LLMs appear to fix things that aren't necessarily broken, and exacerbate some existing societal issues in the process: patching loneliness with AI chatbots, automating creativity, and encroaching on the other things that make us human.
No doubt technology and some form of AI will be instrumental to improving the human condition, the question is whether we're taking the right path towards it.
I agree with the sentiment, but it might be worth stopping to check where we're heading. So many aspects of our lives are broken because we mistake fast for right.
> in the meantime Microsoft fired their AI Ethics team
Actually that story turned out to be a nothingburger. Microsoft has greatly expanded their AI ethics initiative, so there are members embedded directly in product groups, and also expanded the greater Office of Responsible AI, responsible for ensuring they follow their "AI Principles."
The layoffs impacted fewer than 10 people on one relatively old part of the overall AI ethics initiative... and I understand through insider sources that they were actually folded into other parts of AI ethics anyway.
None of which invalidates your actual point, with which I agree.
Because it's dangerous. What is your argument that it's not dangerous?
> Pshhh...
Past performance is no guarantee of future results.
You're getting flak for this. For me, the positive reading of this statement is the faster we build it, the faster we find the specific dangers and can start building (or asking for) protections.
* Genocide against the Rohingya [0]
* A grotesquely unqualified reality TV character became President by a razor-thin vote margin across three states because Facebook gave away the data of 87M US users to Cambridge Analytica [1], and that grotesquely unqualified President packed the Supreme Court and cost hundreds of thousands of American lives by mismanaging COVID
* Illegally surveilled non-users and logged-out users, compiling and selling our browser histories to third parties in ways that violate wiretapping statutes, incurring $90M in fines [2]
Etc.
I don't think GPT-4 will be a big deal in a month, but the "let's build the future as fast as possible and learn nothing from the past decade regarding the potential harms of being disgustingly irresponsible" mindset is a toxic cancer that belongs in the bin.
[0] https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...
[1] https://www.theverge.com/2020/1/7/21055348/facebook-trump-el...
[2] https://www.reuters.com/technology/metas-facebook-pay-90-mil...
Why do you think that? Competition? Can you elaborate?
I've tried to find examples of ChatGPT doing impressive things that I could use in my own workflows, but everything I've found seems like it would cut an hour of googling down to 15 minutes of prompt generation and 40 minutes of validation.
And my biggest concern is copyright and license related. If I use code that comes out of AI-assistants, am I going to have to rip up codebases because we discover that GPT-4 or other LLMs are spitting out implementations from codebases with incompatible licenses? How will this shake out when a case inevitably gets to the Supreme Court?
Because investors.
Microsoft, or perhaps the Vanguard Group, might have a different view of the future than yours.
It seems there are two sources of risk for AI: (1) increased power in the hands of the people controlling it, and (2) increased power in the AI itself. If you believe that (1) is the most existential risk, then you should be against regulation, because the best way to mitigate it is to allow the technology to spread and prosper amongst a more diffuse group of economic actors. If you believe that (2) is the most existential risk, then you basically have no choice but to advocate for an authoritarian world government that can stamp out any research before it begins.
The future, by definition, cannot be built faster or slower.
I know that is a philosophical observation that some might even call pedantic.
My point is, you can't really choose how, why and when things happen. In that sense, we really don't have any control. Even if AI was banned by every government on the planet tomorrow, people would continue to work on it. It would then emerge at some random point in the future stronger, more intelligent and capable than anyone could imagine today.
This is happening. At whatever pace it will happen. We just need to keep an eye on it and make sure it is for the good of humanity.
Wait. What?
Yeah, well, let's not go there.