I just can't stand this kind of language. ChatGPT is quite useful, but have you tried asking it something serious that is not Twitter-worthy? We are not there yet. And in any case, this is not the first superhuman tool that humans have made. Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses, today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.
Building nukes and bioweapons isn't as good a business model as AGI, though. The government was incentivised to at least take some precautions with nukes, and nukes can't be developed and launched by individual bad actors. AGI isn't that comparable to nukes for numerous reasons. Bioweapons, maybe, but I wouldn't support companies researching bioweapons without regulation.
It's not a choice between living in fear and going full steam ahead. Both are idiotic positions to take. The reasonable approach here would be to publicly fund alignment research while slowing and regulating AI capability research to ensure the best possible outcomes and minimise risk.
You're basically arguing in favour of a free market approach to developing what has the potential to be a dangerous technology. If you wouldn't allow the free market to regulate something as mundane as automobile safety, then why would you trust the free market to regulate AI safety?
Companies that wish to develop state-of-the-art AI models should be required to demonstrate they are taking reasonable steps to ensure safety. They should be required to disclose state-of-the-art research projects to the government. And they should be required to publish alignment research so we can learn...
I agree. It's quite possible that humanity-ending AI is also not a good business, don't you agree?
I think the whole apocalypse discussion is a premature distraction for the moment. A more important discussion is what kinds of AI will end up making money. We have already seen how the internet turned from an infinite frontier into a more modern version of TV, dominated by a few networks with addictive buttons. Unfortunately we will see the same with AI, because such is the nature of money today, and capitalism is one thing that AI will not change. The applications of AI that make the most money will dominate, to the detriment of applications that only benefit small groups of people (such as the disabled).
> to publicly fund alignment research while
We don't really know if alignment research is what we need. Governments should fund AI research in general; otherwise it would be like the EU's early attempts to regulate AI. In fact, any kind of funding of AI ethics at the moment is dubious because the field is changing so fast. Stopping it for six months will not solve those ethical issues either; it will just delay their obsolescence by six months. This is stupid on the face of it.
Take the examples you give, nuclear weapons and biotech: as you say, both have huge potential for harm.
However both are regulated and relatively inaccessible to the average person.
While training models like ChatGPT is still relatively inaccessible for the average person, using them is potentially not.
One of the features of software is the almost zero cost of copying, which makes proliferation much more of an issue than for nukes or custom-made viruses [1].
ChatGPT is over-hyped, of course, but I think the genie-out-of-the-bottle issue is more real here than for military tech or biotech.
Having said all that, I do think the solution is largely around applying existing laws to these new tools.
[1] OK, if they escape, then they can self-replicate...
It is better than the promises we had in the 1980s, after which we went through an AI winter.
But it is going to take some time for people and corporations to figure out whether it is all hype, the next crypto, or whether there are real applications for this new technology.
Look at the cloud: S3 was launched in 2006, but you did not see much about it in Harvard Business Review until 2011. And even then, it was potential promises of what the cloud could do. Things did not really pick up until 2016.
I think it’s truly mind blowing a computer can now simulate some of the best conversations I’ve ever had on a variety of topics.
That would be new, useful, but not really twitter-worthy.
But we can't lie to ourselves about reality in order to prevent fear either.
The opinions from Elon Musk to Sam Altman to even Geoffrey Hinton, the person who started it all, are actually in line with the blog post.
Hinton even says things like these ChatGPT models can literally understand things you tell them.
Should we call climate scientists fear mongers because they talk about a catastrophic but realistic future? I think not, and the same can be said for the people I mentioned.
I personally think these experts are right, but you are also right in that "we are not there yet". But given the trajectory of the technology for the past decade we basically have a very good chance of being "there" very soon.
AGI that is perceptually equivalent to a person more intelligent than us is now a very realistic prospect within our lifetimes.
But they have evidence, measurements, a quantitative model, etc.
Where is the AGI FUD crowd's evidence? It's largely very opinionated arguments of rectal origin. Modern AI, by contrast, is a quantitative model that is completely known and can be readily analyzed. If there is some proof, or even substantial quantitative or empirical evidence, that those numbers are imminently dangerous, then we are talking.
We must be careful as we chart this new scary world of large language models and Artificial Intelligence and their impacts on humanity, but we do need to slow down on using scare tactics.
Please note I do not fault the author or anyone else for this representation of these new technologies. Nonetheless, I find it counterproductive to our discussions about setting guidelines and ensuring accountability in developing these models and their use.
Right now, it sounds more like the CRISPR discussion all over again.
My 2 cents for what it is worth.
The experiments to determine the answers must not be in the sole purview of corporations. Executives of corporations have a fiduciary duty only to shareholders.
So a completely liberal approach to traversing the space of pervasive AI in society, with a stated 10% probability of catastrophic results (the number is per Sam Altman), cannot be left to a decision-making process that only seeks to maximize profits.
To "be careful as we chart" decisively means it can not be treated as a mere innovation to be subjected to market forces. That's really the only fundamental issue. This isn't a 'product' and 'market' may happily seek a local maxima which then leads to the "10%" failed state. That's it. Address that and we can safely explore away.
So not fear mongering. Correctly categorizing.
Here's the thing: before ChatGPT, it was pretty much a given that society was under more or less zero risk of losing jobs to AI.
Now with GPT-4 that zero risk has changed to unknown risk.
That is a huge change and it is a change that would be highly unwise to not address.
I agree that only time will tell. But as humans we act on predictions of the future. We all have to make a bet on what that future will be.
Right now this blog post describes a scenario that, although speculative, is also very realistic. It is, again, unwise to dismiss the possibility of a realistic scenario.
That 6 month call is driven by people who write fanfic about AI.
There has been active research in AI safety for years and years, and it hasn't been without controversy, but these groups have done far more to ensure that safety in its various forms exists than the fanfic authors have. I think a six-month pause of "GPT-5" doesn't accomplish anything other than further fuelling radicals who buy into the fanfic to take action that harms people who work in AI.
The Department of Energy already runs many high-tech national laboratories, and we need a Sandia or Los Alamos for AI, for national, public use.
I would qualify your post with open-source usage of the model, training algorithms, and training data as well.
Instead of us hoping for Facebook or whoever to grace us with weights, for example, or having debates over copyrights, fine, let's have the US government (just like the efforts that gave us the internet) put together a program for national academies and laboratories, research universities (such as Stanford), and the private sector to work together.
For example, such a program could then ensure that the most up-to-date gigantic model X has a public variant with public weights, with regulations for usage by public and private interests.
And the issue of hoovering up content to create private models becomes moot, or at least far less problematic.
There’s something in my head that thinks, “This writer is out of touch” (even if they are not).
I admit my logic may be faulty.
For me the perspective is straightforward: even if ChatGPT is not it, there is the physical possibility of a relatively small improvement on human intelligence, just as we’re a relatively small improvement on chimps, or on Neanderthals. That’s just simple for me to get my head around.
Along with that, there are easy-to-follow “monkey’s paw” scenarios: the easiest way to end poverty is to make all humans extinct; the easiest way to end suffering is to make life on earth extinct. I can’t quite formulate a straightforward way to eliminate suffering while maximizing my humanist values. This is the alignment problem.
We’ve got Yann LeCun saying that slowing down or thinking about safety would just mean the Chinese get ahead. He’s also saying we understand LLMs more than we understand airplanes.
We’ve got people completely ignoring past examples of technological destruction or technological safety like nonproliferation or Asilomar.
We’ve got people saying GPT is simultaneously revolutionary and going to change everything, so it’s critical we forge forward… but also that it’s too dumb to change anything (it makes up info, etc.), and thus we should not be concerned with safety.
What is it about our field that is so gung ho? Are these all bad faith FOMO arguments? It’s hard to understand.
——
The one way I can make sense of it is as a religious experience. Our culture has deep, persistent roots in Christian eschatological mythology, and of course the coming of a benevolent next wave of intelligence slots into this nicely. Taleb states this clearly[0]: those who are pure of heart will be welcomed into the kingdom of heaven. Not a huge fan of this style of accidental religiosity.
[0] https://twitter.com/nntaleb/status/1642241685823315972?s=20
The biggest danger from AI that I see is that these models will only be able to be run by large corporations and governments, and we the users will be at their mercy as to what we are allowed to use them for.
I used Midjourney for free until now with an army of Discord accounts, because they simply didn't have any checks.
Specifically, what makes you so confident that someone won't end up creating an AGI that's unaligned? Or alternatively, if you believe an unaligned AGI might be created, why are you confident that it won't cause mass destruction?
I guess the way I see this is that even if you believe there is only a 5-10% chance that AGI could go rogue and, say, take out global power grids, why is this a chance worth taking? Especially if we can slow capability progress as much as possible while funding alignment research?
Seriously though, if you are interested in addressing the real, present-day harms of large language models (and the capitalists who deploy them), this letter is just the thing:
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, by Timnit Gebru (DAIR), Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), and Margaret Mitchell (Hugging Face):
https://www.dair-institute.org/blog/letter-statement-March20...
(They literally call out "Longtermism" as being elitist and the root of all evil.)
I mean, sure, one should look at one's feet from time to time to make sure one doesn't trip. However, these people come across as exclusively myopic, and uncompromising in their position at that.