A friendly and helpful AI assistant that doesn't have any safety guardrails will give you detailed instructions for how to build and operate a bioweapon lab in the same style it gives you a cake recipe; and it will walk you through the process of writing code to search for dangerous nerve agents with the same apparent eagerness as when you ask for an implementation of Pong written in PostScript.
A different AI, one which can be used to create lip-synced translations of videos, can also be used to create realistic fakes that say anything at all, and it can be combined with an LLM to make a much more convincing simulacrum. Even just giving these fakes cloned voices of real people makes them seem more real, and this has already been used for novel fraud.
Fraud is covered by the legal system.
I don’t know anything about nerve agents.
Fraud being illegal is why I used it as an example. Fully automated fraud is to one-on-one fraud as the combined surveillance apparatus of the Stasi at their peak is to a lone private detective. Or what a computer virus is to a targeted hack of a single computer.
Sign flip: https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
Also remember that safety work in AI, as AI is an active field of research, has to be forward-facing: it has to address what the next big thing could do wrong if the safety people don't stop it first, not just what the current big thing can do.
Your second point boils down to "this makes fraud easier", which is true of all previous advances in communication technology. Let me ask: what is your opinion of EU Chat Control?
> this information is already publicly available.
In a form most people have neither the time, the money, nor the foundational skills necessary to learn.
> let me ask what is your opinion of EU Chat Control?
I could go on for pages about the pros and cons. The TL;DR summary is approximately: "both the presence and the absence of perfect secrecy (including but not limited to cryptography) are existential threats to the social, political, and economic systems we currently have; the attacker always has the advantage over the defender[0], so we need to build a world where secrecy doesn't matter, where nobody can be blackmailed, where money can't be stolen. This is going to be extremely difficult to get right, especially as we have no useful reference cases to build on."
[0] Extreme example: use an array of high-precision atomic clocks to measure the varying gravitational time dilation caused by the mass of your body moving around, to infer what you just typed on the keyboard.
Even just the realization that 'logs from a chatbot conversation can go viral' has actual real-world implications.
Ah yes. Let's see:
- invasive and pervasive tracking
- social credit scores
- surveillance [1]
All done by adults, no need to worry
[1] Just one such story: https://www.404media.co/fusus-ai-cameras-took-over-town-amer...
https://pubs.acs.org/doi/10.1021/acssynbio.6b00108
https://strateos.com/strateos-control-our-lab/
These realities are more adjacent than you think. Our job as a species is to talk about these things before they're on top of us. Your smugness reveals a lack of humility, which is part of what puts us at risk. It makes you look bad.
All of these companies are building towards AGI; the complex ethics both of how an AGI is used and of what rights it might have as an intelligent being go well beyond racist slurs.
What are these teams accomplishing? Give me a concrete example of a harm prevented. “Pen is mightier than the sword” is an aphorism.
One could only do this by inventing a machine to observe the other Everett branches where people didn't do safety work.
Without that magic machine, the closest one can get to what you're asking for is to see OpenAI's logs of which completions for which prompts they're blocking; and if they do this with content from the live model, not just the original red-team effort leading up to launch, then any concrete example is lost in the noise of all the other results.