Dear Senator [X],
I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary Committee today.
Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will bring new, high-paying jobs to our factories in our state.
While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.
Altman and his ilk try to claim that aggressive regulation (which would only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi-inspired AI scenarios.
Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.
Thank you, [My name]
While I generally share the view that _research_ should be unencumbered but deployment should be regulated, I take issue with your view that safety only matters once AIs are ready for "widespread use". A tool made available in a limited beta can still be harmful or misleading, or too easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.
For example, suppose next month you developed a model that could produce extremely high-quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately used it to make, e.g., highly realistic revenge porn. Because almost no one is aware of the stunning new quality of outputs your model produces, most people don't believe the victim when they assert that the footage is fake.
I would suggest that the first non-private (i.e. non-employee) release of a tool should make it subject to regulation. If I open a restaurant, on my first night I'm expected to be in compliance with basic health and safety regulations, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?
You make a great point here. This is why we need as much open source and as much wide adoption as possible. Wide adoption = public education in the most effective way.
The reason we are having this discussion at all is precisely because OpenAI, Stability.ai, FAIR/Llama, and Midjourney have had their products widely adopted and their capabilities have shocked and educated the whole world, technologists and laymen alike.
The benefit of adoption is education. The world is already adapting.
Doing anything that limits adoption or encourages the underground development of AI tech is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for.
Imagine if drug testing and manufacturing were subject to no regulation. As a consumer, you might be aware that some chemicals are very powerful and useful, but you can't be sure that any specific product contains the chemicals it says it has, that it was produced in a way that ensures a consistent product, that it was tested for safety, or what the evidence is that it's effective against a particular condition. Even if drugs from a range of producers are widely adopted, does the public really understand what they're taking, and whether it's safe? Should the burden be on them to vet every medication on the market? Or is it appropriate to have some regulation to ensure medications have their active ingredients in the amounts stated, are produced with high quality assurance, and are actually shown to be effective? Oh, no, says a pharma industry PR person. "Doing anything that limits the adoption or encourages the underground development of bioactive chemicals is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for."
If a team of PhDs can spend weeks trying to explain "why did the model do Y in response to X?" or figure out "can we stop it from doing Z?", then expecting "wide adoption" to force enough "public education" to defuse all harms, such that no regulation whatsoever is necessary, is ... beyond optimistic.
I can sell a webserver that gets used to host illegal content all day long. Should that be regulated too? Where does the regulation end? I hate that the answer to every question seems to be: just add more government.
Similarly it's not really good faith to assume everyone opposed to regulation in this field is seeking a lawless libertarian (or anarchist perhaps) utopia.
There are already laws against false advertising, misrepresentation etc. We don’t need extra laws specifically for AI that doesn’t perform well.
What most people are concerned about is AI that performs too well.
I would assert that just as I have the right to pull out a sheet of paper and write the most vile, libelous thing on it I can imagine, I have the right to use AI to put anyone's face on any body, naked or not. The crime comes from using it for fraud. Take gasoline for another example. Gasoline is powerful stuff. You can use it to immolate yourself or burn down your neighbor's house. You can make Molotov cocktails and throw them at nuns. But we don't ban it, or saturate it with fire retardants, because it has a ton of other utility, and we can just make those outlying uses illegal. Besides, five years from now, nobody's going to believe a damned thing they watch, listen to, or read.
I think you should continue to have the right to use whatever program to generate whatever video clip you like on your computer. That is a distinct matter from whether a commercially available video generative AI service has some obligations to guard against abusive uses. Personal freedoms are not the same as corporate freedom from regulatory burdens, no matter how hard some people will work to conflate them.
As I understand it, revenge porn is seen as problematic because it can lead to ostracization in certain social groups. Would it not be better to regulate such discrimination? The concept of discrimination is already recognized in law, and this would equally solve for revenge porn created with a camera. The use of AI is ultimately immaterial here; the concern is human behaviour as a product of witnessing the material.
The right approach is to grant people rights over their likeness, so you could use something more like copyright law.
Even if it's a real recording, you should still have control over it
Dear Senator [X],
It's painfully obvious that Sam Altman's testimony before the judiciary committee is an attempt to set up rent-seeking conditions for OpenAI, and to snuff out competition from the flourishing open source AI community.
We will be carefully monitoring your campaign finances for evidence of bribery.
Hugs and Kisses,
[My Name]
I have sent correspondence to my Congressmen and Senators about ten times. I have always received a decent reply (although often just saying there is nothing they can do), except for the one time I contacted Jon Kyl and unfortunately mentioned data about his campaign donations from Monsanto. I was writing about a bill he sponsored that I thought would have made it economically difficult for small farmers to survive and would have made community gardens difficult because of regulations. No response to that correspondence.
This is not a rebuttal to regulatory capture. It is in fact built into the model.
These "small companies" are feeder systems for the large company: a place for companies to rise to the level where they would come under the burden of regulation, at which point they are prevented from growing larger, making them very easy for the large company to acquire.
The small company has to sell or raise massive amounts of capital just to piss away on compliance costs. Most will just sell.
I’m sympathetic to your position in general, but I can’t believe you wrote that with a straight face. “I don’t know how it would do it, therefore we should completely ignore the risk that it could be done.”
I’m no security expert, but I’ve been following the field incidentally and dabbling since writing login prompt simulators for the Prime terminals at college to harvest user account passwords. When I was a Unix admin I used to have fun figuring out how to hack my own systems. Security is unbelievably hard. An AI eventually jailbreaking is a near certainty we need to prepare for.
I highly recommend Rob Miles’ channel on YouTube. Here’s a good one, but they’re all fascinating. It turns out training an AI to have the actual goals we want it to have is fiendishly difficult.
People are coalescing into “for” and “against” camps, which makes very little sense given the broad spectrum of technologies and problems summarized in statements like “AI regulation”.
I think it’s a bit like saying “software (should|shouldn't) be regulated”. It’s a position that cannot be defended because the term software is too broad.
Which one do you think is more important to convince?
https://www.eff.org/deeplinks/2015/04/remembering-case-estab...
If AI will be used by public institutions, especially law enforcement, we need it regulated in the same manner. A bad AI trained on biased data has the potential to be extremely dangerous in the hands of a cop who is already predisposed to racist behavior.
https://www.smh.com.au/national/nsw/maximise-profits-facial-...
AI is already being used by law enforcement and public institutions. In fact, so much so that perhaps this is a good link:
https://www.monster.com/jobs/search?q=artificial+intelligenc...
In both cases it's too late to do anything about it. AI is "loose". Oh, and I don't know if you've noticed, but governments have collectively decided the law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with "timely" defined as within 1 or 2 hours.
Waiting times are 8-10 hours (sometimes going up to days), and this is the normal situation now; it's not just a New Year's Eve or Friday-evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it needs to. And it can be fixed; at this point you'd have to give physicians and nurses a 50% raise, double the number employed, and 10x the number in training.
Government is just outright not doing this, and if one thing's guaranteed, this will keep getting worse, in direct violation of your rights in most states, for the next 10 years at minimum, but probably longer.
Does scikit-learn count, or are we just not going to bother defining what we mean by "AI"?
"AI" is whatever congress says it is? That is an absolutely terrible idea.
Not regulating the air we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?
Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that acts as the thing which makes that damage kinetic and physical. And that’s the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.
Want to protect people’s employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure it’s reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge via bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn’t obviate human responsibility, and a lack of responsibility on the part of humans who should’ve been responsible, and who instead cut corners, doesn’t mean that the blame falls on the tool that cut the corners, but rather on the corner-cutters themselves.
- Scamming via impersonation
- Misinformation
- Usage of AI in a way that could have serious legal ramifications for incorrect responses
- Severe economic displacement
Congress can and should examine these issues. Just because OP works at an AI company doesn't mean that company can't exist in a regulated industry.
I too work in the AI space and welcome thoughtful regulation.
These resources should be spent lessening the impact rather than trying to completely control it.
Likewise for activities that aren't nefarious, too. Whatever fears could be placed on blobs of code like "AI" are far more merited when placed on humans.
Great, how does that apply to China, or Europe in general? Or a group in Russia or somewhere else? Are you assuming every governing body on the surface of the earth is going to agree on the terms used to regulate AI? I think it's a fool's errand.
Do we really have to play this game?
If what you’re arguing for is not going to specifically advantage your state over others, and the thing you’re arguing against isn’t going to create an advantage for other states over yours, why make this about ‘your state’ in the first place?
The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.
That is painfully naive; the history of pork-barrel projects speaks otherwise.
The views of their constituents are probably in favor of special advantages for their constituents, so the one may imply the other.
I mean, some elected representatives may represent constituencies consisting primarily of altruistic angels, but that is…not the norm.
A lot of said constituents' views are, in practice, that they should receive special advantages.
You think voice actors and writers are not saying the same?
When do we accept capitalism as we know it is just a bullshit hallucination we grew up with? It’s no more an immutable feature of reality than a religion?
I don’t owe propping up some rich person’s figurative identity, or yours for that matter.
I agree with being skeptical of proposals from those with vested interests, but are you just arguing against what you imagine Altman will say, or did I miss some important news?
What would you say to a simple registration requirement? You give a point of contact and a description of training data, model, and perhaps intended use (could be binary: civilian or dual use). One page, publicly visible.
This gives groundwork for future rulemaking and oversight if necessary.
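To make the proposal concrete, here is one possible shape for such a one-page record. All field names and values are illustrative assumptions, not drawn from any actual bill or filing system:

```python
# Hypothetical sketch of the proposed public registration record.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelRegistration:
    point_of_contact: str              # who regulators can reach
    training_data_description: str     # short, plain-language summary
    model_description: str             # architecture / scale, roughly
    intended_use: str                  # binary per the proposal: "civilian" or "dual-use"

record = ModelRegistration(
    point_of_contact="safety@example.com",
    training_data_description="Public web text crawled through 2023",
    model_description="7B-parameter decoder-only language model",
    intended_use="civilian",
)

# Publicly visible, machine-readable disclosure.
print(json.dumps(asdict(record), indent=2))
```

The point is how little this asks of a developer: four fields, no approval gate, yet it gives regulators a directory to work from if rulemaking ever becomes necessary.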
If you sent it by e-mail or web contact form, chances are you wasted your time.
If you really want attention, you'll send it as a real letter. People who take the time to actually send real mail are taken more seriously.
IANA senator, but if I were, you'd have lost me there. The personal insults make it seem petty and completely overshadow the otherwise professional-sounding message.
Still, there are probably a lot of people like me who have heard it used (incorrectly, it seems) as an insult so many times that it's an automatic response :-(
> The American Heritage® Dictionary of the English Language, 5th Edition.
Yes, it's only sometimes used to indicate disapproval, but such ambiguity does not work in your favor here. It is better to remove that ambiguity.
The denotation may not be negative, but if you use ilk in what you see as a neutral way, people will get a different message than you're trying to send.
See also https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....
Altman can have an ulterior motive, but that doesn't mean we shouldn't strive for some sort of handle on AI safety.
It could be that Altman and OpenAI know exactly how this will look and the backlash that will ensue, such that we get ZERO oversight and rush headlong into doom.
Short term, we need to focus on the structural unemployment that is about to hit us. As the AI labs use AI to make better AI, it will eat all the jobs until we have a relative handful of AI whisperers.