Instead, existing AI companies are using the government to raise the barrier for newcomers to enter the field. A regulation requiring every AI company to run a testing regime staffed by a 20-strong team is easy for incumbents to meet, but impossible for newcomers.
Now, this is not to deny that there are genuine risks in AI - but I'd argue that these will be exploited, if not by US companies, then by others. And the best weapon against AI might in fact be AI. So pulling the ladder up behind the existing companies might turn out to be a major mistake.
The regulation in the article is about AIs assisting in the production of weapons of mass destruction, and mentions nuclear and biological weapons. Yann LeCun posted this yesterday about the risk of runaway AIs deciding to kill or enslave humans, but both arguments result in an oligopoly over AI:
> Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
> They are the ones who are attempting to perform a regulatory capture of the AI industry.
> You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.
> ...
> The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet.
> What does that mean for democracy?
> What does that mean for cultural diversity?
- the government must regulate the internet to stop the spread of child pornography
- the government must regulate social media to stop calls for terrorism and genocide
- the government must regulate AI to stop it from developing bio weapons
...etc. It's always easiest to push regulation via these angles, but then that regulation covers 100% of the regulated subject, rather than the 0.01% that was the "intended" subject.
"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," he told the news outlet. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."
https://www.businessinsider.com/andrew-ng-google-brain-big-t...
E.g. "What tests did you run? What results did you get? Where did you publish those results so they can be referenced?"
Unfortunately, this seems to be more targeted at banned topics.
No "How I make nukulear weapon?" is less interesting than "Oh, our tests didn't check whether output rental prices were different between protected classes."
Mandating open and verified test results would be an interesting, automatable, and useful regulation around ML models.
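A minimal sketch of what such an automatable check could look like, using the rental-pricing example above (predict_rent and the profile fields are hypothetical stand-ins for whatever model is actually under test):

import statistics

def predict_rent(profile):
    # Hypothetical placeholder; a real audit would call the model under test.
    return 1500.0 + 10.0 * profile["income_decile"]

def mean_price(profiles):
    return statistics.mean(predict_rent(p) for p in profiles)

# Paired applicant profiles, identical except for a protected attribute.
group_a = [{"income_decile": d, "group": "A"} for d in range(1, 11)]
group_b = [{"income_decile": d, "group": "B"} for d in range(1, 11)]

gap = abs(mean_price(group_a) - mean_price(group_b)) / mean_price(group_a)
assert gap < 0.01, f"price disparity {gap:.2%} exceeds published threshold"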
If only a few organisations can create competitive AI, no one can compete with them if they turn out to be less than ideal.
People are skeptical that introducing the regulatory threshold has anything to do with increasing public safety or accountability, suspecting instead that it pulls the ladder up to stop others (or open-source models) from catching up. This is a pointless, self-destructive endeavour in either case, as no other country is going to comply with these regulations, and if anything they will view them as an opportunity to help companies local to their jurisdiction (or their national government) catch up.
The other problem is that asking why software should be different if it can affect someone's life or livelihood is quite a broad ask. Do you mean self-driving cars? Medical scanners? Diagnostic tests? I would imagine most people agree with you that this should be regulated. If you mean "it threatens my job and therefore must be stopped" then: welcome to software, automating away other people's jobs is our bread and butter.
Government cannot regulate it.
Hate to be the nitpicker but "defensible moat" implies the moat itself is what needs protecting :)
That assumes the threat isn't complete annihilation of humanity, which is what's being claimed. That assumption is the weak link, and is what should be attacked.
Again, if we assume that AI poses an existential risk (and to be clear, I don't think it does), then it follows that we should regulate it analogously to the way in which we regulate weapons-grade plutonium.
Precisely. And the same governments will make stealing your data and IP legal. I believe that’s how corruption works - pump money into politicians and they make laws that favour oligarchs.
Too many small players have made the jump to the big leagues already for those who don’t want competition.
If some people are going to have the tech, it will create a different kind of balance.
Tough issue to navigate.
Dual use of artificial-intelligence-powered drug discovery
https://www.nature.com/articles/s42256-022-00465-9.epdf
Interview with the lead author here: "AI suggested 40,000 new possible chemical weapons in just six hours / ‘For me, the concern was just how easy it was to do’"
https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.
"Potentially lethal molecules" is a far cry away from "molecule that can be formulated and widely distributed to a lethal effect." It is as detached as "potentially promising early stage treatment" is from "manufactured and patented cure."
I would argue the Verge's framing is worse. "Potentially lethal molecule" captures _every_ feasible molecule, given that anyone who has worked on ADMET is aware of the age-old adage: the dose maketh the poison. At a sufficiently high dose, virtually any output from a drug design algorithm, be it combinatorial or 'AI', will be lethal.
Would a traditional, non-neural-net algorithm produce virtually the same results given the same objective function and a priori knowledge of toxic drug examples? Absolutely. You don't need a DNN for that; we've had the technology since the 90s.
What does work is to pass laws that forbid certain kinds of automation, such as for insurance claims or life-and-death decisions. These laws are needed even without AI, as automation is already doing such things to a concerning degree - for example, banning people due to a mistake, with no recourse.
Is the White House going to ban the use of AI in decision-making when dropping a bomb?
I don't see any problem with automation that makes mistakes; humans do too. The real problem is that it's often an impenetrable wall with no way to protest or appeal, and nobody is held accountable while victims' lives are ruined. So if any law is to be passed in this field, it should not be about banning AI, but rather about mandatory compensation for those affected by errors. Facing monetary losses, insurers and banks will fix themselves.
This doesn't just apply to insurance, etc, of course. Inaccessibility of support and inability to appeal automated decisions for products we use is widespread and inexcusable.
This shouldn't just apply to products you pay for, either. Products like Facebook and Gmail shouldn't get off with inaccessible support just because they are "free" when we all know they're still making plenty of money off us.
They're not suggesting banning anything; they're requiring you to make it safe and prove how you did so. That's not unreasonable.
[0] https://en.m.wikipedia.org/wiki/Extradition_law_in_the_Unite... [1] https://en.m.wikipedia.org/wiki/Personal_jurisdiction_over_i...
For example, imagine LLMs improve to the point where they can double programmer productivity while lowering bug counts. If Country A decides to Protect Tech Jobs by banning such LLMs, but Country B doesn't - could be all the tech jobs will move to Country B, where programmers are twice as productive.
> (b) The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
Oops, I made a regulated artificial intelligence!
import random
print("Prompt:")
x = input()
model = ["pizza", "ice cream"]
if x == "What should I have for dinner?":
    pick = random.randint(0, 1)
    print("You should have " + model[pick] + " for dinner.")
[1] https://www.whitehouse.gov/briefing-room/presidential-action...

> was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations
However, later it also says that reporting is required for "Companies developing or demonstrating an intent to develop".
If I start training a CNN on an endless loop, do I become subject to these reporting requirements?
Also the FLOPs requirement is not that high. An H100 does 3,958 fp8 teraFLOPS. So it would take,

> >>> (10 ** 23) / (3958 * (10 ** 12)) / 86400
> 292.422
292 days until you have a regulated model.
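For comparison, here's a quick sketch of the same arithmetic for both thresholds in the order (assuming the H100's peak fp8 rate; real utilization is far lower, so these are lower bounds):

# Days of continuous training on a single H100 to hit each reporting threshold.
H100_FLOP_PER_SEC = 3958e12  # peak fp8 rate, with sparsity

for threshold in (1e23, 1e26):  # biological-sequence and general thresholds
    days = threshold / H100_FLOP_PER_SEC / 86400
    print(f"{threshold:.0e} operations: {days:,.0f} days")

# Prints roughly 292 days for 10^23, and about 292,000 days (~800 years) for 10^26.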
Now I work on Artificial Stupidity...
Jokes aside, this is ludicrous. The president cannot enforce this regulation over open source projects, because code is free speech, going back to the 1990s AT&T v. BSD case law and many other cases that establish source code as an artistic form of expression, and thus protected speech.
The president has no authority to regulate speech, so they can pretty much fuck off.
An executive order is direction from the President to executive branch agencies. Penalties for other people for violating regulations, etc., drafted under an EO will depend on the EO; except for consequences for insubordination within the executive branch, there generally aren't penalties for violating an EO itself.
> "Nullum crimen sine lege" is a pretty fundamental part of the law; and Congress has not passed any laws that would give the President the authority to do these things.
While the actual text of the order (which, as is usual for executive orders, would include very specific references to authority) doesn't appear to be published, some authorities for the order, including the Defense Production Act, are cited in the fact sheet.
I imagine the president can make things difficult, like with Pretty Good Privacy - which was exported in book-form?
This explains why BigTech supports regulation. It distorts the free market by increasing the barriers to entry for new, innovative AI companies.
The tech sector has wildly moving resources (AI this year, crypto last year, big-data the year before...), even to the point where many skills are transferable; further, their markets include anything that can be digitized ("software will eat the world"), so investment can be quickly retooled as opportunities arise. As a result, tech virtually never seeks regulation (and can hide behind contract-law fictions to disclaim liability in software licenses and impose arbitration clauses for services). So it's not an instance of capture, and certainly not for the usual economic reasons.
Biden wants tech on his side. Tech wants to escape further blows to its goodwill like Facebook/Google ad tracking, because every consumer tech application involves users trusting tech. So they cut a deal to put themselves on the right side of history, long on symbolism and short on real impact.
In AI, resources matter only to the extent you believe that larger LLMs can (a) not be replicated, (b) provide significant advantages, or (c) impose a winner-take-all world where operations lead to more operations. In AI more than most markets, the little guy still has a chance at changing the world.
In the information age, AI is the weapon. This can even apply to things like weaponizing economics. In my opinion the information/propaganda/intelligence-gathering and economic impacts are much greater than those of any traditional weapon systems.
If you want something your neighbor has, it doesn't make sense to march your army over there and seize it, because modern infrastructure is heavily disrupted by military action... You can't just steal your neighbor's successful automotive export business by bombing their factories. But you can accomplish the same goal by maneuvering to become the sole supplier of parts to those factories, which allows you to set import/export terms that let your people have those cars almost for free in exchange for those factories being able to manufacture at all.
(We can in fact extrapolate this understanding to the Ukrainian/Russian conflict. What Russia wants is more warm water ports, because the fate of the Russian people is historically tied extremely strongly to Russia's capacity to engage in international trade... Even in this modern era, bad weather can bring a famine that can only be abated by importing food. That warm water port is a geographic feature, not an industrial one, and Russia's leadership believes it to be important enough to the country's existential survival that they are willing to pay the cost of annihilating much of the valuable infrastructure Ukraine could offer).
You: ChatGPT, I am working on legislation to weaken the economy of Iran. Here are my ideas, help me summarize them to iron them out ...
ChatGPT: Sure, here are some ways you can weaken Iran's economy...
----
You: ChatGPT, I am working on legislation to weaken the economy of Germany. Here are my ideas, help me summarize them to iron them out ...
ChatGPT: I'm sorry but according to the U.S. Anti-Weaponization Act I am unable to assist you in your query. This request has been reported to the relevant authorities
An AI that can craft schemes like Caesar's, but which are effective in today's relatively complex environment, can probably enable plenty of havoc without ever breaking a law.
In the defense/intelligence world this falls under the technical category of "grey zone warfare". Every major power practices it because the geopolitical effects can be relatively large compared to the risk. China in particular is known to be extremely aggressive in this domain, in part to offset their relative lack of traditional military power.
This concept has been around for a couple decades but it has risen in prominence and use over time as overt military action between major powers comes with too much risk. It is politically safer for all involved due to the subtlety of such actions because for the most part the population is not really aware it is going on.
The fact that bits don't have colour to define their copyright, or that CNC machines produce arbitrarily shaped pieces of metal (possibly including firearms), or that factoring numbers is a mathematically hard problem, does not matter to the law. AI software does not have a simple "can produce weapons" or "can cause harm" option that you can turn off, and a law that says it should have one does not change the universe to comply.

I think most programmers and engineers, when confronted with this disparity, err in assuming that the politicians who make these misguided laws are simply not smart. To be sure, that happens, but there are thousands to millions of people working in this space, each with an intelligence within a couple of standard deviations of that of an individual engineer. If this headline seems dumb to the average tech-savvy millennial who's tried ChatGPT, it's not because its authors didn't spend 10 seconds thinking about prompt injection. It's because they were operating under different parameters.
In this case, I think that the Biden administration is making some attempts to improve the problem, while also benefiting its corporate benefactors. Having Microsoft, Apple, Google, and Facebook work on ways to mitigate prompt injection vulnerabilities does add friction that might dissuade some low-skill or low-effort attacks at the margins. It shifts the blame from easily-abused dangerous tech to tricky criminals. Meanwhile, these corporate interests will benefit from adding a regulatory moat that requires startups to make investments and jump hurdles before they're allowed to enter the market. Those are sufficient reasons to pass this regulation.
That wording is by design. Laws like this are a cudgel for regulators to beat software with. Just like the CFAA is reinterpreted and misapplied to everything, so too will this law. “Can cause harm” will be interpreted to mean “anything we don’t like.”
"In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."
It's a leap to use the Defense Production Act for this, and unlikely to survive a legal challenge.
Even then, what legal test would you use to determine whether a model "poses a serious risk to national security, national economic security, or national public health and safety"?
I find the definition of AI to be eerily broad enough to encompass most programs operating on most data inputs. Would this mean that calls to FFmpeg or ImageMagick rolled into a script with some rand() calls would count as an AI system and be under federal purview and enforcement (whatever that means in this context)?
Not to worry, for a reasonable fee our surprisingly large team of auditors with even larger overheads can ensure you meet lengthy and ambiguous best practice checklists (which we totally did not just make up now) by producing enough compliance documentation to keep even the most anal of bureaucrats satisfied.
Many people spend time talking about the lives that may be lost if we don't act to slow the progress of AI tech. There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
While I’m cautious about over regulation, and I do think there’s a lot of upside potential, I think there’s an asymmetry between potentially good outcomes and potentially catastrophic outcomes.
What worries me is that it seems like there are far more ways it can/will harm us than there are ways it will save us. And it’s not clear that the benefit is a counteracting force to the potential harm.
We could cure cancer and solve all of our energy problems, but this could all be nullified by runaway AGI or even more primitive forms of AI warfare.
I think a lot of caution is still warranted.
The details matter. The parts being publicized refer to using AI assistance to do things that are already illegal. But what else is being restricted?
The weapons issue is becoming real. The difference between crappy Hamas unguided missiles that just hit something at random and a computer vision guided Javelin that can take out tanks is in the guidance package. The guidance package is simpler than a smartphone and could be made out of smartphone parts. Is that being discussed?
The real challenge for the government isn't about what can be managed legally. Rather, like many significant societal issues, it's about what malicious organizations or governments might do beyond regulation and how to stop them. In this situation, that's nearly impossible.
I am all in favor of stronger privacy and data reuse regulation, but not AI regulation.
Beyond dispute? Hardly.
But please do illustrate your point with some details and tell us why you think certain tools are too powerful for everyone to have access to.
I'd dispute that completely. All innovations humans have created have trended towards zero cost to produce. Many things (such as bioweapons, encryption, etc.) have become exponentially cheaper to produce over time.
To tightly control access, one would then need exponentially more control of resources and monitoring, and in turn a reduction of liberty.
To put it into perspective: encryption was once (and still might be) considered an "arm", so they attempted to regulate its export.
Try to regulate small arms (AR-15, etc.) today and you'll end up with kits where you can build your own for <$500. If you go after the kits, people will make 3D-printed firearms. Go after the 3D printer manufacturers and you'll end up with torrents where I can download an arsenal of designs (where we are today). So where are we at now? We're monitoring everyone's communication, going through people's mail, and still it's not stopping anything.
That's how technology works -- progress is inevitable, you cannot regulate information.
- "In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."
I assume this is a major constitutional overreach that will be overturned by courts at the first challenge?
Or else, all the AI companies who haven't captured their regulators will simply move their R&D to some other country - like how the OpenSSH (?) core development moved to Canada during the 1990s crypto wars. (edit: Maybe that's the real goal - scare away OpenAI's competition, dredge a deeper regulatory moat for them.)
> The third section authorizes the president to control the civilian economy so that scarce and critical materials necessary to the national defense effort are available for defense needs.
Seems pretty broad and pretty directly relevant to me. And hey, if people don’t like the idea of models being the scarce and critical resource, they can pick GPUs instead. Why would it be an overreach when you have developers of these systems claiming they’ll allow them to “capture all value in the universe’s future light cone?”
Obviously this can (and probably will) be challenged, but it seems a bit ambitious to just assume it’s unconstitutional because you don’t like it.
There are multiple things I suspect are unconstitutional here, the clearest being that this stuff is far outside the scope of the law it invokes. The White House is really just trying to regulate commerce by executive fiat. That's the exclusive power of Congress, so this is a separation-of-powers question.
That seems to be a key component. I imagine many AI companies will start with a default position that none of those apply to them, and will leave the burden of proof with the govt or other entity.
https://www.whitehouse.gov/ostp/ai-bill-of-rights/definition...
> An “automated system” is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure.
Who are these well informed tech people in the White House? The feds can't even handle basic matters like net neutrality or municipal broadband or foreign propaganda on social media. Why do you think they suddenly have AI people? Why would AI researchers want to work in that environment?
This whole thing just reads like they were spooked by early AI companies' lobbyists and needed to make a statement. It's thoughtless, imprecise, rushed, and toothless.
Doesn't this definitely include things like 'send email if subscribed'? Seems overly broad.
We're gonna need a RICO statute to go after these algos in the long run.
If Congress were responsible for exactly how every law was implemented, which inevitably runs headlong into very tactical and operational details, the Congress would effectively become the Executive.
Of course, if a department in the executive branch oversteps the powers granted to it by the legislative, affected parties have recourse via the judicial branch. It's imperfect but not a bad system overall.
I want a society where you have to prove competence in a field to regulate that field.
Same goes for the crypto guy: did regulations stop him from defrauding billions and hurting thousands of victims?
So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model? Welcome to a world where only OpenAI, Anthropic, Google, and Amazon are allowed to release foundation models.
Yes.
This is exactly what this EO is meant to do; it amplifies the fear of extremely large models for the sake of so-called "AI safety" nonsense.
The best counterweight against AI being controlled by a select few companies is making it accessible to all, including via open-source or $0 AI models.
A 'safety score' for a cloud-based AI model is hardly transparent.
Meta could just do a "private" release, knowing that the results will likely show up on the pirate bay.
All it takes is a single hero with a USB drive, to effectively release world changing technology.
You know, aside from the AIs that intelligence agencies and militaries use / will soon use.
> watermarked to make clear that they were created by A.I.
Good luck with that. It is fine that the systems do this. But if you are making images for nefarious reasons, then bypassing whatever they add should be simple.
screencap / convert between different formats, add / remove noise
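A minimal sketch of how little effort that takes, assuming a naive pixel-level watermark and Pillow installed (filenames are illustrative):

# Two of the transforms mentioned above: slight noise, then format conversion.
from PIL import Image, ImageFilter

img = Image.open("ai_generated.png").convert("RGB")
img = img.filter(ImageFilter.GaussianBlur(radius=0.5))  # smear low-order bits
img.save("laundered.jpg", quality=85)  # lossy re-encode in a different format

Metadata-based provenance marks fare even worse: a simple screencap discards them entirely.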
But what will tomorrow bring? As Sam Altman warns in https://twitter.com/sama/status/1716972815960961174, superhuman persuasion is likely to be next. What does that mean? We've already had the problem of social media echo chambers leading to extremism, and online influencers creating cult-like followings. https://jonathanhaidt.substack.com/p/mental-health-liberal-g... is a sober warning about the dangers to mental health from this.
These are connected humans accidentally persuading each other. Now imagine AI being able to drive that intentionally to a particular political end. Then remember that China controls TikTok.
Will Biden's order keep China from developing that capability? Will we develop tools to identify how that might be being actively used against us? I doubt both.
Instead, we'll almost certainly get security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies. But is unlikely to address the likely future problems that haven't materialized yet.
Yeah I think this is my biggest worry given it will enable incumbents to be even more dominant in our lives than bigtech already is (unless we get an AI plateau again real soon).
Some people already seem to have superhuman persuasion. AI can level the playing field for those who lack it, and give everyone the ability to see through such persuasion.
But the kind of AI that can achieve it has to itself be capable of what it is helping defend us from. Which suggests that limiting the capabilities of AI in the name of AI safety is not a good idea.
Looking at Bill Gurley's 2,851 Miles talk (https://12mv2.com/2023/10/05/2851-miles-bill-gurley-transcri...)
You understand that already people have been denied bail because "our AI told us so", with no legal way to question that?
What you are talking about is called Web3 and doesn't get a lot of love here. It's about empowering users to take full control of their own finances, identity, and data footprint, and I agree that it's the only sane way forward.
Smartphones & computers are a joke from a security standpoint.
The closest solution to this problem has been what people in the crypto community have done with seed phrases & hardware wallets. But this is still too psychologically taxing for the masses.
Until that problem of intuitive, simple & secure key management is solved, cryptography as a general tool for personal authentication will not be practical.
Literally requires the exact same cognitive load as using keys to start your car. The problem is that so many people got comfortable delegating all their financial and data risk to third parties, and those third parties aren't excited about giving up that power.
Nooo... if they talk about something being safe, they mean safe for THEM and their political interests. Not for you. They mean censorship.
So now we have an executive order with a very limited scope. Tomorrow, suddenly, the world's most powerful AI is announced - not in the United States.
Ok, so now we want to make sure that is safe. An executive order from the White House has no effect on it. This can continue until it's decided the stakes are getting too high. Then I suppose you could have the United Nations start trying to figure out how to maintain safety. Of course, there will be countries that simply ignore anything that is decided, hiding increasingly advanced systems with unknown purposes. It will probably take longer for nations to determine what defines "human values" so that AI respects them than it will to create another leap in AI capabilities.
Then there would simply be more concerns coming into play. Countries will go to war to try to stop other countries' nuclear ambitions; is it possible that AI poses enough of a threat that similar problems arise?
Basically, if AI is as potentially large a threat as we are envisioning, there are so many different potential threats that trying to solve them all while staying ahead of the pace of advancement seems unrealistic. While someone is trying to ensure we don't end up with systems going rogue, someone else needs to handle the fact that we can't have AI creating certain things. AI systems must not be allowed to tinker with viruses, for example, where unexpected creations can lead to extremely bad situations.
The initial stages of this have already begun, and time is ticking. I guess we'll see.
I don't see any way to continue to have global security without resolving our differences with China. And I don't see any serious plans for doing that. Which leaves it to WWIII.
Here is an article where the CEO of Palantir advocated for the creation of superintelligent AI weapons control systems: https://www.nytimes.com/2023/07/25/opinion/karp-palantir-art...
The threat spans the whole world - every single country. You will see the US using AI to mess with China and Russia. And you will see Russia and China using AI to mess with the US. No regulation will stop this; it will inevitably happen.
Maybe in 100 years, after we have wrought enough chaos on each other, you will have the equivalent of the Geneva Convention but for AI.
1. They’ve all used far more compute power than the regulatory threshold for indexing and collating.
2. They can be used to answer arbitrary questions, including how to kill oneself or produce weapons to kill others. Yes, including detailed nuclear weapons designs.
3. Can be used to find pornography, racist material, sexist literature, and on, and on… largely without censure or limit.
So… why the sudden need to curtail what we can and can’t do with computers?
They are being intentionally vague here. Define "most powerful". And what do they mean by "share". Do we need approval or just acknowledgement?
This line is a slippery slope towards requiring approval for any AI model, which effectively kills start-ups that cannot afford extensive safety precautions.
That is extremely premature. There are no real incumbents. The only companies with real cash flow from this are hardware vendors.
We still don’t know what commercial AI will look like - much less have massive incumbents.
Maybe we should be a bit more skeptical of privacy laws that conveniently make it harder to start a social networking site or search engine.
But AI still doesn’t have a clear application.
The US Government has been leading the way in collecting information without a warrant from friendly commercial interests... and they've been expanding further into tracking large groups of people without their consent. [I'm talking about people who are not under investigation nor are the current subject of interest ... yet]
Also:
"Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure."
I fear that pwning your own device to free it from DRM or other lockouts is coming to an end with this. We have been lucky that C++ is still used badly in many projects, and that has been an Achilles heel for many a manager wanting to lock things down. Now this door is closing faster with the rise of AI bug-catching tools.
How is "AI" defined? Does this mean US nuclear weapons simulations will have to completely rely on hard methods, with absolutely no ML involved for some optimizations? What does it mean for things like AlphaFold?
Does better branch prediction enable better / faster weapons development? Perhaps we need laws restricting general purpose computing? Imagine what "terrorists" could do if they get access to general purpose computing!
Edited to add:
https://www.whitehouse.gov/briefing-room/statements-releases...
Except for the first bullet point (and arguably the second), everything else is a directive to another federal agency - they have NO POWER over general-purpose AI developers (as long as they're not government contractors)
The first point: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."
The second point: "Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety."
Since the actual text of the executive order has not been released yet, I have no idea what even is meant by "safety tests" or "extensive red-team testing". But using them as a condition to prevent release of your AI model to the public would be blatantly unconstitutional as prior restraint is prohibited under the First Amendment. Prior restraint was confirmed by the Supreme Court to apply even when "national security" is involved in New York Times Co. v. United States (1971) - the Pentagon Papers case. The Pentagon Papers were actually relevant to "national security", unlike LLMs or diffusion models. More on prior restraint here: https://firstamendment.mtsu.edu/article/prior-restraint/
Basically, this EO is toothless - have a spine and everything will be all right :)
> After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.
Every other use of the act is to ensure that production of 'something' remains in the US. It'd even be possible to use the act to require that the model be shared with the government, but I'm not sure how they justify using the act to add 'safety' requirements.
Also, any idea if this would apply to fine-tunes? It's already been shown that you can bypass many protections simply by fine-tuning the model. And fine-tuning a model is much more accessible than creating an entire model.
>Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.
So the big American companies will be guided to watermark their content. AI-enabled fraud and deception from outside the US will not be affected.
--
>developing any foundation model
I'm curious why they specified this.
> Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
The watermark could be "Created by DALL-E3" or it could be "Created by Susan Johnson at 2023-01-01-02-03-23:547 in <Lat/Long> using prompt 'blah' with DALL-E3"
One of those watermarks seems not too bad. The other seems a bit worse.
The restrictions around government use of AI and data brokers are also refreshing to see.
Also, script kiddies aren't much of a threat in terms of physical weapons compared to cyberattack issues. Could one get an LLM to code up a Stuxnet attack of some kind? Are the regulators going to try to ban all LLM coding related to industrial process controllers? Seems implausible, although concerns are justified I suppose.
I'm sure the regulatory agencies are well aware of this and are just waving this flag around for other reasons, such as gaining censorship power over LLM companies. With respect to the DOE's NNSA (see article), ChatGPT is already censoring 'sensitive topics':
> "Details about any specific interactions or relationships between the NNSA and Israel in the context of nuclear power or weapons programs may not be publicly disclosed or discussed... As of my last knowledge update in January 2022, there were no specific bans or regulations in the U.S. Department of Energy (DOE) that explicitly prohibited its employees from discussing the Israeli nuclear weapons program."
I'm guessing the real concern is that LLMs don't start burbling on about such politically and diplomatically embarrassing subjects at length without any external controls. In this case, NNSA support for the Israeli nuclear weapons program would constitute a violation of the Non-Proliferation Treaty.
For example, AI watermarking only applies to government communications; it may be used as a standard for non-government uses, but it's not required.
I imagine the government can deem any AI to be a "serious risk" and prevent it from being made public.
They actually have staff and lobbyists who write these things; the president just signs off.
> Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
Even sans regulation, do non-incumbents really have a chance at this point? The most recent major player in the field, Anthropic, only reached its level of prominence by taking a critical mass of former OpenAI employees, and within a year reached $700 million in funding. Every company that became a major player in the AI space in the last 10 years either
1. Is an existing huge company (Google, Facebook, Microsoft, etc)
2. Secured 99.99th percentile venture funding within the first year of its inception due to its founders' preexisting connections/prestige
Realistically there isn't going to be a "Facebook" moment for AI where some scrappy genius in college cooks up a SOTA model and goes stratospheric overnight, even in a libertarian fantasyland just due to market/network effects. People just have to be realistic about the way things are.
This meme video encapsulates this perfectly.
https://youtu.be/-gGLvg0n-uY?si=B719mdQFtgpnfWvH
Mark my words: in five years or less we will be begging the governments of Earth to implement permanent global real-time tracking of every man, woman, and child on Earth.
Privacy is dead. And WE killed it.
This isn't even close to legislating. Look at some recent Supreme Court decisions and the amount of latitude federal agencies have, if you want to see something more closely resembling legislation from outside of Congress.
Dictatorship in another form.
Is there anyone here who actually believes this will do something? Sincere question.
The only people this impacts are the ones you don't need it to impact. The bit about detection and authentication services is also alarming.
"The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content." is pretty weak sounding. I'm more annoyed that they pretend that will actually reduce fraud.
In my civics class, I learned that Congress passes laws, not the President.
I guess a public school education only goes so far.
Of course, the President can abuse this power. That's not a failure of democracy; it's anticipated. And that potential for abuse of power is also a reason why Congress exists, not just to pass laws.
Hint: It's the President and executive orders are the President's directive on how the Federal government should execute on laws.
Of course “these are just recommendations”, but we’re getting there.
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
> Please don't use Hacker News for political or ideological battle. That tramples curiosity.
> Please don't fulminate. Please don't sneer, including at the rest of the community.
A lot of people on HN care deeply about AI and I imagine they're totally interested in discussing deepfakes potentially causing regulation. Just gotta be careful to mute the political sides of the debate, which I know is difficult when talking about regulation.
Also note that I posted a comment 10 days ago with a largely similar meaning without getting downvoted: https://news.ycombinator.com/item?id=37956770
Deepfakes don't affect money much.