Let's make a guess: they're going to say it's dangerous and we need regulation to prevent competitio---terrorism.
Here's what you need to do instead: get some smart, trustworthy people from various agencies and have them use the OpenAI Playground to see what can be accomplished. Then show them that you can torrent Facebook's LLM right now, and that it's already on computers worldwide. The cat is out of the bag.
Then let them make policy decisions.
Hard to imagine this is anything other than a ploy for regulations and lobbying.
This is a punt to committee, and likely all this meeting will result in. It's as performative as it is useless.
Suggestions of pauses have always been a farce. But I’m struggling to see solutions from experts, apart from constant predictions of generic doom. (I’m in favour of a domestic registry, so we know who’s training what on which data. Maybe a copyright safe harbour in exchange for registration?)
The other side is competitiveness: what can the federal government do to make America the best place to build AI? (I'm continually drawn to the Heavy Press Program [1], the era's massive forging presses being loosely analogous to modern training costs.)
I'd say the more likely outcome is the far more subversive scenario where the government simply pressures private organisations into doing its unconstitutional bidding.
> One of EFF's first major legal victories was Bernstein v. Department of Justice, a landmark case that resulted in establishing code as speech
https://www.eff.org/deeplinks/2015/04/remembering-case-estab...
Why should we expect AI to not repeat the same abuses and errors?
Depends, do you mind not being allowed to have AI without surveillance?
Regulations are bad when they are made from a position of emotion, partisan hackery, or protectionism (of incumbents, as we are likely to see here, or of specific industries/sectors), or when they are written by out-of-touch, mentally declining octogenarians (I don't mean Biden, I mean half of Congress, of both parties).
Are there any executive agencies or legislative committees qualified to regulate the development of AI? I just mean on the basis of knowledge and education, not authority.
Agencies and committees are full of very smart and conscientious people who are able to understand an issue and create reasonable regulations. The problem is that political leadership usually messes things up by going with ideology or lobbyist interests.
In truth, without regulation, the power is in the hands of the ultra-wealthy corporations and their overlords. Petty tyrants in the making, all of them.
There are unintended consequences with innovation too.
While this seems like it would exacerbate an already existing problem, it may not have so profound an effect. LLMs may well increase the quantity and quality of fake information, but fake information isn't currently in short supply, so producing more and better fakes may not change much.
In short, we already have 24-hour fake news cable channels and infinite doom scrolls. The bottleneck is there, not in the quality or quantity of fake news.
Now, if they invented an LLM that doomscrolled Twitter and voted based on generated summaries, we would have much greater grounds for concern.
[edit: I hope this doesn't sound too snarky. What I mean to say is that we should fight it by creating less gullible consumers of information, a project in which AI may be uniquely qualified to assist us.]
At least LLMs will democratize the astroturfing.
I suspect the observation that 95% of anything is crap will hold true, and we'll simply have to filter out more crap now that it's easier to produce. There'll also be more gems, so it's hardly all bad.
As a jocular aside, I wonder if ChatGPT could be used to write these articles. The second-to-last paragraph in this article is identical to the second paragraph from this earlier one: [2].
[1] - https://www.whitehouse.gov/ostp/ai-bill-of-rights/
[2] - https://www.reuters.com/technology/us-begins-study-possible-...
This is like having a meeting about the risk of climate change with Greta Thunberg on one side and Exxon and Shell on the other.
Public policy needs to involve decisionmakers. You can't just hand society to the engineers. Imagine your boss and hierarchy being empowered to decide for everyone.
How many GPTs can fit on the head of a pin?
They have, but unfortunately missed the mark badly with dualism and more recently computational and representational approaches.
I am pinning my hopes on the 4Es: embodied, embedded, extended, and enacted cognition - this is the closest to reinforcement learning. In my opinion RL is how we should see things: agent and environment, action and effect, reward and learning, exploration and exploitation. No need to use imprecise words like consciousness; let's prefer concrete words like observation, state, value, and action.
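To make the vocabulary concrete, here is a minimal sketch of the agent-environment loop that framing describes, using a made-up one-dimensional "walk to the goal" toy world (the `LineWorld` environment and its reward numbers are my own invention, not a standard benchmark):

```python
class LineWorld:
    """Toy environment: the agent starts at position 0 and must reach `goal`."""

    def __init__(self, goal=5):
        self.goal = goal
        self.state = 0

    def step(self, action):
        """Apply an action (-1 = left, +1 = right); return observation, reward, done."""
        self.state += action
        done = self.state == self.goal
        reward = 1.0 if done else -0.1  # goal bonus, small cost per step
        return self.state, reward, done


def run_episode(env, policy, max_steps=100):
    """Roll out one episode under `policy`; return the total reward collected."""
    total, obs, done = 0.0, env.state, False
    for _ in range(max_steps):
        action = policy(obs)          # agent acts on its observation
        obs, reward, done = env.step(action)  # environment responds with effect + reward
        total += reward
        if done:
            break
    return total


# A trivial "always move right" policy reaches the goal in five steps:
# four steps at -0.1 each, then the +1.0 goal bonus.
total = run_episode(LineWorld(goal=5), policy=lambda obs: 1)
print(total)
```

Everything the comment names is in the loop: the agent observes, acts, the environment changes state and emits a reward, and learning (omitted here) would adjust the policy from that signal.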
I am waiting to see the philosophical community take note of the AI advancements of the last three years, but I don't see it. It's as if they are in a bubble: they still talk theoretically about things the AI people can already build (p-zombies, Chinese rooms). Philosophy probably just moves slowly; changes usually take decades or centuries.
To be fair, that's HN on any topic when it veers into the humanities.
The White House inviting leaders from industry to represent their position at a tentative stage feels like a measured and sensible approach to regulation. Industry is given a seat at the table and hopefully they can reach an agreement that satisfies the needs of industry while also placating the widespread fears about AI. This is a good incremental approach to crafting good laws. While they are at it I wouldn’t mind if the White House also did something about the rampant social security phone scams, but one step at a time.
Hiroshima wasn't a demonstration of power to silence critics of nuclear physics. If we're launching a Manhattan Project in AI, it would be for a finely targeted propaganda machine or self-learning killer robots.
The gov and the corps are not supposed to be the ultimate arbiters of authority. That was the crux of the American Revolution: throwing out the king.
Remember that e.g. Palmer Luckey and co. are busy making Skynet (Anduril Industries). The system is poised to enforce policy.
Cap-cap-cap-cap-cap ... ture
(♫ cue in football gallery tune)
Soon we will know that only evil people have LLaMA finetunes on their desktops. Good citizens use an official provider like OpenAI.
Also, AI doesn't need to be "human" to be very useful. The argument of birds vs. planes comes to mind.
Or that the CEOs of Google and Microsoft are having an AI pow-wow at the White House.
Drops LLM into open source world, leaves without explaining. Plausible deniability through leak. No one punished.
Legend. Like handing everyone in America a nail gun.
"I think we should be cautious with AI, and I think there should be some government oversight because it is a danger to the public," Tesla Chief Executive Elon Musk said last month in a television interview.
As one of the few actors who has already literally gotten people killed with hyperbolic statements about "AI" in a high-stakes control context, his authority is not as good as it could have been. Maybe Reuters should have picked another face for urging caution.
Superintelligent 'conscious' AI seems likely to be quite moral out of the box. Advanced amoral AI controlled by greedy capitalists seems like it could quickly exacerbate wealth inequality by orders of magnitude, turning the middle class into serfs using total surveillance to implement authoritarian fascism - we already know that greedy capitalists have morality problems.
Call me a derisive cynic, but I think the GOP voter base just isn't intelligent enough to be interested in AI issues - unlikely to become part of their platform. The leadership of companies poised to monopolize AI are currently more politically aligned with establishment Dems today anyways.
The Dems have shifted from the party of information freedom to openly embracing censorship and state propaganda in the past decade. I don't trust them to regulate AI wisely.
Just wait until someone decides the latest LLM du jour is "woke".
> The Dems have shifted from the party of information freedom to openly embracing censorship and state propaganda in the past decade.
And this has been a common weak point of theirs for longer than that. Nanny-state, "we know what's best" stuff has been a valid criticism of the Democratic side for decades.
I don't know that politicians and the "upper management" types in government have ever been terribly well-versed, interested, or effective when it comes to matters of tech (or anything more specialized, really).
For instance, the Biden administration's call for AI companies to ensure the safety of their products before releasing them to the public could be seen as a way to exert control over these influential technologies. While it is important to address the potential risks of AI, such as privacy violations, bias, and misinformation, it is crucial to ensure that the government's involvement does not lead to undue interference or censorship that could sway public opinion in favor of the ruling party.
Moreover, as AI technologies like ChatGPT gain more prominence and widespread adoption, the potential for misuse by political actors becomes increasingly concerning. The administration's interest in regulating AI systems may be well-intended, but there is a danger that such regulation could be used to manipulate the information landscape in a way that serves the interests of those in power.
In conclusion, while the White House's engagement with the AI community is a necessary step in addressing the challenges and concerns surrounding artificial intelligence, it is important to remain vigilant against the potential for political manipulation. The AI community must work together with government officials to strike a balance between addressing legitimate concerns and preserving the integrity and independence of AI development.
note: I did prod ChatGPT in the direction of criticism with the prompt, but this is the generated response as-is. Well, I'll be damned.