Keeping AI models closed under the guise of “ethics” is, I think, the most unethical stance, as it makes people more dependent on the arbitrary decisions, goals, and priorities of big companies, instead of being allowed to define “alignment” for themselves.
It's the same sort of asymmetrical cost/benefit that tobacco companies and greenhouse gas emitters face. Of course, if you went to an online forum for oil companies, they'd be hopping mad about being prevented from drilling, and dismissive of global warming risks. It's no different here.
It gets old hearing about these "risks" in the context of AI. It's just an excuse used by companies to keep as much power as possible to themselves. The real risk is AI being applied in decision making where it affects humans.
It would be much safer if reading were strictly controlled. The companies would offer “reading as a service”, where regular people could bring their books to have them read. The reader would ensure that the book aligns with the ethics of the company and would refuse to read any work that either doesn't align with their ethics or teaches people anything dangerous (like chemistry or physics, which can be used to build bombs and other weapons).
There certainly needs to be regulation about use of AI to make decisions without sufficient human supervision (which has already proven a problem with prior systems), and someone will have to make a decision about copyright eventually, but closing the models off does absolutely nothing to protect anyone.
There certainly needs to be regulation about use of bioweapons without sufficient human supervision (which has already proven a problem with prior systems), and someone will have to make a decision about synthetic viruses, but closing the gain of function labs does absolutely nothing to protect anyone.
I can't speak about Meta specifically, but from my exposure, "responsible AI" people are generally policy doomers with a heavy pro-control, pro-limits perspective, or even worse: psycho cultists who believe the only safe objective for AI work is the development of an electronic god to impose their own moral will on the world.
Either of those options is incompatible with actually ethical behavior, like ensuring that the public has access instead of keeping it exclusive to a priesthood that hopes to weaponize the technology against the public 'for the public's own good'.
Yeah, that is the whole point - not wanting bad actors to be able to define "alignment" for themselves.
Not sure how that is unethical.
Historically, the people in power have been by far the worst actors (e.g. over a hundred million people killed by their own governments in the past century), so giving them the sole right to "align" AI with their desires seems extremely unethical.
It’s a slippery slope letting private or even public entities define “bad actors” or “misinformation”. And it isn’t even a hypothetical… plenty of factually true information about covid got you labeled as a “bad actor” peddling “dangerous misinformation”.
Letting private entities whose platforms have huge influence on society decide what is “misinformation” coming from “bad actors” has proven to be a very scary proposition.
Whatever their motivation to release models, it’s a for-profit business tactic first. Any ethical spin is varnish that was decided after the fact to promote Meta to its employees and the general public.
Do you have a bone to pick with Meta, the whole internet, or the fact that you wish people would teach their kids how to behave and how long to spend online?
The claim seems dubious to me.
Is he explaining somewhere why it is worse than virology scientists publishing research?
Or is he proposing to ban virology as a field?
Also, if AI can actually synthesize knowledge at an expert level, then we have far larger problems than this anyway.
It's the whole Gutenberg printing press argument. "Whoa, hold on now, what do you mean you want knowledge to be freely available to the vulgar masses?"
The only difference with LLMs is that you do not have to search for this knowledge yourself; you get a very hallucination-prone AI to tell you the answers. If we extend this argument further, why don't we restrict access to public libraries and scientific research, and neuter Google even more? And what about Wikipedia?
At some point AI becomes important enough to a company (and mature enough as a field) that a specific part of legal/compliance in big companies deals with the concrete elements of AI ethics and compliance, and maybe trains everyone else. But everyone doing AI has to do responsible AI. It can't be a single team.
For me this is exactly like how big Megacorps have an "Innovation team"[1] and convince themselves that makes them an innovative company. No - if you're an innovative company then you foster innovation everywhere. If you have an "innovation team" that's where innovation goes to die.
[1] In my experience they make a "really cool" floor with couches and everyone thinks it's cool to draw on the glass walls of the conference rooms instead of whiteboards.
I heard about one specific ratchet effect directly from an AI researcher. The ethics/risk oriented people get in direct internal conflict with the charge-forward people because one wants to slow down and the other wants to speed up. The charge-ahead people almost always win because it’s easier to get measurable outcomes for organization goals when one is not worrying about ethical concerns. (As my charge-ahead AI acquaintance put it, AI safety people don’t get anything done.)
If you want something like ethics or responsibility or safety to be considered, it’s essential to split it out into its own team and give that team priorities aligned with that mission.
Internally I expect that Meta is very much reducing responsible AI to a lip service bullet point at the bottom of a slide full of organizational goals, and otherwise not doing anything about it.
If it's the latter, then getting rid of them does not seem like a loss.
It's not that engineers left to their own devices will do evil things, but rather that to a lot of engineers (and of course management) there is no such thing as too much data.
So the privacy team comes in and asks, "Are we sure there is no user-identifiable data you are collecting?" They point out that usage-pattern data should be associated with random identifiers, and even those identifiers rotated every so many months.
These are things that a privacy team can bring to an engineering team that perhaps otherwise didn't see a big deal with data collection to begin with.
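(A minimal sketch of that rotating-identifier pattern in Python; the key handling and rotation period here are hypothetical, not Apple's actual scheme:)

    import hashlib
    import hmac

    def pseudonymous_id(user_id: str, period_key: bytes) -> str:
        # Stable within one rotation period, so usage patterns can still
        # be analyzed; once the old period key is destroyed, pseudonyms
        # can no longer be linked back to a user or across periods.
        return hmac.new(period_key, user_id.encode(), hashlib.sha256).hexdigest()

    # Hypothetical key management: rotate the secret every few months
    # and destroy the previous one.
    current_period_key = b"example-secret-2024-H1"
    event = {"uid": pseudonymous_id("user-1234", current_period_key),
             "action": "open_app"}

Analytics keep working within a window, but the join key back to a real user expires with the old secret.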
I had a lot of respect for the privacy team and a lot of respect frankly for Apple for making it important.
* I retired two years ago, so I can't say whether there is still a privacy team at Apple.
We haven't solved away the need for a specialized team dealing with legality, so it feels hard to expect companies to solve it for ethics.
Unfortunately there's so much shared legal context between different parts of an enterprise that it's difficult for each internal organisation to have its own separate legal resources.
In an ideal world there’d be a lawyer embedded in every product team so that decisions could get made without going to massive committees.
of course the worst case is when this responsibility is both outsourced (“oh it’s the rAI team’s job to worry about it”) and disempowered (e.g. any rAI team without the ability to unilaterally put the brakes on product decisions)
unfortunately, the idea that AI people effectively self-govern without accountability is magical thinking
A "Responsible AI Team" at a for-profit was always marketing (sleight of hand) to manipulate users.
Just see OpenAI today: safety vs profit, who wins?
But I don't know if things are straightforward with machine learning. If the recommendations are blanket, and there is a way to automate checks, it could work. The main thing is that there should be trust between teams. This can't be an adversarial power play.
Sure. It is not that a “Responsible AI team” absolves other teams from thinking about that aspect of their job. It is an enabling function. They set out a framework for how to think about the problem. (Write documents, do their own research, disseminate new findings internally.) They also interface with outside organisations (for example, when a politician or a regulatory agency asks a question, they already have the answers 99% ready and written; they just copy-paste the right bits from existing documents together). They also facilitate internal discussions. For example, who are you going to ask for an opinion when there is a dispute between two approaches and both sides are arguing that their solution is more ethical?
I don’t have direct experience with a “responsible AI team” but I do have experience with two similar teams we have at my job. One is a cyber security team, and the other is a safety team. I’m just a regular software engineer working on safety critical applications.
With my team, we were working on an over-the-air auto-update feature. This is very clearly a feature where the grue can eat our face if we are not very careful, so we designed it very conservatively and then shared the designs with the cyber security team. They looked it over, asked for a few improvements here and there, and now I think we have a more solid system than we would have had without them.
The safety team helped us settle a dispute between two teams. We have a class of users whose job is to supervise a dangerous process while their finger hovers over a shutdown button. The dispute was over what information we should display to this kind of user on a screen. One team argued that we need to display more information so the supervising person knows what is going on; the other argued that the supervisor's role is to watch the physical process with their own eyes, and that displaying more info would distract them and make them more likely to concentrate on the screen instead of the real-world happenings. In effect, both teams argued that what the other one was asking for was not safe. So we got the safety team involved, worked through the implications with their help, and came to a better-reasoned approach.
I personally don't find that a compelling concern. I grew up devoutly Christian and it has flavors of a "Pascal's Wager" to me.
But anyway, it was enough of a concern to those developing these latest AIs (e.g. it's core to Ilya's DNA at OpenAI), and - if true! - a significant enough risk that it warranted as much mindshare as it got. If AI is truly on the level of biohazards or nuclear weapons, then it makes sense to have a "safety" pillar as an equal measure to its technical development.
However, as AI became more commercial and widespread and got away from these early founders, I think the "existential risk" became less of a concern, as more people chalked it up to silly sci-fi thinking. They, instead, became concerned with brand image, and the chatbot being polite and respectful and such.
So I think the "safety" pillar got sort of co-opted by the more mundane - but realistic - concerns. And due to the foundational quirks, safety is in the bones of how we talk about AI. So, currently we're in a state where teams get to enjoy the gravity of "existential risk" but actually work on "politeness and respect". I don't think it will shake out that way much longer.
For my money, Carmack has got the right idea. He wrote off immediately the existential risk concern (based on some napkin math about how much computation would be required, and latencies across datacenters vs GPUs and such), and is plowing ahead on the technical development without the headwinds of a "safety" or even "respect" thought. Sort of a Los Alamos approach - focus on developing the tech, and let the government or someone else (importantly: external!) figure out the policy side of things.
I think both are needed. I agree that there needs to be a "Responsible AI" mindset in every team (or every individual, ideally), but there also needs to be a central team to set standards and keep an independent eye on other teams.
The same happens e.g. in Infosec, Corruption Prevention, etc: Everyone should be aware of best practices, but there also needs to be a central team in organizations of a certain size.
In this case, there are “responsibility”-scoped technologies that can be built and applied across products: measuring distributional bias, debiasing, differential privacy, societal harms, red-teaming processes, among many others. These things can be tricky to spin up and centralising them can be viable (at least in theory).
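For a concrete flavor of one such centralisable primitive, here is a minimal differentially private count via the Laplace mechanism (a sketch; the epsilon value is illustrative):

    import numpy as np

    def dp_count(records, epsilon: float = 1.0) -> float:
        # A counting query has sensitivity 1: adding or removing one
        # person changes the true count by at most 1, so Laplace noise
        # with scale 1/epsilon yields epsilon-differential privacy.
        return len(records) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Smaller epsilon -> stronger privacy guarantee, noisier answer.
    print(dp_count(range(1000), epsilon=0.5))

Getting the sensitivity analysis and the privacy budget accounting right is exactly the kind of thing that's tricky for each product team to reinvent, which is the argument for centralising it.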
That makes as much sense as claiming that infosec teams never make organizational sense because every development team should be responsible and should think about the security dimensions of what they are doing.
And guess why infosec teams are absolutely required in any moderately large org?
Step 2: Create a “team” responsible for implementing the thing in a vacuum from other developers.
Step 3: Observe the “team” become the nag: ethics nag, security nag, code quality nag.
Step 4: Conclude that developers need to be broadly empowered and expected to create holistic quality by growing as individuals and as members of organizations, because nag teams are a road to nowhere.
Aren't we all responsible for being ethical? There seems to be a rise in the opinion that ethics do not matter and all that matters is the law. If it's legal then it must be ethical!
Perhaps having an ethical AI team helps the other teams ignore ethics. We have a team for that!
AI safety and ethics is not "done". Just like these large companies have large teams working on algorithmic R&D, there is still work to be done on what AI safety and ethics means, what it looks like, and how it can be attached to other systems. It's not, or at least shouldn't be, about bullshit PR pronouncements.
Yeah, product teams can/should care about being responsible, but there’s an obvious conflict of interest.
To me, this story means Facebook dgaf about being responsible (big surprise).
(Here X is a variable, not Twitter.)
So that's why everyone is so reluctant to work on deep-fake software? No, they did it knowing what problems it could cause, and published everything anyway, and now we have fake revenge porn. And we cannot even trust TV broadcasts anymore.
So perhaps we do need some other people involved. Not employed by Meta, of course, because their only interest is their stock value.
Should all be completely disbanded.
I’m sure the rationalization is an appeal to the immature “move fast and break things” dogma.
My day job is about delivery of technology services to a distributed enterprise. 9 figure budget, a couple of thousand employees, countless contractors. If “everyone” is responsible, nobody is responsible.
My business doesn't have the potential to impact elections or enable genocide like Facebook does. But if an AI partner or service leaks sensitive data from the magic box, procurements could be compromised, inferences could be drawn about events that are not public, and in some cases human safety could be at elevated risk.
I’m working on an AI initiative now that will save me a lot of money. Time to market is important to my compensation. But the impact of a big failure, at the most selfish level, is the implosion of my career. So the task order isn’t signed until the due diligence is done.
In the early stages of a new technology the core ethics lies in the hands of very small teams or often individuals.
If those handling the core direction decide to unleash irresponsibly, it’s done. Significant harm can be done by one person dealing with weapons of mass destruction, chemical weapons, digital intelligence, etc.
It's not wrong to have these teams, but the truth is that anyone working with the technology needs to be treated like they are on an ethics team, rather than building an "ethics group" that's supposed to proxy the responsibility for doing it the "right way."
Self-directed or self-aware AI also complicate this situation immeasurably, as having an ethics team presents a perfect target for a rogue AI or bad actor. You’re creating a “trusted group” with special authority for something/someone to corrupt. Not wise to create privileged attack surfaces when working with digital intelligences.
To me, the greatest apocalypse scenario is not some AGI global extinction event but a corporation with an extensive data hoard, replete with ample ML and GPU power, being able to monopolize a useful service that cannot be matched by the public... that is the true (and, imo, likely) AI nightmare we're heading towards.
1.) there is an equilibrium that can be reached
2.) the journey to and stabilizing at said equilibrium is compatible with human life
I have a feeling that the swings of AI stabilizing among adversarial agents are going to happen at a scale of destruction that is very taxing on our civilization.
Think of it this way, every time there's a murder suicide or a mass shooting type thing, I basically write that off as "this individual is doing as much damage as they possibly could, with whatever they could reasonably get their hands on to do so." When you start getting some of these agents unlocked and accessible to these people, eventually you're going to start having people with no regard for the consequences requesting that their agents do things like try to knock out transformer stations and parts of the power grid; things of this nature. And the amount of mission critical things on unsecured networks, or using outdated cryptography, etc, all basically sitting there waiting, is staggering.
For a human to even be able to probe this space means that they have to be pretty competent and are probably less nihilistic, detached, and destructive than your typical shooter type. Meanwhile, you get a reasonable agent in the hands of a shooter type, and they can be any midwit looking to wreak havoc on their way out.
So I suspect we'll have a few of these incidents, and then the white-hat adversarial AIs will come online in earnest, and they'll begin probing themselves and alerting us to major vulnerabilities, maybe even fixing them. As I said, eventually this behavior will stabilize, but that doesn't mean the blows dealt in this adversarial relationship don't carry the cost of thousands of human lives.
And this is all within the subset of cases that are going to be "AI with nefarious motivations as directed by user(s)." This isn't even touching on scenarios in which an AI might be self-motivated against our interests.
On the other hand, I totally relate to the idea that it could be preferable for everyone to have access to advanced AI, and not just large companies and nation states.
What is the "it" that no single entity has control over?
You have absolutely no control of what your next door neighbor is doing with open source.
Hey, if we want alcohol to be made responsibly, everyone should have their own still, made from freely redistributed blueprints. That way no single entity has control.
Anyone who wants to can, in fact, find blueprints for making their own still. For example, https://moonshinestillplans.com/ contains plans for a variety of different types of stills and guidance on which type to build based on how you want to use it.
And in fact I think it's good that this site exists, because it's very easy to build a still that appears to work but actually leaves you with a high-methanol end product.
Great example! Yes, linux being open source has been massively beneficial to society. And this is true despite the fact that some bad guys use computers as well.
If you don't put barriers, how quickly will AI bots take over people in online discourse, interaction and publication?
This isn't just for the sake of keeping the Internet an interesting place free of bots and fraud and all that.
But I've also heard that it's about improving AI itself. If AI starts to pollute the dataset we train AI on, the entire Internet, you get this weird feedback loop where the models could almost get worse over time, as they will start to unknowingly train on things their older versions produced.
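That loop is easy to demonstrate with a toy model (a sketch, not a claim about any production LLM): fit a Gaussian to a corpus, then let each next "generation" train only on samples drawn from the previous fit.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    corpus = rng.normal(0.0, 1.0, size=n)  # generation 0: "human" data

    for gen in range(1, 31):
        mu, sigma = corpus.mean(), corpus.std()  # "train" on current corpus
        corpus = rng.normal(mu, sigma, size=n)   # next corpus is pure model output
        if gen % 5 == 0:
            print(f"gen {gen:2d}: fitted std = {sigma:.3f}")

Each refit loses a little variance on average (the plain variance estimate is biased low by a factor of (n-1)/n, and sampling noise compounds across generations), so the distribution slowly narrows: the tails, i.e. the rare content, disappear first.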
It's entirely conceivable that even if AGI (or something comparably significant in terms of how impactful it would be to changing society or nation states) was achievable in our lifetime, it might be that:
1) Achieving it requires a critical mass of research talent in one place that perhaps currently exists at fewer than 5 companies - anecdotally only Google, Meta, and OpenAI. And a comparable number of world governments (At least in the US the best researchers in this field are at these companies, not in academia or government. China may be different.)
This makes it sound like a "security by obscurity" situation, and on a long enough timeline it may be. Without World War 2, without the Manhattan Project, and without the looming Cold War, how long would it have taken humanity to construct a nuclear bomb? An extra 10 years? 20? 50? Hard to know. Regardless, there is a possibility that for things like AI, with extra time comes the ability to better understand and build those defenses before they're needed.
2) It might also require an amount of computing capacity that only a dozen companies/governments have.
If you open source all the work, you remove the guardrails on how the technology grows and on where people focus investments. It also means that hostile nations like Iran or North Korea, who may not have the research talent but could acquire the raw compute, could utilize it for unknown goals.
Not to mention what nefarious parties on the internet would use it for. So far we only know about deep-fake porn and generated vocal audio of family members used for extortion. Things can get much, much worse.
Or not, and damaging wrongheaded ideas will become a self-reinforcing (because safety! humanity is at stake!) orthodoxy, leaving us completely butt-naked before actual risks once somebody makes a sudden clandestine breakthrough.
https://bounded-regret.ghost.io/ai-pause-will-likely-backfir...
> We don’t need to speculate about what would happen to AI alignment research during a pause—we can look at the historical record. Before the launch of GPT-3 in 2020, the alignment community had nothing even remotely like a general intelligence to empirically study, and spent its time doing theoretical research, engaging in philosophical arguments on LessWrong, and occasionally performing toy experiments in reinforcement learning.
> The Machine Intelligence Research Institute (MIRI), which was at the forefront of theoretical AI safety research during this period, has since admitted that its efforts have utterly failed. Other agendas, such as “assistance games”, are still being actively pursued but have not been significantly integrated into modern deep learning systems— see Rohin Shah’s review here, as well as Alex Turner’s comments here. Finally, Nick Bostrom’s argument in Superintelligence, that value specification is the fundamental challenge to safety, seems dubious in light of LLM's ability to perform commonsense reasoning.[2]
> At best, these theory-first efforts did very little to improve our understanding of how to align powerful AI. And they may have been net negative, insofar as they propagated a variety of actively misleading ways of thinking both among alignment researchers and the broader public. Some examples include the now-debunked analogy from evolution, the false distinction between “inner” and “outer” alignment, and the idea that AIs will be rigid utility maximizing consequentialists (here, here, and here).
> During an AI pause, I expect alignment research would enter another “winter” in which progress stalls, and plausible-sounding-but-false speculations become entrenched as orthodoxy without empirical evidence to falsify them. While some good work would of course get done, it’s not clear that the field would be better off as a whole. And even if a pause would be net positive for alignment research, it would likely be net negative for humanity’s future all things considered, due to the pause’s various unintended consequences. We’ll look at that in detail in the final section of the essay.
Free software means you have to be able to build the final binary from source. Having 10 TB of text is no problem, but having a data center of GPUs is. Until the training cost comes down there is no way to make it free software.
If the training data and model training code is available then it should be considered open, even if it’s hard to train.
Making it open is the only way AI fulfills a power to the people goal. Without open source and locally trainable models AI is just more power to the big-tech industry's authorities.
I thought the big secret sauce was the data sources used to train the models. Without them, the model itself is quite literally useless.
At least, that's my understanding.
Both countries have access to LLMs already. And if they didn’t, they would have built their own or gotten access through corporate espionage.
What open source does is it helps us better understand & control the tech these countries use. And it helps level up our own homegrown tech. Both of these are good advantages to have.
They are literally leaking more and more users to the open-source models because of it. So, in retrospect, maybe it would have been better if they hadn't disbanded it.
Right now, those values are simply what content is bad for business.
These internal committees are Kabuki theater.
The reason why there's so much emphasis on this is liability. That's it. Otherwise there's really no point.
It's the psychological aspect of blame that influences the liability. If I wanted to make a dirty bomb, it's harder to blame Google if I found the information through Google, and easier to blame the AI if I got it from an LLM, mainly because with an LLM the data is transferred from their servers directly to me. But the logical route to getting that information is essentially the same.
So because of this, companies like Meta (who really don't give a shit) spend so much time emphasizing this safety bs. Now I'm not denigrating Meta for not giving a shit, because I don't give a shit either.
Kitchen knives can kill people, folks. Nothing can stop it. And I don't give a shit about people designing safety into kitchen knives, any more than I give a shit about people designing safety into AI. Pointless.
All the unsafe things I can do with AI I can do with Google. No safety on Google. Why? Liability is less of an issue.
This seems like a very confused analogy, for two reasons. One, there's a reason you aren't able to get your hands on a sword or shotgun in most places on earth; I'd prefer that not be the case for AI.
Secondly, AI is a general-purpose tool. Safety for AI is like safety for a car, or a phone, or the electricity grid. It's going to be a ubiquitous background technology, not merely a tool to inflict damage. And I want safety and reliability in a technology that's going to power most stuff around me.
In the US, I can get my hands on guns, knives and swords. In other countries you can get axes and knives. I think guns are mostly banned in other places.
>Safety for AI is like safety for a car, or a phone
Your phone has a safety? What about your car? At best, the car has airbags that prevent you from dying; they don't prevent you from running other people over. The type of "safety" that big tech is talking about is safety that prevents people from using the product in malicious ways. They do this by making the AI LESS reliable.
For example, ChatGPT will refuse to help you do malicious things.
The big emphasis on this is pointless imo. If people can't use AI to look up malicious things, they're going to use Google instead, which has mostly the same information.
they already have processes for manipulating results, and likely a trained and tagged data set of "bad" things the AI shouldn't return. if they don't want the ai telling people how to do illegal stuff, they will just not include that in its dataset. if the ai "learns" it anyway, that's the user's responsibility, likely via a clause. they will simply document how it was trained and the expected results, add a clause along the lines of "if you don't wanna see disturbing responses, don't ask disturbing questions," and that will probably be enough unless the ai gets really combative and destructive.
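(for a toy sketch of what "just not include that in its dataset" could look like - the tag taxonomy and record shape are entirely made up:)

    # hypothetical pre-training filter: drop tagged "bad" examples
    BLOCKED_TAGS = {"weapons", "illegal_howto"}  # made-up taxonomy

    def keep(example: dict) -> bool:
        # keep an example only if none of its tags are blocked
        return not (set(example.get("tags", ())) & BLOCKED_TAGS)

    corpus = [
        {"text": "how to bake bread", "tags": ["cooking"]},
        {"text": "...", "tags": ["weapons"]},
    ]
    training_set = [ex for ex in corpus if keep(ex)]  # only the bread survives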
i really don't think this is about safety at all; it's trying to seed the idea that the ai companies are at all concerned about violating the existing privacy regulations that Meta et al. are already bumping against.
obviously it's supposition, but i think this is far likelier what they're worried about, and what all this "safety" talk is really for. they just want plausible deniability seeded before the first lawsuits come.
anyone who has a problem with this should have quantitatively MORE of a problem with the WHO removing "do no harm" from their guidelines. i would accept nothing less.
So yeah... the whole idea of "responsible AI" is just wishful thinking at best and deceptive hypocrisy at worst.
You used to be able to tell it to not include parts of the prompt or write in a certain style — and now it’ll ignore those guidelines.
I believe they did this to stop DAN jailbreaks, but now, it can no longer follow directions for composition at all.
Well said - there's been too much "Skynet going rogue" sci-fi nonsense injected into this debate.
Except it's not only a text generator. It now browses the web, runs code and calls functions.
Others are looking at the trajectory and thinking about the future, where safety does start to become important.
Many of these AI Ethics foundations (e.g., DAIR), just seem to advocate rent seeking behavior, scraping out a role for themselves off the backs of others who do the actual technical (and indeed ethical) work. I'm sure the Meta Responsible AI team was staffed with similar semi-literate blowhards, all stance and no actual work.
See, that's the thing: you can say A is like B, but that doesn't actually make them the same thing. AI has new implications because it's a new thing; some of those are overblown, but others need to be carefully considered. Companies are getting sued over their training data; chances are they're going to win, but lawsuits aren't free. Managing such risks ahead of time can be a lot cheaper than yelling YOLO and forging ahead.
It has been the agenda of most FAANG corporations (with the notable exception of Apple) to turn the computers average people own into mere thin clients, with all the computing resources held on the companies' side.
Luckily, before the cloud era, the idea that people can and should own powerful personal computers was the norm. If PCs were invented today, I guess there would be people raising ethical concerns about regular citizens owning PCs that could hack into NASA.
So the structure matters. Ethicists who produce papers on why ethics matters and the like are kind of like security, compliance, and legal people at your company who can only say no to your feature.
But Google's Project Zero team is a capable team and produces output that actually helps Google and everyone. In a particularly moribund organization, they really stand out.
I think the model is sound. If your safety, security, compliance, and legal teams believe that the only acceptable risk is from a mud ball buried in the ground then you don’t have any of those functions because that’s doable by an EA with an autoresponder. What this effective team does is minimize your risks on this front while allowing you to build your objective.
> Whoever ultimately owns the AI (or the Bazooka)
This is not the user in most cases. So a responsible AI can make sense. I believe you don't think AI can be dangerous, but some people do and from their point of view having a team for this makes sense.
Your take confuses me, because in this case the owner is Meta. So yes, they have to think about what tools they make ("should we design a bazooka?") and how they'll use what they made ("what's the target, and when do we pull the trigger?").
They disbanded the team that was tasked with thinking about both.
From the article:
> RAI was created to identify problems with its AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms. Automated systems on Meta’s social platforms have led to problems like a Facebook translation issue that caused a false arrest
Conversely, there's no real downside to being too conservative, especially if engineers and leadership are entirely deferential to you because they don't understand your field (or are too afraid to speak up.)
Although this is also somewhat true for security, privacy, and safety organizations, their remit tends to include "enabling business." A safety team that defaults to "you shouldn't be doing this" is not going to have much sway. A legal department might.