Generally, it's a boring boneheaded talking point that the 1% of us actually working in AI use as a sorting hat for who else is.
Talking about sorting hats for who does and doesn't have the one-percenter AI badge isn't a super hot look, my guy (and I've veered dangerously close to that sort of thing myself; this is painful experience talking). While there's no shortage of uninformed editorializing about fairly cutting-edge stuff, the image of a small cabal of robed insiders chuckling into their cashews while swiping left and right on who gets to be part of the discussion serves neither experts nor their employers nor enthusiastic laypeople. This is especially true for "alignment" stuff, which is probably the single most electrified rail in the whole discussion.
And as a Google employee in the diffuser game by way of color theory, you guys have a "days since we over-aligned an image generation model right into a PR catastrophe" sign on the wall in the micro-kitchen, right? That looked "control vector" wacky, not "DPO with a pretty extreme negative prompt" wacky, and it substantially undermined the public's trust in the secretive mega-labs.
So as one long-time HN user and FAANG ML person to another, maybe ixnay with the atekeepinggay on the contentious AI #1 thread a bit?
But even if folks don't find that argument persuasive, I'd remind everyone that the "insiders" have a tendency to get run over by the commons/maker/hacker/technical public in this business. Linux destroying basically the entire elite Unix vendor ecosystem and ending up on well over half of mobile is a signal example: it came about (among many other reasons) because plenty of good hackers weren't part of the establishment, or were sick of the bullshit they were doing at work all day and went home and worked on the open stuff, bringing all their expertise with them. And what e.g. the Sun people were doing in the 90s was every bit as impressive, given the hardware they had, as anything coming out of a big lab today. I think LeCun did the original MNIST stuff on a Sun box.
The hard-core DRM stuff during the Napster Wars getting hacked, leaked, reverse-engineered, and otherwise rendered irrelevant until a workable compromise was brokered would be another example of how that mentality destroyed the old guard.
I guess I sort of agree that it's good people are saying this out loud, because it's probably a conversation we should have. But yikes, someone is going to end up on the wrong side of history here, and realizing how closely all of this is going to be scrutinized by that history has really motivated me to watch my snark on the topic and apologize pretty quickly when I land in that place.
When I was in Menlo Park, Mark and Sheryl had intentionally left a ton of Sun Microsystems iconography all over the place and the message was pretty clear: if you get complacent in this business, start thinking you're too smart to be challenged, someone else is going to be working in your office faster than you ever thought possible.
It's at least funny, because you're doubling down on OP's bad takes, and embarrassing yourself by trying to justify it with what you thought was brilliant research and a witty person-based argument. But you messed up. So it's funny.
Punchline? Even if you weren't wrong, it would have been trivial, while doing your research, to find out that half of DeepMind followed me this week. Why? I crapped all over Gemini this week and went viral for it.
I guess, given that, I should find it utterly unsurprising that you're also getting personal, clinging to "1%" as a class-distinction thing, and conjuring mental images of cloistered councils in robes, instead of seeing, well, people who know what they're talking about, as the other repliers to you point out.
"1%ers are when the Home Depot elites make fun of me for screaming about how a hammer is a nerfed screwdriver!"
If that's not your blog, you should probably take it off your profile?
People "actually working in AI" have all sorts of nonsense takes.
Please stop karma-bombing comments that say reasonable things on important topics. The parent is maybe a little spicy, but the GP bought a ticket to that and plenty more.
edit: fixed typo.
Is there any chance that could be what's happening, instead of it being 100% obviously a complex drama play in which OP bought a ticket to the spice?
For you that may be the case.
But the widespread popularity of ChatGPT and similar models shows that it isn't a serious impediment to adoption. And erring on the side of safety comes with significant benefits, e.g. less negative media coverage, fewer investigations by regulators, etc.