https://www.npr.org/sections/codeswitch/2019/02/20/695941323...
You’ll find these bad ideas never really die. Look and you’ll see them across time and place: Russia, Germany, the U.S., Japan. Tyranny isn’t something accidental, exotic or mysterious. People take their eye off the ball and get clobbered with it from time to time.
I’ll always argue we’re better off with a world war than tyranny, but the whole goddamn point of the UN Charter is to prevent both. The lesson was learned. It was written down. And we’re still fucking it up again.
For decades, Nazi-adjacency has been just another insult to be hurled at the political opponents we've othered. Depending on where you are on the political spectrum, "Nazi" could be synonymous with Elon Musk. In one breath we trivialize the evil humanity is capable of inflicting upon itself. In the next breath we exclaim, "Never again!"
The American Eugenics Society rebranded itself as the "Society for Biodemography and Social Biology". Ambiguous terms like "bioethics" are used by eugenicist think tanks like the Hastings Center where explicit appeals to eugenics are undesirable. The Club of Rome evolved into the WEF. Paul Ehrlich's ideas are as popular as ever. The same eugenicist appeals for population control remain at the forefront of public discourse. Even here on HN, you will regularly find posters lamenting the impending doom of climate change; the answer, if you ask many here, is the eugenicist policy of population control.
There are other themes in parallel, but I'll try to keep it somewhat concise and less controversial.
It isn't only the "Banality of Evil", or the engineer who just wants to go home and watch Netflix after designing a killer drone. Similar authoritarian ideas are celebrated in our popular discourse. Instead of examining these ideas critically, we accuse political others, dehumanize them, and finally recast them as Nazis.
Either that or genAI will be used to publish a bunch of books telling fantasy stories about how IBM personally arrested Hitler. :)
already the AI detects criticism of itself, except its response is to shadowban you: you can continue to post, but nobody sees your opinion online.
eventually, you're "bubbled" by AIs... all your interactions online are surrounded by an AI, and you'd think you're interacting with other people when you're just AI-bubbled, so as not to disrupt the rest of the workers.
you'll still see likes and other interactions with the social media posts you leave behind, but as a flagged critic of the system, all these interactions are merely faked to keep you calm. as the AI advances you'll even see responses, retweets and other interactions... all AI driven in order to keep you busy while IBM keeps a calm overwatch over all. the end.
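The mechanism described above is easy to sketch. This is a hypothetical illustration only: the function names, post fields, and fake-count ranges are all invented, not any real platform's API.

```python
import random

def render_post_stats(post, viewer_is_flagged):
    """Show real engagement to normal users, fabricated engagement to flagged critics."""
    if not viewer_is_flagged:
        # Ordinary users see the genuine counts.
        return {"likes": post["real_likes"], "replies": post["real_replies"]}
    # A flagged critic sees plausible fake numbers so they stay calm and keep posting.
    random.seed(post["id"])  # deterministic per post, so the fake counts look stable
    return {"likes": random.randint(3, 40), "replies": random.randint(0, 5)}
```

The seed trick is the giveaway in this sketch: the fake counts never change no matter how long the post is "up", because they were never real engagement to begin with.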
even if neither of us is actually an AI, this interaction will surely aid in training some LLM in the end...
Or higher up, at the ISP level.
Targeted via DNS tunneling and all.
fudge the up/down votes to make it look like it's been seen but not reacted to.
but do you need to burn cycles on AI to keep these people engaged? if someone is spamming stuff you don't want seen, have them throw out a basic response and then shadowban or just straight-up ban them. if they're very negative bad-actor types, just give 'em the boot.
Enough frustrated people will use AI to quickly generate the code for an alternative platform to avoid this bubble system.
It will be individual platforms all the way down...oh wait.
On platforms like Facebook or YouTube, where the feed is algorithmically generated and you can't easily view a filtered list of topics (as you can on Reddit), something like this would be very easy.
The interactions don't even need to be generated by AI, it just needs to keep you seeing interactions with other people in your social status circle. And if you try to venture too far outside of that it shadow bans you.
Heck, I'd be surprised if the news feed algorithms of today don't already do something like this as a byproduct of optimising for viewership.
They'd just need to take it a bit further by preventing you from seeing viewpoints outside your circle. So taking the WWII example, people in the Nazi group would not be able to see pro-Ally content. All they'd see about Allies would be content that paints them in a bad light, and vice versa.
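The filtering rule above can be sketched in a few lines. Everything here is hypothetical for illustration: the post fields (`group`, `sentiment`) and the function are invented, and real ranking systems would use scores rather than hard rules.

```python
def filter_feed(posts, user_group):
    """Keep the user inside their viewpoint bubble: show in-group content freely,
    and out-group content only when it paints that side in a bad light."""
    visible = []
    for post in posts:
        if post["group"] == user_group:
            visible.append(post)      # in-group content passes through unchanged
        elif post["sentiment"] == "negative":
            visible.append(post)      # out-group content only if it's unflattering
    return visible

feed = [
    {"id": 1, "group": "A", "sentiment": "positive"},
    {"id": 2, "group": "B", "sentiment": "positive"},
    {"id": 3, "group": "B", "sentiment": "negative"},
]
print([p["id"] for p in filter_feed(feed, "A")])  # [1, 3]
```

Note the symmetry: a group-B user sees posts 2 and 3 and never post 1, so each side's feed shows its own side favorably and the other side only at its worst.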
[1]:https://time.com/6960587/meta-instagram-political-content-li...