And will you still be able to tell in five years, when this tech has had 20 new iterations, each addressing the very tells that right now let you notice it's fake?
I know, I know, photoshop and fake pictures have always been around. But now, everyone can do it in 30 seconds. That changes things.
No, they won't. There's not a chance.
Yesterday we were visiting an elderly relative, who had received a video in a WhatsApp family group chat of an AI-generated child saying a prayer. I had to repeatedly tell them it was not a real child in the video. To me it was obvious, since the whole body was static with only the face moving slightly, and the child talked in a way that made it clear it was text-to-speech.
There is sadly no hope for the elderly generation. If your parents or grandparents receive a video with your face asking them for money, they will believe it. Even if you stand right in front of them in real life, telling them the video on their device is not real, they will believe it more than you. They'd sooner consider you the fake.
My mother thought it was a bad idea and didn't want to get involved.
It would have been incredible and I would absolutely have fallen for it.
I think it's a matter of our 'priming', what we're used to: If you're used to digital recordings that seem real to actually be real, you'll be tripped up.
If you have no such expectation, you're less inclined to buy into the reality of something that is shown to you. The idea that you have to believe some version of the events presented to you is a false dichotomy: you can also withhold belief in every version until you get more substantial proof.
Well, my parents and grandparents are dead, so probably not.
But, when they were alive, they had enough experience of the real world that they would be suspicious of bandages worn outside of clothes, the printed document wrapped around the arm, and the gauze that, rather than wrapping fully around the head, seamlessly merges into the forehead.
But, more importantly, it would be trivial to do what all the really successful fake images of this war and most previous ones have done: just take a real picture from a real conflict (sometimes even the same one) and lie about the context. Lying with pictures is not a novel threat.
Would they have noticed them at the quick glance that this image invites? Assume an astute person but with media literacy based on:
* newspapers (where fake photos are possible but more primitive), and
* TV (where the image is moving and appears on screen only long enough for you to notice it, not to examine it closely), and
* also possibly from the tiny screen of a janky smartphone mandatorily plugged into the dark forest of social media.
At a glance, this hypothetical person would likely see something like "a guy was hurt in a war, he is now miserable in a hospital, here he is, _there is no other information in this picture_, read on". And then they are more likely to pay a little more attention to the text, because they just saw an image containing things that prime them for experiencing compassion.
Leaving aside that experience becomes outdated, and that the target audience for propaganda also includes inexperienced people and those who never developed good thinking habits in the first place: elderly people are also more likely to have degraded senses, and may be less interested in playing "spot the 10 differences" than a young digital native absolutely fascinated with this fun new technological development.
I for one spotted none of the things that were wrong with the photo, even though some of them were really obvious. But I didn't look twice until seeing them pointed out in the replies. And even then, I thought "maybe it's a scratch that only takes a large band-aid and not a full bandage; what the hell do I know about bandaging a head anyway" (defaulting to trusting the image and excusing inconsistencies). And since the context was already established by the poster, I didn't question it. Besides, now that many camera phones have "AI smoothing filters", which blur the boundary even more, real photos can look AI-generated. The overall "AI smoothness" that I noticed about the image (where it's the "notional" resolution that is degraded, not the rasterization) might be completely lost on people who are visually impaired, or who just don't stay up to date with the novelties of image processing.
So I fall back to the same heuristic that your grandparents, bless them, probably also used: if it's on the news, it's somewhat fake by definition. And how much attention you should pay to it depends on how much your interests align with those of whoever's paying for you to see it.
Ofc, becoming stuck in a local-optimum bubble of fake perceptions that confirm each other, and gaining a "political identity", is nothing new, either. Our generation just got blindsided by the idea that computers and the Internet would somehow make this fuckery less necessary. Can't wait to see what scams will target me when I'm old. My elderly parents did fall for a fake phone bill because the guy brought it in person: once again, no AI necessary, and no viable way for AI to help with this problem, either. (AI doorbells recognizing scammers, I guess? But that could turn real dystopic real fast.)
Also the particular image in this tweet doesn't seem like a great example of the power of AI propaganda. Unless I'm missing something, it's a fairly generic image of a nameless person...the propaganda is all in the story attached to it. The same story attached to a stock photo seems like it would have virtually the same impact.
Doesn't matter. The best propaganda isn't fake, but truthful. It emphasizes true stories that further its goal, and de-emphasizes or buries stories that hinder it. E.g. https://ifamericansknew.org/media/nyt-report.html
The relevant statistics I can think of (crime/violence etc.) show nefarious acts decreasing worldwide. So, if there is something linking them and tech, it may be the reverse of what popular commentary seems to expect.
This is like how I could smuggle two “the”s into the previous paragraph without most people noticing; we skim more often than we realize.
I couldn't tell the photo was fake from a first or second glance; only after I had started reading the posts and taken a third and fourth look at it could I sort of notice some uncanniness.
But even then, had no one told me the image was fake, I would certainly have continued to have at least some doubts about whether it was fake at all.
Does it? I've seen limited amounts of text-based propaganda / misinformation, and anyone can produce it in 30 seconds, yet somehow we're not drowning in a flood of it. Society got better at verifying textual information, even though some individuals remain susceptible. In my chat groups, it is always the same 2 people who fall for, and propagate, scams/snake-oil/misinformation regardless of the medium.
Keep hammering them with facts, details, allegations, baseless claims, even slivers of truth. The average person doesn't have the time, interest, or capabilities to dig through all of those claims, and will eventually settle on consuming the facts they want to hear. Keep em too confused to do anything except what feels right to their gut.
Unsurprisingly, this was pioneered by the Soviets, and is heavily used by the Russians, both in foreign agit-prop and, just as heavily, on their domestic audience.
See also: The Russian "Firehose of Falsehood" Propaganda Model https://www.rand.org/pubs/perspectives/PE198.html
What's interesting about the soldier image is it has the same emotional impact at first glance as if it were real, before your critical faculties engage to sort it, which means its effect has already occurred. If your feeds are full of indignation and outrage, it doesn't matter whether it's real or not, you're going to have a physical and uncritical association with the sensations it creates. It's straight Pavlovian response. Your perceptions literally come through a feed.
Maybe we recognize how thoroughly propagandized we are already, and these examples are a merciful uncanny valley that can let us step back and really question the shit we are letting pile up in our psyches. Even as a self check, do a word association exercise and then ask how closely your associations reflect objective or even an ideal reality. I do these occasionally to test the quality of my beliefs, the results are reliably poor. Apprehending anything close to reality at all requires constant vigilance and asking how you know the things you know, and we're just at t=6mos, what does t=36 look like?
Wars are strategic actions by nations. But humans generally will not engage in mass killing for strategic reasons. They need moral reasons. So propaganda is necessary for warfare in order to frame war in a moral manner. The enemy or enemy leaders are depicted as evil or inhuman. Or their most despicable acts are emphasized to create a sense of 'morally-justified' hatred or the idea that they must be stopped or punished at all costs. Such as killing millions of people if necessary or destroying a country.
Actually, if it serves their interests and especially if all of their neighbors are not protesting, humans will go along with pretty much anything. But you do need to at least give them a cover story.
Technology should theoretically be able to help reduce the influence of propaganda through things like new types of decentralized news distribution.
Why and how? Maybe I don’t know what “decentralized news distribution” means, but whether or not it’s decentralized seems irrelevant to me. People pick sources to follow and share news with others; if those sources are producing propaganda, then people are amplifying propaganda.
And not just multiple angles of something, but images distributed through the entire crowd while it's happening. Say they show a fight: does the rest of the crowd move appropriately to make room for it? If they show a politician speaking, does the crowd surrounding the closer videos cheer in time with the distant crowd, or are the videos perhaps spliced together?
Forensics will get harder and only more data will give us a chance. Ultimately, analyzing the data is easier than faking it consistently and scale is our advantage.
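One cheap consistency check of the kind described above: if two phones in the same crowd really recorded the same event, their audio tracks should line up at a single, stable time offset. A minimal sketch (illustrative only; `estimate_lag` and the synthetic "cheer" signals are made up for this example, and real footage would need resampling, noise handling, and confidence scoring):

```python
import random

def estimate_lag(a, b, max_lag=200):
    """Crude cross-correlation: scan candidate sample offsets and
    return the one where track b best lines up as a delayed copy of a."""
    best_lag, best_score = 0, float("-inf")
    window = len(a) - max_lag  # compare only fully overlapping samples
    for lag in range(max_lag + 1):
        score = sum(a[n] * b[n + lag] for n in range(window))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic check: the same "cheer" captured by a second phone 40 samples later
rng = random.Random(0)
cheer = [rng.gauss(0, 1) for _ in range(3000)]
delayed = [0.0] * 40 + cheer

print(estimate_lag(cheer, delayed))  # prints 40
```

If spliced-together footage is checked this way, different segments tend to "lock" at different offsets, or at none at all, which is exactly the kind of inconsistency that's easy to detect at scale but hard to fake across a whole crowd.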
One of the hazards of browsing HN: old darlings like Twitter will be tolerated even after they become NSFW by default.
This also turns Reddit links into Libreddit/Teddit links, YouTube links into Invidious links, etc.
Basically you get to browse an Internet without intrusive pre-roll ads or outrage algorithms. I think based on your comment that this might be of interest to you.
0: https://uproar-crowned-964778.appspot.com/2023/04/24/my-dlya... via https://twitter.com/ChrisO_wiki/status/1653118082766852097?s...
The whole trend of adding stock pictures to everything only creates distractions. It's bad for the reader, good for the publisher. The pictures are either useless or misleading.
Archive of the posts which have since been deleted: https://archive.md/20230501183435/https://twitter.com/Amnest...
I don’t support it.
What I worry about is "artificial / generated consent". You read some upsetting story, and your skeptical brain holds it at arms length. Then you read commentary in a forum you trust and you see message after message of thoughtfully worded support for some position. I think reading gobs of "informed real people" commentary is far more persuasive - and subtly so - than reading an article from someone you KNOW is pushing a specific perspective.
I like to believe I'm an independent thinker, but a big part of my process is seeking out many different points of view and judging which feel well supported and well reasoned. Consensus DOES play a role in my judgement forming. If consensus is easily faked, yikes.
Ultimately that's the best most people can do, short of intensive "independent" research, which on most topics outside your personal expertise generally isn't entirely possible (even if you have good research skills, there are time limitations).
>Consensus DOES play a role in my judgement forming. If consensus is easily faked, yikes.
Even prior to widespread AI tools this was a strong method in information warfare. That's why it's detrimental not to show dislikes/downvotes in rating systems: hiding them can make a far greater consensus appear to exist where there is in fact far more disagreement on a topic.
As far as I'm concerned, the removal of those metrics is there to serve that specific purpose.
You're proposing to ban speech that, by your own admission, you do not read.
Not much makes sense in the image. Like almost every other AI-generated image, as soon as you start digging it falls apart completely:
- What's the weird arm sling thing?
- Why is there a white corner on his right side, but not the other?
- Why does he have a full-size paper sheet glued to his arm?
- The body position doesn't match the bed.
- The background is a mess; things collide and disappear in weird ways.
The light looks very artificial too.
He's the mayor of Bloodfield
Basically, it says "this was cheaper than conventional film-making techniques."
Thus, can someone with access to DALL-E 2 or similar feed it the public archives of propaganda posters [1], generate a few samples, and deliver these for a discussion here at Hacker News? It would be interesting to see what kind of white propaganda an AI and its users would generate!
Thing is, when you're arguing online with some political consultant who four years ago was convinced Joe Biden was senile (because they supported Amy Klobuchar), who is now just as convinced he's sharp as a razor and ready for another term, are they really trying to convince you with their arguments?
No, I don't think so. I think the whole point is what they're doing to themselves. Look at me, how loyal I am. I won't stop at anything to win. I don't care if my past words indict me, that was then, this is now and everything is at stake! Winners never quit! Never mind what you think, don't you feel my determination?
So it's not about the quality of the words that come out of their mouth. GPT4 can surely produce much better words, but that's not what's supposed to convince. It's the example, the example of fanatical organizational-personal loyalty, that's supposed to convince. GPT4 would need to lie and convince people it's a real person - or rather, many real people - in order for that sort of thing to work. But political consultants are already doing that at scale. It's probably not any better at lying, and even if it is, what they have is good enough.
Substitute Biden and Klobuchar for any other politicians, obviously. It's not a left/right thing.
They're signaling to the people who pay them, and to the people who might hire them in the future, not to their readers. To their readers, they're kind of anti-signaling. I mean, are many people genuinely persuaded by whoever can yell the longest? (Because that's what they're doing. They always reply with something, and that something is never an indication that the other side might have even a shred of a point. But is anyone actually persuaded by some "brick wall" posting the last word at the end of some long back-and-forth?)
Maybe a watermark?
If watermarking is the solution it needs to be applied to the legitimate content.
What I thought was going to be a massive cyber-war between Russia/Ukraine/NATO/US/China did not turn out so.
But once we have the first major cyber attack on infrastructure with an AI-based crawling weapon, such as an AI-developed STUXNET/DUQU, we will have crossed the threshold into the next information era.
This is a really dark pattern, clearly being used for deception with plausible deniability: the article is reposted with the embedded picture, everyone sees the picture and assumes it's a photo without checking, and the publisher can say "it's not us, we warned you it's generated" via a tiny line, decoupled from the picture, that nobody ever pays attention to.
Such use of generated images is extremely myopic and will backfire, or has already backfired, sowing doubt in everything they and their side write, regardless of their credibility. The case will be heavily abused by the opposing propaganda as well.
EDIT: If you haven't realized this, then change some to a lot.
The people using the image aren't doing propaganda, they're just cutting costs.
The people claiming that using the image is doing propaganda, are doing propaganda.
Tucker Carlson's texts have exposed the fact that he was lying throughout the Trump presidency. The number one news anchor on the most watched channel in the US was lying to the public and then privately texting with friends and colleagues about it. He should be shunned from public media for good now.
The internet has been full of crap for years, so my prediction is AI generated content has a burst of utility early on for bad actors, but it will quickly be normalized and cast aside like most of the garbage we wade through today.
It has brought a slew of right wing populists.
We need to stop viewing technology as neutral, with an "oh, it will all turn out nice, like the railways did" attitude. These are tools that manipulate the human psyche, not stagecoaches to be used and then disposed of.