It's glib to dismiss safety concerns because we haven't all turned into paperclips yet. LLMs and image gen models are having real effects now.
We're already at a point where AI can generate text and images that will fool a lot of people a lot of the time. For every college-educated young person smugly pointing out that they aren't fooled by an image with six-fingered hands, there are far more people who had marginal media literacy to begin with and are now almost defenceless against a tidal wave of hyper-scalable deception.
We're already at a point where we're counselling elders to ignore late-night messages from people claiming to be a relative in need of an urgent wire transfer. What defences do we have when an LLM will be able to have a completely fluent, natural-sounding conversation in someone else's voice? I'm not confident that I'd be able to distinguish GPT-4o from a human speaker in the best of circumstances and I'm almost certain that I could be fooled if I'm hurried, distracted, sleep deprived or otherwise impaired.
Regardless of any future impacts on the labour market or any hypothesised X-risks, I think we should be very worried about the immediate risks to trust and social cohesion. An awful lot of people are turning into paranoid weirdos at the moment and I don't particularly blame them, but I can see things getting seriously ugly if we can't abate that trend.
Set a memorable verification phrase with your friends and loved ones. That way, if you call them out of the blue or from some strange number (and they actually pick up for some reason) and tell them you need $300 to get out of trouble, they can ask you to say the phrase, and they'll know it's you if you respond appropriately.
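In authentication terms, a family passphrase is a pre-shared secret, and the known weakness of a static secret is replay: anyone who hears it once can reuse it. The stronger digital analogue is challenge-response, where the secret itself never crosses the wire. A minimal sketch in Python (the function names and the "family secret" are illustrative, and obviously a real phone call can only approximate this):

```python
import hashlib
import hmac
import secrets

# Agreed in person, never sent over the wire.
FAMILY_SECRET = b"correct horse battery staple"

def make_challenge() -> bytes:
    """Callee picks a fresh random challenge for each call."""
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> str:
    """Caller proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Constant-time comparison against the expected response."""
    return hmac.compare_digest(respond(secret, challenge), response)

# A genuine caller can answer a fresh challenge; an eavesdropper who
# recorded an old response cannot replay it against a new challenge.
challenge = make_challenge()
assert verify(FAMILY_SECRET, challenge, respond(FAMILY_SECRET, challenge))
```

The point of the sketch is the design, not the code: because the challenge is fresh every time, phishing one response from a relative gains an attacker nothing on the next call.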
I've already done that and I'm far less worried about AI fooling me or my family in a scam than I am about corporations and governments using it without caring about the impact of the inevitable mistakes and hallucinations. AI is already being used by judges to decide how long people should go to jail. Parole boards are using it to decide who to keep locked up. Governments are using it to decide which people/buildings to bomb. Insurance companies are using it to deny critical health coverage to people. Police are using it to decide who to target and even to write their reports for them.
More and more people are going to get badly screwed over, lose their freedom, or lose their lives because of AI. It'll save time/money for people with more money and power than you or I will ever have though, so there's no fighting it.
Alternatively, while it may be difficult to trick you directly, phishing the passphrase from a more naive loved one or bored coworker and then parroting it back to you is also a possibility, etc.
Phone scams are no joke and this is getting past the point where regular people can be expected to easily filter them out.
1. something you have
2. something you know
3. something you are
Strong authentication combines at least two of these factors.
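The "something you have" factor above is, in practice, usually an authenticator app generating time-based one-time passwords (TOTP, RFC 6238). A minimal sketch in Python, using only the standard library (the secret below is the RFC's published test value, not one to reuse):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890" encoded in base32;
# at t=59 seconds the 6-digit code is 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # prints 287082
```

Because the code is derived from the current 30-second window, a stolen code expires almost immediately, which is exactly the property a static family passphrase lacks.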
I only say this sort of jokingly. Three out of four of my parents/in-laws are questionably literate on the internet. It wouldn't take much of a "me bot" for them to start telling it the stories of our childhood, and then that information is out there.
As technology advances those proportions will be boosted. Seems inevitable.
We went from living in villages where everyone knew each other to living in big cities where almost everyone is a stranger.
We went from photos being relatively reliable evidence to digital photography where anyone can fake almost anything and even the line between faking and improving is blurred.
We went from mass distribution of media being a massive capital expenditure that only big publishers could afford to something that is free and anonymous for everyone.
We went from a tiny number of people in close proximity being able to initiate a conversation with us to being reachable for everyone who could dial a phone number or send an email message.
Each of these transitions caused big problems. None of these problems have ever been completely solved. But each time we found mitigations that limit the impact of any misuse.
I see the current AI wave as yet another step away from trusting superficial appearances to a world that requires more formal authentication protocols.
Passports were introduced long ago but never properly transitioned into the digital world. Using some unsigned PDF allegedly representing a utility bill as proof of address seems questionable as well. And the way in which social security numbers are used for authentication in the US is nothing short of bizarre.
So I think there is some very low-hanging fruit in terms of authentication and digital signatures. We have all the tools to deal with the trust issues caused by generative AI. We just have to use them.
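Those tools really do already exist. As one concrete illustration of the "signed utility bill" idea, here is a sketch of Ed25519 document signing using the third-party `cryptography` package (an assumption on my part; the issuer name and bill contents are made up):

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g. a utility company) holds a private key
# and publishes the corresponding public key.
issuer_key = Ed25519PrivateKey.generate()
public_key = issuer_key.public_key()

bill = b"Account 12345: proof of address, 2024-05-01"
signature = issuer_key.sign(bill)

# Anyone with the public key can check the document is untampered.
try:
    public_key.verify(signature, bill)
    print("signature valid")
except InvalidSignature:
    print("signature INVALID")

# An edited document fails verification.
try:
    public_key.verify(signature, bill + b" (edited)")
    print("forgery accepted")
except InvalidSignature:
    print("forgery rejected")
```

Unlike an unsigned PDF, a forged or edited copy is mechanically detectable by anyone, which is the whole point: the hard part is deployment and key distribution, not cryptography.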
The best we can ever hope to do is find mitigations as and when problems arise.
That's massive fast change, and we haven't culturally caught up to any of it yet.
This happened from the 15th century onward. By the 19th century more than half the UK population could read and write.
That's how I look at where we're going with AI. Plunge into the new arms race first and build the capacity, then later figure out the treaties and safeguards which we hope will keep our society safe (and by that I don't mean a Skynet-like AI-powered destruction, but an upheaval of our society potentially as impactful as the industrial revolution).
Humanity will get through it, I'm sure. But I'm not confident it will be without a lot of pain and suffering for a large percentage of people. We also managed to survive two world wars in the last century, but it cost the lives of 100 million people.
So the question, I think, is how do we reclaim trust in a world where every kind of content can be convincingly faked? And I think the answer is by rebuilding trust between users such that we actually have reason to simply trust that the users we're interacting with aren't lying to us (and that also goes for building trust in the platforms we use). In my mind, that means a shift to small federated and P2P communication, since both of these enable both the users and the operators to build the network around existing real-world relationships. A federation network can still grow large, but it can do so through those relationships rather than giving institutional bad actors as easy an entrance as anyone else.
Isn't it rather brilliant that you can just ask questions of competent people in some subreddit without first becoming part of that particular social circle?
It could also reintroduce geographical exclusion based on the rather arbitrary birth lottery.
This is a problem with all technology. The mitigations are like technical debt, but with a difference: you can fix technical debt. Short of societal collapse, mitigations persist, the impacts ratchet upward, and they disproportionately affect people at the margin.
There's an old not-quite-joke that if civilization fell, a large percentage of the population would die of the effects of tooth decay.
The nature of this tech itself is probably what is getting most people - it looks, sounds and feels _human_ - it's very relatable and easy for a non-tech person to understand it and thus get creeped out. I'd argue there are _far_ more dangerous technologies out there, but no one notices and / or cares because they don't understand the tech in the first place!
The "yet" is carrying a lot of weight in that statement. It is now five years since the launch of GPT-2, three years since the launch of GPT-3 and less than 18 months since the launch of ChatGPT. I cannot think of any technology that has improved so much in such a short space of time.
We might hit an inflection point and see that rate of improvement stall, but we might not; we're not really sure where that point might lie, because there's likely to still be a reasonable amount of low-hanging fruit regarding algorithmic and hardware efficiency. If OpenAI and their peers can maintain a reasonable rate of improvement for just a few more years, then we're looking at a truly transformational technology, something like the internet that will have vast repercussions that we can't begin to predict.
The whole LLM thing might be a nothingburger, but how much are we willing to gamble on that outcome?
> it'd have to be impacting the real world
By writing business plans? Getting lawyers punished because they didn't realise that "passes bar exam" isn't the same as "can be relied on for citations"? By defrauding people with synthesised conversations using stolen voices? By automating and personalising propaganda?
Or does it only count when it's guiding a robot that's not merely a tech demo?
Replacing all jobs except LLM developers? I’ll tell my hairdresser.
Maybe they didn't know, maybe none of their colleagues used it, their company didn't pay for it, or maybe all they need is an Excel update.
But I am confident that using Copilot would be faster than clicking through the sludge that is the Microsoft Office help pages (third-party or not).
So I think it is correct to fear capabilities, even if the real-world impact is still missing. When you invent an airplane, there won't be an airstrip to land on yet. Is it useless? Won't it change anything?
It's still early, and I don't see much in corporate communications, for instance, but it will be quite the change.
It's worse than I thought. They've already managed to mimic the median HN user perfectly!
We need one who's doing the dirty work of not discussing.
I fear that at some point the anonymity that made the internet great in the first place will be destroyed by this.
The dead internet theory started to look more real with time, AI spam is just scaling it up.
It has been so bad that I've even considered injecting misspellings, incorrect grammar, and bad punctuation into my prose to prove my words are mine.
I remember seeing the change when GPT-2 was announced.
I guess we'll need an AI secretary to take all phone calls from now on (the spam folder will become a lot more interesting: celebrity phone calls, your dead relative phoning you, etc.).
As someone who grew up with late-90’s internet culture and has seen all the pros and cons and changes over the decades, I find myself using the internet less and less for dialogue with people. And I’m spending more time in nature and saying hi to strangers in reality.
I’m still worried about the impact this will have on a lot of people’s ability to reason, however. “Just” TikTok and apps like it have already had devastating results on certain demographics.
Brave New World indeed.
And that's the case even if you've never posted anything on your social media. It could come from family and friends, or an employer; or if you've ever been in a public-facing job that has done community outreach, ever done a public performance with your music or another hobby, ever walked past a news crew asking questions of bystanders at some event, or ever participated in contests, competitions, or sports leagues, etc. All of that is generally findable in various archives.
I'm sure AI-based ageing can do a good enough job to convince many people that a fake image of someone they haven't seen for years is an older version of the person they remember; but how often would it succeed in ageing an old photo in such a way that it looks like a person I have seen recently and therefore have knowledge rather than guesses about exactly what the years have changed about them?
(Not a rhetorical question to disagree with you, I genuinely have no idea if ageing is predictable enough for a high % result or if it would only fool people with poor visual memory and/or who haven't seen the person in over a decade.)
I feel like even ignoring the big unknowns (at what age, if any, a person will start going bald, or choose to grow a beard or dye their hair, or get a scar on their face, etc.) there must be a lot of more subtle but still important aspects, from skin tone to makeup style to hair to...
I've looked up photos of some school classmates that I haven't seen since we were teens (a couple of decades ago), and while nearly all of them I think "ah yes I can still recognise them", I don't feel I would have accurately guessed how they would look now from my memories of how they used to look. Even looking at old photos of family members I see regularly still to this day, even for example comparing old photos of me and old photos of my siblings, it's surprising how hard it would be for a human to predict the exact course of ageing - and my instinct is that this is more down to randomness that can't be predicted than down to precise logic that an AI could learn to predict rather than guess at. But I could be wrong.
Why not an AI assistant in the browser to fend off all the adversarial manipulation and spam AIs on the web? Going online without your AI assistant would be like venturing out without a mask during COVID.
I foresee a cat-and-mouse game, AIs for manipulation vs AIs for protection one upping each other. It will be like immune system vs viruses.
Has been for years, mon ami. I remember when they started talking about GPT-2 here, and then seeing a sea change in places like Reddit and Quora.
Quite visible on HN, especially in certain threads like those involving brands that market heavily, or discussions of particular countries and politics.
I don't think anyone has a good answer to that question, which is the problem in a nutshell. Job one is to start investing seriously in finding possible answers.
>We need to roll back to "don't trust anything online, don't share your identity or payment information online"
That's easy to say, but it's a trillion-dollar decision. Alphabet and Meta are both worthless in that scenario, because ~all of their revenue comes from connecting unfamiliar sellers with buyers. Amazon is at existential risk. The collapse of Alibaba would have a devastating impact on Chinese exporters, with massive consequent geopolitical risks. Rolling back to the internet of old means rolling back on many years worth of productivity and GDP growth.
Well that's exactly the sort of service that will be extremely valuable in a post-trust internet. They can develop authentication solutions that cut down on fraud at the cost of anonymity.
Even when it comes to people like our parents, there are things we would trust them to do, and things that we would not trust them to do. But what happens when you have zero trusted elements in a category?
At the end of the day, the digital world is the real world, not some separate place 'outside the environment'. Trying to treat the digital world like it doesn't exist puts you in a dangerous position to be deceived. For example, if you're looking for XYZ and you leak this into the digital world, that world may manipulate what your trusted friends think about XYZ, via the ads, articles, and social media posts they see, before you ever ask them.
This tech is dangerous, and I'm currently of the opinion that its uses for malicious purposes are far more effective and significant than LLMs replacing anyone's jobs. The bullshit asymmetry principle is incredibly significant for covert ops and asymmetric warfare, and generating convincing misinformation has become basically free overnight.
Discovering an asteroid full of gold (containing, say, as much gold as half the Earth, to put a modest number on it) would have a huge impact on the labour market. Mining jobs for anything conductive, like copper and silver, would all go away. Housing as we know it would also be obsolete, since we would all live in golden houses. A huge impact on the housing market, yet it doesn't seem such a bad thing to me.
>We're already at a point where we're counselling elders to ignore late-night messages from people claiming to be a relative in need of an urgent wire transfer.
Anyone can prove their identity, or identities, over the wire: wire-fully or wirelessly, anything you like. When I went to university, I was the only one attending the cryptography class; no one else showed up for a boring class like that. I wrote a story about the Electrona Corp in my blog.
What I've been saying to people for at least two years now is: "Remember when governments were not just some cryptographic algorithms?" Yeah, that's gonna change. Cryptography is here to stay; it is not as dead as people think, and it's gonna make a huge blast.
All this would do is crash the gold price. Also note that all the gold at our disposal right now (worldwide) basically fits into a cube with 20 m edges (it's not as much as you might think).
Gold is not suitable to replace steel as a building material (it has much lower strength and hardness), nor copper/aluminium as a conductor (it's a worse conductor than copper and much worse in conductivity per weight than aluminium). The main short-term technical application would be gold-plated electrical contacts on every plug, and little else...
I didn't know that copper is a better conductor than gold. Surprised by that.
.. And gold teeth and grillz.
The thing about cryptography and government is that it's easy to imagine a great technology being adopted at the governmental level because of its greatness. But it is another thing to actually implement it. We live in a bubble where almost everyone knows about cryptographic hashes and RSA, but for most people that is not the case.
Another thing is that political actors tend to concentrate power in their own hands. No way will they delegate decision making to any form of algorithm, cryptographic or not.
As soon as this becomes a problem, it might start bottom-up, from citizens to government officials, rather than top-down, from the president to government departments. Then governments will be forced to formalize identity solutions based on cryptography. See also this case in Germany [2].
One example like that is bankruptcy law in China. China didn't have any law regarding bankruptcy until 2007. For a communist country, or rather a not-totally-capitalist country like China, bankruptcy was not supposed to be an important subject: when people stop being profitable, they will keep working because they like to work and they contribute to the great nation of China. That doesn't make any sense, of course, so the government was forced to implement some bankruptcy laws.
[1] https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos...
[2] https://news.ycombinator.com/item?id=39866056
Probably why it's not released yet. It's unsafe for phishing.
- It helps them sleep at night if their creation doesn't put millions of people out of work.
- Fear of regulation
The world learnt to deal with Nigerian Prince emails and nobody falls for those anymore. Nothing changed: no new laws or regulations were needed.
Phishing calls have been going on without an AI for decades.
You can be skeptical and call back. If you know your friends or family, you should always be able to find an alternative way to get in touch without too much effort in the modern connected world.
Just recently, a gang in Spain was arrested for the "son in trouble" scam. No AI was used. Most parents are not fooled by this.
https://www.bbc.com/news/world-europe-68931214
The AI might have some marginal impact, but it does not matter in the big picture of scams. While it is worrisome, it is not a true safety concern.