> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
The New York case has explicitly targeted Meta's support of end-to-end encryption: https://www.reuters.com/legal/government/meta-executive-warn...
* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the moderated account (including assigning other moderation tools of their choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who allow their kids online without supervision, or without some setting indicating that the user agent is operated by a child, are implicitly choosing to let their children interact with strangers. This tends to work out better in more controlled and limited circumstances where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct — and AI cannot do this; only community ombudsmen/age verifiers can, in a privacy-first way* — the app stores can easily tell app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass this and install apps on devices that their parents hand to them. There will always be black-market devices with these apps, but existing tech offers ways to keep those to a minimum.
Notice also that even if you do this, you still don't need the service to be able to decrypt the content, only the parent.
This could even be generically useful, e.g. you have a messenger used by business and then the messages can be read by the client company's administrator/manager but not the messaging company's.
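The structure being described is a standard one: encrypt each message under a fresh message key, then wrap that key once per authorized reader (the teen and the parent, or the employee and the company admin), so the relaying service never holds any key material. Below is a toy sketch of that envelope structure. The primitives here are deliberately fake (hash-based XOR keystreams, for demonstration only); a real system would use something like X25519 key agreement plus an AEAD cipher. All function and variable names are illustrative, not from any real product.

```python
# Toy sketch (NOT real crypto): a per-message key is wrapped for each
# authorized reader, so the platform relaying the envelope cannot decrypt it.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream "cipher": XOR with a SHA-256-derived keystream. Demo only;
    # symmetric, so the same call encrypts and decrypts.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def send_message(plaintext: bytes, reader_keys: dict) -> dict:
    msg_key = secrets.token_bytes(32)  # fresh key for this one message
    return {
        "ciphertext": keystream_xor(msg_key, plaintext),
        # Wrap the message key once per authorized reader; the service
        # relaying this envelope has no entry here and thus no access.
        "wrapped_keys": {name: keystream_xor(k, msg_key)
                         for name, k in reader_keys.items()},
    }

def read_message(envelope: dict, name: str, reader_key: bytes) -> bytes:
    # Unwrap the message key with this reader's key, then decrypt.
    msg_key = keystream_xor(reader_key, envelope["wrapped_keys"][name])
    return keystream_xor(msg_key, envelope["ciphertext"])

teen_key, parent_key = secrets.token_bytes(32), secrets.token_bytes(32)
env = send_message(b"meet at 4pm", {"teen": teen_key, "parent": parent_key})
assert read_message(env, "teen", teen_key) == b"meet at 4pm"
assert read_message(env, "parent", parent_key) == b"meet at 4pm"
```

The same envelope shape covers the business case: swap "parent" for a client-company administrator, and the messaging provider still stores only ciphertext.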
It's ok to drive Dad's truck unless he catches you and tells you no.
Dad should either know his children would never drive the truck without permission, or keep his keys as safe as his wallet (and if he can't trust his kids with keys, you bet his wallet needs protection).
Not guesses. Not is told about and takes on trust. Knows.
There's nothing to stop a kid creating a fake adult account and using it as an adult, perhaps creating their own kid account for "official" use.
Ultimately this is an unsolvable problem without a single source of truth for verified ID and user age.
The only responsible way to do that is to create a global "ID escrow" agency, where ID details are private and aren't available to governments or corporations without a court order, but the agency can provide basic age checks and other privacy services of a limited nature.
Good luck with that idea in this culture.
Meanwhile we have the opposite - real ID is known to governments and corporations, personal habits and beliefs of all kinds can be tracked, there is zero expectation of privacy, and kids still aren't protected.
Firms have a fiduciary duty to shareholders and profit.
On the other hand, you ultimately decide the rules and goals that govern government organizations, and they do not have a profit-maximization target.
They aren’t the same tool, and they work for different situations.
The E2EE slippery slope is a different challenge, and for that I have no thoughts
We are at a point where we are picking and choosing collateral damage targets.
If you don't support this you're obviously a pedo nazi terrorist.
It is better for them to be forced to turn off the security theater so people that need actual privacy can research alternatives.
"research alternatives" meaning what exactly? You think open source is somehow not susceptible to the same issue, plus all of the malicious updates?
We know that this isn't really going to reduce harm for children, we know Meta is not seriously going to suffer or change, and we know this is going to be used as a cudgel to beat down privacy and increase surveillance.
We don't need all this privacy invasion if we just didn't give kids a smartphone with a data plan.
Harm to kids is actually happening, and this is always going to be a hot button topic.
E2E is critical for our current ability to communicate online, but will be a lower priority when pitted against child safety.
Fighting the good fight is one thing, fighting for the sake of it, without a plan that addresses the tactical reality is another altogether.
Personally, I think E2E will be defended, but it’s becoming a lightning rod for attention. As if removing encryption will solve the emerging issues.
I suspect providing alternatives to champion, such as privacy preserving ways to verify age, will force a conversation on why E2E needs to go.
Absolutely. Particularly where they've been found to be guilty.
> but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption
Why _social media_ companies are backtracking. I'm extremely nonplussed by this outcome.
> concerns that allowing teens
Yes, because that's what we all had in mind when considering the victims and perpetrators of these crimes.
Whatsapp and messenger are still fine, then.
The business case was to be able to say “we don’t know”. That case is gone.
Also, so an aspiring pedo who gets a job at the service can now read the messages of all the underaged kids?
We all know Meta can still read E2EE chats (otherwise they wouldn't do it) and they're using E2EE as an excuse to avoid liability for the things their platform encourages. Contrast this with something like Signal where the entire point is to be secure.
That can't be true, otherwise in what sense is it E2EE?
Has anyone actually audited it?
This also isn't helpful, but I think the sudden push of urgency isn't helping. The internet has existed without any kind of age verification or safety measures for about 30 years. We could have used that time to have a sensible conversation about policy trade offs, but instead we've waited till now to decide that everything has to be rushed through with minimal consideration.
That's the flow that California's age verification system uses. Personally, I'm opposed to any age verification beyond the current "pinky promise you're 18" type deals, but California's is the least intrinsically offensive to me.
On HN itself, no way. Too many people here make far too much money on ads to want that. It seems the other faction, the one that wants freedom, wants so much freedom that it gives huge corporations the freedom to crush them.
>things than a digital age verification that doesn't track every time you use it.
The big companies that pay the politicians don't want that, therefore we won't get that.
There is always a conversation, but it is often not the popular one and gets drowned out by whatever everyone is excited about at the moment. You can find it if you seek it out.
Lawrence Lessig’s book “Code” (1999), for example, argues that a completely unregulated internet is an anomaly, that regulation will certainly be necessary, and that it should be done in a thoughtful manner.
It's really either they can't track you or they will track you.
Second best time to plant a tree: now.
The ideal scenario would be everyone choosing not to engage with these predatory platforms. Going from there, the right question to me is what steps we have to take as a society for that to become even remotely realistic and, subsequently, what role governments can or should have in that.
For starters, I would be in favor of fines that actually hurt the bottom line instead of this "cost of business" bullshit. We have handed these corporations unprecedented access to and control over our lives, to the point that they erode democracy and the social fabric itself. The inevitable abuse of that power when it comes with barely any strings attached needs to be punished in a way that makes it unattractive as a business model at the very least.
Instead of lowering the attack surface by locking out kids, and in turn introducing mass surveillance which at best also lends itself to abuse, the root issues of ruinous greed and lack of accountability need to be addressed. The whole concept that there is no price too high for profits needs to burn. Social media is just one of the more recent manifestations of it.
Unfortunately, social media users don't have billions of dollars to spend on lobbying and related activities around the world.
These lawsuits and regulations are against the industry, not the users.
The regulations and lawsuits are driving the pressure to ID check users and remove end-to-end encryption.
Instead we are saying "only adults should use this" which, while technically regulating the industry, places the restriction on users.
We're treating it like tobacco or alcohol (2 industries who have similarly spent millions upon millions of dollars in lobbying efforts) but we should be treating it like asbestos.
Especially since, when you look at the behavior of younger people, they're way more careful about social media than millennials were. My teenage child and their friends keep all of their conversations in a massive but private group chat. Any social media they consume is basically 'read only'. They don't post online; none of them have social media accounts where they post pictures of themselves, etc.
Same with all of my younger gen-Z coworkers. If they have socials, they post very selectively and all content is work-friendly.
The people I see who need "protection" are aging millennials who don't really understand how wildly they're exposing themselves and their families. I cringe at the amount of personal photos and information shared by the few millennials I know who still need their ego boost from these platforms (and that number itself is much smaller).
Younger people don't share their opinion and anything resembling private photos online any more.
There absolutely are a lot of gen Z who avoid social media, but to pretend most are privately hunkered away is to ignore today's actual social media usage.
The “think of the children” angle is the perfect angle to pressure companies to make communications readable by the government. And here tech audiences are welcoming it and applauding because they couldn’t read past the headline and they think anything that hurts Zuck is good.
How anyone can see this happening and not draw the connections to Discord and other services also pushing ID checks is beyond me. Believing that this will only apply to services that don’t affect you is short-sighted.
I unlurked and made a thread last night, but I think it might be hidden due to account age: https://news.ycombinator.com/item?id=47511919
I have read the OSINT report from Reddit. The data it has is being interpreted as Meta orchestrating a global lobbying scheme.
However the data is equally if not more supportive of Meta simply taking advantage of global political sentiment to position itself better.
I’ve mentioned this elsewhere, but the HN zeitgeist seems to be resistant to the idea that tech is the “bad guy” today.
I work in trust and safety, and have near front row seats to all the insanity playing out today.
Think critically about this for a second before believing some ChatGPT-generated "OSINT" report on Reddit. Otherwise, you'll allow corpos to use your mob hatred against you.
There is no conspiracy; the general public is faced with a crisis, and they are desperate for a solution.
The teen suicide statistics do not lie.
Teen suicide rates in the US are lower now than they were in the 1990s.
I'm sorry, but if you don't think there's a conspiracy, I have a bridge to sell you. It was already unveiled that Meta has lobbied billions towards promoting this legislative change.
> This has been a problem for at least a decade.
I get your point, but anyone who doesn't is asking "Which is it?"
I think everyone can see there are problems. Is there a crisis? I don't think so. Same problems we've always had, but on a computer.
People who know tech know these laws cross a MAJOR line. Not a little slippery-slope thing; this is off a cliff. But I don't think most people, who are already used to having to sign in with an online account on every device they use, even their TV, see it as that big a step. They don't even realize how predatory it is that they are required to sign in. What they need to see is that the sign-in requirement was a choice by the vendor. These are LAWS, demanding no one ever be given the choice not to reveal personal information about themselves to use ANY computer. That's the point that needs to be driven home.
It's been decades of work even to get social media to court.
No one wants to talk about this or look at the issues when it’s not sexy.
$@&$$ - I’ve been at conferences and had safety teams cry on my shoulder about how THEY don’t get engineering resources if they ask for it.
Tech platforms suppress so much research and hold so much data hostage that an entire research coalition has formed around independence from tech.
Zuck and tech as a whole pivoted to drop safety investments the moment this government came to power.
And this is for users in frikking America!
The shit that is going down in the rest of the world is a curse. The sheer amount of NCII that exists, with zero recourse for people whose lives are destroyed is insane.
> The fake child accounts were allegedly contacted and solicited for sex by the three New Mexico adult men who were arrested in May of 2024. Two of the three men were arrested at a motel, where they allegedly believed they would be meeting up with a 12-year-old girl, based on their conversations with the decoy accounts.
and
> “The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls,” Bejar said.
This is what it's about right? The article doesn't make it seem like encryption is meaningfully part of this case at all.
> Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
There's no indication that that decision, or the announcement, was directly related to the trial; they just happened at the same time. It's a link drawn by CNN without presenting any clear connection.
However, there is another possible explanation:
> Tom Sulston, head of policy at Digital Rights Watch, said rather than acceding to law enforcement demands, the move was more likely due to Meta deciding against moving messaging on WhatsApp, Facebook and Instagram to a single platform.
Got away with it again, good profit, will repeat.
The legal system does not seek to destroy the business, or individual criminal. Instead it wants them to be able to continue doing their other non-criminal stuff.
Meta knowingly hurt children for profit. It worked.
If we were in any way serious about technocratic solutions to social problems, this would be untenable: the company would be bankrupted, and a new company would fill its place. No tears would be cried, nothing of value would be lost, and half of Hacker News would be champing at the bit to build a better alternative for the newly opened market.
But that's not what happened. We allowed children to be knowingly hurt for profit.
The system is functioning as intended.
The legal system, to this day, does in fact seek to destroy individual criminals on a regular basis.
By coincidence, New Mexico represents 0.6% of America's population.
Reality, folks: you can't have both.
There are people who are against age verification just on principle and others who are against it because they know any realistic implementation is going to be abused.
We can assume Meta has backdoored its E2EE somehow anyway.
Also, “the total civil penalty of $375m was reached after the jury decided there were thousands of violations of the act, each with a maximum penalty of $5,000. Meta is also involved in a separate trial in Los Angeles, in which a young woman claims that she became addicted to platforms like Instagram and YouTube, owned by Google, as a child because of how they are intentionally designed.
There are thousands of similar lawsuits winding their way through the US courts.”
If all 50 states sue at the same rate, that'll be roughly a 30% dent in annual revenue, and I'm sure states can sue for more than a 0.6%-proportional share too. That would be historic action against malfeasance and would send a strong FAFO signal to all corporates.
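A quick back-of-the-envelope check of that extrapolation, using the $375M verdict, New Mexico's ~0.6% population share, and the ~$200B annual Meta revenue figure cited elsewhere in this thread (a rough scaling exercise, not a legal forecast):

```python
# Rough extrapolation: what the NM verdict implies if all states sued
# at the same per-capita rate. Inputs are figures cited in the thread.
nm_penalty = 375e6          # New Mexico jury verdict
nm_pop_share = 0.006        # NM share of the US population (~0.6%)
annual_revenue = 200e9      # Meta's approximate annual revenue

nationwide = nm_penalty / nm_pop_share  # all 50 states at the same rate
print(f"extrapolated total: ${nationwide / 1e9:.1f}B")           # ~$62.5B
print(f"share of revenue:   {nationwide / annual_revenue:.0%}")  # ~31%
```

So the "30% dent" holds as a fraction of annual revenue, though against a ~$1.5T market cap it would be closer to 4%.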
Let's lobby for it.
By "erasure," I'm not referring to the death of the involved; I'm referring to the elimination of the individual's social capital.
When the privileged lose their ability to influence others, they tend to get rather distressed.
This is really bad for Meta.
Where are you seeing that?
The article says:
> Jurors found there were thousands of violations, each counting separately toward a penalty of $375 million. That’s less than one-fifth of what prosecutors were seeking.
> Meta is valued at about $1.5 trillion and the company’s stock was up 5% in early after-hours trading following the verdict, a signal that shareholders were shrugging off the news.
> Juror Linda Payton, 38, said the jury reached a compromise on the estimated number of teenagers affected by Meta’s platforms, while opting for the maximum penalty per violation. With a maximum $5,000 penalty for each violation, she said she thought each child was worth the maximum amount.
they did $200 billion in revenue and $60 billion in net income last year.
a $3 billion fine would be barely more than a slap on the wrist.
I'm hardly the first person to use this logic, but if they make more money breaking the law than they have to pay in fines, then it's not a fine, it's a business expense.
One of the challenges we need to resolve is the race to the bottom for online communities: engagement metrics will always result in a pH level that supports more acerbic behavior.
There are multiple analyses you can find, if not your own experience to draw on, supporting the belief that we should be able to do better with our information commons.
Just today, I found a paper that studied a corpus of Twitter discussions and found that bad-faith interactions constituted 68.3% of all replies.
The engineer and analyst side of us will always question these types of analyses.
I’ve read enough papers at this point for the methods to matter more than the conclusion.
1) meta, and the other tech platforms need to open up their research and data. NDAs and business incentives prevent us from having the boring technical conversations.
2) tech needs someone else to be the bogeyman - the way we did for tobacco. The profit incentive ensures profitable predatory features pass review. Expecting firms to ignore quarterly shareholder reviews for warm fuzzies is … setting ourselves up for failure.
Regulators (with teeth) need to be propped up so that the right amount of predictable friction (liability) is introduced.
3) tech firms need an opportunity or forum to come clean. The sheer gap between the practical reality of something like content moderation vs the ignorance of users and regulators - results in surprise and outrage when people find out how the sausage is made.
4) algorithm defaults decide the median experience for participants in our shared marketplace of ideas. The defaults need to be set in a manner that works for humans and society (whatever that might be).
Economies are systems to align incentives to achieve subjective goals.
I seem to recall someone taking pictures of their baby, naked, because it was sick, and emailing them to the doctor -- and having their Apple account terminated. Terminated, with the father being labeled a pedophile, and the police contacted (all automatically).
Everyone was quite upset. Everyone felt it was too intrusive.
Frankly, communication platforms have no business trying to police anything at all. I wouldn't want the phone company recording all my conversations, hunting for trigger words, and then contacting the police or cutting off my phone if I said a "bad word".
Yet somehow it's OK to have this level of intrusion because.. um "computers".
The state has no business listening in on private citizens' communications.
Corporations have no business doing so.
To protect the 12-year-old girl, something called "her parents" needs to pay attention and watch what she does. That's their job. They're her guardians.
Some random corporation has no business in that. Some random corporation has no business being an 'algorithmic parent', an automated machine with no appeal.
Here's something I'd support -- a way for parents to prevent children from registering for accounts, and, to be able to examine children's accounts.
But... then we get into ID verification. Of course, surely you support ID verification for platforms, because if you support platforms knowing the age of people (40 and 12, you listed), then you therefore must support a way to verify those ages.
No, they literally identified a plausibly sensible policy flag, not some arbitrary action.
These flags are used in literally every system imaginable.
That they don't conform to some hard criteria, to your criteria, or to some working or ideological group's criteria is a bit beside the point.
Every system has these for good reason.
We have laws and regulations for all sorts of things to help people - including children and parents - in a complex society.
"The state has no business listening in on private citizen's communication."
They absolutely do, depending on circumstances. While Facebook is not a place for state monitoring, it's definitely in the public interest if they flag something that is 'very bad' by some reasonable criteria, so that the state can then act if necessary. They do so within the boundaries of the law, subject to judicial oversight.
Facebook is a popular social network, a place where they want people to feel eminently safe. It's a Starbucks lounge without coffee, not a 'personal hyper-protected zone'.
Other places, such as Signal, Telegram etc. can have different levels of privacy aka e2e given the different offering and expectations of privacy.
Facebook more or less wants to offer a relatively safe place where the kids can hang out, where parents know crazy people are not going to attack their kids. It's a community centre, not a hacker zone.
If we can get past that, then we can move onto basic issues of privacy, advertising etc. which are damaging to everyone, especially young people, for which Facebook has perverse incentives.
NYT: “A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.” [2022]
But just imagine that kids' accounts came with restrictions and privileges: when an account is marked as such, accounts marked as adult cannot initiate contact, the kid's data is automatically private, and those accounts cannot be commercialized in any shape or form.
How would you actually know this? Facebook is a surveillance company, but they are not omniscient.
It helps to reduce the hegemony of large social platforms and promotes privately owned websites. For example, I know everyone who has permission to post on my website (or I pre-moderate strangers' comments), and I am ready to take responsibility for what my website publishes.
Currently the legal stance seems strange to me -- large media platforms are allowed to store, distribute, rank and sell strangers data, while at the same time they claim they are not responsible for it.
https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
Meta has always wanted the appearance of caring about safety (helps them attract talent and keep mission-related morale high), while nearly always prioritizing growth (save for tiny blips of time, like in 2017 when the fallout of the cambridge analytica stuff was hitting a crescendo), whereas companies like X are run by people explicitly disinterested in putting significant resources into safety, especially research.
I will also add that, for the past few years, Meta and X both have become extremely hostile to external researchers of their platforms, shutting down access to tools and data.
Zuckerberg has a brain, he decided to take this action, it is absurd he is not being hit with a personal penalty.
You can't realistically make a space that's free from predators. The real answer is teaching children to recognize unacceptable behavior. But most abuse is from inside--typically adults that the parents put in a position of trust or quasi-trust.
I do not fault Meta for there being predators, I fault Meta for pretending they're being kept out.
Now I'm afraid they've screwed everyone over and the idea of an anonymous open internet is now dead- we're gonna see age (read, real ID) verification gating on every site and app soon....
The dumb thing is to look back and see how unimportant it was that Facebook's feed algorithm be this addictive. They already had the network effects and no real competitors. They could have just left it alone.
The laws being passed target exactly the wrong thing that wasn't a problem. They should have been passing "duty to care" laws aimed at social media companies not "give me your age" laws.
I may have missed it, but almost all these laws being passed for this issue have been pretty much solely around data collection rather than modifying the behavior of the worst businesses in the game.
It would be like seeing a car wreck kill a bunch of pedestrians and then passing a law that pedestrians need to carry IDs on them.
Now we're just moving on to a kind of moral panic think-of-the-kids kind of moment that is thinly-veiled state surveillance.
You start slow, then push it to the limits.
Netflix: never ads, then some ads, then eventually it's just Adflix, after 20 years.
Each new manager wants that comp up. So ads up by 5% every year.
You can purchase a scam ad and it'll be up in 10 minutes. Lie to every anxious child that they have ADHD and need meth; lie to every dejected boy that they just need to manosphere up and buy supplements.
They think the public is stupid. They might be right.
Meta's biggest competitor was users' personal lives, not any other web service. They have been ruthless in crushing that competition.
If you know what the platform is capable of, if you've seen how the sausage is made, you're probably not using it.
People are also a little naive in not seeing that these platforms aren't just bad for children; they are bad for adults as well. I'm not opposed to not "selling" them to children, but we also need to label them correctly for adults and have rules like those for alcohol, tobacco and gambling, so no or limited advertising. Scrub the public spaces of Facebook logos.
I personally stopped using Facebook because it was annoying me with useless doom and aggressive comments from people on stupid topics. If it had shown me only cat pictures (like Instagram does) or reasonable stuff (news, etc.), I would have continued using it.
Let's admit it: in the same vein that Trump is a symptom of current US society, the approach and effects of the social networks we allow to exist are a result of how lazy, and thus addicted, people have gotten. On top of that, many of the parents do exactly the same, so don't expect miracles.
One thing that I don't understand: even here, some folks call that sociopathic, amoral piece of shit 'zuck' and treat his empire like some sort of semi-charity. When I attacked the Facebook company in the past, there was always a lot of defense ("look at this open-sourced stuff, look at that..."), which I presume came from either direct employees or clueless stockholders. People are people: deeply flawed and often weak, without the willingness to admit it to themselves.
That's pretty cheap when it comes to deception.
The eyes of Texas should be upon this; Texas is 15X the size and should not settle for less than $1,000 per person, given that deceptive trade practice is taken much more seriously there than in other places.
Now that would set a $30 billion example which may not be enough of a deterrent either.
But there are probably plenty of people for whom a $5,000 one-time payment might not come close to being fair compensation for what's already happened, especially with Meta allowed to continue as a going concern; that's got to be psychologically harmful.
To really fix it, each state would have to follow "suit" while greatly upping the ante, so there's at least hundreds of billions at stake.
Meta can afford it, and who else is responsible for so much widespread, sneaky deception at this scale for so long?
Mark's personally worth more than 10x that, Facebook's got a 1.7 trillion market cap, so it really wouldn't move the needle for them. Cost of doing business and whatnot.
New Mexico is 0.6% of the U.S. population [1].
[1] https://en.wikipedia.org/wiki/New_Mexico (2.13M)
[2] https://www.census.gov/popclock/ (342M)
Their stated reason? Child safety.
Their actual reason? You can figure that out.
They don't care about child safety as long as it doesn't become so bad as to impact their revenue negatively. But they see that governments all over the world push for some kinds of age restrictions, and they know they are a prime target and it is hard for them to push back against that.
The reason they are (not so secretly) lobbying for requiring us to ID ourselves at the device level is that they don't want to be the gatekeepers. They want to make creating an account as effortless as possible, and having to prove your age is a barrier that may turn off some people, including adults, who may instead turn to services that don't require age verification. By moving age verification into the OS, not only does the responsibility shift to the OS or hardware vendor, but it also removes the disadvantage they would have against services that don't require age verification.
For a similar issue, PornHub is currently blocked in France, because they don't want to comply with the law related to age verification. Here is their argument: https://www.aylo.com/newsroom/aylo-suspends-access-to-pornhu...
If you read between the lines, you will see that they have the same stance: "put age verification at the OS level, so that people don't discriminate against us". They know they are not in a position to argue against "child safety" laws, so instead, they lobby for making it worse for everyone instead of just themselves.
[1]: I could be wrong thinking those are benign.
Cancer is a great metaphor because it's a perversion of natural, healthy processes. So-called social media is nearly that, but actually grotesquely unhealthy.
People become dramatically unwell when they are not social, but when that process goes unregulated it can also be negative, up to and including being lethal.
I call it _anti_social media.
The internet was not a calm and well behaved place before Facebook arrived. The original “Eternal September” was in the early 90s. Usenet, forums, Reddit, comment sections, and every other social part of the internet have been full of bad behavior long before Facebook came along.
Anyways, is there a "just use vue" effort like there is with postgres :)
In the UK, you cannot use App Store and iPhone (your own phone) without verifying your identity:
And it makes more sense: Apple and Google have your credit card. Or, if you are a parent who bought a phone for your child, then at first boot-up it should be your job as the parent to set up a child account.
Why else would they want to sneakily add facial recognition to smart glasses?! /s https://www.businessinsider.com/meta-ray-ban-smart-glasses-f...
As more and more people essentially lock themselves in with these identity brokers, though, I imagine it has a very stifling effect on speech. Imagine getting banned from those.
This is unfalsifiable. Just say what you think it is explicitly.
If so, it is customarily permissible to use rhetoric and sarcasm to more strongly emphasize a point. Or, to leave the conclusion as an exercise for the reader.
There are many interesting ways that the conversation could have been carried forward, but there is no way to continue the conversation as the OP doesn't make it clear what they think.
The only thing I can say is: No I cannot figure it out, please tell me what you're trying to say here.
The other one was the time I was speaking to my brother-in-law, who had just paved his driveway. He said "I could have used airport grade tar, but thought it was too much," and we were in front of his Nest security cam, which is the only thing I can think of. But the very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport grade tar. Why? I didn't google this; I only heard it from him.
There's some serious shenanigans going on with ad companies, and we just seem to handwave it around.
Coincidentally, I remember both experiences very very vividly, because this was the last time I used either platform in any meaningful capacity.
Option A: The Nest camera not only listened to the conversation and picked out "Airport Grade Tar" and decided it needed to show adverts about it to people, but the camera also identified you to the point it could isolate your FB account in order to serve you those adverts.
(I'm making some assumptions but...)
Option B: Your brother-in-law had done various searches for airport grade tar from his home (in order to know how expensive it was). You, whilst visiting his home, were on his Wifi and therefore shared the same external IP address. Your phone did enough activity whilst at his house (the FB app checked in to their servers in the background, or you used Messenger, etc) for the "thinking of buying airport grade tar" signal associated with his external IP address to get associated with your FB account that was temporarily on that IP.
I had a friend who was convinced that some device in his house was listening in on his conversations with his wife as he kept on getting adverts for things they'd been talking about buying the day before but he hadn't searched for. (But she was searching for it from their home wifi, which is why it appeared in his adverts afterwards.)
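Option B can be sketched as a toy model. To be clear, everything here is hypothetical (the data structures and names are my own invention, not a description of any ad platform's actual system): searches get logged against the household's external IP, and any account later seen on that IP inherits the interest tags.

```python
from collections import defaultdict

# Toy model of IP-based interest sharing (all names hypothetical):
ip_interests = defaultdict(set)   # external IP -> interest tags
account_ips = defaultdict(set)    # account -> external IPs seen

def log_search(ip: str, topic: str) -> None:
    """A search from a household gets tagged against its external IP."""
    ip_interests[ip].add(topic)

def log_app_checkin(account: str, ip: str) -> None:
    """Any background app check-in ties the account to that IP."""
    account_ips[account].add(ip)

def ad_topics_for(account: str) -> set:
    """Ads are chosen from the union of interests of every IP the
    account has been seen on -- including a host's home Wi-Fi."""
    topics = set()
    for ip in account_ips[account]:
        topics |= ip_interests[ip]
    return topics

# Brother-in-law searches from home; visitor's phone checks in on the
# same Wi-Fi, so the visitor's feed picks up the topic.
log_search("203.0.113.7", "airport grade tar")
log_app_checkin("visitor_fb_account", "203.0.113.7")
print(ad_topics_for("visitor_fb_account"))  # {'airport grade tar'}
```

No microphone required: the "eerie" ad falls out of plain co-location bookkeeping.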
Currently, websites and apps are supposed to ensure they don't have kids under 13, or if they do, that they have the parents' permission. That's federal law in the US.
These laws make the operating system or app store (depends on the particular law) responsible for being the age gate.
This doesn't stop the federal law from being enforced or anything, but the idea is apps/websites don't handle it directly, that's handled by the operating system or app store.
So now companies like Meta can throw up their hands and say "hey, the operating system told us they were of age, not our fault." It also makes some things murkier. Now if Meta gets sued, can they bring Google/Apple/Microsoft in as some kind of co-defendant?
I think that murkiness is the point. They don't need to create the most bullet-proof set of regulations that 100% absolves them of all responsibility, they just need to create enough to save some money next time they get sued.
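For illustration, an OS-level age signal might look something like the sketch below. This is purely an assumption on my part; none of the proposed laws specify an API, and every name here is invented:

```python
from dataclasses import dataclass
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT_18_PLUS = "18_plus"

@dataclass(frozen=True)
class AgeSignal:
    bracket: AgeBracket   # coarse bracket, not a birthdate or ID
    verified_by: str      # e.g. "os_account" or "app_store"

def os_age_signal(os_account_bracket: AgeBracket) -> AgeSignal:
    """Stand-in for a hypothetical OS call: the app never sees an ID or
    birthdate, only the bracket the OS account holder (or the parent who
    set up the device) configured."""
    return AgeSignal(bracket=os_account_bracket, verified_by="os_account")

signal = os_age_signal(AgeBracket.TEEN_13_17)
# The app gates content on the bracket and can record that it relied on
# the OS-provided signal -- shifting responsibility upstream.
show_adult_content = signal.bracket is AgeBracket.ADULT_18_PLUS
```

Note that the app receives strictly less information than it would from scanning an ID itself, which is why the proposal can be framed as privacy-preserving even while it shifts liability.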
I can think of a ton of regulations we could create to better help protect kids. We could mandate that mobile phones, upon first setup, tell the user about parental controls that are available on the device and ask if they'd like to be enabled. Establish a baseline set of parental controls that need to be implemented and available by phone manufacturers, like an approval process that you need to go through to hit store shelves.
We could create educational programs. Remember being in school and having anti-drug shit come through the school? It could be like that but about social media (and also not like that because it wouldn't just be "social media is bad," hopefully).
Again all these laws do is take what should be Meta's burden, and make it everybody else's burden.
You’re conflating different things. The OS-level age setting proposals are not the same as scanning IDs and faces.
I’m anti age check legislation, too, but the misinformation is getting so bad that it’s starting to weaken the counter-arguments.
> Their stated reason? Child safety.
> Their actual reason? You can figure that out.
We’re commenting under an article about one $375M lawsuit over child safety, with many more on the way. They are obviously being pressured over child safety by overzealous prosecutors. This is why they reversed course and said they would remove end-to-end encryption from Instagram: it was brought up as a threat to child safety.
Also your “you can figure that out” implication doesn’t even make sense. The proposal to move age verification to the OS level would give Meta less information about the user, because the OS, not Meta apps, would be responsible for gating age content. I’m not agreeing with the proposal, but it’s easy to see that it would be more privacy-preserving than having to submit your ID to Meta.
I find it hard to believe that meta doesn't already have a pretty good age estimate for 95%+ of their users.
What offloading the responsibility to the app stores (or OS vendors) gives Meta is exactly that, offloading responsibility. In a future lawsuit, they can say that someone else provided them with incorrect information.
Since the dawn of the Internet era, we've had a legal principle that platforms are relatively shielded from liability for what their users do.
It's the Internet. There's sexual content and sketchy characters on it. Occasionally people will encounter them -- even if they're under 18.
Anyone who grew up in the mid-1990s or later, think back to your own Internet usage when you were under 18. You probably found something NSFW or NSFL, dealt with it, and came out basically OK after applying your common sense. Maybe it was shocking and mildly traumatizing -- but having negative experience is how we grow. Part of growing up is honing one's sense of "that link is staying blue" or "I'm not comfortable with this, it's time to GTFO". And it seems a lot safer if you encounter the sketchy side of humanity from the other side of a screen. Think about how a young person's exposure to the underbelly of humanity might have gone in pre-Internet times: Get invited to a party, find out it's in the bad part of town and there are a bunch of sketchy people there -- well, you're exposed to all kinds of physical risks. You can't leave the party as easily as you can put your phone down.
I stopped logging onto Facebook regularly around 2009; I only log in a couple times a year. I hate what Facebook has become in the past decade and a half.
But giving a site with millions of users a multi-hundred-million-dollar fine because some of those users behave badly seems...asinine.
If your kid is old enough and responsible enough to be given unsupervised Internet access, you'd better teach them how to deal with the skeevy stuff they might encounter.
Letting companies sell addiction has pretty significant negative externalities. That’s why we regulate gambling and drugs. Facebook sells addiction, so it makes sense to regulate it like we do drugs and gambling.
...when they've made a good faith effort to address harms.
But who gets the $375 million? Does anyone know the cut the law firm will get from this incredible amount of money?
Also, parents have in fact full control of snail mail.
These platforms expose minors to predators and bad actors, and Meta was proven lying about safety.
> The state will ask Biedscheid to direct Meta to make changes to its platforms, including adding effective age verification
They immunised us.
Are the kids alright?
They very much want to push this liability off onto someone else...
As far as end-to-end encryption, on SM sites (social media or SadoMasochism, however you want to read it) I don't really see the need.
You don't see any benefit to allowing people to encrypt their private communications in a way that can't be accessed by the company?
It's weird to see tech news commenters swing from being pro-privacy to anti-privacy when the topic of social media sites come up.
There's a difference between E2EE between friends who want to remain secure, and E2EE between strangers in an attempt for the platform to avoid legal liability for spam.
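To make the E2EE property concrete, here is a toy illustration (deliberately NOT real cryptography, just an XOR stand-in with a fixed demo key) of why the platform cannot read the content: it only ever relays ciphertext, and the key exists solely on the two endpoint devices.

```python
# Toy illustration of the end-to-end property. The XOR "cipher" and the
# fixed key are stand-ins; real systems use authenticated encryption
# with keys negotiated per-conversation between the two devices.
def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(k ^ m for k, m in zip(key, msg))

shared_key = bytes(range(1, 33))   # demo key; lives only on the devices
plaintext = b"meet at noon"
ciphertext = toy_encrypt(shared_key, plaintext)

# The platform relays and stores only this opaque blob:
server_view = ciphertext
assert server_view != plaintext

# The receiver, holding the key, recovers the message; the server,
# holding no key, cannot -- even under subpoena.
recovered = toy_encrypt(shared_key, ciphertext)
assert recovered == plaintext
```

The policy argument in this thread is about *who* gets keys by default (friends vs. arbitrary strangers), not about whether the math works.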
The references I saw showed Meta had lobbied for some of the laws that require age verification be done by the site or by third party ID services. They did not show that Meta lobbied for any of the OS bills.
Some showed that Meta had lobbied in some of the states with those bills, but they just showed Meta's total lobbying budget for those states.
Online child exploitation should be a strict liability offense.
I don't like Meta in any sense of the word, and I think they've degraded humanity and society as a whole significantly for generations now to come. But I hope my conspiratorial mind is just overreacting.
https://news.ycombinator.com/item?id=47519625
One is a story by a journalist at CNN, the other is a story by a journalist at the LA Times
Multiple articles on the same topic can sometimes offer different facts and opinions, different perspectives
I would love to see some justice.
Stopping misleading advertisements and mental health harms, while the company claims to be protecting children, is not on the parents. The parents were given false information leading them to believe their kids would be safe.
You think they need this to know your age? Your gender? Your home, your birthplace, your political stance?
Naming and shaming won't do much good. It could backfire and serve as a positive mark on their resume for other morally corrupt leaders.
Unfortunately, as we found out recently, Meta's lobbyists are a powerful force to contend with and I do not trust our governments to stand up to them.
It all boils down to consent.
I might want to take some drugs that have harmful side effects. But I knew about them and willingly made the choice because I valued the high more.
Contrast this with: I knew about the harmful side effects, told you they didn't exist, and said you should take more. And then I change the drug so it's even MORE harmful because it also makes you BUY more. That's what these social media sites do.
They use engineered sociology and psychology to create addictive products, and then refine them to maximize profit at the cost of anything they can pull a lever on.
What bothers me the most is not the vampires at the top sucking out every dollar they can extract out of vulnerable people, but the fact that so many engineers are supporting this. So much for engineering ethics. Why even bother teaching it anymore?
If you want to punish Meta then you have to punish the wonder boy who runs it. Not even share holders can fight off the guy spending 80B on the metaverse.
This fine is somewhat larger, at $375 million, but the other one (https://www.msn.com/en-us/health/other/meta-and-youtube-fine...) basically opens the gates for millions of people suing.
Sadly, I don't think it's enough for Meta to change, because they have no business model if they are forced to be serious about online safety. That's probably also why they are pushing so hard for age verification: make safety a problem for someone else.
All that to say: I don't think "objectivity" should be the (main) factor resulting in existence of adequate punishment.