This told me all I needed to know about her level of understanding of complex topics. It only went downhill from there.
"The Home Secretary's husband has said sorry for embarrassing his wife after two adult films were viewed at their home, then claimed for on expenses."
The follow-up article has some fun nuggets too: http://news.bbc.co.uk/2/hi/8145935.stm
https://www.theregister.com/1999/01/15/france_to_end_severe_...
> Until 1996 anyone wishing to encrypt any document had to first receive an official sanction or risk fines from F6000 to F500,000 ($1000 to $89,300) and a 2-6 month jail term. Right now, apart from a handful of exemptions, any unauthorised use of encryption software is illegal.
These two former empires seem/seemed to have an over-inflated sense of importance and ability to control the world.
If anything, greater intelligence would only accelerate the damage and make the case for public consent more persuasive.
Children are using mobiles and tablets almost exclusively, and both major platform providers supply tools for parental administration.
Content filtering is already facilitated by existing parental control. Mobile browsers could be made to issue a header if the user is under a certain age. Mobile apps could have access to a flag.
Parents should be responsible for parenting their child - not big tech. Why does it need to be any more complicated than that?
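The header idea above is easy to sketch. Purely hypothetical (no such standard header exists today, and "X-User-Minor" is an invented name): a server-side check for a browser-issued age flag might look like this.

```python
# Hypothetical sketch of an OS/browser-issued age flag, as proposed above.
# "X-User-Minor" is an invented header name, not any real standard.

def should_serve_adult_content(headers: dict) -> bool:
    """Return True only if the request carries no minor flag."""
    flag = headers.get("X-User-Minor", "").strip().lower()
    return flag not in ("1", "true", "yes")

# A browser under parental controls would attach the flag automatically:
assert should_serve_adult_content({}) is True
assert should_serve_adult_content({"X-User-Minor": "1"}) is False
```

Note the design is deliberately fail-open for adults: absent the flag, nothing changes, and only devices under parental administration send it.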
Lots of things that feel relatively common online feel like they would be very alien and weird situations if they happened offline.
a) Content controls don't work, what are the government thinking? b) This is parents' problem, they should use content controls.
Individual action doesn't work because it only takes one kid in the class without parental controls, and then everyone loses. There are also obvious workarounds, such as VPNs, or a teenager walking into a pawn shop with £50 for a second-hand smartphone without parental controls.
It also makes no sense that parents can't be bothered to turn on parental controls yet can be bothered to run a national grassroots campaign for this stuff (see e.g. http://smartphonefreechildhood.org)
See also: I Had a Helicopter Mom. I Found Pornhub Anyway: https://www.thefp.com/p/why-are-our-fourth-graders-on-pornhu...
8-year-old watches violent porn on friend’s iPad: https://www.thesun.co.uk/fabulous/32857335/son-watched-viole...
Although your idea of an OS-level age flag is also being pushed by the Anxious Generation's Jonathan Haidt, so it definitely has merit/traction as an alternative.
I don't think my parents had realized that scene was in the book. But I don't think it matters that much. Kids are going to encounter sex. In a pre-industrial society, it's pretty likely that children would catch adults having sex at some point during their childhood -- even assuming they didn't see their own parents doing it at a very young age. Privacy used to be more difficult. Houses often had one bedroom.
I don't mean to say that content controls are useless. I think it was probably for the better that I wasn't watching tons of porn in middle school. But I don't think that content controls need to be perfect; we don't need to ensure that the kids are never exposed to any pornographic content. As long as it isn't so accessible that the kid is viewing it regularly, it probably isn't the end of the world. Like in the one story, PornHub didn't even have a checkbox to ask if you were eighteen. Just don't do that. I didn't end up downloading porn intentionally myself until about five years after reading that book.
The response to this, of course, is that many kids will be educated by their responsible parents.
They will know Santa isn't real or what sex is or why sometimes girls and boys kiss other girls and boys.
Are we going to outlaw teaching your own children about life next? Because they might "spread" the knowledge of... The real world they are about to experience and navigate?
That would be the ideal. Unfortunately, many parents do not have the skills and/or motivation to manage their children's devices.
My parents for a long time used their neighbor's wifi, despite having their own, because they didn't remember the password.
That said, having the carrier mark certain devices as "child" or "adult", or even stamp them with a DoB that flips the flag when they become an adult, might not be a bad thing. While intrusive, it would still be better than the forced-ID path that some states and countries are striving towards.
This has nothing to do with children; those utterances are just bare-faced cover for increased surveillance and control.
It's a distraction.
The real objective is to further raise the barrier to entry for SMEs to compete (try starting your own forum or any kind of challenger to Facebook et al). The government, on the other hand, gets a tidy surveillance tool as a sweetener.
So whenever the time comes to turn a screw on dissent, the law is ready to be used.
Welcome to British corporate fascism.
They are an extreme minority of every population (mostly people who aren't interested in politics or civil liberties, but who enjoy and care about children). But sensible people are also an extreme minority of the population; we normal people usually aren't so sensible. Instead, we listen to sensible people and follow their advice.
So the people who want everybody on the internet to identify themselves pit hysterics against measured voices in the media, in order to create a fake controversy that only has to last until the law gets passed. Afterwards, the politicians and commentariat who were directly paid, or who found personal-brand benefit in associating with the hysterics, start leaving quotes like: "This isn't what we thought we passed" and "It might be useful to have a review to see if this has gone too far." Then we find out that half the politicians connected with the legislation have ties to an age verification firm which is also a data broker and has half a billion in contracts with the MoD.
Because the government is lying and this is about spying on the populace, not about parental control.
I seriously doubt that the majority of parents want the state to raise their children for them.
By arguing about irresponsible or lazy parents you are latching on to the first, most convenient thing that seems to make sense to you. But I think that is a mistake because not only does it perpetuate some kind of distorted sense of reality where parents don't care about their children and want to hand off all responsibility for them, but it distracts you from the real causal issues.
The fact is that humans have for millions of years acted in various levels of coordination to raise and look after children as a group. Modern society has made this all sorts of dysfunctional, but it still exists.
And THAT is the problem that they should be tackling.
Long working hours and both parents working full time means they do not have the time or the energy. Then you have the state offering help, and encouraging parents to drop them off at school first thing for breakfast club, and then keep them there for after school activities.
This is normal and what public education is for. Teaching online safety and sex ed should be considered no different from teaching history.
Guys, this right here is Wikipedia's standing: under the current law, Wikipedia would fall under Category 1 rules, even though by the law's own admission it should not.
The categorisation regulations are a statutory instrument rather than primary legislation, so they _are_ open to judicial review. But the Wikimedia Foundation haven't presented an argument as to why the regulations are unlawful, just an argument for why they disagree with them.
It should be noted that even if they succeed (which seems a long shot), this wouldn't affect the main thrust of the Online Safety Act which _is_ primary legislation and includes the bit making the rounds about adult content being locked behind age verification.
It is actually (as noted in many previous discussions about the Online Safety Act) pushing people towards big tech platforms, because they can no longer afford the compliance cost and risk of running their own.
So big tech platforms will cheerfully embrace it. As expected, major players love regulation.
I suspect any smaller site that claims the Online Safety Act was the reason it closed actually needed to close due to other complications. Take, for example, an art site that features occasional (or more frequent) artistic nudes: stuff that normal people wouldn’t consider mature content, but that the site maintainers wouldn’t want to take the risk on.
Either way, whether I’m right or wrong here, I still think the Online Safety Act is a grotesque piece of legislation.
Likewise, I suspect that most geoblocks are out of misplaced fear not actual analysis.
It seems to be a fairly standard judicial review: if Ofcom classes them as "Category 1", they are under a very serious burden, so they want the categorisation decision reviewed in court.
Very interested to see how this goes.
Scroll to "Who falls under Category 1"
https://medium.com/wikimedia-policy/wikipedias-nonprofit-hos...
A lot of it. Often in high quality and with a permissible license.
I would link to the relevant meta pages, but I want to be able to travel through LHR.
Have the court filings become available?
Of course, the random PR in the OP isn't going to go through their barrister's arguments.
While I agree that the main thrust of the legislation won't be affected either way, the regulatory framework really matters for this sort of thing.
Plus, win or lose, this will shine a light on some of the stupidity of the legislation. Lots of random Wikipedia articles would offend the puritans.
However, I don't see what the legal basis of Wikimedia's challenge is. The OSA is primary legislation, so can't be challenged except under the HRA, which I don't really see working. The regulations are secondary regulation and are more open to challenge, but it's not clear what the basis of the challenge is. Are they saying the regulations are outside the scope of the statutory authority (doubtful)? You can't really challenge law or regulation in the UK on the basis of "I don't like it".
To continue the thought experiment, though: another implementation would be to list up to N tags that best describe the content being served. You could base these on various agreed tagging systems, such as UN ISIC codes (6010 Broadcasting Pop Music) or UDC, the successor to the Dewey Decimal System (657 Accountancy, 797 Water Sports, etc.). The more popular sites could just grandfather in their own tag taxonomies.
A cartoon song about wind surfing:
X-Content-Tags: ISIC:6010 UDC:797 YouTube:KidsTV
It’s then up to the recipient’s device to warn them of incoming illegal-in-your-state content. That's no different to the current legislation.
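Continuing the sketch: parsing such a header and letting the recipient's device apply locally configured rules is trivial. The header name and the ISIC/UDC/site tag schemes come from the example above; the blocklist below is an invented illustration.

```python
# Toy sketch of the tag-header thought experiment. The "X-Content-Tags"
# header and the ISIC/UDC/YouTube schemes are from the example above;
# the blocklist is invented for illustration.

def parse_content_tags(header_value: str) -> dict:
    """Parse 'ISIC:6010 UDC:797 YouTube:KidsTV' into {scheme: code}."""
    tags = {}
    for token in header_value.split():
        scheme, _, code = token.partition(":")
        if code:
            tags[scheme] = code
    return tags

def device_allows(header_value: str, blocked: set) -> bool:
    """The recipient's device decides, based on its own local rules."""
    tags = parse_content_tags(header_value)
    return not any((scheme, code) in blocked for scheme, code in tags.items())

# A jurisdiction (or parent) that blocks UDC 797 "Water Sports" content:
blocked_here = {("UDC", "797")}
assert device_allows("ISIC:6010 UDC:797 YouTube:KidsTV", blocked_here) is False
assert device_allows("ISIC:6010", blocked_here) is True
```

The point being that enforcement moves entirely to the edge: the server only declares what it is, and each device decides what to do about it.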
https://www.gbnews.com/politics/labour-ban-vpn-online-safety...
- Labour have made no plans to ban VPNs.
- One MP wanted to add a clause for a government review into the impact of VPNs on the bill after 6 months, with no direction on what that would mean.
- I have no idea if this clause actually got added, but it'd make sense. If you're going to introduce a stupid law you should at least plan to review if the stupid law is having any impact.
- GB news is bottom of the barrel propaganda.
That's government-speak for deciding to do something about the VPN problem. There is no way a commission will not find a good reason to ban VPNs once you reach that point, because you could argue they help avoid UK restrictions.
'Kyle told The Telegraph last week in a warning: "If platforms or sites signpost towards workarounds like VPNs, then that itself is a crime and will be tackled by these codes."'
https://www.tomsguide.com/computing/vpns/what-does-the-labou... :
"In 2022 when the Online Safety Act was being debated in Parliament, Labour explicitly brought up the subject of VPNs with MP Sarah Champion worried that children could use VPNs to access harmful content and bypass the measures of the Safety Act. "
https://www.independent.co.uk/news/uk/politics/vpns-online-s...
Sure. Nothing was said directly right now, but to just take Labour's word for it that they won't go further with these restrictions is really naive.
The Labour think tank Labour Together also recently floated a mandatory government ID called BritCard, ostensibly for government services but to be rolled out elsewhere.
At the same time they've just set up an elite police force to monitor social media.
Labour must know people are rattled by all this; they just published a response to the petition they received.
They're not addressing any concerns though; it's all "we know best" or shutting down debate with slurs.
In the absence of anything new we just have to take Labour policy on the last things they've said or done.
https://successfulsoftware.net/2025/07/29/the-online-safety-...
And moreover: WF's special pleading is[1], paraphrased, "we already strongly moderate in exactly the ways this government wants, so there's no need to regulate *us* in particular". That's capitulation; or else they were never really averse in the first place.
Wikimedia's counsel is of course pleading Wikimedia's own interests[2]. Their interests are not the same as the public's interest. Don't confuse ourselves: if you are not a centimillionaire entity with sacks full of lawyers, you are not Wikimedia Foundation's peer group.
[0] ("It’s the only top-ten website operated by a non-profit and one of the highest-quality datasets used in training Large Language Models (LLMs)"—to the extent anyone parses that as virtuous)
[1] ("These volunteers set and enforce policies to ensure that information on the platform is fact-based, neutral, and attributed to reliable sources.")
[2] ("The organization is not bringing a general challenge to the OSA as a whole, nor to the existence of the Category 1 duties themselves. Rather, the legal challenge focuses solely on the new Categorisation Regulations that risk imposing Category 1 duties (the OSA’s most stringent obligations) on Wikipedia.")
The law has passed; Wikipedia has to enforce it but doesn’t wish to, because of privacy concerns.
What should Wikimedia do now? Give up? Ignore the laws of the UK? Shut down in the UK? What exactly are the options for Wikimedia?
https://news.ycombinator.com/item?id=3477966 ("Wikipedia blackout page (wikipedia.org)" (2012))
Wikimedia weren't always a giant ambulating pile of cash; they used to be activists. Long ago.
And after the grace period... yeah, I think blocking UK IPs is the "correct" thing to do. If the government doesn't make them an exception, then they'll have to do that, correct or not, anyway.
Yes. This is what every single large company subject to this dystopian law should do. They should do everything they can to block any traffic from the UK until the law is repealed.
If UK wants to be more like China: let them.
That might actually be one of the few things that would help.
So it would be interesting to understand whether shutting down in the UK would have an impact, now that we've all had to learn how to circumvent georestrictions this past week.
And if it only marginally does, how come they cannot just turn off their "content recommender system"? Perhaps an example is the auto-generated "Related articles" that appear in the footer on mobile only?
[1] https://www.legislation.gov.uk/uksi/2025/226/regulation/3/ma...
> In paragraph (1), a “content recommender system” means a system, used by the provider of a regulated user-to-user service in respect of the user-to-user part of that service, that uses algorithms which by means of machine learning or other techniques determines, or otherwise affects, the way in which regulated user-generated content of a user, whether alone or with other content, may be encountered by other users of the service.
Speculating wildly, I think a bunch of the moderation / patroller tools might count. They help to find revisions ("user-generated content") that need further review from other editors ("other users").
There's not much machine learning happening (https://www.mediawiki.org/wiki/ORES), but "other techniques" seems like it'd cover basically-anything up to and including "here's the list of revisions that have violated user-provided rules recently" (https://www.mediawiki.org/wiki/Extension:AbuseFilter).
(Disclaimer: I work for the WMF. I know literally nothing about this court case or how this law applies.)
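The quoted definition is broad enough that even a trivial, non-ML heuristic seems to qualify as "other techniques". As a purely illustrative sketch (nothing like MediaWiki's actual implementation), a "Related articles" feature could be as simple as ranking other pages by category overlap, and it would still "determine the way in which content may be encountered by other users":

```python
# Illustrative toy only: a non-ML "related articles" heuristic based on
# shared category overlap. This does NOT reflect how Wikipedia actually
# generates related articles; the point is that even something this
# simple arguably falls under the regulation's "other techniques" wording.

def related_articles(title: str, categories: dict, k: int = 3) -> list:
    """Rank other articles by how many categories they share with `title`."""
    own = categories[title]
    scored = [
        (len(own & cats), other)
        for other, cats in categories.items()
        if other != title
    ]
    scored.sort(key=lambda t: (-t[0], t[1]))  # most overlap first, then A-Z
    return [other for score, other in scored[:k] if score > 0]

# Invented example data:
cats = {
    "Online Safety Act": {"UK law", "Internet", "2023"},
    "GDPR": {"EU law", "Internet", "Privacy"},
    "Magna Carta": {"UK law", "1215"},
    "Cricket": {"Sport"},
}
assert related_articles("Online Safety Act", cats) == ["GDPR", "Magna Carta"]
```

If a deterministic set intersection counts, it is hard to see what recommendation-adjacent feature, from "Related articles" to patroller queues, would not.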
That applies even to just accessing mainstream services such as direct messaging on social media sites.
The most high-profile example of this is the Technology Secretary Peter Kyle comparing Nigel Farage to a sex offender, specifically Jimmy Savile.
Regardless of what you think of Farage[1], that's a terrible thing to say. There are legitimate concerns about this Act.
The ICO is charged with protecting privacy and punishing breaches of personal information due to incompetence. Yet they've never prosecuted an organisation for a biometric data breach.
Until the ICO actually has teeth, and uses them, they shouldn't have introduced these restrictions, and that's before we even get to the fact that the Act doesn't achieve what it says it will.
1. ...and I have very little positive to say about him but he is still an opposition MP and it is his job to oppose the government.
I find this a very unprincipled stance.
https://en.wikipedia.org/wiki/Protests_against_SOPA_and_PIPA...
Every company that implemented any compliance is a traitor to the free internet and should be treated as such.
While the law would not be toppled tomorrow, the companies of the internet need to stop being so desperate for small scraps of money and eyeballs.
The internet might be free if companies, instead of trying to skirt laws and regulations, just operated where they are welcome. Good for the internet, but bad for the VCs, so it won't happen.
A UK internet blockade might just get this going.
But of course Meta carved out their own exception in the law, so this law benefits Meta at the cost of alternatives.
It's like the government thought long and hard about how to make the restrictions as inconvenient as possible while leaving the largest number of gaps in the approach.
Cutting off UK for a few weeks won't cause that much damage but might help them in the long run.
What would be the punishment for that?
On a legal level? None. On a personal level? Don't give them money or your business. Avoid them completely or ensure you use ad blockers on their sites and throw away accounts if necessary. Do not contribute to their content.
In short: you take whatever they give you, and you give nothing in return.
Companies can be fined £18 million or 10% of revenue, whichever is greater. If you feel like being the first test case, be my guest.
> The Wikimedia Foundation shares the UK government’s commitment to promoting online environments where everyone can safely participate. The organization is not bringing a general challenge to the OSA as a whole, nor to the existence of the Category 1 duties themselves. Rather, the legal challenge focuses solely on the new Categorisation Regulations that risk imposing Category 1 duties (the OSA’s most stringent obligations) on Wikipedia.
Seems to require an algorithmic feed to be Category 1 - https://www.legislation.gov.uk/ukdsi/2025/9780348267174
https://www.edinburghnews.scotsman.com/news/uk/online-safety...
If Operating Systems had a way for parents to adequately monitor and administer the machines of their children, this would not be such a huge hole into which to pour (yet more) human rights abuses.
Parents have the right to have an eye on their children. This is not repressive, it is not authoritarian, it is a right and a responsibility.
The fact that I can't quickly, easily, and with little fuss see what my kids are viewing on their screens is the issue.
Sure, children have the right to privacy - but it is their parents who should provide it to them. Not just the state, but the parents. And certainly, the state should not be eliminating the rest of society's privacy in the rush to prevent parents from having oversight of - and responsibility for - the online activities of their children.
The fact is, Operating System Vendors would rather turn their platforms into ad-vending machines, than actually improve the means by which the computers are operated by their users.
It would be a simple thing to establish parent/child relationship security between not just two computers, but two human beings who love and trust each other.
Kids will always be inquisitive. They will always try to exceed the limits imposed upon them by their parents. But this should not be a reason for more draconian control over consenting adults, or indeed individual adults. It should be a motivating factor to build better computing platforms, which can be reliably configured to prevent porn from having the detrimental impact many controllers of society have decided is occurring.
Another undeniable fact is that parents, and parenting, get a bad rap. However, if a parent and child love and trust each other, the ability to quickly observe the kid's computing environment in productive ways should be provided, technologically.
When really, we should be building tools which strengthen parent/child relationships, we are instead eradicating the need for parents.
Unpopular opinion, I know, but: That's The Point.
Unless you've got some specific better examples?