and it's all being shut down.
the UK Online Safety Act creates a massive liability. Whilst at first glance the risk seems low, the reality is that moderating people usually provokes their ire; if we had to moderate them because they were a threat to the community, they are usually exactly the kind of people who get angry.
in 28 years of running forums, I've had people respond to moderation by trying to get the domain revoked, filing fake copyright notices, making death threats, and stalking me (IRL and online)... as a forum moderator you are known, and you are a target, and the Online Safety Act creates a weapon that can be used against you. the risk is no longer hypothetical, so even if I got lawyers involved to be compliant I'd still carry the liability and risk.
in over 28 years I've run close to 500 fora in total, and they've changed so many lives.
I created them to provide a way for those without families to build families, to catch the waifs and strays, and to try to hold back loneliness, depression, and the risk of isolation and suicide... and it worked, it still works.
but on 17th March 2025 it will become too much, no longer tenable, the personal liability and risks too significant.
I guess I'm just the first to name a date, and now we'll watch many small communities slowly shutter.
the Online Safety Act was supposed to hold big tech to account, but in fact they're the only ones who will be able to comply... it consolidates even more power on those platforms.
Increase fuel economy -> Introduce fuel economy standards -> Economic cars practically phased out in favour of guzzling "trucks" that are exempt from fuel economy standards -> Worse fuel economy.
or
Protect the children -> Criminalize activities that might in any way cause an increase in risk to children -> Best to just keep them indoors playing with electronic gadgets -> Increased rates of obesity/depression etc -> Children worse off.
As the article itself says: Hold big tech accountable -> Introduce rules so hard to comply with that only big tech will be able to comply -> Big tech goes on, but indie tech forced offline.
When intentional, this is Regulatory Capture. Per https://www.investopedia.com/terms/r/regulatory-capture.asp :
> Regulation inherently tends to raise the cost of entry into a regulated market because new entrants have to bear not just the costs of entering the market but also of complying with the regulations. Oftentimes regulations explicitly impose barriers to entry, such as licenses, permits, and certificates of need, without which one may not legally operate in a market or industry. Incumbent firms may even receive legacy consideration by regulators, meaning that only new entrants are subject to certain regulations.
A system with no regulation can be equally bad for consumers, though; there's a fine line between too little and too much regulation. The devil, as always, is in the details.
Sketchy large employers like G4S responded by setting up tens of thousands of "Mini umbrella companies" [1] with directors in the Philippines, each company employing only a handful of people - allowing G4S to benefit from the £4,000 discount tens of thousands of times.
Sadly, exempting small operations from regulation isn't a simple matter.
It almost always doesn't, because the big guys have lobbyists and the small guys don't.
The big guys would rather not have to comply with these rules, but typically their take is, well, if we're going to have to anyway, let's at least make it an opportunity to drive out some of the scrappy competition and claim the whole pie for ourselves.
Too many cobras > bounty for slain cobras > people start breeding them for the bounty > law is revoked > people release their cobras > even more cobras around
He was recently interviewed about that book on the New Books Network:
<https://newbooksnetwork.com/michael-g-vann-the-great-hanoi-r...>
Audio: <https://traffic.megaphone.fm/LIT1560680456.mp3> (mp3)
(Episode begins at 1:30.)
Among the interesting revelations: the rat problem was concentrated in the French Quarter of Hanoi, as that's where the sewerage system was developed. What drained away filth also provided an express subway for rats. Which had been brought to Vietnam by steamship-powered trade, for what it's worth.
(That's only a few minutes into the interview. The whole episode is great listening, and includes a few details on the Freakonomics experience.)
I think most of the examples fit this, but a few don't.
This is a way to regulate political speech and create a weapon to silence free speech online. It's what opponents to these measures have been saying forever. Why do we have to pretend those enacting them didn't listen, are naive, or are innocent well intentioned actors? They know what this is and what it does. The purpose of a system is what it does.
Related to this, one label for this kind of silencing, particularly as potentially weaponized by arbitrary people and not just politicians, is the heckler's veto. Just stir up a storm and cite this convenient regulation to shut down a site you don't like. It's useful to those enacting these laws that they don't even have to point the finger themselves; disgruntled users or whoever will do it for them.
- very basic macro economics
- very basic game theory
- very basic statistics
Come to think of it, kids should learn this in high school
If we can get the voters to understand the things you mention, then maybe we’d have a chance.
Imagine a society so stable it doesn't need new laws or rules. All the elected representatives would just sit around all day and twiddle their thumbs. A bad look in their eyes.
This is how it should be of course.
The next UK general election is ~5 years away so this makes no sense.
The more likely reason is that it's simply good policy. We have enough research now that shows that (a) social media use is harmful for children and (b) social media companies like Meta, TikTok etc have done a wilfully poor job at protecting them.
It is bizarre to me how many people here seem willing to defend them.
The problem is that the real problems are very hard, and their job is to simplify them for their constituents well enough to keep their jobs, which may or may not line up with doing the right thing.
This is a truly hard problem. CSAM is a real problem, and those who engage in its distribution are experts in subverting the system. So is freedom of expression. So is the onerous imposition of regulations.
And any such issue (whether it be transnational migration, or infrastructure, or EPA regulations in America, or whatever issue you want to bring up) is going to have some very complex tradeoffs and even if you have a set of Ph.Ds in the room with no political pressure, you are going to have uncomfortable tradeoffs.
What if the regulations are bad because the problem is so hard we can't make good ones, even with the best and brightest?
I doubt this. Legislation is written by committee and passed by democracy. Most of the voting public don't look up the voting records which are available to them. Most of the voting public can't name a third of the members of parliament.
If there is a conspiratorial take, the one about regulatory capture is more believable.
Seriously, the problem is not politicians being clueless about all the above, but having too much power which makes them think they need to solve everything.
I'd give you 100 upvotes if I could.
Generally it's something along the lines of "a truck or van registered to a business is assumed to be a work vehicle, so pays less tax than a passenger car".
Of course you need to have a business to take advantage of that loophole, but it doesn't need to be a business that actually has any use for the truck; it could be a one-person IT consultancy.
Politicians can be very very good at those things, when they have a reason to be.
Many things in a society exist on thin margins, not only monetary, but also of attention, free time, care and interest, etc. You put a burden, such as a regulation, saying that people have to either comply or cease the activity, and people just cease it, like in the post. What used to be a piece of flourishing (or festering, depending on your POV) complexity gets reduced to a plain, compliant nothing.
Maybe that was the plan all along.
These are not unintended consequences. All media legislation of late has been to eliminate all but the companies that are largest and closest to government. Clegg works at Facebook now, they'd all be happy to keep government offices on the premises to ensure compliance; they'd even pay for them.
Western governments are encouraging monopolies in media (through legal pressure) in order to suppress speech through the voluntary cooperation of the companies who don't want to be destroyed. Those companies are not only threatened with the stick, but are given the carrots of becoming government contractors. There's a revolving door between their c-suites and government agencies. Their kids go to the same schools and sleep with each other.
In particular, Merton notes:
> Discovery of latent functions represents significant increments in sociological knowledge .... It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences.
Robert K. Merton, "Manifest and Latent Functions", in Wesley Longhofer, Daniel Winchester (eds) Social Theory Re-Wired, Routledge (2016).
<https://www.worldcat.org/title/social-theory-re-wired-new-co...>
More on Merton:
<https://en.wikipedia.org/wiki/Robert_K._Merton#Unanticipated...>
Unintended consequences:
<https://en.wikipedia.org/wiki/Unintended_consequences#Robert...>
Manifest and latent functions:
<https://en.wikipedia.org/wiki/Manifest_and_latent_functions_...>
It might even be possible now to combine nuanced perspectives and responses to proposed policies from millions of people!? I don't think that's an unreasonable suggestion nowadays, and there's some precedent for it too, even if something like how Wikipedia works isn't really ideal (though it is somewhat an example of the main idea!).
This way, the public servants (including politicians) can mainly just take care of making sure the ideas that the people vote-for get implemented! (like all the lower tiers of government currently do - just extend it to the top level too!) I don't think we should give individuals that power any more!
What might make such a system work in practice is to only let a small randomly selected group of people vote for each issue. You still get a similar representation as a full vote, but with each person having much fewer votes to attend to it isn't overwhelming.
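The sampling intuition above can be checked numerically. A minimal sketch, assuming a simple yes/no issue and uniform random sampling; the electorate size and its 54% split are invented purely for illustration:

```python
import random

def sampled_vote(population_prefs, sample_size, seed=0):
    """Estimate the full-vote outcome from a random sample of voters."""
    rng = random.Random(seed)
    sample = rng.sample(population_prefs, sample_size)
    return sum(sample) / sample_size  # fraction voting "yes"

# Hypothetical electorate: 1,000,000 voters, 54% in favour (1 = yes, 0 = no).
population = [1] * 540_000 + [0] * 460_000
true_share = sum(population) / len(population)

# Sampling ~2,000 voters gives a standard error of roughly 1.1 points,
# so the estimate typically lands within a couple of points of 54%.
estimate = sampled_vote(population, sample_size=2_000)
print(f"true {true_share:.3f}, sampled estimate {estimate:.3f}")
```

The statistical appeal is that each citizen only ever votes on a tiny fraction of issues, while the outcome still tracks what a full referendum would have said.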
But: https://www.inf.ed.ac.uk/teaching/courses/seoc2/1996_1997/ad...
Any bureaucracy evolves, ultimately, to serve and protect itself. So the populist boss snips the easy but actually useful parts: social safety nets, environmental regulations, etc. Whereas the core bureaucracy, the one that should really be snipped, has gotten so good at protecting itself that it remains untouchable. So in the end the percentage of useless administratium actually goes up, and the government as a whole is still bloated but even less functional. Just another "unintended consequences" example.
We'll see if Argentina can do better than this.
I've heard it called "law of unintended consequences" and "cobra effect".
Too bad this isn't the case here.
The US Supreme Court disagrees. https://www.dentons.com/en/insights/articles/2024/july/3/-/m...
It’s why when a law/rule/standard has a carveout for its first edge case, it quickly becomes nothing but edge cases all the way down. And because language is ever-changing, rules lawyering is always possible - and governments must be ever-resistant to attempts to rules lawyer by bad actors.
Modern regulations are sorely needed, but we’ve gone so long without meaningful reform that the powers that be have captured any potential regulation before it’s ever begun. I would think most common-sense reforms would say that these rules should be more specific in intent and targeting only those institutions clearing a specific revenue threshold or user count, but even that could be exploited by companies with vast legal teams creating new LLCs for every thin sliver of services offered to wiggle around such guardrails, or scriptkiddies creating millions of bot accounts with a zero-day to trigger compliance requirements.
Regulation is a never-ending game. The only reason we “lost” is because our opponent convinced us that any regulation is bad. This law is awful and nakedly assaults indietech while protecting big tech, but we shouldn’t give up trying to untangle this mess and regulate it properly.
This is what judges are for. A human judge can understand that the threshold is intended to apply across the parent company when there is shared ownership, and that bot accounts aren't real users. You only have to go back and fix it if they get it wrong.
> The only reason we “lost” is because our opponent convinced us that any regulation is bad. This law is awful and nakedly assaults indietech while protecting big tech, but we shouldn’t give up trying to untangle this mess and regulate it properly.
The people who passed this law didn't do so by arguing that any regulation is bad. The reason you lost is that your regulators are captured by the incumbents, and when that's the case any regulation is bad, because any regulation that passes under that circumstance will be the one that benefits the incumbents.
Not sure how keeping kids off the internet keeps them indoors? Surely the opposite is true?
So what do you do to entertain children? Use what you have. Dunk them on the internet via YouTube first and then let them free range because you’re tired and can’t give a fuck anymore.
^1 https://abcnews.go.com/amp/GMA/Family/mom-arrested-after-son... ^2 https://www.aol.com/news/2015-12-03-woman-gets-arrested-for-...
We were interviewed, they found there were no issues, and the case was dropped. Very stressful experience, though.
And for what? I grew up on a farm in Nebraska. We had endless fields and roads around us to explore. The only off-limits area was an abandoned hog confinement, which to be fair, absolutely could have killed us (by falling into the open trench of porcine waste) – naturally, we still went there.
I know that reeks of survivor bias, but given the length of time Homo sapiens have survived, I think it’s a reasonably safe assumption that kids, when left to their own devices, are unlikely to be seriously injured or killed. Though, that’s probably only true if they’ve been exposed to it gradually over time, and are aware of the risks.
The other link you have is neighbors that obviously dislike each other, and they told the cops the kid was in danger.
That is like saying "when we write software there are bugs, so rather than fix them, we should never write software again".
Your second example is ascribing to regulation something that goes way beyond regulation.
Although I do think they overlook that their legislation is restricted to their domestic market, so any potential positive effect is more or less immediately negated. That is especially true for English-speaking countries.
Because no one would fork over stupid amounts of money for a f*k off big truck if they didn't have a real need. Right?
tl;dr: This is a myth.
There is no incentive to the consumer to purchase a vehicle with worse fuel economy.
There USED to be an incentive, 30-40 years ago.
It is not 1985 anymore.
The gas guzzler tax covers a range of fuel economies from 12.5 to 22.5 mpg.
It is practically impossible to design a car that gets less than 22.5 mpg.
The Dodge Challenger SRT Demon 170, with a 6.2 L 8-cylinder engine making ONE THOUSAND AND TWENTY FIVE horsepower, is officially rated for 13 mpg, but that's bullshit: it's Dodge juicing the numbers just so buyers can say "I paid fifty-four hundred bucks gas guzzler tax BAYBEE", and in real-world usage the Demon 170 gets 25 mpg. Other examples of cars that cannot achieve 22.5 mpg are the BMW M2/M3/M4/M8 and the Cadillac CT5, high-performance sports sedans for which the gas guzzler tax is a <5% price increase. ($5400 is 5% of the Demon 170 list price, but 2-3% of what dealers are actually charging for it.)
The three most popular vehicles by sales volume in the United States are: 1. The Ford F-150, 2. The Chevy Silverado, and 3. The Dodge Ram 1500.
The most popular engine configuration for these vehicles is the ~3L V6. Not a V8. A V6.
Less than 1/4th of all pickup trucks are sold equipped with a V8.
According to fueleconomy.gov every single Ford, Chevrolet, and Ram full-size pickup with a V6 would pay no gas guzzler tax.
Most V8s would be close, perhaps an ECU flash away, to paying no gas guzzler tax. The only pickups that would qualify for a gas guzzler tax are the high-performance models-- single-digit percentages of the overall sales volume and at those prices the gas guzzler tax would not even factor into a buyer's decision.
People buy trucks, SUVs, and compact SUVs because they want them and can afford them.
Not because auto manufacturers phased out cars due to fuel economy standards. Not because consumers were "tricked" or "coerced". And certainly not because "the gubmint" messed things up.
They buy them because they WANT them.
The Toyota RAV4 is the 4th most popular car in the US. The Corolla is the 13th most popular. They are built on the same platform, and dimensionally the Corolla is actually very slightly larger except for height. They both come with the same general ballpark of engine choices. The gas guzzler tax only applies to the Corolla, but that doesn't matter because both would be exempt. People don't choose the RAV4 over the Corolla because of fuel economy; they buy it because the Corolla has 13 cubic feet of cargo capacity and the RAV4 has 70.
And before anyone says that the gas guzzler tax made passenger cars more expensive: passenger cars can be purchased for the same inflation-adjusted price as 50 years ago. But people don't want a Mitsubishi Mirage, which costs the same as a vintage VW Beetle (the perennial cheapest new car of the 1960s) and is better in every quantifiable metric; they want an SUV.
What may be true is that there is a national policy to keep fuel prices as low as possible, for a myriad of reasons, with one side effect of that policy being that it has enabled people to buy larger less fuel-efficient cars.
I do not believe it is auto manufacturers who are pushing for this policy. I believe it is the freight and logistic market. The auto market is valued at $4 billion, the freight and logistics market is $1,300 billion. GM and Ford are insignificant specks compared to the diesel and gasoline consumers of the freight and logistics firms (who have several powerful lobbies).
https://www.thetruthaboutcars.com/2017/08/v8-market-share-ju...
https://www.irs.gov/pub/irs-pdf/f6197.pdf (gas guzzler worksheet)
> What may be true is that there is a national policy to keep fuel prices as low as possible, for a myriad of reasons, with one side effect of that policy being that it has enabled people to buy larger less fuel-efficient cars.
Yes. Americans have always had cheap fuel and it's shaped the entire society around it.
Consumers want larger vehicles, and manufacturers bend the rules to allow such vehicles to be built more easily. Manufacturers write the laws, after all. CAFE allows SUVs and other "light trucks" to get worse fuel economy than a car, since fuel economy allowances are based on vehicle footprint, and it's easier to make a car larger than it is to improve fuel economy.
I know my wife likes storing things in the boot of our car and I'm not even American. It means they're always conveniently there - chairs for sitting in the park, shopping bags, groceries that she's going to take to a party or bought for someone else, kids sports equipment.
Not true: Section 179 [0]. Luxury auto manufacturers are well-aware of this [1] and advertise it as a benefit. YouTube et al. are also littered with videos of people discussing how they're saving $X on some luxury vehicle.
> Not because consumers were "tricked" or "coerced". ... They buy them because they WANT them.
To be fair, they only want them because they've been made into extremely comfortable daily drivers. Anyone who's driven a truck from the 90s or earlier can attest that they were not designed with comfort in mind. They were utilitarian, with minimal passenger seating even with Crew Cab configurations. At some point – and I have no idea if this was driven by demand or not – trucks became, well, nice. I had a 2010 Honda Ridgeline until a few weeks ago, which is among the un-truck-iest of trucks, since it's unibody. That also means it's extremely comfortable, seats 5 with ease, and can still do what most people need a truck to do: carry bulky items home from Lowe's / Home Depot. Even in the 2010 model, it had niceties like heated seats. I just replaced it last week with a 2025 Ridgeline, and the new one is astonishingly nicer. Heated and ventilated seats, seat position memory, Android Auto / Apple CarPlay, adaptive cruise control, etc.
That's also not to say that modern trucks haven't progressed in their utility. A Ford F-350 from my youth could pull 20,000 lbs. on a gooseneck in the right configuration. The 2025 model can pull 40,000 lbs., and will do it in quiet luxury, getting better fuel economy.
[0]: https://www.irs.gov/publications/p946#idm140048254261728
In practice, this means the local cycling forum that fostered trust, friendship, and even mental health support is at risk of vanishing, while the megacorps sail on without a scratch. Ironically, a measure allegedly designed to rein in “Big Tech” ends up discouraging small, independent communities and pushing users toward the same large platforms the legislation was supposedly targeting.
It’s discouraging to watch governments double down on complex, top-down solutions that ignore the cultural and social value of these smaller spaces. We need policy that recognises genuine community-led forums as a public good, encourages sustainable moderation practices, and holds bad actors accountable without strangling the grassroots projects that make the internet more human. Instead, this act risks hollowing out our online diversity, leaving behind a more homogenised, corporate-dominated landscape.
That wasn't the one I was thinking of, to be honest.
I'd have thought you would be mentioning the latest ball of WTF: "Online Safety Amendment (Social Media Minimum Age) Bill 2024".
According to the bill, HN needs to identify all Australian users to prevent under-16s from using it.
https://www.aph.gov.au/Parliamentary_Business/Bills_Legislat...
But yes, I'm confused as to whether it applies to online gaming, or sites such as wikipedia as well
As written, it should. Which is ridiculous, and it's a ridiculous law in the first place. I'm loath to discuss politics, but by god both Labor and the LNP are woeful when it comes to tech policy.
As sad as it may be, their imagination is correct. The small spaces, summed up all together, are lost in the rounding errors.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
So, I fully understand why someone would rather shut down their site rather than potentially deal with the legal fallout. Even if the end result is "just getting shut down", that will come after a significant amount of legal troubles, and likely money spent dealing with them.
The fear some have is not misunderstandings, but disgruntled types (the sort of people who blow up over a perfectly reasonable moderation decision) and common garden variety griefers reporting things to cause inconvenience. I know people who have in the past run forums and had to put up with spurious reports to their ISP/host or even on one occasion local law enforcement. If someone did this it would likely go nowhere in the end but not before causing much stress and perhaps cost via paying for legal advice.
> I don't get preemptively doing it other than giving up after a long duty of almost 30 years and using this as excuse.
Having been involved less directly with that sort of admin & moderation work I can see this change being the final straw after putting up with the people of the internet for years. Calling it “just an excuse” seems rather harsh.
> At least pass them to someone else that won't care about the liability.
Depending on the terms people agreed to when signing up and posting, passing on the reins might not be nearly as legally/morally clear-cut as several in these comments are assuming.
Authoritarians don't want people to be able to talk (and organize) in private. What better way to discourage them than some "think of the children" nonsense? That's how they attacked (repeatedly) encryption.
Google, Facebook, and Twitter all could have lobbied against this stuff and shut it down, hard. They didn't.
That speaks volumes, and my theory is that they feel shutting down these forums will push people onto their centralized platforms, increasing ad revenues - and the government is happy because it's much easier to find out all the things someone is discussing online.
It's honestly super weird. Now of course they are just proposing to tax the tech companies if they don't pay money to our local media orgs for something the tech companies neither want nor care about.
A cycling site with 275k MAU would be in the very lowest category where compliance is things like 'having a content moderation function to review and assess suspected illegal content'. So having a report button.
Companies have legal departments, which exist to figure out answers to questions like that. This is because these questions are extremely tricky and the answers might even change as case law trickles in or rules get revised.
Expecting individuals to interpret complex rulesets under threat of legal liability is a very good way to make sure these people stop what they are doing.
The law worked the same way yesterday as it does today. It's not like the website run in Britain operated under some state of anarchy and in a few months it doesn't. There's already laws a site has to comply with and the risk that someone sues you, but if you were okay with running a site for 20 years adding a report button isn't drastically going to change the nature of your business.
I'm surprised they don't already have some form of report/flag button.
That means you need to do CSAM scanning if you accept images, CSAM URL scanning if you accept links, and there’s a lot more than that to parse here.
The cases where they assume you should say "medium risk", even without evidence of it happening, are when you've got several major risk factors:
> (a) child users; (b) social media services; (c) messaging services; (d) discussion forums and chat rooms; (e) user groups; (f) direct messaging; (g) encrypted messaging.
Also, before someone comes along with a specific subset and says those several things are benign:
> This is intended as an overall guide, but rather than focusing purely on the number of risk factors, you should consider the combined effect of the risk factors to make an overall judgement about the level of risk on your service
And frankly if you have image sharing, groups, direct messaging, encrypted messaging, child users, a decent volume and no automated processes for checking content you probably do have CSAM and grooming on your service or there clearly is a risk of it happening.
• A "large service" (more than 7 million monthly active UK users) that is at a medium or high risk of image-based CSAM, or
• A service that is at a high risk of image-based CSAM and either has more than 700,000 monthly active UK users or is a file-storage and file-sharing service.
Which really should be happening anyway.
I would strongly prefer that forums I visit not expose me to child pornography.
I think there’s a pretty decent argument being made here that OP is reading too far in the new rules and letting the worst case scenario get in the way of something they’re passionate about.
I wonder if they consulted with a lawyer before making this decision? That’s what I would be doing.
Those that do whilst not seeking financial gain are impacted the most.
Regulatory capture. https://en.wikipedia.org/wiki/Regulatory_capture
They do not have the resources to find out exactly what they need to do so that there is no risk of them being made totally bankrupt.
If that is all - please point to the guidance or law that says just having a report button is sufficient in all cases.
Also, if it is well monitored and seems to have a positive community, I don't see a major risk that justifies shutting down. It seems more like shutting down out of frustration with a law that, while silly on its face, doesn't really impact this provider.
From another commenter:
Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher.
Then you will see that a forum that allows user generated content, and isn't proactively moderated (approval prior to publishing, which would never work for even a small but moderately busy forum of 50 people chatting)... will fall under "All Services" and "Multi-Risk Services".
This means I would be required to do all the following:
1. Individual accountable for illegal content safety duties and reporting and complaints duties
2. Written statements of responsibilities
3. Internal monitoring and assurance
4. Tracking evidence of new and increasing illegal harm
5. Code of conduct regarding protection of users from illegal harm
6. Compliance training
7. Having a content moderation function to review and assess suspected illegal content
8. Having a content moderation function that allows for the swift take down of illegal content
9. Setting internal content policies
10. Provision of materials to volunteers
11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM
12. (Probably this, but could implement Google Safe Browser) Detecting and removing content matching listed CSAM URLs
...
the list goes on.
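For a sense of what item 11 actually entails: production systems match perceptual hashes (e.g. PhotoDNA or PDQ) against lists supplied by bodies such as the IWF or NCMEC. A deliberately simplified sketch of the upload-time check, using exact SHA-256 matching and an invented block list purely for illustration:

```python
import hashlib

# Illustrative block list only. Real deployments receive curated hash
# lists from organisations like the IWF/NCMEC, and use perceptual
# hashes that survive re-encoding and cropping, not exact digests.
BLOCKED_SHA256 = {
    # SHA-256 of the empty file, standing in for a real entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def upload_allowed(file_bytes: bytes) -> bool:
    """Reject an upload whose hash appears on the block list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest not in BLOCKED_SHA256

print(upload_allowed(b""))       # False: on the demo list
print(upload_allowed(b"photo"))  # True: not on the list
```

Even this toy version shows why the duty is heavy for a sole volunteer: someone has to obtain and refresh the hash list, wire the check into every upload path, and handle appeals when it misfires, none of which is a one-off job.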
It is technical work, extra time, the inability to not constantly be on-call when I'm on vacation, the need for extra volunteers, training materials for volunteers, appeals processes for moderation (in addition to the flak one already receives for moderating), somehow removing accounts of proscribed organisations (who has this list, and how would I know if an account is affiliated?), etc, etc.
Bear in mind I am a sole volunteer, and that I have a challenging and very enjoyable day job that is actually my primary focus.
Running the forums is an extra-curricular volunteer thing, it's a thing that I do for the good it does... I don't do it for the "fun" of learning how to become a compliance officer, and to spend my evenings implementing what I know will be technically flawed efforts to scan for CSAM, and then involve time correcting those mistakes.
I really do not think I am throwing the baby out with the bathwater, but I did stay awake last night dwelling on that very question, as the decision wasn't easily taken and I'm not at ease with it, it was a hard choice, but I believe it's the right one for what I can give to it... I've given over 28 years, there's a time to say that it's enough, the chilling effect of this legislation has changed the nature of what I was working on, and I don't accept these new conditions.
The vast majority of the risk can be realised by a single disgruntled user on a VPN from who knows where posting a lot of abuse material while I happen to not be paying attention (travelling for work and focusing on IRL things)... and then the consequences and liability come. This isn't a risk I'm in control of or one that can be easily mitigated; the effort required is high, and everyone here knows you cannot solve social issues with technical solutions.
In this case, it's "I'm shutting down my hobby that I've had for years because I have to add a report button".
That costs money. The average person can't know every law. You have to hire lawyers to adjudicate every report or otherwise assess every report as illegal. No one is going to do that for free if the penalty for being wrong is being thrown in prison.
A fair system would be to send every report of illegal content to a judge to check if it's illegal or not. If it is, the post is taken down and the prosecution starts.
But that would cost the country an enormous amount of money. So instead the cost is passed to the operators. Which in effect means only the richest or riskiest sites can afford to continue to operate.
I would never accept personal liability for my correct interpretation of the GDPR. I would be extremely dumb if I did.
Orgs are already fleeing LSEG for deeper capital markets in the US.
Beautiful landscape, the best breakfast around, really nice people, tons of sights to see.
Argentina has had nearly 100 years of decline, Japan is onto its third lost decade. The only other party in the UK that has a chance of being elected (because of the voting system) is led by someone who thinks sandwiches are not real [1]. It's entirely possible the UK doesn't become a serious country in our lifetimes.
[1] https://www.politico.eu/article/uk-tory-leader-sandwiches-no...
The headline is clickbait. She didn't say that sandwiches are not real. She is saying that she doesn't believe it is a proper lunch/meal.
Orwell pointed this out in England your England which was written during the Blitz. Many of the problems he described have only got worse in the decades since he wrote about them in my opinion. While the essay is a bit dated now (it predates the post-war era of globalisation for example which created new axes in UK politics) I still think it's essential background reading for people who want to know what's wrong with the UK, and it's an excellent example of political writing in general.
The current actual leader of the UK decided to politicise this, in a real moist bread response:
> Prime Minister Keir Starmer — who leads a country grappling with a stagnant economy, straining public services and multiple crises abroad — in turn accused Badenoch of talking down a “Great British institution.”
Weird flex but okay?
Funnily enough we wonder this about the USA and its drain-circling obsession with giving power -- and now grotesque, performative preemptive obeisance -- to Donald Trump.
This says it so well, acknowledging the work of a misguided bureaucracy.
Looks like it now requires an online community to have its own bureaucracy in place, to preemptively stand by ready to effectively interact in new ways with a powerful, growing, long-established authoritarian government bureaucracy of overwhelming size and increasing overreach.
Measures like this are promulgated in such a way that only large highly prosperous outfits beyond a certain size can justify maintaining readiness for their own bureaucracies to spring into action on a full-time basis with as much staff as necessary to compare to the scale of the government bureaucracy concerned, and as concerns may arise that mattered naught before. Especially when there are new open-ended provisions for unpredictable show-stoppers, now fiercely codified to the distinct disadvantage of so many non-bureaucrats just because they are online.
If you think you are going to be able to rise to the occasion and dutifully establish your own embryonic bureaucracy for the first time to cope with this type of unstable landscape, you are mistaken.
It was already bad enough before without a newly imposed, bigger moving target than everything else combined :\
Nope, these types of regulations only allow firms that already have a prominent, well-funded bureaucracy of their own, on a full-time basis, long-established after growing in response to less-onerous mandates of the past. Anyone else who cannot just take this in stride without batting an eye need not apply.
What do you mean by bureaucracy in this case? Doing the risk assessment?
I would say it's more that the prohibitive cost of compliance comes from the non-productive (or even anti-productive) nature of the activities needed to do so, on an ongoing basis.
An initial risk assessment is a lot more of a fixed target, with a goal that is in sight if not well within reach. Once it's behind you, it's possible to get back to putting more effort into productive actions. Assessments are often sprinted through so things can get "back to normal" ASAP, which can be worth it sometimes. Other times it's a world of hurt if you don't pay attention to whether it's a moving goalpost and the "sprint" might need to last forever.
Which can also be coped with successfully, like dealing with large bureaucratic institutions as customers, since that's another time when you've got to have your own little bureaucracy. To be fully dedicated to the interaction and well-staffed enough for continuous 24/7 problem-solving operation at a moment's notice. If it's just a skeleton crew at a minimum they will have a stunted ability for teamwork since the most effective deployment can be more like a relay race, where each member must pull the complete weight, go the distance, not drop the baton, and pass it with finesse.
While outrunning a pursuing horde and their support vehicles ;)
> 1. Individual accountable for illegal content safety duties and reporting and complaints duties
> 2. Written statements of responsibilities
> 3. Internal monitoring and assurance
> 4. Tracking evidence of new and increasing illegal harm
> 5. Code of conduct regarding protection of users from illegal harm
> 6. Compliance training
> 7. Having a content moderation function to review and assess suspected illegal content
> 8. Having a content moderation function that allows for the swift take down of illegal content
> 9. Setting internal content policies
> 10. Provision of materials to volunteers
> 11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM
> 12. (Probably this, but could implement Google Safe Browsing) Detecting and removing content matching listed CSAM URLs
> ...
> the list goes on.
Having said all that, I can’t criticise the decision. It makes me sad to see it, and it feels like the end of an era online.
I completely understand a desire to shut things down cleanly, rather than risk something you watched over for years become something terrible.
Finding someone trustworthy is hard, but I know buro9 knows tons of people.
"[A] significant number"? How Britishly vague.
There was one person involved in the doxing of that CEO guy....
I would say that a significant-sized football crowd would be over 75,000.
That's a lot of numbers that 'significant', has to lean on.
That section details how to calculate the figures, because they're relevant for sections like CSAM scanning
> Services that are at high risk of image-based CSAM and (a) have more than 700,000 monthly active United Kingdom users or (b) are file-storage and file-sharing services.
Just email us.
The act is intentionally very vague and broad.
Generally, the gist is that it's up to the platforms themselves to assess and identify risks of "harm", implement safety measures, keep records and run audits. The guidance on what that means is very loose, but some examples might mean stringent age verifications, proactive and effective moderation and thorough assessment of all algorithms.
If you were to ever be investigated, it will be up to someone to decide if your measures were good or you have been found lacking.
This means you might need to spend significant time making sure that your platform can't allow "harm" to happen, and maybe you'll need to spend money on lawyers to review your "audits".
The repercussions of being found wanting can be harsh, and so, one has to ask if it's still worth it to risk it all to run that online community?
This is the problem with many European (and I guess also UK) laws.
GDPR is one notable example. Very few people actually comply with it properly. Hidden "disagree" options in cookie pop-ups and unauthorized data transfers to the US are almost everywhere, not to mention the "see personalized ads or pay" business model.
Unlike with most American laws, GDPR investigations happen through a regulator, not a privately-initiated discovery process where the suing party has an incentive to dig up as much dirt as possible, so in effect, you only get punished if you either really go overboard or are a company that the EU dislikes (which is honestly mostly just Meta at this point).
Exactly the complaint that everyone on here made about GDPR, saying the sky would fall in. If you read UK law like an American lawyer you will find it very scary.
But we don't have political prosecutors out to make a name for themselves, so it works ok for us.
> The act creates a new duty of care of online platforms, requiring them to take action against illegal, or legal but "harmful", content from their users. Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher.
They mention especially in their CSAM discussion that, in practice, a lot of that stuff ends up being distributed by smallish operators, by intention or by negligence—so if your policy goal is to deter it, you have to be able to spank those operators too. [0]
> In response to feedback, we have expanded the scope of our CSAM hash-matching measure to capture smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.
Surely we can all think of web properties that have gone to seed (and spam) after they outlive their usefulness to their creators.
I wonder how much actual “turnover” something like 4chan turns over, and how they would respond to the threat of a 10% fine vs an £18mm one…
[0] https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
If it's the company, the shareholders etc are not liable.
This basically ensures that the only people allowed to host online services for other people in the UK will be large corporations, as they are the only ones that can afford the automation and moderation requirements imposed by this bill.
You should be able to self-host content, but you can't do something like operate a forum website or other smaller social media platform unless you can afford to hire lawyers and spend thousands of dollars a month on moderators and/or implementing a bulletproof moderation system.
Otherwise you risk simply getting shut down by Ofcom. Or you can do everything you are supposed to do and get shut down anyway. Good luck navigating their appeals processes.
You need to do a risk assessment and keep a copy. Depending on how risky things are, you need to put more mitigations in place.
If you have a neighbourhood events thing that people can post to, and you haven't had complaints and generally keep an eye out for misuse, that's it.
If you run a large scale chat room for kids with suicidal thoughts where unvetted adults can talk to them in DMs you're going to have a higher set of mitigations and things in place.
Scale is important, but it's not the only determining factor. An example of low risk for suicide harm is
> A large vertical search service specialised in travel searches, including for flights and hotels. It has around 10 million monthly UK users. It uses recommender systems, including for suggesting destinations. It has a basic user reporting system. There has never been any evidence or suggestion of illegal suicide content appearing in search results, and the provider can see no way in which this could ever happen. Even though it is a large service, the provider concludes it has negligible or no risk for the encouraging or assisting suicide offence
An example for high risk of grooming is
> A social media site has over 10 million monthly UK users. It allows direct messaging and has network expansion prompts. The terms of service say the service is only for people aged 16 and over. As well as a content reporting system, the service allows users to report and block other users. While in theory only those aged 16 and over are allowed to use the service, it does not use highly effective age assurance and it is known to be used by younger children. While the service has received few reports from users of grooming, external expert organisations have highlighted that it is known to be used for grooming. It has been named in various police cases and in a prominent newspaper investigation about grooming. The provider concludes the service is high risk for grooming
I'm a little confused about this part. Does the Online Safety Act create personal liabilities for site operators (EDIT: to clarify: would a corporation not be sufficient protection)? Or are they referring to harassment they'd receive from disgruntled users?
Also, this is the first I've heard of Microcosm. It looks like some nice forum software and one I maybe would've considered for future projects. Shame to see it go.
> Senior accountability for safety. To ensure strict accountability, each provider should name a senior person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties.
This seems very plausible to me, given what they and other moderators have said about the lengths some people will go to online when they feel antagonised.
The UK has lots of regulatory bodies and they all work in broadly the same way. Provided you do the bare minimum to comply with the rules as defined in plain English by the regulator, you won't either be fined or personally liable. It's only companies that either repeatedly or maliciously fail to put basic measures in place that end up being prosecuted.
If someone starts maliciously uploading CSAM and reporting you, provided you can demonstrate you're taking whatever measures are recommended by Ofcom for the risk level of your business (e.g. deleting reported threads and reporting to police), you'll be absolutely fine. If anything, the regulators will likely prove to be quite toothless.
> provided you can demonstrate you're taking whatever measures are recommended by Ofcom
That level of moderation might not be remotely feasible for a sole operator. And yes, there's a legitimate social question here: Should we as a society permit sites/forums that cannot be moderated to that extent? But the point I'm trying to make is not whether the answer to that question is yes or no, it's that the consequences of this Act are that no sensible individual person or small group will now undertake the risk of running such a site.
The example of "medium risk" for CSAM urls is a site with 8M users that has actively had CSAM shared on it before multiple times, been told this by multiple international organisations and has no checking on the content. It's a medium risk of it happening again.
In the same way that you could be sued for anything, I'm sure you could also be dragged to court for things like that under this law... And probably under existing laws, too.
That doesn't mean you'll lose, though. It just means you're out some time and money and stress.
The risk and cost imbalance is much more extreme than that of a lawsuit.
I'm confident that, were I sufficiently motivated, I could upload a swathe of incriminating material to a website and cover my tracks within a couple of hours, doing damage that potentially costs the site operator £18M with no risk to myself -- not even my identity would be revealed. OTOH, starting a lawsuit at the very least requires me to pay for a lawyer's time, my face to appear in the court -- and if the suit is thrown out, I'll need to pay their court costs, too.
Heh, welcome to the internet where the perpetrator and the beneficiary can be in different jurisdictions that make enforcement on the original bad actors impossible.
For example, have a friend in China upload something terrible to a UK site and then 'drop the dime' to a regulator in the UK. The UK state can easily come after you and find it nearly impossible to go after the international actor.
Then again, maybe he's just burnt out from running these sites and this was the final straw. I can understand if he wants to pack it in after so long, and this is as good reason as any to call it a day.
Though, has no-one in that community offered to take over? Forums do change hands now and then.
As i have read it, no, it's worth a read to see for yourself though.
> it doesn't put that much of an onerous demand on forum operators.
It doesn't until it does, the issue is the massive amount of work needed to cover the "what if?".
It's not clear that it doesn't apply and so it will be abused, that's how the internet works, DMCA, youtube strikes, domain strikes etc.
> Then again, maybe he's just burnt out from running these sites and this was the final straw. I can understand if he wants to pack it in after so long, and this is as good reason as any to call it a day.
Possibly, worth asking.
> Though, has no-one in that community offered to take over? Forums do change hands now and then.
Someone else taking over doesn't remove the problem, though there might be someone willing to assume the risk.
Honestly, same could be said for this one, it reads less like an attempt at making the internet better and more like a technical sounding PR stunt with sneaky power encroachment thrown in.
"We just need you to uses your government ID to sign in because of the children, we have a long track record of competent execution, maintenance and accountability, we are 100% not going to use this for other ...reasons"
It's the same governmental "Trust me bro, think of the children" they always throw out.
Outside of the intelligence agencies, the UK government is absolutely diabolical at anything technical: chronically over budget (because the original budget was decided by someone in an office with no actual experience managing an IT project) on projects outsourced to corrupt friends who siphon the money away. Not just IT; all projects.
They pay atrociously for the level of skill required for the positions advertised, so they get middle of the road staff, which isn't a problem normally, middle of the road is the backbone of IT projects.
The problem arises when you get actively bad project management, either incompetence or outright maliciousness, plus some glacial, bureaucracy-laden processes that didn't work when they were drafted 40 years ago, let alone now...
and you get an entire industry of corruption and mediocrity.
/rant
anyway, i mean, sure you can take the lacklustre GDPR enforcement and use that to make decisions going forward, i wouldn't personally, because i don't think a single data point is a good basis for risk assessment.
DMCA, youtube copyright strikes, domain strikes, bank transaction complaints/chargebacks, all are mechanisms used to attack internet based businesses.
Do they serve a purpose? Debatable. Are they misused on a regular basis? Absolutely.
This isn't a "the sky is falling" this is a "They have put into law the ability to drop the sky on me just because they (the government, or disgruntled internet denizens) feel like it"
It's up to you to decide how likely you think that is and plan accordingly.
There was a story very recently about the whole of itch.io going down because of some overzealous rent-seeking bullshit middleman (hired by rent-seeking bullshit artist FunkoPop)
How much money should he spend on a lawyer to figure this out for him?
Would you be willing to risk personal liability for your interpretation of this law? Obviously I would not.
As the manager of a community where people meet in person, I understand where he is coming from. Acting like law enforcement puts one in a position to confront dangerous individuals without authority or weapons. It is literally life-endangering.
I have zero legal connection to the UK and their law doesn't mean jack to me. I look forward to thoroughly ignoring it, in the same way that I thoroughly ignore other dumb laws in other distant jurisdictions.
UK, look back on this as the day -- well, another day -- when you destroyed your local tech in favor of the rest of the world.
But they make a good point: if you exclude the smaller providers, that’s where the drugs and CSAM and the freewheeling dialog go. Assuming it’s their policy goal to deter these categories of speech, I’m not sure how you do that without a net fine enough to scoop up the 4chans of the world too.
It’s not the behavior of a confident, open, healthy society, though…
- Bad actors go everywhere now.
- £18 million fines seem like a fairly unhinged cannon to aim at small websites.
- A baseless accusation is enough to trigger a risk of life-changing fines. Bad actors don't just sell drugs and freewheel; they also falsely accuse.
CSAM is absolutely horrible.. but CSAM laws don't stop CSAM (primarily this happens from group defections).
Instead it's just a form of tarring, in this case unliked speech, by associating it with the most horrible thing anyone can think of.
These governments only want institutions to host web services. Their rules are openly hostile to individuals. One obvious benefit is much tighter control: having a few companies running large, registered sites gives the government control.
It is also pretty clear that the public at large does not care. Most people are completely unaffected and rarely venture outside of the large, regulated platforms.
It's more about "accepting and publishing arbitrary content".
But, in practice, how hard is it to host a website anonymously? Or off-shore?
Obviously it is trivial, but so is shoplifting.
Both are illegal and telling people to commit crimes is not helpful.
(Unless of course someone is resurrecting the site)
https://wiki.archiveteam.org/index.php/ArchiveBot https://wiki.archiveteam.org/index.php/Wikibot
> Read the regs and you can absolutely see how complying with them to allow for banana peeling could become prohibitively costly. But the debate of whether they are pro-fruit or anti-fruit misses the point. If daycares end up serving bags of chips instead of bananas, that’s the impact they’ve had. Maybe you could blame all sorts of folks for misinterpreting the regs, or applying them too strictly, or maybe you couldn’t. It doesn’t matter. This happens all the time in government, where policy makers and policy enforcers insist that the negative effects of the words they write don’t matter because that’s not how they intended them.
> I’m sorry, but they do matter. In fact, the impact – separate from the intent – is all that really matters.
[0] https://www.eatingpolicy.com/p/stop-telling-constituents-the...
>Every step that law takes down the enormous hierarchy of bureaucracy, the incentives for the public servants who operationalize it is to take a more literal, less flexible interpretation. By the time the daycare worker interacts with it, the effect of the law is often at odds with lawmakers’ intent.
Put another way, everyone in the chain is incentivized to be very risk averse when faced with a vague regulation, and this risk aversion can compound to reach absurd places.
I want to emphasize just how true this is, in case anyone thinks this is hyperbole.
I managed a pissant VBulletin forum, and moderated a pretty small subreddit. The former got me woken up at 2, 3, 4am with phone calls because someone got banned and was upset about it. The latter got me death threats from someone who lived in my neighborhood, knew approximately where I lived, and knew my full name. (Would they have gone beyond the tough-guy-words-online stage? Who knows. I didn't bother waiting to find out, and resigned as moderator immediately and publicly.)
I used to run a moderately sized forum for a few years. Death threats, legal threats, had faeces mailed to my house, someone found out where I worked and started making harassing calls/turning up at the office.
I don't run a forum no more. For what I feel are obvious reasons.
I home-hosted a minecraft server and was repeatedly DDoS'd. Don't underestimate disgruntled 10yo's.
what can we do about this creep up of totalitarian surveillance plutocracy?
sweet were the 1990s, with a dream of information access for all.
little did we know we were the information being accessed.
srry
very un-HN-y.. maybe it's just the time of the year but this really pulls me down currently.
Sometimes it's explicitly mentioned but oftentimes it's behind "appropriate and proportionate measures"
https://www.ofcom.org.uk/siteassets/resources/documents/onli...
It amounts to your basic terms of service. It means that you'll need to moderate your forums and prove that you have a policy for moderation (basically what all decent forums do anyway). The crucial thing is that you need to record that you've done it and reassessed it, and prove "you understand the 17 priority areas".
It's similar to what a trustee of a small charity is supposed to do each year for its due diligence.
I can't imagine one person running over 300 forums with 275,000 active users. That gives you an average of eight minutes a week to tend to the needs of each one.
I used to run a single forum with 50,000 active users, and even putting 20 hours a week into it, I still didn't give it everything it needed.
I know someone currently running a forum with about 20,000 active users and it's a full-time job for him.
I don't understand how it's possible for one person to run 300 forums well.
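The arithmetic behind that "eight minutes" figure works out if you assume a full 40-hour week devoted entirely to the forums (an assumption on my part, not stated above):

```python
# Back-of-the-envelope check of the "eight minutes a week per forum"
# figure, assuming a full 40-hour working week spent on the forums.
hours_per_week = 40
forums = 300
minutes_per_forum = hours_per_week * 60 / forums
print(minutes_per_forum)  # prints 8.0
```

And the real figure is lower still, since running the forums was an evenings-and-weekends volunteer effort alongside a day job.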
I think what he fears is he has no control on how these individual forums moderate their content and how liable he would be as the hosting admin.
It seems that some people are convinced that the benefits of having strangers interact with each other are not worth the costs. I certainly disagree.
If I designed a site for 14-year-old girls to sext with 30-year-old men it would be rightfully shut down.
If I designed a site as a fun chat site but in actual reality it became a sexting site for 14-year-old girls with adult men, should it be shut down?
The stories… people get really personally invested in their online arguments and have all sorts of bad behavior that stems from it.
I feel like the whole time this was being argued and passed, everyone in power just considered the internet to be the major social media sites and never considered that a single person or smaller group will run a site.
IMO I think you're going to get two groups of people emerging from this. One group will just shut down their sites to avoid running afoul of the rules, and the other group will go the "go fuck yourself" route and continue to host anonymously.
Does this shock you? I can't recall a time in memory when a politician discussing technology was anything but cringe at best, and completely incompetent and factually wrong at worst.
That would be insane, and it's not true. You have to consider the risks and impacts of your service, and scale is a key part of that.
I think it's really important around this to actually talk about what's in the requirements, and if you think something that has gone through this much stuff is truly insane (rather than just a set of tradeoffs you're on the other side of) then it's worth asking if you have understood it. Maybe you have and lots of other people are extremely stupid, or maybe your understanding of it is off - if it's important to you in any way it probably makes sense to check right?
There's only 13 provisions that apply to sites with less than 7 million users (10% of the UK population).
7 of those are basically having an inbox where people can make a complaint and there is a process to deal with complaints.
1 is having a 'report' button for users.
2 say you will provide a 'terms of service'
1 says you will remove accounts if you think they're run by terrorists.
The OP is blowing this out of proportion.
Very little legislation does.
Two things my clients have dealt with: VATMOSS and GDPR. The former was fixed with a much higher ceiling for compliance, but not before causing a lot of costs and lost revenue to small businesses. Under GDPR, a small business or non-profit that just keeps simple lists of people (customers, donors, members, parishioners, etc.) has to put effort into complying even though it holds a relatively small number of people's data and does not use it outside the organisation. The rules are the same as for a huge social network that buys and sells information about hundreds of millions of people.
I have no knowledge of your site, but I'm still sad to see it having to shut down.
The thing though is how to finance it and how to provide stewardship for the sites going forward.
Running sites like this post is about is not profitable. Nor is it too resource intensive.
Sad that LFGSS will probably become just another channel on one of the big platforms. RIP.
Having said that, thanks for all the work you have done. I was (and maybe still am) a member of lfgss although I mostly lurked once in a long while without logging in and barely commented over the years.
It is sad to see all online communities slowly migrate to discord, reddit and other walled gardens.
I've been wanting to play with remote modern terminals and Ratatui anyway.
Also depending on the terms agreed to when people signed on and started posting, it might be legally or morally difficult because transferring the data to the control of another party could be against the letter or the spirit of the terms users agreed to. Probably not, but I wouldn't want to wave such potential concerns off as “nah, it'll be fine” and hoping for the best.
Even leaving a read-only version up, so a new home could develop with the old content remaining for reference, isn't risk free: the virtual-swatting risk that people are concerned about with this regulation would be an issue for archived content as much as live stuff.
At least people have a full three months notice. Maybe in that time someone can come up with a transfer and continuation plan the current maintainer is happy with, if not the users at least have some time to try to move any social connectivity based around the site elsewhere.
So basically, is this act a ban on individual communication through undermoderated platforms?
The EU and UK have been making these anti-tech, anti-freedom moves for years. Nothing can be better if you are from the US. Just hoover up talent from their continent.
Even if US immigration were more liberal, moving is very costly (financially, emotionally, psychologically). Injustice anywhere is a threat to justice everywhere.
Is there an argument why we would want it any other way?
I don't know where you're seeing that as the site does not have such things. The only cookies present are essential and so nothing further was needed.
The site does not track you, sell your data, or otherwise treat you as a source of monetisation. Without such things, conforming with cookie laws is trivial... you are conformant by just connecting nothing that isn't essential to providing the service.
For most of the sites only a single cookie is set for the session, and for the few via cloudflare those cookies get set too.
I don't believe this kind of regulation will do anything but put the real criminals more underground while killing all these helpful community initiatives. It's just window dressing for electoral purposes.
so thanks for all that buro9! <3
Or another thought: distribute it only through a VPN. OpenVPN can be installed on mobiles these days (I have it installed on my Android). Make key creation part of the registration process.
UK sucks
Seems a bit megalomaniacal.
"I'm not interested in doing this any more. Therefore I'll shut it down for everyone"
This way, people have been given plenty of advance notice and can start their own forums somewhere else instead. I'm sure each of the 300 subforums already has some people running them, and they could do the above if they actually cared.
I find it hard to believe someone will take over 300 forums out of the goodness of their hearts and not start making it worse eventually, if not immediately.
Nonsense.
the liability is very high, and whilst I would perceive the risk to be low if it were based on how we moderate... the real risk is what happens when one moderates another person.
as I outlined, whether it's attempts to revoke the domain names with ICANN, or fake DMCA reports to hosting companies, or stalkers, or pizzas being ordered to your door, or being signed up to porn sites, or being DOX'd, or being bombarded with emails... all of this stuff has happened, and happens.
but the new risk is that there is nothing about the Online Safety Act or Ofcom's communication that gives me confidence that this cannot be weaponised against me, as the person who ultimately does the moderation and runs the site.
and that risk changes even more in the current culture war climate, given that I've come out, and that those attacks now take a personal aspect too.
the risk feels too high for me personally. it's, a lot.
I'm sorry, what precisely do you mean by this? The rules don't punish you for illegal content ending up on your site, so you can't have a user upload something then report it and you get in trouble.
A forum that isn't proactively monitored (approval before publishing) is in the "Multi-Risk service" category (see page 77 of that link), and the "kinds of illegal harm" include things as obvious as "users encountering CSAM" and as nebulous as "users encountering Hate".
Does no-one recall Slashdot and the https://en.wikipedia.org/wiki/Gay_Nigger_Association_of_Amer... trolls? Such activity would make the site owner liable under this law.
You might glibly reply that we should moderate, take it down, etc... but "we" is me... a single individual who likes to go hiking off-grid for a vacation and to look at stars at night. There are enough times when I could not respond in a timely way to moderate things.
This is what I mean by the Act providing a weapon to disgruntled users, trolls, those who have been moderated... a service providing user generated content in a user to user environment can trivially be weaponised, and it will be a very short amount of time before it happens.
Forum invasions by 4chan and others make this extremely obvious.
edit: removed unintentional deadnaming
Thank you to those who have tirelessly run these online communities for decades, I'm sorry we can't collectively elect lawmakers who are more educated about the real challenges online, and thoughtful on real ways to solve them.
My view is that this is not the way to do it, because these things exist:
- EU citizens living in non-EU countries (isn't GDPR supposed to apply to EU citizens worldwide?)
- EU citizens using VPN with exit node to/IP address spoofing a non-EU country
Either comply with GDPR or just don't exist, period.
I would argue the honorable thing to do in the event excess monies remain would be to donate it to a charity. Using it for personal ends, whatever the details, is wrong because that's not what the donations were for.