The groups of Twitter and Reddit moderators are far too small to ever represent the diversity of human thought. You may think the rules prevent harm today, but what happens if and when they encourage harm tomorrow? What if the rules turned against you? Wouldn't you want to be able to speak out?
This just feels like a rehash of the "think about the children" argument. We should not base human communication on the idea that some grown adult somehow somewhere could have such an adverse reaction to your content that they suffer serious mental or physical harm. Especially when said communication is hidden behind NSFW spoilers and other appropriate trigger warnings. Nobody could have possibly stumbled upon r/watchpeopledie and thought it was anything other than what it said it was.
For instance, can an ethical case be made for watching people die? Is there any benefit to be gained from this beyond the initial novelty factor? (A rhetorical question, just to note that the ethical debate precedes rule making.)
Every other social media giant operates the same way. To my knowledge no mainstream social network has ever polled its users for changes to its community policing model. Which is crazy, because in actual society we all have the right to vote, but online we are beholden to nameless moderators and provided no representation whatsoever. It's entirely up to chance whether your case gets seen by someone who would be sympathetic to you (assuming it even gets seen by a person instead of some glorified regex matcher posing as 'AI').
I know in tech we like to outsource lots of hard problems, but nobody should accept outsourcing their moral framework to Twitter, Inc. or Condé Nast.
I don't feel the need to make an ethical argument for or against HN
That's funny, I find the moderators stifle talking about the rules on Hacker News via shadowbans, post rate limiting, and other secret punishments...yet here you are.
When we ban an account, we don't shadowban it unless it is relatively new and shows signs of spamming, trolling, or being related to past abuses [1]. When an account has an established history, we say that we're banning it and why [2].
We rate limit accounts when they post too many low-quality comments too quickly and/or get involved in flamewars [3]. We're happy to take the rate limit off (and often do) when people give us reason to believe that they'll use the site as intended in the future. Emailing hn@ycombinator.com is the best way to do that.
Creating accounts to get around these restrictions is obviously a repetition of the original abuse and will get your main account banned as well if you keep doing it, so please don't do that.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
I'm on HN because dang and the others in charge of moderating repeatedly make good faith efforts to explain their moderation philosophy and keep the rules updated and visible.
If Twitter and Reddit moderators were as public as dang is, I would feel far more comfortable relying on them to make decisions for me.
Most things in the world outside of pure mathematics are subjective. If you refused to act whenever subjective decision making was involved, nothing would ever get done.
All around us, all day every day, we look at a problem, we take the best info and expertise available to us, and we make a judgement call.
The fact that almost always some level of subjectivity exists doesn’t mean we do nothing.