The whole point of democratic society in the United States is to allow people freedom of speech and thought. If we limit the discussion of ideas to only popular/accepted/politically-correct ones, then there is no chance to analyze ideas from a relative perspective; there is nothing to compare those ideas to. We must allow discourse, even at the extremes of the spectrum.
To quote various sources, "There can be no light without dark."
Yes, basically. OP apparently also thinks that what constitutes community is a central authority handing out badges.
the company should issue badges of status or of shame based on signals about how people actually use, or abuse, Twitter. In other words, Twitter should begin to think of itself, and its users, as a community
And believes that Facebook or Linkedin profiles are markers of online reputation.
These people (he's not alone) are watching their authority eroding and believe that it could be stopped with another status symbol. Doubtful.
Aside, which online communities has Anil Dash created?
> Aside, which online communities has Anil Dash created?
Stackoverflow? https://stackoverflow.com/company/management
If you don't like the way a social platform is heading, then leave it. You are not entitled to a safe space.
At the end of the day, Twitter is going to cater to its biggest and most influential users.
It costs you nothing to ignore the users who are saying things you don't like.
What??? There are people out there getting death threats, getting doxxed, being attacked with all manner of epithets on Twitter. It stands to reason that maybe the experience would be better for those people without that sort of content directed at them.
> If you don't like the way a social platform is heading, then leave it. You are not entitled to a safe space.
Yes. And as a business, Twitter has a vested interest in not letting their platform drive away swaths of users due to toxicity. Indeed, one of the key reasons Facebook was successful is that users have the ability to restrict communications to the people they want to.
Spoken like someone who's never been the victim of a targeted harassment campaign. It's a little hard to take that view when people are posting pictures of your children's school along with their death threats.
It's less "that idea is popular" and more "these people are inciting hatred against other people".
I'm baffled that people still repeat this "free speech is important" thing when we're talking about a private service giving an audience to white supremacists.
But indiscriminately providing services to everyone (within the laws of your country), without booting people based on a personal agenda, is the ABC of gaining the trust of your users.
Think about what you're saying: corporations have the power to prevent you from saying anything online. Twitter and Facebook can ban you for your political opinions. Fine, you say: get your own website. Well, we've seen that your registrar, CDN, and hosting providers will also kick you off if they find your speech sufficiently offensive (or if enough pressure comes down on them to do so). In the end, your ISP can do the same thing. Corporations own the commons we all use. Yes, the First Amendment only protects you from government censorship, but the societal principle of free speech is not so limited. If you cannot speak because every commons is owned by a private corporation, or because any speech can be banned when a mob threatens to protest you, you do not have free speech in any meaningful sense.
I don't believe the solution to our problems is to blithely conclude that corporations can do whatever they want, it's their platforms, and that anyway, it's about "hate" (who gets to define that?) We need to look very seriously about how communication has evolved since the 18th century and what "freedom of speech" means now, and how to settle it in a way that at least leads to democratically elected institutions having the power to restrict it, rather than corporate overlords.
I would also argue that repression of speech is a contributing factor to the rise of violence. As long as all speech is legitimized, there's a relief valve for despicable beliefs. The Nazi Party was born in an environment where political violence was already commonplace, and where nationalist rhetoric resulted in the nationalists being severely beaten. Consequently, only the most violent and committed people remained, who then organized their own violent mobs. Ultimately, the perception of being "repressed" by both mob violence and the state (the Nazi Party was later banned) only won support from people who sympathized with the "underdog" and radicalized supporters who felt that legitimate, nonviolent means would never be enough.
Problem is, they and other media companies (and often the users as well) have real trouble understanding the difference between racism (judging and treating people differently depending on who their parents were, etc.) and skepticism toward a recent immigration wave (do we want to relocate all these people right here, right now? Can we help more people by doing it differently?)
In my experience, they're much more dangerous to the well-being of society than the people they're trying to silence, as they're generally authoritarians of the worst sort without a shred of remorse because they're doing it for The Right Reasons(TM) and are sure of it.
But if I tolerate you speaking, someone I view as an immediate and visceral threat to my society, I suppose I can tolerate just about anyone using words.
It's a shame we'll likely come to violence because you can't show the same tolerance, and use words not force on people you disagree with.
Newspapers and TV are controlled, so you don't see too much of it. But any public internet site cannot avoid this without heavy moderation, which at Twitter, YouTube, or Facebook scale is not an easy problem to solve.
The government supports free speech because it's important and a Good Thing. Just because private companies aren't required to do the same doesn't make it any less important and a Good Thing.
Tolerance is generally a useful tool for social discourse, but when your "peers" are actively working to recruit and organize a movement to silence/deport/kill your friends it's generally not useful to tolerate that behavior.
Tolerance only works when you have at least the smallest and most fundamental amount of social cohesion.
As for illegals, every major country in the world deports illegal immigrants, even good old Canada, to the chagrin of many who thought they'd walk over and be granted immigration status without any further effort.
In the end it's still taking political sides.
IMO that solves most of the problems - allowing unbiased freedom of speech/communication, while still preserving basic decency for the majority of people. In addition, you'd have a capitalist market/competition for ideas - if the curator of a filter becomes untrustworthy (starts abusing their "power" by "censoring" too many voices), the filter could simply be forked, improved, and users would migrate to something better!
Edit: In addition, this would also solve all kinds of legal issues - you could simply make per-country filters that users connecting from a given country would automatically be exposed to, but all such government-imposed censorship would be implemented in a very transparent manner!
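The forkable-filter idea above can be sketched in a few lines. This is purely illustrative: the names (`FilterList`, `is_visible`) are made up for the example and are not any real Twitter or third-party API.

```python
# Hypothetical sketch of user-curated, forkable block filters.
# No real platform API is used; all names here are illustrative.

class FilterList:
    """A curated set of blocked accounts that other users can subscribe to."""

    def __init__(self, curator, blocked=None):
        self.curator = curator
        self.blocked = set(blocked or [])

    def block(self, user):
        self.blocked.add(user)

    def fork(self, new_curator):
        # Anyone can copy the list and curate their own version of it.
        return FilterList(new_curator, self.blocked)


def is_visible(author, subscriptions):
    """A tweet is shown only if no subscribed filter blocks its author."""
    return all(author not in f.blocked for f in subscriptions)


base = FilterList("curator_a", {"spam_bot", "troll42"})
my_fork = base.fork("me")
my_fork.blocked.discard("troll42")  # disagree with one entry? un-block it

print(is_visible("troll42", [base]))     # False: blocked by the original list
print(is_visible("troll42", [my_fork]))  # True: visible under the fork
```

The key property is that subscribing is opt-in per user, so "censorship" by a bad curator is routed around by forking rather than by appealing to a central authority.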
https://support.twitter.com/articles/20169222#
https://motherboard.vice.com/en_us/article/3kz57j/this-is-ho...
User-curated blocklists were a tool deployed by both sides in Gamergate. https://blog.twitter.com/official/en_us/a/2015/sharing-block...
And creates echo chambers.
That's part of the problem with Facebook, once you start posting and interacting with certain types of content you get the same type of content more often. Leaving out the opposing viewpoints and giving you the impression that everyone agrees with you.
Air, it's time to end your anything-goes paradise. Your medium allows murderers, rapists and my crotchety old neighbor to create vibrations that will reach the ears of others.
Why are we allowing these criminals to vibrate molecules of air? We should be ashamed of ourselves.
I'm happy to announce Airfilter. A new device that uses machine learning-based active noise cancellation (and a blockchain reputation system) to selectively remove offensive vibrations from the entire atmosphere of the earth.
We're hiring!
So someone is free to post their racist and misogynist memes but someone who doesn't wish to see that content just wouldn't see it.
Yep, because these are easily verifiable parameters that don't depend in any way on the ethical beliefs and political alignment of whoever rates users.
This would only be a fair standard if also applied to Communists. In terms of sheer historical body count, there is no rival.
And then, if you did draw ideological lines in the sand, who sets that line? Whom does it encompass? Does it apply to religions? Do we exclude Christians and Muslims? Do we exclude anybody who ever said anything nice about Genghis Khan or the Achaemenid Immortals?
Establishing standards of tolerance is one thing, but defining where the lines are can be difficult, and possibly embroil anyone.
NYT, it's time to end your "editorials are news" dystopia.
Does Twitter want to be a dumb pipe where it doesn't do anything at all about what people post? If that is the case, then yeah, don't do anything about bots, trolls, harassment/abuse, etc.
Is Twitter actually an online community? If that is what they want it to be, then it will need at least some mechanisms to deal with toxic users.
Maybe that's just giving users robust tools for filtering what they want to see. Maybe you could use sentiment analysis to give people the ability to block certain types of tweets from reaching them. And indeed, they already have some of that with the 'sensitive content' setting on images and video.
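To make the sentiment-filtering suggestion concrete, here is a toy sketch. This is not Twitter's actual system: real deployments would use a trained classifier, while this example scores tweets against a small assumed lexicon (`HOSTILE_TERMS` is invented for illustration).

```python
# Toy lexicon-based filter, illustrating the idea of letting users block
# certain types of tweets by score. Not a real sentiment model.

HOSTILE_TERMS = {"idiot", "kill yourself", "scum"}  # assumed example lexicon


def hostility_score(tweet: str) -> float:
    """Fraction of lexicon terms found in the tweet (0.0 = none matched)."""
    text = tweet.lower()
    hits = sum(term in text for term in HOSTILE_TERMS)
    return hits / len(HOSTILE_TERMS)


def filter_timeline(tweets, threshold=0.3):
    """Keep only tweets scoring below the user's chosen threshold."""
    return [t for t in tweets if hostility_score(t) < threshold]


timeline = ["Nice talk today!", "You absolute idiot, scum like you..."]
print(filter_timeline(timeline))  # only the first tweet survives
```

The important design point is that the threshold belongs to the reader, not the platform: nothing is deleted globally, it just isn't delivered to people who opted out of seeing it.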
The author here wants to go further into a reputation system, which may or may not be the right choice for Twitter. That seems to be a harder thing to do technically, but I'm pretty sure they could come up with something that tones down the impact of the worst abuse.
At the end of the day, if a general public social media platform becomes too toxic for too many users, it probably won't survive. And that's not even getting into the real world consequences that we're starting to see from foreign intelligence services attempting to manipulate them.
This writer does realize that President Obama was the first president to consistently use Twitter, right? Of course President Trump took it to another level, but the author is insinuating that President Trump is using Twitter to sway "small minded" Americans while ignoring the fact that President Obama did the exact same thing on a smaller scale.
If Twitter has become a place where users spend more time complaining about the platform rather than actually using the platform, they should quit. Not because I have some utopian capitalist view where each user can just go off and build their own Twitter clone or because of some "don't like it, leave" mentality. I'm saying this because I believe it's conducive to their mental health.
Otherwise, people become quite cynical and make choices they wouldn't normally make - as in limiting free speech for the sake of corporate gains.
Or rather, a platform for nothing. RIP Twitter.
If you do allow responses/conversations, allow them only between users who follow one another. But maybe just don't allow them at all.
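The mutual-follow reply rule above is a one-line check. This sketch uses a plain dict as the follow graph; the names are illustrative, not a real platform API.

```python
# Sketch of gating replies on mutual follows. The follow graph is a
# plain dict here; a real system would query a social-graph service.

follows = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": set(),
}


def can_reply(replier: str, author: str) -> bool:
    """Allow a reply only when both users follow each other."""
    return (author in follows.get(replier, set())
            and replier in follows.get(author, set()))


print(can_reply("bob", "alice"))    # True: they follow each other
print(can_reply("carol", "alice"))  # False: carol doesn't follow alice
```

The trade-off is obvious: this kills drive-by harassment from strangers, but it also kills the serendipitous public conversation Twitter is built on, which is probably why the commenter suggests it mostly for a hypothetical alternative platform.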
[EDIT] actually, if anyone's thinking about doing this and wants to chat... email's in my profile.
I'm confused, I see stories complaining about Twitter's heavy-handed censorship all the time, whether it's banning accounts or simply removing objectionable hashtags from the "Trending" list.
When did they suddenly develop the opposite problem?
There are two stories you hear:
1. Twitter killed the account of normal person X for no reason (or for arguing with someone who deserves to have their account killed but doesn't)
2. Twitter, for once, killed the account of someone who really deserved it and all their followers/ideological believers scream CENSORSHIP LIBERAL MEDIA at the top of their lungs