If someone is a racist bigot, they shouldn't be physically restrained (deleting posts is like physically covering someone's mouth) from being a bigot, but they should definitely be known for it. Then it's up to the community to decide how to interact with those people. That's how we do it in real life, and it works pretty well.
Another thing is amplification: people pretending to be multiple people. This is also an issue; it gives a wrong impression about the state of society and must be solved.
Lastly, we need some kind of spread management. We have the problem of BS getting huge traction and the correction getting no traction. Maybe everyone exposed to something should be re-exposed to the theme once there's a new development. For example, when people share someone's photo as a suspect and it turns out that the person in the photo is not the suspect, the platform can say "remember this Tweet? Yeah, there are some doubts about it. Just letting you know". Implementing this wouldn't need a ministry of truth, just an algorithm to track theme developments.
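To make that concrete, here's a minimal sketch (in Python, with entirely hypothetical names: `ThemeTracker`, `record_exposure`, `add_development`) of what such tracking could look like, assuming the platform can tag posts with a theme ID and log which users saw each post. How themes get detected and linked is the hard part and isn't shown here.

```python
# Hypothetical sketch of "re-expose on new developments", not any real Twitter API.
from collections import defaultdict


class ThemeTracker:
    def __init__(self):
        # theme_id -> set of user_ids who saw any post on that theme
        self.exposed = defaultdict(set)

    def record_exposure(self, theme_id: str, user_id: str) -> None:
        """Called whenever a user's feed shows a post tagged with theme_id."""
        self.exposed[theme_id].add(user_id)

    def add_development(self, theme_id: str, update: str) -> list[tuple[str, str]]:
        """A correction or update landed on this theme; queue a notice for
        everyone previously exposed. Returns (user_id, message) pairs."""
        message = f"Remember that post? There's a new development: {update}"
        return [(user, message) for user in self.exposed[theme_id]]


# Usage: users see the original (wrong) suspect photo, then a correction lands.
tracker = ThemeTracker()
tracker.record_exposure("suspect-photo-123", "alice")
tracker.record_exposure("suspect-photo-123", "bob")
for user, note in tracker.add_development(
    "suspect-photo-123", "the person pictured is not the suspect"
):
    print(user, "->", note)
```

The point is that no one has to rule on what's true: the platform only routes later developments on a theme back to the audience that saw the original.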
IMHO, if Musk manages to solve these few problems, which I think he can, a free-speech social media platform is possible.
> Then it's up to the community to decide how to interact with those people.
Twitter is a private company, and it chooses to run its service how it wants. The government avoids physically restraining racist bigots and lets the community decide how to deal and interact with those people. Some may choose to harbor them (Parler, 4chan, etc.), and others (like Twitter) may opt not to host them.
It's not a huge social injustice if you're not allowed to tweet. Feel free to go to one of the millions of other websites, or start your own (it's easier to do than ever!), and see who's interested in what you have to say.
> Maybe everyone exposed to something should be re-exposed to the theme once there's a new development.
You're just reinventing content moderation!
And no, attaching follow-ups to organic content is not moderation.
This allows people to evolve and to not be beholden to something they said/thought a decade ago and no longer think.
This doesn't work. Show people two articles, one false and one true, and most will say the one that aligns with their priors is true. We need to either teach people to recognize fake news, censor fake news, or accept that basically everyone will believe false propaganda. There are no other options. Once someone has been shown an article they agree with, telling them the article was false just leads them to think you're on "the other side".
Do you have cause to believe that repeated exposure to every side of every story won't lead the average person towards truth?
We're fresh off the heels of the Hunter Biden story; surely that should be a wake-up call regarding how "misinformation experts" can cut both ways.
For a free speech absolutist, curtailing this could also be seen as removing free speech.
> Lastly, we need some kind of spread management. We have the problem of BS getting huge traction and the correction getting no traction. Maybe everyone exposed to something should be re-exposed to the theme once there's a new development. For example, when people share someone's photo as a suspect and it turns out that the person in the photo is not the suspect, the platform can say "remember this Tweet? Yeah, there are some doubts about it. Just letting you know". Implementing this wouldn't need a ministry of truth, just an algorithm to track theme developments.
Still, this wouldn't solve the spread of BS, especially targeted BS: it is tailored to invoke and reinforce inherent biases, and on average, someone exposed to it will become less inclined to read or critically judge any rebuttal. Bullshit spreads much more easily than well-researched rebuttals, just by the nature of bullshit. It's a game where truth is bound to lose: no matter how many "algorithms" you implement to spread developments of a story back to the same audience, that audience's engagement with the rebuttals will vary depending on their biases. I'm not even counting the inherent drive and energy required to actually follow up, as an audience, on further developments; in the fast-paced world of social media, people selectively choose what to invest their energy into. Someone falling for bullshit won't want their effort to be thrown out by rebuttals and so will avoid such activities, perceiving them as a waste of energy; once you've formed an opinion, it's much harder to un-form it.
I'm firmly in the camp that absolute free speech on social media is a fool's errand, at least in 2022. There is no upside that justifies the massive downsides we already see and experience, even without absolute free speech.
The detachment on social media between the written words and the real humans behind those words causes a non-insignificant amount of grief that wouldn't happen in an in-person interaction. It seems that we humans easily lose our humanity when not in a real-world social environment: the vileness is exaggerated while empathy is easily pushed aside.
Jerks and BS artists are nothing new, but in the real world we do have some tools to deal with them. IMHO, changing how some things work can create an atmosphere of healthy interactions.
What I wonder is what Musk will do if he finds out the scales are artificially weighted towards conservative content, i.e. if conservative content is artificially boosted by bots and algorithms. Facebook was much more liberal before thumbs were put on the scale. I don't remember exactly when, but I think it was Mother Jones that saw huge traffic changes after algorithm changes about a decade ago?
https://www.theverge.com/2020/10/17/21520634/facebook-report...
Like what if the natural state of humanity is much more liberal than the American media and social media allow for? Will Musk allow that or will he see anything that doesn’t align with his views as error or manipulation?
What if a truly free and transparent self-moderating platform naturally promotes leftism more than a moderated but manipulated feed does?
> Results show that in the context of online firestorms, non-anonymous individuals are more aggressive compared to anonymous individuals. This effect is reinforced if selective incentives are present and if aggressors are intrinsically motivated.
https://journals.plos.org/plosone/article?id=10.1371/journal...
See, because we don't say everything that comes to mind, we are able to interact in a civil manner with people who may hold any kind of opinion. In real life, I'm sometimes shocked to discover that someone is a total bigot.
However, when civility is established, we can discuss these ideas too, and instead of these people being toxic, the ideas can be expressed and debated in a civil manner. Maybe they have a point sometimes? If they do, it can be duly noted, and if they don't, they will be exposed to the counterarguments. Also, when ideas are expressed civilly, people don't immediately label others as "bigots" or "racists", and they accept the nuances. In fact, some prominent right-wing people are doing that, people like Jordan Peterson. Because the guy is civil, he is effective, and it's up to the rest to contradict his claims in an equally civil manner.
So yes, it is alright to have some self-restraint and think before you speak. It's definitely much better than suppressing speech.
edit: the comment I responded to was a bit different; I guess the OP added more thoughts.