The short answer is it's complicated.
The longer answer is...
There are two subsections we care about. The first, 230(c)(1), says platforms are not to be treated as the publishers of third-party content, so they aren't civilly liable for it. That means they don't HAVE to moderate.
But that leaves a problem. Under the pre-230 case law, as soon as you start moderating, you become a publisher whether you like it or not.
So there is ALSO 230(c)(2):
>Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
That lets platforms moderate and censor if they want to AND STILL NOT GET SUED when their moderation is imperfect.
Platforms probably have the (c)(1) protection anyway; courts ruled to that effect before 230 was enacted (e.g. Cubby v. CompuServe, which treated an unmoderated service as a distributor rather than a publisher). Section 230 is the surviving remains of a bigger law, the Communications Decency Act, most of which was struck down as unconstitutional because it didn't respect free speech enough.
So without Section 230, platforms have two options: no moderation at all, OR moderate and eat a tonne of liability.
Section 230 lets them have the best of both worlds: remove things AND avoid being held responsible for the things they didn't remove.
I should have been clearer that it was (c)(2) I was referring to.
If (c)(2) were repealed, most platforms would have to stop all moderation. HN as we know it would be gone.
Messy enough?
In theory we could repeal the whole thing, but that has much the same effect as repealing just (c)(2): companies shut down their user-created content or get sued into oblivion by any lawsuit-happy citizen.
You can see how easily this turns into political talking points and misunderstandings: Section 230 is both critical to free speech and against free speech, it both restrains companies and enables them, depending on the agenda of the speaker...