It's unclear to me what you're suggesting.
What's happening is that people conflate "not suppressing speech" with "supporting hate speech." These platforms don't support anything; at this point they're nothing more than a public space maintained by a private entity. Private entities, I'll add, that are essentially subsidized by taxpayers because they don't meet their obligation to pay their taxes, and no, I'm not talking about merely minimizing tax obligations[0].
[0] - https://www.theverge.com/2020/2/19/21144291/facebook-irs-law...
And nobody ever said "no matter how objectionable." We still have limits on obscenity.
Primarily, you are arguing for compelling a corporation to expend effort and expense to carry content deemed objectionable by that company's officers, and I'm not sure you've thought that through. You might think politically-[whatever] voices are being silenced (absent evidence), but if the US government decides to violate hundreds of years of established norms and forces websites to stop censoring content, it won't be just content you favor that explodes. And if the US government can force Facebook and Twitter to carry content they don't like, then what legal standard separates them from, for example, Hacker News?
If a company wants to editorialize its content and moderate when it deems fit, then it specifically should lose its protections under Section 230.
I'm not sure how legislation would define the requirements for something like this. What metrics would signal that a platform has to comply with the regulation? (MAUs? Revenue? Surely having moderation at all, but should certain kinds of moderation, like removing botspam, count?) Should a provider be able to silently set up default lists for new users, or should users be prompted? Should the company be required to break blocklists down by topic (spam/pornography/depictions of violence/a goose with a campfire in the background), and if so, how should the boundaries between topics be defined? What kind of functionality should be considered the bare minimum for a framework for building community-curated blocklists?
Mind you, I'm still not sure regulation is the way to go. But the truth is that these platforms wield a scary amount of power over people's psyches, to an extent that would have made the Stasi drool. They also have a far greater ability to corrupt democracy than biased TV channels and newspapers: it's no longer Fox News or CNN presenting you something, it's your own acquaintances, some of whose voices get artificially boosted while others are silenced, literally distorting a huge share of voters' grasp on reality.