Freedom of speech is often, and rightly, characterized as a core American principle; it's emphasized in civic education, and most of the country will, if anything, overstate what it actually allows. Generally, though, I think the common interpretation holds: the default is that people can say what they want, and courts have carved out specific exceptions over the centuries (libel, public endangerment, etc.). In the history of these laws, every example I know of started out presumptively legal, and only in specific cases were those scenarios deemed bad enough to outlaw.
In recent years, social media has enabled increasing amounts of misinformation that is hard to track down, and there is now growing debate about how to combat it. I think there are two parts to this question:
- Does this misinformation (or how much of it) warrant a legal response? Put another way, in the context of social media, which depending on platform and settings might not even be fully public, what makes something libelous enough, or dangerous enough to the public, to require legal action against its perpetrators? Explicitly calling for a lynch mob against someone probably breaches current laws, but claiming that Trump should have won the 2020 election probably doesn't (even if the person saying it knows it's false; lying isn't normally a crime!).
- In an online world, how do we enforce these laws? Social media is often anonymous. Should public profiles be required to have verified contact information? How can we track and police international actors? Does liking a criminal post count as a crime? What about a retweet to millions of followers? Given these challenges, there is a push to have platforms take a role in this enforcement, whether through account verification, removal of potentially criminal speech, or other methods.
Both of these questions are unsettled. Most people probably aren't thinking much about the first one, and the courts will largely hash it out over time. The second is what draws more public debate.
Personally, I'd say the American enthusiasm for free speech, and wariness of business regulation more generally, make it unlikely that the US will take significant action here, particularly since the big platforms themselves are clearly putting a lot of effort into addressing these problems. If Europe creates a legal framework around platform responsibility, the US might follow, but otherwise it will probably let the platforms keep working at it. That's just my guess, though!