Obviously the context is different (particularly in the difference between a state and a private corporation, quasi-monopolistic or not).
The key comparison is this: Who decides what counts as misinformation? Who decides which information or disinformation is "dangerous"? The right consequentialist narrative can frame any idea as dangerous.
Obviously it's dangerous to advise people to drink bleach. But you can find millions who would make the same claim about vaccines (whether "the jab" or pre-COVID childhood vaccinations), or about ivermectin (while claims of its COVID efficacy are dubious, neither is it categorically unsafe for humans).
I'm not going to lose much sleep over YouTube attaching "The more you know..." links to controversial subjects, paternalism and "the backfire effect" notwithstanding. The concern is its algorithmically putting a thumb on the scale of discourse (with second-order self-censorship effects). Even if and when it is mostly right (COVID vaccines work; global warming is anthropogenic), that is a disturbing level of unaccountable power.