I was once attracted to it, and to EA, because of the surface-level values. It seemed like a community that wanted to acknowledge its own biases and work past them; in practice, it is a community that uses intellectualism as an aesthetic to confirm its preconceived biases. In its malignant form, you have LW and adjacent communities engaging in a scientific-racism revival. In its less malignant form, you have people working backwards and pretending that the conclusions they arrived at were "objective" because of the flowery Bayesian language they dressed their thought experiments in, as if they were constantly doing complex Bayesian inference in their heads. In the end, what struck me was the lack of humility you'd expect from people who agreed with the LW premise.
Similarly with EA: I liked the idea of optimizing charity for the most good, but in practice the community seems to have no problem dedicating enormous amounts of money, time, and effort to MIRI and adjacent groups and people, because they've used their intellectual aesthetics to spook themselves into believing that science fiction is reality. As a result, very real problems people experience today are discounted in favor of whatever scary future-AI meme is spooking the community this month.
It's really kind of funny when you think about it; it's just a shame that they suck up so much of the oxygen in the room.