> Not to totally sidetrack the discussion, but what are some of the things that you strongly disagree with on SSC?
Most things I disagree with SSC on are general rationalist beliefs that may also be found in places like LessWrong. These views are usually expressed less directly, and sometimes only in comments.
For example, SSC and rationalists in general attribute very high value to IQ. SSC has some posts relating to ability, genetics, and growth mindset that I find very good:
http://slatestarcodex.com/2015/01/31/the-parable-of-the-tale...
http://slatestarcodex.com/2015/04/08/no-clarity-around-growt...
But, while I mostly agree with both of those posts, the continual claim that IQ is the best thing since sliced bread, that it's everything, correlates with everything, and is necessary to reach certain heights, is something I find more dogmatic than rational. I think the IQ-is-everything model is too simplistic and rather self-fulfilling; if you have a lot of patience, you can extract my position on ability development from this old post: https://news.ycombinator.com/item?id=12617007
> And, by the way, I had not considered the Moloch article as a direct re-framing of a problem until you put it as such.
To be fair, I'm not sure Scott Alexander meant it that way. There was a related post featuring the Goddess of Cancer, where I think the reframing was mentioned. But I already believe that Moloch is a manifestation of a wider process, so the problem of explaining to someone how a blind process can have so much power is not new to me.
> The biggest issue with the to-Moloch transform is that the conversion process is obviously going to be significantly more noisy and provide the author copious amounts of wiggle-room to steer the reader towards their own conclusions.
I don't know that it really introduces significantly more noise than anything else. We're already surrounded by so much noise, much of which, I would argue, comes from the aforementioned process itself, that we need better means than simply hoping a given transformation was accurate. I.e., can we make predictions from the concept of Moloch? It looks to me that we can.
Generally, information needs to be routed to the right subsystems. Humans have a few subsystems that are really good at identifying an adversary or assigning blame, but they don't have any good subsystems for examining the situation itself unless they're already above it, nor can they assign blame to the situation, because they perceive it as neutral and inert. I would say the extreme informational loss from the inability to process the effects of systems and situations is so much larger than the added noise that the transformation absolutely needs to be done.