It gets worse. I actually came here to say pretty much the same thing — happy you did.
Before I get into that, I do want to counterbalance it by saying that I agree with Stallman in that specific context. The context being key. The question is whether a 17yo can morally consent to having sex (not legally). Unless one truly believes that a switch flips the day you turn 18, you’ll be hard-pressed to form an argument that a 20yo having sex with a 17.99yo is not merely illegal, but also immoral.
But the problem of course is that Stallman has a room temperature social IQ. I hate to phrase it so bluntly, because it’s like staring into my future when I’m 60. I bet I’ll run my mouth one too many times and get into some hot water, and it’s all too easy to take potshots at someone who’s willing to risk saying controversial-but-true things.
And yet.
> It seems that [Stallman’s] general points are two.
> 1. …
> 2. And that, depending on the context, what is technically child pornography didn't cause anyone harm.
>> This child pornography might be a photo of yourself or your lover that the two of you shared.
> He then says that the mere possession of child pornography does not harm anyone. I assume he is implying that only the production of child pornography harms people.
>> But even when it is uncontroversial to call the subject depicted a child, that is no excuse for censorship. Having a photo or drawing does not hurt anyone, and if you or I think it is disgusting, that is no excuse for censorship.
> I would disagree with that, but it's his personal blog. He is not speaking for MIT or the free software foundation.
“Disagree” is a rather benign term for the emotions one tends to feel at that sentiment.
It’s an interesting experience to consciously try to override one’s own tendency to raise a “conversational exception”. (From http://paulgraham.com/heresy.html, “Using such labels is the conversational equivalent of signalling an exception. That's one of the reasons they're used: to end a discussion.”)
Yes, it’s technically true that possessing CSAM does not physically harm anyone. And the implications of that are worth pausing to consider. For example, I’ve spent a lot of time considering whether AI models should be allowed to generate CSAM. After all, isn’t it tempting to think you could train one Final Model on Forbidden Content, with the justification of “See, now no one needs to go and produce this training data anymore; we have an endless supply that harms nobody”? It’s at least worth considering, if only to disagree with it.
Then you get into the really weird questions. If you train a model to produce loli hentai porn (which 4chan is actually doing), is it morally reprehensible? (Turns out, it’s often illegal.) After all, it’s just drawings. No real people were even involved. How do you even argue against that from a moral standpoint?
Yet all of this context seems completely lost on Stallman, who spends two seconds typing two paragraphs and clicks “reply” without pausing to consider how it might sound. It’s so frustrating to see someone bring up so many valid points in a way that’s not merely tone-deaf, but that seems to exist in a universe absent of undertones and nuance.
At times like these, I like to reread http://paulgraham.com/say.html:
> If you said them all you'd have no time left for your real work. You'd have to turn into Noam Chomsky.
I despise AI safety filters, but they exist because we can’t afford to ship a Richard Stallman model out to the real world and have it generate an endless list of reasons why child porn doesn’t hurt anybody. We’d have no time left to do actual research, because we’d have to spend it all defending the decision to release such a model.