That strikes me as dodging the question of AI extortion, which is mostly orthogonal to the question of simulations and eternal torture. Just imagine that the AI might appear within your lifetime and be strong enough to physically find you and punish you. It could easily figure out after the fact that you didn't donate to its creation, and it could come up with a very unpleasant punishment that doesn't require any advanced tech. That's the heart of Roko's basilisk argument, and I think it's interesting even if you don't mention AI at all. It's basically a decision theory puzzle.