> asking if adversarial AI tools count as DRM or as malware
Neither. Nightshade is not DRM or malware; it's "lying" about the contents of an image.
Arguably, Nightshade does not corrupt or disable the model at all. It feeds the model bad data that leads it to form incorrect conclusions or patterns about how to generate images. That's assuming it works at all, which we'll have to wait and see; I'm not taking that as a given.
But the only "corruption" happening here is that the model is being fed data that it "trusts" without verifying that what the data is "telling" it is correct. It's not disabling the model or crashing it; the model is forming incorrect conclusions and patterns about how to generate the image. If Google Translate asked you to rate its performance on a task, and you gave it a rating that didn't match what you actually thought of its performance, is that DRM? Malware? Have you disabled Google Translate by giving it bad feedback?
I don't think framing this as either DRM or malware is correct. This is bad training data. Assuming Nightshade works, it works *because* it's bad training data -- that's why ingesting one or two images doesn't affect models but ingesting a lot of images does: training a model on bad data degrades its performance if and only if there is enough of that bad data. So what we're really talking about here is not a question of DRM or malware; it's a question of whether artists have a legal obligation to make their data useful for training -- and of course they don't. The implications of saying they did would be enormous: it would imply that any time you knowingly lied in answer to a question that was being fed into an AI training set, you were breaking the law.
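To make the "enough bad data" point concrete, here's a toy sketch of ordinary label poisoning on a scikit-learn classifier (not Nightshade's actual perturbation technique; the dataset, fractions, and model choice are all just made up for illustration). Accuracy barely moves when a handful of labels are "lies," and only falls apart once the poisoned fraction gets large:

```python
# Toy illustration (not Nightshade): flip the labels on a growing fraction
# of the training set and watch when test accuracy actually starts to drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_frac in [0.0, 0.001, 0.01, 0.1, 0.3, 0.5]:
    y_poisoned = y_train.copy()
    n_poison = int(poison_frac * len(y_train))
    # pick a random subset of training examples and "lie" about their labels
    idx = rng.choice(len(y_train), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poisoned {poison_frac:6.1%} -> test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```

A model trained with a few flipped labels is essentially indistinguishable from the clean one, because the signal from the good data swamps the lies; only at large fractions does accuracy collapse. That's the same dynamic as one or two Nightshaded images in a scrape of millions versus a whole lot of them.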