It's an area where there are no existing laws. We're not going to stop AI because some furry deviant art artist complains loudly online.
Whether or not one believes those laws apply to generative AI seems to depend on how similar one believes the AI software is to a human.
I'd argue that systematically ingesting 2.3 billion images is not remotely human (one of a myriad of reasons the comparisons break down), and that it is a long stretch to claim that this falls into the realm of fair use as originally envisioned.
It is this insistence that the software is human enough to be granted human-like status that plays fast and loose with definitions, from consciousness, to learning, to how those concepts map onto current law.
I believe new laws will be written, and old laws will be updated. There's no question that the current legal system is not well equipped for various generative AI systems. But I don't think the current laws have nothing to say.
And I'd still argue that this conversation can be separated from the one about indiscriminately slurping up artists' content.
> We're not going to stop AI because some furry deviant art artist complains loudly online.
Please don't argue against straw men. There are legitimate concerns from artists across disciplines and genres, and this isn't just isolated shrieking.
Artist backlash is frankly one of the most natural outcomes I could imagine from a system that uses their work without permission. Many of the people complaining loudly are not against AI, just against the use of their work without consent or attribution.
I'm both extremely excited about the possibilities the software unlocks and concerned about the implications. AI can exist without ignoring the rights of artists.
There's an implicit assumption that if you can get hold of a copy and manage to learn from it, you are free to use what you learned in your own creations.
But there are explicit laws about how you acquire copies of things, and whether those laws apply seems to depend on what someone believes "learning" to be.
Your claim relies on the belief that a computer ingesting images is similar to a human learning from those images.