If I can hire an employee who draws on knowledge they learned from copyrighted textbooks, why can't I hire an AI which draws on knowledge it learned from copyrighted textbooks? What makes that argument "wacky" in your eyes?
Unlike a person, a large language model is a product built and sold by a company. While I am not a lawyer, I believe much of the copyright argument around LLM training revolves around the idea that copyrighted content should be licensed by the company training the LLM. In much the same way that people are not allowed to scrape the content of the New York Times website and then pass it off as their own content, so should OpenAI be barred from scraping the New York Times website to train ChatGPT and then selling the service without sending some dollars back to the New York Times.
You're either going to get: it's a technological, infinitely scalable process, and the training data should be treated as what it is, which is intellectual property that should be licensed before it is used.
...or... It actually is the same as human learning, and it's time we started loading these things up with the other baggage that attaches to persons, if we're going to accept that it's possible for a machine to learn like a human.
There isn't a reasonable middle ground, given the magnitude of social disruption that a chattel, quasi-human, technological replacement for people would cause.
Can you help me understand the term "chattel" as you used it? I had never heard the term before I read your post, and I had to Google it: <<
(in general use) a personal possession.
(in law) an item of property other than freehold land, including tangible goods (chattels personal) and leasehold interests (chattels real). >>
The other is a person learning from a copyrighted textbook in the legally protected manner, and for whose use the textbook was written.
"Can you elaborate on how it's not comparable?"
The process of individual people interacting with their culture is vastly different from the process used to train large language models. In what ways do you think these processes have anything in common?
"It seems obvious to me that it is -- they both learn and then create -- so what's the difference?"
This doesn't seem obvious to me (obviously)! Maybe you can argue that an LLM "learns" during training, but that ceases once training is complete. For sure, there are work-arounds that meet certain goals (RAG, fine-tuning); maybe your already vague definition of "learning" could be stretched to include these? Still, comparing this to how people learn is pretty far-fetched. AFAICT, there's no literature supporting the view that there's any commonality here; if you have some I would be very interested to read it. :-)
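To make the RAG point concrete, here's a minimal toy sketch in Python (the documents, function names, and keyword-matching retriever are all hypothetical stand-ins; a real system would use an embedding index). Note that nothing in it updates the model's weights; the retrieved text is just pasted into the prompt at query time, which is why I'd hesitate to call it "learning":

    import string

    # Toy corpus; hypothetical stand-in for a real document store.
    documents = [
        "The quarterly report was filed in October.",
        "Chattel: in law, an item of property other than freehold land.",
    ]

    def tokens(text: str) -> set[str]:
        """Lowercase and strip punctuation so 'law?' matches 'law,'."""
        table = str.maketrans("", "", string.punctuation)
        return set(text.lower().translate(table).split())

    def retrieve(query: str, docs: list[str]) -> str:
        """Return the document sharing the most words with the query."""
        q = tokens(query)
        return max(docs, key=lambda d: len(q & tokens(d)))

    def build_prompt(query: str) -> str:
        """Assemble the prompt for a frozen model. No weights change;
        the 'new knowledge' exists only inside this prompt string."""
        return f"Context: {retrieve(query, documents)}\n\nQuestion: {query}\nAnswer:"

    print(build_prompt("What does chattel mean in law?"))

Fine-tuning, by contrast, does update weights, but only after the fact and on a narrow dataset; neither mechanism looks much like a person continuously learning from their culture.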
Do they both create? I suspect not; an LLM is parroting back data from its training set. We've seen many studies showing that the LLMs tested perform poorly on novel problem sets. This article was posted just this week:
https://news.ycombinator.com/item?id=42565606
The jury is still out on the copyright issue; from the perspective of US law, we'll have to wait on this one. Still, it's clear that an LLM can't "create" in any meaningful way.
And so on and so forth. How is hiring an employee at all similar to subscribing to an OpenAI ChatGPT plan? Wacky indeed!
But if they're learning from the same kinds of materials and producing the same kind of output, then obviously the comparison can be made. And your idea that LLMs don't create seems obviously false.
So I have to conclude the two seem comparable, and someone would have to show why different legal principles around copyright ought to apply when it's a simple question of input/output. Why should it matter whether it's a human or an algorithm doing the processing, from a copyright perspective? Nothing "wacky" about the question at all.