The courts have repeatedly said that copyright only applies to human creativity. The Copyright Office's Review Board said this explicitly when it affirmed the refusal to register the work:
https://en.wikisource.org/wiki/Thaler_v._Perlmutter,_Refusal...
> "We affirm our decision to refuse registration for the Work because it lacks the human authorship necessary to be eligible for copyright protection."
So they're saying the work can't be copyrighted because the LLM cannot be the author: copyright requires human authorship.
The related case about patents is more supportive of the narrative that AIs cannot be inventors (see https://www.cafc.uscourts.gov/opinions-orders/21-2347.OPINIO...), specifically: "Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings."
The patent situation is that the Act says an inventor must be an "individual", which the courts interpret to mean a human being, so an LLM cannot be named as the inventor. So, yes, in that case the ruling is just that an LLM cannot be named as the inventor of a patent. That's not the same thing as what the courts are saying about copyright.
They're saying that the LLM can't be the author.
Now suppose you supply the LLM with a prompt that contains human creativity; it performs a deterministic mathematical transformation on the prompt to produce a derivative text, and you want to copyright that text, claiming yourself as the author. What happens then?
If you think the answer is that you can't, how do you distinguish that from what happens when someone writes source code and has a compiler turn it into a binary computer program? Or do you think that e.g. Windows binaries can't be copyrighted because they were compiled by a machine?
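To make the analogy concrete, here's a toy sketch (purely hypothetical, not a real compiler or LLM): a deterministic function mapping "source" text to a derivative form. The same input always yields the same output, which is the property the compiler comparison leans on.

```python
def transform(source: str) -> bytes:
    # Stand-in for compilation: a fixed, rule-based mapping with no
    # randomness and no creativity of its own. All the creative content
    # in the output traces back to the input.
    return bytes(reversed(source.encode("utf-8")))

prompt = "print('hello')"
# Deterministic: identical input produces identical output every time.
assert transform(prompt) == transform(prompt)
```

Note that LLM sampling is only deterministic like this at temperature 0 (or with a fixed seed); with sampling enabled, the same prompt can yield different outputs, which complicates the analogy.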
My understanding was that they did in fact do just that, but the court somehow misunderstood what they were doing, and assumed that the LLM was working completely autonomously without any human input at all, which isn't really possible IMO. Someone told it what to do.
They also argued that you can't copyright an output when you can't explain how it came to be; i.e., if they had been able to articulate how an LLM works, the outcome might have been quite different, which I found surprising.
If art in general (human-made or otherwise) is always derived from existing influences... should we really be forced to explain how or why we created a piece of art in order to defend it?
The usual bar for copyright infringement of a derivative work is, from what I have seen, "how much did you copy from the original, and how obvious is it", which is of course a subjective determination that would be made by the judge or jury in each individual case.
The part that the human created (the prompt) can be copyrighted.
The part that the LLM created cannot be.
Copyright in code works exactly the same way: the source code is copyrighted. The binary code is only copyrighted to the extent that it is derived from the source code. This is well-established.