The way I see it, the situation looks like this:
1. Initially, when you claim that someone has violated your copyright, the burden is on you to make a convincing claim on why the work represents a copy or derivative of your work.
2. If the work doesn't obviously resemble your original, which is the case here, then the burden is still on you to prove either
(a) that it is actually very similar in some fundamental way that makes it a derived work, such as being a translation or a summary of your work,
or (b) that it was produced by some kind of mechanical process and is not a result of the original human creativity of its authors.
Now, regarding item 2b, there are two possible uses of LLMs that are fundamentally different.
One is actually very clear-cut: if I give an LLM a prompt consisting of the original work plus a request to create a new work, then the new work is quite clearly a derivative of the original, just as much as a zip file of the work is.
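The zip analogy can be made concrete: compression is a deterministic, invertible transformation, so the compressed file embodies the entire original even though the bytes look nothing alike. A minimal sketch in Python using the standard zlib module (the example text is arbitrary):

```python
import zlib

original = b"It was the best of times, it was the worst of times."

# Compression is a purely mechanical process: no human creativity
# is involved, and the original is fully recoverable from the output.
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

# The compressed bytes don't resemble the original at all...
assert compressed != original
# ...yet the original is entirely contained within them.
assert restored == original
```

The compressed output fails any surface-level resemblance test, yet it is unambiguously a copy, which is why "doesn't look similar" alone can't settle the derivative-work question for mechanical transformations.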
The other is very much not yet settled: if I give an LLM a prompt asking it to produce a piece of code that achieves the same goal as the original work, and the original work was in the LLM's training set, is the LLM's output a derived work of the original (and possibly of other parts of the training set)? Of course, we'll only consider the case where the output doesn't resemble the original in any obvious way (i.e. the LLM is not producing a verbatim copy from memory). This question is novel, and I believe it is currently being tested in court in some cases, such as the NYT's case against OpenAI.