Of course, the entire basis for LLMs being legal is that they use works collectively to learn how code and language work, and how to write them in a given context. The legal defense is that the tool is like a human who learned to code by reading CC-BY-SA and other publicly available licensed code and assimilating it into their own fleshy human neural network.
This only becomes shaky once the model regurgitates code verbatim. But humans do that too, so the mitigation is Copilot's setting that tries to detect suggestions matching public code and suppress them.