Except that's not how an LLM works. An LLM has no idea about "ideas", only probabilities of which tokens tend to follow which other tokens.
So you literally can't make it produce code that is functionally identical but not verbatim identical. It has no concept that the two are equivalent.
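To make the point concrete, here's a toy next-token model (a simple bigram counter, nothing like a production LLM in scale, but the same in principle): it only tracks which token follows which, so two semantically identical expressions like `a + b` and `b + a` are completely unrelated strings to it.

```python
from collections import defaultdict, Counter

# Toy bigram "language model": it records which token follows which,
# with no notion of meaning or program semantics. Purely illustrative.
corpus = "x = a + b ; y = a + b ; z = a + b".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(token):
    total = sum(counts[token].values())
    return {t: c / total for t, c in counts[token].items()}

# After "a", the model predicts "+" with certainty: pure string statistics.
print(next_token_probs("a"))  # {'+': 1.0}

# "a + b" and "b + a" compute the same thing, but the model has never
# seen "b" followed by "+", so that continuation has zero probability.
print("+" in counts["b"])  # False
```

The model cannot "know" the two orderings are equivalent; equivalence is a property of program semantics, which never enters the training signal, only the surface token sequence does.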
Also, a "functionally identical but not copyright-infringing" transformation is infeasible in practice, given both the complexity of the problem and the sheer volume of the training data.
And training it on simplistically obfuscated code wouldn't help: all it would learn is to produce obfuscated code, which is useless for the intended purpose.