"What happens when an LLM outputs a patented algorithm?" remains a huge land mine out there, particularly since patent infringement does not require intent or even knowledge, and these models have trained on every patent ever granted.
If you can prove that your LLM did not learn from the patent (e.g. its training cut-off predates the filing), then the LLM independently outputting the algorithm (or product, etc.) would be pretty good evidence that a person of ordinary skill in the art, or whatever the exact legal wording is, would have found the whole invention obvious.