Couldn't it also be true that the AI didn't produce innovative output even though the human asked it to produce something innovative?
Otherwise you're saying that an AI always produces innovative output whenever it is asked to produce something innovative. And I don't think that is a perfection AI has achieved. Sometimes AI can't even produce correct output when non-innovative output is requested.
It could have been, but unless the human in this case was lying, there is no indication that they asked for innovation. In fact, what they have said is that they steered the AI towards a very conventional compiler architecture at this point, for example by telling it to use SSA.
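For anyone unfamiliar with why SSA counts as "conventional": in static single assignment form, every variable is assigned exactly once, which is the standard intermediate representation in modern compilers (LLVM, GCC). A minimal sketch of the renaming step for straight-line code (the function name and encoding are made up for illustration; real SSA also needs phi-nodes at control-flow joins):

```python
# Toy SSA renaming for straight-line code: each assignment to "x"
# becomes a fresh version "x1", "x2", ... and every use is rewritten
# to refer to the latest version. No control flow, so no phi-nodes.
def to_ssa(stmts):
    version = {}  # current version number per variable name
    out = []
    for target, operands in stmts:
        # Uses refer to the *current* version of each operand.
        renamed = [f"{v}{version[v]}" for v in operands]
        # The definition gets a fresh version.
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed))
    return out

# x = ...; x = f(x); y = g(x)
prog = [("x", []), ("x", ["x"]), ("y", ["x"])]
print(to_ssa(prog))
# [('x1', []), ('x2', ['x1']), ('y1', ['x2'])]
```

The point is that this is textbook compiler machinery, not something a model would need to invent.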
> Otherwise you're saying an AI always produces innovative output
They did not say that. They suggested that the AI's output closely matches what the human asks for.
> And I don't think that is a perfection that AI has achieved.
I won't answer for the person you replied to, but while I think AI can innovate, I would still 100% agree with this. It is of course by no means perfect at it. Arguably often not even good.
> Sometimes AI can't even produce correct output even when non-innovative output is requested.
Sometimes humans can't either. And that is true for innovation as well.
But on this subject, let me add that in one of my first chats with GPT 5.1 (I think it was), I asked it a question about parallelised parsing. That in itself is not entirely new, but it came up with a particular scheme for parallelised (GPU-friendly) parsing and compiler transformations that I have not found in the literature (I wouldn't call myself an expert, but I have kept tabs on the field for ~30 years). I might have missed something, so I intend to do a further literature search. It's also not clear how practical the scheme is, but it is interesting enough that when I have time, I'll set up a harness to let the model explore it further and write it up, because irrespective of whether it would be applicable to a production compiler, the ideas are fascinating.
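To be clear about what "GPU-friendly parsing" can mean (this is not the scheme from that chat, just a well-known toy example): many parsing subproblems reduce to prefix sums, which parallelise well. Bracket nesting depth, for instance, is a scan over +1/-1 deltas; here the scan is written sequentially with `itertools.accumulate`, but on a GPU it would be a parallel scan:

```python
from itertools import accumulate

# Toy data-parallel-style check: bracket depths are a prefix sum over
# +1 for "(" and -1 for ")". The input is balanced iff the final depth
# is 0 and the depth never goes negative.
def bracket_depths(src):
    deltas = [1 if c == "(" else -1 if c == ")" else 0 for c in src]
    depths = list(accumulate(deltas))   # on a GPU: a parallel scan
    balanced = depths and depths[-1] == 0 and min(depths) >= 0
    return depths, bool(balanced)

print(bracket_depths("(a(b)c)"))
# ([1, 1, 2, 2, 1, 1, 0], True)
```

Full parsing needs much more than this, which is exactly why novel schemes in this space are interesting.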