I thought the rhetoric sounded somewhat like the AGI/accelerationist folks who postulate some eventual "godlike" AI whose thought processes are fundamentally inaccessible to humans. If you had a proof that was only understandable to that sort of AI, then mathematics as a discipline of understanding would be over for good.
But this sounds like it would at least theoretically let you tackle the proof. It's imaginable that some AI generates a proof that is several TB (or EB) in size but still validates - which human readers could of course never understand in the way you can understand a paper. But then "understanding" that proof would probably become a field of research of its own, sort of like the "BERTology" papers that try to work out the semantics of specific hidden states in BERT (or similar approaches for GPTs).
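You can already see this split between "validates" and "illuminates" in today's proof assistants. As a toy sketch (Lean 4, using the standard `decide` tactic), here is a proof the kernel accepts by brute evaluation - it certifies the fact without conveying any insight into why it holds, which is the same epistemic situation a terabyte-scale AI-generated proof would put us in:

```lean
-- The kernel accepts this by mechanically evaluating the decidable
-- proposition; it neither knows nor cares whether a human finds the
-- resulting proof term illuminating.
theorem pow_example : 2 ^ 10 = 1024 := by decide
```

Scale that up by ten orders of magnitude and you get a certificate of truth with no accompanying explanation.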
So I'd see an incomprehensible AI-generated proof not as the end of research into some conjecture, but more as a sort of guidance: unlike before, you now know that the treasure chest exists and you even have its coordinates - you just don't have the route to that location. The task then becomes figuring out that route.