Still waiting for the first one. I'm not holding my breath. Just as fuzzing found a lot of vulnerabilities in low-level software, I expect novel automated analysis approaches will turn up some vulnerabilities, but that won't be a catastrophic event, just as fuzzing wasn't.
Why don't you think that AI models will, perhaps rather soon, surpass human capabilities at finding security vulnerabilities? Even an AI that's merely as competent as a human would be a fairly catastrophic event.
It's rumored that the NSA has 600 mathematicians working for them. If they are the ones finding the exploits, you will probably never hear about them until they are independently discovered by someone who can publish.