It's just really hard to detect and exploit profitable and safe vectorization opportunities. The theory behind some of the optimizers is beautiful, though: https://en.wikipedia.org/wiki/Polytope_model
A way to express the operations you want, without unintentionally expressing operations you don't want, would be much easier to auto-vectorise. I'm not familiar enough with SIMD to give concrete examples, but if a transformation would preserve the operations you asked for yet be observably different from what you coded, I assume it's ineligible (unless you enable flags that allow the compiler to produce code that's not quite what you wrote).
I remember Intel had something like that, but it went nowhere.
You don't want "vectorization" though. You want either
a) a code-generation tool that generates exactly the platform-specific code you want and can't silently fail, or
b) at minimum, a fundamentally vectorized language that does "scalarization" instead of the other way round.
The only thing that might stand in the way is a dependence on reproducibility, but that seems like a weak argument: we already have a long history of people trying to push build reproducibility, and for better or worse they never got traction.
Same story with LTO and PGO: I can't think of anyone other than browser and compiler people who use either (and even they took a long time before adopting them). Judged to be more effort than it's worth, I guess.
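For reference, the mechanics on GCC/Clang are just a couple of flags; the real friction is the two-phase build and needing a representative workload. A hedged sketch (file names arbitrary):

```shell
# Link-time optimization: one flag on both compile and link steps.
cc -O2 -flto app.c -o app

# PGO: instrumented build, run a representative workload,
# then rebuild using the collected profile.
cc -O2 -fprofile-generate app.c -o app
./app < training-input
cc -O2 -fprofile-use app.c -o app
```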
An ML model can fit into existing compiler pipelines anywhere that heuristics are used though, as an alternative to PGO.
I tried it a year or so back and was sorta disappointed at the results beyond simple cases, but it feels like an area that could improve rapidly.