It definitely is not just you. I spent the first two years of my PhD wrapping my head around the stats and maths commonly used in ML, and realized that mathematically (as "theoretical" ML is practiced today), most answers are already provided in the classical work of statisticians and probabilists. There are many fascinating questions in probability theory and statistics, but most have little to do with AI. In fact, for the biggest empirical success story (deep neural networks), there are essentially no theorems providing a solid conceptual leap of understanding. Mikhail Gromov goes one step further regarding the lack of theory for neural networks (
https://www.youtube.com/watch?v=g4Wl3Ggho6k), and provides a fascinating overview of his thoughts in:
https://www.ihes.fr/~gromov/category/ergosystems/

I am interested in the points you raise, but also realized that I would not find a good environment for them at MIT EECS, for reasons that are fairly obvious from the article's subtext. As such, I have spent the last year or so searching for good alternatives in terms of research, and I am slowly finding answers. I am happy to discuss more over email.
Long story short: you are certainly not the only one who thinks this way.
EDIT: added a video link to Mikhail Gromov's actual views for better accuracy.