The problem is that AI is weird, but not because of academia. In fact, right now the field has been captured by industry, and that's why progress has severely slowed[0]. Most people in the space now work in industry labs. Frankly, you can do more, you get paid A LOT more (2-3x), and there's less bureaucratic bullshit. But I think you're keenly aware of this industry capture, since you're mentioning aspects of it.
I don't want there to be any confusion: I think it is good that industry and academia work together. There are lots of benefits. But we also need to recognize that the two typically have very different goals, work at different TRLs (technology readiness levels), and have very different expectations about the timescale on which the work will be seen as impactful. Traditionally, academia has been the dominant player in the high-risk/high-reward, low-TRL research space (yes, much more goes on there too, but when you think of people doing this type of research, you think of academia), while industry research typically focuses on higher TRLs because the goal is to sell things in the near future. There's just a danger when you work too closely with industry: you can't have any wizards if you don't have any noobs.
But I'm not sure it's just ML that's been going this way. There's a lot of sentiment on this website where people dismiss research papers (outside ML) that show up here because they aren't viable products. I mean... yeah... they're research. We can agree that the value is often oversold, but that's usually done by the publisher (read: the university) and not the paper (not sure I can say the same for ML). But it's kind of an environmental problem: if everything has to be a product, you can't be honest about what you did. And if discussing the limitations, and what still needs improvement to actually get a product down the line, gets you rejected, well... you just don't talk about that.
This is all the "RL hacking" or better known as Goodhart's Law. I've been saying we're living in Goodhart's Hell because it seems, especially in the last 5-10 years, we've recognized that a lot of metric hacking is going on and decided that the best course of action is not to resolve the issues, but lean into it. We've seen the house of cards that this has created. Crypto is a good example. Shame is if we kill AI because there is a lot of real value there. But if you're a chocolate factory and promise people that eating your chocolate will give them superpowers, it doesn't matter how life changingly delicious that chocolate is, people will be upset and feel cheated. Problem is, the whole chocolate industry is doing this right now and we're not Willy fucking Wonka.
[0] It looks like more progress is being made than actually is, and there's a lot of progress that should have been made but wasn't, but these kinds of nuances are hard to discuss without intimate knowledge of the field. I'll say that diffusion models should have happened much sooner, but industry capture had everyone looking at GANs. Anything that wasn't a GAN got extra scrutiny and was easy to reject for not having state-of-the-art results (are we doing research or are we building products?).