I'd argue the opposite.
Vanilla CS theory assumes a very simple system model for the measurement process (i.e., linear measurements) and shows that, under a few assumptions (the signal is sparse in some basis and the measurements are incoherent with it), you need far fewer measurements than you might expect. That by itself is a valuable observation.
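To make that concrete, here's a minimal sketch of the vanilla setup in JAX. Everything here (the dimensions, the Gaussian measurement matrix, the ISTA solver, the regularization weight) is my own illustrative choice, not anything canonical: recover a k-sparse signal from m << n linear measurements by l1-regularized least squares.

```python
import jax
import jax.numpy as jnp

n, m, k = 256, 64, 8   # signal length, measurements, sparsity (illustrative)

# k-sparse ground-truth signal
idx = jax.random.choice(jax.random.PRNGKey(0), n, (k,), replace=False)
x_true = jnp.zeros(n).at[idx].set(jax.random.normal(jax.random.PRNGKey(1), (k,)))

# random Gaussian measurement matrix: the "linear measurement" model
A = jax.random.normal(jax.random.PRNGKey(2), (m, n)) / jnp.sqrt(m)
y = A @ x_true

def soft_threshold(v, t):
    return jnp.sign(v) * jnp.maximum(jnp.abs(v) - t, 0.0)

# ISTA: proximal gradient descent on 0.5*||Ax - y||^2 + lam*||x||_1
lam = 1e-3
eta = 1.0 / jnp.linalg.norm(A, 2) ** 2   # step size 1/L, L = sigma_max(A)^2

def ista_step(x, _):
    x = soft_threshold(x - eta * (A.T @ (A @ x - y)), eta * lam)
    return x, None

x_hat, _ = jax.lax.scan(ista_step, jnp.zeros(n), None, length=2000)
print("relative error:", jnp.linalg.norm(x_hat - x_true) / jnp.linalg.norm(x_true))
```

The striking part is just the shape of A: 64 rows recovering a 256-dimensional signal, which is the "fewer measurements than you might expect" claim in miniature.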
However, I'm less impressed with CS as a practical tool because in many realistic systems the measurement process is nonlinear and difficult to model. That is, the desired system state you're trying to infer is some nonlinear function of the experimentally measured quantities.
I think the best instruments of the future will use CS ideas to motivate the choice of an appropriate sensor domain and so reduce the number of measurements required, but that the actual computational reconstruction of the system state (i.e., inference) will be done with a nonlinear differentiable model that can be optimized in a data-driven way with ML-style tools. A rough sketch of what I mean is below.
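Here the forward() function and its tanh saturation are a made-up stand-in for a hard-to-model instrument, and optax is assumed available just as a convenient off-the-shelf optimizer. The point is only that once the forward model is differentiable, reconstruction becomes generic gradient-based optimization:

```python
import jax
import jax.numpy as jnp
import optax  # assumed available; any gradient-based optimizer would do

n, m = 256, 64
A = jax.random.normal(jax.random.PRNGKey(0), (m, n)) / jnp.sqrt(m)

def forward(x):
    # hypothetical nonlinear measurement: compressive projection + detector saturation
    return jnp.tanh(A @ x)

# sparse-ish ground truth and its (nonlinear) measurements
mask = jax.random.uniform(jax.random.PRNGKey(1), (n,)) < 0.05
x_true = jnp.where(mask, jax.random.normal(jax.random.PRNGKey(2), (n,)), 0.0)
y = forward(x_true)

def loss(x):
    # data fit through the nonlinear model, plus a simple l1 sparsity prior
    return jnp.sum((forward(x) - y) ** 2) + 1e-3 * jnp.sum(jnp.abs(x))

opt = optax.adam(1e-2)
x = jnp.zeros(n)
state = opt.init(x)
for _ in range(2000):
    g = jax.grad(loss)(x)
    updates, state = opt.update(g, state)
    x = optax.apply_updates(x, updates)

print("relative error:", jnp.linalg.norm(x - x_true) / jnp.linalg.norm(x_true))
```

Swap the handwritten forward() for a learned or calibrated model, or the hand-set l1 prior for a learned regularizer, and you're at the data-driven end of this spectrum.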
In addition to accommodating nonlinearities, this approach gives you a lot of freedom to trade off the computational complexity of the reconstruction (i.e., the complexity of the system model) against the accuracy of the results.