Classification depends on the problem (and mostly the data size). Boosting is certainly competitive on tabular data and widely used everywhere I've worked.
No one talks about it (except on Kaggle) because it's pretty much at a local maximum. All the improvement comes from manual feature engineering.
But modern techniques using NNs on tabular data are competitive with boosting and do away with a lot of the feature engineering. That's a really interesting development.
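For context, here's a minimal sketch of the kind of boosting baseline people mean here, using scikit-learn on a synthetic tabular dataset (the dataset, hyperparameters, and split are just one common setup, not anything specific to the comment above):

```python
# Gradient boosting baseline on a small synthetic tabular task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular data: 1000 rows, 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A fairly standard boosting configuration: shallow trees, modest learning rate.
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

In practice the "local maximum" point is that gains beyond this baseline come mostly from engineering better input features, not from tuning the booster itself.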