So? It has a point. I disagree with a lot of it, but...
* The fundamental point: A study with 100 comparisons will, on average, erroneously reject the null hypothesis at p<0.05 for about 5 of them (when every null is actually true), which is a good part of why we adjust for multiple comparisons. But the same issue holds if we run 100 studies and reject the null for 5 of them. One of the fundamental problems with p values is that we don't really know the baseline number of things being compared across unpublished and preliminary research, which in turn makes the p value somewhat meaningless.
In effect, we've unfairly penalized the study that made multiple comparisons relative to the same findings showing up across many studies that each made a single comparison.
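The "5 out of 100" point is easy to check by simulation. A minimal sketch (assuming numpy is available): run 100 two-sample comparisons where both groups come from the same distribution, so every null is true, and count how many clear p<0.05 anyway. The sample sizes and seed here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tests, n = 100, 50

# Two groups drawn from the exact same distribution: every null is true.
a = rng.normal(size=(n_tests, n))
b = rng.normal(size=(n_tests, n))

# Two-sample z statistic; the variance is known to be 1 here, so the
# normal reference distribution is exact.
z = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(2 / n)

# Two-sided p < 0.05 corresponds to |z| > 1.96.
false_positives = int((np.abs(z) > 1.96).sum())
print(false_positives)
```

In expectation this prints about 5, purely from chance, which is exactly what a batch of 100 single-comparison studies would produce too.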
* Studies with multiple comparisons are great engines of hypothesis generation. Setting too high a bar for flagging associations means we'll possibly discard too many promising leads.
* Most of our corrections for multiple comparisons assume a degree of statistical independence between the tests that just isn't present in real data.
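The independence point can also be illustrated with a quick simulation sketch (assuming numpy; the correlation level, number of tests, and seed are arbitrary choices). When test statistics are strongly positively correlated, a Bonferroni-style per-test cutoff of alpha/m makes the realized family-wise error rate land well below the nominal 5%, i.e. the correction over-penalizes:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
m, reps, alpha, rho = 20, 4000, 0.05, 0.9

# Correlated z statistics under the global null: a shared component
# carries rho of the variance, independent noise carries the rest.
z = (np.sqrt(rho) * rng.normal(size=(reps, 1))
     + np.sqrt(1 - rho) * rng.normal(size=(reps, m)))

# Bonferroni cutoff: two-sided per-test level of alpha / m.
thresh = NormalDist().inv_cdf(1 - alpha / (2 * m))

# Family-wise error rate: fraction of replications with any rejection.
fwer = float((np.abs(z) > thresh).any(axis=1).mean())
print(round(fwer, 3))
```

With rho near 0.9 the printed rate comes out far below 0.05: the 20 tests are nearly redundant, so treating them as 20 independent chances to err is unfairly strict.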
The abstract is particularly badly written, but those three points are reasonable. (At the same time, there are circumstances where we obviously need to adjust appropriately or we get absolutely stupid, irreproducible results -- e.g. fMRI data.)