Computer science is very close to math and should be even easier to verify, but there are plenty of dubious results published every year, simply because it's more profitable to game the system. For example, I'd wager that 50%+ of academic claims related to information security are bogus or useless. Similarly, in the physics-adjacent world of materials science, a lot of announcements related to metamaterials and nanotech are suspect.
Most scientific research represents about the same amount of improvement over the state of the art as the shitty web app or whatever you're working on right now. It's not zero, but very few papers are going to be groundbreaking. And since the rules are that we all have to publish papers[*], the scientific literature (at least in my field, CS) looks less like a carefully curated library of works by geniuses and more like an Amazon or Etsy marketplace of ideas, where most are crappy.
[* just like software engineers have to write code, even if the product ends up being shitty or ultimately gets canceled]
Neither of us is going to change how the system works, so my advice is to deal with it.
I used to work for the leading statistical expert witness in the country. Whenever I read something like this:
> The empirical strategy in Eccles, Ioannou, and Serafeim (2014) rests on a demanding requirement: the “treated” and “control” firms must be so closely matched that which firm is treated is essentially random. The authors appear to recognize this, reporting that they used very strict matching criteria “to ensure that none of the matched pairs is materially different.”
I just assume they kept trying different "very strict matching criteria" until they got the matches they wanted. Which is basically what we did all day to support our client (usually big auto or big tobacco). We never presented any of the detrimental analyses to our boss, so he couldn't testify about them on the stand even if asked.
Although in this case it sounds like the authors couldn't even do that, and just fudged the data instead.
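To make that failure mode concrete, here's a minimal sketch in Python. The data is synthetic and the `matched_effect` helper is entirely hypothetical; this illustrates the general practice being described, not any specific paper's actual method:

```python
# Specification search over matching criteria: keep varying the "very strict
# matching criteria" until the matched sample gives the estimate you want.
# All data here is synthetic, and the true treatment effect is zero.
import numpy as np

rng = np.random.default_rng(0)

n = 200
score = rng.normal(size=n)                   # covariate used for matching
treated = rng.random(n) < 0.5                # "treatment", assigned at random
outcome = 0.5 * score + rng.normal(size=n)   # note: no treatment term at all

def matched_effect(caliper):
    """Pair each treated unit with the nearest unused control within
    `caliper`, then return (mean outcome difference, number of pairs)."""
    t_idx = np.where(treated)[0]
    c_idx = np.where(~treated)[0]
    used, diffs = set(), []
    for i in t_idx:
        dist = np.abs(score[c_idx] - score[i])
        for j in np.argsort(dist):
            if dist[j] > caliper:
                break                        # distances are sorted: give up
            if c_idx[j] not in used:
                used.add(c_idx[j])
                diffs.append(outcome[i] - outcome[c_idx[j]])
                break
    return (float(np.mean(diffs)), len(diffs)) if diffs else (float("nan"), 0)

# The "search": print the estimate under each caliper. A motivated analyst
# reports only the flattering row and calls its caliper the "strict criterion".
for caliper in (0.01, 0.02, 0.05, 0.1, 0.2):
    effect, pairs = matched_effect(caliper)
    print(f"caliper={caliper:4.2f}  pairs={pairs:3d}  effect={effect:+.3f}")
```

Since the true effect is zero by construction, any caliper that produces a sizable estimate is pure noise; choosing your "strict matching criteria" after looking at that table is exactly the move described above.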
Math (and theoretical adjacents like TCS) claims not to make fundamental statements about the actual world (unlike 17th-century philosopher-mathematicians like Leibniz), and physics studies the basest of, well, physical phenomena.
I don't even know how you would begin to rigorously study sociology unless you could simulate real humans in a vat, or inject everybody with Neuralink. (But that already selects for a type of society, and probably not a good one...)
To be clear, I don't think all sociological observations are bad. However, I tend to heavily discount "mathematical sociological studies" in favor of just... hearing perspectives, especially new and unconventional ones. In a domain where a lot of theories "seem legit", I want to hear specific new ways of thinking that I hadn't considered before. I find that a pretty good heuristic for finding value when the verification process itself is broken.
Annals of Mathematics once published a supposed proof (related to intersection bodies IIRC) for a statement that turned out to be false, and it was discovered only by someone else proving the opposite, not by someone finding an error.
If it doesn't have "science" in the name, it's a science
If it has the suffix "logy", it's a semi-science
If it has the word "science", it's not