A crazy world we live in where Robert Maxwell's daughter is more notorious than he is.
What is currently called "peer review" didn't exist back then; back then, "peer review" just meant the back-and-forth happening in the open academic literature. Note the inevitable lack of finality in that original concept: a discussion in the scientific community could go on for hundreds of years before being finally resolved. The current concept of "peer review" is closer to a delegation from some opaque ministry of truth, composed of opaquely selected experts (who often genuinely mean well), tasked with settling the question with finality in a short time.
Some measurements or experiments or questions to be settled can be very actionable and provide highly accurate results, others require much longer gathering of data to draw a clear picture.
The modern concept of "peer review" tries to sell the idea of almost immediate finality, like an economic transaction. In reality it is selling just the illusion, and creating lots of victims: truth itself, individuals, departments, institutions, even entire fields (think of the replication crisis in psychology), along with any patients or others they treat.
perhaps a bit off-topic, but what is coincidental about this and/or what is the relevance of Ghislaine Maxwell here?
I know a professor with a PhD, doing postdoc work or something, and he accepted a scientific study just because it was published in Nature.
He didn't look at methodology or data.
From that point forward, I have never really respected Academia. They seem like bottom floor scientists who never truly understood the scientific method.
It helped that a year later the Ivies had their cheating scandals, fake data, and academia-wide replication crisis.
https://sarahkendzior.substack.com/p/red-lines
tl;dr He is the bridge that uncomfortably links Biden's former Secretary of State, Antony Blinken, to Jeffrey Epstein and Mossad. Hence, *gestures at the last couple of weeks and years*. Dude was just, like, Fraud Central, apparently.
To have research happening, you need someone saying "I want to give money to this researcher". There is an endless queue of people lining up who are ready to take this money and do something with it. The person with money (govt or private) has to use some heuristics to pick. One way is to say "I trust this one, I don't care too much what the project is, I'm sure this person will do something that makes sense". But that is dependent on a track record.
Also, who's funding you for replication work? Do you know the pressure you're under on the tenure track to have a consistent thesis in what you work on?
Literally every knob in academia's design is tuned to not incentivize what you complain about. It's not just journals being picky.
Also the people committing fraud aren't ones who will say "gosh I will replicate things now!" Replicating work is far more difficult than a lot of original work.
Of course I do! Not all of them, of course, and taking (subjectively measured) impact into account. "We tried to replicate the study published in the same journal 3 years ago using a larger sample size and failed to achieve similar results..." OR "after successfully replicating the study we can confirm the therapeutic mechanism proposed by X actually works" - these are extremely important results that are taken into account in meta-studies and e.g. form the basis of policies worldwide.
More than anything. That might legitimately be enough to save science on its own.
Actually, yes, I do. The marginal cost for publishing a study online at this point is essentially nil.
All because journals prefer novelty over confirmation. It's like a house of cards: looks cool, but not stable or long-term at all.
Hell yeah. We’re all trying to get that Nature paper. Imagine if you could accomplish that by setting the record straight.
I believe people will enthusiastically say yes but that they do not routinely read that journal.
> Replicating work is far more difficult than a lot of original work.
Only if the original work was BS. And what, just because it's harder, we shouldn't do it?
I'm sure you can more narrowly tune your email alerts FFS.
There are journals dedicated to replication, like ReScience C[1]. But they are niche. Perhaps we should have more of these.
I don’t regularly read scientific studies but I’ve read a few of them.
How is it possible that a serious study is harder to replicate than it was to do originally? Are papers no longer including their process? Are we at the point where they just say "trust me bro" about how they achieved their results?
> Do you want issues of Nature and cell to be replication studies?
Not issues of Nature but I’ve long thought that universities or the government should fund a department of “I don’t believe you” entirely focused on reproducing scientific results and seeing if they are real
This is partly why much of today's science is bs, pure and simple.
Replications don't have to be in the journals either. As long as money flows, someone will do them, and that is what matters. The randomization will help prevent coordination between authors and replicators.
In a better world, negative studies and replications would count towards tenure, but that is unlikely to occur. At least half of the problem is the pressure to continuously publish positive results.
Publishing a failed replication of the work of a colleague will not earn you many brownie points. I'm stating this as an observation of what is the case, not as something that I think should be the case. If you attack other researchers like this and damage their reputation - even if for valid scientific reason - you'll have a hard time when those colleagues sit on committees deciding about your next grant etc.
Of course if you discover something truly monumental that will override this. But simply sniping down the mediocre research published by other run-of-the-mill researchers will get you more trouble than good. Yes it's directly in contradiction to the textbook-ideal of what science should be, as described to high school students, but there are many things in life this way.
Of course it can be laudable to go on such a crusade despite all this, and to relentlessly pursue scientific truth, etc. but that just won't scale.
The biggest problem by far is modern society: Tenure, getting paid a livable wage as a researcher, not getting stack-ranked and eliminated from your organization all overindex on positive research results that are marketable. This "loss function" encourages scientific fraud of sorts.
The simple fact that theories should be falsified and not verified is something that most scientists don’t know.
> Most will refuse to publish replications, negative studies, or anything they deem unimportant, even if the study was conducted correctly.
I think this was really caused by the rise of bureaucracy in academia. Bureaucrats' favorite thing is a measurement, especially when they don't understand its meaning. There's always been a drive for novelty in academia; it's at the very core of the game. But we placed far too much focus on it, despite the foundation of science being replication. We made a trade: foundation for (the illusion of) progress. It's like trying to build a skyscraper higher and higher without concern for the ground it stands on. It doesn't take a genius to tell you that building is going to come crashing down. But proponents say "it hasn't yet! If it was going to fall it would have already," while critics are actually saying "we can't tell you when it'll fall, but there are some concerning cracks, and we're worried it'll collapse and we won't even be able to tell we're in a pile of rubble."

I don't know what the solution is, but I do know that our fear of people wasting money and creating fraudulent studies has only resulted in wasted money and fraudulent studies. We've removed the verification system while creating strong incentives to cheat (publish or perish, right?).
I think one thing we do need to recognize is that in the grand scheme of things, academia isn't very expensive. A small percentage of a large number can still look like a large number, but even if half of academics were frauds, the resulting waste would be a small fraction of the total, and pale in comparison to more common waste, fraud, and abuse of government funds.
From what I can tell, the US spent $60bn on university R&D in 2023 [0] (less than 1% of US federal expenditures). But over the same period there was $400bn in waste and fraud through Covid relief funds [1], with $280bn of that being straight-up fraud. That alone is more than 4x all academic research funding!
I'm unconvinced most in academia are motivated by money or prestige, as it's a terrible way to achieve those things. But I am convinced people are likely to commit fraud when their livelihoods are at stake or when they can believe that a small lie now will allow them to continue doing their work. So as I see it, the publish or perish paradigm only promotes the former. The lack of replication only allows, and even normalizes, the latter. The stress for novelty only makes academics try to write more like business people, trying to sell their product in some perverse rat race.
So I think we have to be a bit honest here. Even if we naively made this space essentially unregulated, it couldn't be the pinnacle of waste, fraud, and abuse that many claim it is. I doubt that even freeing scientists entirely from publication requirements would produce much waste, fraud, and abuse. Science has a naturally self-regulating structure; it was literally created to be that way! We got to where we are through this self-regulating system because scientists love to argue about who is right, and the process of science is meant to do exactly that. Was there waste and fraud in the past? Yes. I don't think it's entirely avoidable; it'll never be $0 of wasted money. But the system was undoubtedly successful. And those who took advantage of the system were better at fooling the public than they were their fellow scientists, which is something I think we've still failed to catch onto.
[0] https://usafacts.org/articles/what-do-universities-do-with-t...
[1] https://apnews.com/article/pandemic-fraud-waste-billions-sma...
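As a quick sanity check on the arithmetic in the comment above, a minimal sketch using the figures quoted in this thread (the ~$6.1tn federal-outlays figure for 2023 is my own approximation, not from the thread):

```python
# Back-of-envelope check of the figures quoted above (USD, billions).
university_rd = 60       # US university R&D spending, 2023 [0]
federal_outlays = 6100   # approx. 2023 US federal outlays (my assumption)
covid_fraud = 280        # straight-up fraud in Covid relief funds [1]

# Share of federal spending going to university R&D, and the
# ratio of Covid-relief fraud to all academic research funding.
share = university_rd / federal_outlays
ratio = covid_fraud / university_rd
print(f"R&D share of federal spending: {share:.1%}")
print(f"Covid fraud vs. research funding: {ratio:.1f}x")  # → 4.7x, i.e. "more than 4x"
```

The ratio comes out to roughly 4.7x, consistent with the "more than 4x" claim, and the R&D share is on the order of 1% of federal spending.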
The cost of academic fraud should also include the indirect costs of bad decision making.
The Covid relief funds were only needed because politicians implemented extremely aggressive policies based on unproven epidemiological models built on fraudulent practices. I investigated all this extensively at the time, and it was really sad/shocking how non-existent intellectual standards are in the field of epidemiology. The models were trash RNGs that couldn't have been validated even if they'd tried, which they never did, because the field doesn't consider validation necessary to get a paper published. So the models made wildly wrong predictions based on untested, buggy, non-replicable code, which led to lockdowns, which led to economic catastrophe, which led to the relief programme. All of the fraud in that programme - really the entire cost of it - should be laid at the feet of academic fraud.
With that said, due to the apparent sizes of the fraud networks I'm not sure this will be easy to address. Having some kind of kill flag for individuals found to have committed fraud will be needed, but with nation state backing and the size of the groups this may quickly turn into a tit for tat where fraud accusations may not end up being an accurate signal.
May you live in interesting times.
Also, Brandolini's law. And Adam Smith's law of supply and demand. When the ability to produce overwhelms the ability to review or refute, it cheapens the product.
There was this guy, well connected in the science world, who managed to publish a poor study quite high up (PNAS level). It was not fraud, just bad science. There were dozens of papers and letters refuting his claims, highlighting mistakes, and so on. Guess what? Going by the metrics (citations don't care if they're citing you to say you were wrong and should retract the paper!), the original paper looked even more stellar in the eyes of grant agencies and the journal itself.
It was rage bait before Facebook even existed.
If the fraudsters “fail to replicate” legitimate experiments, ask them for details/proof, and replicate the experiment yourself while providing more details/proof. Either they’re running a different experiment, their details have inconsistencies, or they have unreasonable omissions.
We can't look for failed replication experiments if none exist.
An example is papers making claims of the form "We proved X by doing Y," where Y is a methodology that isn't derived from and can't prove X. This sort of paper will replicate every time: if you re-derive a correct methodology, the original authors say you didn't really replicate their study and your work should be ignored, but if you use their broken methodology, you'll just give an intellectually fraudulent paper the stamp of replication approval.
This kind of problem is actually much more widespread than work that looks scientific but in which the data is faked.
the effort to publish a fraudulent study is less (sometimes much less) than the effort to replicate a study.
Machine Learning papers, for example, used to have a terrible reputation for being inconsistent and impossible to replicate.
That didn't make them (all) fraudulent, because that requires intent to deceive.
>>95% of the time, the fraudsters get off scot-free. Look at Dan Ariely: Caught red-handed faking data in Excel using the stupidest approach imaginable, and outed as a sex pest in the Epstein files. Duke is still giving him their full backing.
It’s easy to find fraud, but what’s the point if our institutions have rotten all the way through and don’t care, even when there’s a smoking gun?
Only in stupid university leaderships is that truly what gets you hired or promoted. It's simply not true. Junior researchers in fact believe it more strongly than the facts support. Yes, you have to have a solid number of publications, but doing a ridiculous amount of low-impact, salami-sliced stuff, or getting your name on a ton of papers where you did no real work, is not going to win you a job. People who evaluate applications also live in this world and know that these metrics are being gamed. It's a cat-and-mouse game, but the cats are also paying attention. You can only play this against really dumb government bureaucracies that mechanically give points for publications and have hard numerical criteria etc. Good institutions don't do that.
Good evaluators actually read the papers themselves. Of course you can't read the papers of every single applicant if there are many. But once the applicant gets onto a somewhat filtered-down list, reading the paper(s), having an interview about them, or having the applicant give a talk is much more informative than the number of papers. Still not perfect, because some people can't communicate well, but communicating is part of the job, so maybe that's not super bad, just somewhat bad.
Evaluators will also use other evidence, such as recommendation letters (informally being aware of the reputation of the recommender), previous fellowships or grants obtained, etc.
None of these are foolproof in themselves. But someone who has super few publications relative to their career stage will need some other piece of evidence in favor.
In machine learning and AI, peer reviews are known to be quite random. If you have a good Arxiv-only paper that makes sense and you can give a good talk on it and answer questions, that will get you further than having a rubberstamp on some paper that's "meh, so what".
There are some players in this game (which includes funding agencies, journals, university administration, hiring committees, conference organizers, students, etc) that are more ossified and slow-moving than others.
And it's also true that double-blind peer review and the rubberstamp of a top-tier conference were mostly beneficial to small, not-well-connected research groups, as they put the paper on an equal footing with the big labs. The more this system erodes, the more we fall back on the reputation and branding of big labs and famous researchers. Again, because there is no infinite time and infinite wisdom available for picking from applicants, and there never will be. There are only tradeoffs.
https://traditional.leidenranking.com/ranking/2025/list
and select "Mathematics and Computer Science", you'll find the top-ranked university is the University of Electronic Science and Technology of China.

My Chinese colleagues have heard of it, but never considered it a top-ranked school, and a quick inspection of their CS faculty pages shows a distinct lack of PhDs from top-ranked Chinese or US schools. It's possible their math faculty is amazing, but I think it's more likely that something underhanded is going on...
Maybe it's the scientists they don't trust?
There are many things that cannot be feasibly verified empirically without access to rare resources.
It's a bit like how can we trust online shopping if I get all these emails trying to sell me aphrodisiac pills?
(it's very funny to pretend tech is still, or really much ever was, one big libfest. it's funnier to say this here of all places)
Non-scientists often seem to think that if a paper is published, it is likely to be true. Most practicing scientists are much more skeptical. When I read a paper that sounds interesting in a high-impact journal, I am constantly trying to figure out whether I should believe it. If it goes against a vast amount of science (e.g. bacteria that use arsenic rather than phosphorus in their DNA), I don't believe it (and can think of lots of ways to show that it is wrong). In lower-impact journals, papers make claims that are not very surprising, so if they are fraudulent in some way, I don't care.
Science has to be reproducible, but more importantly, it must be possible to build on a set of results to extend them. Some results are hard to reproduce because the methods are technically challenging. But if results cannot be extended, they have little effect. Science really is self-correcting, and correction happens faster for results that matter. Not all fraud has the same impact. Most fraud is unfortunate, and should be reduced, but has a short lived impact.
I want to push back a little on "science is self-correcting" though. It's true in the limit, but correction has a latency, and that latency has real costs. In fields like nutrition, psychology, or pharmacology, a fraudulent or deeply flawed result can shape clinical guidelines, public policy, and drug development pipelines for a decade or more before the correction lands. The people harmed during that window don't get made whole by the eventual retraction.
The comparison I keep coming back to is fault tolerance in distributed systems. You can build a system that's "eventually consistent" and still have it be practically broken if convergence takes too long or if bad state propagates faster than corrections do. The fraud networks described in TFA are basically an adversarial workload against a system (peer review) that was designed for a much lower rate of bad input. Saying the system self-corrects is accurate, but it's not the same as saying the system is healthy or that the current correction rate is adequate.
I think the practical question isn't whether science corrects itself in theory but whether the feedback loops are fast enough relative to the rate of fraud production, and right now the answer seems pretty clearly no.
But I’m comfortable arguing that where science intersects with policy, fraud plays a very minor role. I suspect that most policy “mistakes” (policies that were adopted and then reversed) are more about the need for a policy in the absence of data (covid and masks), or subtle tradeoffs (covid and masks), or a policy choice that seems slightly better than an alternative (mammography) but also has poorly understood harms. Policy involves politics, and science unfortunately plays less of a role than one might like (and fraudulent science an even smaller role). This is not my field, but I cannot think of policies that were reversed because of discoveries of fraud (perhaps thalidomide and other drug approvals).
And financially too...
>Science really is self-correcting..
When the economy allows it...
My eyes have been opened!
Unfortunately I don't think a dialogue around vague anecdotes is going to be particularly enlightening. What matters is culture, but also process--mechanisms and checks--plus consequences. Consequences don't happen if everyone is hush-hush about it and no one wants to be a "rat".
That is where being good at politics comes into play. And if you are good at it, instead of being career-ending, fraud will put you in the highest of positions!
No one wants a "plant" who cannot navigate scrutiny!
I worked for exactly one academic, and he indulged in impossible-to-detect research fraud. So in my own limited experience research fraud was 100%.
It was a biology lab, and this was an extremely hard working man. 18 hours per day in the lab was the norm. But the data wasn't coming out the way he wanted, and his career was at stake, so he put his thumb on the scale in various ways to get the data he needed. E.g. he didn't like one neural recording, so he repeated it until he got what he wanted and ignored the others. You would have to be right in the middle of the experiment to notice anything, and he just waved me off when I did.
This same professor was the loudest voice in the department when it came to critiquing experimental designs and championing rigor. I knew what he did was wrong, because he taught me that. And he really appeared to mean it, but when push came to shove, he fiddled, and was probably even lying to himself.
So I came away feeling that academic fraud is probably rampant, because the incentives all align that way. Anyone with the extraordinary integrity to resist generally selected themselves out of the job.
However, among certain departments, at large schools, under certain leaders.. yes, and growing
$0.02
> Petitioners also formed a variety of organizations to create what they termed "marketable science." Pet. App. 1687a. For example, through the Council for Tobacco Research (CTR) and Lawyers' Special Accounts, petitioners jointly financed research programs that were directed by company lawyers and calculated to yield favorable results. Id. at 240a-275a. Petitioners regularly cited the conclusions of the scientists funded through these programs as if they were the objective results of disinterested research, without revealing that the scientists had, in fact, been funded by the industry. Id. at 195a.
That comes from here: https://www.justice.gov/osg/brief/philip-morris-usa-inc-v-un...
It's possible all the science was good but people were upset about who funded it.
Science is good, but it's mediated via corruptible humans.
"Trust the science" is anathema to the process. If anything, the chant should be "Doubt the science! Give it your best shot, refute it with data, with logic, provide a better explanation!"
For example, when deciding whether to give your kids certain vaccines or not, you really can't expect that new parents will read the primary literature and try to refute or confirm the conclusions based on the numbers and will trace through the citations and so on... Any of those claims will also have some online account on social media refuting it with equally scientifically sounding words. In the end it will come down to heuristics and your model of how the world works, which set of people operate with what kind of intention. Like maybe you know people working in the field who you trust and hear from them that generally this sort of stuff can be trusted. Or maybe you had some bad experiences getting screwed by "the establishment" (maybe even unrelated to medicine) and now you lump all this together and distrust them.
firstly, there are basically no legal repercussions for scientific misconduct (e.g. falsifying data, fake images, etc.). most individuals who are caught doing this get either 1) a slap on the wrist if they are too big to fail or in the employ of those who are too big to fail or 2) disbarred, banned, and lose their jobs. i don't see why you can go to jail for lying to investors about the number of users in your app but don't go to jail for lying to the public, government, and members of the scientific community about your results.
secondly, due to the overproduction of PhDs and the limited number of professorship slots, competition has become so incredibly intense that in order to even be considered for these jobs you must have Nature, Cell, and Science papers (or the field equivalent). for those desperate for the job, their academic career is over either way: if they are caught falsifying data, or if they don't get the professorship. so if your project is not going the way you want it to, then...
sad state of things all around. i've personally witnessed enough misconduct that i have made the decision to leave the field entirely and go do something else.
If it then turns out any of it is fabricated, you should be personally liable for paying it back
Some things should not have been democratized. Silicon Valley assumes that removing restrictions on information brings freedom, but reality shows that was naïve.
The Soviets may have rigged a few studies, but the democratized world now faces almost all studies being rigged.
Whether or not people will build resilient chains is another story, contingent on whether the strength of that chain actually matters to people. It probably doesn't for a lot of people. Boo. But inasmuch as I care, I feel I ought to be free to try and derive a strong signal through the noise.
The gate has been removed from the signal chain, and now the noise floor is at infinity.
in no sense was it corrupted by the desire to include a larger population in journal publications.
I guess, to convert it into this context, we can say that if you mix the high minded and infantile (which I think is what Internet and social media did), the high minded becomes infantile, instead of the other way around.
How many will see the connections between this and our capitalist mode of production? Probably few since modern lit/news is allergic to systemic analysis.
The blatant flaws of capitalism can't be ignored for much longer.
When I was a kid I thought it was the issue with USSR rotting to the core (it was), but when it crashed and later when the web appeared, it became obvious that it's a common problem with academia and its incentives.
There is no single solution, but public fund usurping is basically a law of capitalism, which is why I critique it in this context. Public money laundering is a developed industry in capitalism.
Socialism wouldn't be the answer to this, because socialism is famous for struggling with surpluses and shortages. All socialism would do is clamp down (hard) on academics, in which case you wind up with the familiar shortage where not enough PhDs are available to produce research for an industry.
And that's not a problem specific to socialism; that's the fallacy of central planning. The US government clamped down on welfare fraud, and the result was freakish government social workers sniffing people's bed sheets, rooting through drawers, and forcing everyone to document their partners.
This is the situation where there needs to be a market correction because the alternative could be far worse.
The real problem here is the fundamental lack of democratic control over our agencies. That our political organization is intensely lagging behind our productive organization. That our whole political will involves TRUSTING strangers to not be corrupt instead of directly democratizing these processes as much as possible.
But besides that, you cannot remove history from historical analysis. The reason socialist countries struggled in the beginning wasn't an inherent flaw in their organization, but the fact that they were under constant war by capitalist countries throughout their existence. Also keep in mind that most socialist countries did NOT have a whole section of the world from which to extract riches through murder (S. America, Africa, the Middle East, etc.), like western capitalist countries had. This is convenient for you to ignore. Maybe because you don't know, or don't care, about the super-exploitative history of these places and how they tie into western capitalism. But they are inherent to western wealth, and these countries' whole history is a struggle against this exploitation.
Not to mention that most of the countries on earth are capitalist and are very, very poor.
To add: Socialism has nothing to do with "clamping down" on X or Y industry, as you hypothetically claim would happen. Socialism is almost exclusively about removing the need to generate capital from production. It unleashes production from its historical ball and chain that is profiteering.
In a single sentence: Instead of production being held back by capitalists generating wealth we can produce for our own needs. It is self sustaining production.
Central planning is not fallacious. Your problem is with corruption, not democratic central planning. The US Govt is a pro-capitalist entity that pro-capitalists try to distance themselves from (ironically). So using them as an example isn't saying anything at all.
Central planning is not "allow a small group of people to decide things", as happens in the US Govt. Central planning is to take into account all sources of information on production to plan said production democratically.
This will always beat the highly inefficient speculation of capitalism, where trillions vanish on a whim because of a tweet, where crises occur every 8-10 years, and where the whole trade market is built to hide that it is mostly insider trading. Again, your problem is with corruption, not democratic central planning.
And the way to deal with corruption is to create more democratic bodies where avg people hold real power. I don't see you asking for that either. We call that socialism.
Marxism isn't "let's try something different based on ideas of justice."
Marxism is "society evolves through general stages: primitive, slave, feudal, and capitalist, determined by the level of production. Capitalism is holding back its current evolution into a society of plenty."
Good luck backing speculation and profit-gatekept production.
Profits are the deciding factor, not honor.