If the field were sane, you would train all the apprentices on replication studies. Once they demonstrated dispassionate expertise with the tools, only then would they be allowed to use those tools to test their own ideas, where they have a strong emotional preference for how the study comes out.
If universities hired grad students based on their replication work, not on their eye-popping original research, we'd have better science and better scientists.
The Bachelor's degree is loosely similar to an apprentice's role. The young apprentice (they were almost exclusively male) worked in a shop or with a priest for some time, learning the trade and the tools and gaining experience from 'level 0'. When you finished the apprenticeship, you were 'cleared' to work in other shops and were known not to be a total moron, break tools, or burn down shops.
The master's degree is just that. You are considered a master of the craft (like plumbing or printing) or the discipline (like the Book of Mark or Crusader history). As such, you typically have a master's-level project: something that is 'new' or shows that you know your stuff. That might be a very decorative silver bowl or a thesis.
The Doctorate means you are 'world class.' Not just a master of a field, but a paragon of it. Today, that means you are the expert in your little niche of underwater basket weaving. There should be no one better than you. This means you MUST have produced something new, or a novel way of thinking about God, or something. This has always been the idea, if not the practice.
To change that and say that the doctorate should play the bachelor's role is a very big shift. To suggest that PhDs should just replicate experiments is anathema to the idea of graduate education and would be a tremendous waste of time and energy. When you enter the PhD, you are assumed to already know how to do all the replication and to know the facts of the field. Granted, fields are exponentially larger than they were in the 1600s, but you still should know stats and biology if your PhD is in cancer biology.
I think you are totally wrong about this. What you are suggesting should be covered in undergrad and I think it largely is.
I do think it makes sense to stick with the PhD meaning you're a world class expert in some area, but if so then we need to adjust our expectations for what Master's level work means in the sciences. Right now it seems to just represent a hurdle you need to whiz past on your way to the PhD.
However, PhD students on average are very far from world class. And in just about every case they are simply looking at a problem so unimportant that nobody considered it before, and most likely nobody will ever look at it again.
It's almost always a waste of time for both the student and everyone else involved.
PS: There are plenty of counterexamples where PhD research happened to be valuable, but those are a tiny minority of cases.
Today the MS & PhD are both supposed to prepare you to do novel research & science. Since reproducibility is a core part of research & science, it would make sense for you to reproduce another study as part of your learning process.
We used to think placebo controls and double blinds were a waste of time. One great thing about the history of science your version excludes is that the way we do science is subject to review too, and we continually throw out what doesn't work in favor of methods that do.
Apprentices work on papers and research for their professors. No one is incentivized to do replication work: either you replicate it successfully (great) or you find flaws. In the latter case, you'll probably be a nit-picker anyway—also not really additive. On the off-chance that you refute a high-profile study (e.g., some of the outright instances of fraud), you might get some recognition, but now your name is associated with something negative (e.g., fraud: "X is not true") vs. positive ("X is true").
Finally, this is part of the role of review panels at journals: to pass the burden of proof.
It's important to note that though this seems to happen in all fields, it is far less common in some. I rarely hear of such things in astronomy; it does happen, but more often I hear about faculty explicitly working to ensure they can protect projects for their students so their students can get the credit deserved for doing the project. I hear of professors appropriating student research in biology and chemistry more frequently, however.
The practice of professors taking and publishing student research is certainly a problem, is unethical, and should be stopped. But the way in which it is usually discussed ("in science") implies that it's a systemic issue across all science, which (from my experience) isn't true. This topic of credit and attribution for research deserves more nuanced discussion and fewer blanket statements.
Edit: Spelling, wording clarification.
http://journals.plos.org/plosbiology/article?id=10.1371/jour...
https://www.google.com/url?sa=t&source=web&rct=j&url=http://...
http://www.npr.org/sections/money/2016/01/15/463237871/episo...
I see an entire civilization confused by a ubiquitous mass-communication tool which they invented to do exactly the opposite: enlighten them.
There was never a stream of pure, refined truth available that just somehow lacked a distribution method. All there ever was was a confusing mixture of an uncountable variety of different possibilities, theories, and facts of varying veracity. Now you can have more direct access to that confusing mixture, with all the attendant privileges and responsibilities.
There were things that claimed to be streams of pure, refined truth. Those claims didn't become false. They always were false. Now you can tell that better. It may not feel like progress, but it is.
Academic educational programs have been exactly this source-without-a-scalable-distribution-method for at least the past century.
You're unlikely to find a treatment of mathematics or physics or even Computer Science that is as well-curated, free of coercive bias, and well-presented as it often is in the undergraduate programs at universities that care deeply about their educational programs. (Such universities and colleges do exist; they unfortunately also tend to be expensive, selective, and not always well-represented in top N lists -- especially in the US.)
The reason the distribution method of university education is lacking has more to do with economics than anything else. Hiring truly high-quality people to teach small groups of people difficult content in a rigorous way is expensive. If you skimp on any of those features, the quality of the end result goes way down (cf. typical MOOCs and the university courses to which they are purportedly equivalent, in pretty much any dimension).
> There were things that claimed to be streams of pure, refined truth. Those claims didn't become false. They always were false.
Again, I fail to see how this critique applies in any meaningful way to high-quality undergraduate education programs in hard sciences. Nothing is perfect -- and in fact I doubt any of those programs ever claimed to be "streams of pure, refined truth". But they come far closer than your comment seems to suggest.
Now, going back to the contents of the article, universities certainly aren't "streams of pure, refined, and new truth". Such streams very likely don't exist.
I've noticed that questioning some scientific finding often makes people think I am either anti-science, anti-intellectual, conservative, religious, or any combination thereof (I am none of these). To state quite the opposite, I think not questioning science makes you religious - you are putting faith in the findings, rather than disputing them or scrutinizing them with the scientific method (not that I think there is anything wrong with faith or religion, within their own realm).
When you're discussing anything nontrivial, it takes a lot of effort/knowledge to dispute a flawed argument (disproportionately harder than making one, IMO), so disregarding someone based on their biases/agenda is a decent heuristic.
(Note that running a gish gallop doesn't mean you're wrong, it just means you're intellectually dishonest.)
(I try not to pay too much attention to them, and I'm not super interested in refuting their bullshit, so this comment is going to be pretty unsatisfying. Sorry.)
For example, a cited claim could be contradicted by the citation, or missing valuable context provided by the citation, and you'd need to look up the citation to discover this. Or the citation could be bogus, and you'd need to do further research. Or the claim could be a really trivial objection and its reply omitted, and again you'd need to do further research.
I'll accept that not all of these things necessarily make for a Gish gallop as such. But they do all seem to be in the broad category of "intellectual dishonesty through weak argumentation which is easier to perpetuate than refute". I'm not going to quibble about how to subcategorise that.
I'm reluctant to give specific examples, because that risks starting an argument about those examples. Hopefully this thread is now old enough to avoid that.
---
So for example, we can look at http://rationalwiki.org/wiki/Cryonics (permalink in case of edits: http://rationalwiki.org/w/index.php?title=Cryonics&oldid=159...). Just skimming it, the "engineering problems" section keeps referring to "freezing". Trivial objection: "freezing damages the cells!" Omitted reply: "yes, so we moved on from freezing". It does also talk about vitrification, but it makes no particular effort to distinguish between the two or to tell the reader that freezing is no longer current practice.
Elsewhere, "Alcor Corporation calls cryonics "a scientific approach to extending human life" and compares it to heart surgery.[8] This is a gross misrepresentation of the state of both the science and technology and verges on both pseudoscience and quackery."
What the citation actually says is: "Cryonics, like heart surgery, is a scientific approach to extending human life that does not violate any religious beliefs or their principles." They aren't making the comparison that RW wants to paint them as. They're sort of hinting in that direction, but they're not making any actual scientific claims here. (They do make scientific claims elsewhere. If they ever say that cryonics has the same chance of success as heart surgery, RW should feel free to call them on it. I'm pretty sure they never say that.)
Their citation for "Some advocates literally propose a magic-equivalent future artificial superintelligence that will make everything better" is not someone proposing that (later in the thread, he says nanobots would be sufficient but not necessary). You might want to read the citation for more replies to RW-level objections to cryonics.
I can tell you that "Belief in cryonics is pretty much required on LessWrong to be accepted as "rational."" is simply false, and probably wasn't true when it was written. The citation leads not to a survey of LWers' opinions on cryonics, but to the opinion of the founder of LW.
Note that these objections are true, and reflect badly on RW, even if cryonics is complete bunk.
And also note that I didn't follow a single citation on that article and then decide "no, this seems fair". I picked things to follow according to what I expected to see, but I was never pleasantly surprised.
(In the interests of fairness, I'll say that I am pleasantly surprised they don't call cryonics a scam. RationalWiki: not quite as bad as it could be.)
And I've spent over an hour on this now, when I should have gone to bed. So I'm done.
So the scientific debate equivalent of the Trump campaign.
For instance, Trump utters something completely off the wall. Retweets galore; everybody gets spun up. Say it reaches 1 million people, some portion of whom now repeat the soundbite. Someone like PolitiFact comes along after the fact, checks it, and points out the discrepancy, reaching a much smaller follow-up audience.
This is the sort of informational asymmetry that I find infuriating...
Amen to that. :)
I'm a bit tired of seeing critics of various hypotheses being dismissed out of hand simply "Because Science".
While this is certainly an issue, more often than not I see "critics" claiming broad statements without even trying to verify/falsify their claims scientifically.
And if I have to choose between two sides, where one tries hard to fulfill scientific requirements while the other doesn't care, I'm certainly on the scientific side - even though our current academic landscape is far from perfect.
People are bad at maintaining the "I don't know" stance. But it's sometimes the only objective one.
Somebody's ignorance of the background of some scientific claims is absolutely not a reason to spend the energy considering their claims of "wrongness." The only thing we should consider is how we can educate as many people as we can, but we certainly should not treat them as having any contribution to the understanding of the topic.
An example for a case study:
https://www.youtube.com/watch?v=8athT6tfRIg
"Horizon Zoom Boom Earth Flat"
The modern concept of "journalistic balance" (in reality, trying to win eyeballs by creating "conflict"), in which 99% of the world's scientists are represented by one person and some group with silly claims by another, and the two are then given the same air time or coverage space, is exactly one of the things that produce this effect:
"The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it."
Also, never ignore the "accidental" fact that the "silly" group often represents the interests of people with an immense amount of wealth and/or power, or with a potential financial or power gain from maintaining the "controversy." That's where events really get nasty.
So that leaves us with the critics. If you can't be bothered to do the work and are taking potshots from the sidelines there really isn't likely to be much value in your criticism.
It's not impossible that there is value there, but the odds aren't very good at all, so you can't really blame people for not engaging with it easily. I agree it is unfortunate when people are quickly dismissive of an idea with "Because Science". But it remains true that by far the best answer to "Because Science" is "Not so fast, what about Science?"
Picking at potential flaws in a study rarely adds much signal. Suggesting improvements and helping make those happen has potentially immense value.
Arguing about it on the internet usually has negative value.
This speaks to part of the problem - the undue weight that non-scientists place on expert opinion. Trained scientists see appeal to authority arguments for what they are: bullshit.
I see this most frequently in areas for which few controlled studies are available to light the way. Human nutrition and toxicology come to mind. Oddly enough, these are the areas that are most likely to be of interest to non-scientists, setting up a vicious cycle of guru-ism complete with economic incentive to continue spouting nonsense.
Speaking as a non-scientist, I can recognize an appeal to authority probably just as well as a scientist. But having recognized one, what do I do? I lack the training, knowledge, and time necessary to evaluate the research directly. I can choose to only trust studies that are peer reviewed, or in major journals, or backed by whatever relevant government body there might be, or that my friend who knows about this thinks are right. And maybe that's a good idea, but it's still just appealing to different kinds of authority.
Most of the time laypeople have no realistic alternative to expert opinion.
The tendency of the general public would be to look at that study and say the first method is better than the second (assuming the general public cared at all, which they don't). That's incorrect. I would only conclude that a method is actually better if several other people found similar results for similar methods in separate studies.
People tend to place far too much importance on one paper or one study. In theoretical research this can be okay sometimes, but in applied research this is almost always the wrong way to go.
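The point about needing several studies can be made concrete with a quick Monte Carlo sketch. All the numbers below are assumptions I picked for illustration, not anything from the thread: in a field where most tested hypotheses are false, a lone significant result is weak evidence, while two further independent replications filter out most flukes.

```python
import random

random.seed(42)

N = 100_000        # hypotheses tested; assumed number, purely illustrative
BASE_RATE = 0.10   # assumed fraction of hypotheses that are actually true
ALPHA = 0.05       # chance a single study "finds" a false effect
POWER = 0.80       # chance a single study detects a true effect

def study(is_true):
    """One study: True if it reports a 'significant' positive result."""
    return random.random() < (POWER if is_true else ALPHA)

single_hits = single_true = 0
triple_hits = triple_true = 0
for _ in range(N):
    is_true = random.random() < BASE_RATE
    if study(is_true):
        single_hits += 1
        single_true += is_true
        # Demand two further independent replications before believing it.
        if study(is_true) and study(is_true):
            triple_hits += 1
            triple_true += is_true

print(f"after one study:     {single_true / single_hits:.0%} of positives are real")
print(f"after three studies: {triple_true / triple_hits:.0%} of positives are real")
```

Under these assumed parameters, roughly a third of single-study positives are flukes, while almost all thrice-replicated results are real, which is exactly why one paper should not move anyone's beliefs much.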
https://mises.org/library/skeptics-case
Note: not "climate change" in general, or even just "global warming", but the specific hypothesis that it's all down to CO2 emissions by humans, invariably followed up with a proposed solution consisting of an increase in state power and interference in the markets in order to avert certain catastrophe, which has the neat side effect of allowing the political authorities of the world to impose yet another tax on almost all economic activity on the planet.
And the harder that expanded problem/solution combination is critically evaluated, the more neatly it all gets rolled up into a label like "climate change".
Nothing to see here, move along.
Ah yes, it can be abused, of course.
http://www.nature.com/news/reproducibility-1.17552
I believe this is the primary issue, and it stems from one of two causes:
1) secret sauce in research -- details are lacking because there is a push to commercialize things that come out of academia
2) insufficient experimental design -- small sample size, poor controls, etc.
I would like to see a publication venue where, as a condition of publication, the result must be reproduced in one or two separate, independent labs. This would almost double the required funding (maybe less, because you eliminate false starts). Maybe just a few institutions could handle many reproductions.
There is a bit of a self-healing aspect in that non-reproducible and non-interesting studies just get dropped on the floor. However, it would lend a lot of credibility to a journal that required independent confirmation of research.
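The back-of-the-envelope math for that proposal can be done with Bayes' rule. The parameter values below are assumptions of mine, not numbers from the comment: if 10% of tested hypotheses are true, studies have 80% power, and the false-positive rate is 5%, then requiring one independent confirmation roughly doubles the cost per paper but sharply raises the fraction of published results that are real.

```python
# Illustrative assumed parameters, not measurements from any field.
base_rate = 0.10  # prior probability a tested hypothesis is true
power = 0.80      # P(positive result | hypothesis is true)
alpha = 0.05      # P(positive result | hypothesis is false)

def ppv(n_studies):
    """Fraction of published positives that are true, if publication
    requires n_studies independent positive results."""
    true_pos = base_rate * power ** n_studies
    false_pos = (1 - base_rate) * alpha ** n_studies
    return true_pos / (true_pos + false_pos)

print(f"publish after 1 positive study:   {ppv(1):.1%} true")  # 64.0%
print(f"publish after 2 positive studies: {ppv(2):.1%} true")  # 96.6%
```

Under these assumptions, the replication requirement buys roughly a thirty-point jump in reliability for roughly double the funding, which is the trade being gestured at above.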
I personally fell into this trap, not because I was trying to refute something, but because I was trying to back up one of my own assumptions and I found that Lord Voldemort was citing Obscure Reference X to back up the same assumption. The joke was on me when I actually tracked down Obscure Reference X in the 30-years-out-of-print proceedings of a symposium on Y. Obscure Reference X had nothing at all to say about my assumption! Needless to say, I no longer trust Lord Voldemort or anyone who publishes with him.
Pastebin?
Fuck these frauds.
The tactics I and the original article were describing allow Lord Voldemort to clothe any assertion he likes in the robes of science. The root of this problem is a broken incentive system for publication. We're required to publish a lot to show productivity, and we're trained to put in lots of citations to back up our work. This creates an unmanageable avalanche of worthless papers and makes it easy to build a false trail of scammy citations.
Compare this to the situation 50 years ago, before publication inflation had set in. John Nash wrote a 30 page dissertation, and cited two works at the end of it. Simon's classic "Behavioral Model of Rational Choice" cited 5 works. The entire Cowles commission report on Activity Analysis devoted only 4.5 of its 418 pages to citations, and that included a detailed lit review in its introduction. Nothing makes it to press these days without five to ten times as many citations.
I think it's also about time we had post-publication peer review, and about time scientists got off their high horses and started responding to it.
In nature, immune systems evolve based on natural selection. We need to keep trying new things and see what works. Maybe "peer review" needs to extend not only to individual papers, but to scientists and institutions themselves. Maybe their reputation needs to be evaluated over time so that a scientist who has been "infected" then has a standing presumption against their work until they can overwhelmingly demonstrate they are "healed."
This idea might be good or might be terrible, but I believe that we should be trying things like this to see if any of them stick and cause more good than harm.
The "Publish or Perish" culture has made Gish gallops much harder to catch and almost impossible to punish.
It would be great if we could fund science purely for science's sake, and if scientists didn't have egos or careers or reputations or children, but despite the objections, I will expect a certain amount of bullshit to continue unabated. In the meantime, the author's most important point, IMO, is "if you love science, you had better question it, and question it well, so it can live up to its potential." This is true, and always will be, regardless of how much bullshit is involved!
We routinely say things that aren't quite true -- sometimes with the best intentions or out of necessity. The truth can be a very complicated thing.
What we are calling "bullshit" may go under more serious names (and serious discussions) if we look at specific cases - e.g. finance, medicine, game theory, biology, etc. My back-of-the-envelope definition is that of an "approximation to the truth".
Given the importance of academia and its impact on policy in general perhaps something similar should exist?
I'm not sure if ethical tests and committees curb bad behavior but it could be a start or at least improve awareness. Maybe there is even something like the above already in place for academia that I'm unaware of?
https://www.washingtonpost.com/news/wonk/wp/2016/02/17/scien...
This assumes that we know, a priori, what is bullshit and what is not. Sometimes bullshitters know they are bullshitting, but most often they do not.
What I think is really going on here is that the scientific method is crap for this sort of thing, there is no such thing as "empirical truth" that exists in the real world, and subjective debate, reasoning, and so on, is hard and requires enormous effort on all parts.
Let's consider another hypothetical work of bullshit by one Maleficent. I, an oblivious third party, come across her published work. How am I to know that this work is bullshit?
The traditional response is that we use the scientific method, verifiability and empiricism, to test the bounds of a proposed model against observation. "You said Planet X should be at y, but it is actually at z, therefore bullshit." To quote Laurence Laurentz: 'Would that it were so simple.'
The problem here is that observation is fraught, and is usually based on its own assumptions. For example, let's say Maleficent is studying treatments for depression; to do so she must observe whether an individual is "depressed" or "not depressed". How the fuck should she do this? Frequently people use questionnaire measures like the Beck Depression Inventory (BDI). Is this a valid tool? I have been heavily depressed when I scored low on the BDI, so I would say not. But what IS a valid tool? Is there any objective criterion we can bring to bear, here? What is it? Does "depression" even exist as a thing?
This problem, that observations are themselves laden with assumptions and based on pre-existing models, is a mire that all science is forced to wade through. Before we make decisions about anything, we must have a lens with which to view the world - but that, itself, is a decision!
More broadly speaking this is a problem with deductive reasoning, and because empiricism claims to be based on deductive reasoning it falls into error as a result. Because it is impossible to begin with truth, any scientific observation must be riddled through with approximations. And usually, we are unaware of the approximations that are blinding us when we build flawed models on top of them.
This is the main reason we get bullshit: there is no good way to do science.
Having a vested interest doesn't automatically discredit the things one says for their cause.
In an ideal world where everyone was a rational, disinterested superhuman who devoted themselves to a scientific pursuit of truth, only ever focusing on arguments would be the perfect approach.
Unfortunately there do exist people who deliberately and knowingly bullshit other people in order to get what they want. Such people absolutely win from a policy of "attack the argument, not the person" because their goal is not to further understanding or even win arguments, it's to confuse people into acting a certain way ... often paralysing them into inaction by creating the appearance of an unending debate.
Thus refuting one bullshit argument simply results in two more popping up to replace it. Even if some people remember the first argument that was refuted, this doesn't help, because:
• Lots of other people won't remember the names of who was involved, or won't be aware of the previous arguments at all.
• Of the people who do remember, the fact that a debate was happening at all may be taken as evidence that the people involved must be "experts", and thus the fact that they lost the argument doesn't necessarily reduce their credibility.
• If someone was a good enough bullshitter to require a response in the first place, they will probably be good enough at it a second time to ensure that if they get no response, some people will start to assume they must be correct.
This can rapidly turn into complete defeat by the people who are actually making reasonable points because they simply become exhausted and burn out faced with an unending wall of plausible sounding nonsense, which then eventually replaces reality with itself.
I've seen this problem play out in brutally sharp detail not so long ago. The people involved knew they were bullshitting, but didn't care because in their eyes it was all for the greater good.
The only solution to this is, in fact, to attack the credibility of the people doing it once they have repeatedly made absurd or invalid arguments, because otherwise it's much harder for people to learn to tune them out.
If I present my argument in a manner that is intellectually dishonest and/or distorts your position, you can call me out on that without going ad hominem.
It serves not the public but the debater, as it avoids the actual argument, with a side dish of confusion for the public.
It is, however, notable for often being used by those without actual arguments, and thus should always be countered by an argument for your cause combined with one against theirs. But if left uncountered it might take on a life of its own, and while this might be less true in science, it is killing in politics (see the electability 'arguments').
It can be a viable argument if bias is overly dominant in the research but is always supporting and never a single argument.
note: This is an opinion, like most 'arguments' are.
Why not money? Most of them built a career that is celebrated, while being lousy scientists, by playing the Creationist card and appealing to particular political/religious publics. And working for similarly minded organizations and "research" institutes.
>If you say religious belief, you'd be right, but it's not a terribly effective criticism because the majority of U.S. scientists are religious, too.
That would be relevant only if they let their religion influence their science, which a computer scientist or a chemist doesn't have to, or at least not as much as an evolutionary biologist.
So yes, for some people there is some financial interest. But I would say this is not the main factor. If you have defended an idea for a long time, maybe sacrificing some part of your life, like family or friends, to your passion, at some point it becomes so tied to your identity that it must be impossible to realize how wrong you were.
The latter is true for any kind of belief, religious, political or even scientific.
[1] http://creationmuseum.org/
[2] https://en.wikipedia.org/wiki/Alex_Jones_%28radio_host%29#Fi...
Plus, some people simply can't suspend disbelief of self-organizing systems. I have to wonder if there was trauma in science class for many of them at one point.
When I mentioned (as a 5th-grade student) to another 5th-grader that evolution and Creation aren't mutually exclusive, he burst into tears.
In general, highly charismatic religion seems somehow related to the general changes of the 1960s, coupled with some more sinister uses of mass media. I have to wonder if decades of television have made people addicted to willing suspension of disbelief of a particularly unexamined kind.
Best way to deal with biased bullshit is open discussion where the biased parties make their arguments, and others evaluate them on quality and completeness. Competition on who is better at hiding biases and conflicts of interest, as opposed to who has the strongest argument, is not the best way at getting at the truth.
Now, knowingly and repeatedly making the same questionable arguments without even mentioning criticism, is quite another matter and should indeed count against one's reputation. Too bad we so often see all sides guilty of that one :(
Or you could say the scientist's interest is to keep his job, which requires a certain volume of publications, and that conflicts with publishing only results that are both sincere and worthwhile.
I think the procedure itself entails a conflict of interest: If you dispute certain findings by certain researchers (and you have a clear agenda), how can you be trusted to write an objective year-end summary of relevant findings in the field? I think these kinds of articles are the root of the problem. Of course, it would be far from easy to find an objective voice interested in writing these without having an 'ulterior opinion'. Still, I think editors should at least bar researchers from summarizing what they have a stake in (or summarizing a debate that they have taken part in during the last couple of months).
That 2.8 MB .jpg is Bullshit
Notice how a couple of Voldemorts have gone through the comments and down-voted anyone who questioned catastrophic global warming predictions. As Voldemorts are wont to do.