EDIT: I hear complaints from students all the time that they are not allowed to cite Wikipedia. I tell them no, you should instead cite Wikipedia's own citations. They invariably tell me how much better they do academically because of that.
Students should be encouraged to cite Wikipedia when they find information in Wikipedia, so that when they grow up and start writing real research papers they will continue citing Wikipedia when they find information there.
Finding information somewhere and then not citing it (or citing some random other source that actually says something different) erodes the whole academic project. Any teacher who tells their students not to cite Wikipedia should be ashamed.
I guess it's fine to be idealistic here but most reviewers would look down upon your work if you do this. And that impression can be the difference between acceptance and rejection. I'm sure ideally this shouldn't happen, but it is what it is for now.
However! It's also important to teach students how sources differ in quality or "authoritativeness". The problem with citing Wikipedia is not the citing per se but relying on that source. A peer reviewed academic journal is considered more reliable, although no source should be taken as gospel and definitive truth, especially on controversial topics.
You can cite blog posts, personal letters, even personal oral communication! The point is to let the reader know where your info comes from. Making students memorize rules like "don't cite Wikipedia" just results in a cargo cult, not actual understanding of critical thinking about sources.
References themselves are gameable. The authors argue that references mentioned in Wikipedia are more likely to be cited by a judge!
This is not about the quality of Wikipedia, but about its undue influence and how easy it is to game it, references included!
The difference is that a crowdsourced resource like Wikipedia is easier to manipulate by people who understand the system. There are plenty of PR specialists who get client articles pushed into Wikipedia or updated to their liking.
Wikipedia is a treasure, but it’s also vulnerable to a bunch of different attacks.
False ideas can be spread simply by overemphasizing biased true statements and disregarding true statements that don't fit the narrative.
The effect can be multiplied by controlling the discussion through selecting the right 'questions' that are discussed.
Snopes is the exemplar.
If you're looking for major/important sources to read on a topic, not just a quick way to halfway-fake a works-cited section, I've found it valuable to locate some representative, recent academic book in the field and read the author's introduction and other pre-chapter-1 material. These will often include a lot of name-dropping of what are considered major works in the field. There may also be a list of abbreviations the book will use, and those often include several major works in the field that'll come up often in the body text.
That's your list of books and papers to find and read. Repeat that technique with each of those books and papers, too, if you want to keep going deeper.
Often you can get enough off an Amazon or Google preview of a book for this to work. Plus, libraries exist, and you pull that kind of information out of several books (which can be handy—anything that appears more than once deserves special attention) in less than an hour, without checking anything out. And there's always Library Genesis, which may not have every book but probably has at least one in your interest area that can be mined in this way.
Wikipedia's sort of useful for this, at least for tracking down a first work to attack with this approach. The problem is that many articles don't cite highly-regarded, authoritative, or landmark works on the topic so much as whatever the author(s) happened to have handy or found easiest online. (A whole hell of a lot of great information is still not available on the Web, even in 2022, including material in very recent books, not just pre-Web ones, or is available on the Web only in poorly indexed or not-indexed-by-web-search-engines under-copyright ebooks.)
https://aacrjournals.org/cancerdiscovery/article/12/1/31/675...
Edit: "competently" replaced "usefully"
This isn't an entirely trivial matter, as it shows that "random" persons may be able to shape judicial and scientific narratives through Wikipedia.
A wikipedia article is going to have orders of magnitude more influence than nearly any journal article or textbook, and scholars should put at least a basic amount of effort into improving them.
It should be seen as a kind of public outreach.
No, they don't. Correlation is not causation, even if you see it in a randomized experiment. With shoddy reasoning like this, it's no wonder science has a replication crisis.
https://towardsdatascience.com/establishing-causality-part-1...
https://bolt.mph.ufl.edu/6050-6052/unit-2/causation-and-expe...
https://towardsdatascience.com/establishing-causality-part-1...
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235704/
https://escholarship.org/uc/item/42v4w8k1
http://ippsr.msu.edu/public-policy/michigan-wonk-blog/random...
https://www.cs.cornell.edu/courses/cs1380/2018sp/textbook/ch...
I’ll leave open the possibility that it’s everyone else that’s wrong, but RCTs are used to establish causality and are as much “proof” as you’re gonna get in science.
Hell, ya know what, I’ll just let the actual paper explain it.
> The second, more important advantage of randomized field experiments is that they can distinguish causation from correlation. The ability to prove causal relationships derives from the combination of two characteristics. The first is having a control group, that is, a group unaffected by the intervention (in our case, publication of a Wikipedia article on the topic) that can be used as a counterfactual to estimate the size of causal effects. The second is randomization, that is, random assignment into the control and intervention groups. With sufficient data and a sound experimental design, the experiment can reduce the probability of being misled by correlation or noise to whatever arbitrarily small value is desired.
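The design the paper describes boils down to: randomly split topics into a group that gets a Wikipedia article and a group that doesn't, then compare average outcomes. Here's a minimal Python simulation of that logic (the topic counts, effect size, and noise level are all made-up numbers, not figures from the paper) showing why randomization lets a simple difference in means estimate a causal effect:

```python
import random
import statistics

random.seed(42)

# Hypothetical numbers: N topics, with publication causing an
# assumed average of TRUE_EFFECT extra citations per topic.
N = 1000
TRUE_EFFECT = 0.5

def citations(treated: bool) -> float:
    baseline = random.gauss(2.0, 1.0)  # noise common to all topics
    return baseline + (TRUE_EFFECT if treated else 0.0)

# Random assignment into intervention (article published) and
# control (no article) groups.
topics = list(range(N))
random.shuffle(topics)
treatment, control = topics[: N // 2], topics[N // 2 :]

treated_outcomes = [citations(True) for _ in treatment]
control_outcomes = [citations(False) for _ in control]

# Because assignment was random, the control group's mean is an
# unbiased estimate of the counterfactual, so the difference in
# means estimates the causal effect of publication.
estimate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"estimated causal effect: {estimate:.2f} (true: {TRUE_EFFECT})")
```

With enough topics, the noise in the difference of means shrinks toward zero, which is the "arbitrarily small probability of being misled" the quote is talking about.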
No, they're not. The real "gold standard" in science--the standard that prevails in, for example, physics or chemistry--is a controlled experiment. Not just a "randomized controlled trial", but a controlled experiment, where you can actually dictate exactly what state the things you are going to experiment on start out in. And the eventual output of controlled experiments is a predictive model--a model that can predict, accurately, what will happen if you run further experiments. That is what it takes to truly "establish causality".
But in most other domains, including the one under study here, controlled experiments simply cannot be done and predictive models with any kind of accuracy simply don't exist. The correct response to that unfortunate fact is to realize that we can never achieve the same level of confidence in these other domains as we can in domains like physics or chemistry where we can do controlled experiments. Unfortunately, the response "science" has settled on instead is to pretend that it doesn't matter--that because we can't do controlled experiments in these other domains, the universe will somehow magically lower its standards of what it takes to achieve the level of confidence we want. But the universe doesn't care what we can or can't achieve.
More likely, today's clerks look at Wikipedia.
Fortunately, this is the kind of thing we can all sit back and laugh at. If a candidate can’t be arsed to hire a competent PR firm to handle their public profiles, then they probably don’t deserve the position.
[0] https://en.m.wikipedia.org/wiki/2022_New_Mexico_gubernatoria...
The image for Mark Ronchetti was uploaded by a user who shot a video of him in 2020. Since they created the video, they own the copyright to the image and can license it to Wikimedia Commons:
https://commons.wikimedia.org/wiki/File:Mark_Ronchetti.jpg
Searching for other CC-licensed images doesn't return anything:
https://duckduckgo.com/?t=ffab&q=mark+ronchetti&iax=images&i...
https://www.google.com/search?q=mark%20ronchetti&tbm=isch&tb...
I've emailed his campaign to ask if they have a photo they can license appropriately and upload to Commons. If you have a better photo of him (that you took yourself and are willing to license for free use), you can upload it here:
https://commons.wikimedia.org/wiki/File:Mark_Ronchetti_Heads...
...and requested they complete the license authorizing its use:
https://commons.wikimedia.org/wiki/Commons:Email_templates
I'm giving it about 50/50 odds they'll fill out the form and the photo will stay up, but at least I tried.
I agree with the other posters who point out that this is a bit of an unfair advantage for incumbents (who have government-sponsored public domain photos available for use). It'd be an awesome thing for volunteers to try to help with, by reaching out to less tech-savvy campaigns as I've done here.
I've been to New Mexico maybe 4-5 times in my life and have basically zero stake in this race, but I guess duty calls[0]. ;)
To be sure, I don’t even live in Mark’s state and it’s on him if he wants another picture on his profile. You’re a stellar citizen, though, for taking action on your own!
I simply wanted to point out a trend on Wikipedia. Mark is just one of more than a dozen candidates with bad pictures or, worse, empty pictures that seem to be the result of sabotage. If you look on these candidates’ pages (or Mark’s page), you can look at the revision history and determine that there were past pictures taken down or replaced near election times.
Again, I don’t care. It’s up to these candidates to fix this stuff if they want to win. With the amount of money they’re bringing in, you’d think they could hire someone to spin them up nice profiles with ‘Political Stances’ sections and quirky stories about their family life. I wonder if they intentionally keep their profiles empty to funnel traffic to their personal websites instead.
> The Hunter Biden laptop controversy involves a laptop computer that conservative media outlets claimed without evidence had belonged to Hunter Biden. They further stated that the laptop had been dropped off but never collected by an unknown individual at the Wilmington, Delaware repair shop of a blind proprietor in April 2019.
Three paragraphs saying it's all made up, you can't trust the NY Post's reporting, it's probably just Russian propaganda, and then it finishes with:
> In March 2022, The New York Times reported it had authenticated some emails "from a cache of files that appears to have come from a laptop abandoned by Mr. Biden in a Delaware repair shop."[10][11] Also in March, The Washington Post reported that two security experts authenticated thousands of the 129,000 emails, though the vast majority of the laptop contents, including most of its emails, could not be authenticated.[12] Among the emails that The Washington Post was able to authenticate was the Pozharskyi email that formed the basis of the New York Post's original article.
[0] https://en.m.wikipedia.org/wiki/Hunter_Biden_laptop_controve...
There’s even a whole “judicial philosophy” based around this method of deciding first (based on personal preference or coin tosses or bribes or whatever) and then cherry-picking citations to pretend it wasn’t really your own decision / avoid having to explain your reasoning: so-called “originalism”. And it goes back decades, long before Wikipedia.
It's true that since they find that the gain appears to be concentrated in 'positive' citations, used as justification, they probably didn't flip many decisions immediately (if any). But they also do followup linguistic analysis to show that (like the earlier studies) judges are borrowing language in describing their decisions. So you are going to have an accumulating effect here: the citations at zeroth order are used for justification, but that makes those cases better known later on, and they will be described as the article describes, and increasingly interpreted that way when read by later judges due to being precedent, who will then copy it (that's how common law is supposed to work!). And that may well begin flipping cases.
I guess there's only one way to really find out :D
> The experiment featured Wikipedia entries authored by faculty and by law students under faculty supervision, who each had access, through their university library, to all the relevant primary and secondary legal materials available to judges and their clerks. This assurance of accuracy and of informed analysis in the content of the entries — though short of that offered by a specialist textbook — indicates that judges or lawyers would be unlikely to be misled by what they might read.
I find nothing ethically questionable at all about publishing accurate legal analysis on a case anywhere, including Wikipedia.
The issue isn't that they were misleading anyone; the issue is that they were, for research purposes, trying (successfully) to influence the outcomes of court cases without the subjects' consent.
https://en.wikipedia.org/wiki/Open_access_citation_advantage
Articles on Wikipedia and in open access journals are more accessible than paywalled sources, which means that people will read them more often.
Edit: "legal precedent" replaced "sources" for clarity of the significance
This is just that with a different medium.
I think you misread the article (or I’m misreading you). There were no counterfactual legal precedents published. They took a set of cases and published Wikipedia articles for half of them; for the other half, they did not (the non-publication was the counterfactual, not the contents of the articles).