Friday, May 22, 2015


Nick Gass reports at Politico:
One of the authors of a recent study that claimed that short conversations with gay people could change minds on same-sex marriage has retracted it.
Columbia University political science professor Donald Green’s retraction this week of a popular article published in the December issue of the academic journal Science follows revelations that his co-author allegedly faked data for the study, “When contact changes minds: An experiment on transmission of support of gay marriage.”
According to the academic watchdog blog Retraction Watch, Green published a retraction of the paper Tuesday after confronting co-author Michael LaCour, a graduate assistant at UCLA.
The study received widespread coverage from The New York Times, Vox, The Huffington Post, The Washington Post, The Wall Street Journal and others when it was released in December.
“I am deeply embarrassed by this turn of events and apologize to the editors, reviewers, and readers of Science,” Green told the blog.
In an email to POLITICO, Green said he spoke with LaCour by phone on Tuesday and that he “maintained that he did not fabricate the data but told me that he could not locate the Qualtrics source files for the surveys on the Qualtrics interface or on any of his drives.”
Qualtrics was the survey platform that was purportedly used, though a company spokesman clarified to POLITICO that it did not collaborate with LaCour or anyone else on the study.
The problem came to light when researchers sought to replicate the study. But such replication efforts happen less often than one would hope, which raises the possibility that a great deal of bad work goes undetected. Monya Baker reported at Nature last month:
An ambitious effort to replicate 100 research findings in psychology ended last week — and the data look worrying. Results posted online on 24 April, which have not yet been peer-reviewed, suggest that key findings from only 39 of the published studies could be reproduced.
But the situation is more nuanced than the top-line numbers suggest (See graphic, 'Reliability test'). Of the 61 non-replicated studies, scientists classed 24 as producing findings at least “moderately similar” to those of the original experiments, even though they did not meet pre-established criteria, such as statistical significance, that would count as a successful replication.
The results should convince everyone that psychology has a replicability problem, says Hal Pashler, a cognitive psychologist at the University of California, San Diego, and an author of one of the papers whose findings were successfully repeated. “A lot of working scientists assume that if it’s published, it’s right,” he says. “This makes it hard to dismiss that there are still a lot of false positives in the literature.”