Many researchers still balk at the idea of social media serving as a check on scientific papers, but post-publication, crowd-sourced review on social media has been identifying errors missed by peer reviewers and editors.
The most reputable universities and labs enjoy a high level of trust within the scientific community, so papers coming out of those institutions, even when written by junior researchers, often pass the pre-publication review process.
But too often, manuscripts contain manipulated images, substantial typographical errors or violations of journal standards for test subjects. That was the case when the editors who published the recent Facebook social contagion study acknowledged that it did not meet the journal's requirements for human research subjects.
Two important papers describing a revolutionary method to make stem cells were retracted last week by Nature after being published in January. Thanks to social media, flaws in the papers were called out almost immediately. Lead author Haruko Obokata, a junior scientist, was found guilty of misconduct for splicing together images, presenting two images as one and committing minor plagiarism.
Critiques of scientific work on social media platforms like Twitter, the online journal club PubPeer, blogs and scientific forums (such as the National Institutes of Health's) have prompted several high-profile retractions and corrections by flagging problems that were missed in the pre-publication process.
Writing for Pacific Standard, Michael White, systems biologist at the Washington University School of Medicine in St. Louis, said that common complaints by researchers about social media sound like this: “Busy, serious scientists don’t have time to waste on Twitter or message boards, where any unhinged idiot with an Internet connection can rage away against a highly-technical paper that he doesn’t get.”
Regarding the stem cell papers retracted by Nature, several members of the research team claimed that they had independently verified Obokata's work before publishing, but nobody actually repeated the entire experiment; they all trusted Obokata's results. According to White:
Most journals have implemented routine checks for image problems and plagiarism, but these checks have their limits. In an editorial accompanying last week’s retractions, Nature’s editors argued that, in spite of some errors in the vetting process, “we and the referees could not have detected the problems that fatally undermined the papers. The referees’ rigorous reports quite rightly took on trust what was presented in the papers.” But if that’s true, how could online commenters spot the flaws so quickly?
While finding the time for post-publication review and maintaining quality control on social media may indeed be challenges, science can no longer rely on an honor system or blind trust. If editors and reviewers are not taking the extra steps necessary to determine the validity of scientific works, social media provides an open, public venue for checks and balances.
“These concerns are understandable,” wrote White. “But to those of us who have gone ahead and joined these online communities, it’s clear that they work. And as more scientists figure out how to integrate them into their professional lives, post-publication review will only get better.”