The Guardian: Scientific Fraud Is Rife – It’s Time To Stand Up For Good Science
Science is broken. Psychology was rocked recently by stories of academics fabricating data, sometimes over the course of entire careers. And it isn’t the only discipline with problems – the current record for fraudulent papers is held by anaesthesiologist Yoshitaka Fujii, with 172 faked articles.
These scandals highlight deeper cultural problems in academia. Pressure to turn out lots of high-quality publications not only promotes extreme behaviours but also normalises the little things, like the selective publication of positive novel findings – which leaves “non-significant” but possibly true findings sitting unpublished on shelves, and much-needed replication studies undone.
Why does this matter? Science is about furthering our collective knowledge, and it happens in increments. Successive generations of scientists build upon theoretical foundations set by their predecessors. If those foundations are made of sand, though, then time and money will be wasted in the pursuit of ideas that simply aren’t right.
A recent paper in the journal Proceedings of the National Academy of Sciences shows that since 1973, nearly a thousand biomedical papers have been retracted because someone cheated the system. That’s a massive 67% of all biomedical retractions. And the situation is getting worse – last year, Nature reported that the rise in retraction rates has outstripped the growth in the number of papers being published.
This is happening because the entire way that we go about funding, researching and publishing science is flawed. As Chris Chambers and Petroc Sumner point out, the reasons are numerous and interconnected:
• Pressure to publish in “high impact” journals, at all research career levels;
• Universities treat successful grant applications as outputs, on which continued careers depend;
• Statistical analyses are hard, and sometimes researchers get it wrong;
• Journals favour positive results over null findings, even though null findings from a well-conducted study are just as informative;
• The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.
Problems occur at all levels in the system, and we need to stop stubbornly arguing that “it’s not that bad” or that talking about it somehow damages science. The damage has already been done – now we need to start fixing it.