Advancing Voluntary, Informed Consent to Medical Intervention
Dr. Sydney Brenner, PhD, Nobel Laureate in Physiology or Medicine (2002), deplores the current profit-driven culture in science, a culture in which he believes, as do a significant and growing number of scientists and physicians, genuine breakthroughs and important discoveries are impeded. In a recent interview in King’s Review magazine (2014), he argued that this culture produces rubbish while impeding genuine scientific breakthroughs and discoveries that do not follow the dictates of commerce.
He deplores the fact that those who hold the purse strings dictate and limit the parameters of the pursuit of science:
There’s no exploration any more except in a very few places. You know, like someone going off to study Neanderthal bones. Can you see this happening anywhere else? No, you see, because he would need to do something that’s important to advance the aims of the people who fund science.
The supporters now dictate what paths scientists may explore, demanding that time be spent on crafting a constant stream of high impact publications. . . The bureaucrats of science do not wish to take any risks. So in order to get [a project] supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow.
He notes that the pressure to publish or perish has led American academics to develop a new culture in science built on the slavery of graduate students. He is especially critical of the dictatorial power of so-called high impact scientific journals:
“I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists.
There are universities in America. . . that won’t consider people’s publications in low impact factor journals. . . . This has assembled a most ridiculous group of people. I campaigned against this [culture] because I think it is not only bad, it’s corrupt. In other words, it puts the judgment in the hands of people who really have no reason to exercise judgment at all. And that’s all been done in the aid of commerce, because they are now giant organisations making money out of it.”
He is also appalled by the fact that journals have stripped authors of their copyrights:
“[T]here was a time, and I’m trying to trace the history, when the rights to publish, the copyright, was owned jointly by the authors and the journal. Somehow that’s why the journals insist they will not publish your paper unless you sign that copyright over. It is never stated in the invitation, but that’s what you sell in order to publish. And everybody works for these journals for nothing. There’s no compensation. There’s nothing. They get everything free. They just have to employ a lot of failed scientists, editors who are just like the people at Homeland Security, little power grabbers in their own sphere.”
Because publications have become a proxy for research quality, publication in high impact factor journals is the metric used by grant and promotion committees to assess individual researchers. The problem is that impact factor, which is based on the number of times a journal’s papers are cited, does not necessarily correlate with good science. To maximize impact factor, journal editors seek out sensational papers that boldly challenge norms or explore trendy topics, and they ignore less spectacular but equally important work, such as replication studies and negative results. As a consequence, academics are incentivised to produce research that caters to these demands.
Indeed, he cites a recent study which found that only six of 53 landmark studies in cancer research were replicable. In another study, researchers were able to replicate only a quarter of 67 influential papers in their field. Wistfully, he suggests that: