The publication of a scientific study in a peer-reviewed journal is commonly regarded as a kind of “ennoblement” that confirms the study’s worth. The peer-review process was designed to assure the validity and quality of the science that seeks publication, but it does not always succeed. When peer review fails, sloppy science gets published.
According to a recent analysis published in the Proceedings of the National Academy of Sciences, about 67 percent of the 2,047 studies retracted from biomedical and life-science journals (as of May 3, 2012) resulted from scientific misconduct. The same PNAS study attributed about 21 percent of the retractions to scientific error. This indicates that failures in peer review led to the publication of studies that should never have passed muster. And this relatively low number of studies published in error (roughly 436 of the 2,047) might be only the tip of a larger iceberg, given the unwillingness of editors to take action.
Peer review is clearly an imperfect process, to say the least. Shoddy reviewing or reviewers have allowed subpar science into the literature. We hear about some of these oversights when studies are retracted due to “scientific error.” Really, the error in these cases lies with reviewers, who should have caught such mistakes or deceptions in their initial review of the research. But journal editors are also to blame for not sufficiently using their powers to retract scientifically erroneous studies.
Case in point: In May 2011, the International Agency for Research on Cancer (IARC) classified cell phone radiation as a possible human carcinogen, based predominantly on epidemiological evidence. In December 2011, an update of the largest recent epidemiological study, the so-called Danish Cohort study, published in the British Medical Journal, failed to find any causal link between brain cancer and cell phone radiation.
However, as a number of scientists, including myself, have pointed out, the peer review of the Danish Cohort study failed to catch several flaws that invalidate the study’s conclusions.
The only information collected on a person’s exposure to cell phone radiation was the length of their cell phone subscription. Hence, two people using cell phones, one for many hours and the other for only a few minutes per week, were classified and analyzed in the same exposure group if their subscriptions were of equal length. In the Danish Cohort study, highly exposed and nearly unexposed people were therefore mixed together in the same exposure groups.
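To see why this matters, here is a minimal sketch of what grading exposure by subscription length alone implies. The data, field names and grouping threshold are hypothetical, not taken from the study itself:

```python
# Hypothetical illustration: exposure graded only by subscription length.
def exposure_group(subscription_years: int) -> str:
    """Assign an exposure group from the only variable the study collected."""
    return "long-term subscriber" if subscription_years >= 10 else "short-term subscriber"

# Two made-up subscribers with equal subscription lengths but very different usage:
heavy_user = {"subscription_years": 10, "weekly_call_hours": 20.0}
light_user = {"subscription_years": 10, "weekly_call_hours": 0.1}

# Usage was never recorded, so it cannot influence the grouping:
assert exposure_group(heavy_user["subscription_years"]) == \
       exposure_group(light_user["subscription_years"])
```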
Of the initial cohort of 723,421 cell phone subscribers, more than 420,000 private subscribers were included in the study, while more than 200,000 corporate subscribers were excluded. Excluding the corporate users most probably meant excluding the heaviest users (unless they also had a private subscription). Moreover, the excluded corporate users were not simply left out of the analysis; they were classified as unexposed, contaminating the control group. As the BMJ study itself admitted: “…Because we excluded corporate subscriptions, mobile phone users who do not have a subscription in their own name will have been misclassified as unexposed…”
Another flaw was the 12-year gap between the data collected on cell phone subscriptions and the information culled from the cancer registry. The study counted people as cell phone subscribers only as of 1995, while the follow-up used cancer registry data through 2007. That means anyone who started a cell phone subscription after 1995 was classified as unexposed. So a person who started a cell phone plan in 1996 and was diagnosed with brain cancer in 2007 was treated by the study’s authors as unexposed, despite having been exposed to cell phone radiation for 11 years.
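Taken together, the exclusion of corporate subscribers and the 1995 cutoff amount to a classification rule that labels genuinely exposed people as unexposed. Here is a minimal sketch of that rule, with hypothetical field names and, again, not the study’s actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Person:
    private_subscription_start: Optional[int]  # year a private plan began; None if none on record
    corporate_only: bool                       # phone held only through an employer

SUBSCRIPTION_DATA_CUTOFF = 1995  # last year of subscription data used by the study

def classified_exposure(p: Person) -> str:
    """Reproduce the classification logic described above (hypothetical sketch)."""
    if p.corporate_only:
        # Corporate subscribers were excluded, so they fall into the "unexposed" comparison group.
        return "unexposed"
    if p.private_subscription_start is None or p.private_subscription_start > SUBSCRIPTION_DATA_CUTOFF:
        # No private subscription on record before 1996 means no recorded exposure.
        return "unexposed"
    return "exposed"

# A person who started a plan in 1996 and was diagnosed in 2007 had about
# 11 years of cell phone use, yet is counted as unexposed:
print(classified_exposure(Person(private_subscription_start=1996, corporate_only=False)))  # unexposed
print(classified_exposure(Person(private_subscription_start=None, corporate_only=True)))   # unexposed
```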
It is clear to me that these flaws invalidate the conclusions of the Danish Cohort study. Peer review failed, and a study that should never have been published because of its unfounded conclusions remains in the British Medical Journal as a valid peer-reviewed article. As long as the flawed study is not withdrawn, scientists and decision makers will use it to justify their actions. For example, the US Government Accountability Office recently cited the Danish Cohort study as supporting evidence for the absence of a demonstrated causal link between cell phone radiation and brain cancer.
How is it possible that the British Medical Journal allowed such poor-quality peer review? Were the peer reviewers incompetent, or did they have conflicts of interest? What was the involvement of the BMJ’s editors? And why, once alerted to the serious design flaws by readers, have the BMJ’s editors not taken any action?
In my opinion, the Danish Cohort study should be retracted, because no revision or rewriting can rescue it: the study is missing crucial data on exposure to cell phone radiation. Furthermore, an investigation should be launched to determine why such a flawed study was published in the first place. Was it the incompetence of peer reviewers and BMJ editors alone, or was a conflict of interest among the reviewers involved? (The authors of the study declared no conflicts of interest, but the original cohort was reportedly established with funding from a Danish phone company.) Answering these questions is important because it might help to avoid similar mistakes in the future.
DISCLAIMER: All opinions presented are the author’s own and should not be considered the opinions of any of his employers.
Dariusz Leszczynski is a research professor at the Radiation and Nuclear Safety Authority in Finland and a visiting professor at Swinburne University of Technology in Australia.