The skirmish over the quality of data in mammography screening trials brings to light frustrations about the methodological flaws that some experts believe have muddied the research to date. Discussion at the 4th International Congress on Peer Review in Biomedical Publication in Barcelona in September was unsparing in its analysis of how a system that is intended to ensure that only high-quality medical research is published often falls short.
Douglas Altman, D.Sc., of the U.K. National Health Service Center for Statistics in Medicine, Oxford, addressed the issue of why biomedical research is tainted by so many methodological errors.
"Peer review is difficult and only partly successful," he said. He noted, for example, that statistical analysis is more difficult than is commonly acknowledged and that many analyses are performed by people who do not have an adequate understanding of statistics.
"Readers should not assume that peer-reviewed papers are scientifically sound," he suggested. "Much medical research is being done by people who have not been trained in it and dont want to do it," Altman said.
Another problem with published research is that bylined authors might not actually agree on what their study says. Richard Horton, M.D., editor of the Lancet, reviewed 10 articles that appeared in his journal in 2000 by asking each contributor a set of six questions designed to draw out additional comments on their study's strengths and weaknesses.
This direct questioning, Horton found, typically elicited additional implications of the study as well as further avenues for research. He concluded that published papers often fail to reflect the diversity of opinion among contributors.
Michael Callaham, M.D., of the University of California at San Francisco, and colleagues set out to identify characteristics that would predict whether and how often original research will be cited. Contrary to common belief, a bias toward citing studies with positive outcomes was not evident.
Instead, the most important predictor of whether and how much a paper would be cited was the impact factor of the journal in which the paper appeared. (The impact factor is a measure of the frequency with which articles published in a journal are cited over a 2-year period.)
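For readers unfamiliar with the metric, the conventional two-year impact factor for a year $Y$ works out as follows (a standard definition, not a formula presented at the congress):

$$\mathrm{IF}_Y = \frac{\text{citations received in } Y \text{ by items published in } Y-1 \text{ and } Y-2}{\text{number of citable items published in } Y-1 \text{ and } Y-2}$$

As a hypothetical illustration, a journal whose 1999 and 2000 issues contained 200 citable items that together drew 600 citations in 2001 would have a 2001 impact factor of 3.0.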
Several researchers suggested that unstated conflicts of interest can allow biased research to evade scrutiny. Although an estimated 500 medical journals voluntarily subscribe to the Uniform Requirements for Manuscripts Submitted to Biomedical Journals, Anu Gupta, M.D., of the Yale University School of Medicine, and colleagues found that the requirements are not routinely observed when it comes to disclosing the type and degree of involvement of the agency supporting the research.
"Anonymity gives power without accountability," said JAMA deputy editor Drummond Rennie, M.D., the conferences director.
Peer-review experts also raised the issue of trial registration as a remedy. Its proponents contend that requiring all clinical trials to be registered, presumably online, might prevent unfavorable or negative trial results from being simply deep-sixed by researchers or, more likely, by sponsors with a financial interest in a favorable outcome. (See News, May 3, 2000, p. 681.)
Overall, there is "very limited evidence" that peer review improves the quality of medical publications, said Tom Jefferson, M.D., of the U.K. Cochrane Center, Oxford. Based on a systematic review of 19 studies on the results of peer review, "there is far more evidence of the effectiveness of aspirin than there is of peer review," he said.
Referring to the same 19 studies, Elizabeth Wager of the U.K. Peer Review and Technical Editing Systematic Review Group said, "Most aspects of the peer-review process remain completely untested and unproven," adding that the aims of peer review must be defined before quality can be measured meaningfully.
So if almost 300 experts could spend three days examining the flaws in the peer-review system, is peer review actually worth the trouble? "Peer review for scientific journals has existed in some form for around 300 years but has been under fairly intense scrutiny now for only some 15 years," said George Lundberg, M.D., editor in chief of Medscape.
"By objective criteria, it has not done well. Yet those of us who work with it every day know from experience that it routinely does work in the best interests of the public and patients," he continued. "I personally cannot imagine inflicting on the readers of a medical journal the products of only what the authors sent me and what I personally believed should be published."
The peer-review and publication process, Horton commented, is a social one, not a scientific one. "It's about negotiating."