NEWS

Vagaries of Research Publishing Again Under the Microscope

Lou Fintor

At its simplest, medical journals are an ongoing dialogue about unfolding research that has been scrutinized by experts in the field. But the issues surrounding publishing good science are complex.

Recent debate about how scientific evidence is summarized, how it is interpreted, and how it is presented to the public was fueled last month by the Journal of the American Medical Association, which devoted an entire issue to studies evaluating publication and dissemination of results.

This longstanding and often thorny debate has been stoked by criticisms leveled by some researchers who have studied the quality and dissemination of published research results, by journals grappling with financial disclosure standards, and even by prickly journal editors taking their fellow editors to task.

Study Designs

Analyses of the medical publishing world often focus on consistency and quality in reporting clinical trials and on identifying weaknesses that are believed to be shortcomings on the part of the journals.

"Certainly editors of journals need to apply more rigorous publication standards," said Jim Nuovo, M.D., who, along with more than 50 other researchers, studied authorship, peer review, quality standards, publication bias, legal and ethical issues, news coverage, and related issues in more than 25 different studies.



[Photograph: Dr. Jim Nuovo]
Nuovo and colleagues Joy Melnikow, M.D., and Denise Chang examined 359 randomized controlled trials reported in five of the world’s leading journals from 1989 to 1998. They found that relative risk reduction, the statistical measure that casts results in the most favorable light, was overwhelmingly used to characterize outcomes. The journals studied were Annals of Internal Medicine, British Medical Journal, JAMA, The Lancet, and the New England Journal of Medicine.

The researchers found that, although randomized controlled trials continue to be a methodological "gold standard" in assessing the efficacy of new treatments, the way results are reported can be misleading. In their sample, the more telling and often less favorable "number needed to treat" statistic was reported in only eight studies while only 18 disclosed absolute risk reduction. (The number needed to treat is the total number of eligible patients that a physician would need to treat to prevent one adverse outcome. It is the inverse of the absolute risk reduction statistic.)
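The relationship among the three statistics can be shown with a small worked example. The event rates below are hypothetical, chosen only to illustrate the arithmetic, not drawn from the trials Nuovo reviewed:

```python
# Hypothetical trial: the same result expressed three ways.
control_rate = 0.04    # 4% of control patients have the adverse outcome
treatment_rate = 0.03  # 3% of treated patients do

arr = control_rate - treatment_rate  # absolute risk reduction
rrr = arr / control_rate             # relative risk reduction
nnt = 1 / arr                        # number needed to treat (inverse of ARR)

print(f"RRR: {rrr:.0%}")  # a headline-friendly 25%...
print(f"ARR: {arr:.1%}")  # ...but the absolute gain is one percentage point
print(f"NNT: {nnt:.0f}")  # 100 patients treated to prevent one outcome
```

The same trial can thus be reported as a "25% reduction in risk" or as "treat 100 patients to help one," which is the gap between measures that the researchers flagged.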

"If you’re an academic researcher marching up the ladder of success at a university, there is an emphasis on publishing. Articles are simply more likely to be accepted in journals when there are positive results, so we want to make our results as optimal as possible. Our ‘culture’ is to produce relative risk reduction," Nuovo maintained.

He said that reporting would be more consistent if more investigators used tools such as the Consolidated Standards of Reporting Trials (CONSORT), a systematic checklist developed to evaluate the design, conduct, analysis, and interpretation of clinical trials. (The Journal of the National Cancer Institute uses these standards.)

"There is some very strong evidence that low-quality trials tend to exaggerate how effective interventions are. We need standards that are evidence-based, and we have evidence that using CONSORT standards improves quality," said David Moher, an author of CONSORT and director of the Chalmers Research Group at Children’s Hospital of Eastern Ontario in Ottawa.

According to Moher, standardized approaches like CONSORT quickly identify many study flaws and allow journal editors to reject submissions before sending them out for time- and labor-intensive peer review.

"We don’t need mothering or fathering by editors; we need to empower the readers to make their own judgements through a transparent evaluation process. CONSORT is not perfect and it’s not carved in stone, but it’s very important that we have standards and unfortunately academicians don’t like that," maintained Moher.

Cookie-Cutter Approach

Not so fast, bristle critics opposed to using more standardized approaches for evaluating trial results. They contend that Nuovo’s approach to evaluating the quality of trial results based on whether absolute risk reduction or number needed to treat is reported is itself problematic.

"His paper was very misleading," said New England Journal of Medicine editor-in-chief Jeffrey Drazen, M.D. "In all of our randomized controlled trials we report the absolute event rate. That’s usually all the information you need to know what is happening."

He pointed out that there can be adverse events in a study that affect other statistical measures so that absolute event rates have to be considered. "Naturally this is going to vary from study to study," he said. "Using mechanisms like CONSORT is too cookie-cutter." Statistician Mary Grace Kovar, Dr.P.H., who forged a career studying the ways in which data are analyzed and presented, echoed Drazen’s concerns.

"The most important statistic to know in any clinical trial is the base risk. You need to ask yourself whether you are looking at an important problem with a large risk or at something that has a small risk to begin with," said Kovar, recently retired from the University of Chicago’s National Opinion Research Center and now a Washington, D.C., consultant to that group.

"If you’re dealing with a rare event, you’re going to have to treat a lot of people to make a difference; if it’s a common event, you’re not. The information you need is not necessarily going to be the same for every trial," Kovar said.
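Kovar's point can be sketched with hypothetical numbers: holding the relative risk reduction fixed, the number needed to treat is driven almost entirely by how common the outcome is at baseline.

```python
# Illustration (hypothetical rates): a fixed 25% relative risk reduction
# implies very different numbers needed to treat at different base risks.

def nnt(base_risk, rrr=0.25):
    """Number needed to treat, given the base risk and a fixed RRR."""
    arr = base_risk * rrr  # absolute risk reduction shrinks with base risk
    return 1 / arr

print(f"NNT at 40% base risk:  {nnt(0.40):.0f}")   # common outcome: 10
print(f"NNT at 0.4% base risk: {nnt(0.004):.0f}")  # rare outcome: 1000
```

A hundredfold drop in base risk means a hundredfold more patients must be treated to prevent one event, even though the "25% reduction" headline is identical in both cases.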

Financial Conflicts

The definition of "conflict of interest" has changed in recent years to accommodate the growing number of researchers who receive funding from multiple sources. Journals have also had to decide to what extent an author’s affiliation or funding could affect their research.

The New England Journal of Medicine had one of the most stringent policies on potential conflicts of interest and, as a result, found itself with fewer and fewer eligible authors to draw on.

"There were many cases where we were contacting leaders in their fields to submit editorials and reviews and then would have to turn away very good people when we learned they received $500 from a company as a one-time reviewer," said Drazen. "People who were prominent in their fields were being excluded as authors for minimal financial relationships."

Last month the journal announced a liberalization of existing guidelines in a controversial move it hopes will balance concerns over the potential influence of an author’s financial interests with broadening author participation. Authors receiving less than $10,000 annually in honoraria, research funding, or other support or with financial interests in a public or private company up to that amount will no longer be barred from authoring editorials and review articles in the journal.

Those reporting $10,000 or more within two years of proposed publication will be automatically ineligible to publish. Those reporting less, while still required to disclose their interests, will be evaluated by editors on a case-by-case basis. The guidelines mirror those used by the National Institutes of Health and the Association of American Medical Colleges.

According to Drazen, the revised guidelines will be especially beneficial to cancer researchers and those in dynamic fields where not only is there a constant stream of new treatments and diagnostic modalities but where the relationships among private industry, academia, and government are often pervasive.

"Now we’ll be able to have a much stronger voice, especially in areas where there are a lot of active new therapeutics such as cancer," he said.

Michael Gough, Ph.D., an adjunct scholar at the Cato Institute, a Washington, D.C., libertarian think tank, and coauthor of Silencing Science, maintains that journals too often overemphasize author financial interest, affiliation, and other "personal" factors that do not necessarily influence research at the expense of the "nuts and bolts issues" like trial design, execution, and statistical analysis.

"If you’re talking about inappropriate influence, then these policies should also require disclosure of affiliations with lobbying organizations and political group affiliations," said Gough. "Obviously a paper should be read skeptically regardless of whom or where it came from."

Evils of Enthusiasm

Drazen maintains that financial disclosure is only one element of journal quality, and one that is the responsibility of editors. Inflated claims, by contrast, are typically not their fault.

"Researchers need to be careful about their enthusiasm. Many researchers are excited about their work, and they tend to overstate and drop some of the subtleties when they describe it," said Paul Raeburn, president of the National Association of Science Writers and a senior writer at BusinessWeek, New York.

Raeburn pointed out that reporters garner stories from a variety of sources, not only clinical meetings and research journals. "Nevertheless, good reporters do not accept press releases uncritically. They work hard to make sure important information is not omitted; good reporters will often talk to colleagues and competitors and create a kind of instant informal peer review," he added.

But, although press releases are routinely issued by several leading journals, they often present data in ways that exaggerate perceived importance and do not report limitations and the role of private funding in a systematic way, said Lisa Schwartz, M.D., and Steven Woloshin, M.D., both at the Department of Veterans Affairs Medical Center, White River Junction, Vt.



[Photograph: Dr. Steve Woloshin]
Of 127 releases issued by the communication staffs of Annals of Internal Medicine, British Medical Journal, JAMA, Journal of the National Cancer Institute, Lancet, and Pediatrics (covering the last six issues of each journal published before January 2002), only 23% noted study limitations, and 65% described the main outcomes using numbers.

"Press releases are the way most journals communicate directly with the media, but there are no very specific guidelines for data presentation or acknowledging limitations," said Woloshin.

"The answer is in more communication between researchers and the press. Too many researchers feel they’ve been burned by the press and their reaction is not to talk to the press anymore. Instead their reaction should be to talk to the press even more than before," Raeburn said. "Guidelines are a small thing; the bigger issue is fostering better communication between scientists and the press," he concluded.



Copyright © 2002 Oxford University Press (unless otherwise stated)