NEWS

Prostate Cancer: Numbers May Not Tell the Whole Story

Tom Reynolds

In prostate cancer, as anywhere, statistics can fool the unwary—so careful researchers, physicians, and patients must look beyond the numbers to get the whole truth, according to Michael J. Barry, M.D., of Massachusetts General Hospital in Boston.



[Photo: Dr. Michael J. Barry]
Barry, a primary-care physician, researcher, and advocate of evidence-based medicine, discussed beliefs and evidence about prostate cancer screening and treatment at the American Society for Therapeutic Radiology and Oncology's annual meeting, held in Boston in October.

Key Factor

One key factor in interpreting prostate cancer treatment outcome data is the lead-time bias introduced in the last decade by prostate-specific antigen (PSA) screening. Prostate cancer is typically a slow-growing disease, and early diagnosis with PSA can add years to its natural history.

"In the pre-PSA era, we used to say that to get meaningful comparisons of treatment outcomes, you needed 10 to 15 years of follow-up," Barry said. "In the PSA era, with a 7- or 8-year lead time [in diagnosis], we now need 18 to 23 years of follow-up to get the same sense of where we’re at.

"I don’t think any of [the data] for any of our therapies are fully mature yet," he added. "We’re going to need a lot more time to really find what the outcomes are."

The prostate cancer death rate in the United States has fallen in recent years, and it is tempting to interpret that trend as confirming the success of screening and early, aggressive treatment.

The problem, Barry said, is that "nothing makes an intervention look so good as lack of controls. So we’d all like to find some natural experiments contrasting prostate cancer outcomes with more intensive versus less intensive screening or treatment."

Two such "experiments"—one in Austria and one in the United States—give somewhat conflicting answers. An intensive PSA screening program was implemented in 1993 in Tyrol, a state in western Austria. By 1997, prostate cancer mortality there had dropped by one third, and by 1998 by 40%, compared with baseline rates from 1986 to 1990, a steeper drop than was seen in other areas of Austria. This trend was widely assumed to be a result of screening, Barry said, although given the lead time it is difficult to explain how the effect could have appeared so quickly.

In the United States, Barry and collaborators in Connecticut and Seattle are studying the effects of widely divergent patterns of treatment in those two areas, both part of the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) program. In the pre-PSA era the two SEER areas had identical prostate cancer mortality rates, but physicians in the Seattle area adopted screening and aggressive treatment much faster than those in Connecticut. Since 1987 the researchers have been following a cohort of 100,000 men in the two areas, comparing screening and treatment intensity with prostate cancer mortality.

In the early years of the study, Seattle men were twice as likely as Connecticut men to be diagnosed with prostate cancer and six times as likely to have a prostatectomy—and also got more radiation therapy.

"If early detection and aggressive treatment makes a big difference, we would expect to see an earlier impact on prostate cancer mortality in Seattle," Barry said. "But at least through 11 years, we haven’t seen it," and Seattle’s rate is about 6% higher than Connecticut’s. No one knows whether these findings mean screening and treatment don’t affect mortality, or whether the follow-up time is still too short to see the effect. But either way, they are strikingly at odds with the Tyrol study results, suggesting that screening benefit remains an open question and that the randomized screening trials under way in the United States and Europe should continue.

Side Effects of Treatment

Barry also cast a critical eye on treatment side effects in prostate cancer, contrasting the beliefs reported by urologists and radiation oncologists with reports from patients themselves. Barry and colleagues conducted a nationwide survey, published in the June 28 Journal of the American Medical Association, of urologists' and radiation oncologists' recommendations for screening and treatment and their beliefs about outcomes. The patient reports were collected in the Prostate Cancer Outcomes Study by NCI's Arnold L. Potosky, Ph.D., and colleagues, and published in the Oct. 4 issue of the Journal of the National Cancer Institute.

The outcomes study's respondents were men aged 55 to 74, an age group in which either surgery or radiation is widely considered a reasonable option. About 80% reported impotence after surgery, compared with 62% after radiation. In Barry's survey, physicians were asked to estimate the impotence rates expected after unilateral and bilateral nerve-sparing surgery, non-nerve-sparing surgery, and radiation. For the nerve-sparing procedures, their estimates ranged from 45% to 60%; for radiation, from 23% to 39%.

It may seem surprising, Barry noted, that the study’s figures for surgery were not lower, given the high proportion of urologists now performing nerve-sparing prostatectomies. But, he added, most community hospitals are unlikely to match the results attained at academic "centers of excellence" with their highly skilled surgeons.

"At a recent meeting Dr. [Patrick] Walsh [M.D., of Johns Hopkins University at Baltimore] was talking about refining his surgical technique, basically spending his summer vacation reviewing the videotapes of all the operations he had done over the previous year, looking for ways he could do better over time—sort of a Tiger Woods-like approach to improving his swing," Barry said.

"I worry that many of Dr. Walsh’s colleagues are citing his outcome data without necessarily watching the videos. ...This issue of variation in technique is a very important one and I think is one of the reasons the data from Hopkins and other centers of excellence are so much better." In addition, the patients in the Hopkins study had an average age of 57 and only 12% had Gleason scores of 7 or higher—"obviously carefully selected cases," he said.

Case volume is also likely to play a role. In 1995, the peak year for prostatectomy in the United States, more than 90% of urologists performed at least one. But the median number was 13, and a quarter of surgeons had done fewer than six.

"Only 25% had done more than 23, or about two a month," Barry said. "I think many urologic surgeons would question whether some of these lower numbers are enough to get the best outcomes in terms of cancer control and side effects."

A study of Medicare-reimbursed prostatectomies, led by Grace Lu-Yao, Ph.D., of the Health Care Financing Administration, Baltimore, showed that patients who had their surgery at hospitals performing fewer prostatectomies overall had substantially higher risks of complications, readmission, and 30-day mortality than patients at high-volume hospitals.

Barry said similar volume–outcome relationships likely exist for radiation therapy and other treatment modalities, adding that "we need to do a better job in defining those relationships because that’s going to help us get better outcomes for patients."

