Affiliation of authors: Health Outcomes Group, Department of Epidemiology and Biostatistics, Memorial Sloan-Kettering Cancer Center, New York, NY.
Correspondence to: Colin B. Begg, Ph.D., Department of Epidemiology and Biostatistics, Memorial Sloan-Kettering Cancer Center, 1275 York Ave., New York, NY 10021 (e-mail: beggc{at}mskcc.org).
Government agencies provide regular reports on progress in the fight against cancer, and these reports are viewed with great interest by researchers and by the media (1). The primary measures of the cancer burden are cancer incidence rates and cancer mortality rates. The latter are derived from state death certificates, which are completed by physicians, medical examiners, coroners, and funeral directors and are collated by the National Center for Health Statistics (NCHS) (2). The data from death certificates are compiled using the World Health Organization schema to determine the underlying cause of death. The classification rules are changed periodically, most recently in 1999 with the adoption of the Tenth Revision of the International Statistical Classification of Diseases and Related Health Problems (ICD) (3). This general approach leads to the designation of a unitary "underlying" cause for each death. In practice, a death may be precipitated by multiple causes. Recently, the NCHS has developed a system for processing multiple classifications, including the use of relevant free text from the death certificate (2). However, the National Cancer Institute continues to report death rates using deaths for which cancer is designated as the sole underlying cause.
In this issue of the Journal, Welch and Black (4) raise the concern that cancer death rates are systematically underestimated, in that many patients who die as a result of cancer treatment do not have cancer recorded as the underlying cause of death. The authors assembled data on the reported cause of death for all patients in the Surveillance, Epidemiology, and End Results (SEER)¹ program from 1994 through 1998 who died within 1 month of cancer-directed surgery for one of 19 common solid tumors. They found that for 41% of these deaths, the cause was not attributed to cancer. The authors speculate that cancer treatment is the probable underlying cause for essentially all of these deaths and, as a result, the cancer mortality rate is underestimated by 0.9%. Welch and Black (4) also suggest that many deaths subsequent to 1 month after cancer-directed surgery may be similarly miscoded, leading to further underestimation.
We note that the terminology related to death rates can be confusing. The term "cancer mortality rate" is used conventionally to represent a cancer death rate in the total population, where the denominator includes all people in the population, with or without the cancer of interest. Thus, the metric is comparable to the "cancer incidence rate" in the population. By contrast, the term "cancer survival" usually refers to the mortality experience (survival rather than death) among patients with a cancer diagnosis. In studies of postoperative outcomes, the terms "operative mortality" or "surgical mortality" are often used, in which the denominator includes only patients who received cancer surgery. To calibrate their calculations throughout their article, Welch and Black (4) used population-based rates.
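For concreteness, the distinction among these denominators can be written out explicitly; the notation below is our own illustrative shorthand, not a formulation used by Welch and Black (4):

\[
\text{cancer mortality rate} = \frac{\text{cancer deaths}}{\text{total population}}, \qquad
\text{cancer survival} = \Pr(\text{alive at time } t \mid \text{cancer diagnosis}),
\]
\[
\text{operative mortality} = \frac{\text{deaths within a fixed window after surgery}}{\text{patients undergoing cancer-directed surgery}}.
\]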
The misattribution of cancer deaths is a long-studied issue (5-8). Welch and Black (4) appear to be especially concerned about systematic underestimation in the context of greater use of cancer screening tests. Greater use of cancer-directed surgery in patients with small, early-stage lesions would presumably lead to more deaths following a cancer diagnosis that are not attributed to cancer, and this could exaggerate the apparent benefits of screening. Welch and Black (4) propose that, for all deaths following cancer treatment (including radiotherapy, chemotherapy, and surgery), cancer should be designated as the underlying cause of death. They propose the development of "simple" though arbitrary rules, such as the designation of any death within 1 month of treatment as a cancer death.
Although we agree that continued efforts to refine and improve the classification and reporting of cancer mortality statistics are worthwhile, and indeed are being pursued in the government agencies responsible for reporting vital statistics, the specific ideas proposed by Welch and Black (4) appear to us to be impractical and may themselves introduce inaccuracies. Consider first the proposal to designate all deaths within 1 month of surgery as cancer deaths, and take, as an example, the use of radical prostatectomy for prostate cancer. We know that the 1-month death rate following this procedure in men over 65 years of age is 0.49% (9). Welch and Black (4) would, by their convention, designate all of these deaths as cancer deaths. However, in an elderly population of this nature (i.e., men over age 65 years), the underlying force of mortality is strong. In fact, using death rates for men in the United States from the NCHS (2), adjusted to the age distribution of the patients receiving prostatectomy, the underlying probability of death in any given month for this cohort of men is 0.28%. That is, of the operative deaths that occur, 57% would be expected in the absence of the procedure. Viewed in this context, the observation by Welch and Black (4) that 75% of deaths within 1 month of prostatectomy are not attributed to cancer appears only modestly exaggerated. To be sure, patients who receive surgery are a selected group, because surgeons will generally not operate on patients who appear to be at imminent risk of death or are otherwise seriously medically compromised. Nonetheless, many deaths do occur suddenly and unpredictably. For procedures with much higher operative mortality than that for prostate cancer (such as pancreatectomy, esophagectomy, or lung resection), we agree that a higher proportion of the operative deaths are likely to be caused by the procedure. However, these are among the sites for which the reclassification proposed by Welch and Black (4) would have the least impact, because relatively few incident cases receive surgery, and the mortality rate of the disease is high relative to the incidence rate.
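The arithmetic behind this comparison is worth making explicit (our reconstruction, using only the two rates cited above):

\[
\frac{\text{expected background deaths}}{\text{observed 1-month postoperative deaths}} \approx \frac{0.28\%}{0.49\%} \approx 0.57,
\]

so roughly 57% of the deaths observed within 1 month of radical prostatectomy in this age group would be expected in a comparable month even in the absence of surgery.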
Welch and Black (4) also propose greater efforts to educate individuals who prepare death certificates on basic strategies for ensuring that treatment-related deaths are attributed to cancer. However, this would appear to be a daunting and complex task, with uncertain payoff in improved accuracy, given the vast numbers of individuals involved in death certification, their frequently peripheral connection to the deceased, and the various ways in which a series of medical events can be causally linked. Lu et al. (10) educated 145 physicians about determination of cause of death and then asked them to complete death certificates for four hypothetical case histories. Their analysis demonstrated substantial variation in attribution because of differences in interpretation rather than knowledge, a finding supported by another study (11). As an example of the problems of cause-of-death attribution, consider a chronic intravenous drug abuser with hepatitis C, cirrhosis, and a small hepatoma who bleeds to death at home 30 days after a hepatic lobectomy. Is the underlying cause of death hepatoma, or should it be intravenous drug use, cirrhosis, or hepatitis C? Indeed, the goal of accurately recording the impact of disease on mortality must be viewed from a perspective that is not disease-specific, because researchers who study other conditions, such as heart disease or diabetes, have a parallel interest in cause-specific mortality in their own fields. Any rule that would arbitrarily code all deaths in a given time frame as cancer deaths, such as a designated period following treatment as proposed by Welch and Black (4), would appear to address solely the specific concerns of the cancer research community. Moreover, for systemic treatments such as chemotherapy and radiotherapy, the highly variable and frequently lengthy duration of therapy would appear to preclude any simple rules for attributing cancer treatment as the cause of death in an algorithmic fashion, even if data on the administration and timing of these therapies were routinely available.
Despite our lack of enthusiasm for the specific proposals offered by Welch and Black (4), we do agree that measurement of the impact of cancer on mortality is a difficult task, fraught with pitfalls. Attribution of the cause of death is inherently subjective, and we rely on the judgments made by the numerous individuals who complete death certificates. Moreover, many deaths have multiple contributory causes, making the task of synthesizing the impact of one disease on each individual death even more challenging. It is, in part, in recognition of the complexity of the issue that for years the National Cancer Institute has also used the concept of "relative survival" to characterize the impact of a cancer diagnosis on subsequent life expectancy. This is an inherently statistical tool, in which the overall, all-cause mortality rates estimated from the national population are filtered out of the death rates in the cohort of individuals diagnosed with cancer (12). This technique has elegant simplicity and does not in any way rely on the attribution of the cause of death in individual patients. However, even this approach has methodologic limitations, especially in the context of the changing composition of the cancer population due to increased screening, where increased detection of indolent cases may artifactually improve relative survival. Indeed, the development of techniques for characterizing cause-specific mortality without using individual cause-of-death information is an active area of research (13-15).
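To make the relative survival calculation concrete, it is conventionally expressed as a ratio; the notation below is ours, sketching the standard formulation of Ederer et al. (12):

\[
R(t) = \frac{S_{\mathrm{obs}}(t)}{S_{\mathrm{exp}}(t)},
\]

where \(S_{\mathrm{obs}}(t)\) is the observed all-cause survival of the cancer cohort at time \(t\) and \(S_{\mathrm{exp}}(t)\) is the survival expected from general-population life tables matched on characteristics such as age, sex, and calendar year. Because both numerator and denominator use only all-cause survival, no individual cause-of-death attribution enters the calculation.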
Finally, the analysis by Welch and Black (4) underscores the point that cause-specific mortality statistics may be used to further specific political or scientific agendas. Recognition of trends in cause-specific mortality is an effective tool for demonstrating progress, or even lack of progress, against a disease (16). Cause-specific mortality statistics can also be used by advocacy groups trying to garner a higher proportion of National Institutes of Health research funds. In view of the fact that the science of nosology (the classification of diseases and causes of death) is unlikely ever to be perfected, cause-specific mortality estimates must always be interpreted with caution, in the context of related statistics on disease incidence and relative survival.
NOTES
1 Editor's note: SEER is a set of geographically defined, population-based, central cancer registries in the United States, operated by local nonprofit organizations under contract to the National Cancer Institute (NCI). Registry data are submitted electronically without personal identifiers to the NCI on a biannual basis, and the NCI makes the data available to the public for scientific research.
REFERENCES
1 Edwards BK, Howe HL, Ries LA, Thun MJ, Rosenberg HM, Yancik R, et al. Annual report to the nation on the status of cancer, 1973-1999, featuring implications of age and aging on U.S. cancer burden. Cancer 2002;94:2766-96.
2 Hoyert DL, Arias E, Smith BL, Murphy SL, Kochanek KD. Deaths: final data for 1999. National Vital Statistics Report, Vol 49, No. 8, September 21, 2001, National Center for Health Statistics, Centers for Disease Control and Prevention. [Last accessed: 6/6/2002]. Available from: www.cdc.gov/nchs/releases/01facts/99mortality.htm.
3 World Health Organization. International statistical classification of diseases and related health problems, tenth revision. Vol 2. Geneva (Switzerland): World Health Organization; 1993.
4 Welch HG, Black WC. Are deaths within 1 month of cancer-directed surgery attributed to cancer? J Natl Cancer Inst 2002;94:1066-70.
5 Percy C. International comparability of coding cancer data: present state and possible improvement by ICD-10. Recent Results Cancer Res 1989;114:240-52.
6 Percy C, Garfinkel L, Krueger DE, Dolman AB. Apparent changes in cancer mortality, 1968. A study of the effects of the introduction of the Eighth Revision International Classification of Diseases. Public Health Rep 1974;89:418-28.
7 Hoel DG, Ron E, Carter R, Mabuchi K. Influence of death certificate errors on cancer mortality trends. J Natl Cancer Inst 1993;85:1063-8.
8 Ederer F, Geisser MS, Mongin SJ, Church TR, Mandel JS. Colorectal cancer deaths as determined by expert committee and from death certificate: a comparison. J Clin Epidemiol 1999;52:447-52.
9 Begg CB, Riedel ER, Bach PB, Kattan MW, Schrag D, Warren JL, et al. Variations in morbidity after radical prostatectomy. N Engl J Med 2002;346:1138-44.
10 Lu TH, Shih TP, Lee MC, Chou MC, Lin CK. Diversity in death certification: a case vignette approach. J Clin Epidemiol 2001;54:1086-93.
11 Lloyd-Jones DM, Martin DO, Larson MG, Levy D. Accuracy of death certificates for coding coronary heart disease as the cause of death. Ann Intern Med 1998;129:1020-6.
12 Ederer F, Axtell LM, Cutler SJ. The relative survival rate: a statistical methodology. Natl Cancer Inst Monogr 1961;6:101-21.
13 Cronin KA, Feuer EJ. Cumulative cause-specific mortality for cancer patients in the presence of other causes: a crude analogue of relative survival. Stat Med 2000;19:1729-40.
14 Brown BW, Brauner C, Levy LB. Assessing changes in the impact of cancer on population survival without considering cause of death. J Natl Cancer Inst 1997;89:58-65.
15 Chu KC, Miller BA, Feuer EJ, Hankey BF. A method for partitioning cancer mortality trends by factors associated with diagnosis: an application to female breast cancer. J Clin Epidemiol 1994;47:1451-61.
16 Bailar JC, Smith EM. Progress against cancer? N Engl J Med 1986;314:1226-32.