Affiliations of authors: J. L. Malin, Divisions of General Internal Medicine-Health Services Research, and Hematology-Oncology, Department of Medicine and Jonsson Comprehensive Cancer Center, University of California, Los Angeles (UCLA), and RAND, Santa Monica, CA; K. L. Kahn, Division of General Internal Medicine-Health Services Research, Department of Medicine, UCLA, and RAND; J. Adams, RAND; L. Kwan, Division of Cancer Prevention and Control Research, Jonsson Comprehensive Cancer Center; M. Laouri, California Health Care Foundation, Oakland; P. A. Ganz, UCLA Schools of Medicine and Public Health and Division of Cancer Prevention and Control Research, Jonsson Comprehensive Cancer Center.
Correspondence to: Jennifer L. Malin, M.D., UCLA Division of GIM-HSR, 911 Broxton Ave., 1st Floor, Box 951736, Los Angeles, CA 90095-1736 (e-mail: jmalin@mednet.ucla.edu).
ABSTRACT

INTRODUCTION
The National Cancer Policy Board of the Institute of Medicine concluded that "for many Americans with cancer, there is a wide gulf between what could be construed as the ideal and the reality of their experience with cancer care" (4). During the last decade, a number of studies (6-20) have documented variations in the outcomes and patterns of care of cancer patients. Many of these studies (15-20) rely on data reported by cancer registries.
Cancer registries collect data to determine incidence, to determine trends in various population groups, to plan epidemiologic research and cancer control, and to support health care planning (21-25). The cancer registry system in the United States exists as multiple overlapping, hierarchical systems under different governing bodies (25), and the systems have different purposes and speeds of reporting. Many hospitals maintain registries on cancer patients under their care and use data from these registries for certification by the American College of Surgeons or to meet state or federal requirements (26). Because hospitals themselves are expected to support these data collection efforts, there is probably variability in the resources that different hospitals expend on registry activities. Consequently, the effort that individual hospital registries expend on data collection may vary greatly, and the accuracy of their data would be expected to reflect this variability.
Valid information about the care provided is a prerequisite to accurately determining quality of care. Although the completeness of cancer registry data for incident cancer cases is very high [e.g., 97% in the Surveillance, Epidemiology, and End Results (SEER) Program (27)], the validity of cancer registry data for the quality of cancer care has not been well studied. We conducted this study to evaluate the validity of California Cancer Registry data for measuring the quality of the initial treatment for breast cancer by comparing data in this registry with that of the medical record "gold standard."
METHODS
Case Identification and Sampling
We obtained outpatient medical records of a sample of patients with breast cancer from PacifiCare of California (Cypress, CA). Potential cases of breast cancer were identified by the health plan's quality improvement staff, who used an administrative data system to select all female enrollees with International Classification of Diseases, Ninth Revision (ICD-9) or Current Procedural Terminology (CPT) billing codes for breast cancer (174.*, 198.81, 233.0, 85.4*, 19160, 19162, 19180, 19182, 19200, 19220, and 19240, where * = any digit from 0 to 9) from January 1992 through June 1996. The staff of the California Cancer Registry performed a probabilistic linkage of the identified cases in the health plan with breast cancer cases in the registry by using Social Security number, name, birth date, and address. After the California Cancer Registry "de-identified" the data, we received the linked data file. Three hundred sixty-three women had multiple entries, representing multiple diagnoses of breast cancer. We selected the first diagnosis occurring during the study period for inclusion or, for bilateral cancers occurring synchronously, the diagnosis of the more advanced cancer.

From this database, we randomly sampled women diagnosed with breast cancer for the first time from 1993 through 1995 in Los Angeles County who were enrolled in PacifiCare on or before the date of diagnosis and during the entire period of follow-up for this study. Los Angeles County, California, has one of 10 registries that make up the California Cancer Registry and has been a part of the National Cancer Institute-funded SEER program since 1992 (29). The SEER program is considered the gold standard for data quality among cancer registries around the world, with nearly complete case ascertainment (98%) and a 95% annual rate of follow-up to determine survival (27,30). Limiting our study to cases from the Los Angeles County registry ensured that the registry data would be of the highest quality available.
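As an illustration of the linkage step described above, a Fellegi-Sunter-style probabilistic match sums agreement weights over identifier fields and accepts candidate pairs above a cutoff. The sketch below (Python) is schematic only: the field weights, the acceptance threshold, and the exact-match comparison are hypothetical illustration choices, not the California Cancer Registry's actual matching parameters.

```python
# Minimal sketch of probabilistic record linkage on the four fields used in
# this study (Social Security number, name, birth date, address). The
# agreement weights and cutoff are hypothetical illustration values.

from dataclasses import dataclass

@dataclass
class Person:
    ssn: str
    name: str
    birth_date: str  # "YYYY-MM-DD"
    address: str

# Hypothetical log-odds-style weights: rarer fields earn more credit on agreement.
WEIGHTS = {"ssn": 10.0, "name": 4.0, "birth_date": 5.0, "address": 3.0}
THRESHOLD = 12.0  # hypothetical acceptance cutoff

def match_score(a: Person, b: Person) -> float:
    """Sum agreement weights over the identifier fields that match exactly."""
    score = 0.0
    for field, weight in WEIGHTS.items():
        if getattr(a, field) and getattr(a, field) == getattr(b, field):
            score += weight
    return score

def link(plan_cases: list[Person], registry_cases: list[Person]):
    """Return (plan case, registry case) pairs whose score clears the cutoff."""
    pairs = []
    for p in plan_cases:
        best = max(registry_cases, key=lambda r: match_score(p, r), default=None)
        if best is not None and match_score(p, best) >= THRESHOLD:
            pairs.append((p, best))
    return pairs
```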
Pursuit of Medical Records
Our goal was to obtain all medical record data for a sample of 300 patients. We used a professional copier service that worked with the health plan's quality improvement program to obtain a de-identified copy of the medical record for each patient. From their administrative files, PacifiCare identified the provider organization (i.e., an integrated medical group or an independent provider organization) that had a contract with the health plan to provide medical services for patients at the time of their diagnosis. The professional copier service then attempted to obtain the medical records for 1 year before diagnosis through 2 years after diagnosis from that provider organization. To be considered adequate, the provider organization's medical record had to include a pathology report for at least one breast procedure and the notes of at least one physician providing care for the breast cancer episode (e.g., a surgeon, medical oncologist, or radiation oncologist) for at least 12 months after the date of diagnosis. If this information was not available, the medical record was searched for information regarding other providers that the patient had seen and the name of the facility where any procedures were performed. Medical records were then requested from the additional providers and facilities, in the following order: 1) medical oncologist, 2) radiation oncologist, 3) surgeon, and 4) hospital. If, in combination with the first record, the second record did not yield the information specified above, the next record was requested. This process was continued until the leads on possible records were exhausted. A case was considered "incomplete" if we did not find medical records from at least one physician providing care for breast cancer for at least 3 months after the date of diagnosis or documentation that treatment was completed within the time frame of available records. We excluded patients who were found not to have breast cancer after review of their medical records (in all cases, these patients had been diagnosed with lobular carcinoma in situ). The institutional review board of the University of California, Los Angeles, approved the study.
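The pursuit protocol is, in effect, an ordered fallback loop. The sketch below (Python) is schematic: the adequacy check condenses the criteria described above, and the record-fetching callables are hypothetical stand-ins for the copier service's requests, not an implementation used in the study.

```python
# Schematic of the medical record pursuit protocol described in the text.
# The record dictionaries and fetch callables are illustrative stand-ins.

PURSUIT_ORDER = ["medical oncologist", "radiation oncologist", "surgeon", "hospital"]

def is_adequate(records) -> bool:
    """Adequate: a pathology report for >= 1 breast procedure, plus notes from
    >= 1 physician treating the breast cancer for >= 12 months after diagnosis."""
    return (any(r["has_pathology_report"] for r in records) and
            any(r["months_of_physician_notes"] >= 12 for r in records))

def pursue_records(provider_org_record, leads):
    """leads maps provider type -> callable returning a record dict or None."""
    records = [provider_org_record]
    if is_adequate(records):
        return records, "complete"
    for provider in PURSUIT_ORDER:
        fetch = leads.get(provider)
        record = fetch() if fetch else None
        if record:
            records.append(record)
            if is_adequate(records):
                return records, "complete"
    # Leads exhausted; the case is then classified against the study's
    # "incomplete" definition (>= 3 months of physician records or
    # documentation that treatment was completed).
    return records, "incomplete"
```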
Medical Record Abstraction
Three research assistants (two medical students and a staff member with a bachelor of science degree in biology and several years of abstraction experience on other research studies) and an oncologist (J. Malin) abstracted each medical record by use of a chart abstraction instrument developed specifically for this study (available from the authors). We abstracted the medical record from 6 months before diagnosis through 12 months after diagnosis for information regarding the cancer diagnosis and evaluation, the characteristics and spread of the tumor, the initial cancer treatment, and the presence of comorbidity by use of the Charlson Comorbidity Index (30). A physician (J. Malin) reviewed all medical records with questions about abstraction or coding. Interrater reliability was assessed among the four abstractors on a 5% sample of the medical records. Reliability of the variables describing the treatments received was excellent, with κ statistics consistently greater than 0.80.
Statistical Analyses
Statistical analyses were performed with SAS software (version 6.12; SAS Institute, Cary, NC). We calculated the observed agreement, κ statistic, sensitivity, and specificity of the California Cancer Registry data compared with the medical record gold standard. To illustrate the importance of valid data when measuring quality of care, we used the following four quality indicators (QIs), grounded in the scientific literature with broad expert consensus, that could be determined from California Cancer Registry data: QI1 = patients with stage I through III breast cancer should have definitive surgery; QI2 = patients with stage I through III breast cancer should have a lymph node dissection; QI3 = patients with stage I through III breast cancer treated with breast-conserving surgery should receive radiation therapy; and QI4 = patients with stage II or III breast cancer should receive tamoxifen or chemotherapy.
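For a binary treatment variable (e.g., "received radiation therapy"), these validity statistics reduce to counts from a two-by-two table against the medical record gold standard. A minimal sketch of the arithmetic follows (in Python with our own variable names; the study itself used SAS):

```python
# Observed agreement, Cohen's kappa, sensitivity, and specificity for a
# binary treatment indicator, with the medical record as gold standard.
# Inputs are parallel lists of 0/1 codes, one element per patient.

def validity_stats(registry: list, record: list):
    n = len(registry)
    tp = sum(r == 1 and m == 1 for r, m in zip(registry, record))
    tn = sum(r == 0 and m == 0 for r, m in zip(registry, record))
    fp = sum(r == 1 and m == 0 for r, m in zip(registry, record))
    fn = sum(r == 0 and m == 1 for r, m in zip(registry, record))

    agreement = (tp + tn) / n
    # Chance agreement from the marginal frequencies of the two sources.
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((tn + fn) / n) * ((tn + fp) / n)
    chance = p_yes + p_no
    kappa = (agreement - chance) / (1 - chance)

    sensitivity = tp / (tp + fn)  # registry detects care that was given
    specificity = tn / (tn + fp)  # registry correctly codes care not given
    return agreement, kappa, sensitivity, specificity
```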
We limited the eligibility for QI4, the quality indicator for adjuvant systemic therapy, to patients with stage II or III disease because, from 1993 through 1995, patients with tumors of 1-2 cm were just beginning to be considered for adjuvant therapy and consensus recommendations for this group were vague (31-33). We compared the proportion of patients who had care that met the quality indicators in California Cancer Registry data with that in medical record data. We calculated an overall quality score for each subject by determining the proportion of quality indicators that were met relative to those for which patients were eligible. For example, if a patient had stage I breast cancer and a lumpectomy, she was eligible for the first three quality indicators (QI1-QI3). If she had a lumpectomy but no axillary lymph node dissection and then received radiation therapy and tamoxifen, she met only two of three possible quality indicators (even though she received tamoxifen). Because she did not meet the eligibility criteria established for QI4, her data for this indicator would not be counted in her quality score. Her quality score would then be 2/3 or 0.67. The individual quality scores were averaged to create an overall quality score for the sample. We then compared the quality of care as measured by the quality score calculated from California Cancer Registry data with that calculated from medical record data.

To identify subgroups for which registry data might be less valid, we compared the difference in the quality scores determined from registry data and from medical record data for the following groups of patients: those with stage I, II, and III breast cancer; those younger than 70 years versus those 70 years old or older; white patients versus nonwhite patients; and those with comorbidity counts of 0, 1, and 2 or higher. We chose 70 years as our cut point for age because the literature on patterns of care in breast cancer suggests that patients older than 70 years are less likely to receive standard breast cancer treatment than are younger women (15,16,19,33-35). We used the χ2 test to assess differences across categorical variables and the Student t test to assess differences for continuous variables. To explore statistical interaction effects on registry data validity, we modeled the effects of age, race, disease stage (stage I versus stages II and III), number of comorbidities, and the interaction terms for age × stage, race × stage, age × race, age × comorbidity, and race × comorbidity on the difference between quality scores determined from medical record data and from registry data. All statistical tests were two-sided.
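The scoring rule in the worked example above can be expressed directly: a patient's score is the number of indicators met divided by the number for which she is eligible, and the sample score is the mean over patients. A minimal sketch (Python, with our own variable names; eligibility follows the QI definitions above):

```python
# Quality score: proportion of eligible indicators met, averaged over patients.
# Eligibility follows the definitions of QI1-QI4 given in the text.

def patient_quality_score(stage: int, had_bcs: bool, results: dict) -> float:
    """results maps indicator name -> True if the indicated care was received."""
    eligible = []
    if 1 <= stage <= 3:
        eligible += ["QI1", "QI2"]      # definitive surgery; node dissection
        if had_bcs:
            eligible.append("QI3")      # radiation after breast conservation
        if stage >= 2:
            eligible.append("QI4")      # adjuvant tamoxifen or chemotherapy
    met = sum(results.get(qi, False) for qi in eligible)
    return met / len(eligible) if eligible else float("nan")

# The worked example: stage I, lumpectomy, no node dissection, radiation given.
# Eligible for QI1-QI3 (not QI4); meets QI1 and QI3 -> score 2/3.
score = patient_quality_score(1, True, {"QI1": True, "QI2": False, "QI3": True})
assert abs(score - 2 / 3) < 1e-9
```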
RESULTS
The stage of a patient is determined from the results of the breast surgery, lymph node dissection and analysis, and testing for the presence of distant metastases, and this determination often includes both hospital-based and ambulatory services. We found only moderate agreement between the stage reported by the registry and that reported in the medical record (82%, κ = 0.73). Sensitivity of the registry data compared with the medical record data for stage was 80.6% for stage 0, 86.4% for stage I, 82.8% for stage II, 72.7% for stage III, and 41.7% for stage IV. The corresponding specificities were 99.6%, 93.6%, 93.1%, 98.3%, and 100.0%.
The accuracy of the California Cancer Registry did not appear to vary by age or race/ethnicity. Agreement for each of the variables was not statistically significantly different for patients younger than 70 years compared with those 70 years and older or for nonwhite patients compared with white patients.
To illustrate the importance of valid data on the results of quality measurement, we compared the average percentage of patients meeting four quality indicators from California Cancer Registry data with that obtained from medical record data (Table 4). Two of the quality indicators reflected hospital-based services (QI1 and QI2), and two reflected ambulatory services (QI3 and QI4). The percentages of patients whose care met the quality indicators for hospital-based services were not statistically significantly different when calculated from registry data and medical record data. Virtually all patients, 99% (95% CI = 98% to 100%), met the first quality indicator. For the second indicator, QI2, registry data indicated that 84% (95% CI = 79% to 89%) of patients had received the care; medical record data indicated that 88% (95% CI = 84% to 92%) had. However, the percentage of patients whose care appeared to meet the quality indicators for ambulatory services was statistically significantly underestimated by registry data. When California Cancer Registry data were used, only 63% (95% CI = 54% to 72%) of patients appeared to have received the indicated radiation therapy after breast-conserving surgery (QI3). When medical record data were used, however, 85% (95% CI = 79% to 91%) of patients appeared to have received such care. Similarly, 46% (95% CI = 37% to 55%) of patients from registry data and 90% (95% CI = 85% to 95%) of patients from medical record data appeared to receive the indicated adjuvant therapy (QI4).
The fitted model, with indicator variables equal to 1 when the condition in parentheses holds and coefficients expressed in percentage points, was:

QSMR - QSCCR = 4 + 5 × (age < 70 years) + 18 × (stage II or III) - 11 × (age < 70 years) × (stage II or III) - 4 × (number of comorbidities),
where QSMR = the quality score calculated from medical records, and QSCCR = the quality score calculated from the California Cancer Registry. The model predicts that, for a woman without clinically significant comorbidity who is 70 years old or older with stage I breast cancer, her quality score (percentage of quality indicators met) would be, on average, 4 percentage points (95% CI = 1 to 9 percentage points) greater with medical record data than with registry data. If this same patient were in the group younger than 70 years, her predicted quality score would be 9 percentage points (95% CI = 5 to 14 percentage points) greater from medical record data than from registry data. A woman with no clinically significant comorbidity and stage II or III breast cancer would be expected to have a quality score 22 percentage points (95% CI = 16 to 27 percentage points) greater if she was 70 years old or older and 16 percentage points (95% CI = 12 to 20 percentage points) greater if she was younger than 70 years old. Each additional comorbidity decreases the predicted difference in quality scores calculated from registry data compared with medical record data by 4 percentage points (95% CI = 6 to 8 percentage points).
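For readers who want to verify the quoted predictions, the sketch below (Python) evaluates the model with coefficients back-calculated from the predictions in the text; these coefficient values are implied by the quoted predicted differences rather than copied from the published model output.

```python
# Predicted difference QS_MR - QS_CCR, in percentage points, with coefficients
# back-calculated from the predictions quoted in the text (not taken directly
# from the published model table).

def predicted_difference(age_lt_70: bool, stage_2_or_3: bool, comorbidities: int) -> float:
    diff = 4.0                          # reference: >= 70 years, stage I, no comorbidity
    diff += 5.0 * age_lt_70             # younger patients: larger discrepancy
    diff += 18.0 * stage_2_or_3         # later stage: larger discrepancy
    diff -= 11.0 * (age_lt_70 and stage_2_or_3)  # age-by-stage interaction
    diff -= 4.0 * comorbidities         # each comorbidity narrows the gap
    return diff

# Reproduces the four predictions quoted in the text:
assert predicted_difference(False, False, 0) == 4.0   # >= 70 years, stage I
assert predicted_difference(True, False, 0) == 9.0    # < 70 years, stage I
assert predicted_difference(False, True, 0) == 22.0   # >= 70 years, stage II/III
assert predicted_difference(True, True, 0) == 16.0    # < 70 years, stage II/III
```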
DISCUSSION
If providers or health plans were compared by use of registry data, those whose patients are elderly or have more advanced disease would appear to provide worse care purely as an artifact of data quality. In addition, providers or health plans that deliver more care in an ambulatory setting might appear to have lower quality scores. Reporting quality of care derived from data with such validity problems could anger providers and seriously undermine public confidence in this process.
We conducted our study by use of a sample of patients diagnosed with breast cancer from a California health maintenance organization (HMO). Although our methods permitted access to the outpatient medical records for more than 80% of the sampled cases, this approach posed several limitations for the study. First, the resulting study sample reflected the characteristics of the patients enrolled in the HMO and was, therefore, somewhat older and less ethnically diverse than the overall population of women diagnosed with breast cancer in Los Angeles. Second, it is possible that the HMO's practice of contracting with medical groups and hospitals tended to exclude hospitals with limited resources, thereby limiting our ability to detect differences in the validity of cancer registry data. Third, cases were selected for this study by linking data from the California Cancer Registry with a file of women identified by the health plan as having ICD-9 and CPT codes for breast cancer. This protocol excluded from our study any cases without a breast cancer claim. Had our goal been to describe the quality of care of women in this health plan, this exclusion could have been an important bias; however, its impact on an evaluation of the validity of the registry data is likely to be minimal.
The results of this study highlight the importance of using data that are valid across clinical settings and time to measure quality of care. We found that, compared with medical record data, the validity of registry data varied with the setting of care, being less accurate for ambulatory services than for hospital-based care. Furthermore, the setting of care varied with patient characteristics, another source of bias in measurement of quality of care. As cancer care is increasingly focused in the outpatient setting, the use of data that are not valid for ambulatory services would substantially undercut efforts to accurately measure the quality of care. For example, if these data were used for surveillance of the quality of breast cancer care, the quality of ambulatory services would be underestimated. If used to identify areas for quality improvement, resources could be diverted away from the area most in need of improvement. If such data were used to hold providers accountable for the quality of their care, for example, by requiring that a certain standard of quality be met to be allowed to bill Medicare, some providers could inappropriately be penalized (5).
In addition to improving the accuracy of data on ambulatory services, a few other modifications of registry procedures are needed for registry data to be used for quality measurement. First, registries would need to augment their data collection efforts to include information about a patient's comorbidities; comorbidity data are necessary for valid measurement of the process and outcomes of care (39). Second, registries may need to expand their data collection efforts to provide greater clinical detail (e.g., dosage of chemotherapy drugs and number of treatments). Research is ongoing to determine how much clinical detail is needed to make valid assessments of the quality of care. Third, the time between case ascertainment by the hospital registries and data availability from the central registries, typically 2 years (29), would need to be reduced because more timely data are needed for use by stakeholders and policy makers evaluating the quality of care.
In spite of the challenges described, cancer registries have tremendous potential for quality measurement. Because of their regulatory authority, cancer registries are uniquely situated to identify a population-based sample of cancer patients, and they are still the best candidates to provide the infrastructure for measuring the quality of cancer care.
One approach to improving the accuracy of cancer registry data for quality measurement is to augment registry data with administrative data (4043), such as SEER data augmented with Medicare claims data (44,45). However, this approach has caveats that limit its application for any national effort to monitor quality of care. First, administrative data have limited clinical detail. Second, administrative data are not necessarily accurate and would need to be validated (46). Third, because of the fragmented nature of the U.S. health care system, no administrative database provides population-wide data. A database pieced together from various sources of administrative data for different cohorts of patients would be fraught with many problems. It is likely that the accuracy of such data would vary tremendously and that, as in this study, such data could interact with patient characteristics, setting of care, and other structural variables (e.g., type of health plan).
Further research is needed to explore novel strategies to obtain population-based data on quality of cancer care that would not require detailed data collection on every patient. Although claims data are not a panacea, if validated and found to be accurate, such data could be used when available. This procedure would allow resources for more detailed medical record review to be focused on those quality measures for which registry and claims data fall short. Another strategy would be to perform more detailed data collection (i.e., abstraction of the ambulatory medical records or patient surveys) on a sample of patients and then to use standard statistical techniques to impute the quality score for the entire population.
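As one illustration of the sampling-plus-imputation strategy just described, a simple two-phase estimator calibrates record-based scores to registry-based scores in an audited subsample and imputes record-based scores for everyone else. The linear calibration and the simple random subsample are assumptions of this sketch, not a method recommended by the study.

```python
# Two-phase estimate of the population quality score: registry-based scores
# are available for every patient; medical-record-based scores only for a
# randomly audited subsample. Linear calibration is an illustrative choice.

import statistics

def two_phase_estimate(registry_scores: dict, record_scores: dict) -> float:
    """registry_scores: patient id -> registry-based score (all patients);
    record_scores: patient id -> record-based score (audited subsample)."""
    pairs = [(registry_scores[i], y) for i, y in record_scores.items()]
    mx = statistics.mean(x for x, _ in pairs)
    my = statistics.mean(y for _, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    slope = (sum((x - mx) * (y - my) for x, y in pairs) / sxx) if sxx else 0.0
    intercept = my - slope * mx
    # Use the observed record-based score where audited; impute elsewhere.
    total = sum(record_scores.get(i, intercept + slope * x)
                for i, x in registry_scores.items())
    return total / len(registry_scores)
```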
If registries were to collect additional primary data and systematically review outpatient medical records (as was done in this study) or survey patients about the medical care they received, the additional cost would be roughly $150 million to $250 million annually. In contrast, current budgets are approximately $34 million per year for the National Program of Cancer Registries, $22 million per year for SEER, and $1.2 million per year for the National Cancer Data Base; however, the costs associated with data collection are borne by the reporting facilities (47). Unless central registry organizations (e.g., SEER, National Program of Cancer Registries, or the California Cancer Registry) assume a more active role in data collection, the costs of additional data collection would likely be borne by the reporting hospital registries and passed on to the purchasers of health care. Although this is a substantial investment, it represents only about 0.2% of the estimated overall annual costs for cancer of $107 billion (of which direct medical costs are $37 billion) (48,49). This level of spending is well within the range of investment in quality assessment and improvement made by other sectors of the economy, reported to be between 2% and 10% of total sales (50,51). Accurate data on the quality of cancer care are urgently needed. Health care purchasers and policy makers should consider investing in our cancer registry system to obtain these data.
NOTES
Supported by grants from the Susan G. Komen Breast Cancer Foundation and PacifiCare of California. Dr. Malin is supported by a CI-10 Damon Runyon-Lilly Clinical Investigator Award from the Damon Runyon Cancer Research Foundation. Dr. Ganz is supported by an American Cancer Society Clinical Research Professorship.
We thank William Wright, Sandy Liu, and the staff of the California Cancer Registry, as well as Kim Allory and Laura Epperson at PacifiCare. We gratefully acknowledge Tanya Barauskas, Cynthia Wang, Amber Pakilit, and Christine Reifel for research assistance and Christiann Savage for manuscript preparation. Dr. Malin is indebted to the members of her dissertation committee, Patricia Ganz, Katherine Kahn, John Glaspy, Robert Brook, and Ronald Andersen, for their guidance and support.
REFERENCES
1 Hewitt M, Simone JV, editors. Ensuring quality cancer care. National Cancer Policy Board, Institute of Medicine and National Research Council. Washington (DC): National Academy Press; 1999.
2 The Susan G. Komen Breast Cancer Foundation. News. Leading breast cancer organization partners with ASCO to evaluate cancer care in the U.S.; ensure all breast cancer patients receive high-quality care. [Accessed: 08/13/2001.] Available from: http://www.breastcancerinfo.com/news/article.asp?ArticleID=72.
3 Skolnick AA. A FACCT-filled agenda for public information. Foundation for Accountability. JAMA 1997;278:1558.
4 ASCO initiates study of quality of cancer care. Oncology News 2000;9:1.
5 Kahn KL, Malin JL, Adams J, Ganz PA. Developing a reliable, valid, and feasible plan for quality-of-care measurement for cancer: how should we measure? Med Care. In press 2002.
6 Nattinger AB, Gottleib MS, Hoffman RG, Walker AP, Goodwin JS. Minimal increase in use of breast-conserving surgery from 1986 to 1990. Med Care 1996;34:479-89.
7 Warnecke RB, Johnson TP, Kaluzny AD, Ford LG. The community clinical oncology program: its effect on clinical practice. Jt Comm J Qual Improv 1995;21:336-9.
8 Hillner BE, McDonald MK, Penberthy L, Desch CE, Smith TJ, Maddux P, et al. Measuring standards of care for early breast cancer in an insured population. J Clin Oncol 1997;15:1401-8.
9 Walsh GL, Winn RJ. Baseline institutional compliance with NCCN guidelines: non-small-cell lung cancer. Oncology (Huntingt) 1997;11:161-70.
10 Potosky AL, Harlan LC, Stanford JL, Gilliland FD, Hamilton AS, Albertsen PC, et al. Prostate cancer practice patterns and quality of life: the Prostate Cancer Outcomes Study. J Natl Cancer Inst 1999;91:1719-24.
11 Lu-Yao GL, Potosky AL, Albertsen PC, Wasson JH, Barry MJ, Wennberg JE. Follow-up prostate cancer treatments after radical prostatectomy: a population-based study. J Natl Cancer Inst 1996;88:166-73.
12 Young WW, Marks SM, Kohler SA, Hsu AY. Dissemination of clinical results. Mastectomy versus lumpectomy and radiation therapy. Med Care 1996;34:1003-17.
13 Guadagnoli E, Shapiro C, Gurwitz JH, Silliman RA, Weeks JC, Borbas C, et al. Age-related patterns of care: evidence against ageism in the treatment of early-stage breast cancer. J Clin Oncol 1997;15:2338-44.
14 Farrow DC, Hunt WC, Samet JM. Geographic variation in the treatment of localized breast cancer. N Engl J Med 1992;326:1097-101.
15 Ballard-Barbash R, Potosky AL, Harlan LC, Nayfield SG, Kessler LG. Factors associated with surgical and radiation therapy for early stage breast cancer in older women. J Natl Cancer Inst 1996;88:716-26.
16 Hillner BE, Penberthy L, Desch CE, McDonald MK, Smith TJ, Retchin SM. Variation in staging and treatment of local and regional breast cancer in the elderly. Breast Cancer Res Treat 1996;40:75-86.
17 Lazovich DA, White E, Thomas DB, Moe RE. Underutilization of breast-conserving surgery and radiation therapy among women with stage I or II breast cancer. JAMA 1991;266:3433-8.
18 Howe HL, Lehnherr M, Katterhagen JG. Effects of physician outreach programs on rural-urban differences in breast cancer management. J Rural Health 1997;13:109-17.
19 Busch E, Kemeny M, Fremgen A, Osteen RT, Winchester DP, Clive RE. Patterns of breast cancer care in the elderly. Cancer 1996;78:101-11.
20 Sawka C, Olivotto I, Coldman A, Goel V, Holowaty E, Hislop TG. The association between population-based treatment guidelines and adjuvant therapy for node-negative breast cancer. British Columbia/Ontario Working Group. Br J Cancer 1997;75:1534-42.
21 National Cancer Institute. Process of cancer data collection. SEER's Training Web Site. Unit 3. Cancer data. [Accessed: 08/18/2001.] Available from: http://training.seer.cancer.gov/module_cancer_registration/unit3_how_collect.html#.
22 NAACCR (North American Association of Central Cancer Registries). Standards for completeness, quality, analysis, and management of data. Vol 3. North American Association of Central Cancer Registries. Standards for Cancer Registries; September 2000. Available from: http://www.naaccr.org/Standards/files/VolumeIIIwithprefaceandrefs.pdf
23 Austin DF. Types of registries: goals and objectives. In: Menck H, Smart C, editors. Central cancer registries: design, management, and use. Langhorne (PA): Harwood Academic Publishers; 1994. p. 1-12.
24 Izquierdo JN, Schoenbach VJ. The potential and limitations of data from population-based state cancer registries. Am J Public Health 2000;90:695-8.
25 Centers for Disease Control and Prevention. National Program of Cancer Registries-Cancer Surveillance System (NPCR-CSS). Rationale and approach (July 1999). [Accessed: 08/10/2001.] Available from: http://www.cdc.gov/cancer/npcr/npcr-css.htm.
26 Young JL. The hospital-based cancer registry. In: Jensen OM, Parkin DM, Maclennan R, Muir CS, Skeet RG, editors. Cancer registration principles and methods. Lyon (France): IARC; 1991. p. 177-84.
27 Zippin C, Lum D, Hankey BF. Completeness of hospital cancer case reporting from the SEER Program of the National Cancer Institute. Cancer 1995;76:2343-50.
28 American Joint Committee on Cancer. AJCC cancer staging manual. 5th ed. Philadelphia (PA): Lippincott-Raven Publishers; 1997.
29 National Cancer Institute. About SEER. [Accessed: 08/10/01.] Available from: http://seer.cancer.gov/AboutSEER.html.
30 Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 1987;40:373-83.
31 Goldhirsch A, Wood WC, Senn HJ, Glick JH, Gelber RD. Meeting highlights: international consensus panel on the treatment of primary breast cancer. J Natl Cancer Inst 1995;87:1441-5.
32 Consensus Statements/NIH Consensus Development Program. 81. Treatment of early-stage breast cancer. National Institutes of Health-Consensus Development Conference Statement, June 18-21, 1990. [Accessed 09/09/2001.] Available from: http://dowland.cit.nih.gov/odp/consensus/cons/081/081_statement.htm.
33 Lazovich D, Solomon CC, Thomas DB, Moe RE, White E. Breast conservation therapy in the United States following the 1990 National Institutes of Health Consensus Development Conference on the treatment of patients with early stage invasive breast carcinoma. Cancer 1999;86:628-37.
34 Goel V, Olivotto I, Hislop TG, Sawka C, Coldman A, Holowaty EJ. Patterns of initial management of node-negative breast cancer in two Canadian provinces. British Columbia/Ontario Working Group. CMAJ 1997;156:25-35.
35 Morrow M, White J, Moughan J, Owen J, Pajack T, Sylvester J, et al. Factors predicting the use of breast-conserving therapy in stage I and II breast carcinoma. J Clin Oncol 2001;19:2254-62.
36 Shi L. Type of health insurance and the quality of primary care experience. Am J Public Health 2000;90:1848-55.
37 Monheit AC, Vistnes JP. Race/ethnicity and health insurance status: 1987 and 1996. Med Care Res Rev 2000;57 Suppl 1:11-35.
38 Bickell NA, Chassin MR. Determining the quality of breast cancer care: do tumor registries measure up? Ann Intern Med 2000;132:705-10.
39 Brook RH, McGlynn EA, Cleary PD. Quality of health care. Part 2: measuring quality of care. N Engl J Med 1996;335:966-70.
40 McClish DK, Penberthy L, Whittemore M, Newschaffer C, Woolard D, Desch CE, et al. Ability of Medicare claims data and cancer registries to identify cancer cases and treatment. Am J Epidemiol 1997;145:227-33.
41 Brooks JM, Chrischilles E, Scott S, Ritho J, Chen-Hardee S. Information gained from linking SEER Cancer Registry Data to state-level hospital discharge abstracts. Surveillance, Epidemiology, and End Results. Med Care 2000;38:1131-40.
42 Doebbeling BN, Wyant DK, McCoy KD, Riggs S, Woolson RF, Wagner D, et al. Linked insurance-tumor registry database for health services research. Med Care 1999;37:1105-15.
43 Potosky AL, Riley GF, Lubitz JD, Mentnech RM, Kessler LG. Potential for cancer related health services research using a linked Medicare-tumor registry database. Med Care 1993;31:732-48.
44 Potosky AL, Merrill RM, Riley GF, Taplin SH, Barlow W, Fireman BH, et al. Breast cancer survival and treatment in health maintenance organization and fee-for-service settings. J Natl Cancer Inst 1997;89:1683-91.
45 Riley GF, Potosky AL, Klabunde CN, Warren JL, Ballard-Barbash R. Stage at diagnosis and treatment patterns among older women with breast cancer: an HMO and fee-for-service comparison. JAMA 1999;281:720-6.
46 Du X, Goodwin JS. Patterns of use of chemotherapy for breast cancer in older women: findings from Medicare claims data. J Clin Oncol 2001;19:1455-61.
47 Hewitt M, Simone JV, editors. Enhancing data systems to improve the quality of cancer care. National Cancer Policy Board, Institute of Medicine and National Research Council. Washington (DC): National Academy Press; 2000.
48 American Cancer Society. Cancer facts and figures 2000. [Accessed 04/23/2002.] Available from: http://www.cancer.org/downloads/STT/F&F00.pdf.
49 Brown ML. The national economic burden of cancer: an update. J Natl Cancer Inst 1990;82:1811-4.
50 Halevy A, Naveh E. Measuring and reducing the national cost of non-quality. Total Quality Management 2000;11:1095-110.
51 Giakatis G, Enkawa T, Washitani K. Hidden quality costs and the distinction between quality cost and quality loss. Total Quality Management 2001;12:179-85.
Manuscript received November 9, 2001; revised March 18, 2002; accepted March 29, 2002.