Validation of Self-reported Screening Mammography Histories among Women with and without Breast Cancer

Sandra A. Norman1 , A. Russell Localio1, Lan Zhou1, Leslie Bernstein2, Ralph J. Coates3, Elaine W. Flagg4, Polly A. Marchbanks5, Kathleen E. Malone6, Linda K. Weiss7, Nancy C. Lee3 and Marion R. Nadel3

1 Center for Clinical Epidemiology and Biostatistics and Department of Epidemiology and Biostatistics, University of Pennsylvania, Philadelphia, PA.
2 Department of Preventive Medicine, University of Southern California, Los Angeles, CA.
3 Division of Cancer Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA.
4 Division of Global Migration and Quarantine, Centers for Disease Control and Prevention, Atlanta, GA.
5 Division of Reproductive Health, Centers for Disease Control and Prevention, Atlanta, GA.
6 Fred Hutchinson Cancer Research Center, Seattle, WA.
7 Population Studies and Prevention Program, Karmanos Cancer Institute at Wayne State University, Detroit, MI.

Received for publication September 30, 2002; accepted for publication January 31, 2003.


    ABSTRACT
 
As part of a case-control study of the efficacy of screening mammography, the authors validated the mammography histories of 2,495 women aged 40–64 years with incident breast cancer diagnosed in 1994–1998 and a 25% random sample of 615 controls never diagnosed with breast cancer, all reporting a mammogram in the past 5 years. Subjects from five metropolitan areas of the United States were cross-classified by facility records ("gold standard") and self-report according to history of a recent screening mammogram (within 1 year or within 2 years). Sensitivity and specificity of self-reported screening at 1 year were 0.93 and 0.82, respectively, for cases and 0.92 and 0.80 for controls. At 2 years, sensitivity and specificity were 0.97 and 0.78 for both cases and controls. Confidence intervals for the differences in sensitivity and specificity were narrow and included zero. Scant evidence was found of telescoping (recollection of events as more recent than actual). Findings suggest that, in an interview-based case-control study of the efficacy of screening mammography, 1) estimated true prevalences of recent screening mammography adjusted for sensitivity and specificity will be slightly lower than self-reported prevalences, and 2) differential misclassification of exposure status is slight. Therefore, odds ratios will likely be biased toward the null, underestimating screening efficacy.

breast neoplasms; case-control studies; mammography; mass screening; sensitivity and specificity

Abbreviations: CARE, Contraceptive and Reproductive Experiences; CI, confidence interval.


    INTRODUCTION
 
Self-reported mammography histories are a mainstay of cancer-control surveillance. Population-based surveys are used to track screening rates over time and to target interventions. Because the accuracy of self-report is always a potential concern, numerous studies have compared self-reported mammography histories with facility data (1–12).

Case-control studies of the efficacy of screening mammography also require detailed and accurate mammography histories because past screening of cases and controls during the detectable preclinical period is the exposure of interest. This period (sojourn time) is the time prior to clinical diagnosis during which a screening test can detect cancer; for mammography, it is estimated to be 1–4 years (13–15).

To our knowledge, there have been no case-control studies of the efficacy of screening mammography based on self-report. Past case-control studies have used deceased patients (16, 17), obtaining screening histories through medical or screening program records, or relying on the presence or absence of population-based screening programs in the geographic areas in which the study participants lived (16).

We are conducting a case-control interview study of screening efficacy among women aged 40–64 years within a large, population-based case-control study of risk factors for incident breast cancer, the National Institute of Child Health and Human Development Women’s Contraceptive and Reproductive Experiences (CARE) Study (18). By using a hybrid design entailing follow-up of all cases for mortality from breast cancer (19), we will be able to assess how efficacious screening mammography is in reducing the risk of death from breast cancer among women aged 40–64 years.

Concerns about the accuracy of self-report are more complex in comparative studies than in descriptive surveys. We know of no prior validation studies that have compared the accuracy of screening mammography histories between women with and without breast cancer. Both nondifferential and differential misclassification of exposure by cases and controls can result in biased estimates of association. It is generally assumed that cases recall past exposures more accurately than do controls, particularly if, in the course of their cancer being diagnosed, they have been asked about tests related to that diagnosis. Statistical techniques that adjust for differential recall require careful measurement of the sensitivity and specificity of the self-report in cases and controls compared with a "gold standard" (20). Some studies suggest that women recall their mammograms as taking place more recently than they actually did (2, 3, 8). This "telescoping" can produce false positives when self-reporting is used to estimate the frequency of recent mammography. In these studies, specificity (21) has been substantially lower than sensitivity. In this article, we estimate the validity of self-reports of screening mammography in the context of a case-control study.


    MATERIALS AND METHODS
 
Study subjects
As part of the Women’s CARE Study, women who had incident, histologically confirmed, first primary invasive breast cancer (n = 4,575) and controls obtained by random digit dialing who had no previous breast cancer (n = 4,682) were interviewed from August 1994 through December 1998 in five metropolitan areas of the United States: Atlanta, Georgia; Seattle, Washington; Detroit, Michigan; Los Angeles, California; and Philadelphia, Pennsylvania. Cases and controls were frequency matched by age (in 5-year age groups), race (African American or White), and site. The Centers for Disease Control and Prevention (Atlanta, Georgia) served as the data coordinating center (18). Institutional review board approval for the study was obtained at all participating sites.

The reference date for cases was the month and year that their cancer was diagnosed. For controls, the reference date was considered the month and year that they were identified through random digit dialing.

Inclusion criteria
The subset of Women’s CARE Study respondents included in the validation study 1) were aged 40–64 years on the reference date; 2) reported at least one mammogram in the 5 years before the reference date (or, for cases, reported that their cancer was discovered by a screening mammogram even if they did not report any other mammograms in the 5 years before the reference date); and 3) were interviewed on or after November 1, 1995. Only after that date were women asked to provide facility information for mammograms received during the reference month. Each woman included signed a consent form and gave us the names of the facilities she used.

Self-reported mammography history
The Women’s CARE Study questionnaire defined a mammogram as "an x-ray taken only of the breasts by a machine that presses the breast against a plate." From self-report, we obtained age at first mammogram, month and year of the most recent mammogram before the reference date, and year of all intervening mammograms in the 5 years before the reference date. We assumed that respondents would not be able to recall the precise month for mammograms before their most recent examination. For each mammogram reported on the questionnaire, respondents were asked whether the examination was 1) for a specific breast problem, 2) for follow-up of a previous breast problem, 3) part of a routine physical examination or was a screening test, or 4) for another reason. The questionnaire did not specifically ask about mammograms in the reference month. Rather, all questions about mammograms were asked for the time period before the reference month to exclude mammograms expected to occur as part of the diagnostic work-up. However, we were able to determine whether a case had a screening mammogram in the reference month because the questionnaire also asked about each breast surgery or procedure that occurred on or before the date of the interview, the month and year of this surgery, and how the problem that necessitated the surgery was first discovered (e.g., routine screening mammogram). Thus, if a woman’s breast cancer was discovered with her first mammogram, and it was a screening mammogram in the reference month, this information became evident because of the reasons for breast surgery.

Facility information
Facility records were used to document the presence and timing of the self-reported mammograms. If a woman reported no mammograms within the prior 5 years, we accepted that statement as the truth. With no means of searching the hundreds of facilities at each study site, we did not validate negative reports. Respondents provided the names of facilities used for mammograms in the 5 years before the reference date and, after November 1, 1995, the name of any facility used in the reference month. We validated information on mammograms in the reference month only for cases who reported that their breast surgery resulted from a problem discovered by mammography screening. We asked every facility listed by the respondent, including those out of the study area, for the dates of all mammograms she received for at least 5 years prior to the reference date up to the date of our request. If a facility did not have any information about the respondent, we also contacted other facilities with similar names or in close geographic proximity.

We did not confirm from the facility whether it conducted a screening or a diagnostic mammogram. Previous studies indicate that such information, if obtainable, frequently is not correct because restrictive insurance requirements lead to errors in recording the indications for mammography (11, 22).

Data analysis
Summary variables
Self-reported screening mammography history. All analyses required an estimate of the number of months from the respondent’s most recent self-reported screening mammogram to a defined endpoint. For cases, this endpoint was the date of diagnosis. A mammogram in the reference month sufficed if the case said that her cancer was discovered by screening. For controls, the analogous endpoint was the month prior to the reference month. To analyze whether a screening mammogram occurred within a specific time period, we added 1 month to the time period for the controls to adjust for this difference. Thus, if the time frame was within the prior year, the most recent screening mammogram for a case could have occurred during, or 12 months before, the reference month. For controls, the most recent screening mammogram could have occurred in the 13 months before the reference month.
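As a compact sketch of the case/control window logic above (the function name and month encoding are ours, chosen for illustration, not taken from the study protocol):

```python
def in_window(months_before_ref, is_case, years):
    """Is a screening mammogram within the `years`-year window?

    `months_before_ref` counts months before the reference month
    (0 = the reference month itself).  A case may count a screening
    mammogram in the reference month (if it led to her diagnosis);
    a control's window is shifted back one month, since her endpoint
    is the month prior to the reference month, giving 13 months for
    the 1-year time frame versus the cases' reference month plus 12.
    """
    if is_case:
        return 0 <= months_before_ref <= 12 * years
    return 1 <= months_before_ref <= 12 * years + 1
```

For the 1-year frame, `in_window(0, is_case=True, years=1)` counts a case's reference-month screening mammogram, while a control's mammogram may fall anywhere in the 13 months before her reference month.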

If only patient age at the time of the mammogram was available and the exact month of the mammogram was not, we chose a date equal to the patient’s birth month plus 5 or 6 months (assigned randomly) as the mammogram date. If we had only the year of the mammogram, we assumed June or July (randomly). When the respondent could not recall a year, we set the response to missing.
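A minimal sketch of this random imputation rule; the function name and month arithmetic are ours, added only to make the two branches of the rule concrete:

```python
import random

def impute_month(birth_month=None):
    """Impute a mammogram month when the exact month is unknown.

    If only the woman's age at the mammogram is known, assign her
    birth month plus 5 or 6 months (chosen at random); if only the
    calendar year is known, assign June or July at random.
    Months are numbered 1-12.
    """
    if birth_month is not None:
        return (birth_month - 1 + random.choice([5, 6])) % 12 + 1
    return random.choice([6, 7])
```

For example, a woman with an October birth month would be assigned March or April of the following year; with only a year on record, she would be assigned June or July of that year.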

Comparison of self-report and facility information. We evaluated the correspondence between the timing of the self-report and the facility record by using two time frames: within 1 year and within 2 years. Both are estimated to be within the detectable preclinical period for breast cancer. We cross-classified respondents’ self-reports and facilities’ records by whether they were within the specified time frame. Using the 2-year time frame for illustration, for cases’ self-report of screening mammograms, "within 2 years" was defined as a mammogram in the 24 months prior to the reference date or, if the study subject reported that her cancer was discovered by screening, a mammogram in the reference month prior to the date of diagnosis. "Not within 2 years" was defined as a mammogram more than 24 months but less than or equal to 60 months before the reference date or no self-reported mammogram within the prior 5 years. More specifically, if a case’s cancer was discovered by a screening mammogram, we generally used the self-reported date of her most recent screening mammogram. However, some of these respondents either did not report any mammograms at all prior to the reference date or reported their most recent self-reported screening mammogram to be more than 24 months ago (using the 2-year time frame). For these women, we adopted the reference date as the time of their most recent screening mammogram. For controls’ self-report, "within 2 years" was defined as a screening mammogram in the 25 months prior to the reference date and "not within 2 years" as a screening mammogram more than 25 months but 60 months or less before the reference date, or none within the past 5 years. If no facility could find a mammogram for the respondent, we classified the woman as having no matching mammogram within the prior 5 years.

To test the robustness of our assumptions, we used different approaches to assess the correspondence between facility records and self-report (table 1). For approaches I and II, we started with the date of a woman’s most recent self-reported screening mammogram and then located the facility-reported mammogram closest in time. Other approaches (III and IV) defined a match as any facility-reported mammogram in the same dichotomous time period as the self-report. For example, using the 2-year time frame, if a case reported a screening mammogram at 22 months before diagnosis and there were two facility reports, 25 months and 10 months before diagnosis, then approaches I and II would find a mismatch between the self-report and the facility report because the closer facility mammogram (25 months) did not occur within 2 years. However, approaches III and IV would declare this a match because there was a self-reported screening mammogram within 2 years (22 months) and a facility-reported mammogram within 2 years (10 months). This scenario would likely be an error because the facility-reported mammogram at 25 months was more likely to be the one the respondent recalled. Finally, we repeated the analyses by omitting any facility reports in the reference month (approaches II and IV) because they could be part of the diagnostic work-up and could lead to more matches than were justified. Excluding matches based on a facility mammogram in the reference month was most restrictive; if a case had only one screening mammogram and it was in the reference month, the corresponding facility-reported mammogram would be missed and a true match excluded.
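The two families of matching rules can be sketched as follows (our own function names; times are expressed as months before the reference date, and the window is in months, e.g., 24 for the 2-year frame):

```python
def match_closest(self_mo, facility_mos, window_mo):
    """Approaches I/II: pair the self-reported screening mammogram with
    the facility-reported mammogram closest in time, then ask whether
    that facility date falls within the window."""
    closest = min(facility_mos, key=lambda m: abs(m - self_mo))
    return closest <= window_mo

def match_any_in_window(self_mo, facility_mos, window_mo):
    """Approaches III/IV: declare a match whenever the self-report and
    at least one facility-reported mammogram both fall in the window."""
    return self_mo <= window_mo and any(m <= window_mo for m in facility_mos)
```

On the worked example from the text (self-report at 22 months; facility reports at 25 and 10 months; 24-month window), `match_closest` returns a mismatch because the nearest facility mammogram (25 months) lies outside the window, whereas `match_any_in_window` returns a match.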


TABLE 1. Possible methods for defining a match between self-reported and facility-reported mammograms*
 
We adopted approach I a priori as the primary definition of the correspondence between self- and facility-reported mammography because of the manner of data collection. That is, respondents were asked for all relevant dates, facilities were asked for all recorded mammograms, and we attempted to link the self-report to the closest facility record.

Our analyses relied on the usual statistics of diagnostic testing, sensitivity, specificity, and positive predictive value (21) and were based on the assumption that the matching facility report was the gold standard. By using the estimated sensitivity and specificity, we estimated the true prevalence of screening among cases and controls, respectively, as computed by Thompson (20).
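A minimal sketch of this adjustment, assuming the standard misclassification correction for an imperfect binary measure (the exact computation in Thompson (20) may differ in detail; the function name is ours):

```python
def corrected_prevalence(p_obs, sens, spec):
    """Estimate the true prevalence of recent screening from the
    observed (self-reported) prevalence `p_obs`, given the sensitivity
    and specificity of self-report against the facility gold standard:
    pi = (p_obs + spec - 1) / (sens + spec - 1)."""
    return (p_obs + spec - 1.0) / (sens + spec - 1.0)

# Illustration with the 2-year estimates (sensitivity 0.97,
# specificity 0.78): a self-reported prevalence of 0.80 corrects
# downward to about 0.77.
pi_hat = corrected_prevalence(0.80, sens=0.97, spec=0.78)
```

Because specificity is below 1, the self-reported prevalence overstates the true prevalence, and the gap narrows as the self-reported prevalence rises, consistent with the pattern reported in table 5.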

To assess telescoping, we restricted our analysis to the controls because 1) for cases, the phenomenon of telescoping could easily be confounded by their recollection of events related to the diagnosis of cancer; and 2) a large proportion of the cases’ most recent screening mammograms occurred very near the reference date. We first examined the distribution of the signed (+/-) difference in months between respondents’ most recent self-reported screening mammogram and the facility-reported mammogram nearest in time. Second, by limiting our analysis to women who reported a screening mammogram in the prior 5 years and for whom there was a facility record of a mammogram, we calculated the proportion of self-reports that were confirmed by facility data when the most recent self-reported mammogram was within the 1- or 2-year cutpoints.


    RESULTS
 
Patient recruitment and facility responses
Of the Women’s CARE Study participants eligible for the validation study, 97.7 percent of cases and 97.4 percent of controls consented to mammography record retrieval, resulting in 2,495 eligible cases and 615 randomly selected controls for the validation study. Among controls, 37 percent were aged 40–49 years and 63 percent were aged 50–64 years; 66 percent were White and 34 percent were Black. The distribution by site was Atlanta, 17 percent; Seattle, 23 percent; Detroit, 18 percent; Philadelphia, 16 percent; and Los Angeles, 26 percent. There were no substantial differences in these distributions between cases and controls.

Overall, we could not obtain mammography information from facilities for 4 percent (96/2,495) of the cases and 9 percent (54/615) of the controls. Only rarely was there no response from any of the facilities listed (10 cases, six controls). More often, the facilities found no record of the woman (41 cases, 22 controls) or found some information but not for a mammogram (43 cases, 23 controls). We found no evidence that any facilities were particularly nonresponsive or that individual facilities changed reporting patterns over time.

Correspondence of self-reported and facility information
Table 2 cross-classifies self-reported and facility data for cases and controls for two different periods: within 1 year and within 2 years of the reference date. For the 2-year cutpoint, almost half of the false positives reflected the absence of any mammography date from a facility (100/225 (44 percent) for cases, 38/78 (49 percent) for controls). Using the 1-year cutpoint, the comparable proportions for cases and controls were 29 percent and 24 percent.


TABLE 2. Comparison of self-reported and facility-reported data using approach I,* Women’s CARE† Study, United States, 1994–1998
 
Contrasting cases and controls
Table 3 illustrates differences, and their 95 percent confidence intervals, in the sensitivity, specificity, and positive predictive value of screening histories between cases and controls. The differences in sensitivity and specificity were small (ranging from 0 to 2 percentage points), and the confidence intervals were narrow and included zero. The positive predictive value was higher for the screening histories of cases than for controls, reflecting the higher prevalence of recent facility-reported mammograms among cases.


TABLE 3. Differences in sensitivity, specificity, and PPV* between cases and controls when approach I† was used, Women’s CARE* Study, United States, 1994–1998
 
Effect of age
Sensitivity was significantly (p < 0.001) higher for older women (aged 50–64 years) than for younger women (aged 40–49 years); the reverse was true for specificity. Average values, after controlling for race, site, and case/control status, were as follows: age 40–49 years, sensitivity 0.94 (95 percent confidence interval (CI): 0.93, 0.96) and specificity 0.74 (95 percent CI: 0.70, 0.78); age 50–64 years, sensitivity 0.98 (95 percent CI: 0.97, 0.99) and specificity 0.63 (95 percent CI: 0.59, 0.67).

Evidence of "telescoping"
We found only minimal evidence of telescoping among the controls. The median and modal differences in months between the most recent self-reported mammogram and the facility-reported mammogram closest in time were both zero. The mean difference was –4 months, a direction consistent with telescoping, varying across sites from –2 months to –7 months. The phenomenon did not appear to be caused by most women recalling their mammogram as more recent than actual but rather by a few large discrepancies between self-reports and facility reports. Twenty-eight (5.7 percent) of the 495 controls involved in the analysis accounted for 60 percent of the telescoping.

Robustness of results to changes in assumptions
As expected, both specificity and positive predictive values for cases were lower when we used approaches that excluded facility reports in the reference month (approaches II and IV, table 1). Matches based on a facility report in the reference month that would have been true positives with approaches I or III became false positives (table 4). For example, using the 2-year window, 16 percent (159/970) of cases who reported that their cancer was discovered by screening would have been misclassified without the question on how their cancer was discovered. Sixty-two of these 159 cases did not report any mammogram before the reference month, and 97 reported that their most recent mammogram before the reference month was more than 24 months ago. Because these cases quite likely had their most recent screening mammogram in the reference month, not including facility matches in the reference month would cause such matches to be classified as false positives. Controls were not affected by this decision because facility-reported mammograms in the reference month were not counted.


TABLE 4. Sensitivity, specificity, and PPV* using all four approaches,† Women’s CARE* Study, United States, 1994–1998
 
Our analyses assumed that the facility-reported mammogram closest in time to a respondent’s most recently reported screening mammogram was, in fact, a screening mammogram. However, 11 percent of cases and 2 percent of controls reported both screening and nonscreening mammograms within the 2 years before the reference date. When we adopted a more conservative approach in these situations based on the proximity of the self-reported screening mammogram to the facility-reported mammogram, sensitivity remained unchanged and specificity decreased slightly, from 0.78 to 0.73 for cases and from 0.78 to 0.76 for controls (approach I). Confidence intervals for the differences between cases and controls in sensitivity and specificity still included zero.

Estimating the true prevalence of screening mammography
Table 5 gives estimates of the true prevalence of screening mammography (π) for various levels of self-reported screening (p) based on corrections in which our estimates of sensitivity and specificity were used. For both cases and controls, the self-reports overstated the true prevalence of screening; however, the estimates became closer as self-reported prevalence increased.


TABLE 5. Comparison of self-reported prevalences of screening and corrected prevalences based on sensitivity and specificity (approach I*), Women’s CARE† Study, United States, 1994–1998
 

    DISCUSSION
 
To our knowledge, this study of a diverse sample of women from five urban areas of the United States is the largest population-based mammography validation study to date among women without breast cancer. In addition, to our knowledge, it is the only such study among women with breast cancer. Other population-based mammography validation studies have been smaller and confined to single geographic areas, limiting their generalizability (2, 8, 11). Studies conducted with health maintenance organization participants (3–6) allowed easy access to medical records for validation but were not generalizable to populations with different access to health care.

Sensitivity and specificity of self-reported screening mammography were high among both cases and controls (table 3). High sensitivity is consistent with the results of most other validation studies (1–6, 8): perhaps if a woman truly has had a recent screening mammogram, this information will be reported accurately. Specificity in our study, although lower than sensitivity, was considerably higher than in most other studies across a broad range of assumptions about the concordance between self-reported and facility data. In several other studies, specificity was about 0.50–0.60 for at least one of the time frames measured (1–5, 8); in our study, specificity was no lower than 0.78 for our base-case assumptions.

One possible explanation for this discrepancy is that our study, conducted from 1994 to 1998, was more recent than the others, which were conducted in the late 1980s and early to mid-1990s. Recent, extensive public health efforts to increase screening might have promoted better understanding of mammography, thereby decreasing the false-positive rate. Furthermore, if women are now more likely to obtain regular mammograms, they might recall the mammography dates more accurately.

Contrary to other reports about self-reported screening mammograms being on average 3–4 months more recent than actual (2, 3, 8), we found that the limited telescoping among controls was driven by a few aberrant responses, including potentially incorrect facility reports. Self-reported mammography histories of cases and controls were similarly accurate, indicating nondifferential misclassification of exposure. Our estimates and the narrow confidence intervals suggest that the sensitivity and specificity of self-reported histories among cases and controls differ by no more than a few percentage points (table 3).

This study has several limitations. If a woman reported no mammograms in the prior 5 years, we assumed that this report was accurate since prior studies in closed medical systems have found negative predictive values of a self-report of no mammogram close to 100 percent (2, 4, 6). This decision could have overestimated specificity but only in the unlikely event that facilities found mammograms that the respondents failed to report. Given the high negative predictive value, it is also unlikely that this decision overestimated sensitivity by placing potential false-negative reports in the true-negative cell.

Estimates of sensitivity and specificity could have also been affected because we relied on the subjects to name the mammography facilities. We may have overestimated sensitivity if a woman overlooked a recent mammogram at a different facility. In addition, if a contacted facility could not find information on a woman, we were not able to allocate the error to the subject or to the facility. Although we contacted other plausible facilities, we could not check them all. Our estimates of specificity assumed that the subject was in error for all false positives. If we assumed that some or all of these instances reflected facility errors, the specificity would have been higher.

Another limitation results from the form of the questions about screening. Our questionnaire elicited information only about mammograms before the reference month. Screening mammograms in the reference month that led to the diagnosis of cancer were captured by a separate question on how the problem leading to the breast surgery was discovered. For most of the women whose cancer was discovered by a screening mammogram, we did not have to rely on this question alone. However, for 16 percent of these women, we had to assume that the mammogram occurred in the reference month. In addition, we had to randomly impute the month of the most recent self-reported screening mammogram when we did not have this information; however, doing so should have introduced only random error and should not have biased our overall estimates of sensitivity or specificity.

Finally, we did not seek the reason for the mammogram from the facility because previous studies indicated that such information was frequently incorrect (11, 22). Instead, we assumed that the facility-reported mammogram closest in time to the most recent self-reported screening mammogram was also a screening mammogram. Our results were not materially changed when we adopted a more conservative approach to declaring a match between a self-reported screening mammogram and a facility-reported mammogram.

In summary, our validation study contributes to the evaluation of mammography screening in two ways. First, use of controls enabled us to adjust estimates of screening prevalence based on self-reports to more closely reflect the true prevalence of screening in the population. For example, according to nationwide data from the Behavioral Risk Factor Surveillance System for the year 2000, the proportions of women aged 40–49, 50–59, 60–64, and ≥65 years reporting mammograms in the prior year were 0.67, 0.74, 0.77, and 0.72, respectively, and the proportions who had mammograms in the prior 2 years were above 0.80 for all age groups (23). Over 90 percent of the mammograms were for routine screening. At these screening levels, our results indicate that self-reports of a recent screening mammogram somewhat overestimate the true prevalence of such screening.

Second, by including women both with and without breast cancer, we were able to compare the sensitivity and specificity of self-reported screening mammography histories in these two groups. Such information is needed to assess the validity of case-control studies of the efficacy of screening mammography (20). The similar sensitivity and specificity of the self-report of cases and controls suggest that nondifferential rather than differential misclassification is a more likely source of error and that odds ratios might understate the true efficacy of screening mammography in preventing mortality and morbidity from breast cancer.


    ACKNOWLEDGMENTS
 
This mammography validation study was funded by the Centers for Disease Control and Prevention (CDC), Division of Cancer Prevention and Control, through subcontracts to the Women’s CARE Study through an interagency agreement with the National Institute of Child Health and Human Development (NICHD). The Women’s CARE Study was funded by NICHD, with additional support from the National Cancer Institute, through contracts with Emory University, Atlanta, Georgia (N01-HD-3-3168); Fred Hutchinson Cancer Research Center, Seattle, Washington (N01-HD-2-3166); Karmanos Cancer Institute at Wayne State University, Detroit, Michigan (N01-HD-3-3174); the University of Pennsylvania (N01-HD-3-3176), Philadelphia, Pennsylvania; and the University of Southern California (N01-HD-3-3175), Los Angeles, California and through an interagency agreement with the CDC (Y01-HD-7022). CDC also contributed additional staff and computer support. General support through Surveillance, Epidemiology, and End Results (SEER) contracts N01-PC-67006 (Atlanta), N01-CN-65064 (Detroit), N01-PC-67010 (Los Angeles), and N01-CN-0532 (Seattle) is also acknowledged.

Special thanks are extended to Janet Ortiz, who coordinated data collection for the mammography study, and to Noemi Epstein, Julie Bamrick, Jane Sullivan-Halley, Brenda Rogers, Peter Briggs, and Janet Ortiz for their special efforts in obtaining facility data. Drs. Noel S. Weiss, Lars Holmberg, and W. Douglas Thompson served as the Scientific Advisory Board. Their advice and guidance were invaluable. The authors also thank all past and present members of the Women’s CARE Study team for their important contributions to this project.


    NOTES
 
Reprint requests to Dr. Sandra A. Norman, Center for Clinical Epidemiology and Biostatistics, 801 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104-6021 (e-mail: snorman{at}cceb.med.upenn.edu).


    REFERENCES

  1. Champion VL, Menon U, McQuillen D, et al. Validity of self-reported mammography in low-income African-American women. Am J Prev Med 1998;14:111–17.
  2. Degnan D, Harris R, Ranney J, et al. Measuring the use of mammography: two methods compared. Am J Public Health 1992;82:1386–8.
  3. Fulton-Kehoe DL, Burg MA, Lane DS. Are self-reported dates of mammography accurate? Public Health Rev 1993;20:233–40.
  4. Gordon NP, Hiatt R, Lampert DI. Concordance of self-reported data and medical record audit for six cancer screening procedures. J Natl Cancer Inst 1993;85:570–4.
  5. Hiatt RA, Perez-Stable EJ, Quesenberry CJ, et al. Agreement between self-reported early cancer detection practices and medical audits among Hispanic and non-Hispanic white health plan members in northern California. Prev Med 1995;24:278–85.
  6. King ES, Rimer BK, Trock B, et al. How valid are mammography self-reports? Am J Public Health 1990;80:1386–8.
  7. Lawrence VA, De Moor C, Glenn ME. Systematic differences in validity of self-reported mammography behavior: a problem for intergroup comparisons? Prev Med 1999;29:577–80.
  8. Paskett ED, Tatum CM, Mack DW, et al. Validation of self-reported breast and cervical cancer screening tests among low-income minority women. Cancer Epidemiol Biomarkers Prev 1996;5:721–6.
  9. Suarez L, Goldman DA, Weiss NS. Validity of Pap smear and mammogram self-reports in a low-income Hispanic population. Am J Prev Med 1995;11:94–8.
  10. Whitman S, Lacey L, Ansell D, et al. Do chart reviews and interviews provide the same information about breast and cervical cancer screening? Int J Epidemiol 1993;22:393–7.
  11. Zapka JG, Bigelow C, Hurley T, et al. Mammography use among sociodemographically diverse women: the accuracy of self-report. Am J Public Health 1996;86:1016–21.
  12. Newell SA, Girgis A, Sanson-Fisher RW, et al. The accuracy of self-reported health behaviors and risk factors relating to cancer and cardiovascular disease in the general population: a critical review. Am J Prev Med 1999;17:211–29.
  13. Tabar L, Fagerberg G, Chen HH, et al. Efficacy of breast cancer screening by age. Cancer 1995;75:2507–17.
  14. Shen Y, Zelen M. Screening sensitivity and sojourn time from breast cancer early detection clinical trials: mammograms and physical examinations. J Clin Oncol 2001;19:3490–9.
  15. Duffy SW, Day NE, Tabar L, et al. Markov models of breast tumor progression: some age-specific results. J Natl Cancer Inst Monogr 1997;22:93–7.
  16. Elwood JM, Cox B, Richardson AK. The effectiveness of breast cancer screening by mammography in younger women. Online J Curr Clin Trials 1993 Feb 25;Doc no. 32.
  17. Demissie K, Mills O, Rhoads G. Empirical comparison of the results of randomized controlled trials and case-control studies in evaluating the effectiveness of screening mammography. J Clin Epidemiol 1998;51:81–91.
  18. Marchbanks PA, McDonald JA, Wilson HG, et al. The NICHD Women’s Contraceptive and Reproductive Experiences Study: methods and operational results. Ann Epidemiol 2002;12:213–21.
  19. Weiss NS, Lazovich D. Case-control studies of screening efficacy: the use of persons newly diagnosed with cancer who later sustain an unfavorable outcome. Am J Epidemiol 1996;143:319–22.
  20. Thompson WD. Statistical analysis of case-control studies. Epidemiol Rev 1994;16:33–50.
  21. Last JM, ed. A dictionary of epidemiology. 4th ed. New York, NY: Oxford University Press, 2001.
  22. Houn F, Brown ML. Current practice of screening mammography in the United States: data from the National Survey of Mammography Facilities. Radiology 1994;190:209–15.
  23. Behavioral Risk Factor Surveillance System. Prevalence data, 2000. National Center for Chronic Disease Prevention and Health Promotion. (http://apps.nccd.cdc.gov/brfss/age.asp?cat=WH&yr=2000&qkey=1984&state=US).