Affiliations of authors: Departments of Radiology (DG, JHS, LAH, HER) and Biostatistics (HER), University of Pittsburgh, Pittsburgh, PA.
Correspondence to: David Gur, ScD, Imaging Research, Department of Radiology, University of Pittsburgh, 300 Halket St., Ste. 4200, Pittsburgh, PA 15213 (e-mail: gurd@upmc.edu)
The editorial (1) that accompanied our article (2) regarding the effects of computer-aided detection on our practice highlighted two limitations of our study: the possible impact of changes in the fraction of women undergoing repeat examinations and the possible underestimation of cancer rates. We wish to clarify these two issues as they relate to our study. First, ours was an observational study. We decided not to adjust for estimated expected values for either recall or detection rates. Both recall and cancer detection rates are generally expected to be quite similarly affected by the fraction of repeat examinations (3). Because we did not have complete information about all breast examinations for all women, namely, whether the women had undergone screening or other breast imaging procedures elsewhere, our accounting of repeat examinations refers only to those performed at our own institution. Changes in the fraction of repeat examinations could have resulted in a lower expected number of detected cancers without computer-aided detection during the second period of our study than what we observed. As with adjusting for recall rates (1), modeling the difference between expected and observed cancer detection rates could be used to estimate benefits associated with computer-aided detection. However, unlike recall rates, which are largely affected by the availability of previous mammograms for comparison during the interpretation (3), cancer detection rates can be affected by prior imaging or physical examinations during the several years preceding the screening mammogram in question. Hence, modeling-based adjustments of recall or cancer detection rates could be easily criticized. The fact that neither the recall rates nor the detection rates changed substantially after the introduction of computer-aided detection in our practice suggests that our results are quite relevant (without any adjustments), particularly for the relative comparison we presented.
Second, using established (standardized) databases (e.g., the Surveillance, Epidemiology, and End Results Program [SEER1]) as a source of outcome information is laudable, but it would not necessarily improve the accuracy of our study because over time, a woman could relocate, enroll in a different health insurance plan, or change her health care provider (e.g., possibly changing from one who participates in a specific surveillance program to one who does not). Hence, adjustments of cancer rates that are not based on complete information for all women are open to criticism. Moreover, databases are only as complete as the data entered into them. In our observational study, our approach of verifying cases in a consistent manner, both from the top down (i.e., from the detected cancers back to the latest screening examination that was recalled and ultimately led to a cancer detection) and from the bottom up (i.e., from the recalled examinations to the diagnosis of cancer), is perhaps the least biased approach to assessing cancer detection rates in a relative comparison.
Our study had several limitations, which we discussed (2). Despite these limitations, observational studies such as ours can be performed relatively quickly, at a fraction of the cost, and with fewer complexities than well-designed and appropriately executed prospective clinical studies (4). The most important limitation of our study is that we included only one (albeit large) practice with only one type of interpreting radiologist (i.e., in an academic practice). These limitations can be overcome by similar studies at other institutions, and we welcome additional results in this regard, whatever they may be.
NOTES
1 Editor's note: SEER is a set of geographically defined, population-based central cancer registries in the United States, operated by local nonprofit organizations under contract to the National Cancer Institute (NCI). Registry data are submitted electronically without personal identifiers to the NCI on a biannual basis, and the NCI makes the data available to the public for scientific research.
REFERENCES
1 Elmore JG, Carney PA. Computer-aided detection of breast cancer: has promise outstripped performance? J Natl Cancer Inst 2004;96:162-3.
2 Gur D, Sumkin JH, Rockette HE, Ganott M, Hakim C, Hardesty L, et al. Changes in breast cancer detection and mammography recall rates after the introduction of a computer-aided detection system. J Natl Cancer Inst 2004;96:185-90.
3 Frankel SD, Sickles EA, Curpen BN, Sollitto RA, Ominsky SH, Galvin HB. Initial versus subsequent screening mammography: comparison of findings and their prognostic significance. AJR Am J Roentgenol 1995;164:1107-9.
4 National Cancer Institute. First major trial of digital mammography launched. News release. Available at: http://www.cancer.gov/newscenter/acrin. [Last accessed: 02/02/04.]