12 Queen's Park Cres. West, Room 401 C, Toronto, ON M5S 1A8, Canada. E-mail: cornelia.baines@utoronto.ca
Sirs,

In their qualitative assessment of screening trials, Freedman et al.1 demonstrate that impressive author affiliations and the peer review process do not guarantee accuracy. On page 50, the reader will find the following persuasive statement about the Canadian National Breast Screening Study (CNBSS): 'Centre radiologists only agreed with the reference radiologist 30–50% of the time.' What a dreadful study that must have been! Quite wrong.
The numbers they cite are clearly reported as kappa statistics, which indicate how much of the agreement observed was agreement beyond that which might occur by chance.2 In fact, Table 2 (which they disregarded) from our publication reveals very clearly that there was agreement between centre radiologists and the reference radiologist 85.6% of the time with respect to cancer cases and 75.8% of the time for mammograms from women who did not have cancer. Do these authors truly not understand the difference between inter-observer agreement and kappa statistics?
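For readers unsure of the distinction: kappa discounts the agreement that two readers would reach by chance alone. Writing p_o for the observed proportion of agreement and p_e for the proportion expected by chance, the standard definition is

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

With purely illustrative numbers (not taken from our data), p_o = 0.85 and p_e = 0.70 give \kappa = 0.15/0.30 = 0.50: raw agreement of 85% is entirely compatible with a kappa near 0.5, because the two quantities measure different things.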
They go on to condemn the Canadian study for false negative mammograms: 'Observer error and technical problems led to delayed detection in 22–35% of cancers.' Our paper reports that the overall false negative rate was 24%: 22% for screen cancers and 35% for interval cancers; by not differentiating between screen and interval cancers, a false impression is conveyed. Contrast those rates with the words of the radiologist D Kopans, who wrote of a clinical study that 'this review confirms another well known phenomenon, namely the failure of even expert observers to perceive all abnormalities.'3 He continues that a radiological review discovered that 54% of cancer cases had been read as negative. Although this is more than twice the false negative rate in the CNBSS, somehow Dr Kopans found 54% acceptable. Moreover, the figure was inflated because the review was not blind.
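The overall rate is simply a case-weighted average of the two components. With hypothetical weights of 85% screen-detected and 15% interval cancers (illustrative only; the actual case counts are in the cited paper2), the arithmetic is

\[ 0.85 \times 22\% + 0.15 \times 35\% \approx 24\% \]

so quoting '22–35%' without the weights overstates the error rate that applied to the typical case.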
Page 51 begins with an attribution to me which is uninterpretable ('Baines notes that a comparison of advanced cancers detected by MA + PE in treatment to those detected by PE in control (19 to 5) is biased.') and which cannot be identified on page 3294 as is claimed. As for the allegations about flawed mammography and subverted randomization, these are stale, unwarranted, and have been responded to previously.4
Not only did the authors present false data but they also based their evaluation on CNBSS results published 12 years ago rather than on the results published in 2000 and 2002 (their Table II). This must compel any intelligent reader to question the validity of their conclusions. Unfortunately, it is not the first time, nor will it be the last, that false data about the CNBSS have been published.4
The authors acknowledge spending much time in consultation with Dr L Tabar of the Two-County trial, presumably to help them understand Swedish data. It is unfortunate that they did not make similar efforts to understand Canadian data.
References
1 Freedman DA, Petitti DB, Robins JM. On the efficacy of screening for breast cancer. Int J Epidemiol 2004;33:43–55.
2 Baines CJ, McFarlane DV, Miller AB. The role of the reference radiologist. Estimates of inter-observer agreement and potential delay in cancer detection in the National Breast Screening Study. Invest Radiol 1990;25:971–76.
3 Kopans DB. Detecting breast cancer not visible by mammography (Editorial). J Natl Cancer Inst 1992;84:745–47.
4 Baines CJ. The Canadian National Breast Screening Study: a perspective on criticisms. Ann Intern Med 1994;120:326–34.