Correspondence to: Timothy R. Church, Ph.D., Division of Environmental and Occupational Health, University of Minnesota School of Public Health, MMC 807, 420 Delaware St. SE, Minneapolis, MN 55455 (e-mail: trc@cccs.umn.edu).
George Box is famously quoted as saying that ". . . all models are wrong but some models are useful" (1). Simulation models contribute to our knowledge of complex systems by letting us see what a system will do under specific circumstances, based on the assumed rules by which the system functions. In spite of the simplifications they must make in representing the underlying reality, simulation models can provide insight not only when used to explore behavior under previously unobserved circumstances but also when used to find the conditions under which the system will behave in a prescribed or observed way.
Draisma et al. (2) have used MIcrosimulation for SCreening ANalysis (MISCAN) to model screening for prostate cancer in the latter way, that is, to estimate unobserved parameters. MISCAN is a Monte Carlo simulation (3) and therefore a particularly flexible and valuable approach to modeling cancer screening in a large population. The flexibility of this model comes from its large number of parameters and its ability to use arbitrary distributions for characteristics of the cancers and the timing of their evolution. For example, analytical models are often constrained by what can be represented by mathematically tractable probability distributions, that is, distributions that can be easily differentiated, integrated, and algebraically manipulated, either alone or in combination with other distributions. By contrast, the MISCAN model can use any distribution that can be computed, regardless of its tractability. This characteristic opens up a vista of possibilities for representing, say, the preclinical duration of a cancer: the time from its initial detectability by a diagnostic test to its clinical surfacing due to symptoms in the affected individual.
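To make this flexibility concrete, consider a minimal Monte Carlo sketch in Python. It is not MISCAN itself, and the mixture components and parameter values are purely hypothetical; the point is only that a simulation can draw preclinical durations from a distribution with no convenient closed form:

    import random

    def preclinical_duration():
        # Draw one preclinical duration (in years) from a hypothetical,
        # analytically awkward mixture of fast- and slow-growing tumors.
        if random.random() < 0.3:                   # fast-growing component
            return random.weibullvariate(2.0, 1.5)  # scale 2 y, shape 1.5
        else:                                       # slow-growing component
            return random.lognormvariate(2.0, 0.7)  # median about 7.4 y

    # Monte Carlo estimate of the mean sojourn time: the distribution need
    # only be computable, not mathematically tractable.
    draws = [preclinical_duration() for _ in range(100_000)]
    print(sum(draws) / len(draws))

A simulation of this kind sidesteps the algebra entirely; estimating any functional of the distribution reduces to sampling and averaging.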
Screening for prostate cancer by means of prostate-specific antigen (PSA) testing is widely used and is currently being studied in large randomized trials in Europe and the United States. No one yet knows the utility of PSA screening, and much debate focuses on the potential for overdiagnosis of prostate cancer. Draisma et al. use the adaptability of the MISCAN model to explore the possible extent of overdiagnosis in PSA screening for prostate cancer, a critical determinant of the cost-effectiveness of such screening when coupled with its attendant mortality and morbidity effects. As the authors note, similar models have been brought to bear on questions regarding screening for breast and colon cancer. The reasonableness of the choices the authors have made in simulating prostate cancer screening makes the model particularly appealing, because it is a straightforward representation of generally accepted principles regarding the development and detection of prostate cancer. For the most part, the authors have used robust methods for specifying and computing the model and have brought a broad array of observable data to bear on the problem. The results of the simulations are also very appealing, because they support the generally accepted and reasonable view that PSA testing leads to substantial overdiagnosis and that the effect increases with age.
However, the flexibility of the MISCAN model comes at a price. The number of estimated parameters is 37, and the values of these parameters were chosen by comparing the output of the model with observed data from the system being modeled (here, the PSA screening of a randomized trial population). In the MISCAN approach, parameter values are adjusted systematically until the simulated results are as close as possible to the data. However, if the number of parameters in the model exceeds the number of independent outputs available to calibrate them, then the parameter values will be under-determined; that is, many different sets of parameter values can yield the same outputs, analogous to fitting a straight line to a single data point. Although Draisma et al. are clear about the number of outcomes used to fit the model, it is difficult to ascertain the degree of dependence between them and, therefore, whether there was under-determination. The authors recognize that some of the observations are marginally constrained when they state that the total degrees of freedom for the chi-square statistic is 42, which is 15 fewer than the 57 observations used to fit the model. However, this adjustment to the degrees of freedom takes into account only marginal constraints and ignores potential structural associations between, for example, the results of round 1 and round 2 of the screening or between years for interval cancers.
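A toy calibration example in Python (purely illustrative; the numbers have nothing to do with the authors' model) shows how under-determination arises when two parameters must be fit to a single output:

    target = 10           # a single observed calibration output

    def model_output(a, b):
        # A toy "model" whose output depends only on the sum a + b.
        return a + b

    # Exhaustive search over integer parameter pairs: every pair on the
    # line a + b = 10 reproduces the target exactly, so the data cannot
    # distinguish among them.
    fits = [(a, b) for a in range(11) for b in range(11)
            if model_output(a, b) == target]
    print(fits)   # (0, 10), (1, 9), ..., (10, 0): eleven equally good fits

In a 37-parameter model, the analogous question is whether the 57 fitted observations supply enough independent information to pin all of the parameters down.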
These concerns are of more than academic interest. Not only do the P values and confidence intervals depend on the resolution of these issues, but more than one set of estimated overdiagnosis rates and lead-time values could also produce the same fit as that observed in this analysis. Without an accurate assessment of the uncertainty of the estimated lead time and overdiagnosis, the model is rendered less useful, and for now we must take the estimates in the Draisma et al. article as tentative and provisional. The results presented therein are nonetheless a promising beginning: an ambitious and helpful first step toward model-based estimates of these important parameters. Refinement and further validation of such models could eventually lead to a clearer understanding of the role of PSA screening in the struggle to reduce or eliminate the health impact of prostate cancer.
REFERENCES
1 Box GE. Robustness in the strategy of scientific model building. In: Launer RL, Wilkinson GN, editors. Robustness in statistics. New York (NY): Academic Press; 1979. p. 202.
2 Draisma G, Boer R, Otto SJ, van der Cruijsen IW, Damhuis RA, Schröder FH, et al. Lead times and overdetection due to prostate-specific antigen screening: estimates from the European Randomized Study of Screening for Prostate Cancer. J Natl Cancer Inst 2003;95:868–78.
3 Hammersley J, Handscomb D. Monte Carlo methods. London (U.K.): Chapman and Hall; 1964.