Department of Obstetrics and Gynecology
Milwaukee Clinical Campus
University of Wisconsin Medical School
P.O. Box 342
Milwaukee, WI 53201-0342
It is heartening to have attention focused on epidemiologic studies that may lead to the identification of etiologic factors involved in fetal death. This sector of the continuum of perinatal morbidity and mortality has been largely ignored and receives only cursory recognition at most state public health agencies. The work of Bell et al. (1, 2) extends innovative methods for retrospective exposure assessment to the analysis of associations between pesticide exposure during pregnancy and the risk of fetal death.
Vital statistics databases were the primary source material for identification of fetal deaths, for the classification of fetal deaths by cause, and for the identification of controls for the comparison group. Although the body of literature on the quality of fetal death certificates for underlying cause of fetal death is not large, the results are quite consistent (3–5). Generally, these records are an unreliable source of data on the causes of fetal deaths, especially underidentifying and misclassifying birth defects and congenital anomalies. Research also shows that birth certificates have very low sensitivity for identifying birth defects (6, 7) and that data quality for data elements of interest in perinatal epidemiology on fetal death certificates requires improvement (8). Thus, the work of Bell et al. (1, 2) has potential problems both with failing to identify all cases of fetal deaths with the selected causes of interest and with potentially including in the control group some livebirths that actually had birth defects. Although the latter problem imparts a conservative bias to these analyses, the former is a major concern. It is especially troubling since, in 1984, the state of California had a comprehensive birth defects surveillance system with active case-finding methods for all livebirths and fetal deaths in 18 California counties, including one of those under study (9). If the pesticide use database were statewide, it might have been possible to focus the study on areas of the state for which more reliable diagnostic information on the outcomes of interest was available. These issues are not reviewed in the discussion sections of either paper, nor do the authors present any evidence of their own data quality analyses concerning the primary outcome measures. There is one paragraph discussing potential underascertainment of fetal deaths and the problem of misclassification of underlying cause of fetal death, but in the end little can be done to address this problem if fetal death certificates are the only source of case material.
Bell et al. advance the science of environmental exposure assessment for spontaneous abortion and fetal death. However, there is the larger concern that, as public health databases become increasingly available, these data will be used uncritically in epidemiologic analyses, leading in some cases to incorrect conclusions and perhaps to poorly devised public health programs and policies. I again issue a call to editors of peer-reviewed journals publishing studies that use vital statistics as primary sources: require authors either to document or otherwise evaluate the quality of their data for the primary research variables, or to assess comprehensively the potential impact of mismeasurement on their findings (10).
REFERENCES
Department of Epidemiology
School of Public Health
University of North Carolina
Chapel Hill, NC 27599-7400

Office of Environmental Health Hazard Assessment
California Environmental Protection Agency
Sacramento, CA 95812
Dr. Kirby (1) writes about our two recently published papers (2, 3) evaluating residential proximity to agricultural pesticide applications and risk of fetal death or death of a liveborn infant within 24 hours. In the first paper, we restricted cases to those with identified congenital anomalies (2); in the second, we restricted cases to all other causes, excluding congenital anomalies and causes not likely to have an environmental origin (e.g., birth trauma) (3). In both studies the control group was livebirths with no congenital defects noted on the birth certificates. Dr. Kirby (1) raises four issues regarding the use of vital statistics databases in these studies: 1) misclassification of cause of fetal death, 2) underascertainment of congenital anomalies resulting in their inclusion in our control group, 3) the quality of additional data obtained from these databases, and 4) nonuse of the California Birth Defects Monitoring Program registry to identify birth defects. A key question for our results is whether errors in the assignment of cause of death are random or associated with the exposures of interest.
With regard to misclassification of cause of fetal death, two studies provide insight. In a comparison with medical record information, the sensitivity of fetal death certificates for identifying deaths due to congenital anomalies was 60 percent, although fetal deaths attributed to congenital anomalies on the certificates were likely to be confirmed by the medical record (4). Among fetal deaths that were autopsied, a high percentage (82 percent) of congenital anomalies were reported on the fetal death certificates; however, 42 percent of the congenital anomalies reported on the certificates were not confirmed by autopsy (5).
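To make these figures concrete, consider a purely illustrative calculation (the two validation studies used different samples, so their estimates should not be combined). A sensitivity of 60 percent implies that

\[
1 - \text{sensitivity} = 1 - \frac{TP}{TP + FN} = 0.40,
\]

that is, roughly 40 percent of fetal deaths truly due to congenital anomalies would be missed by certificate-based case-finding, while a 42 percent nonconfirmation rate implies a positive predictive value of only about $1 - 0.42 = 0.58$ for certificate-reported anomalies.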
Regarding underascertainment of birth defects in livebirths, in a study of birth defect reporting in five California counties, only 19 percent of the defects found by the California Birth Defects Monitoring Program registry were reported on birth certificates (6). However, given that controls were a random sample of all livebirths, we would expect the prevalence of birth defects among controls to be small (<3 percent); therefore, any dilution of our control group resulting from this misclassification (low sensitivity) would be minimal. Although only 76 percent of the congenital anomalies reported on birth certificates were verified in the California Birth Defects Monitoring Program registry (6), these numbers do not pertain to the cases in our study, because we based diagnoses of congenital anomalies in livebirths on death certificates, not birth certificates.
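As a rough illustration (the prevalence figure below is assumed for the arithmetic only), take a true defect prevalence of $p = 0.03$ among livebirths and a birth certificate sensitivity of $s = 0.19$. The proportion of certificate-negative livebirths, that is, of the eligible control pool, who nonetheless have a defect is

\[
\frac{p(1 - s)}{1 - ps} = \frac{0.03 \times 0.81}{1 - 0.03 \times 0.19} \approx 0.024,
\]

a dilution of about 2.4 percent, which is too small to alter the exposure distribution of the control group appreciably.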
The magnitude and direction of any bias would depend on whether the overall exposure distribution by case status was affected. If the exposure distribution did differ for the unascertained or misclassified cases, this difference would have to be large enough to alter the exposure distribution observed for cases. We would expect little change in the exposure distribution of observed controls from misclassification of birth defects in livebirths. Therefore, in our first paper, if the exposure distribution of the misclassified cases was sufficiently different from that of those correctly classified, the resulting bias could be in either direction.
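This point can be expressed in notation introduced here solely for illustration (it does not appear in the original papers). Let $e_1$ and $e_2$ denote the exposure prevalences among correctly and incorrectly classified cases, $f$ the misclassified fraction of observed cases, and $e_0$ the exposure prevalence among controls. With the controls unaffected, the observed odds ratio is

\[
\widehat{OR} = \frac{\bar{e}/(1 - \bar{e})}{e_0/(1 - e_0)}, \qquad \bar{e} = (1 - f)\,e_1 + f\,e_2,
\]

so the bias is upward when $e_2 > e_1$ and downward when $e_2 < e_1$; that is, it can run in either direction.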
Judging from the validation studies cited above, both underascertainment and misclassification of cause of death were low when all other causes of death were combined. Thus, in our second paper, we would expect the impact of any bias resulting from misclassification of cause of death to be small. On the other hand, given that any association with environmental exposures would likely differ across the specific causes of fetal death, the use of one broad outcome category decreased our ability to uncover true biologic associations (2, 3).
Regarding the quality of additional data obtained from the birth and death certificates, we relied on questionnaire data whenever the two sources were discrepant, as we discussed previously. Finally, in 1984, nine of the 10 study counties were not yet included in the California Birth Defects Monitoring Program registry. Given that the primary objective of our research was to assess the association between pesticide exposure and fetal death, counties were selected for analysis based on their pesticide use rather than registry participation.
REFERENCES