Harborview Injury Prevention and Research Center 325 Ninth Avenue Seattle, WA 98104
Department of Epidemiology School of Public Health and Community Medicine University of Washington Seattle, WA 98195
First, the authors used conditional logistic regression to estimate the relative risk of death associated with exposure. Driver death in a traffic crash is rare; in 1998, only about 0.2 percent of drivers involved in crashes in the United States were killed (2). Odds ratios will usually approximate risk ratios when the outcome is rare in the entire study population. However, the odds ratio will not approximate the relative risk well if a subset of crashes with high risk of death accounts for a substantial portion of all deaths.
A hypothetical example may help us make our point. We created data for 10 million driver pairs who crashed head-on at slow speed. The absolute risk of death was 0.01 for unbelted drivers, and the relative risk of death associated with use of a seat belt was 0.4. We generated data in such a way that many pairs of drivers were discordant with regard to seat belt use and approximately two thirds of drivers were belted. We then generated a second set of 100,000 driver pairs exactly like the first, except that these drivers crashed at a fast speed; the absolute risk of death among the unbelted drivers was 0.8. When we combined the two data sets, the proportion of drivers that died was 0.0108, which is still small. In the entire set of crashes, the risk of death among belted drivers was 0.007129, compared with 0.017822 among those not belted (relative risk = 0.40). However, when we analyzed the data using conditional logistic regression, accounting for the matched pairs, the resulting odds ratio was 0.30. (When we analyzed the data using ordinary logistic regression, adjusting for speed, the odds ratio was also 0.30.)
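The arithmetic of this example can be checked directly. The sketch below is a simplification of our data-generating scheme, assuming drivers are belted independently with probability two thirds in both strata, and it uses a Mantel-Haenszel odds ratio as a stand-in for the conditional logistic estimate (the two agree closely in this setting):

```python
# Hypothetical two-speed crash data: crude risks and a stratified
# (Mantel-Haenszel) odds ratio. Assumes belt use with probability 2/3
# in both strata; a simplification of the scheme described in the text.

strata = [
    # (drivers, risk of death if belted, risk of death if unbelted)
    (20_000_000, 0.004, 0.010),  # slow crashes: relative risk 0.4
    (200_000,    0.320, 0.800),  # fast crashes: relative risk 0.4
]
p_belt = 2 / 3

belted_dead = belted_alive = unbelted_dead = unbelted_alive = 0.0
mh_num = mh_den = 0.0
for n, risk_b, risk_u in strata:
    a = n * p_belt * risk_b              # belted, dead
    b = n * p_belt * (1 - risk_b)        # belted, alive
    c = n * (1 - p_belt) * risk_u        # unbelted, dead
    d = n * (1 - p_belt) * (1 - risk_u)  # unbelted, alive
    belted_dead += a; belted_alive += b
    unbelted_dead += c; unbelted_alive += d
    mh_num += a * d / n                  # Mantel-Haenszel components
    mh_den += b * c / n

risk_belted = belted_dead / (belted_dead + belted_alive)
risk_unbelted = unbelted_dead / (unbelted_dead + unbelted_alive)
crude_rr = risk_belted / risk_unbelted   # pooled relative risk
mh_or = mh_num / mh_den                  # pooled odds ratio

print(round(risk_belted, 6), round(risk_unbelted, 6))
print(round(crude_rr, 2), round(mh_or, 2))
```

Combining strata with very different absolute risks is what pushes the pooled odds ratio (about 0.30) away from the common relative risk (0.40), even though the overall outcome remains rare.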
If some head-on crashes have a high risk of death, the result will be that among the driver pairs with at least one dead driver, a noteworthy proportion will consist of pairs with two dead drivers. In our hypothetical data, this proportion was 12.9 percent, and in Crandall et al.'s study it was 14.5 percent (1). This problem can be stated in another way. In a matched-pair case-control study, we typically match each case with a control. However, in the study by Crandall et al., 6,573 cases were matched to a control, while 2,232 cases were members of 1,116 pairs in which each case was matched to another case (1). Matching in this way can still yield the correct odds ratio for the association between exposure and outcome, but when the matching scheme often matches cases to cases, investigators should be aware that their odds ratio estimates will be further from 1.0 than the desired relative risks.
The second source of possible bias is missing data. Crandall et al. used data from the Fatality Analysis Reporting System (FARS) for crashes occurring from 1992 through 1997 (1). Unfortunately, records in FARS are often missing information on seat belts and air bags; the problem is substantial for both variables, but especially for air bags. During the period of Crandall et al.'s study, air bag information in FARS was coded as deployed, not deployed, or "unknown or not applicable." Thus, drivers without an air bag and drivers with missing information regarding an air bag (and its deployment) were assigned the same value. In 1997 FARS data, there were 1,280 pairs of passenger cars involved in head-on crashes (calculated from publicly available data). Only 7.5 percent of the driver pairs had jointly known information regarding air bag deployment (table 1). It appears to us that Crandall et al. assigned all drivers with missing air bag data to the category of nondeployment. Since some of these drivers may have had an air bag which deployed, this may have resulted in a substantial amount of misclassification. If the misclassification was nondifferential (the same for dead drivers and living drivers), this will have tended to bias the odds ratios toward 1.0. Differential misclassification could cause bias in either direction. (The coding of air bag data in FARS was improved in 1998, but missing information remains a problem.)
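The direction of this bias under nondifferential misclassification can be illustrated with hypothetical counts (the numbers below are invented for illustration, not taken from FARS):

```python
# Hypothetical 2x2 illustration (invented counts, not FARS data):
# coding 40 percent of deployed air bags as "not deployed" in BOTH
# outcome groups pulls the observed odds ratio toward 1.0.

def odds_ratio(a, b, c, d):
    """OR for exposure (deployed) vs outcome (dead): (a/b) / (c/d)."""
    return (a * d) / (b * c)

# True classification: 1,000 dead and 1,000 living drivers.
dead_deployed, dead_not = 300, 700
alive_deployed, alive_not = 500, 500
true_or = odds_ratio(dead_deployed, dead_not, alive_deployed, alive_not)

# Nondifferential error: sensitivity 0.6, specificity 1.0,
# applied identically to dead and living drivers.
sens = 0.6
obs_or = odds_ratio(dead_deployed * sens,
                    dead_not + dead_deployed * (1 - sens),
                    alive_deployed * sens,
                    alive_not + alive_deployed * (1 - sens))

print(round(true_or, 3), round(obs_or, 3))  # observed OR is closer to 1.0
```

Differential error (a sensitivity that differs between dead and living drivers) could move the observed estimate in either direction.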
Department of Emergency Medicine School of Medicine University of New Mexico Health Sciences Center Albuquerque, NM 87131-5246
Department of Pediatrics Intermountain Injury Control Research Center University of Utah Salt Lake City, UT 84108-1226
We used a matched-pairs design to calculate the relative odds of death for combinations of seat belt and air bag use, based on Fatality Analysis Reporting System (FARS) data. We agree with Cummings and Weiss that the relative risk lies somewhere closer to 1.0 than our odds ratio estimates, as is always true when relative odds are used to approximate relative risk. This "bias" is an inherent difference between the two measures of association, and it becomes relevant only when one attempts to substitute relative odds for relative risk.
We agree that missing data are a significant problem in FARS. One common criticism of FARS is that data are entirely missing for crashes that do not involve a death. Our matched-pairs design avoided this limitation by relying solely on pairs in which both outcome and exposure differed between the two drivers, which required that exactly one of the drivers had died in the crash. Since mortality assessment is probably complete and accurate, the matched-pairs odds ratio estimate will be biased only when air bag exposure assessment is incorrect. It is unclear whether FARS investigators are more likely to over- or underassess air bag deployment as a result of a fatal (or nonfatal) outcome.
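For readers unfamiliar with the estimator, a minimal sketch with invented pair counts (not our published data) shows how the matched-pairs odds ratio depends only on the discordant pairs:

```python
# Matched-pairs (conditional / McNemar-type) odds ratio with invented
# counts. Each pair has one dead driver (case) and one living driver
# (control) from the same crash; exposure is air bag deployment.

pairs_dead_exposed_only = 120   # dead driver deployed, living driver not
pairs_alive_exposed_only = 200  # living driver deployed, dead driver not
pairs_both_exposed = 80         # concordant pairs contribute no
pairs_neither_exposed = 600     # information to the estimate

# The conditional odds ratio is the ratio of the discordant-pair counts.
matched_or = pairs_dead_exposed_only / pairs_alive_exposed_only
print(round(matched_or, 2))  # 0.6
```

Because concordant pairs drop out of the estimate, accurate exposure classification within the discordant pairs is what matters most.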
Our decision to measure air bag exposure as "air bag deployed" versus "air bag present" was partly a response to missing data for the air bag variable in FARS, but it was also due to the incidence of failed air bag deployment. While several authors have decoded vehicle identification numbers to determine the presence of an air bag, this method does not account for the substantial percentage of vehicles equipped with air bags that fail to deploy for a variety of reasons (including crash mechanics that do not trigger deployment, air bag mechanical failure, and an occupant's decision to use a manual cutoff switch). For example, when air bag data were available for both drivers in Cummings and Weiss' data (1), 7.3 percent of these air bags did not deploy. The best method of measuring air bag effectiveness is to first ensure that the vehicle is equipped with an air bag (as, practically speaking, all new passenger vehicles are) and then determine whether or not the air bag actually deployed.
Air bag deployment is quite obvious upon cursory inspection of the crash vehicle, and it does not change after the crash has occurred. Seat belt use carries a greater potential for biased observation: belt use commonly changes after a crash, and not wearing a seat belt is often against the law. Neither factor is relevant for air bag deployment.
Recent actions by the federal government and motor vehicle manufacturers to depower air bags, or to incorporate so-called "smart" air bag technologies in response to air bag-related fatalities, deserve continued study. Our best understanding of restraint system effectiveness will come from improved real-world assessment of the specific crash mechanics, circumstances, and forces that lead to a crash and to air bag deployment. Knowing the details of how and why (or why not) air bags deploy will be essential to estimating their effectiveness.