CORRESPONDENCE

RESPONSE: Re: Hutchinson Smoking Prevention Project: Long-Term Randomized Trial in School-Based Tobacco Use Prevention—Results on Smoking

Richard R. Clayton, F. Douglas Scutchfield, Steven W. Wyatt

Affiliations of authors: Kentucky Prevention Research Center and Kentucky School of Public Health, University of Kentucky, Lexington.

Correspondence to: Richard R. Clayton, Ph.D., Kentucky School of Public Health, University of Kentucky, 2365 Harrodsburg Rd., Suite B100, Lexington, KY 40504 (e-mail: clayton@pop.uky.edu).

Bliss interprets our editorial as saying that the higher attrition rate in the study by Botvin et al. (1) [39%, versus 7% in the study by Peterson et al. (2)] makes Botvin et al.'s results questionable. Although he does note, correctly, that Botvin et al.'s statistical analysis showed that there were no differential effects related to attrition in the study groups, he missed our point. Our point was that the Peterson et al. study represents a new gold standard in prevention science with regard to the implementation of a randomized trial, one index of which is the difference in attrition rates.

Bliss then offers another explanation as to why Botvin et al. (1) found statistically significant reductions in several measures of smoking—i.e., that their instructional materials and techniques were more effective than those used by Peterson et al. (2). Although this is certainly a plausible hypothesis, it seems unlikely, for several reasons. First, the Hutchinson Smoking Prevention Project (HSPP) curriculum started with children when they were younger [3rd grade versus 7th grade in Botvin et al. (1)]. Second, it embodied in each year all 15 essential elements of school-based, curriculum-driven smoking prevention recommended by the National Cancer Institute/Centers for Disease Control and Prevention. Third, because the HSPP program was delivered from the 3rd through the 10th grades, it had considerably more opportunities for reinforcement of lesson materials than the curriculum used by Botvin et al., which had lessons in the 7th through the 9th grades only. Finally, prevention scientists generally agree that prevention curriculum materials need to be delivered interactively rather than in a didactic fashion, and both programs were interactive. Other explanations for the different findings of the two studies must therefore be sought.

One possible explanation was the basis of the comment by Cameron et al. (3), who call attention to results from a study they conducted. A first look at the data from a main-effects perspective revealed no statistically significant differences between intervention and comparison conditions at the end of grade 8, after pooling data within conditions. These authors report that a more fine-grained analysis looking for moderator effects revealed wide variation in smoking norms across school settings. By examining the interaction between intervention and school risk, the authors found that the intervention had a substantial impact in reducing smoking in high-risk schools but not in other schools. Students are nested in the environmental context of the school they attend, and there is good reason to believe that schools differ from each other on a number of dimensions, most of which have been largely ignored in the school-based curriculum-driven part of prevention science.

In our editorial, we noted that one of the most interesting findings from the HSPP was the substantial difference in the prevalence of daily smoking within conditions. For example, among 12th-grade females in the 20 school districts in the control condition, daily smoking ranged from 0% to 41.9%. If there is such heterogeneity across districts even within one condition, heterogeneity must be even greater across the 72 schools in the HSPP study and the 56 schools in the study by Botvin et al. (1). Heterogeneity at the macro level, like individual differences, is the norm, not the exception, and must be taken into account. We believe the explanation offered by Cameron et al. provides an important clue for future prevention science and programming efforts.

Finally, we agree with both Bliss and Cameron et al. that policy questions about prevention are too important to be decided or changed on the basis of any one study's results. The findings from the HSPP study should, however, be seen as an opportunity to begin addressing major public policy questions. For example, are the marginal effects of school-based, curriculum-driven prevention programming large enough to justify taking that many hours away from more traditional academic programming? What and how strong are the effects of prevention programming on academic achievement? We would argue that, if prevention programs do not have positive effects on the academic purpose of the schools, there is reason to question their place in schools.

REFERENCES

1 Botvin GJ, Baker E, Dusenbury L, Botvin EM, Diaz T. Long-term follow-up results of a randomized drug abuse prevention trial in a white middle-class population. JAMA 1995;273:1106–12.

2 Peterson AV, Kealey KA, Mann SL, Marek PM. Hutchinson Smoking Prevention Project: long-term randomized trial in school-based tobacco use prevention—results on smoking. J Natl Cancer Inst 2000;92:1979–91.

3 Cameron R, Brown KS, Best AJ, Pelkman CL, Madill CL, Manske SR, et al. Effectiveness of a social influences smoking prevention program as a function of provider type, training method, and school risk. Am J Public Health 1999;89:1827–31.
