Where to Go for Practical Advice

Dale P. Sandler

From the Epidemiology Branch—MD A3-05, National Institute of Environmental Health Sciences, P. O. Box 12233/111 TW Alexander Drive, Research Triangle Park, NC 27709 (e-mail: sandler@niehs.nih.gov).


    INTRODUCTION
 
With this issue of the Journal, we are introducing a new category of papers on the practice of epidemiology. We see this as a forum for sharing information on practical issues in the design, implementation, and analysis of studies as well as an opportunity to bring knowledge and skills from other disciplines into epidemiology. Such papers can be found in the Journal, but, for several reasons, they appear relatively infrequently.

Through our training programs and our publications, we tend to underplay the effort required to conduct a high-quality, valid study. We focus on technologic innovations, such as the ability to characterize gene polymorphisms, on new and complex statistical techniques, and (to a lesser extent) on understanding the biological basis of the diseases we study. Insufficient attention is paid, for example, to the quality of our exposure assessment tools, including questionnaires (1, 2). With notable exceptions (e.g., the book by Armstrong et al. (3)), our textbooks do not instruct us on how to collect high-quality data, although they do warn of biases that may be introduced by such factors as misclassification or poor response rates. We lament the declining response rates for epidemiologic surveys but provide few opportunities for sharing ideas for improving response or understanding the implications of poor response (4, 5).

On the other hand, those who try to publish papers on the practice of epidemiology often find it an uphill battle. Funding for methods development or validation is inadequate. For those who manage to do this kind of work, publication may be difficult. Space limitations in journals can make it hard for authors to provide enough detail for readers to fully evaluate the study methods or to carry out similar research. And, yes, editors tend to favor papers that link exposure to disease over those on the quality of questionnaires or field methods.

Opportunities for sharing study methods should increase considerably with the introduction of electronic publishing. The Journal has already issued a call for providing questionnaires through individual websites and even the Journal's website (2). Presumably, other methodological details could also be made available in this way.

In the absence of a comprehensive literature, many epidemiologists find themselves in the position of having to figure out on their own the best methods to improve response rates, collect specific samples, or measure exposures without the benefit of solutions that others have already developed. We are left to call friends and colleagues and haphazardly assemble information, which we, in turn, do not routinely make available to others unless asked.

From time to time, or regularly, depending on the level of interest and the quality of papers we receive, the Journal would like to publish papers on practical topics that we believe epidemiologists would find useful. Such papers might include the validation of questionnaires or other exposure assessment methods, the development and validation of scales for assessing exposure or outcome, direct comparison of different methods of data collection or exposure assessment, novel approaches to survey design or implementation, "how-to guides" for collecting specific kinds of data or on how (or why) to implement new study designs or carry out complex statistical analyses, and commentaries on the relative merits or limitations of particular methods in epidemiology or biostatistics.

The paper by Metzger et al. (6) in this issue fits the bill in two ways. It describes a technologic advance in computer-assisted interviewing that may not yet be well known or widely available. The paper also deals with the important issue of how best to obtain candid and truthful answers to sensitive questions on questionnaires. The authors evaluated the feasibility and acceptability of using audio computer-assisted self-interviews and demonstrated that risky behaviors were reported more frequently by those randomized to such interviews than by those assigned to in-person interviews. Because the multisite study was large and randomized and included a wide range of sociodemographic groups, the results are likely to generalize to other populations.

The paper by Sesso et al. (7), also in this issue, describes a novel approach to ascertaining the vital status of study participants. Data from Social Security Administration death files accessible on the World Wide Web proved to be an efficient and cost-effective means of identifying deaths among a cohort of men. Limitations on the number of potential matching variables, the lack of information on causes of death, and the poor results for women suggest, however, that this approach will not entirely replace more traditional methods, such as National Death Index searches. Even so, readers will be glad to know that such a resource exists.

We can point to other specific papers of the sort we would like to see. For example, the paper by Austin et al. (8) on the collection of biological samples for studies of gene-environment interactions in cardiovascular disease studies is a useful instructional guide. The commentaries by Maclure and Willett on the uses and abuses of the kappa statistic (9) and by Maclure and Greenland on tests for trend (10) are practical guides for analyzing data and avoiding common pitfalls. Weinberg and Sandler's paper on randomized recruitment (11) was an attempt to bring a method already published in the biostatistical literature to the attention of epidemiologists who might not be regular readers of those journals, while at the same time providing new insights. Other examples from recent Journal issues include papers on using population rosters to select controls (12), comparing cancer follow-up using active and passive approaches (13), and evaluating the reliability of self-reported data on past levels of physical activity (14).
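To make concrete the kind of pitfall such commentaries address, consider one well-known property of the kappa statistic: because kappa corrects observed agreement for the agreement expected from the marginal totals, two raters with identical percent agreement can receive very different kappa values when the prevalence of the trait differs. The sketch below is our own illustration with invented numbers, not an example drawn from the cited commentary.

```python
def cohen_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]],
    where a and d are the agreeing cells."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_obs = (a + d) / n
    # chance-expected agreement, computed from the row/column margins
    p_yes = ((a + b) / n) * ((a + c) / n)
    p_no = ((c + d) / n) * ((b + d) / n)
    p_exp = p_yes + p_no
    return (p_obs - p_exp) / (1 - p_exp)

# Both tables show 90 percent observed agreement between two raters.
balanced = [[45, 5], [5, 45]]  # trait prevalence near 50 percent
skewed = [[85, 5], [5, 5]]     # trait prevalence near 90 percent

print(round(cohen_kappa(balanced), 2))  # kappa is about 0.80
print(round(cohen_kappa(skewed), 2))    # kappa is about 0.44
```

The identical observed agreement yields a markedly lower kappa in the high-prevalence table, which is why kappa values from studies with different prevalences cannot be compared directly.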

There are many other topics that merit discussion. For example, there is little information on how to actually analyze the data from case-cohort studies and only one commercially available statistical software package to do this (15). Another area in need of work is alternative approaches to the collection of biological and environmental samples in studies that are too large or spread out for in-person collection. Harty et al. (16) evaluated self-collection of buccal cell samples for DNA analysis. With dozens (if not hundreds) of such studies being planned, it would be useful to know if buccal cell samples can successfully be collected by participants without a researcher present.
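The basic idea behind case-cohort analysis can be sketched briefly: all cases are retained, and the non-case members of a random subcohort are up-weighted by the inverse of the sampling fraction so that they stand in for the full cohort. The toy example below uses invented numbers and a crude weighted-risk calculation; it is a schematic of the weighting idea only, not the pseudo-likelihood estimation actually used for case-cohort data.

```python
SAMPLING_FRACTION = 0.02  # hypothetical: 2 percent of the cohort sampled

def weighted_risk(records, exposed):
    """Weighted disease risk among the exposed (or unexposed)."""
    cases = total = 0.0
    for r in records:
        if r["exposed"] != exposed:
            continue
        if r["case"]:
            w = 1.0                      # every case is included
        elif r["subcohort"]:
            w = 1.0 / SAMPLING_FRACTION  # stands in for 1/f cohort members
        else:
            continue                     # not sampled: contributes nothing
        total += w
        if r["case"]:
            cases += w
    return cases / total

# Invented data: 30 cases plus a 20-person subcohort of non-cases.
records = (
      [{"case": True,  "subcohort": False, "exposed": True}]  * 20
    + [{"case": True,  "subcohort": False, "exposed": False}] * 10
    + [{"case": False, "subcohort": True,  "exposed": True}]  * 8
    + [{"case": False, "subcohort": True,  "exposed": False}] * 12
)

risk_ratio = weighted_risk(records, True) / weighted_risk(records, False)
print(round(risk_ratio, 2))  # about 2.9
```

A real analysis must also handle cases that fall inside the subcohort and use robust variance estimation, which is precisely the kind of practical detail the proposed papers could spell out.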

In evaluating papers for this new feature, we will not abandon the usual requirements of the Journal. This is not a call for authors to dredge up marginal data from old studies or to begin submitting incidental "validation studies" based on inadequate sample sizes or inappropriate designs. Work must be of high quality and represent a substantial contribution to the literature. Methods must be appropriate, and conclusions must be valid. Results must be generalizable beyond the specifics of the individual study involved. Furthermore, the papers must be judged to provide information of interest to a large number of readers of the Journal. The sound underpinnings of any study are of paramount importance. With this new feature, the Journal hopes to encourage dialogue that will lead to improved methods and more valid study results.


    NOTES
 
Reprint requests to Dr. Sandler at this address.


    REFERENCES
 

  1. Olsen J, IEA European Questionnaire Group. Epidemiology deserves better questionnaires. (Abstract). Int J Epidemiol 1998;27:935.
  2. Wilcox AJ. The quest for better questionnaires. Am J Epidemiol 1999;150:1261–2.
  3. Armstrong BK, White E, Saracci R. Principles of exposure measurement in epidemiology. Monographs on epidemiology and biostatistics. Vol. 21. New York, NY: Oxford University Press, 1995.
  4. Slattery ML, Edwards SL, Caan BJ, et al. Response rates among control subjects in case-control studies. Ann Epidemiol 1995;5:245–9.
  5. Hartge P. Raising response rates: getting to yes. Epidemiology 1999;10:105–7.
  6. Metzger DS, Koblin B, Turner C, et al. Randomized controlled trial of audio computer-assisted self-interviewing: utility and acceptability in longitudinal studies. Am J Epidemiol 2000;152:99–106.
  7. Sesso HD, Paffenbarger RS, Lee I-M. Comparison of National Death Index and World Wide Web death searches. Am J Epidemiol 2000;152:107–11.
  8. Austin MA, Ordovas JM, Eckfeldt JH, et al. Guidelines of the National Heart, Lung, and Blood Institute Working Group on Blood Drawing, Processing, and Storage for genetic studies. Am J Epidemiol 1996;144:437–41.
  9. Maclure M, Willett WC. Misinterpretation and misuse of the kappa statistic. Am J Epidemiol 1987;126:161–9.
  10. Maclure M, Greenland S. Tests for trend and dose response: misinterpretations and alternatives. Am J Epidemiol 1992;135:96–104.
  11. Weinberg CR, Sandler DP. Randomized recruitment in case-control studies. Am J Epidemiol 1991;134:421–32.
  12. Bohlke K, Harlow BL, Cramer DW, et al. Evaluation of a population roster as a source of population controls: the Massachusetts Resident Lists. Am J Epidemiol 1999;150:354–8.
  13. Kato I, Toniolo P, Koenig KL, et al. Comparison of active and cancer registry-based follow-up for breast cancer in a prospective cohort study. Am J Epidemiol 1999;149:372–8.
  14. Falkner KL, Trevisan M, McCann SE. Reliability of recall of physical activity in the distant past. Am J Epidemiol 1999;150:195–205.
  15. EPICURE: a package of 5 programs for generalized risk modeling and person-year computation. Version 2.10. Seattle, WA: HiroSoft International Corporation, 1999.
  16. Harty LG, Shields PG, Winn DM, et al. Self-collection of oral epithelial cell DNA under instruction from epidemiologic interviewers. Am J Epidemiol 2000;151:199–205.
Received for publication March 9, 2000. Accepted for publication March 29, 2000.