From the Epidemiology Branch, MD A3-05, National Institute of Environmental Health Sciences, P.O. Box 12233/111 TW Alexander Drive, Research Triangle Park, NC 27709 (e-mail: sandler@niehs.nih.gov).
INTRODUCTION
Through our training programs and our publications, we tend to underplay the effort required to conduct a high-quality, valid study. We focus on technologic innovations, such as the ability to characterize gene polymorphisms, on new and complex statistical techniques, and (to a lesser extent) on understanding the biological basis of the diseases we study. Insufficient attention is paid, for example, to the quality of our exposure assessment tools, including questionnaires (1, 2). With notable exceptions (e.g., the book by Armstrong et al. (3)), our textbooks do not instruct us on how to collect high-quality data, although they do warn of biases that may be introduced by such factors as misclassification or poor response rates. We lament the declining response rates for epidemiologic surveys but provide few opportunities for sharing ideas for improving response or understanding the implications of poor response (4, 5).
On the other hand, those who try to publish papers on the practice of epidemiology often find it an uphill battle. Funding for methods development or validation is inadequate. For those who manage to do this kind of work, publication may be difficult. Space limitations in journals can make it hard for authors to provide enough detail for readers to fully evaluate the study methods or to carry out similar research. And, yes, editors tend to favor papers that link exposure to disease over those on the quality of questionnaires or field methods.
Opportunities for sharing study methods should increase considerably with the introduction of electronic publishing. The Journal has already issued a call for providing questionnaires through individual websites and even the Journal's website (2). Presumably, other methodological details could also be made available in this way.
In the absence of a comprehensive literature, many epidemiologists must figure out for themselves the best methods to improve response rates, collect specific samples, or measure exposures, without the benefit of solutions that others have already developed. We are left to call friends and colleagues and haphazardly assemble information, which we, in turn, do not routinely make available to others unless asked.
From time to time, or regularly, depending on the level of interest and the quality of papers we receive, the Journal would like to publish papers on practical topics that we believe epidemiologists would find useful. Such papers might include validations of questionnaires or other exposure assessment methods, the development and validation of scales for assessing exposure or outcome, direct comparisons of different methods of data collection or exposure assessment, novel approaches to survey design or implementation, practical guides on how (or why) to collect specific kinds of data, implement new study designs, or carry out complex statistical analyses, and commentaries on the relative merits and limitations of particular methods in epidemiology or biostatistics.
The paper by Metzger et al. (6) in this issue fits the bill in two ways. It describes a technologic advance in computer-assisted interviewing that may not yet be well known or widely available. The paper also deals with the important issue of how best to obtain candid and truthful answers to sensitive questions on questionnaires. The authors evaluated the feasibility and acceptability of audio computer-assisted self-interviews and demonstrated that risky behaviors were reported more frequently by participants randomized to such interviews than by those assigned to in-person interviews. Because the multisite study was large and randomized and included a wide range of sociodemographic groups, the results are likely to apply to other populations.
The paper by Sesso et al. (7), also in this issue, describes a novel approach to ascertaining the vital status of study participants. Data from Social Security Administration death files accessible on the World Wide Web proved to be an efficient and cost-effective means of identifying deaths among a cohort of men. Limitations on the number of potential matching variables, the lack of information on causes of death, and the poor results for women suggest, however, that this approach will not entirely replace more traditional methods, such as National Death Index searches. Even so, readers will be glad to know that such a resource exists.
We can point to other specific papers of the sort we would like to see. For example, the paper by Austin et al. (8) on the collection of biological samples for studies of gene-environment interactions in cardiovascular disease is a useful instructional guide. The commentaries by Maclure and Willett on the uses and abuses of the kappa statistic (9) and by Maclure and Greenland on tests for trend (10) are practical guides for analyzing data and avoiding common pitfalls (a short, self-contained sketch of the kappa statistic itself follows this paragraph). Weinberg and Sandler's paper on randomized recruitment (11) was an attempt to bring a method already published in the biostatistical literature to the attention of epidemiologists who might not be regular readers of those journals, while at the same time providing new insights. Other examples from recent Journal issues include papers on using population rosters to select controls (12), comparing cancer follow-up using active and passive approaches (13), and evaluating the reliability of self-reported data on past levels of physical activity (14).
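For readers who have not encountered the statistic discussed in reference 9, the following is a minimal sketch of Cohen's kappa, computed from first principles on hypothetical ratings. It is offered only as an illustration of the quantity itself, not as a summary of Maclure and Willett's commentary.

```python
# Minimal illustration of Cohen's kappa: agreement between two raters,
# corrected for the agreement expected by chance alone.

def cohen_kappa(a, b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    categories = set(a) | set(b)
    p_exp = sum((a.count(c) / n) * (b.count(c) / n)    # chance agreement from
                for c in categories)                   # each rater's marginals
    return (p_obs - p_exp) / (1 - p_exp)

# Two hypothetical raters classifying 10 subjects as exposed (1) or not (0)
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(f"kappa = {cohen_kappa(rater_a, rater_b):.2f}")  # kappa = 0.40 here
```

One pitfall such commentaries warn about is visible in the formula: because the chance term is built from each rater's marginal frequencies, kappa depends on the prevalence of the ratings as well as on the raters' actual agreement.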
There are many other topics that merit discussion. For example, there is little information on how to actually analyze the data from case-cohort studies, and only one commercially available statistical software package can do this (15). One common weighted approach is sketched below.
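As a rough illustration (and not the software referred to in reference 15), the following sketch simulates a case-cohort sample and fits a Cox model with Barlow-type inverse-sampling-fraction weights, assuming the Python lifelines package. The simulated data, the 10 percent sampling fraction, and the weighting scheme are all assumptions made for this sketch.

```python
# Sketch of a weighted Cox analysis for case-cohort data: all cases are
# kept, a random subcohort stands in for the full cohort, and subcohort
# non-cases are up-weighted by the inverse of the sampling fraction.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 5000
exposure = rng.integers(0, 2, n)
# Exposed subjects fail faster in this toy cohort
time = rng.exponential(scale=np.where(exposure == 1, 5.0, 10.0))
event = (time < 8.0).astype(int)            # administrative censoring at t = 8
time = np.minimum(time, 8.0)
cohort = pd.DataFrame({"time": time, "event": event, "exposure": exposure})

# Case-cohort sample: every case plus a 10 percent random subcohort
f = 0.10
in_subcohort = rng.random(n) < f
sample = cohort[in_subcohort | (cohort["event"] == 1)].copy()

# Barlow-type weights: cases count fully; each subcohort non-case
# represents 1/f members of the full cohort
sample["weight"] = np.where(sample["event"] == 1, 1.0, 1.0 / f)

cph = CoxPHFitter()
cph.fit(sample, duration_col="time", event_col="event",
        weights_col="weight", robust=True)  # robust SEs to account for weights
cph.print_summary()
```

More refined schemes (e.g., time-dependent Prentice weights) exist; the sketch is meant only to convey the overall shape of such an analysis.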
Another area in need of work is alternative approaches to the collection of biological and environmental samples in studies that are too large or too geographically dispersed for in-person collection. Harty et al. (16) evaluated self-collection of buccal cell samples for DNA analysis. With dozens (if not hundreds) of such studies being planned, it would be useful to know whether participants can successfully collect buccal cell samples without a researcher present.
In evaluating papers for this new feature, we will not abandon the usual requirements of the Journal. This is not a call for authors to dredge up marginal data from old studies or to begin submitting incidental "validation studies" based on inadequate sample sizes or inappropriate designs. Work must be of high quality and represent a substantial contribution to the literature. Methods must be appropriate, and conclusions must be valid. Results must be generalizable beyond the specifics of the individual study involved. Furthermore, the papers must be judged to provide information of interest to a large number of readers of the Journal. The sound underpinnings of any study are of paramount importance. With this new feature, the Journal hopes to encourage dialogue that will lead to improved methods and more valid study results.