* National Center for Toxicogenomics and the National Toxicology Program, National Institute of Environmental Health Sciences, P.O. Box 12233, Research Triangle Park, North Carolina 27709;
E. I. du Pont de Nemours & Co., Haskell Laboratory, P.O. Box 50, Newark, Delaware 19714;
Department of Biochemistry and Molecular Biology, Michigan State University, 223 Biochemistry Building, Wilson Road, East Lansing, Michigan 48824; and
Birth Defects Research Center, Department of Pediatrics, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, Wisconsin 53226
Received October 9, 2002; accepted February 4, 2003
ABSTRACT
Key Words: Human Genome Project; genomic research; high-throughput genomic technologies; risk assessment; regulatory guidelines.
INTRODUCTION
Spurred by the demands of the Human Genome Project, advances in high-throughput technologies and bioinformatics have provided new insights into mechanisms of toxicity. While these advances promise to revolutionize our ability to characterize hazard and risk, the challenge is to establish a body of knowledge that will serve as a foundation for applying these new data to risk assessment (Fielden et al., 2002). Although attention has focused on the dizzying pace of technological advancement in genomics, proteomics, and bioinformatics, it is imperative that the results of these advances be transparent to those who will ultimately use the data for the protection of human health in the regulatory arena. Additionally, although the goal of the scientist is to understand mechanisms of toxic responses at the gene and protein level and to advance the scientific basis for the development and application of genomic methodologies to mechanism-based risk assessment, important social, legal, and ethical issues must also be considered to ensure the protection of individual rights and to prevent potential misuse of such data.
In recognition of the potential of genomic technologies to revolutionize toxicology and perhaps even redefine the concepts fundamental to our understanding of toxicity, the Society of Toxicology Task Force to Improve the Scientific Basis of Risk Assessment brought scientists actively pursuing research in the field of toxicogenomics together with risk assessment scientists and regulators. The objective was to begin a dialogue on the incorporation of genomic data in the regulatory process. This article highlights some of the ongoing research that was presented, but more importantly, it identifies the challenges being faced by investigators who are applying genomic technologies to toxicology as well as incorporating these data into existing risk assessment paradigms.
Application of Genomics in Disease Diagnosis and Treatment
There are several key points during disease development where genomic information may be beneficial.
One of the first disease areas to which genomic technologies have been applied is cancer. Investigators at the National Cancer Institute's Developmental Therapeutics Program have studied gene expression profiles in 60 cancer cell lines (the NCI60) derived from a wide variety of human tumor types. They have assessed the expression profiles of 8000 genes in each cell line for unique expression patterns, both alone and after exposure to over 70,000 different antineoplastic and potential antineoplastic agents. This research uncovered gene expression patterns that may lead investigators to a better understanding of the cellular origin from which each tumor cell line was derived. However, the investigators also demonstrated that clustering the gene expression changes produced a different relational map for the NCI60 cell lines than would have been produced by evaluating sensitivity to chemotherapeutic agents alone. These data are proving invaluable for mechanistic understanding of drug sensitivity and resistance, and will benefit efforts to develop chemotherapeutic agents that produce greater toxicity to tumor tissue and less toxicity to nontumor tissues (Staunton et al., 2001).
In the area of disease diagnosis and prognosis, recent studies on patients with ovarian cancer have produced encouraging results. An approach examining proteomic patterns in serum from neoplastic and nonneoplastic ovarian disease resulted in the identification of a cluster pattern capable of distinguishing cancer from noncancer patients. In a masked study of 116 patients, the algorithm correctly identified all 50 ovarian cancer patients, including 18 individuals with stage I disease. Of the 66 nonmalignant disease cases, 63 were identified as not cancer (Ardekani et al., 2002; Petricoin et al., 2002a,b). Additional research has identified mRNA markers that correlate with tumor responsiveness to chemotherapy using various algorithms and has validated these markers with 28 independent clinical samples. Genomic markers that predict outcome in the ovarian tumor set showed a reported prediction accuracy of 90-95% in the cross-validation discovery set, 60-80% using any single algorithm, and approximately 90% for poor-outcome patients. These data are extremely encouraging for the individualization of medical treatment using genomic techniques (Buchholz et al., 2002).
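The figures reported for the masked ovarian cancer study translate directly into the standard diagnostic performance measures; a minimal sketch using the patient counts stated above:

```python
# Diagnostic performance implied by the masked ovarian study above:
# 116 patients, all 50 cancers detected, 63 of 66 nonmalignant cases
# correctly called "not cancer".

def sensitivity(true_pos, total_pos):
    """Fraction of diseased patients correctly identified."""
    return true_pos / total_pos

def specificity(true_neg, total_neg):
    """Fraction of disease-free patients correctly identified."""
    return true_neg / total_neg

sens = sensitivity(50, 50)   # all 50 cancers detected -> 100% sensitivity
spec = specificity(63, 66)   # 63 of 66 noncancer cases -> ~95% specificity

print(f"sensitivity: {sens:.1%}, specificity: {spec:.1%}")
```

High sensitivity matters most here, since the goal is to catch stage I disease; the three false positives set the specificity just under 96%.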
Global gene expression analysis also is being used to examine pathways affected following exposure to low-dose ionizing radiation and to identify downstream cellular changes that could serve as biomarkers for adverse health effects. Custom human and mouse arrays enriched for radiation-responsive genes were used to identify genes whose expression is modulated within minutes to hours of exposure to low doses (<10 cGy) of ionizing radiation in human lymphoblastoid cells as well as in multiple mouse tissues. When the time-dependent effects of different radiation doses on gene expression were grouped by pathway, significantly different gene expression profiles emerged at each dose. Somewhat surprisingly, a number of differentially expressed genes were specific to low-dose radiation when compared to the higher dose.
Despite the above examples of preliminary success, it is clear that analysis of gene expression alone cannot provide the entire picture of cellular regulation. However, these analyses can be complemented and perhaps strengthened by a comprehensive analysis of the proteome (Hanash, 2002). Such an analysis is especially important when one considers that many regulatory signals affect proteins posttranslationally. Compartmentalization of proteins within the cell can also have functional relevance. Recent studies using dendritic cells (DCs), antigen-presenting cells that play a major role in initiating primary immune responses, illustrate how combined proteomic and genomic strategies can provide insight into disease processes. Two independent approaches, DNA microarrays and proteomics, have been utilized to analyze the expression profile of human CD14+ blood monocytes and their derived DCs. Analysis of gene expression changes at the RNA level using oligonucleotide microarrays complementary to 6300 human genes showed that 40% were expressed in DCs. Four percent of these (255) were found to be regulated during DC differentiation or maturation. Most of these genes were not previously associated with DCs and included genes encoding secreted proteins as well as genes involved in cell adhesion, signaling, and lipid metabolism. Protein analysis of the same cell populations was done using two-dimensional gel electrophoresis. A total of 900 distinct protein spots were detected, and 4% of them exhibited quantitative changes during DC differentiation and maturation. Differentially expressed proteins were identified by mass spectrometry and found to represent proteins with fatty acid-binding or chaperone activities as well as proteins involved in cell motility. In addition, proteomic analysis provided an assessment of posttranslational modifications. For example, the chaperone protein calreticulin was found to undergo cleavage, yielding a novel form.
The combined oligonucleotide microarray and proteomic approaches have uncovered novel genes associated with DC differentiation and maturation and have allowed analysis of posttranslational modifications of specific proteins as part of these processes.
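The integration step described above amounts to cross-referencing the genes regulated at the mRNA level with the proteins whose spots changed on the 2-D gels. A hedged sketch of that comparison, using invented gene symbols rather than the study's actual hits:

```python
# Hypothetical sketch of combining the two profiling layers described above.
# Gene symbols are illustrative placeholders, not results from the DC study.

rna_regulated = {"CD1A", "FABP4", "CCL22", "CALR", "ITGB2"}   # microarray hits
protein_changed = {"FABP4", "CALR", "HSPA5", "ACTB"}          # 2-D gel hits

# Concordant hits: changed at both the transcript and the protein level.
concordant = rna_regulated & protein_changed

# Protein-only hits may reflect posttranslational regulation that
# microarrays cannot see (e.g., the calreticulin cleavage noted above).
protein_only = protein_changed - rna_regulated

print(sorted(concordant))     # ['CALR', 'FABP4']
print(sorted(protein_only))   # ['ACTB', 'HSPA5']
```

Discordance between the two sets is informative rather than a failure: it flags candidates for posttranslational regulation.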
Application of Genomics to Predictive Toxicology
The new genomic and proteomic technologies offer tremendous potential for the development of toxicity screens, the characterization of drug-related toxicity, the selection of candidate molecules with lower toxicity, the elucidation of mechanisms of toxicity, and the reduction of cycle time in drug discovery and development. However, barriers exist to the realization of these goals, including the lack of publicly available databases; the paucity of validated technologies; the lack of comparative data on experimental platforms, experimental approaches, and study design; the immaturity of robust tools for data analysis; uncertainty about the direct relationship between transcripts and toxicity; and, finally, uncertainty about regulatory applications. Cooperative efforts coordinated through both the National Institutes of Health and the Health and Environmental Sciences Institute of the International Life Sciences Institute (ILSI/HESI) are addressing these challenges.
The National Center for Toxicogenomics (NCT), centered at the National Institute of Environmental Health Sciences, is developing genomic technologies in collaboration with other research institutes, such as ILSI/HESI (see below), and through cooperative agreements with industry (Tennant, 2002). The early effort at NCT focused on a proof-of-principle program to determine whether gene expression profiles can distinguish peroxisome proliferator- and barbiturate-treated animals. The peroxisome proliferators studied include Wyeth-14643, clofibrate, and gemfibrozil; the barbiturates include phenobarbital and hexobarbital. Phenytoin, a phenobarbital-like inducer, was also included. The program includes multiple dose levels and time-course information and is currently focused on gene expression profiling. Multiple bioinformatics tools are being employed to classify compounds using previously derived data. Samples have been analyzed in a blinded fashion. Of 23 blinded samples, 12 were correctly classified as being similar to another compound of the same class, and 10 were correctly classified as not being similar to a "toxic profile" in the database (Hamadeh et al., 2002a,b).
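The blinded classification above can be sketched as matching an unknown profile against class-average reference profiles, with a similarity threshold to allow a "no match" call. This is a minimal illustration, not the NCT pipeline, and all expression values are invented:

```python
# Minimal sketch of classifying a blinded expression profile by correlation
# with class-average reference profiles. Log-ratio values are invented.

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Average log-ratio profiles for the two training classes in the study above.
reference = {
    "peroxisome_proliferator": [2.1, 1.8, -0.3, 0.1, 1.5],
    "barbiturate":             [-0.2, 0.4, 1.9, 2.2, 0.1],
}

def classify(profile, refs, threshold=0.7):
    """Assign the best-correlated class, or 'no match' if nothing in the
    database resembles the sample (the 'not a toxic profile' call)."""
    best = max(refs, key=lambda c: pearson(profile, refs[c]))
    return best if pearson(profile, refs[best]) >= threshold else "no match"

blinded = [1.9, 1.6, -0.1, 0.3, 1.2]
print(classify(blinded, reference))   # 'peroxisome_proliferator'
```

The threshold is what lets the classifier abstain, which mirrors the study's distinction between "similar to a class" and "not similar to any toxic profile."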
The NCT program is extending its research to develop an "encyclopedia" of tissue-specific gene expression signatures. Beginning with hepatotoxicity, the compendium will include gene expression profile data from acute, subchronic, and chronic treatments. The first six hepatotoxicants selected for study are acetaminophen, furan, furfural, methyl eugenol, methylphenidate, and riddelliine. The program will develop over the next five years and is currently focusing on methodological issues such as interlaboratory and profiling system platform differences, architecture of the database (known as the Chemical Effects in Biological Systems, or CEBS), and establishment of the database consortium.
In response to the emerging issues and challenges associated with toxicity screening, the members of ILSI/HESI formed a subcommittee with the immediate goals of (1) evaluating the different experimental methodologies for measuring alterations in gene expression, (2) contributing to the development of an international database linking gene expression data and key biological parameters, and (3) determining if known mechanisms and pathways of toxicity can be associated with characteristic gene expression profiles. For this work, the ILSI/HESI subcommittee has focused on hepatotoxicity, nephrotoxicity, and genotoxicity. Thus, the ILSI efforts complement those of the NCT.
Cisplatin, puromycin, and gentamicin were chosen for the nephrotoxicity studies, which are currently ongoing. Preliminary changes in kidney gene expression in response to cisplatin appear consistent with known mechanisms of toxicity, and effects could be observed as early as 4 h postdose. With regard to variability, differences between array replicates within one laboratory and between laboratories have been minimal. However, variability between animals appears to be significant. This has led to a decision to analyze individual samples rather than pooled RNA. These data are preliminary, and considerably more analysis is needed before any firm conclusions can be drawn.
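The pooling decision above rests on a variance comparison: when animal-to-animal variance dwarfs array-replicate variance, pooling averages away the biology that individual samples would reveal. A sketch with invented numbers, assuming a simple one-gene, four-animal design:

```python
# Illustrative variance partitioning behind the "individual vs. pooled RNA"
# decision above. All expression values are invented.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Log-expression of one gene: 4 animals x 3 array replicates each.
animals = [
    [2.0, 2.1, 1.9],   # animal 1
    [3.5, 3.4, 3.6],   # animal 2
    [1.2, 1.3, 1.1],   # animal 3
    [2.8, 2.9, 2.7],   # animal 4
]

within = mean([variance(a) for a in animals])    # replicate (technical) variance
between = variance([mean(a) for a in animals])   # animal (biological) variance

print(f"replicate variance: {within:.3f}")
print(f"animal variance:    {between:.3f}")   # far larger -> analyze individuals
```

When `between` is this much larger than `within`, a pooled sample yields a single mean with no estimate of the biological spread, which is exactly the information the subcommittee chose to preserve.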
Progress also has been made on the development of the ILSI Microarray Database (IMD). To date, this is a simple, on-line data storage site that includes a description of the various array platforms, chip and study designs, and a compilation of the comparative gene expression data. A Basic Expression Exchange Format (BEEF) also has been developed to standardize data submission from the variety of platforms being used in the participating laboratories. The IMD has allowed the sharing of data among the participants in the subcommittee and has facilitated the identification of issues associated with cross-platform analysis described above.
Although only preliminary results are available from these studies, several issues have been identified that need to be addressed by those participating in this research field. These include the need to identify and reduce sources of experimental variability, the finding that different approaches to data analysis can lead to artifacts in comparisons and thus the need for a common analytical metric, and the realization that different experimental platforms will, in fact, enrich the data set.
Genetic Variability and Susceptibility
Genetic polymorphisms are variations in DNA sequences (e.g., substitutions, insertions, and deletions) that are present at a frequency greater than 1% in a population. These variations, which are common in all genes, may or may not have functional significance. However, even if functional significance is not apparent, a polymorphism may be in linkage disequilibrium with a variant that does exhibit significance and, as such, may still be associated with susceptibility. From a risk assessment perspective, what matters are the functional variations that affect the internal dose of a chemical, target organ damage, and/or other responses to exposure and, eventually, an adverse health outcome. Of crucial importance is the identification of susceptible subgroups that may be at much greater risk from exposure than the general population. Knowledge of mechanism and characterization of genetic susceptibility will be useful in guiding approaches for controlling or preventing the effects of environmental exposure.
Several groups have been pursuing the identification of functional variants in genes identified on the basis of current toxicological knowledge, i.e., candidate genes. Examples of genes known to be relevant to environmental chemical exposures, and that illustrate the role polymorphisms play in rendering an individual susceptible or at reduced risk compared to the general population, include GSTM1 and susceptibility to the carcinogen benzo[a]pyrene (Engel et al., 2002), adverse drug reactions in individuals exhibiting functional CYP2D6 polymorphisms (Meyer and Zanger, 1997), and alcohol intolerance among aldehyde dehydrogenase-deficient individuals (Maezawa et al., 1995). However, other factors besides metabolic status will modulate susceptibility, including physiology, rates of DNA repair, and proliferative or apoptotic responses to exposure (e.g., Qiao et al., 2002). As these are characterized, information about genetic or epigenetic variation in these factors can be incorporated into physiologically based risk models.
One early example of incorporating a polymorphism into risk assessment is the case of dichloromethane and the enzyme glutathione S-transferase theta 1 (GSTT1; Jonsson and Johanson, 2001). Dichloromethane (DCM) can be activated by GSTT1, and approximately 20% of the population has a GSTT1 null genotype (GSTT1-/-). Animal studies suggest that DCM is a carcinogen. Individuals with a functional GSTT1 enzyme who are exposed to DCM are at high risk, whereas exposed individuals who carry the null GSTT1 genotype, and thus a nonfunctional enzyme, may have very little risk. However, the same null GSTT1 genotype that is protective against DCM-associated disease appears to increase risk upon exposure to ethylene oxide (Fennell et al., 2000). Thus the labeling of a genotype as "at risk" or "protective" is chemical- and dose-dependent, a situation that clearly complicates interpretation of these studies for the general public.
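The DCM example lends itself to a worked calculation of how a genotype-stratified risk differs from the population average. The unit risks below are invented placeholders, not values from Jonsson and Johanson (2001); only the roughly 20% null-genotype frequency comes from the text above:

```python
# Hedged worked example of folding the GSTT1 polymorphism into a risk
# estimate. Risk magnitudes are hypothetical placeholders.

freq_null = 0.20              # GSTT1-/- carriers (nonfunctional enzyme)
freq_active = 1 - freq_null   # carriers of a functional GSTT1 enzyme

risk_active = 1e-4            # hypothetical DCM excess risk, functional GSTT1
risk_null = 1e-6              # hypothetical residual risk, null genotype

# Population-average risk vs. the risk borne by the susceptible subgroup.
pop_risk = freq_active * risk_active + freq_null * risk_null

print(f"population-average risk: {pop_risk:.2e}")
print(f"susceptible subgroup carries {risk_active / pop_risk:.2f}x the average")
```

The point of the calculation is that a population-average estimate understates the risk to the GSTT1-active majority while overstating it for null-genotype carriers, which is exactly the distinction a genotype-aware assessment preserves.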
In contrast to the candidate gene approach, others are using global approaches to identify variants associated with susceptibility. Current estimates suggest the presence of 10 million single nucleotide polymorphisms (SNPs) in the human genome with 500,000 of these localized within structural genes. Of these, it is estimated that between 1000 and 10,000 SNPs will have medical utility. Using existing technology, considerable progress has been made in SNP discovery. There has been less success, though, in SNP association with complex phenotypes and the identification of risk factors. Although some have expressed considerable hope in expression profiling as a means of identifying leads to possible genetic risk factors for complex disease, major limitations to this otherwise promising technology are the number of potential targets that would be identified and the problem of obtaining early-stage disease tissue to allow the identification of genetic causality.
Three approaches to SNP association with phenotype have been attempted. Although much effort has been placed on the discovery and assessment of candidate gene SNPs as discussed above, this approach has had mixed success. The contribution of any one candidate gene to a complex trait can be relatively small and, as such, difficult to prove. Similarly, positional cloning has not worked well for multigenic or quantitative traits. In contrast, with suitable statistical and technological power, whole genome scans should work. The MassEXTEND™ reaction developed by Sequenom (Jurinke et al., 2001), an extension of the well-established single-base extension assay for genotyping (Lindblad-Toh et al., 2000), coupled with a mass spectrometer array and artificial intelligence-based software for analysis, offers the technological power to accomplish such whole genome scans. For complex phenotypes, multiplexed SNP assays are performed on individuals and followed by many different association tests with the same genotype. The major limitation to this approach is the large number of individuals needed for such analyses. Nevertheless, Sequenom currently has nearly 200,000 validated gene-based SNP assays that cover virtually all human genes. In a similar effort, representing a collaboration between the private and public sectors, the SNP Consortium and First Genetic Trust also have made considerable progress in developing assays for the same number of SNPs. As of November 2001, this group had identified 1.7 million SNPs in 24 individuals representative of the ethnic diversity of the human population; of these, 1.5 million had been mapped to the sequence of the human genome. This information will be placed in the public domain, and the development of these portfolios will advance the discovery of multifactorial genes of therapeutic and diagnostic utility.
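The per-SNP association test run many thousands of times in such a whole genome scan is, at its simplest, a 2x2 chi-square comparing allele counts between cases and controls. A minimal sketch with invented counts:

```python
# One per-SNP allelic association test of the kind repeated across the
# genome in a scan like that described above. Allele counts are invented.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# (minor-allele count, major-allele count) among case and control alleles.
cases    = (60, 140)   # 200 case alleles
controls = (40, 160)   # 200 control alleles

chi2 = chi_square_2x2(*cases, *controls)
print(f"chi-square = {chi2:.2f}")   # > 3.84 -> nominally significant, p < 0.05
```

The 3.84 cutoff is the 0.05 critical value for one degree of freedom; as the next section notes, that nominal threshold cannot be used unadjusted when thousands of loci are tested at once.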
In summary, several critical data needs, not necessarily listed here in order of importance, must be addressed for the successful incorporation of genetic polymorphism information into risk assessment.
Ethical and Legal Issues in Genomics
Scholars in the fields of ethics, law, philosophy, and public policy have been highly interested in the social implications of the Human Genome Project and related research in human genetics. Much of the resulting research in these fields has examined challenges surrounding the development and integration of predictive tests for genetic diseases and disease susceptibilities. Prominent topics in this literature include the role of genetic counseling in patient decision making, best practices for informed consent, consumer and physician education regarding genetic influences on disease, the impact of genetic testing on family relationships, and protections against genetic discrimination in insurance and employment contexts. To a much lesser degree, scholarship in the humanities and research in the social sciences have also examined ethical, legal, and social issues surrounding the application of genetic information and diagnostic tests in nonclinical settings. For example, researchers have considered issues surrounding the use of DNA-based tests to determine paternity in child-custody disputes. The application of genetic tests to link criminals to forensic evidence also has been studied, and there is an emerging body of literature on tests for genetic sensitivities to occupational and environmental agents. Overall, however, surprisingly little attention has been directed toward nonclinical applications of modern genetic tools.
One field in which the lack of research on the ethical and social dimensions of applying genetic technologies in nonclinical contexts is particularly striking is that of toxicological sciences. To date, there have been no significant studies of the ethical, legal, policy, and regulatory implications surrounding the application of genetic information and technologies to hazard identification, risk assessment, risk management, and environmental regulation. This is discouraging because the elucidation of mechanisms by which genes influence response to environmental toxicants is expected to transform significantly the practices of risk assessment and environmental regulation. For example, genetic tests might be used to identify hypersensitive subgroups in the population or reveal presently unknown environmental hazards. These and other possibilities argue strongly for the need to study the ethical, legal, and social dimensions of the coming "genetic revolution" in the toxicological sciences. If genomic technologies are to be successfully integrated into ongoing risk assessment and risk management efforts, the social impacts of these applications must be carefully studied and thoughtfully addressed (Christiani et al., 2001).
Challenges to the Application of Genomics in Risk Assessment
To better characterize potential hazards to humans, the risk assessment process is evolving from a process dependent on traditional toxicologic testing paradigms into a process that incorporates more scientific understanding of mechanistic data and biologically based models. In this regard, greater emphasis is being placed on biomarkers, that is, indicators that signal events in biological systems. As documented by some of the examples above, at least two approaches are being used to incorporate genomics data into toxicology and identify such biomarkers: a targeted approach, in which one assesses the expression levels of key biochemical pathways identified a priori, and a "shotgun" approach, in which gene expression profiling, coupled with bioinformatics techniques, is used to identify these key pathways. It is the latter approach for which the incorporation of genomics data into quantitative risk assessments is uncertain but expected to be revolutionary. As with any developing technology, there are challenges as well as opportunities. The challenges include the analysis of complex data sets that require sophisticated bioinformatics to unravel, a greater understanding of the significance of small changes in gene expression to disease development, and the linking of gene and protein changes following chemical exposure to the ultimate outcome of disease, both qualitatively and quantitatively.
The new technology for assessing tissue status in response to toxicants at the gene expression level will be useful for establishing and validating hypothesized modes of action, characterizing dose-time dependencies of the mode of action, and assessing the relative sensitivity to indicated modes of action across species. In contrast to the current paradigms for assessing risk of noncancer toxic endpoints (i.e., the use of safety, or uncertainty, factors), information on genetic variability has the potential for improved quantification of individual and population-based interindividual variability in susceptibility to different diseases and toxicants. Obstacles to these potential advances include the high cost of replicate measures, the collection of data as dichotomous variables (e.g., the gene expression level exceeds a predefined threshold or it does not), and the difficult task of integrating the vast amount of data into meaningful processes overlaid on background fluctuations in cell cycle stage, diurnal cycles, developmental maturation, and aging. Complex diseases offer their own challenges. Whereas large population case-control studies may well detect some individual genetic determinants associated with moderate relative risks of specific diseases, these genetic determinants may not act independently. Sorting out the combined quantitative contributions of alleles at many specific loci to polygenic traits, and the interactions among loci, will require overcoming substantial multiple comparison problems in studies where the effects at thousands of loci are examined.
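The multiple comparison problem raised above can be made concrete with a back-of-the-envelope calculation: testing thousands of loci at a nominal alpha of 0.05 guarantees false positives unless the threshold is adjusted. A Bonferroni correction is the simplest (and most conservative) fix:

```python
# Sketch of the multiple-comparison arithmetic for a genome-wide scan.
# The locus count is illustrative; real scans may test far more.

n_loci = 10_000
alpha = 0.05

# Expected false positives with no correction, assuming no locus is
# truly associated with the trait.
expected_false_pos = n_loci * alpha      # 500 spurious "hits"

# Bonferroni-adjusted per-test threshold controlling family-wise error.
bonferroni_alpha = alpha / n_loci        # 5e-6 per locus

print(f"uncorrected: ~{expected_false_pos:.0f} false positives expected")
print(f"Bonferroni per-locus threshold: {bonferroni_alpha:.0e}")
```

The severity of the corrected threshold is one reason the text stresses the need for large sample sizes and statistical power in whole genome scans; less conservative procedures such as false discovery rate control trade some stringency for power.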
How are these new approaches being viewed by those making regulatory decisions? The approaches to risk/benefit analysis being taken by regulatory agencies are evolving, and the science of genomics is stimulating the development and formation of improved strategies for safety assessment. For regulatory purposes, it will be necessary to establish the relationships between the new genomics-based endpoints and known health outcomes, more established methodologies, and the biological relationships of the endpoints in laboratory animal models to humans. For full integration of genomic information into risk and safety assessment, regulatory bodies also will need to gain confidence in the accuracy, sensitivity, and robustness of these new methods. When will regulatory agencies accept data from these new genomics-based methods? The simple answer is when it is appropriate to do so. To gain regulatory acceptance it will be necessary for consensus to emerge from the larger scientific community and then from within the responsible centers within the various agencies. Guidance, guidelines, and regulations will then emanate from such consensus.
Summary
It is clear from the many examples provided by the conference participants that genomic technologies are moving forward at a rapid pace in the areas of defining disease susceptibility, diagnosis, and treatment, as well as in gaining a better understanding of toxicant mechanisms of action. Progress also is being made in using these same approaches to develop toxicity screens that will be useful in predictive toxicology. However, many challenges remain (Bishop et al., 2001; Simmons and Portier, 2002). Questions regarding deficiencies in both the number and quality of publicly available databases, the number of validated technologies, comparative data on experimental platforms, and robust tools for data analysis still plague this field. There also remains a paucity of information regarding the ethical, legal, and policy implications surrounding the application of genetic information to hazard identification, risk assessment, and risk management. Active efforts to address each of these concerns are underway, and these efforts must be completed before the use of genomic information to make regulatory decisions is fully embraced. Furthermore, a better understanding of genomic endpoints relative to known health outcomes, as well as of how genomic endpoints in animal models extrapolate to human health, will be necessary before there is acceptance by the regulatory community.
NOTES
1 To whom correspondence should be addressed at the National Center for Toxicogenomics and the National Toxicology Program, National Institute of Environmental Health Sciences, 111 Alexander Drive, Mail Drop B3-10, Research Triangle Park, NC 27709. Fax: 919-541-4632. E-mail: cunning1@niehs.nih.gov.
REFERENCES
Bishop, W. E., Clarke, D. P., and Travis, C. C. (2001). The genomic revolution: What does it mean for risk assessment? Risk Anal. 21, 983-987.
Buchholz, T. A., Stivers, D. N., Stec, J., Ayers, M., Clark, E., Bolt, A., Sahin, A. A., Symmans, W. F., Hess, K. R., Kuerer, H. M., et al. (2002). Global gene expression changes during neoadjuvant chemotherapy for human breast cancer. Cancer J. 8, 461-468.
Christiani, D. C., Sharp, R. R., Collman, G. W., and Suk, W. A. (2001). Applying genomic technologies in environmental health research: Challenges and opportunities. J. Occup. Environ. Med. 43, 526-533.
Engel, L. S., Taioli, E., Pfeiffer, R., Garcia-Closas, M., Marcus, P. M., Lan, Q., Boffetta, P., Vineis, P., Autrup, H., Bell, D. A., et al. (2002). Pooled analysis and meta-analysis of glutathione S-transferase M1 and bladder cancer: A HuGE review. Am. J. Epidemiol. 156, 95-109.
Fennell, T. R., MacNeela, J. P., Morris, R. W., Watson, M., Thompson, C. L., and Bell, D. A. (2000). Hemoglobin adducts from acrylonitrile and ethylene oxide in cigarette smokers: Effects of glutathione S-transferase T1-null and M1-null genotypes. Cancer Epidemiol. Biomark. Prev. 9, 705-712.
Fielden, M. R., Fertuck, K. C., Matthews, J. B., Halgren, R., and Zacharewski, T. (2002). In silico approaches to mechanistic and predictive toxicology: An introduction to bioinformatics for toxicologists. Crit. Rev. Toxicol. 32, 67-112.
Hamadeh, H. K., Bushel, P. B., Jayadev, S., DiSorbo, O., Bennett, L., Li, L., Tennant, R., Stoll, R., Barrett, J. C., Paules, R. S., et al. (2002a). Prediction of compound signature using high density gene expression profiling. Toxicol. Sci. 67, 232-240.
Hamadeh, H. K., Bushel, P. B., Jayadev, S., Martin, K., DiSorbo, O., Sieber, S., Bennett, L., Tennant, R., Stoll, R., Barrett, J. C., et al. (2002b). Gene expression analysis reveals chemical-specific profiles. Toxicol. Sci. 67, 219-231.
Hanash, S. M. (2002). Global profiling of gene expression in cancer using genomics and proteomics. Curr. Opin. Mol. Ther. 3, 538-545.
Jonsson, F., and Johanson, G. (2001). A Bayesian analysis of the influence of GSTT1 polymorphism on the cancer risk estimate for dichloromethane. Toxicol. Appl. Pharmacol. 174, 99-112.
Jurinke, C., van den Boom, D., Cantor, C. R., and Koster, H. (2001). Automated genotyping using the DNA MassArray technology. Meth. Mol. Biol. 170, 103-116.
Lander, E. S., Linton, L. M., Birren, B., Nusbaum, C., Zody, M. C., Baldwin, J., Devon, K., Dewar, K., Doyle, M., FitzHugh, W., et al. (2001). Initial sequencing and analysis of the human genome. Nature 409, 860-921.
Lindblad-Toh, K., Winchester, E., Daly, M. J., Wang, D. G., Hirschhorn, J. N., Laviolette, J.-P., Ardlie, K., Reich, D. E., Robinson, E., Sklar, P., et al. (2000). Large-scale discovery and genotyping of single-nucleotide polymorphisms in the mouse. Nat. Genet. 24, 381-386.
Maezawa, Y., Yamauchi, M., Toda, G., Suzuki, H., and Sakurai, S. (1995). Alcohol-metabolizing enzyme polymorphisms and alcoholism in Japan. Alcohol. Clin. Exp. Res. 19, 951-954.
Meyer, U. A., and Zanger, U. M. (1997). Molecular mechanisms of genetic polymorphisms of drug metabolism. Annu. Rev. Pharmacol. Toxicol. 37, 269-296.
Petricoin, E. F., III, Ardekani, A. M., Hitt, B. A., Levine, P. J., Fusaro, V. A., Steinberg, S. M., Mills, G. B., Simone, C., Fishman, D. A., Kohn, E. C., et al. (2002a). Use of proteomic patterns in serum to identify ovarian cancer. Lancet 359, 572-577.
Petricoin, E. F., III, Mills, G. B., Kohn, E. C., and Liotta, L. A. (2002b). Proteomic patterns in serum and identification of ovarian cancer. Lancet 360, 170-171.
Qiao, Y., Spitz, M. R., Shen, H., Guo, Z., Shete, S., Hedayati, M., Grossman, L., Mohrenweiser, H., and Wei, Q. (2002). Modulation of repair of ultraviolet damage in the host-cell reactivation assay by polymorphic XPC and XPD/ERCC2 genotypes. Carcinogenesis 23, 295-299.
Simmons, P. T., and Portier, C. J. (2002). Toxicogenomics: The new frontier in risk analysis. Carcinogenesis 23, 903-905.
Staunton, J. E., Slonim, D. K., Coller, H. A., Tamayo, P., Angelo, M. J., Park, J., Scherf, U., Lee, J. K., Reinhold, W. O., Weinstein, J. N., et al. (2001). Chemosensitivity prediction by transcriptional profiling. Proc. Natl. Acad. Sci. U.S.A. 98, 10787-10792.
Tennant, R. W. (2002). The National Center for Toxicogenomics: Using new technologies to inform mechanistic toxicology. Environ. Health Perspect. 110, A8-A10.
Venter, J. C., Adams, M. D., Myers, E. W., Li, P. W., Mural, R. J., Sutton, G. G., Smith, H. O., Yandell, M., Evans, C. A., Holt, R. A., et al. (2001). The sequence of the human genome. Science 291, 1304-1351.