National Institute of Environmental Health Sciences, Laboratory of Computational Biology and Risk Analysis, MD A3-06, Research Triangle Park, NC 27709, USA
Abbreviations: DCM, dichloromethane; PBPK, physiologically based pharmacokinetic model
Introduction
Risk assessment is the process by which scientific data pertaining to the toxicity of a chemical agent are evaluated to make practical decisions concerning the release of that agent into the environment. At its core, risk assessment uses scientific data to address two basic questions. Is this chemical agent potentially a human cancer hazard? To what degree does the risk of cancer increase with increasing exposure to the agent? Toxicogenomics has clearly contributed to the answers to both questions. For hazard identification, toxicogenomics has provided increased confidence in extrapolating hazards observed in animal studies to hazards likely to appear in humans (e.g. the mechanism of action of dioxins). In addition, the development of transgenic animals has led to faster and more efficient screening methods for potential carcinogenicity.
In our opinion, the complete sequence of the human genome will cause a fundamental paradigm shift in the science of risk assessment. More biologically relevant information, such as DNA sequence polymorphisms in genes that determine susceptibility to toxic chemicals, will be available for incorporation into the risk decision-making process. Coupled with a mathematical model that is representative of the underlying physiology, biology and biochemistry of the system, this knowledge can be used to set reasonable exposure limits for chemicals of concern in the environment. El-Masri et al. (8) demonstrated the utility of this type of information using a physiologically based pharmacokinetic (PBPK) model to quantitatively strengthen the determination of cancer risks. Human risk was estimated based on the carcinogenic potency of dichloromethane (DCM) in mice coupled with PBPK-predicted concentrations of DNA-protein crosslinks formed in humans. Monte Carlo simulations were used to provide distributions of risk estimates for a sample of 1000 PBPK runs, each run representing a collection of biochemical and physiological parameters for an individual (with and without the polymorphism included in the model). The authors assumed that individuals vary in their ability to produce toxic metabolites and that the overall risk distribution was based on the magnitude of adduct formation, which is a biomarker of cancer risk.
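To make the structure of such a calculation concrete, the following is a minimal sketch (not the El-Masri et al. model; all parameter values and distributions here are hypothetical) of a Monte Carlo population simulation in which metabolic capacity varies between individuals, a fraction of the population carries a protective polymorphism, and risk is taken to be proportional to a surrogate adduct burden:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000            # simulated individuals, mirroring the 1000 PBPK runs
POLY_FREQ = 0.24     # hypothetical frequency of the protective variant
POTENCY = 1.0e-6     # hypothetical risk per unit of adduct burden

def simulate_population(include_polymorphism: bool) -> np.ndarray:
    """Return per-individual risk estimates from sampled metabolic capacity."""
    # Hypothetical lognormal inter-individual variability in the metabolic
    # pathway that produces the reactive, adduct-forming metabolite.
    capacity = rng.lognormal(mean=0.0, sigma=0.3, size=N)
    if include_polymorphism:
        # Carriers of the protective variant form fewer toxic metabolites
        # (the 10-fold reduction is an assumption, not from the source).
        carriers = rng.random(N) < POLY_FREQ
        capacity[carriers] *= 0.1
    adducts = 7.0 * capacity      # surrogate for DNA-protein crosslink burden
    return POTENCY * adducts      # risk assumed proportional to adduct formation

for label, flag in [("polymorphism ignored", False), ("polymorphism included", True)]:
    risk = simulate_population(flag)
    print(f"{label:22s} mean={risk.mean():.2e}  "
          f"median={np.median(risk):.2e}  p95={np.percentile(risk, 95):.2e}")
```

Running a simulation of this shape reproduces the qualitative pattern described below: including a protective subpopulation lowers the mean risk markedly, while the upper tail of the distribution, dominated by non-carriers, moves far less.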
The DCM example demonstrates the utility of molecular epidemiology in strengthening the quality of quantitative estimates of population risk. Using this method, the estimated lifetime risk to humans from exposure to 1 ppm DCM was calculated to be 7×10⁻⁶ when the polymorphism is ignored. However, when information on the frequency of the polymorphism in the US population was utilized, the estimate was reduced to 5.2×10⁻⁶ for a sample of 1000 randomly chosen persons. Indeed, a genetic polymorphism, which is protective in this instance, accounted for an ~24% drop in the risk estimate. This change in magnitude is seen in the average and median estimates, but not at the 95th percentile, which is more sensitive to the variance of the population distribution. Inclusion of a 'protected' subpopulation in risk estimation could therefore result in a significant decrease in the average risk of a population but may not affect the highest-risk fraction of that population.
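As a rough consistency check (our own back-of-the-envelope reading, not a calculation taken from the original paper): if a fraction $f$ of the population carries the protective polymorphism and carriers' risk is reduced by a factor $\rho$ relative to the baseline risk $r_0$, the population-average risk is

$$\bar{r} = r_0\left[1 - f(1 - \rho)\right].$$

With $r_0 = 7\times10^{-6}$ and $\bar{r} = 5.2\times10^{-6}$, the effective protected fraction is $f(1-\rho) \approx 0.26$, in line (allowing for rounding in the published estimates) with the ~24% drop noted above.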
In addition, the primary sequence of the human genome can, in principle, be used to identify all transcriptionally active genes using gene expression microarrays. These genes can then be examined to determine gene expression profiles after toxicant exposure. Such profiles will deepen our understanding of the mechanisms of toxicity by identifying relationships between chemical exposure and changes in genome-wide gene expression patterns. Understanding changes in gene expression patterns is critically important for a complete picture of toxicological processes, because gene expression is altered either directly or indirectly as a result of toxicant exposure. Some chemicals elicit toxic responses by first damaging cellular components (cytotoxic chemicals) or DNA (DNA-damaging agents); the cell responds by repairing the damage, in part through altered expression of the appropriate repair genes. Other chemicals that modulate endocrine systems or cellular replication affect toxic responses directly by triggering signal transduction systems, either at the membrane or in the nucleus, leading to altered gene expression. The spectrum of altered gene expression then determines the type and outcome of the toxic response. Viewed in this manner, patterns of gene expression can serve both as a biomarker of exposure and as a means of identifying mechanisms of toxicity.
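As a simple illustration of how such exposure profiles are derived in practice, the following sketch (simulated data; the gene counts, group sizes and significance thresholds are arbitrary assumptions) screens for exposure-responsive genes by fold change and a per-gene t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical two-group microarray comparison: log2 expression values for
# control vs. toxicant-exposed samples. Everything here is simulated; a real
# analysis would start from normalized array intensities.
n_genes, n_per_group = 1000, 5
control = rng.normal(0.0, 1.0, size=(n_per_group, n_genes))
exposed = rng.normal(0.0, 1.0, size=(n_per_group, n_genes))
exposed[:, :50] += 1.5            # assume 50 genes respond to the exposure

# Per-gene fold change and Welch t-test: the basic screen behind an
# "expression profile" of exposure.
log2_fc = exposed.mean(axis=0) - control.mean(axis=0)
t, p = stats.ttest_ind(exposed, control, axis=0, equal_var=False)
hits = np.flatnonzero((np.abs(log2_fc) > 1.0) & (p < 0.01))
print(f"{hits.size} genes flagged as exposure-responsive")
```

The flagged gene set is then the starting point for the mechanistic follow-up described in the surrounding text, not an end point in itself.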
Once all of this information has been generated, a standardized method of presenting it is necessary. Molecular interaction maps coupled with mechanistic models may allow more effective use of the databases created from the genome project and testing of the hypotheses it generates. The molecular interaction notation created by Kohn (9) is one example of how these data can be summarized. According to the author, molecular interaction maps represent the complex molecular interactions in a pathway in a logical manner, highlight areas of possible interaction and raise questions for further experimentation. Each interaction in a map diagram is referenced to an interactive, dynamic database of current information.
Kohn used the map diagram notation to describe mammalian cell cycle control and DNA repair systems (9). Portions of the network were grouped into functional subsystems. The map demonstrates how the various complexes may interrelate and function at gene promoter sites and at sites of DNA damage. It also demonstrates how interconnected the p53-Mdm2 system is with other parts of the network. This information may be employed in mechanism-based modeling to generate specific functional hypotheses and to design studies that address them. Simulations of a mechanistic model, together with additional experimentation, would determine the validity of these hypotheses and lead to functional models that could be employed in risk assessment. El-Masri and Portier demonstrated this approach using a simpler model describing the entry of cells into G1 following a growth signal to a growth factor receptor (10).
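To give a flavor of this modeling style, here is a deliberately minimal ODE sketch: a growth signal activates a receptor, the active receptor drives accumulation of a cyclin-like species, and commitment to G1 entry is scored when that species crosses a threshold. This is an illustration of mechanism-based modeling only, not the actual El-Masri and Portier model, and every rate constant is an arbitrary assumption:

```python
import numpy as np
from scipy.integrate import solve_ivp

S = 1.0                      # growth factor signal (arbitrary units)
K_ACT, K_DEACT = 2.0, 1.0    # receptor activation/deactivation rates (assumed)
K_SYN, K_DEG = 1.5, 0.5      # cyclin synthesis/degradation rates (assumed)
THRESHOLD = 1.0              # commitment threshold for G1 entry (assumed)

def rhs(t, y):
    """Two-state model: active receptor fraction r drives cyclin-like species c."""
    r, c = y
    dr = K_ACT * S * (1.0 - r) - K_DEACT * r   # signal-driven receptor activation
    dc = K_SYN * r - K_DEG * c                 # receptor-driven cyclin buildup
    return [dr, dc]

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 10.0, 201)
c = sol.sol(t)[1]

if (c >= THRESHOLD).any():
    t_commit = t[np.argmax(c >= THRESHOLD)]
    print(f"cyclin-like species crosses threshold at t ~ {t_commit:.2f}")
else:
    print("no commitment to G1 entry within the simulated window")
```

Even a toy model of this kind makes a concrete, testable prediction (the timing of commitment as a function of signal strength), which is the sense in which simulation and experiment can iterate toward functional models usable in risk assessment.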
The goals of toxicogenomics are to achieve a better understanding of mechanisms of toxicity and to identify gene expression patterns that signal adverse outcomes earlier than more time-consuming traditional measures; to improve the predictive accuracy of extrapolation from animal models to humans and from in vitro to in vivo settings based on a better understanding of molecular mechanisms; and to identify gene expression patterns that accurately reflect and predict specific, quantifiable toxicological end points (11). Has toxicogenomics achieved these goals? To answer this question, it is important to appreciate the present capabilities and limitations of the technology.
Toxicogenomics has already proven successful in experiments on cancer genetics aimed at identifying disease phenotypes (12,13). Oligonucleotide-based and cDNA microarrays are currently being used to determine changes in gene expression that may result from differences in physiology, developmental stage, pathology or environmental exposure (14,15). In addition, several researchers are using this technology to categorize and classify toxic responses through direct comparison of gene expression signatures in exposed and control samples (a toy sketch of such signature-based classification follows below). By studying gene expression, it is believed that knowledge of fundamental mechanisms of chemical toxicity can be acquired. This knowledge may also allow identification of threshold concentrations below which health risk is minimal (11).
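A minimal version of such signature-based classification (all data simulated; the "signature" genes, sample sizes and the nearest-centroid rule are illustrative assumptions, not a method from the cited studies) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expression matrices (samples x genes): a handful of control
# and exposed arrays, with a small subset of "responsive" genes shifted in
# the exposed group. Real studies would start from normalized intensities.
n_genes = 500
control = rng.normal(0.0, 1.0, size=(6, n_genes))
exposed = rng.normal(0.0, 1.0, size=(6, n_genes))
exposed[:, :25] += 2.0            # assumed exposure signature in 25 genes

# Nearest-centroid classification: compare a new sample against the mean
# expression profile (centroid) of each class by correlation.
centroids = {"control": control.mean(axis=0), "exposed": exposed.mean(axis=0)}
new_sample = rng.normal(0.0, 1.0, n_genes)
new_sample[:25] += 2.0            # simulate an exposed unknown

def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

call = max(centroids, key=lambda k: correlation(new_sample, centroids[k]))
print(f"classified as: {call}")
```

Nearest-centroid matching is used here only because it is the simplest scheme that captures the idea of comparing an unknown profile against reference signatures; published studies employ a range of clustering and classification methods.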
This technology also has other potential applications. Various authors have touted toxicogenomics as a method to define surrogates of safety for use in clinical trials and to identify toxicants on the basis of tissue-specific patterns of gene expression by establishing molecular signatures of chemical exposure. In addition, toxicogenomics may be used to elucidate mechanisms of action of environmental agents through the identification of gene expression networks, and toxicant-induced gene expression may serve as a biomarker of human exposure. It is hoped that these results can be extrapolated from animal species to man. Finally, this tool can be used to study chemical mixtures and to compare the effects of low dose and high dose exposures (16,17). At present, however, toxicogenomics is best applied in combination with more traditional approaches. Vast amounts of information already exist on in vivo responses to toxic chemicals in rodents. Toxicogenomic studies must be designed to make use of and annotate this information, as well as to generate hypotheses for further study.
Toxicogenomics also presents several uncertainties that must be addressed. One challenge is to separate gene expression changes associated with toxicity from changes that are minor or that occur as normal adaptive responses not associated with injury to the cell or organ (16,18). At present the technology cannot distinguish causative events from adaptive responses, so candidate transcriptional changes for a given toxic response require extensive follow-up using more traditional approaches that examine multiple end points at the molecular, cellular, tissue and physiological levels of the organism. Moreover, scientists can describe gene expression but are often unable to offer valid interpretations of its meaning; the challenge will be to explain the data in a manner that inspires confidence in those who will apply this knowledge to hazard/risk management (18). Scientists are currently combining the identification of gene regulatory elements with expression profiles in microarray experiments to begin to elucidate which transcription factors and upstream signaling molecules govern the observed changes in gene expression following chemical exposure (19,20). Another challenge is that many chemicals act through multiple mechanisms of action, which depend on the dose, timing and duration of exposure and on cell phenotype. No individual mechanism alone elicits a toxic response, but together these mechanisms can cause cell injury and death. This will limit the extrapolation of results from present toxicogenomic studies, because transcriptional responses may differ from one model system to another (between target cells, from cell culture to in vivo conditions, or from rodents to man). Also, cDNA and oligonucleotide arrays may miss important interactions: the adverse effect of a chemical may arise from its interaction with another chemical or from a metabolite formed in vivo, and testing a single chemical in isolation can miss such complex interactions. Toxicogenomics must therefore be combined with other validated assessment strategies (proteomics, metabolomics and more traditional tests) to assess individual risk from chemical exposure.
Toxicogenomics is in its infancy and is still evolving as a science. Now is therefore the time to decide key issues in its utility and development. Where will the information generated be stored? In what manner will it be stored? Will there be 'gold standards' of toxicogenomic data? Will there be a standardized methodology? How efficient is the microarray process? Will toxicogenomics eliminate the need for animal testing? How do we confirm that particular patterns of gene expression are likely to lead to cancer? What role will mechanism-based mathematical modeling play in this debate? These issues must be addressed for the successful utilization of this new field in risk analysis. Success in the toxicogenomics arena will ensure more scientifically sound risk assessments for the protection of public health.