Department of Biostatistics, Virginia Commonwealth University, Richmond, Virginia; Solveritas, L.L.C., Richmond, Virginia; U.S. EPA, NCEA, Cincinnati, Ohio; U.S. EPA, NHEERL, Research Triangle Park, North Carolina; and The Dow Chemical Company, Midland, Michigan
1 To whom correspondence should be addressed at Department of Biostatistics, Virginia Commonwealth University, 1101 E. Marshall Street, #B1-039-A, Richmond, VA 23298-0032. Fax: (804) 828-8900. E-mail: gennings@hsc.vcu.edu.
Received May 5, 2005; accepted July 12, 2005
ABSTRACT
Key Words: additivity; synergy; nonadditivity; interaction index; isobologram.
INTRODUCTION
The objective of this article is to describe a generalized approach for modeling additivity in toxicological mixture studies. The method is developed for a "data rich" scenario: reproducible dose-response information is assumed to be available on the mixture components, presumably on the same health endpoint in the same animal species, strain/stock, age, and gender under the same exposure conditions (route, duration, gavage vehicle, volume, etc.). In more "data poor" situations, as is often the case in environmental risk assessment, simplifying assumptions may be necessary in order to proceed. The most widely used methods for assessing risks from low-level exposures to chemical mixtures are based on additivity concepts (ATSDR, 2004; U.S. EPA, 2000). These include dose-addition risk assessment methods, which assume the same toxic mechanism of action across the mixture's components, and response-addition risk assessment methods, which assume independence of toxic action across the mixture components. These approaches are chosen based either on real data on the mode of action by which toxicity is produced for each mixture component (which may be scarce) or on expert judgment regarding mode of action. Dose-addition methods sum the exposure levels of similar components in a mixture (after scaling for relative potency among components) and estimate the effect (or risk) of the mixture directly from the summed dose. In response addition, the probabilistic risk of an effect for each individual chemical in the mixture is estimated, and these individual risks are then summed. Consequently, the two methodologies may yield quite different estimates of risk, which is problematic when the mode-of-action data used to choose between them are uncertain. These approaches are useful in data poor situations where toxicological dose-response information may not be available for all of the chemicals in a mixture. As data availability increases, however, environmental risk assessment may be improved by developing methods that rely less on assumptions about toxic mechanism. Such methods can be used to harmonize approaches to environmental risk assessment and increase consistency in data use and interpretation. The goal of this article is to describe statistical models of additivity and departures from additivity that provide a high level of statistical rigor for testing biological hypotheses. A unifying approach compatible with both forms of additivity is suggested, using a statistical modeling approach based on the fundamental definition of additivity developed by Berenbaum (1985).
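To make the two default calculations concrete, the sketch below contrasts them for a hypothetical two-chemical mixture; the logistic dose-response curve, relative potency, and doses are illustrative assumptions, not values from any study.

```python
import numpy as np
from scipy.special import expit  # logistic function, expit(w) = 1/(1 + e^-w)

# Hypothetical logistic dose-response curve for the "index" chemical:
# risk(d) = expit(beta0 + beta1 * d). All parameter values are illustrative.
beta0, beta1 = -4.0, 0.8

def risk(dose):
    """Probability of response at a given dose of the index chemical."""
    return expit(beta0 + beta1 * dose)

# Mixture: chemical 1 at dose 1.5, chemical 2 at dose 2.0; chemical 2 is
# assumed half as potent as chemical 1 (relative potency 0.5).
d1, d2, rel_potency = 1.5, 2.0, 0.5

# Dose addition: scale doses into index-chemical equivalents, sum them, and
# read the mixture risk off the index chemical's dose-response curve.
dose_add_risk = risk(d1 + rel_potency * d2)

# Response addition: estimate each chemical's risk separately, then combine
# assuming independence, P(A or B) = 1 - (1 - pA)(1 - pB); for small risks
# this is approximately the simple sum pA + pB.
p1 = risk(d1)
p2 = risk(rel_potency * d2)  # chemical 2 expressed on its own potency scale
resp_add_risk = 1.0 - (1.0 - p1) * (1.0 - p2)

print(f"dose addition:     {dose_add_risk:.4f}")
print(f"response addition: {resp_add_risk:.4f}")
```

Even in this simple setting the two defaults disagree, which is the motivation for a unified treatment.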
A basic concept of an interaction is that the slope (i.e., steepness) of the dose-response curve of a chemical changes in the presence of one or more other components in a mixture. Conversely, if the slope of the dose-response curve of a chemical is not altered in the presence of another chemical, the chemicals are said to exhibit no interaction (see Fig. 1), or to combine additively (i.e., zero interaction) (Teuschler et al., 2002). This concept of interaction as a change in slope is well grounded in the pharmacology literature. The earliest designations of receptors (receptive substance) and drug-receptor interactions occurred in the late nineteenth century, when Langley suggested that these interactions were based on the law of mass action (Goodman and Gilman, 2001). This hypothesis was extended theoretically and experimentally by A. J. Clark in the 1920s, by Ariëns and Beld (1977), and by many others (Goodman and Gilman, 2001). The equation used to describe the interaction between a drug and its receptor was that of a simple rectangular hyperbola (e.g., Michaelis-Menten). It is from these initial studies that terms for drugs with different activities (e.g., potency, partial agonist, antagonist) were coined and various types of drug interactions (e.g., "additivity" as no interaction) were described (Goodman and Gilman, 2001). A good compilation of how these concepts have been developed and used in different applications and case studies is contained in the book edited by Yang (1994).
Berenbaum (1985) gave a general definition of additivity that does not depend on mechanistic assumptions. Let xi denote the dose of the ith chemical present in a mixture of c chemicals, and let Xi denote the dose of the ith chemical that, acting alone, produces the same response as the mixture. The chemicals are said to combine additively when

$$\sum_{i=1}^{c} \frac{x_i}{X_i} = 1. \qquad (1)$$

The left-hand side of Equation 1 is known as the interaction index; values less than one are associated with synergism and values greater than one with antagonism.
In what follows, we show an algebraic equivalence of Berenbaum's definition of additivity given in Equation 1 to statistical additivity models. We also demonstrate that these statistical additivity models satisfy the fundamental concept of zero interaction for additivity. We begin with a general discussion of the justification for using empirical statistical models, which approximate an underlying relationship; these models are not based on mechanistic assumptions or knowledge. We then describe an approach that relates the mean of the response variable to a set of covariates and that is applicable to many different types of responses, dose-response shapes, and distributional assumptions. That is, the methods may be applied to continuous response values, proportional responses, count data, etc. We demonstrate that these are additivity models when they are parameterized to include an intercept and linear terms only; otherwise, higher-order cross-product terms are associated with interactions according to Berenbaum's definition. In each of the following sections, we demonstrate the equivalence of an additivity model both with Berenbaum's definition and with the notion of zero interaction as no change in slope.
Justification of Statistical Models
To define the notation, let g(µ) be a known function of the mean response of interest in an analysis of a mixture of c components. Examples of commonly used transformations, g(µ), include the probit transformation for proportional data, µ (or a power of µ) for continuous data, and log(µ) for count data. We justify the common parameterization of statistical models through a Taylor series argument based on two assumptions: (1) g(µ) is a function of the exposure concentrations of the c mixture components, i.e., the response changes with exposure; it is usually the case that the algebraic form of this underlying relationship is unknown. (2) Although the underlying relationship is unknown, we assume that it is smooth (differentiable) and continuous. When these two assumptions are met, the unknown relationship can be expanded in a Taylor series (see Appendix), which motivates the general parameterization of a response surface using linear and cross-product terms. As such, a first-degree model with only linear terms is an additivity model; models with cross-product terms allow for interactions among the components in the mixture.
For this model development to be useful, it is necessary to demonstrate the adequacy of the approximation. Since the observed data contain information about the underlying dose-response relationship, comparison of these data to the predictions of the model is important in assessing the model. Such comparisons can be accomplished with varying levels of statistical rigor. Often simple plots of observed and predicted results are sufficient; in other cases, it may be necessary to test the null hypothesis of model adequacy. While testing model adequacy is an activity that can only occur after the data collection and analysis phases of a study, it should be noted that experimental designs have been developed to maximize the power of the test of model adequacy (e.g., Atkinson and Donev, 1992). For models that provide an adequate representation of the data, the appropriateness of Box's (1979) observation, "all models are wrong, but some are useful," is readily understood.
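As one concrete instance of such rigor, a Pearson-type lack-of-fit statistic can compare observed group outcomes with additivity-model predictions; the counts and fitted values below are placeholders, not data from any study.

```python
import numpy as np
from scipy.stats import chi2

# Observed responders y out of n per dose group, with fitted probabilities
# mu_hat taken from an additivity model (all values are placeholders).
y = np.array([1, 4, 9, 14, 18])
n = np.full(5, 20)
mu_hat = np.array([0.06, 0.21, 0.45, 0.71, 0.88])

# Pearson lack-of-fit statistic; degrees of freedom are the number of dose
# groups minus the number of fitted parameters (here 3: beta0, beta1, beta2).
X2 = np.sum((y - n * mu_hat) ** 2 / (n * mu_hat * (1 - mu_hat)))
df = len(y) - 3
print(f"Pearson X2 = {X2:.2f}, df = {df}, p = {chi2.sf(X2, df):.3f}")
```

A large p value here is consistent with model adequacy; a small one signals that the polynomial approximation should be enriched or reconsidered.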
Generalized Linear Models as Additivity Models
The class of statistical models that we consider in this section is known as generalized linear models (e.g., McCullagh and Nelder, 1989). They are general in the sense that they can be used for different types of data, including continuous, proportional, and count data. The mean of the response of interest is modeled as a function of covariates of interest (e.g., doses/concentrations) through what is termed the link function, a user-specified smooth and monotone function. Commonly used link functions include the logit link (i.e., log(µ/(1 − µ))), the probit, and others for modeling binary data (e.g., loss of righting reflex); a log link is often used in modeling count data (e.g., motor activity); and a power link (i.e., µ^λ) is often used with continuous data (e.g., serum enzyme levels). The choice of the link function depends on the type of response variable (i.e., binary, count, continuous) and the analyst's preference for ease of interpretation.
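As an illustration of fitting such a model, the sketch below estimates the two-chemical additivity model introduced below (Equation 2) with a logit link by iteratively reweighted least squares; the doses, group sizes, and "true" parameter values are simulated placeholders.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

# Simulated single-chemical design: each chemical tested alone, which is
# all that is needed to estimate an additivity model.
x1 = np.array([0, 1, 2, 4, 8, 0, 0, 0, 0, 0], dtype=float)
x2 = np.array([0, 0, 0, 0, 0, 1, 2, 4, 8, 16], dtype=float)
n = np.full(10, 20)                        # animals per dose group
true_beta = np.array([-2.0, 0.6, 0.3])     # illustrative "true" parameters
X = np.column_stack([np.ones_like(x1), x1, x2])
y = rng.binomial(n, expit(X @ true_beta))  # responders per group

# Iteratively reweighted least squares for the binomial GLM with logit link:
# g(mu) = log(mu/(1 - mu)) = beta0 + beta1*x1 + beta2*x2.
beta = np.zeros(3)
for _ in range(25):
    mu = expit(X @ beta)                   # fitted response probabilities
    W = n * mu * (1 - mu)                  # binomial IRLS weights
    z = X @ beta + (y - n * mu) / W        # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print("estimated (beta0, beta1, beta2):", np.round(beta, 3))
```

Any GLM software (e.g., a statsmodels- or R-style `glm` routine) would give the same estimates; the point is that the additivity model is an ordinary GLM with linear dose terms.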
Algebraic equivalence of the additivity model with Berenbaum's definition of additivity.
A statistical additivity model (e.g., Neter et al., 1996) for a combination of two chemicals is
$$g(\mu) = \beta_0 + \beta_1 x_1 + \beta_2 x_2. \qquad (2)$$
For convenience, the additivity model in Equation 2 is expressed for the case of two chemicals. The results presented in the following sections for the case of two chemicals are readily generalized to additivity models for c chemicals, where

$$g(\mu) = \beta_0 + \sum_{i=1}^{c} \beta_i x_i.$$
In the statistical literature this model is known as a generalized linear model, due to the link between a transformation of the mean, g(µ), and a set of covariates (e.g., McCullagh and Nelder, 1989). It is also an additivity model in the statistical sense (Neter et al., 1996). (Interestingly, the model in Equation 2 is equivalent to the slope ratio model described by Finney [1971] when g(µ) = µ.) It can be shown algebraically that the additivity model given in Equation 2 also satisfies the definition of additivity as given by Berenbaum in Equation 1. Consider the model in Equation 2 at a specified response µ0, such that the transformed mean is g(µ0) = β0 + β1x1 + β2x2. For example, to work with an ED50 contour using a logit link, µ0 = 0.5 and g(µ0) = log(µ0/(1 − µ0)) = log(0.5/(1 − 0.5)) = 0. From Equation 2, the dose associated with µ0 for each of the single chemicals alone is

$$X_1 = \frac{g(\mu_0) - \beta_0}{\beta_1}, \qquad X_2 = \frac{g(\mu_0) - \beta_0}{\beta_2}.$$

Then, since any mixture point (x1, x2) on the µ0 contour satisfies β1x1 + β2x2 = g(µ0) − β0,

$$\frac{x_1}{X_1} + \frac{x_2}{X_2} = \frac{\beta_1 x_1 + \beta_2 x_2}{g(\mu_0) - \beta_0} = \frac{g(\mu_0) - \beta_0}{g(\mu_0) - \beta_0} = 1;$$

that is, Equation 2 satisfies Berenbaum's definition of additivity in Equation 1.
Following Carter et al. (1988), if the model is parameterized to include a cross-product term, as motivated by a Taylor series argument, i.e.,

$$g(\mu) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2,$$

then, because a mixture point on the µ0 contour now satisfies β1x1 + β2x2 = g(µ0) − β0 − β12x1x2 while the single-chemical doses X1 and X2 are unchanged, Equation 1 becomes

$$\frac{x_1}{X_1} + \frac{x_2}{X_2} = \frac{\beta_1 x_1 + \beta_2 x_2}{g(\mu_0) - \beta_0} = 1 - \frac{\beta_{12} x_1 x_2}{g(\mu_0) - \beta_0},$$

which differs from one whenever β12 ≠ 0. The cross-product term is therefore associated with departure from additivity.
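A quick numeric check of this algebra, with arbitrary placeholder parameter values:

```python
beta0, beta1, beta2 = -2.0, 0.6, 0.3   # illustrative additivity-model parameters
g_mu0 = 0.0                            # logit of mu0 = 0.5 (the ED50 contour)

# Single-chemical doses producing the response mu0 (from Equation 2):
X1 = (g_mu0 - beta0) / beta1
X2 = (g_mu0 - beta0) / beta2

# Any mixture point on the ED50 contour of the additivity model satisfies
# beta1*x1 + beta2*x2 = g_mu0 - beta0; pick x1 and solve for x2.
x1 = 1.0
x2 = (g_mu0 - beta0 - beta1 * x1) / beta2
print("interaction index (additivity):", x1 / X1 + x2 / X2)   # exactly 1

# With a cross-product term beta12, the same-response contour bends and the
# index becomes 1 - beta12*x1*x2 / (g_mu0 - beta0).
beta12 = 0.05
x2_int = (g_mu0 - beta0 - beta1 * x1) / (beta2 + beta12 * x1)
print("interaction index (beta12 = 0.05):", x1 / X1 + x2_int / X2)
print("algebraic check:", 1 - beta12 * x1 * x2_int / (g_mu0 - beta0))
```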
Equivalence of the additivity model with the fundamental notion of zero interaction.
The model in Equation 2 is an additivity model in that it satisfies Berenbaum's definition of additivity (i.e., planar contours of constant response). It also satisfies the fundamental concept of an interaction as a change in the slope of one chemical in the presence of another chemical. That is, on the transformed scale, the slope of g(µ) with respect to x1 is β1 and the slope of g(µ) with respect to x2 is β2; neither depends on the dose of the other chemical. For comparison, consider the model with a cross-product term given by

$$g(\mu) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2.$$

Here the slopes on the transformed scale are

$$\frac{\partial g(\mu)}{\partial x_1} = \beta_1 + \beta_{12} x_2, \qquad \frac{\partial g(\mu)}{\partial x_2} = \beta_2 + \beta_{12} x_1,$$

so that, when β12 ≠ 0, the slope of each chemical changes with the dose of the other.
For comparison, the rate of change of the mean as a function of the dose of either chemical under the additivity model given in Equation 2 is characterized by only the corresponding linear parameter. Here, let w = β0 + β1x1 + β2x2, so that µ = g⁻¹(w) and

$$\frac{\partial \mu}{\partial x_1} = \beta_1 \frac{dg^{-1}(w)}{dw}, \qquad (3)$$

and similarly

$$\frac{\partial \mu}{\partial x_2} = \beta_2 \frac{dg^{-1}(w)}{dw}.$$
Use of these derivatives to describe the rate of change in the mean as a function of either chemical elucidates the complexity of working with a nonlinear model. The actual slope depends on the level of response selected (here denoted by w). If straight line or linear approximations are used instead of a nonlinear shape, then comparisons of the slope of the linear functions may lead to an incorrect inference. If the approximations are based on different regions of the dose-response relationship, then presumed differences in slopes may be due to the approximation and not due to an interaction. Figure 2 depicts two nonlinear and parallel dose-response curves. If linear approximations were made in the two locations shown in the figure, then an incorrect conclusion of non-parallelism could be made.
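The dependence of the slope on the response level, and its independence from the companion chemical under additivity, can be seen numerically for the logit link, where dg⁻¹(w)/dw = µ(1 − µ); the parameter values below are again placeholders.

```python
from scipy.special import expit

beta0, beta1, beta2 = -2.0, 0.6, 0.3   # illustrative additivity-model parameters

def slope_wrt_x1(x1, x2):
    """d(mu)/d(x1) for the logit-link additivity model (Equations 2 and 3).

    For the logit link, dg^{-1}(w)/dw = mu * (1 - mu), so the slope on the
    response scale is beta1 * mu * (1 - mu).
    """
    mu = expit(beta0 + beta1 * x1 + beta2 * x2)
    return beta1 * mu * (1 - mu)

# The slope varies with the region of the curve at which it is evaluated ...
for x1 in (0.0, 3.3, 8.0):
    print(f"x1 = {x1:3.1f}, x2 = 0: slope = {slope_wrt_x1(x1, 0.0):.4f}")

# ... but at a fixed response level it is the same with or without the other
# chemical: both dose pairs below give w = 0 (mu = 0.5), hence equal slopes.
print(slope_wrt_x1(2.0 / 0.6, 0.0))   # chemical 1 alone at its ED50
print(slope_wrt_x1(1.4 / 0.6, 2.0))   # same response reached with x2 = 2
```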
Nonlinear Models as Additivity Models

Algebraic equivalence of the additivity model with Berenbaum's definition of additivity.

Additivity models need not be linear in all of their parameters. Consider a model of the form

$$\mu = h\left(\beta_0 + \beta_1 x_1 + \beta_2 x_2\right), \qquad (4)$$

where h is a smooth, strictly monotone function whose shape may itself involve unknown parameters (e.g., an unknown background and asymptote in a logistic-type curve), so that the model is nonlinear in the parameters although the doses enter only through the linear predictor. Because h is monotone, a specified response µ0 corresponds to a single value of the linear predictor, h⁻¹(µ0) = β0 + β1x1 + β2x2. The single-chemical doses associated with µ0 are therefore Xi = [h⁻¹(µ0) − β0]/βi, and the same algebra as for Equation 2 shows that x1/X1 + x2/X2 = 1. Contours of constant response are planar, and Berenbaum's definition of additivity is satisfied.
Equivalence of the additivity model with the fundamental notion of zero interaction.
Similar to the argument that led to Equation 3, the slope of the model in Equation 4 depends on the level of response and the corresponding linear parameter. For ease of notation, let w = β0 + β1x1 + β2x2, so that

$$\frac{\partial \mu}{\partial x_i} = \beta_i \frac{dh(w)}{dw}, \qquad i = 1, 2.$$

As in the generalized linear model, the rate of change with respect to each chemical is the product of that chemical's linear parameter and a factor that depends only on the response level; it does not change in the presence of the other chemical.
Threshold Models as Additivity Models
Algebraic equivalence of a threshold additivity model with Berenbaum's definition of additivity.
Threshold models are piecewise models (i.e., connected line segments) that allow for a dose range/region associated with a response that is the same as background, and an increase/decrease in response beyond the "threshold." Consider the following parameterization of an increasing threshold additivity model for a combination of c chemicals,
$$g(\mu) = \beta_0 + \left(\sum_{i=1}^{c} \beta_i x_i - \delta\right) I\left(\sum_{i=1}^{c} \beta_i x_i > \delta\right), \qquad (5)$$

where δ ≥ 0 is the threshold parameter and I(·) is the indicator function.
Using this model, the dose threshold associated with the ith chemical acting alone is δ/βi. Figure 5A depicts the dose threshold as the dosage where the background response mean changes to an increasing dose-response relationship. The algebra relating this model to Berenbaum's definition of additivity in Equation 1 is similar to that for the generalized linear model, as it applies to response values greater than background, i.e., in the increasing part of the curve. That is, for the increasing part of the curve, i.e., where

$$\sum_{i=1}^{c} \beta_i x_i > \delta,$$

the model is given by

$$g(\mu) = (\beta_0 - \delta) + \sum_{i=1}^{c} \beta_i x_i. \qquad (6)$$

At a specified response µ0 above background, the single-chemical doses are Xi = [g(µ0) − β0 + δ]/βi, and the same argument as for Equation 2 yields Σ xi/Xi = 1.
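A minimal sketch of the threshold additivity model in Equation 5, using the identity link and invented parameter values for two chemicals:

```python
import numpy as np

beta0 = 1.0                      # background mean response (illustrative)
betas = np.array([0.6, 0.3])     # slope parameters for chemicals 1 and 2
delta = 1.2                      # threshold parameter

def threshold_mean(x):
    """g(mu) for the threshold additivity model of Equation 5 (identity link).

    The response stays at background until the weighted dose sum exceeds the
    threshold delta, then increases linearly (Equation 6).
    """
    s = betas @ np.asarray(x, dtype=float)
    return beta0 + max(0.0, s - delta)

# Dose threshold for each chemical acting alone: delta / beta_i.
print("single-chemical dose thresholds:", delta / betas)   # [2.0, 4.0]

# Below the threshold region the mixture sits at background ...
print(threshold_mean([1.0, 1.0]))    # s = 0.9 < delta -> 1.0 (background)
# ... and above it the additivity model of Equation 6 applies.
print(threshold_mean([2.0, 2.0]))    # s = 1.8 > delta -> 1.0 + 0.6 = 1.6
```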
Equivalence of the additivity model with the notion of zero interaction as lack of slope change.
Recognizing the generalization of Equation 6 in the form of Equation 2, it follows from Equation 3 that, in the increasing part of the curve,

$$\frac{\partial \mu}{\partial x_i} = \beta_i \frac{dg^{-1}(w)}{dw}, \qquad w = (\beta_0 - \delta) + \sum_{j=1}^{c} \beta_j x_j.$$

Thus, using the threshold additivity model, the shape of the dose-response curve of the ith chemical does not change in the presence of the other chemicals.
Use of Additivity Models
A general strategy for testing for interactions among chemicals in a mixture is to use an additivity model to define the "no interaction" case and to use mixture data to describe the so-called "unrestricted" or general case. Only single-chemical dose-response data are necessary to estimate an additivity model. Selecting mixture points in regions of environmental or biological relevance (defined by fixed-ratio mixtures of the chemicals) results in economical and practical designs for testing for interactions when the number of components in the mixture is large. Examples of the use of additivity models in testing the hypothesis of additivity follow.
Gennings et al. (1997) compared mean responses from an additivity model to those observed at mixture points of interest. In particular, these authors describe 100(1 − α)% prediction intervals at each mixture point of interest under the additivity model. If the observed sample mean from a mixture point falls outside the prediction interval, they conclude there is evidence of departure from additivity. As the number of mixture points increases, multiple comparison corrections (e.g., Bonferroni corrections) become important. Dawson et al. (2000) compared the dose locations at specified responses under an additivity model to those observed, using the interaction index. These authors estimate the interaction index at each mixture point of interest and develop a statistical test of whether the index equals one. They used Hochberg corrections for multiple testing.
More recently, several authors have used ray designs to compare predicted responses from an additivity model to a mixture model along one or more fixed-ratio mixture rays (Casey et al., 2004, 2005, in press; Gennings et al., 2002; Meadows et al., 2002). Figure 6 depicts a ray design for a combination of two chemicals with two mixture rays. Casey et al. (2004) developed methodology for testing the hypothesis of additivity in a mixture of c chemicals and for testing whether subsets of the chemicals interact with the remaining chemicals. Let ai be the proportion of the ith chemical in the fixed mixture ratio, i = 1, ..., c, where a1 + ... + ac = 1, and let t denote the total dose of the mixture, so that xi = ai·t. Gennings et al. (2002) and others have pointed out that, under additivity, the slope in terms of total dose along the fixed-ratio ray is given by

$$\theta_{\text{add}} = \sum_{i=1}^{c} a_i \beta_i,$$

since substituting xi = ai·t into the additivity model gives g(µ) = β0 + (Σ aiβi)t. These authors develop a test of additivity by testing whether the slope of the dose-response curve of the mixture, expressed in terms of total dose, is equivalent to θ_add. Although this inference is limited to the mixing ratio used in the experiment, it results in experimentally feasible studies of mixtures of many chemicals.
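A sketch of the additivity slope along a fixed-ratio ray, with made-up mixing proportions and single-chemical slope estimates:

```python
import numpy as np

# Illustrative single-chemical slope estimates (beta_i) on the g(mu) scale,
# and a fixed mixing ratio a_i (proportions of total dose, summing to one).
betas = np.array([0.60, 0.30, 0.15])
a = np.array([0.2, 0.5, 0.3])
assert np.isclose(a.sum(), 1.0)

# Under additivity, substituting x_i = a_i * t into g(mu) = beta0 + sum(beta_i x_i)
# gives a curve in total dose t with slope theta_add = sum(a_i * beta_i):
theta_add = a @ betas
print("additivity slope along the ray:", theta_add)   # 0.2*0.6 + 0.5*0.3 + 0.3*0.15

# A test of additivity then compares theta_add with the slope estimated from
# mixture data along the same ray, e.g., via a Wald-type statistic
# (theta_mix - theta_add) / SE, with the SE combining both sources of error.
```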
Experimental Designs
One of the primary advantages of fixed-ratio ray designs is the savings in experimental resources required to test hypotheses of additivity. In general, estimation of the additivity model requires only suitable single-chemical dose-response data. Our experience suggests that four to six dose groups spanning the active region of each single-chemical dose-response curve are sufficient to predict the additivity surface. Similarly, if a single fixed-ratio mixture is of interest, a target of about six total dose groups along the ray, spanning the active part of the dose-response curve, is suggested. Thus, with c single chemicals and one mixture ray, such a design includes about 6(c + 1) dose groups. By contrast, a factorial design with c chemicals and d dose groups per chemical has d^c dose groups. A further advantage of the use of statistical models is their connection to statistical experimental designs. A vast experimental design literature (e.g., Abdelbasit and Plackett, 1983; Atkinson and Donev, 1992; Kalish, 1990; Minkin, 1987) has developed that can be exploited to provide estimates with desirable properties, such as minimized variance. Meadows et al. (2002) and Casey et al. (2005) developed optimal experimental design strategies for tests of interaction using fixed-ratio ray designs. That is, these designs specify dose locations and sample size allocations that are associated with desirable statistical properties of the model parameters. The approach taken by Meadows et al. and Casey et al. was to determine the experimental designs that minimize the variance of the test statistic associated with the test of additivity. By reducing its variance, the resulting test statistic has increased power for rejecting the hypothesis of additivity.
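For concreteness, the dose-group counts implied by the rule of thumb above (six groups per curve) compare as follows:

```python
# Dose groups needed: fixed-ratio ray design (about 6 groups per single-chemical
# curve plus 6 along one mixture ray) versus a full factorial with d = 6 levels.
for c in (2, 3, 5, 8):
    ray_groups = 6 * (c + 1)
    factorial_groups = 6 ** c
    print(f"c = {c}: ray design {ray_groups:4d} groups, factorial {factorial_groups:7d}")
```

The factorial requirement grows exponentially in c, while the ray design grows only linearly, which is what makes mixtures of many chemicals experimentally feasible.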
Summary and Discussion
We have described the justification for using empirical statistical models with a Taylor series argument. The argument begins with an assumption of a functional relationship of unknown form that is assumed to be continuous and differentiable. Using a Taylor series, the functional form is approximated by a polynomial function with a finite number of terms. The models that we fit to the observed data are meant only to approximate the underlying unknown dose-response relationship, and the approximation should be verified with a goodness-of-fit test before continuing with further inference. When the model of the mean response, or a transformation of the mean, is parameterized with linear terms, we have demonstrated that these terms may be interpreted as the rate of change in the mean as a function of dose/concentration. In the additivity models we have described, the rate of change in the response with respect to the ith chemical does not change in the presence of other chemicals. When cross-product terms are added to the model to construct an interaction model, the rate of change in the response with respect to the ith chemical does change in the presence of other chemicals. Thus, these additivity models satisfy the condition that the slope of a chemical's dose-response curve does not change in the presence of other chemicals.
It is indeed the case that statisticians and life scientists (e.g., toxicologists), working independently, have each developed a body of knowledge pertinent to the study of interactions. Over time an extensive vocabulary has evolved to characterize interactions, i.e., departures from additivity, which is perhaps unnecessarily confusing and complex. Dose addition is the fundamental premise behind such risk assessment approaches as the Hazard Index and Toxicity Equivalence Factors (U.S. EPA, 2000). Such approaches assume that any amount of the agent, no matter how small or large, can contribute to the overall toxicity of the mixture, because total dose is the unit of concern. Response addition assumes that if the toxicity of the mixture is truly independent among the chemicals, then it is appropriate to add small risks following the law of statistical independence. With response addition, sub-threshold doses of individual chemicals do not contribute to the overall toxicity of the mixture (essentially adding zero responses). Mumtaz et al. (1994) state that the "default" and "most conservative" form of response addition "is equivalent to dose addition, provided the dose-response curves ... are within the linear range." Similar to the default form of response addition is the concept of effect addition (e.g., Berenbaum, 1989; Kortenkamp and Altenburger, 1999), where the expected effect of a mixture is the arithmetic sum of the measured toxic effects of the single agents in the mixture. The method of effect addition is applicable for linear (Berenbaum, 1989) or linearizable (when the summation is conducted on the transformed scale, e.g., the logit or probit scale) dose-response curves. In short, dose addition, response addition, and effect addition are three ways of defining additivity: by adding doses, risks, or effects, respectively.
The methods shown in this article require single-chemical dose-response information on the mixture components, presumably on the same health endpoint in the same animal species, strain/stock, age, and gender under the same exposure conditions (route, duration, gavage vehicle, volume, etc.). The additivity models accommodate these similarities across studies with a common background (intercept) parameter (which may be verified by comparing the means of the vehicle control groups in an analysis of variance model). Further, the single-chemical dose-response curves should either be reproducible over time or be collected at the same time as the mixture data. When such data rich situations occur, the methods described here provide an improvement over simple additivity approaches. That is, the additivity models described in the previous sections, estimated using single-chemical dose-response data, do not require the simplifying assumptions of other methods (e.g., common mode of action) and thereby may provide improved estimation of risk. They are also useful in providing evidence of the joint toxic action of a group of chemicals; such information may serve to support or refute the use of dose addition or response addition methods. Given the plethora of potential chemical combinations and exposure scenarios, default risk assessment methods may always be necessary, particularly in data poor situations. However, the approach shown in this article contributes to the library of available techniques applicable to the data rich cases.
We have demonstrated that the additivity models we propose not only satisfy the concept of no interaction as no change in slope, but they also satisfy Berenbaum's definition of additivity. Since these models can all be linearized (or at least conditionally linearized in the case of a nonlinear model), they also satisfy the definition of effect additivity where the addition is conducted on the transformed scale. The added effect due to the ith chemical is determined by the term ßixi on the transformed mean response scale. When sufficient dose-response data are available for this kind of modeling exercise, it may not be necessary to restrict the interpretation of additivity into categories such as dose addition and response addition. The additivity models described here satisfy the concept of no interaction as the condition where chemicals combine in a way that does not affect their individual dose-response relationships. That is, the rate of change in response with respect to the dose/concentration of each chemical does not change in the presence of the other chemicals. It is a theoretical possibility that a shift in a threshold level of a component in the mixture is non-additive. The additivity models considered in the previous sections allow for a shift in the "threshold" due to the presence of the other chemicals. If the shift is different from that supported by the additivity model, then an interaction may be said to exist. Such a phenomenon is testable using the additivity model and appropriate statistical tests.
In Bailar and Bailer (1999), the suggested analytical schemes for hypothetical data from a 3 × 3 design included a logistic regression in which the log odds were used in the analysis, suggesting additivity; a linear model of the observed responses, which resulted in a claim of synergism; and a dose addition model, which resulted in a claim of antagonism. Although limited by the design, these data do not seem appropriately modeled with a linear assumption: the data demonstrate curvilinearity. It is often reasonable to assume that dose-response data are sigmoid in shape. This is particularly the case with proportional data, where the range of the response is constrained to the interval [0,1]. The general form of the models described in the previous sections is sufficiently flexible to adequately represent the observed data. For example, assuming the data are actually proportional, if the analyst had initially transformed the data to achieve a linearized form (say, using a logit or probit link function) and properly accounted for the variability in the data, then the three analyses would have resulted in the same conclusion: additivity, as suggested by the logistic regression analysis. This emphasizes once again that the choice of the model should be based on properties of the data.
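A brief illustration of the linearizing transformation mentioned above, using fabricated proportions from a hypothetical 3 × 3 design:

```python
import numpy as np
from scipy.special import logit

# Fabricated proportions from a hypothetical 3 x 3 design: rows index doses
# of chemical 1 (0, 1, 2), columns doses of chemical 2. Illustration only.
p = np.array([[0.05, 0.12, 0.26],
              [0.12, 0.26, 0.50],
              [0.26, 0.50, 0.74]])

# On the proportion scale the surface is curvilinear, so a straight-line model
# misleads; on the logit scale the same surface is nearly planar (these
# illustrative proportions approximately satisfy logit(p) = -3 + x1 + x2).
print(np.round(logit(p), 2))
```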
In conclusion, we have attempted to bridge the gap between a concept of no interaction and statistical additivity models. When the dose-responsiveness of a chemical in a mixture does not change in the presence of other chemicals, then it is claimed to act additively with the other chemicals. We have shown that this concept of additivity is inherent within the statistical additivity models described herein. These models can be estimated with the support of appropriate experimental single chemical dose-response data. When the zero interaction case is well described with sound statistical rigor, then the likelihood of adequately detecting and characterizing departure from additivity with experimental mixture data is improved.
APPENDIX
Let g(µ) = f(x1, ..., xc), where f is an unknown but smooth (differentiable) and continuous function of the doses of the c mixture components. Expanding f in a Taylor series about the zero-dose point (see, e.g., Munem and Foulis, 1978) gives

$$f(x_1, \ldots, x_c) = f(\mathbf{0}) + \sum_{i=1}^{c} \left.\frac{\partial f}{\partial x_i}\right|_{\mathbf{0}} x_i + \frac{1}{2!} \sum_{i=1}^{c} \sum_{j=1}^{c} \left.\frac{\partial^2 f}{\partial x_i \, \partial x_j}\right|_{\mathbf{0}} x_i x_j + \cdots.$$

Collecting the constant derivative terms into regression parameters and truncating, the expansion may be written as

$$g(\mu) = \beta_0 + \sum_{i=1}^{c} \beta_i x_i + \sum_{i=1}^{c} \sum_{j \geq i} \beta_{ij} x_i x_j + \sum_{i=1}^{c} \sum_{j \geq i} \sum_{k \geq j} \beta_{ijk} x_i x_j x_k + \cdots.$$
Now we have approximated the unknown dose-response relationship with a polynomial with a finite number of terms. The terms of the polynomial reflect the slopes of the individual mixture components' dose-response curves and, through the cross-product terms, the effect that a component has on the slopes of the other components' dose-response curves when the components are combined in a mixture. For example, βij is the effect of the ith component on the slope of the jth component when the two chemicals are combined; and βijk can be interpreted as the effect of the ith component on the interaction between the jth and kth components.
REFERENCES
ATSDR (2004). Guidance Manual for the Assessment of Joint Toxic Action of Chemical Mixtures. Online. www.atsdr.cdc.gov/interactionprofiles/ipga.html.

Ariëns, E. J., and Beld, A. J. (1977). The receptor concept in evolution. Biochem. Pharmacol. 26, 913–918.

Atkinson, A. C., and Donev, A. N. (1992). Optimum Experimental Designs. Clarendon Press, Oxford.

Bailar, J. C., and Bailer, A. J. (1999). Risk assessment: The mother of all uncertainties. Disciplinary perspectives on uncertainty in risk assessment. Ann. N.Y. Acad. Sci. 895, 273–285.

Berenbaum, M. C. (1985). The expected effect of a combination of agents: The general solution. J. Theor. Biol. 114, 413–431.

Berenbaum, M. C. (1989). What is synergy? Pharmacol. Rev. 41, 93–141.

Box, G. E. P. (1979). Robustness in the strategy of scientific model building. In Robustness in Statistics (R. L. Launer and G. N. Wilkinson, Eds.), p. 202. Academic Press, New York.

Carter, W. H., Jr., Gennings, C., Staniswalis, J. G., Campbell, E. D., and White, K. L., Jr. (1988). A statistical approach to the construction and analysis of isobolograms. J. Amer. College Toxicol. 7, 963–973.

Casey, M., Gennings, C., Carter, W. H., Jr., Moser, V., and Simmons, J. E. (2004). Detecting interaction(s) and assessing the impact of component subsets in a chemical mixture using fixed-ratio ray designs. J. Agricul. Biol. Environ. Stat. 9, 339–361.

Casey, M., Gennings, C., Carter, W. H., Jr., Moser, V., and Simmons, J. E. (2005). Ds-optimal designs for studying combinations of chemicals using multiple fixed-ratio ray experiments. Environmetrics 16, 129–147.

Casey, M., Gennings, C., Carter, W. H., Jr., Moser, V., and Simmons, J. E. (in press). Power and sample size determination for testing the effect of subsets of compounds on mixtures along fixed-ratio rays. J. Ecolog. Environ. Stat.

Crofton, K. M., Craft, E. S., Hedge, J. M., Gennings, C., Simmons, J. E., Carchman, R. A., Carter, W. H., Jr., and DeVito, M. J. (in press). Thyroid hormone disrupting chemicals: Evidence for dose-dependent additivity and synergism. Environ. Health Perspect.

Dawson, K. S., Carter, W. H., Jr., and Gennings, C. (2000). A statistical test for detecting and characterizing departures from additivity in drug/chemical combinations. J. Agricul. Biol. Environ. Stat. 5, 342–359.

Finney, D. J. (1971). Statistical Method in Biological Assay, 2nd ed. Griffin, London.

Gennings, C., Schwartz, P., Carter, W. H., Jr., and Simmons, J. E. (1997). Detection of departures from additivity in mixtures of many chemicals with a threshold model. J. Agricul. Biol. Environ. Stat. 2, 198–211.

Gennings, C., Schwartz, P., Carter, W. H., Jr., and Simmons, J. E. (2000). Erratum: Detection of departures from additivity in mixtures of many chemicals with a threshold model. J. Agricul. Biol. Environ. Stat. 5, 257–259.

Gennings, C., Carter, W. H., Jr., Campain, J. A., Bae, D., and Yang, R. S. H. (2002). Statistical analysis of interactive cytotoxicity in human epidermal keratinocytes following exposure to a mixture of four metals. J. Agricul. Biol. Environ. Stat. 7, 58–73.

Gennings, C., Carter, W. H., Jr., Carchman, R., Charles, G., Gollapudi, B., and Carney, E. (2004). Analysis of fixed ratios of chemical mixtures developed from a comparison to an indirect additivity surface determined by single chemical dose-response models. Toxicol. Sci. 80, 134–150.

Goodman, L. S., and Gilman, A. G. (2001). Goodman and Gilman's The Pharmacological Basis of Therapeutics, 10th ed. (A. G. Gilman, L. S. Goodman, T. W. Rall, and F. Murad, Eds.). MacMillan Publishing, New York.

Kalish, L. A. (1990). Efficient design for estimation of median lethal dose and quantal dose-response curves. Biometrics 46, 737–748.

Kortenkamp, A., and Altenburger, R. (1999). Approaches to assessing combination effects of oestrogenic environmental pollutants. Sci. Total Environ. 233, 131–140.

Loewe, S. (1953). The problem of synergism and antagonism of combined drugs. Arzneimittelforschung 3, 285–290.

Loewe, S., and Muischnek, H. (1926). Über Kombinationswirkungen. I. Mitteilung: Hilfsmittel der Fragestellung. Naunyn-Schmiedebergs Arch. Pharmacol. 114, 313–326.

McCullagh, P., and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. Chapman and Hall, New York.

Meadows, S. L., Gennings, C., Carter, W. H., Jr., and Bae, D.-S. (2002). Experimental designs for mixtures of chemicals along fixed ratio rays. Environ. Health Perspect. 110(Suppl. 6), 979–983.

Minkin, S. (1987). Optimal designs for binary data. J. Amer. Stat. Assoc. 82, 1098–1103.

Mumtaz, M. M., DeRosa, C. T., and Durkin, P. R. (1994). Approaches and challenges in risk assessments of chemical mixtures. In Toxicology of Chemical Mixtures: Case Studies, Mechanisms, and Novel Approaches (R. S. H. Yang, Ed.), pp. 565–597. Academic Press, San Diego.

Munem, M. A., and Foulis, D. J. (1978). Calculus with Analytic Geometry. Worth Publishers, New York.

Neter, J., Kutner, M. H., Nachtsheim, C., and Wasserman, W. (1996). Applied Linear Statistical Models, 4th ed. Irwin, Chicago.

Safe, S. H. (1998). Hazard and risk assessment of chemical mixtures using the toxic equivalency factor approach. Environ. Health Perspect. 106(Suppl. 4), 1051–1058.

Teuschler, L., Klaunig, J., Carney, E., Chambers, J., Connolly, R., Gennings, C., Giesy, J., Hertzberg, R., Klaassen, C., Kodell, R., Paustenbach, D., and Yang, R. (2002). Support of science-based decisions concerning the evaluation of the toxicology of mixtures: A new beginning. Regul. Toxicol. Pharmacol. 36, 34–39.

U.S. EPA (1989). Interim Procedures for Estimating Risks Associated with Exposures to Mixtures of Chlorinated Dibenzo-p-dioxins and -dibenzofurans (CDDs and CDFs) and 1989 Update. Risk Assessment Forum. EPA/625/3-89/016.

U.S. EPA (2000). Supplementary Guidance for Conducting Health Risk Assessment of Chemical Mixtures. Risk Assessment Forum. EPA/630/R-00/002.

Yang, R. S. H., Ed. (1994). Toxicology of Chemical Mixtures: Case Studies, Mechanisms, and Novel Approaches. Academic Press, San Diego.