A Unifying Concept for Assessing Toxicological Interactions: Changes in Slope

C. Gennings*,†,1, W. H. Carter, Jr.*,†, R. A. Carchman†, L. K. Teuschler‡, J. E. Simmons§ and E. W. Carney

* Department of Biostatistics, Virginia Commonwealth University, Richmond, Virginia; † Solveritas, L.L.C., Richmond, Virginia; ‡ U.S. EPA, NCEA, Cincinnati, Ohio; § U.S. EPA, NHEERL, Research Triangle Park, North Carolina; and The Dow Chemical Company, Midland, Michigan

1 To whom correspondence should be addressed at Department of Biostatistics, Virginia Commonwealth University, 1101 E. Marshall Street, #B1-039-A, Richmond, VA 23298-0032. Fax: (804) 828-8900. E-mail: gennings@hsc.vcu.edu.

Received May 5, 2005; accepted July 12, 2005


ABSTRACT
Robust statistical methods are important to the evaluation of toxicological interactions (i.e., departures from additivity) among chemicals in a mixture. However, different concepts of joint toxic action as applied to the statistical analysis of chemical mixture toxicology data or as used in environmental risk assessment often appear to conflict with one another. A unifying approach for application of statistical methodology in chemical mixture toxicology research is based on consideration of change(s) in slope. If the slope of the dose-response curve of one chemical does not change in the presence of other chemicals, then there is no interaction between the first chemical and the others. Conversely, if the rate of change in the response with respect to dose of the first chemical changes in the presence of the other chemicals, then an interaction is said to exist. This concept of zero interaction is equivalent to the usual approach taken in additivity models in the statistical literature. In these additivity models, the rate of change in the response as a function of the ith chemical does not change in the presence of other chemicals in a mixture. It is important to note that Berenbaum's (1985, J. Theor. Biol. 114, 413–431) general and fundamental definition of additivity does not require the chemicals in the mixture to have a common toxic mode of action or to have similarly shaped dose-response curves. We show an algebraic equivalence between these statistical additivity models and the definition of additivity given by Berenbaum.

Key Words: additivity; synergy; nonadditivity; interaction index; isobologram.


INTRODUCTION
Bailar and Bailer (1999) describe hypothetical data for a mixture of two chemicals, A and B. Using common methods of analysis for testing for interaction available in the statistical, epidemiology, and toxicology literatures, the same data led to different conclusions: synergy, independence, and antagonism, respectively. They point out that these are conceptual differences across the three disciplines, and they conclude that these differences have "clear implications for risk assessment, an interdisciplinary exercise of serious societal impact." In a peer-reviewed publication written by a Society of Toxicology expert panel, Teuschler et al. (2002) point out that "development of more generalized approaches for describing additivity and departure from additivity of mixtures of chemicals with particular emphasis on low-dose regions would be useful."

The objective of this article is to describe a generalized approach for modeling additivity for toxicological mixture studies. This method is developed for a "data rich" scenario: reproducible dose-response information is assumed to be available on the mixture components, presumably on the same health endpoint in the same animal species, strain/stock, age, and gender under the same exposure conditions (route, duration, gavage vehicle, volume, etc.). In other, more "data poor" situations, such as is often the case in environmental risk assessment, simplifying assumptions may be necessary in order to proceed. The most widely used methods for assessing risks from low-level exposures to chemical mixtures are based on additivity concepts (ATSDR, 2004; U.S. EPA, 2000). These include dose-addition risk assessment methods, which assume the same toxic mechanism of action across the mixture's components, and response-addition risk assessment methods, which assume independence of toxic action across the mixture components. These approaches are chosen based either on real data on the mode of action by which toxicity is produced for each mixture component (which may be scarce), or on expert judgment regarding mode of action. Dose-addition methods sum the exposure levels of similar components in a mixture (after scaling for relative potency among components) and estimate the effect (or risk) of the mixture directly from the summed dose. In response addition, the probabilistic risk of an effect for each individual chemical in the mixture is estimated and then these individual risks are summed. Consequently, the two methodologies may yield quite different estimates of risk, which is problematic when the mode of action data used to choose between them are uncertain. These approaches are useful in data poor situations where toxicological dose-response information for all of the chemicals in a mixture may not be available.
As data availability increases, however, environmental risk assessment may be improved by developing methods that rely less on assumptions of toxic mechanism. Such methods can be used to harmonize approaches to environmental risk assessment and increase consistency in data uses and interpretation. The goal of this article is to describe statistical models of additivity and departures from additivity which provide a high level of statistical rigor for testing biological hypotheses. A unifying approach compatible with both forms of additivity is suggested, using a statistical modeling approach based on the fundamental definition of additivity developed by Berenbaum (1985).

A basic concept of an interaction is that the slope (i.e., steepness) of a dose-response curve of a chemical changes in the presence of one or more other components in a mixture. Conversely, if the slope of the dose-response curve of a chemical is not altered in the presence of another chemical, then the chemicals are said to exhibit no interaction (see Fig. 1), or they are said to combine additively (i.e., zero interaction) (Teuschler et al., 2002). This concept of interaction as a change in slope is well grounded in the pharmacology literature. The earliest designations of receptors (receptive substance) and drug-receptor interactions occurred in the late nineteenth century when Langley suggested that these interactions were based on the law of mass action (Goodman and Gilman, 2001). This hypothesis was extended theoretically and experimentally by A. J. Clark in the 1920s, Ariëns and Beld (1977), and many others (Goodman and Gilman, 2001). The equation that was used to describe this interaction between a drug and its receptor was that of a simple rectangular hyperbola (e.g., Michaelis-Menten). It is from these initial studies that terms for drugs with different activities (e.g., potency, partial agonist, antagonist) were coined and various types of drug interactions (e.g., "additivity" as no interaction) were described (Goodman and Gilman, 2001). A good compilation of how these concepts have been developed and used in different applications and case studies is contained in the book edited by Yang (1994).



FIG. 1. Hypothetical dose-response curves of chemical A alone and at one fixed dose of chemical B. (A) No change in shape between the dose-response curves for A alone and for A at one fixed dose of B implies no interaction. (B) Change in shape between the dose-response curve for A alone and for A at a fixed dose of B indicates interaction. Statistical tests are available that account for biological variability when testing for changes in shape.

 
A definition of additivity, which is often used to test for interactions among components in a mixture, is given by Berenbaum (e.g., 1985) and is based on the classical isobologram for the combination of two chemicals (e.g., Loewe, 1953; Loewe and Muischnek, 1926). In fact, Berenbaum (1985, 1989) refers to this definition as a "general solution" which is "mechanism-free," with the advantage of being based on empirical information. In a combination of c chemicals, let Ei represent the concentration/dose of the ith component alone that yields a fixed response, and let xi represent the concentration/dose of the ith component in a combination of the c agents that yields the same response. According to this definition of additivity, if the substances combine with zero interaction, then

x1/E1 + x2/E2 + ... + xc/Ec = 1.    (1)
If the left-hand side of Equation 1 is less than 1, then a greater than additive response (i.e., synergism) can be claimed at the combination of interest. If the left-hand side of Equation 1 is greater than 1, then a less than additive response (i.e., antagonism) can be claimed at the combination. As Equation 1 is the equation of a plane in c dimensions, this definition of additivity implies that under additivity contours of constant response are planar. It is important to note that Berenbaum's general definition as given in Equation 1 places no constraint on the single chemical slopes, and the mixture may include active and inactive compounds. Further, the chemicals in the mixture do not need to have similarly shaped dose-response curves, a requirement for applications of dose addition that use an index chemical to estimate risk. An example of the use of an index chemical to assess mixture risk is the Toxic Equivalency Factor (TEF) approach to dose addition for dioxins, which assumes common slopes across the chemicals under study (e.g., Safe, 1998; U.S. EPA, 1989); Berenbaum's definition of additivity in Equation 1 does not require such an assumption.
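Berenbaum's index in Equation 1 is simple to compute once the Ei and xi are known. The following is a minimal sketch; the doses are hypothetical and chosen only to illustrate the three classifications.

```python
def interaction_index(x, E):
    """Berenbaum's combination index: sum over components of x_i / E_i.

    x[i] : dose of chemical i in the mixture producing the fixed response.
    E[i] : dose of chemical i alone producing the same response.
    """
    return sum(xi / Ei for xi, Ei in zip(x, E))

# Hypothetical example: each chemical alone requires 10 dose units for the
# target response, but the mixture reaches it at (4, 4).
idx = interaction_index([4.0, 4.0], [10.0, 10.0])
# idx < 1 would be called greater than additive (synergism) at this
# combination; idx = 1 is additivity; idx > 1 is less than additive.
```

Note that the classification applies only at the combination examined; the index can differ at other points of the dose-response surface.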

In what follows we will show an algebraic equivalence of Berenbaum's definition of additivity given in Equation 1 to statistical additivity models. We will also demonstrate that these statistical additivity models satisfy the fundamental concept of zero interaction for additivity. We begin with a general discussion of the justification for using empirical statistical models, which approximate an underlying relationship; these models are not based on mechanistic assumptions/knowledge. Later, we describe an approach that relates the mean of the response variable to a set of covariates that is applicable to many different types of responses, dose-response shapes, and distributional assumptions. That is, the methods may be applied to continuous response values, proportional responses, count data, etc. We demonstrate that these are additivity models when they are parameterized to include an intercept and linear terms only. Otherwise, higher-order cross-product terms are associated with interactions according to Berenbaum's definition. In each of these sections, both the algebraic equivalence with Equation 1 and the invariance of the single-chemical slopes are demonstrated.

Thus, the additivity models satisfy the fundamental notion of additivity and Berenbaum's definition given in Equation 1. We conclude with a discussion of the advantages of additivity models in the analysis of chemical mixtures, where the number of chemicals in the mixture is large, with implications for optimal experimental designs, and finish with some summary statements. An advantage of this framework is that the important hypotheses of additivity can be tested with statistical rigor.

Justification of Statistical Models
To define the notation, let g(µ) be a known function of the mean response of interest in an analysis of a mixture of c components. Examples of commonly used transformations, g(µ), include the probit transformation for proportional data, µ (or a power of µ) for continuous data, and log(µ) for count data. We justify the common parameterization of statistical models based on two assumptions through a Taylor series argument: (1) g(µ) is a function of the exposure concentrations of the c mixture components, i.e., the response changes with exposure. It is usually the case that the algebraic form of this underlying relationship is unknown. (2) Although the underlying relationship is unknown, we assume that the relationship is smooth (differentiable) and continuous. When these two assumptions are met, it follows that the unknown relationship can be expanded in a Taylor series (see Appendix), which motivates general parameterization of a response surface using linear and cross-product terms. As such, a first-degree model with only linear terms is an additivity model. Models with cross-product terms allow for interactions among the components in the mixture.
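The Taylor series argument can be illustrated numerically. In the sketch below the "unknown" surface f is chosen arbitrarily for illustration (it would not be known in practice), and a first-order expansion about the origin, with slopes estimated by central differences, plays the role of the linear (additivity-style) parameterization.

```python
import math

# Hypothetical smooth dose-response surface; its algebraic form is assumed
# unknown in practice and is picked here only for demonstration.
def f(x1, x2):
    return 1.0 - math.exp(-(0.3 * x1 + 0.5 * x2))

def taylor1(x1, x2, h=1e-5):
    # First-order Taylor expansion about (0, 0): f(0,0) + b1*x1 + b2*x2,
    # with the slopes b1, b2 estimated by central differences.
    b1 = (f(h, 0.0) - f(-h, 0.0)) / (2 * h)
    b2 = (f(0.0, h) - f(0.0, -h)) / (2 * h)
    return f(0.0, 0.0) + b1 * x1 + b2 * x2

# Near the expansion point the linear terms approximate the surface well.
exact = f(0.1, 0.1)
approx = taylor1(0.1, 0.1)
```

Farther from the expansion point the linear approximation degrades, which is why cross-product and higher-order terms enter when a wider dose region must be described.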

For this model development to be useful, it is necessary to demonstrate the adequacy of the approximation. Since the observed data contain information about the underlying dose-response relationship, comparison of these data to the predictions of the model is important in assessing the model. Such comparisons can be accomplished with varying levels of statistical rigor. Often simple plots of observed and predicted results are sufficient. In other cases, it may be necessary to test the null hypothesis of model adequacy. While testing model adequacy is an activity that can only occur after the data collection and analysis phases of a study, it should be noted that experimental designs have been developed to maximize the power of the test of model adequacy (e.g., Atkinson and Donev, 1992). For models that provide an adequate representation of the data, the appropriateness of Box's (1979) observation, "all models are wrong, but some are useful," is readily understood.

Generalized Linear Models as Additivity Models
The class of statistical models that we consider in this section is known as generalized linear models (e.g., McCullagh and Nelder, 1989). They are general in the sense that they can be used for different types of data including continuous, proportional, and count data. The mean of the response of interest is modeled as a function of covariates of interest (e.g., doses/concentrations) through what is termed the link function, a user specified smooth and monotone function. Commonly used link functions include the logit link (i.e., log(µ/(1 – µ))), probit, or others for modeling binary data (e.g., loss of righting reflex); a log link is often used in modeling count data (e.g., motor activity); and a power link (i.e., µ^λ) is often used with continuous data (e.g., serum enzyme levels). The choice of the link function depends on the type of response variable (i.e., binary, count, continuous) and the analyst's choice for ease of interpretation.

Algebraic equivalence of the additivity model with Berenbaum's definition of additivity.
A statistical additivity model (e.g., Neter et al., 1996) for a combination of two chemicals is

g(µ) = ß0 + ß1x1 + ß2x2,    (2)
where

µ is the mean response and g(µ) is a specified transformation of the mean known as the link function,
x1 is the dose of chemical A,
x2 is the dose of chemical B,
ß0 is an unknown parameter associated with the intercept,
ß1 is an unknown parameter associated with the slope for chemical A, and
ß2 is an unknown parameter associated with the slope of chemical B.

For convenience, the additivity model in Equation 2 is expressed for the case of two chemicals. The results presented in the following sections for the case of two chemicals are readily generalized to additivity models for c chemicals, where g(µ) = ß0 + ß1x1 + ... + ßcxc.

In the statistical literature this model is known as a generalized linear model due to the link between a transformation of the mean, g(µ), and a set of covariates (e.g., McCullagh and Nelder, 1989). It is also an additivity model in the statistical sense (Neter et al., 1996). (Interestingly, the model in Equation 2 is equivalent to the slope ratio model described by Finney [1971] when g(µ) = µ.) It can be shown algebraically that the additivity model given in Equation 2 also satisfies the definition of additivity as given by Berenbaum in Equation 1. Consider the model in Equation 2 at a specified response µ0, such that the transformed mean g(µ0) = ß0 + ß1x1 + ß2x2. For example, to work with an ED50 contour, µ0 = 0.5 and, using a logit link, g(µ0) = log(µ0/(1 – µ0)) = log(0.5/(1 – 0.5)) = 0. From Equation 2, the dose associated with µ0 for each of the single chemicals alone is E1 = [g(µ0) – ß0]/ß1 and E2 = [g(µ0) – ß0]/ß2. Then,

x1/E1 + x2/E2 = (ß1x1 + ß2x2)/[g(µ0) – ß0] = [g(µ0) – ß0]/[g(µ0) – ß0] = 1,

which is the equation of a line, indicating that the contours of constant response are linear for two chemicals and planar in the general case of c chemicals. Thus, the additivity model in Equation 2 is algebraically equivalent to the definition of additivity given in Equation 1.
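This equivalence is easy to verify numerically. In the sketch below the logit-scale coefficients are hypothetical; any point on the additivity model's ED50 contour is shown to satisfy Berenbaum's Equation 1 exactly.

```python
import math

# Hypothetical additivity-model coefficients on the logit scale:
# g(mu) = log(mu/(1 - mu)) = b0 + b1*x1 + b2*x2
b0, b1, b2 = -2.0, 0.4, 0.8

def g(mu):
    return math.log(mu / (1.0 - mu))

mu0 = 0.5                    # ED50 contour, so g(mu0) = 0
E1 = (g(mu0) - b0) / b1      # dose of chemical 1 alone yielding mu0
E2 = (g(mu0) - b0) / b2      # dose of chemical 2 alone yielding mu0

# Pick any x1 and solve Equation 2 for the x2 on the same contour.
x1 = 2.0
x2 = (g(mu0) - b0 - b1 * x1) / b2

index = x1 / E1 + x2 / E2    # Berenbaum's index: equals 1 under additivity
```

The same check succeeds at any other response level µ0 strictly between 0 and 1, since the algebra above does not depend on the level chosen.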

Following Carter et al. (1988), if the model is parameterized to include a cross-product term as motivated by a Taylor series argument, i.e., g(µ) = ß0 + ß1x1 + ß2x2 + ß12x1x2, then Equation 1 becomes

x1/E1 + x2/E2 = (ß1x1 + ß2x2)/[g(µ0) – ß0] = 1 – ß12x1x2/[g(µ0) – ß0].

When ß12 = 0, i.e., no interaction, then x1/E1 + x2/E2 = 1 and additivity is the case. For g(µ0) > ß0 (i.e., for responses above background) and for increasing dose-response curves, if ß12 > 0 then a greater than additive response (synergism) can be claimed since x1/E1 + x2/E2 < 1; if ß12 < 0 then a less than additive response (antagonism) can be claimed since x1/E1 + x2/E2 > 1. This is important because it relates the idea of a departure from additivity to a parameter in a statistical model. Thus, the hypothesis of additivity can be expressed as H0: ß12 = 0 and statistical methodology for testing this hypothesis exists (e.g., Neter et al., 1996).
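The relationship between the cross-product parameter and the interaction index can also be checked numerically; the coefficients below are hypothetical, with ß12 > 0 chosen so the combination is synergistic.

```python
# Hypothetical logit-scale model with a cross-product (interaction) term:
# g(mu) = b0 + b1*x1 + b2*x2 + b12*x1*x2
b0, b1, b2, b12 = -2.0, 0.4, 0.8, 0.1
g_mu0 = 0.0                  # logit of mu0 = 0.5 (the ED50 contour)

E1 = (g_mu0 - b0) / b1       # single-chemical ED50 doses (no interaction
E2 = (g_mu0 - b0) / b2       # term enters when the other dose is zero)

# A point on the ED50 contour of the interaction model: fix x1, solve for x2.
x1 = 2.0
x2 = (g_mu0 - b0 - b1 * x1) / (b2 + b12 * x1)

index = x1 / E1 + x2 / E2
# The algebra following Carter et al. (1988) predicts:
predicted = 1.0 - b12 * x1 * x2 / (g_mu0 - b0)
# Since b12 > 0 here, index < 1: synergism at this combination.
```

Setting `b12 = 0.0` reproduces the additivity case, with the index returning to exactly 1.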

Equivalence of the additivity model with fundamental notion of zero interaction.
The model in Equation 2 is an additivity model in that it satisfies Berenbaum's definition of additivity (i.e., planar contours of constant response). It also satisfies the fundamental concept of an interaction as a change in the slope of one chemical in the presence of another chemical. That is, for the model in Equation 2, the slope of g(µ) with respect to x1 is ß1; the slope of g(µ) with respect to x2 is ß2. For comparison, consider the model with a cross-product term given by

g(µ) = ß0 + ß1x1 + ß2x2 + ß12x1x2,

which is rewritten as

g(µ) = ß0 + ß1x1 + (ß2 + ß12x1)x2,

which demonstrates that the slope of the dose-response curve of chemical 2 is changed in the presence of chemical 1. If ß12 = 0 then the level of x1 is not involved in the "slope" for x2 and vice versa. It is important to note that the results are in terms of g(µ). Typically the biological user of this methodology is interested in the relationship between a biological response (which has mean µ) and dose, as opposed to a transformed mean, g(µ), and dose. Use of the chain rule from calculus (e.g., Munem and Foulis, 1978) permits us to show that for a two-chemical mixture, chemical 2 affects the dose-response relationship of chemical 1 when combined, and vice versa, on the µ scale. This can be shown symbolically. For ease of notation, define w = ß0 + ß1x1 + ß2x2 + ß12x1x2 so that µ = g–1(w), where w is the linear predictor. In the following, we show that the "slope" or rate of change in the mean response as a function of the dose of each chemical (as specified by a partial derivative) depends on the dose of the other chemical:

∂µ/∂x1 = (dµ/dw)(ß1 + ß12x2)

and

∂µ/∂x2 = (dµ/dw)(ß2 + ß12x1),

where the notation µ = µ(x1, x2) signifies that the mean is a function of the dose of both chemicals. The value of the slope in terms of the mean µ is expressed as the product of two terms: one that has to do with the rate of change of the mean with respect to the linear predictor, w (i.e., dµ/dw), which depends only on the mean; and a term that comes from differentiating the linear predictor with respect to the variables. This second part is a function of the linear and interaction parameters and the dose of the other chemical, and it actually characterizes the slope of the dose-response curve.

For comparison, the rate of change of the mean as a function of the dose of either chemical of the additivity model given in Equation 2 is characterized by only the corresponding linear parameter. Here, let w = ß0 + ß1x1 + ß2x2 so that

∂µ/∂x1 = (dµ/dw)ß1    (3)

and

∂µ/∂x2 = (dµ/dw)ß2.
Thus, under additivity, the shape of the dose-response curve of either chemical does not change in the presence of the other chemical.
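These slope results can be checked numerically. The sketch below uses a logit link and hypothetical coefficients; differentiating on the g scale, the additivity model's slope for chemical 1 is constant in x2, while the cross-product model's slope is ß1 + ß12x2.

```python
import math

def logistic(w):                        # inverse of the logit link
    return 1.0 / (1.0 + math.exp(-w))

b0, b1, b2, b12 = -2.0, 0.4, 0.8, 0.1   # hypothetical coefficients

def mu_add(x1, x2):                     # additivity model, Equation 2
    return logistic(b0 + b1 * x1 + b2 * x2)

def mu_int(x1, x2):                     # model with a cross-product term
    return logistic(b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2)

def dg_dx1(mu_fn, x1, x2, h=1e-6):
    # slope of g(mu) = logit(mu) with respect to x1, by central differences
    g = lambda m: math.log(m / (1.0 - m))
    return (g(mu_fn(x1 + h, x2)) - g(mu_fn(x1 - h, x2))) / (2 * h)

# Additivity: slope of chemical 1 on the g scale is b1 at any dose of B.
s_add_0 = dg_dx1(mu_add, 1.0, 0.0)
s_add_5 = dg_dx1(mu_add, 1.0, 5.0)
# Interaction: the same slope is b1 + b12*x2, so it changes with x2.
s_int_0 = dg_dx1(mu_int, 1.0, 0.0)
s_int_5 = dg_dx1(mu_int, 1.0, 5.0)
```

On the µ scale the common factor dµ/dw rescales both slopes, but only the interaction model's second factor depends on the dose of the other chemical, matching the partial derivatives above.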

Use of these derivatives to describe the rate of change in the mean as a function of either chemical elucidates the complexity of working with a nonlinear model. The actual slope depends on the level of response selected (here denoted by w). If straight line or linear approximations are used instead of a nonlinear shape, then comparisons of the slope of the linear functions may lead to an incorrect inference. If the approximations are based on different regions of the dose-response relationship, then presumed differences in slopes may be due to the approximation and not due to an interaction. Figure 2 depicts two nonlinear and parallel dose-response curves. If linear approximations were made in the two locations shown in the figure, then an incorrect conclusion of non-parallelism could be made.



FIG. 2. The slope of a linear approximation to data in different effect regions of two parallel curves incorrectly indicates different slopes for Chemical A and Chemical B.

 
It was noted that Berenbaum's Equation 1 is the equation of a plane, i.e., contours of constant response (isobols) are planar when chemicals combine additively. For two chemicals, the contours are straight lines, which is the "line of additivity" in an isobologram. The contours of constant response associated with the general form of the additivity model in Equation 2 for c chemicals also are planar, i.e., g(µ0) = ß0 + ß1x1 + ... + ßcxc is a plane. Consider the following example. Suppose we observe proportional data from Chemical A and from Chemical B (Fig. 3A) which are fit with a logistic regression model using the additivity parameterization given in Equation 2 with g(µ) = log(µ/(1 – µ)). The resulting three-dimensional response surface (Fig. 3B) has a general sigmoid shape. However, the contours of constant response displayed in Figure 3C are linear. Figure 3 illustrates the connection between the additivity model given in Equation 2 and the definition of additivity given in Equation 1, which is an equation of a plane for any fixed response. So although the dose-response relationship is sigmoidal, the contours are linear.
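The linearity of the isobols can be verified directly from Equation 2: solving for x2 at a fixed response gives a straight line whose slope, –ß1/ß2, is the same at every response level. A minimal sketch with hypothetical logit-scale coefficients:

```python
import math

b0, b1, b2 = -2.0, 0.4, 0.8     # hypothetical logit-scale coefficients

def isobole_x2(mu0, x1):
    """x2 on the contour of constant response mu0, solved from Equation 2."""
    w0 = math.log(mu0 / (1.0 - mu0))   # g(mu0) under the logit link
    return (w0 - b0 - b1 * x1) / b2

# The contour at any fixed mu0 is a straight line in (x1, x2) with slope
# -b1/b2, regardless of the response level chosen:
slope_ed10 = isobole_x2(0.10, 1.0) - isobole_x2(0.10, 0.0)
slope_ed90 = isobole_x2(0.90, 1.0) - isobole_x2(0.90, 0.0)
```

Different response levels shift the line's intercept but not its slope, which is why the contours in Figure 3C appear as a family of parallel straight lines even though the surface itself is sigmoidal.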



FIG. 3. (A) Dose-response curves for chemicals A and B where the observed endpoint is binary and the model is of the probability of response. (B) The corresponding three-dimensional additivity response surface is sigmoid-shaped. (C) However, the contours of constant response (i.e., isobols) are linear.

 
Nonlinear Models as Additivity Models
In this section we consider a class of flexible nonlinear models that are sigmoid-shaped. A common way of parameterizing these models is to use known functions that are sigmoid in shape but whose responses are contained in the range between 0 and 1. Frequently, cumulative distribution functions, F(ß, x), from the statistical literature are used. To change the response range to what the data support, additional parameters are included. The general model is then given by µ = α + γF(ß, x), which ranges between α and α + γ and may be sigmoid-shaped. Figure 4 depicts the construction of this model.



FIG. 4. General class of nonlinear models where the function F(ß, x), which is restricted to values between 0 and 1, is adjusted to have response range [α, α + γ].

 
Algebraic equivalence of a nonlinear additivity model with Berenbaum's definition of additivity.
The connection between additivity models and Berenbaum's definition of additivity can be made for this class of nonlinear models with an additional condition. Instead of demonstrating this in general notation, we will use a specific nonlinear function which in practice is selected for its flexibility and asymmetric properties; however, the results hold for the general class of nonlinear models. The Gompertz nonlinear model is of the form

µ = α + γ[exp(–exp(ß0 + ß1x1 + ß2x2))],    (4)

where the bracketed part in Equation 4, exp(–exp(ß0 + ß1x1 + ß2x2)), is the Gompertz function and the α and γ parameters are the range parameters. Other examples of commonly used functions include the logistic function and the exponential cumulative distribution function. Additivity models built from these other functions maintain similar associations as those shown for the Gompertz model. The model in Equation 4 is linearized by solving for the argument in the exponents, i.e.,

log(–log((µ – α)/γ)) = ß0 + ß1x1 + ß2x2.

Similar to the generalized linear model, this nonlinear model can be put in the form of Equation 1, indicating it has planar contours of constant response conditional on the values of α and γ. At a fixed response µ0, the dose of each chemical alone that yields µ0 is Ei = [log(–log((µ0 – α)/γ)) – ß0]/ßi. Rearranging terms yields

x1/E1 + x2/E2 = (ß1x1 + ß2x2)/[log(–log((µ0 – α)/γ)) – ß0] = 1.

Thus, the nonlinear additivity model given in Equation 4 has planar contours of constant response as specified in Equation 1.
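The same numerical check used for the generalized linear model applies here, with the Gompertz linearizing transform in place of the link function. All parameter values below are hypothetical, and the sign convention of the Gompertz exponent is an assumption of this sketch; the index calculation is unaffected by it.

```python
import math

# Hypothetical Gompertz additivity model in the Equation 4 form:
# mu = alpha + gamma * exp(-exp(b0 + b1*x1 + b2*x2))
alpha, gamma = 10.0, 40.0
b0, b1, b2 = -1.0, 0.3, 0.6

def h(mu):
    """Linearizing transform: solve Equation 4 for the linear predictor."""
    return math.log(-math.log((mu - alpha) / gamma))

mu0 = 25.0                   # a response strictly inside (alpha, alpha+gamma)
E1 = (h(mu0) - b0) / b1      # dose of chemical 1 alone yielding mu0
E2 = (h(mu0) - b0) / b2      # dose of chemical 2 alone yielding mu0

x1 = 1.0                               # a point on the mu0 contour
x2 = (h(mu0) - b0 - b1 * x1) / b2

index = x1 / E1 + x2 / E2    # equals 1: the contours are again planar
```

The conditioning on α and γ noted in the text is visible here: the transform h, and hence the Ei, change if the range parameters change.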

Equivalence of the additivity model with fundamental notion of zero interaction.
Similar to the argument that led to Equation 3, the slope of the model in Equation 4 depends on the level of response and the corresponding linear parameter. For ease of notation, let w = ß0 + ß1x1 + ß2x2 so that µ = α + γexp(–exp(w)). Then

∂µ/∂xi = γ[d exp(–exp(w))/dw]ßi = –γ exp(w)exp(–exp(w))ßi.

As in Equation 3, the slope of the additivity model in Equation 4 depends on the rate of change of the function with respect to the linear predictor, i.e., dµ/dw, which is the location of the tangent line to the curve; and the dose-response curve is characterized by the parameter ßi. Thus, the slope of the additivity model associated with the ith chemical does not depend on the other chemicals in the mixture.

Threshold Models as Additivity Models
Algebraic equivalence of a threshold additivity model with Berenbaum's definition of additivity.
Threshold models are piecewise models (i.e., connected segmented lines) that allow for a dose range/region associated with a response the same as background, and an increase/decrease in response beyond the "threshold." Consider the following parameterization of an increasing threshold additivity model for a combination of c chemicals,

g(µ) = ß0 + (ß1x1 + ... + ßcxc – δ)I[ß1x1 + ... + ßcxc > δ],    (5)
where

the link function g(µ) is as defined in (2),
ß0 is an unknown parameter associated with background response,
ßi is an unknown parameter associated with the ith chemical,
δ is an unknown parameter associated with the threshold, and
I[·] is the indicator function, equal to 1 when its argument holds and 0 otherwise.

Using this model, the dose threshold associated with the ith chemical is δ/ßi. Figure 5A depicts the dose threshold as the dosage where the background response mean changes to an increasing dose-response relationship. The algebra relating this model to Berenbaum's definition of additivity in Equation 1 is similar to that for the generalized linear model, as it applies to response values greater than background, i.e., in the increasing part of the curve. That is, for the increasing part of the curve, i.e., where ß1x1 + ... + ßcxc > δ, the model is given by

g(µ) = ß0* + ß1x1 + ... + ßcxc,    (6)

where ß0* = ß0 – δ. In this region the threshold additivity model is parameterized similarly to the model in Equation 2.

Thus, the threshold additivity model also satisfies Berenbaum's definition of additivity.



 
FIG. 5. (A) A schematic of a threshold dose-response curve for a single chemical. (B) When three chemicals are combined in a threshold additivity model as given in Equation 5, the additivity threshold surface is a plane which intersects each dose axis at the dose threshold for that chemical. The response associated with any combination of the three chemicals below this plane (i.e., between the plane and the origin) is the same as the background response.

 
The "threshold additivity surface" is the plane that connects the dose thresholds for each of the chemicals. Figure 5B is a schematic of such a plane for a combination of three chemicals. Based on the threshold additivity model, all doses/concentrations of the mixture that are between this plane and the origin are associated with a background response mean.
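The threshold model and its additivity plane are easy to sketch in code. The parameter values below are hypothetical, and an identity link (g(µ) = µ) is assumed for simplicity.

```python
# Hypothetical three-chemical threshold additivity model (Equation 5 form),
# with an identity link so that g(mu) = mu.
b0, delta = 2.0, 1.2
betas = [0.4, 0.6, 1.0]

def mean_response(doses):
    s = sum(b * x for b, x in zip(betas, doses))
    # below the threshold plane, the mean equals background (b0)
    return b0 + (s - delta) if s > delta else b0

# Single-chemical dose thresholds delta / beta_i: the points where the
# additivity plane intersects each dose axis.
thresholds = [delta / b for b in betas]

below = mean_response([1.0, 0.5, 0.2])   # sum of b_i*x_i = 0.9 < delta
above = mean_response([2.0, 1.0, 0.5])   # sum of b_i*x_i = 1.9 > delta
```

Every dose combination with ß1x1 + ß2x2 + ß3x3 below δ lies between the plane and the origin and returns exactly the background mean, as the text describes.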

Equivalence of the additivity model with notion of zero interaction as lack of slope change.
Recognizing the generalization of Equation 6 in the form of Equation 2, it follows from Equation 3 that ∂µ/∂xi = (dµ/dw)ßi for dose combinations above the threshold. Thus, using the threshold additivity model, the shape of the dose-response curve of the ith chemical does not change in the presence of the other chemicals.

Use of Additivity Models
A general strategy for testing for interactions among chemicals in a mixture is to use an additivity model to define the "no interaction" case and to use mixture data to describe the so-called "unrestricted," or general, case. Only single chemical dose-response data are necessary to estimate an additivity model. Selection of mixture points in regions of environmental or biological relevance (defined by fixed-ratio mixtures of the chemicals in the mixture) results in economical and practical designs for use in testing for interactions when the number of components in the mixture is large. Examples of the use of additivity models in testing the hypothesis of additivity follow.

Gennings et al. (1997) compared mean responses from an additivity model to those observed at a mixture point of interest. In particular, these authors describe 100(1 – α)% prediction intervals at each mixture point of interest using the additivity model. If the observed sample mean from the mixture point falls outside of the prediction interval, then they conclude evidence of departure from additivity. As the number of mixture points increases, multiple comparison corrections (e.g., Bonferroni corrections) become important. Dawson et al. (2000) compared the dose locations at specified responses under an additivity model to those observed using the interaction index. These authors estimate the interaction index at each mixture point of interest and develop a statistical test of whether the index equals one. They used Hochberg corrections for multiple testing.

More recently, several authors used a ray design to compare predicted responses from an additivity model to a mixture model along one or more fixed-ratio mixture rays (Casey et al., 2004, 2005, in press; Gennings et al., 2002; Meadows et al., 2002). Figure 6 depicts a ray design for a combination of two chemicals with two mixture rays. Casey et al. (2004) developed methodology for testing the hypothesis of additivity in a mixture of c chemicals and for testing whether subsets of the chemicals interact with the remaining chemicals. Let ai be the proportion of the ith chemical in the fixed mixture ratio, i = 1,..., c, where a1 + ... + ac = 1. Gennings et al. (2002) and others have pointed out that the slope in terms of total dose along the fixed-ratio ray under additivity is given by θadd = a1ß1 + ... + acßc. These authors develop a test of additivity by testing whether the slope for the dose-response curve of the mixture in terms of total dose is equivalent to θadd. Although this inference is limited to the mixing ratio used in the experiment, it results in experimentally feasible studies of mixtures of many chemicals.
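The additivity slope along a fixed-ratio ray follows directly from substituting xi = ai·t into the additivity model, where t is total dose. A minimal sketch with hypothetical single-chemical slopes and mixing proportions:

```python
# Along a fixed-ratio ray, the dose of chemical i is x_i = a_i * t, so the
# additivity model g(mu) = b0 + sum_i b_i * a_i * t is linear in total dose t
# with slope theta_add = sum_i a_i * b_i.
betas = [0.4, 0.8, 1.2]      # hypothetical single-chemical slopes
a = [0.5, 0.25, 0.25]        # a hypothetical 2:1:1 mixing ratio (sums to 1)

theta_add = sum(ai * bi for ai, bi in zip(a, betas))

# A test of additivity along this ray compares the fitted mixture slope in
# total dose against theta_add.
```

Changing the mixing ratio changes θadd, which is why the resulting inference is specific to the ray studied.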



 
FIG. 6. A schematic of a ray design with single chemical axes and two mixture rays: (1:1) and (3:1).

 
The additivity models described throughout this paper are algebraically equivalent to the definition of additivity given in Equation 1. That is, the models can be algebraically manipulated to demonstrate that the contours of constant response are planar. Gennings et al. (2004) used a more general additivity model which was associated with the same definition of additivity. They fit each single chemical and mixture ray with a sufficiently flexible dose-response model that allowed for full and partial agonists in the mixture. To predict along the mixture ray(s) under additivity, they used the single chemical models and imposed the constraint of linear contours. By using such an approach, prediction from the additivity model was only conducted implicitly; however, the advantage is that the single chemical data have a customized model fit. This general approach of a more flexible additivity model has been used in the analysis of the effect of a mixture of 18 chemicals on thyroid function (Crofton et al., in press).

Experimental Designs
One of the primary advantages of fixed-ratio ray designs is the savings in experimental resources required to test hypotheses of additivity. In general, estimation of the additivity model requires only suitable single-chemical dose-response data. Our experience suggests that four to six dose groups spanning the active region of each single-chemical dose-response curve are sufficient to predict the additivity surface. Similarly, if a single fixed-ratio mixture is of interest, then a target of about six total-dose groups along the ray, spanning the active part of the dose-response curve, is suggested. Thus, with c single chemicals and one mixture ray, such a design includes about 6(c + 1) dose groups. By contrast, a factorial design with c chemicals and d dose groups per chemical has d^c dose groups. A further advantage of the use of statistical models is their connection to statistical experimental designs. A vast experimental design literature (e.g., Abdelbasit and Plackett, 1983; Atkinson and Donev, 1992; Kalish, 1990; Minkin, 1987) has developed that can be exploited to provide estimates with desirable properties, such as minimized variance. Meadows et al. (2002) and Casey et al. (2005) developed optimal experimental design strategies for tests of interaction using fixed-ratio ray designs. That is, these designs specify dose locations and sample-size allocations that are associated with desirable statistical properties of the model parameters. The approach taken by Meadows et al. and Casey et al. was to determine the experimental designs that minimize the variance of the test statistic associated with the test of additivity. By reducing its variance, the resulting test statistic has increased power for rejecting the hypothesis of additivity.
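The resource savings can be made concrete with a quick count; the values of c (number of chemicals) and d (dose levels per chemical) below are hypothetical.

```python
# Sketch: number of dose groups for a fixed-ratio ray design versus a full
# factorial design, following the counting argument in the text.

def ray_design_groups(c, doses_per_curve=6, n_rays=1):
    """Dose groups for c single-chemical curves plus n_rays mixture rays."""
    return doses_per_curve * (c + n_rays)

def factorial_design_groups(c, d):
    """Dose groups for a full factorial with d dose levels per chemical."""
    return d ** c

# Five chemicals, one mixture ray: 6 * (5 + 1) = 36 groups,
# versus 6**5 = 7776 groups for a six-level full factorial.
print(ray_design_groups(5))           # 36
print(factorial_design_groups(5, 6))  # 7776
```

The gap widens rapidly with c, which is why ray designs remain feasible for mixtures of many chemicals.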

Summary and Discussion
We have described the justification for using empirical statistical models with a Taylor series argument. The argument begins with an assumption of a functional relationship of unknown form that is assumed to be continuous and differentiable. Using a Taylor series, the functional form is approximated by a polynomial with a finite number of terms. The models that we fit to the observed data are meant only to approximate the underlying unknown dose-response relationship. The approximation should be verified with a goodness-of-fit test before continuing the analysis with further inference. When the model of the mean response, or a transformation of the mean, is parameterized with linear terms, we have demonstrated that these terms may be interpreted as being associated with the rate of change in the mean as a function of dose/concentration. In the additivity models we have described, the rate of change in the response with respect to the ith chemical does not change in the presence of other chemicals. When cross-product terms are added to the model to construct an interaction model, the rate of change in the response with respect to the ith chemical does change in the presence of other chemicals. Thus, these additivity models satisfy the condition that the dose-response slope of a chemical does not change in the presence of other chemicals.

It is indeed the case that statisticians and life scientists (e.g., toxicologists), working independently, have each developed a body of knowledge pertinent to the study of interactions. Over time an extensive vocabulary has evolved to characterize interactions, i.e., departures from additivity, which is perhaps unnecessarily confusing and complex. Dose addition is the fundamental premise behind such risk assessment approaches as the Hazard Index and Toxicity Equivalence Factors (U.S. EPA, 2000). Such approaches assume that any amount of the agent, no matter how small or large, can contribute to the overall toxicity of the mixture because total dose is the unit of concern. Response addition assumes that if the toxicity of the mixture is truly independent among the chemicals, then it is appropriate to combine small risks following the law of statistical independence. With response addition, sub-threshold doses of individual chemicals do not contribute to the overall toxicity of the mixture (essentially adding zero responses). Mumtaz et al. (1994) state that the "default" and "most conservative" form of response addition "is equivalent to dose addition, provided the dose-response curves ... are within the linear range." Similar to the default form of response addition is the concept of effect addition (e.g., Berenbaum, 1989; Kortenkamp and Altenburger, 1999), in which the expected effect of a mixture is the arithmetic sum of the measured toxic effects of the single agents in the mixture. The method of effect addition is applicable for linear (Berenbaum, 1989) or linearizable (when the summation is conducted on a transformed scale, e.g., the logit or probit scale) dose-response curves. In short, dose addition, response addition, and effect addition represent three ways of defining additivity: by adding doses, risks, or effects, respectively.
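A minimal sketch of the response-addition rule for independently acting chemicals follows; the component risks are hypothetical.

```python
# Sketch: combined risk under response addition (statistical independence),
# P(mixture) = 1 - prod_i (1 - p_i).

def response_addition(risks):
    """Combined risk for independently acting chemicals."""
    survive = 1.0
    for p in risks:
        survive *= (1.0 - p)  # probability of no response to any component
    return 1.0 - survive

# For small risks, 1 - prod(1 - p_i) is approximately sum(p_i): the "linear
# range" in which response addition approximates dose addition.
small = [0.001, 0.002, 0.0005]
print(response_addition(small))  # ~0.0034965, close to sum = 0.0035
```

For larger risks the two rules diverge, which is one reason the choice between them matters in practice.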

The methods shown in this article require single-chemical dose-response information on the mixture components, presumably on the same health endpoint in the same animal species, strain/stock, age, and gender under the same exposure conditions (route, duration, gavage vehicle, volume, etc.). The additivity models accommodate these similarities across studies with a common background (intercept) parameter, which may be verified by comparing the means of the vehicle control groups in an analysis of variance model. Further, the single-chemical dose-response curves should either be reproducible over time or be collected at the same time as the mixture data. When such data-rich situations occur, the methods described here provide an improvement over simple additivity approaches. That is, the additivity models described in the previous sections, estimated using single-chemical dose-response data, do not require the simplifying assumptions of other methods (e.g., a common mode of action) and thereby may provide improved estimation of risk. They are also useful in providing evidence of the joint toxic action of a group of chemicals; such information may support or refute the use of dose addition or response addition methods. Given the plethora of potential chemical combinations and exposure scenarios, default risk assessment methods may always be necessary, particularly for data-poor situations. However, the approach shown in this article contributes to the library of available techniques applicable to data-rich cases.

We have demonstrated that the additivity models we propose not only satisfy the concept of no interaction as no change in slope, but also satisfy Berenbaum's definition of additivity. Since these models can all be linearized (or at least conditionally linearized in the case of a nonlinear model), they also satisfy the definition of effect additivity where the addition is conducted on the transformed scale. The added effect due to the ith chemical is determined by the term $\beta_i x_i$ on the transformed mean-response scale. When sufficient dose-response data are available for this kind of modeling exercise, it may not be necessary to restrict the interpretation of additivity to categories such as dose addition and response addition. The additivity models described here satisfy the concept of no interaction as the condition where chemicals combine in a way that does not affect their individual dose-response relationships. That is, the rate of change in response with respect to the dose/concentration of each chemical does not change in the presence of the other chemicals. It is a theoretical possibility that a shift in the threshold level of a component in the mixture is non-additive. The additivity models considered in the previous sections allow for a shift in the "threshold" due to the presence of the other chemicals. If the shift is different from that supported by the additivity model, then an interaction may be said to exist. Such a phenomenon is testable using the additivity model and appropriate statistical tests.

In Bailar and Bailer (1999), the suggested analytical schemes for hypothetical data in a 3 x 3 design included logistic regression, in which the log odds were used in the analysis and suggested additivity; a linear model of the observed responses, which resulted in a claim of synergism; and a dose-addition model that resulted in a claim of antagonism. Although limited by the design, these data do not seem appropriately modeled with a linear assumption: the data demonstrate curvilinearity. It is often reasonable to assume that dose-response data are sigmoid in shape. This is particularly the case with proportional data, where the range of the response is constrained to the interval [0,1]. The general form of the models described in the previous sections is sufficiently flexible to adequately represent the observed data. For example, assuming the data are actually proportional, if the analyst had initially transformed the data to achieve a linearized form (say, using a logit or probit link function) and properly accounted for the variability in the data, then the three analyses would have resulted in the same conclusion: additivity, as suggested by a logistic regression analysis. This emphasizes once again that the choice of the model should be based on the properties of the data.
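The following sketch illustrates the linearization point, assuming a logistic dose-response: data that look curvilinear on the probability scale are linear in dose on the logit scale. The intercept and slope values are hypothetical.

```python
import math

# Sketch: a logistic dose-response curve is sigmoid on the probability
# scale but exactly linear on the logit (log-odds) scale.

def logistic(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def logit(p):
    return math.log(p / (1.0 - p))

b0, b1 = -2.0, 0.5                     # hypothetical intercept and slope
doses = [0.0, 2.0, 4.0, 6.0, 8.0]
probs = [logistic(b0 + b1 * d) for d in doses]  # curvilinear in dose
etas = [logit(p) for p in probs]                # linear in dose

# Successive differences on the logit scale are constant (= b1 * dose step):
diffs = [etas[i + 1] - etas[i] for i in range(len(etas) - 1)]
```

An analysis that fits a straight line to `probs` directly would misread the curvature as interaction; the same data on the logit scale show none.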

In conclusion, we have attempted to bridge the gap between a concept of no interaction and statistical additivity models. When the dose-responsiveness of a chemical in a mixture does not change in the presence of other chemicals, then it is claimed to act additively with the other chemicals. We have shown that this concept of additivity is inherent within the statistical additivity models described herein. These models can be estimated with the support of appropriate experimental single chemical dose-response data. When the zero interaction case is well described with sound statistical rigor, then the likelihood of adequately detecting and characterizing departure from additivity with experimental mixture data is improved.


    APPENDIX
For convenience and without loss of generality, we evaluate the Taylor series expansion at zero. Summarizing the above, we have

$$g(\mu) = f(x_1, x_2, \ldots, x_c),$$

where the form of $f(\cdot)$ is unknown and unspecified and $x_1, x_2, \ldots, x_c$ are doses/concentrations of the c chemicals in the combination. From the Taylor series expansion of $g(\mu)$ we have

$$g(\mu) = f(\mathbf{0}) + \sum_{i=1}^{c} \frac{\partial f}{\partial x_i}\bigg|_{\mathbf{0}} x_i + \frac{1}{2!} \sum_{i=1}^{c} \sum_{j=1}^{c} \frac{\partial^2 f}{\partial x_i\, \partial x_j}\bigg|_{\mathbf{0}} x_i x_j + \frac{1}{3!} \sum_{i=1}^{c} \sum_{j=1}^{c} \sum_{k=1}^{c} \frac{\partial^3 f}{\partial x_i\, \partial x_j\, \partial x_k}\bigg|_{\mathbf{0}} x_i x_j x_k + \cdots,$$

where $\partial^r f / \partial x_i \cdots \partial x_k |_{\mathbf{0}}$ represents the rth partial derivative of f with respect to the indicated variables, evaluated at $(0, 0, \ldots, 0)$. Notice that the form of $f(\cdot)$ is unspecified and that the expansion has an infinite number of terms. Let us consider these two concerns separately. First, note that if the function were known, its derivatives could be determined and evaluated at the point $(0, 0, \ldots, 0)$. Thus, each of these terms can be represented as an unknown constant, $\beta_{(\cdot)}$, where the subscript $(\cdot)$ denotes the variable(s) involved in the derivative evaluated in each case. This permits reducing the expression to

$$g(\mu) = \beta_0 + \sum_{i=1}^{c} \beta_i x_i + \sum_{i=1}^{c} \sum_{j=1}^{c} \beta_{ij} x_i x_j + \sum_{i=1}^{c} \sum_{j=1}^{c} \sum_{k=1}^{c} \beta_{ijk} x_i x_j x_k + \cdots,$$

which is a polynomial with an infinite number of terms and where $\beta_0 = f(\mathbf{0})$, $\beta_i = \partial f / \partial x_i |_{\mathbf{0}}$, $\beta_{ij} = \frac{1}{2!}\, \partial^2 f / \partial x_i\, \partial x_j |_{\mathbf{0}}$, etc. Up to this point, the representation of $g(\mu)$ is exact but requires an infinite number of terms. Truncating the expansion after a finite number of terms provides an approximation to the underlying relationship, which permits us to write, for example, retaining terms through second order,

$$g(\mu) \approx \beta_0 + \sum_{i=1}^{c} \beta_i x_i + \sum_{i=1}^{c} \sum_{j=1}^{c} \beta_{ij} x_i x_j.$$

Now we have approximated the unknown dose-response relationship with a polynomial with a finite number of terms. The terms of the polynomial reflect the slopes of the individual mixture components' dose-response curves and, through the cross-product terms, the effect that a component has on the slopes of the other components' dose-response curves when the components are combined in a mixture. For example, $\beta_{ij}$ is the effect of the ith component on the slope of the jth component when the two chemicals are combined, and $\beta_{ijk}$ can be interpreted as the effect of the ith component on the interaction between the jth and kth components.
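A small numerical check of the truncation argument can be instructive. Here a hypothetical smooth function stands in for the unknown dose-response relationship, and its second-order Taylor polynomial about the origin is compared with the function near (0, 0); all coefficient values are derived from this illustrative choice of f, not from any data.

```python
import math

# Sketch: second-order Taylor polynomial of a hypothetical smooth surface
# f(x1, x2) = exp(0.3*x1 + 0.2*x2 + 0.1*x1*x2) about the origin.

def f(x1, x2):
    return math.exp(0.3 * x1 + 0.2 * x2 + 0.1 * x1 * x2)

# beta_(.) coefficients = scaled derivatives of f evaluated at (0, 0):
b0 = 1.0                   # f(0, 0)
b1, b2 = 0.3, 0.2          # first partials at the origin
b11 = 0.5 * 0.3 ** 2       # (1/2!) * d2f/dx1^2 at the origin
b22 = 0.5 * 0.2 ** 2       # (1/2!) * d2f/dx2^2 at the origin
b12 = 0.1 + 0.3 * 0.2      # d2f/dx1 dx2 at the origin (interaction term)

def taylor2(x1, x2):
    return b0 + b1 * x1 + b2 * x2 + b11 * x1 ** 2 + b22 * x2 ** 2 + b12 * x1 * x2

# Near the origin the truncated polynomial tracks f closely:
err = abs(f(0.1, 0.1) - taylor2(0.1, 0.1))
```

The cross-product coefficient b12 is nonzero here, so this hypothetical surface would be classified as exhibiting interaction under the framework of the text.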


    Disclaimer
The research described in this article has been reviewed by the National Health and Environmental Effects Research Laboratory and the National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency and approved for publication. Approval does not signify that the contents necessarily reflect the views and policies of the Agency, nor does mention of trade names or commercial products constitute endorsement or recommendation for use.


    NOTES
 
2 A Taylor series can be used under general conditions to approximate a function by including a finite number of terms. Further explanation is provided in the Appendix.


    ACKNOWLEDGMENTS
 
This research was partially supported by the U.S. EPA National Center for Environmental Assessment (Cincinnati, OH) cooperative agreement #CR-827208. The paper has benefited from the thorough review made by Drs. L. Birnbaum, M. DeVito, D. Herr, D. Gaylor, and G. Suter of an earlier version. Conflict of interest: none declared.


    REFERENCES
Abdelbasit, K. M., and Plackett, R. L. (1983). Experimental design for binary data. J. Amer. Stat. Assoc. 78, 90–98.

ATSDR (2004). Guidance Manual for the Assessment of Joint Toxic Action of Chemical Mixtures. Online. www.atsdr.cdc.gov/interactionprofiles/ipga.html.

Ariëns, E. J., and Beld, A. J. (1977). The receptor concept in evolution. Biochem. Pharmacol. 26, 913–918.

Atkinson, A. C., and Donev, A. N. (1992). Optimum Experimental Designs. Clarendon Press, Oxford.

Bailar, J. C., and Bailer, A. J. (1999). Risk assessment–the mother of all uncertainties: Disciplinary perspectives on uncertainty in risk assessment. Annals N.Y. Acad. Sci. 895, 273–285.

Berenbaum, M. C. (1985). The expected effect of a combination of agents: The general solution. J. Theor. Biol. 114, 413–431.

Berenbaum, M. C. (1989). What is synergy? Pharmacol. Rev. 41, 93–141.

Box, G. E. P. (1979). Robustness in the strategy of scientific model building. In Robustness in Statistics (R. L. Launer and G. N. Wilkinson, Eds.), p. 202. Academic Press, New York.

Carter, W. H., Jr., Gennings, C., Staniswalis, J. G., Campbell, E. D., and White, K. L., Jr. (1988). A statistical approach to the construction and analysis of isobolograms. J. Amer. College Toxicol. 7, 963–973.

Casey, M., Gennings, C., Carter, W. H., Jr., Moser, V., and Simmons, J. E. (2004). Detecting interaction(s) and assessing the impact of component subsets in a chemical mixture using fixed-ratio ray designs. J. Agricul. Biol. Environ. Stat. 9, 339–361.

Casey, M., Gennings, C., Carter, W. H., Jr., Moser, V., and Simmons, J. E. (2005). Ds-optimal designs for studying combinations of chemicals using multiple fixed-ratio ray experiments. Environmetrics 16, 129–147.

Casey, M., Gennings, C., Carter, W. H., Jr., Moser, V., and Simmons, J. E. (in press). Power and sample size determination for testing the effect of subsets of compounds on mixtures along fixed-ratio rays. J. Ecolog. Environ. Stat.

Crofton, K. M., Craft, E. S., Hedge, J. M., Gennings, C., Simmons, J. E., Carchman, R. A., Carter, W. H., Jr., and DeVito, M. J. (in press). Thyroid hormone disrupting chemicals: Evidence for dose-dependent additivity and synergism. Environ. Health Perspect.

Dawson, K. S., Carter, W. H., Jr., and Gennings, C. (2000). A statistical test for detecting and characterizing departures from additivity in drug/chemical combinations. J. Agricul. Biol. Environ. Stat. 5, 342–359.

Finney, D. J. (1971). Statistical Method in Biological Assay, 2nd ed. Griffin, London.

Gennings, C., Schwartz, P., Carter, W. H., Jr., and Simmons, J. E. (1997). Detection of departures from additivity in mixtures of many chemicals with a threshold model. J. Agricul. Biol. Environ. Stat. 2, 198–211.

Gennings, C., Schwartz, P., Carter, W. H., Jr., and Simmons, J. E. (2000). Erratum: Detection of departures from additivity in mixtures of many chemicals with a threshold model. J. Agricul. Biol. Environ. Stat. 5, 257–259.

Gennings, C., Carter, W. H., Jr., Campain, J. A., Bae, D., and Yang, R. S. H. (2002). Statistical analysis of interactive cytotoxicity in human epidermal keratinocytes following exposure to a mixture of four metals. J. Agricul. Biol. Environ. Stat. 7, 58–73.

Gennings, C., Carter, W. H., Jr., Carchman, R., Charles, G., Gollapudi, B., and Carney, E. (2004). Analysis of fixed ratios of chemical mixtures developed from a comparison to an indirect additivity surface determined by single chemical dose-response models. Toxicol. Sci. 80, 134–150.

Goodman, L. S., and Gilman, A. G. (2001). Goodman and Gilman's The Pharmacological Basis of Therapeutics, 10th ed. (A. G. Gilman, L. S. Goodman, T. W. Rall, and F. Murad, Eds.). MacMillan Publishing, New York.

Kalish, L. A. (1990). Efficient design for estimation of median lethal dose and quantal dose-response curves. Biometrics 46, 737–748.

Kortenkamp, A., and Altenburger, R. (1999). Approaches to assessing combination effects of oestrogenic environmental pollutants. Sci. Total Environ. 233, 131–140.

Loewe, S. (1953). The problem of synergism and antagonism of combined drugs. Arzneimittel-Forschung 3, 285–290.

Loewe, S., and Muischnek, H. (1926). Über Kombinationswirkungen. I. Mitteilung: Hilfsmittel der Fragestellung. Naunyn-Schmiedebergs Arch. Pharmacol. 114, 313–326.

McCullagh, P., and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. Chapman and Hall, New York.

Meadows, S. L., Gennings, C., Carter, W. H., Jr., and Bae, D.-S. (2002). Experimental designs for mixtures of chemicals along fixed ratio rays. Environ. Health Perspect. 110(Suppl. 6), 979–983.

Minkin, S. (1987). Optimal designs for binary data. J. Amer. Stat. Assoc. 82, 1098–1103.

Mumtaz, M. M., DeRosa, C. T., and Durkin, P. R. (1994). Approaches and challenges in risk assessments of chemical mixtures. In Toxicology of Chemical Mixtures: Case Studies, Mechanisms, and Novel Approaches (R. S. H. Yang, Ed.), pp. 565–597. Academic Press, San Diego.

Munem, M. A., and Foulis, D. J. (1978). Calculus with Analytic Geometry. Worth Publishers, New York.

Neter, J., Kutner, M. H., Nachtsheim, C., and Wasserman, W. (1996). Applied Linear Statistical Models, 4th ed. Irwin, Chicago.

Safe, S. H. (1998). Hazard and risk assessment of chemical mixtures using the toxic equivalency factor approach. Environ. Health Perspect. 106(Suppl. 4), 1051–1058.

Teuschler, L., Klaunig, J., Carney, E., Chambers, J., Connolly, R., Gennings, C., Giesy, J., Hertzberg, R., Klaassen, C., Kodell, R., Paustenbach, D., and Yang, R. (2002). Support of science-based decisions concerning the evaluation of the toxicology of mixtures: A new beginning. Regul. Toxicol. 36, 34–39.

U.S. EPA (1989). Interim Procedures for Estimating Risks Associated with Exposures to Mixtures of Chlorinated Dibenzo-p-dioxins and -dibenzofurans (CDDs and CDFs) and 1989 Update. Risk Assessment Forum. EPA/625/3-89/016.

U.S. EPA (2000). Supplementary Guidance for Conducting Health Risk Assessment of Chemical Mixtures. Risk Assessment Forum. EPA/630/R-00/002.

Yang, R. S. H., Ed. (1994). Toxicology of Chemical Mixtures: Case Studies, Mechanisms, and Novel Approaches. Academic Press, San Diego.




