SEQUENTIALLY ADJUSTED RANDOMIZATION TO FORCE BALANCE IN CONTROLLED TRIALS WITH UNKNOWN PREVALENCE OF COVARIATES: APPLICATION TO ALCOHOLISM RESEARCH

MATTHIAS J. MÜLLER*, ARMIN SCHEURICH, HERMANN WETZEL, ARMIN SZEGEDI and MARTIN HAUTZINGER1

Department of Psychiatry, University of Mainz, Mainz and 1 Institute of Psychology, University of Tübingen, Germany

* Author to whom correspondence should be addressed at: Department of Psychiatry, University of Mainz, Untere Zahlbacher Straße 8, D-55131 Mainz, Germany. Tel.: +49 6131 17 2920; Fax: +49 6131 17 66 90; E-mail: mjm@mail.psychiatrie.klinik.uni-mainz.de

(Received 11 March 2003; first review notified 6 May 2003; revised and accepted 23 November 2004)


    ABSTRACT

Aims: In treatment outcome studies with small to medium sample sizes (n < 200), the balance of groups with regard to important factors, which sometimes occur at low prevalence, is indispensable for adequate interpretation. This study tested an uncomplicated procedure for satisfactory randomization of patients to different treatments in clinical alcoholism research, taking into account relevant background variables. Methods: An easily applicable modification of Efron's biased coin method for the randomization of treatments within strata of unknown but low prevalence was compared with the original approach and alternative methods by computer simulation (10 000 runs). An application example for a clinical trial in alcoholism research is given. Results: The sequentially adjusted randomization procedure yielded results similar to Efron's approach without the need to monitor the assignment history throughout the trial. The new method was slightly superior to Efron's approach in randomizing subjects in strata with n ≤ 20, whereas strata with n > 20 favoured randomization with Efron's approach. Taking into account all results from simulation, the new approach reached a proportion of acceptably balanced randomizations of >95% for all stratum sizes. Conclusions: The approach, a special case of the standard urn design, provides three major advantages in clinical trials: (i) it can be easily implemented in any trial without technical equipment; (ii) it works with high accuracy in trials with a priori unknown but low numbers of subjects (4 ≤ n ≤ 20) in prognostically relevant strata; and (iii) a deterministic assignment tendency is completely avoided, as a random process takes place throughout the assignment procedure. The modified biased coin method can be recommended as one possible strategy for special purposes, particularly in alcoholism research.


    INTRODUCTION

A major problem in clinical alcoholism research and related areas is the satisfactory randomization of patients to different treatments while taking into account potentially relevant background variables, e.g. onset of alcoholism and socio-economic status. Even in the absence of a clearly evident set of predictors, it is important to take into account background variables that can influence outcome, as has been shown in studies trying to match treatment options to client heterogeneity (Project MATCH, National Institute on Alcohol Abuse and Alcoholism, 1993; Nielsen et al., 1998; Johnson et al., 2000). However, some of the relevant background variables occur at a low and unknown prevalence. In particular, in treatment outcome studies with small to medium sample sizes (n < 200), the balancing of groups with regard to such factors is indispensable for adequate interpretation (Stout et al., 1994).

Generally, one of the essential problems in planning and conducting comparative randomized clinical trials with two or more treatment arms is to provide a balanced assignment of subjects to the experimental groups (Pocock, 1983). Although simple randomization procedures seem to offer several advantages, e.g. high unpredictability of treatment and applicability of inference statistics based on random sample theory, practical considerations and empirical findings often reveal shortcomings and irregularities of these techniques, resulting in major interpretative problems (Simon, 1979). Simple randomization seems sufficient and recommendable in trials with n > 200 (Lachin, 1988a). In smaller samples, matching or stratification techniques have been suggested to avoid accidentally occurring unbalanced designs (‘covariate imbalance’) (Billewicz, 1965; Chase, 1968; Bailey, 1983). Both approaches have advantages and shortcomings. In the case of primarily unknown proportions of subjects with a prognostic characteristic, randomization within groups of individuals with the same characteristic (‘stratum’) seems most appropriate (Simon, 1979). However, block randomization or simple randomization within strata will yield the same results as without stratification, e.g. each subject has a probability of P = 0.50 of being assigned to one of two treatments, without correcting for deviations from balance as they occur. On the other hand, highly deterministic approaches have been proposed, e.g. pairwise block randomization within each stratum, where each assignment determines the next. Alternative strategies determine treatment assignment in the event of a defined deviation from random balance (Taves, 1974). However, every deterministic interference can invalidate the results of randomized trials (‘selection bias’) and should therefore be avoided. The key idea for improving randomization within strata is to apply an algorithm that is efficient in adjusting or ‘forcing’ the balance of randomized assignments without significantly reducing treatment unpredictability. Therefore, the sequential progress of assignment has to be taken into account in some way. Depending on previous assignments and the resulting current over- or under-representation of one treatment, the immediate next step of the randomization process should be influenced. Efron (1971) proposed, and Pocock and Simon (1975) elaborated on, the idea of the ‘biased coin’, i.e. to vary the probability of treatment assignment in favour of the so-far under-represented treatment (Atkinson, 1982). Wei (1978) and Wei and Lachin (1988) presented a general outline, including mathematical properties, of adaptive randomization techniques within the framework of standard urn designs. The ‘biased coin’ procedure should reduce possible imbalances in stratified randomization, particularly if subjects enter the study sequentially and if the prevalence of the categories of a stratification variable is not known a priori.

According to Efron's (1971) ‘biased coin’ idea, the following rationale can be derived:

  (i) If two treatments A and B have so far been assigned to equal numbers of subjects in a stratum sx, a new individual of that stratum sx is assigned to A or B with PA = PB = 0.50.
  (ii) If treatment A is currently over-represented in a stratum sx, i.e. more subjects of sx have been assigned to treatment A than to treatment B, a new individual of sx is assigned to the currently under-represented treatment B with PB > 0.50.
  (iii) The chance of assignment to the currently over-represented treatment A is PA = 1 – PB < 0.50.

Efron (1971) suggested that a value of P = 2/3 is generally acceptable in case (ii); Pocock and Simon (1975) used values of P = 1 and P = 3/4. More sophisticated procedures to optimize balance in stratified designs have also been developed (Klotz, 1978) but have not reached widespread use, possibly because of the complex computer calculations they require. In clinical studies, for example, a randomization routine has to be maximally straightforward and safe in terms of the treatment blindness of patients and clinical staff. Pocock (1983) has emphasized that the possibility of implementing and running a design with ease is at least as important as its theoretical optimality. Therefore, we propose an easily and routinely applicable approach and show that, in the case of a priori unknown but low numbers of subjects fulfilling specific stratification criteria, our approach is at least comparable to the ‘biased coin’ procedure with respect to forcing a balance between two treatments. The proposed approach is a sequentially adjusted randomization technique: as the trial proceeds and subjects are successively assigned, the base probability of P = 0.50 for each of two treatment alternatives is continuously adjusted depending on previous assignments (without having to calculate them explicitly). The rationale follows the three steps mentioned above. However, while Efron's approach used a fixed value of P = 2/3 whenever one treatment group was under-represented, we propose a more flexible strategy:

  (i) If two treatments A and B have so far been assigned to equal numbers of subjects in a stratum sx, the base chances of both treatments for a newly randomized individual are PA = PB = 0.50. In other words, the randomization procedure starts with one item for each treatment (e.g. cards with either ‘treatment A’ or ‘treatment B’ written on them).
  (ii) If ‘treatment A’ is drawn from a sample containing two items, the respective item is put back and an additional item for ‘treatment B’ is added to the sample, leading to an adjusted chance ratio of ‘treatment A’:‘treatment B’ = 1:2, i.e. PA = 0.333 and PB = 0.667.
  (iii) The procedure outlined in (ii) is repeated until the predetermined number of subjects of the study is reached. The weights for the randomization chances are adjusted automatically according to formally operationalized rules (see Methods section).
The sequentially adjusted randomization procedure is illustrated in Fig. 1.



Fig. 1. Illustration of the implementation of the sequential adjustment randomization procedure. A and B, treatment alternatives; X and Y, stratification variables; + and –, present or absent. The example shows two steps of assignment of subjects to one of four strata (X+Y+, X–Y+, X+Y–, X–Y–). In both steps of the example, treatment ‘A’ was chosen.

Theoretically, the new technique should lead to balanced assignments, especially in strata containing only a few subjects with rarely occurring characteristics. In such cases, stratified randomization seems to be of particular practical relevance. The co-occurrence of rare features, e.g. a certain personality disorder, can severely compromise the interpretation of treatment outcome if such a characteristic is not equally distributed across treatment groups. A further advantage of balanced assignment is the opportunity to analyse the data with respect to interactions with the stratifying features. The proposed algorithm should force a balanced randomization in strata of small sample sizes because deviations from balance are adjusted by weights corresponding to the magnitude of the deviation. These adjustment weights are relatively large, and potentially greater than Efron's proposed value of P = 2/3, at early stages of randomization. With ongoing assignment, the weights asymptotically approach P = 0.50 and then lose their ‘forcing’ power.

Owing to its statistical properties and simplicity, the proposed approach should be preferable in designs with a priori unknown but low prevalence of specific stratification characteristics.

The conjectured features and advantages will be tested empirically by computer simulation techniques comparing the present approach with simple randomization and Efron's method. For illustrative purposes, an application of the algorithm to a hypothetical clinical study on different treatments of alcoholism will be given.


    METHODS

Randomization procedures
Simple randomization. Each subject has a probability of P = 0.50 of being assigned to treatment A or B (PA = PB = 0.50). These probabilities remain fixed throughout the course of treatment assignment.

Efron's ‘biased coin’ approach (P = 2/3). The approach is carried out as described by Efron (1971); whenever one treatment is under-represented during sequential assignment, the probability for that treatment is set at P = 2/3 (≈0.67) for the next assignment. If both treatments are balanced, probabilities of P = 0.50 are used for random assignment to the two treatments. For each new assignment, the proportion of previous assignments to each treatment has to be calculated and taken into account.
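
A minimal sketch of this rule is given below for concreteness (illustrative only; the function name and the use of Python's random module are assumptions made for exposition, not part of the published procedure):

    import random

    def efron_assign(n_a, n_b, p_bias=2/3):
        # One 'biased coin' assignment (Efron, 1971): the currently
        # under-represented treatment is chosen with probability p_bias.
        if n_a == n_b:
            p_a = 0.5          # balanced so far: fair coin
        elif n_a < n_b:
            p_a = p_bias       # A under-represented: bias towards A
        else:
            p_a = 1 - p_bias   # A over-represented: bias towards B
        return 'A' if random.random() < p_a else 'B'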

Sequentially adjusted randomization (new method). (i) PA or PB is the probability for a subject to be assigned to treatment A or B; (ii) nA or nB is the number of subjects already assigned to treatment A or B; and (iii) for each new subject of a stratum sx to be assigned, let PA = (nB + 1)/(nA + nB + 2) and PB = (nA + 1)/(nA + nB + 2); e.g.

  1. at the beginning: nA = 0; nB = 0; PA = 1/2 = 0.50; PB = 1/2 = 0.50
  2. if ‘A’ was chosen: nA = 1; nB = 0; PA = 1/3 = 0.33; PB = 2/3 = 0.67
  3. if ‘A’ was chosen: nA = 2; nB = 0; PA = 1/4 = 0.25; PB = 3/4 = 0.75
  4. if ‘B’ was chosen: nA = 2; nB = 1; PA = 2/5 = 0.40; PB = 3/5 = 0.60.
With ongoing assignment, the probabilities are adjusted according to the assignments actually made. The procedure can be implemented and carried out without recording the previous assignment history, simply by adding one count for the treatment that was not assigned (Fig. 1).
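
The rule can be written compactly from the formulas above; the following is an illustrative sketch (identifiers are hypothetical), equivalent to an urn that starts with one card per treatment and, after each draw, returns the card and adds one card for the treatment that was not assigned:

    import random

    def adjusted_assign(n_a, n_b):
        # Sequentially adjusted randomization:
        # P(A) = (nB + 1)/(nA + nB + 2); P(B) = (nA + 1)/(nA + nB + 2).
        p_a = (n_b + 1) / (n_a + n_b + 2)
        return 'A' if random.random() < p_a else 'B'

    # Worked example as in the text: only the two counters per stratum are kept.
    n_a = n_b = 0
    for _ in range(4):
        treatment = adjusted_assign(n_a, n_b)
        if treatment == 'A':
            n_a += 1
        else:
            n_b += 1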

Simulation procedure and outcome parameters
To test the accuracy of the aforementioned approaches, different a priori specified hypothetical values for the prevalence of subjects with specific stratum characteristics were used (Table 1); in each simulation, a particular number of individuals (2–100) had to be randomly assigned to either treatment A or B. An accurate randomization was assumed whenever the simulation resulted in a balanced assignment of the hypothetical subjects of the hypothetical stratum to the treatment. Balance was accepted if the distribution of assignments did not deviate significantly from equal distribution (P = 0.50 for assignment to either group). For that purpose, binomial tests were calculated before simulations were performed and distributions that did not deviate significantly (one-sided P > 0.05) from the hypothesis of balance were accepted as ‘balanced’. Table 1 shows the hypothetical prevalence, i.e. the number of subjects to be assigned, used for simulation and the accepted distributions with the corresponding binomial test results.
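
A simplified re-implementation of this simulation loop might look as follows (a sketch under the acceptance rule described above; the helper names are hypothetical, and a pure-Python one-sided binomial test is used instead of statistical software):

    import math
    import random

    def one_sided_binomial_p(k_max, n, p=0.5):
        # One-sided binomial P-value: probability of k_max or more subjects
        # in the larger treatment arm under the hypothesis of balance.
        return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(k_max, n + 1))

    def simple_assign(n_a, n_b):
        # Simple randomization: fixed P = 0.50, ignoring previous assignments.
        return 'A' if random.random() < 0.5 else 'B'

    def simulate(assign_fn, stratum_size, runs=10000, alpha=0.05):
        # Proportion of runs whose final A/B split is acceptably balanced,
        # i.e. the one-sided binomial test does not reject balance (P > alpha).
        accepted = 0
        for _ in range(runs):
            n_a = n_b = 0
            for _ in range(stratum_size):
                if assign_fn(n_a, n_b) == 'A':
                    n_a += 1
                else:
                    n_b += 1
            if one_sided_binomial_p(max(n_a, n_b), stratum_size) > alpha:
                accepted += 1
        return accepted / runs

    # Example: proportion of acceptably balanced runs for a stratum of 20 subjects.
    print(simulate(simple_assign, 20, runs=10000))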


Table 1. Numbers of subjects in the hypothetical strata for assignment to two treatments by simulation and statistics for acceptance of assignments as sufficiently ‘balanced’

For each strategy, simulations of 10 000 trials were calculated. Besides the percentage of runs with acceptable randomization results, the percentage of runs with exactly balanced treatment assignments, empirical probabilities for assignment, as well as means and standard deviations (SDs) of the randomization distributions were calculated. The comparison of strategies was carried out descriptively, and the proportions of balanced assignments and the dispersion parameters of the underlying distributions (SD2 = variance) were compared using χ2-tests and F-tests, respectively. These statistical tests were performed because they are often used to test retrospectively for balance of covariates in clinical trials. The level of statistical significance was set at the conventional α = 0.05; trends were reported for 0.05 < P < 0.10.

Illustrative example
For illustrative purposes, data from a hypothetical study are used to show the applicability and results of the approach. A placebo-controlled clinical study of pharmacological relapse prevention in 100 alcohol-dependent patients is outlined. Three prognostic factors with unknown prevalence in the sample are used to stratify the sample of consecutively enrolled patients: antisocial personality features [according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV)], severity of alcohol dependence (more than three hospitalizations for alcohol detoxification) and social maladjustment (unemployment).
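
In such a design, each enrolled patient falls into one of 2^3 = 8 strata defined by the three binary features, and the sequential adjustment is applied separately within each stratum. A hypothetical bookkeeping sketch (not the authors' implementation; names are illustrative) is:

    import random
    from collections import defaultdict

    # Per-stratum counters; a stratum key is the tuple of three binary features
    # (antisocial personality, severe dependence, unemployment).
    counts = defaultdict(lambda: {'A': 0, 'B': 0})

    def randomize_patient(antisocial, severe, unemployed):
        # Assign one newly enrolled patient within his or her stratum using the
        # sequentially adjusted probability P(A) = (nB + 1)/(nA + nB + 2).
        stratum = (antisocial, severe, unemployed)
        n_a, n_b = counts[stratum]['A'], counts[stratum]['B']
        p_a = (n_b + 1) / (n_a + n_b + 2)
        treatment = 'A' if random.random() < p_a else 'B'
        counts[stratum][treatment] += 1
        return treatment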


    RESULTS

Figure 2 gives an illustrative impression of the steps of randomization for the three approaches used in the present study. As a hypothetical example, 20 subjects were randomized in a balanced fashion to either treatment A (n = 10) or B (n = 10).



Fig. 2. Illustration of the randomization procedure. Comparison of simple randomization, weighted randomization, Efron's approach and the new method (20 assignment trials).

Trivially, for simple randomization, the probability of assignment to either treatment A or B was fixed at P = 0.50. The data were selected from the simulation procedures. As predefined, Efron's approach yielded probabilities for assignment to treatment A or B of 1/3, 1/2 and 2/3 (left panel of Fig. 2). The right panel of Fig. 2 shows the results of the newly proposed approach with sequentially adjusted assignment probabilities. For comparison, the simple randomization procedure is again presented. The assignment probabilities of the new method showed an asymptotic trend towards P = 0.50, although values of P < 1/3 (as shown in Fig. 2) or P > 2/3 remain possible. Thus, the new approach allows a more flexible adjustment (‘bias’) of assignment probabilities across the course of the randomization procedure. In contrast to Efron's approach, the new procedure loses its ‘forcing’ power after a few assignments and thereafter increasingly resembles simple randomization.

The results of the simulation procedures with 10 000 runs are given in Tables 2 and 3. Table 2 gives expected and observed probabilities for the assignment of subjects with a specific prognostic feature to one of two treatments. Only minor deviations from the expected value of P = 0.50 were computed for all three approaches under investigation.


Table 2. Expected and observed probabilities for assignment of a subject with a prognostic feature x to treatment A (results from simulations; 10 000 runs)


Table 3. Proportions of exactly and sufficiently balanced assignments (results of simulations; 10 000 runs)

Table 3 shows the main results of the simulation study: the proportions of exactly balanced assignments are given together with the proportions of assignments that are considered to be ‘sufficiently’ balanced from a pragmatic statistical viewpoint. When comparing the proportions of exactly balanced assignment procedures, Efron's method seems to be superior with respect to all pre-specified sample sizes of strata with n > 2.

The rationale for the decision whether or not an assignment trial is balanced was derived from inference statistics, i.e. binomial tests as outlined in Table 1. Randomization runs resulting in proportions of assignments to the two treatments that did not deviate significantly (one-sided P > 0.05) from equal distribution were accepted as sufficiently balanced. According to this pragmatic guideline, a numerical comparison of proportions revealed highly satisfactory figures (‘acceptable’ balance in >90% of trials) for both Efron's approach and the newly proposed method in all cases with n > 2 subjects to be randomized per stratum. Both approaches were clearly superior to simple randomization. For stratum sizes of n ≤ 20, the new method was numerically superior to Efron's approach, whereas assignments in strata with n > 20 favoured Efron's approach. Taking into account all computed assignment simulations, the new approach reached a proportion of statistically acceptable balanced randomizations of >95% for all stratum sizes. Efron's method achieved similar results, with the exception that in the stratum with n = 6 a proportion of 93.5% sufficiently balanced assignments was calculated from 10 000 simulation runs. Figure 3 shows the comparison of Efron's and the new approach.



Fig. 3. Proportions of trials with insignificant deviations from balanced assignment. Comparison of Efron's approach and the new method (10 000 simulation runs).

The results are also reflected in Table 4, which shows the expected and observed numbers of subjects assigned to one of two treatment alternatives. The means of assigned subjects in each stratum did not differ between the approaches. However, the SDs of the different methods, reflecting different distributions of assignments, showed considerable differences. Efron's method and the new approach yielded a lower variance than simple randomization for all strata with n > 2 (F-tests, P < 0.10). Efron's method showed statistically significantly (P < 0.05) lower variance of correct assignments for strata with n ≥ 40 and a corresponding trend (P < 0.10) for strata with n ≥ 20. These results are closely related to those in Table 2 and provide additional statistical support.


Table 4. Expected and observed numbers of subjects with a prognostic feature x assigned to a specific treatment A (results of simulation; 10 000 runs)

Finally, Table 5 presents the results of the application example. Eight strata were chosen based on preliminary empirical findings. When simple randomization was applied, the average probability for sufficiently balanced assignments reached P = 0.881. In this case, ~85 of 100 subjects would be assigned correctly. The mean probability with Efron's method (averaged across eight strata) for a balanced assignment was P = 0.987 and ~99 (98.76) of 100 subjects should be correctly assigned when the procedure is repeated 10 000 times. The figures for the new approach were comparable, as sufficiently balanced assignment would be achieved with an average P = 0.991, resulting again in ~99 of 100 (98.62) individuals who were ‘correctly’ assigned in a balanced fashion.


Table 5. Application example: comparison of simple randomization, Efron's method and the new method for assignment of 100 consecutively enrolled alcoholics to either treatment A or B (three prognostic features with a priori unknown prevalence)

From Table 5 it can also be concluded that strata with low prevalence (n ≤ 20) favour the new approach, whereas for strata with >20 subjects slightly better results could be achieved with Efron's method.


    DISCUSSION

Our results show that the proposed method (modifying Efron's biased coin approach using sequentially adjusted biasing probabilities) is a recommendable randomization procedure in trials with a priori known prognostic features, but unknown and expected low numbers of subjects (n ≤ 20) within the strata. Furthermore, the new procedure for randomization within strata has demonstrated its pragmatic validity, as it requires no statistician and no computer support, is easily applicable and preserves treatment unpredictability throughout a complete trial. As a major advantage, it is not necessary to record previous assignments to calculate the actual probability (Fig. 1). Given that the conditions outlined here occur frequently in alcoholism research and related areas, the new approach can be recommended as a feasible alternative to other randomization strategies.

The problem of unbalanced assignments in so-called randomized trials is presumably underestimated (Stout et al., 1994). Simple random assignment of patients to one of two or more treatment alternatives still seems, to most researchers, sufficient to protect against ending up with treatment groups that differ significantly in essentially relevant features. This source of bias has been labelled ‘accidentally occurring bias’ because it cannot, by definition, be a systematic bias in randomized assignment, but it still represents one serious form of bias (‘covariate imbalance’). The only way to protect against such randomly occurring influences, which can substantially invalidate research findings, seems to be a sufficiently large sample size (n > 200) (Lachin, 1988a). In all other cases, potentially relevant features should be thoroughly assessed prior to the study, and a matching or stratification procedure should be used (Simon, 1979; Fleiss, 1981). For these designs, feasible procedures are still required, although the mathematical background has already been developed (Wei, 1978; Lachin, 1988b; Lachin et al., 1988; Wei and Lachin, 1988). The method we propose deals with the issue of obtaining methodologically sound results with a straightforward, robust and practically tractable procedure. We do not claim to propose an optimal solution to the problem of stratified randomization, as there have been several excellent contributions to that field (Pocock and Simon, 1975; Freedman and White, 1978; Klotz, 1978). The scope of our approach was to evaluate a ‘foolproof’ randomization algorithm with respect to its relevant statistical properties. Therefore, we used a simulation procedure with 10 000 runs, which should be sufficient for the comparison of different approaches. We did not choose a more or less arbitrary, albeit mathematically derived, criterion for deviation from balanced assignment, such as the |q1 – q2| statistic proposed by Pocock and Simon (1975). Instead, in line with our pragmatic aims, we defined ‘acceptable’ balance in assignment from an inference-statistical standpoint. One typical step in data analysis is to check the comparability of treatment groups with respect to relevant characteristics that could influence the outcome measures. If there are significant differences between groups, disturbing variables can in some cases be used as covariates in the analysis. However, even the best statistical approach cannot compensate for shortcomings in a study design. In the case of nominal categorical variables (e.g. gender), the use of covariates is problematic, as the single individual artificially ‘created’ by analysis of covariance, with a mixed gender comprising e.g. 2/5 male and 3/5 female features, does not exist. On the other hand, post hoc stratification lowers the power of statistical tests, but even in this case a balanced distribution between treatment groups would clearly be recommendable. Nevertheless, stratification approaches and covariance analyses are not mutually exclusive, and could be useful in combination.

When analysing the balance of relevant prognostic variables between treatments for categorical data, χ2 tests or binomial tests are usually used. Hence, it is not necessary that prognostic features be exactly balanced between treatment groups. Instead, a distribution that does not deviate significantly from the hypothesis of equal distribution is, in most cases, sufficient for assuming ‘balance’. According to Cui et al. (2002), imbalance frequently occurs in strata with small numbers of subjects (‘numerical imbalance’). However, numerical imbalance does not necessarily imply clinical relevance (‘pragmatic balance’); i.e. pragmatic balance allows some numerical imbalance, and binomial tests can in such cases be useful for deciding on the cut-off (treating non-rejection of the null hypothesis of balance as acceptance of balance), as shown in Table 1.

Thus, we calculated binomial tests prior to the simulation of assignments and regarded all assignment distributions as acceptable if they did not lead to the rejection of the null hypothesis of equal distribution between groups. We chose a rather conservative level of significance (one-tailed P < 0.05).

Several limitations of our approach should, however, be mentioned. First, for the sake of simplicity we decided to use only even numbers of subjects to be assigned in our simulation procedures. The application example was extended to the use of odd numbers of subjects hypothetically assigned to two treatment groups (n = 5, n = 15). Second, strata comprising a very small number of subjects (n ≤ 3) cannot be assigned in a balanced fashion by the proposed approach with the same accuracy as larger strata. Third, the use of even numbers of subjects and the discrete cut-off values for the statistical decision between ‘balanced’ and ‘unbalanced’ simulation outcomes led to discrete and not strictly monotone curves of the proportions of balanced assignments. Nevertheless, for the aforementioned practical reasons, we decided to present these results. The outcome of the assignment simulations for the new method and for Efron's approach was highly satisfactory, as an acceptable balance was obtained in clearly >90% of simulated runs (for n > 2 in each stratum), and, in general, both approaches were substantially superior to a simple randomization procedure (Table 4). If unbalanced assignments are to be avoided altogether, deterministic assignment procedures are recommended (Taves, 1974); however, this comes at the cost of lost treatment unpredictability. For high numbers of stratification features and combinations thereof, the approach leads to unsatisfactory results, as most of the strata will contain no or only very few subjects, and the newly proposed, as well as Efron's, assignment procedure will approximate simple randomization (Pocock and Simon, 1975).

For strata with high numbers of subjects (n > 20), Efron's approach seems superior to the new method because the assignment probability of the new approach asymptotically approaches P = 0.50 as n → ∞. Another critical point is that we used only the biasing probabilities of 1/3 and 2/3 for forcing balance within Efron's approach. As has been shown (Pocock and Simon, 1975), more extreme biasing probabilities lead to even better results in typical designs. Hence, the advantage of the newly proposed method has to be seen in the context of accuracy, practicability and safety with respect to administration and treatment unpredictability (blindness). A descriptive comparison of the different approaches with respect to these features is given in Table 6.


Table 6. Descriptive comparison of different randomization procedures with respect to accuracy, practicability and deterministic tendencies

Weighting these factors equally, our approach shows satisfactory accuracy and high practicability, protects treatment blindness, and can therefore be recommended for randomized trials with low or moderate sample sizes (n < 100), a moderate number of strata (2–10) and the need for randomization within strata. The approach was exemplified here only for two treatment groups with an assignment probability of P = 0.50; an extension to more than two treatment groups can be made easily. Application to controlled trials in treatment–outcome research should be straightforward. The proposed technique also has potential application in other research areas, e.g. diabetes mellitus, where baseline predictors are already known to influence clinical outcome.


    REFERENCES

Atkinson, A. C. (1982) Optimum biased coin designs for sequential clinical trials with prognostic factors. Biometrika 69, 61–67.

Bailey, R. A. (1983) Restricted randomization. Biometrika 70, 183–198.

Billewicz, W. Z. (1965) The efficiency of matched samples. An empirical investigation. Biometrics 21, 623–644.

Chase, G. R. (1968) On the efficiency of matched pairs in Bernoulli trials. Biometrika 55, 365–369.

Cochran, W. G. (1968) The effectiveness of subclassification in removing bias in observational studies. Biometrics 24, 295–313.

Cui, L., Hung, H. M. J., Wang, S. J. et al. (2002) Issues related to subgroup analysis in clinical trials. Journal of Biopharmaceutical Statistics 12, 241–252.

Efron, B. (1971) Forcing a sequential experiment to be balanced. Biometrika 58, 403–417.

Fleiss, J. L. (1981) Statistical Methods for Rates and Proportions. Wiley, New York.

Freedman, L. S. and White, S. J. (1978) On the use of Pocock and Simon's method for balancing treatment numbers over prognostic factors in the controlled clinical trial. Biometrics 32, 691–694.

Johnson, B. A., Roache, J. D., Javors, M. A. et al. (2000) Ondansetron for reduction of drinking among biologically predisposed alcoholic patients: a randomized controlled trial. Journal of the American Medical Association 284, 963–971.

Klotz, J. H. (1978) Maximum entropy constrained balance randomization for clinical trials. Biometrics 34, 283–287.

Lachin, J. M. (1988a) Statistical properties of randomization in clinical trials. Controlled Clinical Trials 9, 289–311.

Lachin, J. M. (1988b) Properties of simple randomization in clinical trials. Controlled Clinical Trials 9, 312–326.

Lachin, J. M., Matts, J. P. and Wei, L. J. (1988) Randomization in clinical trials: conclusions and recommendations. Controlled Clinical Trials 9, 365–374.

National Institute on Alcohol Abuse and Alcoholism (1993) Project MATCH (Matching Alcoholism Treatment to Client Heterogeneity): rationale and methods for a multisite clinical trial matching patients to alcoholism treatment. Alcoholism: Clinical and Experimental Research 17, 1130–1145.

Nielsen, B., Nielsen, A. S. and Wraae, O. (1998) Patient-treatment matching improves compliance of alcoholics in outpatient treatment. Journal of Nervous and Mental Disease 186, 752–760.

Pocock, S. J. (1983) Clinical Trials: A Practical Approach. Wiley, New York.

Pocock, S. J. and Simon, R. (1975) Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial. Biometrics 31, 103–115.

Simon, R. (1979) Restricted randomization designs in clinical trials. Biometrics 35, 503–512.

Stout, R. L., Wirtz, P. W., Carbonari, J. P. et al. (1994) Ensuring balanced distribution of prognostic factors in treatment outcome research. Journal of Studies on Alcohol, Suppl. 12, 70–75.

Taves, D. R. (1974) Minimization: a new method of assigning patients to treatment and control groups. Clinical Pharmacology and Therapeutics 15, 443–453.

Wei, L. J. (1978) An application of an urn model to the design of sequential controlled clinical trials. Journal of the American Statistical Association 73, 559–563.

Wei, L. J. and Lachin, J. M. (1988) Properties of the urn randomization in clinical trials. Controlled Clinical Trials 9, 345–364.




