ABSTRACT: Within cluster randomized trials, no algorithms exist to generate a full enumeration of a block randomization that balances covariates across treatment arms. Furthermore, for practical reasons multiple blocks are often required to fully randomize a study, and these may not be well balanced within blocks.
We present a convenient and easy-to-use randomization tool for allocation-concealed block randomization. Our algorithm highlights allocations that minimize imbalance between treatment groups across multiple baseline covariates. We demonstrate the algorithm using a cluster randomized trial in primary care (the PRE-EMPT Study) and show that the software incorporates a trade-off between independent random allocations, which are likely to be imbalanced, and predictable deterministic approaches, which would minimize imbalance. We extend the methodology from single-block randomization to allocation across multiple blocks, conditioning on previous allocations.
The algorithm is included as Additional file 1 and we advocate its use for robust randomization within cluster randomized trials.
BMC Medical Research Methodology 11/2008; 8(1):65. DOI:10.1186/1471-2288-8-65
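The approach described above can be illustrated with a minimal sketch: enumerate every equal-sized split of the clusters into two arms, score each split by its covariate imbalance, and then draw at random from the best-balanced candidates. This preserves an element of chance (unlike a fully deterministic minimization) while avoiding the worst imbalances of independent random allocation. The function below is a hypothetical illustration of this general idea, not the authors' software; the imbalance score (sum of absolute differences in arm means) and the size of the candidate set are assumptions.

```python
from itertools import combinations
import random

def balanced_allocation(covariates, n_keep=10, seed=0):
    """Enumerate all splits of clusters into two equal arms, score each
    by covariate imbalance (sum over covariates of the absolute
    difference in arm means), and draw one allocation at random from
    the n_keep best-balanced splits."""
    n = len(covariates)
    assert n % 2 == 0, "expects an even number of clusters"
    clusters = range(n)
    scored = []
    for arm_a in combinations(clusters, n // 2):
        arm_b = tuple(c for c in clusters if c not in arm_a)
        imbalance = sum(
            abs(sum(covariates[c][j] for c in arm_a) / len(arm_a)
                - sum(covariates[c][j] for c in arm_b) / len(arm_b))
            for j in range(len(covariates[0]))
        )
        scored.append((imbalance, arm_a, arm_b))
    scored.sort(key=lambda t: t[0])  # best-balanced splits first
    rng = random.Random(seed)
    return rng.choice(scored[:n_keep])

# Toy example: 6 clusters, two baseline covariates each
covs = [(10, 1.2), (12, 0.8), (9, 1.0), (15, 1.5), (11, 0.9), (13, 1.1)]
imbalance, arm_a, arm_b = balanced_allocation(covs)
```

Full enumeration is only feasible for the modest numbers of clusters typical of cluster randomized trials, since the number of splits grows combinatorially.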
ABSTRACT: Cluster randomized controlled trials are increasingly used to evaluate medical interventions. Research has found that cluster size variability leads to a reduction in the overall effective sample size. Although reporting standards for cluster trials have started to evolve, a far greater degree of transparency is needed to ensure that robust evidence is presented. The use of the number of patients recruited to summarize recruitment rate should be avoided in favour of an improved metric that illustrates cumulative power and accounts for cluster variability. Data from four trials are included to show the link between cluster size variability and imbalance. Furthermore, simulations demonstrate that by randomizing with a two-block randomization strategy and weighting the second block by cluster size recruitment, chance imbalance can be minimized.
Statistics in Medicine 12/2010; 29(29):2984-93. DOI:10.1002/sim.4050
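The two-block idea above can be sketched as follows: the first block is randomized without regard to cluster size, and once its recruitment is observed, clusters in the second block are allocated so as to offset the chance imbalance in patient numbers. The code below is an illustrative simplification (a greedy "assign to the lighter arm" rule), not the paper's exact weighting scheme; the function name and the greedy rule are assumptions.

```python
import random

def two_block_allocation(block1_sizes, block2_sizes, seed=0):
    """Sketch of a two-block strategy for two arms. Block 1 is
    randomized ignoring cluster size; block 2 clusters are then
    assigned, largest first, to whichever arm has the smaller
    running total of recruited patients, offsetting chance
    imbalance from block 1."""
    rng = random.Random(seed)
    arm_of = {}
    totals = [0, 0]  # recruited patients per arm
    # Block 1: simple random split into two equal arms
    order = list(range(len(block1_sizes)))
    rng.shuffle(order)
    for i, c in enumerate(order):
        arm = i % 2
        arm_of[("block1", c)] = arm
        totals[arm] += block1_sizes[c]
    # Block 2: largest clusters first, each to the lighter arm
    for c in sorted(range(len(block2_sizes)),
                    key=lambda c: -block2_sizes[c]):
        arm = 0 if totals[0] <= totals[1] else 1
        arm_of[("block2", c)] = arm
        totals[arm] += block2_sizes[c]
    return arm_of, totals
```

Because block 2 reacts to observed recruitment, the final difference in patient numbers between arms is typically far smaller than under two independent random blocks.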
ABSTRACT: Selection bias affects the evaluation of clinical trials, for example by elevating the type I error rate. We investigated the effect of selection bias on the type I error rate under permuted block randomization. We also considered stratified randomization in general and the special case of multicenter clinical trials, where we incorporated the preferences of the investigator into our approach. Finally, the effect of underrunning is modeled, that is, where the randomization list exceeds the actual number of patients taking part in the trial. For all situations, we illustrate and discuss the impact of selection bias on the type I error rate.
Statistics in Medicine 06/2011; 30(21):2573-81. DOI:10.1002/sim.4279
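The mechanism behind selection bias under permuted blocks can be demonstrated with a small simulation, a sketch assuming the standard "convergence" guessing strategy (guess whichever arm is currently under-represented within the block) rather than anything specific to this paper. With small blocks an unblinded investigator predicts well over half of allocations, which is how preferential enrolment can inflate the type I error rate.

```python
import random

def permuted_block_list(n_blocks, block_size, seed=0):
    """Generate a two-arm permuted-block randomization list."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        seq.extend(block)
    return seq

def convergence_guess_rate(seq, block_size):
    """Fraction of allocations predicted correctly by always guessing
    the arm currently under-represented within the block, and guessing
    at random when the block is balanced so far."""
    rng = random.Random(1)
    correct = 0
    for i, actual in enumerate(seq):
        start = (i // block_size) * block_size
        seen = seq[start:i]  # allocations observed so far in this block
        a, b = seen.count("A"), seen.count("B")
        guess = "A" if a < b else "B" if b < a else rng.choice("AB")
        correct += guess == actual
    return correct / len(seq)

seq = permuted_block_list(n_blocks=500, block_size=4)
rate = convergence_guess_rate(seq, block_size=4)  # well above 0.5
```

For blocks of size 4 the expected proportion of correct guesses is about 0.71 (0.5, 2/3, 2/3, and 1 at the four positions), versus 0.5 under simple randomization; larger blocks reduce, but do not eliminate, this predictability.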