2018-yu.pdf: “The heterogeneity problem in meta-analytic structural equation modeling (MASEM) revisited: A reply to Cheung”, Jia (Joya) Yu, Patrick E. Downes, Kameron M. Carter, Ernest O'Boyle
2017-wallach.pdf: “Evaluation of Evidence of Statistical Support and Corroboration of Subgroup Claims in Randomized Clinical Trials”, American Medical Association
2015-pedroza.pdf: “Performance of informative priors skeptical of large treatment effects in clinical trials: A simulation study” (2015-12-13):
One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regards to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50–2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23–4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
[Keywords: Bayesian analysis, informative priors, large treatment effects, binary data, clinical trial, robust priors]
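The recommended prior intervals above imply a specific prior standard deviation on the log scale: a normal prior on log(RR) centered at 0 (no effect) with a 95% interval of 0.50–2.0 on the ratio scale must have SD = log(2)/1.96 ≈ 0.354. A minimal sketch of this arithmetic (the helper name is hypothetical; this is not the paper's simulation code):

```python
import math

def skeptical_prior_sd(upper, z=1.96):
    """SD of a normal prior on log(RR) or log(OR), centered at 0 (no effect),
    whose central 95% interval on the ratio scale is [1/upper, upper]."""
    return math.log(upper) / z

sd_rr = skeptical_prior_sd(2.0)   # RR interval 0.50-2.0  -> SD ~ 0.354
sd_or = skeptical_prior_sd(4.35)  # OR interval 0.23-4.35 -> SD ~ 0.75

# Recover the interval on the ratio scale as a check:
lo, hi = math.exp(-1.96 * sd_rr), math.exp(1.96 * sd_rr)  # ~ (0.50, 2.00)
```

The wider OR interval (0.23–4.35) reflects that odds ratios exaggerate relative to risk ratios when outcome rates are not rare, so a numerically wider prior expresses comparable skepticism.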
2013-couzinfrankel.pdf: “Science Magazine”
2010-vesterinen.pdf: “Improving the translational hit of experimental treatments in multiple sclerosis” (2010-08-04):
Background: In other neurological diseases, the failure to translate pre-clinical findings to effective clinical treatments has been partially attributed to bias introduced by shortcomings in the design of animal experiments.
Objectives: Here we evaluate published studies of interventions in animal models of multiple sclerosis for methodological design and quality and to identify candidate interventions with the best evidence of efficacy.
Methods: A systematic review of the literature describing experiments testing the effectiveness of interventions in animal models of multiple sclerosis was carried out. Data were extracted for reported study quality and design and for neurobehavioural outcome. Weighted mean difference meta-analysis was used to provide summary estimates of efficacy for drugs reported in five or more publications.
Results: The use of a drug in a pre-clinical multiple sclerosis model was reported in 1,152 publications, of which 1,117 used experimental autoimmune encephalomyelitis (EAE). For the 36 interventions analysed in greater detail, neurobehavioural score was improved by 39.6% (95% CI 34.9–44.2%, p < 0.001). However, few studies reported measures to reduce bias, and those reporting randomization or blinding found statistically significantly smaller effect sizes.
Conclusions: EAE has proven to be a valuable model in elucidating pathogenesis as well as identifying candidate therapies for multiple sclerosis. However, there is an inconsistent application of measures to limit bias that could be addressed by adopting methodological best practice in study design. Our analysis provides an estimate of sample size required for different levels of power in future studies and suggests a number of interventions for which there are substantial animal data supporting efficacy.
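The weighted mean difference pooling used in such meta-analyses can be sketched as standard fixed-effect inverse-variance weighting (a toy illustration under that assumption; the authors' actual weighting scheme may differ in detail):

```python
import math

def weighted_mean_difference(effects, variances, z=1.96):
    """Fixed-effect inverse-variance pooling: each study's mean difference
    is weighted by the reciprocal of its variance, so precise studies
    dominate the pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))          # SE of the pooled estimate
    return pooled, (pooled - z * se, pooled + z * se)

# Toy data: three studies' mean differences in neurobehavioural score,
# with their variances (values are illustrative, not from the paper).
pooled, ci = weighted_mean_difference([4.0, 6.0, 5.0], [1.0, 2.0, 4.0])
```

The same weight sum also drives the power/sample-size estimates the Conclusions mention: a planned study's required precision can be read off from how much its inverse-variance weight would shrink the pooled standard error.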
2006-peters.pdf: “A Systematic Review of Systematic Reviews and Meta-Analyses of Animal Experiments with Guidelines for Reporting” (2006):
To maximize the findings of animal experiments to inform likely health effects in humans, a thorough review and evaluation of the animal evidence is required. Systematic reviews and, where appropriate, meta-analyses have great potential in facilitating such an evaluation, making efficient use of the animal evidence while minimizing possible sources of bias. The extent to which systematic review and meta-analysis methods have been applied to evaluate animal experiments to inform human health is unknown.
Using systematic review methods, we examine the extent and quality of systematic reviews and meta-analyses of in vivo animal experiments carried out to inform human health. We identified 103 articles meeting the inclusion criteria: 57 reported a systematic review, 29 a systematic review and a meta-analysis, and 17 reported a meta-analysis only.
The use of these methods to evaluate animal evidence has increased over time. Although the reporting of systematic reviews is of adequate quality, the reporting of meta-analyses is poor. The inadequate reporting of meta-analyses observed here leads to questions on whether the most appropriate methods were used to maximize the use of the animal evidence to inform policy or decision-making. We recommend that guidelines proposed here be used to help improve the reporting of systematic reviews and meta-analyses of animal experiments.
Further consideration of the use, methodological quality, and reporting of such studies is needed.
[Keywords: animal experiments, guidelines, meta-analysis, reporting, review, systematic review]
2004-hunterschmidt-methodsofmetaanalysis.pdf: “Methods of Meta-Analysis: Correcting Error and Bias in Research Findings”, John E. Hunter, Frank L. Schmidt
2000-olson.pdf: “Concordance of the Toxicity of Pharmaceuticals in Humans and in Animals”, H. Olson et al.