/docs/statistics/meta-analysis/ Directory Listing

Directories

Files

  • 2018-yu.pdf: “The heterogeneity problem in meta-analytic structural equation modeling (MASEM) revisited: A reply to Cheung”, Jia (Joya) Yu, Patrick E. Downes, Kameron M. Carter, Ernest O'Boyle

  • 2017-wallach.pdf: “Evaluation of Evidence of Statistical Support and Corroboration of Subgroup Claims in Randomized Clinical Trials”, Joshua D. Wallach et al. (backlinks)

  • 2015-pedroza.pdf: Claudia Pedroza, Weilu Han, Van Thi Thanh Truong, Charles Green, Jon E. Tyson (2015-12-13; backlinks):

    One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50–2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23–4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.

    [Keywords: Bayesian analysis, informative priors, large treatment effects, binary data, clinical trial, robust priors]
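    The prior widths quoted above follow from simple arithmetic: a mean-zero normal prior on log(RR) whose central 95% interval is 0.50–2.0 has SD = log(2)/1.96 ≈ 0.35 on the log scale. A minimal sketch of that calculation and of the resulting shrinkage under an approximate-normal likelihood (not the authors' code; the function name and trial numbers are invented for illustration):

```python
# Minimal sketch (not from the paper): width of a skeptical normal prior on
# log(RR), and how it shrinks a hypothetical large observed effect via a
# conjugate (precision-weighted) normal update.
import numpy as np
from scipy import stats

def skeptical_prior_sd(upper_rr=2.0, level=0.95):
    """SD of a mean-zero normal prior on log(RR) whose central
    `level` interval spans (1/upper_rr, upper_rr)."""
    z = stats.norm.ppf(0.5 + level / 2)            # 1.96 for a 95% interval
    return np.log(upper_rr) / z

sd = skeptical_prior_sd(2.0)                       # ~0.354 on the log scale
lo, hi = np.exp(stats.norm.interval(0.95, loc=0, scale=sd))
print(f"prior SD = {sd:.3f}; 95% prior interval for RR = ({lo:.2f}, {hi:.2f})")

# Hypothetical trial estimate: observed RR = 0.60 with SE 0.25 on the log scale.
obs_log_rr, obs_se = np.log(0.60), 0.25
post_var = 1 / (1 / sd**2 + 1 / obs_se**2)
post_mean = post_var * (obs_log_rr / obs_se**2)    # prior mean is 0
print(f"posterior median RR ≈ {np.exp(post_mean):.2f} (shrunk toward 1 from 0.60)")
```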

  • 2013-couzinfrankel.pdf: Jennifer Couzin-Frankel, Science (backlinks)

  • 2010-vesterinen.pdf: Hanna M. Vesterinen, Emily S. Sena, Charles ffrench-Constant, Anna Williams, Siddharthan Chandran, Malcolm R. Macleod (2010-08-04; backlinks):

    Background: In other neurological diseases, the failure to translate pre-clinical findings to effective clinical treatments has been partially attributed to bias introduced by shortcomings in the design of animal experiments.

    Objectives: Here we evaluate published studies of interventions in animal models of multiple sclerosis for methodological design and quality, and identify candidate interventions with the best evidence of efficacy.

    Methods: A systematic review of the literature describing experiments testing the effectiveness of interventions in animal models of multiple sclerosis was carried out. Data were extracted for reported study quality and design and for neurobehavioural outcome. Weighted mean difference meta-analysis was used to provide summary estimates of the efficacy for drugs where this was reported in five or more publications.

    Results: The use of a drug in a pre-clinical multiple sclerosis model was reported in 1152 publications, of which 1117 were experimental autoimmune encephalomyelitis (EAE). For 36 interventions analysed in greater detail, neurobehavioural score was improved by 39.6% (95% CI 34.9–44.2%, p < 0.001). However, few studies reported measures to reduce bias, and those reporting randomization or blinding found statistically-significantly smaller effect sizes.

    Conclusions: EAE has proven to be a valuable model in elucidating pathogenesis as well as identifying candidate therapies for multiple sclerosis. However, there is an inconsistent application of measures to limit bias that could be addressed by adopting methodological best practice in study design. Our analysis provides an estimate of sample size required for different levels of power in future studies and suggests a number of interventions for which there are substantial animal data supporting efficacy.
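    The pooling step named in the Methods (weighted mean difference meta-analysis) is, at bottom, inverse-variance weighting of per-study mean differences. A minimal generic sketch of the fixed-effect version (not the authors' code; the per-study values below are invented, and the paper's actual analysis may use a random-effects model):

```python
# Minimal sketch (not the authors' code): fixed-effect inverse-variance pooling
# of per-study mean differences, the basic step in a weighted mean difference
# meta-analysis. Study values are invented for illustration.
import numpy as np

def pooled_mean_difference(md, se):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1 / se**2                                  # weight = inverse variance
    pooled = np.sum(w * md) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical neurobehavioural score differences (treatment - control) and SEs:
est, (lo, hi) = pooled_mean_difference(md=[1.2, 0.8, 1.5, 0.4], se=[0.5, 0.3, 0.6, 0.4])
print(f"pooled mean difference = {est:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```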

  • 2006-peters.pdf: Jaime L. Peters, Alex J. Sutton, David R. Jones, Lesley Rushton, Keith R. Abrams (2006; backlinks):

    To maximize the use of findings from animal experiments to inform likely health effects in humans, a thorough review and evaluation of the animal evidence is required. Systematic reviews and, where appropriate, meta-analyses have great potential in facilitating such an evaluation, making efficient use of the animal evidence while minimizing possible sources of bias. The extent to which systematic review and meta-analysis methods have been applied to evaluate animal experiments to inform human health is unknown.

    Using systematic review methods, we examine the extent and quality of systematic reviews and meta-analyses of in vivo animal experiments carried out to inform human health. We identified 103 articles meeting the inclusion criteria: 57 reported a systematic review, 29 a systematic review and a meta-analysis, and 17 reported a meta-analysis only.

    The use of these methods to evaluate animal evidence has increased over time. Although the reporting of systematic reviews is of adequate quality, the reporting of meta-analyses is poor. The inadequate reporting of meta-analyses observed here leads to questions on whether the most appropriate methods were used to maximize the use of the animal evidence to inform policy or decision-making. We recommend that guidelines proposed here be used to help improve the reporting of systematic reviews and meta-analyses of animal experiments.

    Further consideration of the use and methodological quality and reporting of such studies is needed.

    [Keywords: animal experiments, guidelines, meta-analysis, reporting, review, systematic review]

  • 2004-hunterschmidt-methodsofmetaanalysis.pdf: “Methods of Meta-Analysis: Correcting Error and Bias in Research Findings”, John E. Hunter, Frank L. Schmidt (backlinks)

  • 2000-olson.pdf: “Concordance of the Toxicity of Pharmaceuticals in Humans and in Animals”⁠, Olson, H., et al. (backlinks)

  • 2012-mitchell.pdf (backlinks)

  • 2008-whiteside.pdf (backlinks)

  • 2007-sena.pdf (backlinks)

  • 2007-ocollins.pdf (backlinks)

  • 2007-dixit.pdf (backlinks)

  • 2006-hackam.pdf (backlinks)

  • 2005-macleod.pdf (backlinks)

  • 2005-macleod-2.pdf (backlinks)

  • 2004-greaves.pdf (backlinks)

  • 2003-lee.pdf (backlinks)

  • 2002-sandercock.pdf (backlinks)

  • 2002-ikonomidou.pdf (backlinks)

  • 1994-cook-metanalysisexplanationcasebook.pdf (backlinks)

  • 1992-schmidt.pdf (backlinks)

  • 1992-lipsey.pdf (backlinks)

  • 1990-crowley.pdf

  • 1987-pocock.pdf (backlinks)

  • 1986-wilbourn.pdf (backlinks)

  • 1985-godfrey.pdf (backlinks)

  • 1970-schein.pdf (backlinks)