Regression To The Mean Fallacies

Regression to the mean is a general statistical phenomenon which leads to several widespread fallacies in analyzing & interpreting statistical results, such as residual confounding and Lord's paradox.
bibliography⁠, psychology⁠, statistics⁠, genetics
2021-05-20–2021-06-11 · in progress · certainty: possible · importance: 5


Regression to the mean has many causes, but it also leads to additional errors, particularly when combined with measurement error:


  1. This is part of why results in sociology/epidemiology/psychology are so unreliable: everything is correlated, but not only do researchers usually not control for genetics at all, they don’t even control for the things they think they control for! You have not controlled for SES by throwing in a discretized income variable measured in one year plus a discretized college-degree variable. Variables which correlate with or predict some outcome such as poverty may be doing no more than correcting some measurement error (frequently, given the heavy genetic loading of most outcomes, correcting the omission of genetic information). This is why within-family designs are desirable even without worries about genetics: they hold constant shared-environment factors, so you don’t need to measure or model them. Even a structural equation model (SEM) which explicitly incorporates measurement error may still have enough leakage to render ‘controlling’ misleading. Such confounding, where highly-imperfect correlations drive pseudo-causal effects (which are just regression to the mean), is doubtless a reason why so many apparently-well-controlled & highly-replicable correlations fail in RCTs⁠.↩︎

  2. The draft version is “Two statistical paradoxes in the interpretation of group differences: Illustrated with medical school admission and licensing data”.↩︎

  3. Including but not limited to researcher malpractice; eg the use of “genome-wide statistical-significance” thresholds to filter GWAS hits ensures a “winner’s curse”, and (contra critics) subsequent regression to the mean.↩︎
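The residual-confounding mechanism in footnote 1 can be illustrated with a small simulation (all parameters here are illustrative assumptions, not estimates from any real dataset): an exposure with no causal effect on the outcome still shows a substantial regression coefficient after ‘controlling’ for a noisy proxy of the true confounder, while controlling for the confounder itself removes it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True confounder (think 'SES'); the exposure X has NO causal effect on Y,
# but both X and Y depend on the confounder.
ses = rng.normal(size=n)
x = 0.7 * ses + rng.normal(size=n)   # exposure
y = 0.7 * ses + rng.normal(size=n)   # outcome; X is causally null

# Noisy proxy of the confounder (reliability 0.5), standing in for
# something like 'discretized income measured in one year'.
ses_obs = ses + rng.normal(size=n)

def ols_coef(y, *covs):
    """OLS slope of y on the first covariate, controlling for the rest."""
    X = np.column_stack([np.ones(len(y)), *covs])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]

b_true_control = ols_coef(y, x, ses)       # control with the true confounder
b_noisy_control = ols_coef(y, x, ses_obs)  # control with the noisy proxy

print(f"X coefficient, true control:  {b_true_control:.3f}")   # ~0: confounding removed
print(f"X coefficient, noisy control: {b_noisy_control:.3f}")  # clearly > 0: residual confounding
```

The noisy control removes only part of the confounding (here, the analytic value of the leftover X coefficient is about 0.2), which is exactly the pseudo-causal effect the footnote describes.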
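The winner’s-curse point in footnote 3 can likewise be sketched (again with made-up illustrative numbers, not real GWAS data): filtering noisy effect estimates by a significance threshold selects for lucky upward errors, so the surviving discovery estimates systematically overstate the true effects, and an exact replication regresses toward the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200_000   # candidate variants (illustrative)
se = 0.05     # per-variant standard error in the discovery study

# Assume 5% of variants have a real effect; the rest are null.
causal = rng.random(m) < 0.05
beta_true = np.where(causal, rng.normal(0, 0.1, size=m), 0.0)
beta_hat = beta_true + rng.normal(0, se, size=m)  # noisy discovery estimates

# Significance filter (|z| > 4 here for illustration; the GWAS
# genome-wide-significance convention is ~5.45, i.e. p < 5e-8).
hits = np.abs(beta_hat / se) > 4

# Winner's curse: conditioning on passing the threshold selects for lucky
# upward noise, so among hits the discovery estimates exceed the true effects.
print(f"hits: {hits.sum()}")
print(f"mean |discovery estimate| among hits: {np.abs(beta_hat[hits]).mean():.3f}")
print(f"mean |true effect| among hits:        {np.abs(beta_true[hits]).mean():.3f}")
```

A replication study estimating the same hits with fresh noise would scatter around the (smaller) true effects, which is the regression to the mean that the footnote says the significance filter guarantees.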