A cost-benefit analysis of IVF-based embryo selection for intelligence with 2015-2016 state-of-the-art (decision theory, biology, psychology, statistics, transhumanism)
created: 22 Jan 2016; modified: 08 Feb 2017; status: finished; belief: likely

With genetic predictors of a phenotypic trait, it is possible to select embryos during an in vitro fertilization process to increase or decrease that trait. I consider the case of human intelligence using SNP-based genetic prediction, finding:

  • a meta-analysis of GCTA results indicates that SNPs can explain >33% of variance in current intelligence scores, and >44% with better-quality phenotype testing
  • this sets an upper bound on the effectiveness of selection: a gain of 9 IQ points when selecting the top embryo out of 10
  • the best 2016 polygenic score could achieve a gain of ~3 IQ points when selecting out of 10
  • the cost of embryo selection is modest, at $1500 + $200 per embryo, with the sequencing cost projected to drop rapidly
  • a model of the IVF process, incorporating number of extracted eggs, losses to abnormalities & vitrification & failed implantation & miscarriages from 2 real IVF patient populations, estimates feasible gains of 0.39 & 0.68 IQ points
  • embryo selection is currently unprofitable (mean: -$358) in the USA under the lowest estimate of the value of an IQ point, but profitable under the highest (mean: $6230). The main constraint on selection profitability is the polygenic score; under the highest value, the NPV EVPI of a perfect SNP predictor is $24b and the EVSI per education/SNP sample is $71k
  • under the worst-case estimate, selection can be made profitable with a better polygenic score, which would require n>237,300 using education phenotype data (and much less using fluid intelligence measures)
  • selection can be made more effective by selecting on multiple phenotype traits: considering an example using 7 traits (IQ/height/BMI/diabetes/ADHD/bipolar/schizophrenia), there is a gain of 2.3x over IQ alone (total of $14653); the outperformance of multiple selection remains after adjusting for genetic correlations & polygenic scores and using a broader set of 16 traits with an estimated gain of 1.4x.

Embryo selection cost-effectiveness

In vitro fertilization (IVF) is a medical procedure for infertile women in which eggs are extracted, fertilized with sperm, allowed to develop into embryos, and an embryo transferred into the womb to induce pregnancy. The choice of embryo is relatively arbitrary, with the best-looking one implanted; various tests can be run on embryos, including genome sequencing, by extracting a few cells from an embryo.

Preimplantation genetic profiling / preimplantation genetic diagnosis / preimplantation genetic screening (PGD; review) is when genetic information is taken and used to choose which embryo to implant. Currently, embryos are screened for gross abnormalities like the wrong number of chromosomes which would either be fatal to the embryo’s development or cause birth defects like Down syndrome (so there is no point in implanting it rather than a healthier embryo).

But with ever-cheaper SNP arrays, large amounts of subtler genetic information become available: one could check for abnormalities and also start making predictions about adult phenotypes, choosing embryos with higher/lower probability of traits with many known genetic hits such as height, intelligence, alcoholism, or schizophrenia - thus, in effect, creating designer babies with proven technology no more exotic than IVF and 23andMe. For example, increases in height have long been linked to increased career success & life satisfaction, with estimates like +$800 per inch per year of income, which, combined with polygenic scores predicting a decent fraction of variance, could be valuable. But height, hair color, and other such traits are in general zero-sum and far less important to life outcomes than personality or intelligence, which profoundly influence an enormous range of outcomes ranging from academic success to income to longevity to violence to happiness to altruism (so increases in them are far from frivolous, as some commenters have labeled them); since the personality GWASes have had difficulties (probably due to non-additivity of the relevant genes, connected to the predicted frequency-dependent selection), that leaves intelligence as the most important case.

Discussions of this possibility have often led to both overheated prophecies of genius babies or super-babies, and to dismissive scoffing that such methods are either impossible or of trivial value; unfortunately, specific numbers and calculations backing up either view tend to be lacking, even in cases where the effect can be predicted easily from behavioral genetics and shown to be not as large as laymen might expect & consistent with the results (for example, the genius sperm bank1).

In Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?, Shulman & Bostrom 2014 consider the potential of embryo selection for greater intelligence in a little detail, ultimately concluding that in the most applicable current scenario of minimal uptake (restricted largely to those forced into IVF use) and gains of a few IQ points, embryo selection is more of a curiosity than game-changer, as it will be "Socially negligible over one generation. Effects of social controversy more important than direct impacts." Some things are left out of their analysis which I’m interested in:

  1. they give the upper bound on the IQ gain that can be expected from a given level of selection & then-current imprecise GCTA heritability estimates, but not the gain that could be expected with updated figures: is it a large or small fraction of that maximum? And they give a general description of what societal effects might be expected from combinations of IQ gains and prevalence, but can we say something more rigorously about that?
  2. their level of selection may bear little resemblance to what can be practically obtained given the realities of IVF and high embryo attrition rates (selecting from 1 in 10 embryos may yield x IQ points, but how many real embryos would we need to implement that, since if we extract 10 embryos, 3 might be abnormal, the best candidate might fail to implant, the second-best might result in a miscarriage, etc?)
  3. there is no attempt to estimate costs nor whether embryo selection right now is worth the costs, or how much better our selection ability would need to be to make it worthwhile. Are the advantages compelling enough that ordinary parents would pay for it and take the practice out of the lab?
  4. if it is not worthwhile because the genetic information is too weakly predictive of adult phenotype, how much additional data would it take to make the predictions good enough to make selection worthwhile?
  5. What are the prospects for embryo editing instead of selection, in theory and right now?

Benefit

Value of IQ

Shulman & Bostrom 2014 note that

Studies in labor economics typically find that one IQ point corresponds to an increase in wages on the order of 1 per cent, other things equal, though higher estimates are obtained when effects of IQ on educational attainment are included (Zax and Rees, 2002; Neal and Johnson, 1996; Cawley et al., 1997; Behrman et al., 2004; Bowles et al., 2002; Grosse et al., 2002).2 The individual increase in earnings from a genetic intervention can be assessed in the same fashion as prenatal care and similar environmental interventions. One study of efforts to avert low birth weight estimated the value of a 1 per cent increase in earnings for a newborn in the US to be between $2,783 and $13,744, depending on discount rate and future wage growth (Brooks-Gunn et al., 2009)2

The given low/high range is based on 2006 data; inflation-adjusted to 2016 dollars (as appropriate since it is being compared to 2015/2016 costs), that would be $3270 and $16151. There is much more that can be said on this topic, starting with various measurements of individuals from income to wealth to correlations with occupational prestige, looking at longitudinal & cross-sectional national wealth data, positive externalities & psychological differences (such as increasing cooperativeness, patience, free-market and moderate politics), verification of causality from longitudinal predictiveness, genetic overlap, within-family comparisons, & exogenous shocks, positive (iodization & iron) or negative (lead), etc; an incomplete bibliography is provided as an appendix. As polygenic scores & genetically-informed designs are slowly adopted by the social sciences, we can expect more known correlations to be confirmed as causally downstream of genetic intelligence. These downstream effects likely include not just income and education, but behavioral measures as well: Weiss 2000 notes that in the National Longitudinal Survey of Youth data, a 3 point IQ increase predicts 28% less risk of high school dropout, 25% less risk of poverty or (for men) being jailed, 20% less risk of parentless children, 18% less risk of going on welfare, and 15% less risk of out-of-wedlock births. Anders Sandberg provides a descriptive table (expanded from Gottfredson 2003, itself adapted from Gottfredson 1997):

Population distribution of IQ by intellectual capacity, common jobs, and social dysfunctionality

Estimating the value of an additional IQ point is difficult as there are many perspectives one could take: zero-sum, including only personal earnings or wealth and neglecting all the wealth produced for society (eg through research), often based on correlating income with intelligence scores or education; positive-sum, attempting to include the positive externalities, perhaps through cross-sectional or longitudinal global comparisons, as intelligence predicts later wealth and the wealth of a country is closely linked to the average intelligence of its population, which captures many (but not all) of the positive externalities; measures which include the greater longevity & happiness of more intelligent people, etc. Further, intelligence has intrinsic value of its own, and the genetic hits appear to be pleiotropic and improve other desirable traits (consistent with the mutation-selection balance evolutionary theory of persistent intelligence differences); the intelligence/longevity correlation has been found to be due to common genetics, and Krapohl et al 2015 examines the correlation of polygenic scores with 50 diverse traits, finding that the college/IQ polygenic scores correlate with 10+ of them in generally desirable directions3, similar to Hagenaars et al 20164 & Hill et al 2016 (graph), indicating both causation for those correlations & benefits beyond income. (For a more detailed discussion of embryo selection on multiple traits and whether genetic correlations increase or decrease selection gains, see later.) There are also pitfalls, like the fallacy of controlling for an intermediate variable, exemplified by studies which attempt to correlate intelligence with income after controlling for education, despite knowing that educational attainment is caused by intelligence, so that their estimates are actually something irrelevant, like the gain from greater intelligence through channels other than its effect on education. Estimates have come from a variety of sources, such as iodine and lead studies, using a variety of methodologies from cross-sectional surveys or administrative data up to natural experiments. Given the difficulty of coming up with reliable estimates for the value of an IQ point, which would be a substantial research project in its own right (but highly useful in other analyses like lead remediation or iodization), I will just reuse the $3270-$16151 range.
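The inflation adjustment behind that range is simple arithmetic; as a minimal sketch in R, where the ~1.175 multiplier is the cumulative 2006-2016 CPI factor implied by the quoted figures (my assumption, not a number from Brooks-Gunn et al 2009):

## adjust the Brooks-Gunn et al 2009 low/high estimates from 2006 to 2016 dollars:
cpi2006to2016 <- 1.175 # assumed cumulative CPI factor, 2006-2016
round(c(2783, 13744) * cpi2006to2016)
# [1]  3270 16149

which matches the $3270/$16151 figures used above to within rounding.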

Polygenic scores for IQ

SNP

Shulman & Bostrom’s upper bound works as follows:

Standard practice today involves the creation of fewer than ten embryos. Selection among greater numbers than that would require multiple IVF cycles, which is expensive and burdensome. Therefore 1-in-10 selection may represent an upper limit of what would currently be practically feasible …The standard deviation of IQ in the population is about 15. Davies et al. (2011) estimates that common additive variation can account for half of variance in adult fluid intelligence in its sample. Siblings share half their genetic material on average (ignoring the known assortative mating for intelligence, which will reduce the visible variance among embryos). Thus, in a crude estimate, variance is cut by 75 per cent and standard deviation by 50 per cent. Adjustments for assortative mating, deviation from the Gaussian distribution, and other factors would adjust this estimate, but not drastically. These figures were generated by simulating 10 million couples producing the listed number of embryos and selecting the one with the highest predicted IQ based on the additive variation.

Table 1. How the maximum amount of IQ gain (assuming a Gaussian distribution of predicted IQs among the embryos with a standard deviation of 7.5 points) might depend on the number of embryos used in selection:

| Selection | Average IQ gain |
|-----------|-----------------|
| 1 in 2    | 4.2             |
| 1 in 10   | 11.5            |
| 1 in 100  | 18.8            |
| 1 in 1000 | 24.3            |

That is, the full heritability of adult intelligence is ~0.8; a SNP chip records a few hundred thousand of the most common genetic variants in the population, and, treating each variant as having a simple additive increase-or-decrease effect on intelligence, Davies et al 2011’s GCTA (Genome-wide Complex Trait Analysis) estimates that those SNPs are responsible for 0.51 of variance; since siblings descend from the same two parents, they will share half the variants (just like dizygotic twins) and differ on the rest, so the SNPs can only predict up to ~0.25 of variance between siblings, and siblings are analogous to multiple embryos being considered for implantation in IVF (but not to sperm or eggs5); simulate n embryos by drawing from a normal distribution with an SD of ~0.5 or 7.5 IQ points and selecting the highest, and with various n, you get something like the table.
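A minimal simulation sketch of the procedure they describe (my reconstruction, not their code): draw each batch of n predicted IQs from a normal distribution with SD 7.5, select the best of the batch, and average the gain over many batches:

## expected IQ gain from selecting the best of n embryos, with predicted-IQ SD = 7.5:
selectionGain <- function(n, sd=7.5, iters=10000) {
    mean(replicate(iters, max(rnorm(n, mean=0, sd=sd)))) }
sapply(c(2, 10, 100, 1000), selectionGain)
## ~> roughly 4.2, 11.5, 18.8, 24.3, matching Table 1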

GCTA tells us how much the SNPs would explain if we knew all their effects exactly. This represents both an upper bound and a lower bound. It is a lower bound on heritability because:

  • only SNPs are used, which are a subset of all genetic variation excluding variants found in <1% of the population, copy-number variations, extremely rare or de novo mutations, etc

    Using techniques which boost genomic coverage like imputation based on whole-genomes could substantially increase the GCTA estimate. Yang et al 2015 demonstrated that using better imputation to make measured SNPs tag more causal variants drastically increased the GCTA estimate for height; Hill et al 2017 applied GCTA to both common variants (23%) and also to relatives to pick up rarer variants shared within families (31%), and found that combined, most/all of the estimated genetic variance was accounted for (23+31=54% vs 54% heritability in that dataset and a traditional heritability estimate of 50-80%).
  • the SNPs are statistically treated in an additive fashion, ignoring any contribution they may make through epistasis and dominance
  • GCTA estimates typically include no correction for measurement error in the phenotype data which can reduce GWAS estimates substantially (Liao et al 2014): a short IQ test, or a proxy like years of education, will correlate imperfectly with intelligence. This can be adjusted by psychometric formulas using test-retest reliability to get a true estimate (eg a GCTA estimate of 0.33 based on a short quiz with r=0.5 reliability might actually imply a true GCTA estimate more like 0.5, implying one could find much more of the genetic variants responsible for intelligence by running a GWAS with better - but probably slower & more expensive - IQ testing methods).

So GCTA is a lower bound on the total genetic contribution to any trait; use of whole-genome data and more sophisticated analysis will allow predictions beyond the GCTA. But the GCTA represents an upper bound on the state of the art approaches:

  • there are many SNPs (likely into the thousands) affecting intelligence
  • only a few are known to a high level of confidence, and the rest will take much larger samples to pin down
  • only relatively small SNP datasets, modeled additively, are feasible in terms of computing power and available implementations

So the current approaches of getting increasingly large SNP samples will not pass the GCTA ceiling. Polygenic scores based on large SNP samples modeled additively are what is available in 2015, and in practice are nowhere near the GCTA ceiling; hence, the state of the art is well below the outlined maximum IQ gains. Probably at some point whole-genomes will become cost-effective compared to SNP arrays, improvements will be made in modeling interactions, and much better polygenic scores approaching the full heritability of ~0.8 will become available; but not yet.

GCTA meta-analysis

Davies et al 2011’s 0.5 is a little outdated and small, based on n=3511 with correspondingly large imprecision in the GCTA estimates. We can do better by incorporating additional GCTAs which have been published since 2011.

Intelligence GCTA literature

I was able to find in total the following GCTA estimates:

  1. Genome-wide association studies establish that human intelligence is highly heritable and polygenic, Davies et al 2011 (supplementary)

    0.51(0.11); but Supplementary Table 1 (pg1) actually reports that in the combined sample, the no-cutoff gf h^2 is 0.53(0.10). The 0.51 estimate is drawn from a cryptic relatedness cutoff of <0.025. The samples are also reported aggregated into Scottish & English samples: 0.17 (0.20) & 0.99 (0.22) respectively. Sample ages:

    1. Lothian Birth Cohort 1921 (Scottish): _n_=550, 79.1 years average
    2. Lothian Birth Cohort 1936 (Scottish): _n_=1091, 69.5 years average
    3. Aberdeen Birth Cohort 1936 (Scottish): _n_=498, 64.6 years average
    4. Manchester and Newcastle longitudinal studies of cognitive aging cohorts (English): _n_=6063, 65 years median

    GCTA is not reported for the Norwegian sample, nor for the 4 samples individually, so I code Davies et al 2011 as 2 samples with weighted-average ages (70.82 and 65 respectively) <!-- (550*79.1 + 1091*69.5 + 498*64.6) / (550+1091+498) -->
  2. Most Reported Genetic Associations with General Intelligence Are Probably False Positives, Chabris et al 2012

    0.47; no measure of precision reported in paper or supplementary information but the relevant sample seems to be n=2,441 and so the standard error will be high.
  3. Genetic contributions to stability and change in intelligence from childhood to old age, Deary et al 2012

    The bivariate analysis resulted in estimates of the proportion of phenotypic variation explained by all SNPs for cognition, as follows: 0.48 (standard error 0.18) at age 11; and 0.28 (standard error 0.18) at age 65, 70 or 79.

    This re-reports the Aberdeen & Lothian Birth Cohorts from Davies et al 2011.
  4. Common DNA Markers Can Account for More Than Half of the Genetic Influence on Cognitive Abilities, Plomin et al 2013

    England/Wales TEDS cohort. Table 1: ".35 [.12, .58]" (95% CI; so presumably a standard error of `(0.35-0.12)/1.96 ~>0.117`), 12-year-old twins
  5. Childhood intelligence is heritable, highly polygenic and associated with FNBP1L, Benyamin et al 2014 (supplementary information)

    Cohorts from England, USA, Australia, Netherlands, & Scotland. pg4: TEDS (mean age 12yo, twins): 0.22(0.10), UMN (14yo, mostly twins6): 0.40(0.21), ALSPAC (9yo): 0.46(0.06)
  6. GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment, Rietveld et al 2013 (supplementary information)

    Education years phenotype. pg2: 0.224(0.042); mean age ~57 (using the supplementary information’s Table S4 on pg92 & equal-weighting all reported mean ages; majority of subjects are non-twin)
  7. Molecular genetic contributions to socioeconomic status and intelligence, Marioni et al 2014

    Generation Scotland cohort. Table 3: 0.29(0.05), median age 57.
  8. Results of a GWAS Plus: General Cognitive Ability Is Substantially Heritable and Massively Polygenic, Kirkpatrick et al 2014

    Two Minnesota family & twin cohorts. 0.35( 0.11), 11.78 & 17.48yos (average: 14.63)
  9. DNA evidence for strong genetic stability and increasing heritability of intelligence from age 7 to 12, Trzaskowski et al 2014a

    Rereports the TEDS cohort. pg4: age 7: 0.26(0.17); age 12: 0.45(0.14); used unrelated twins for the GCTA.
  10. Genetic influence on family socioeconomic status and children’s intelligence, Trzaskowski et al 2014b

    Table 2: 0.32(0.14); appears to be a followup to Trzaskowski et al 2014a & report on same dataset
  11. Genomic architecture of human neuroanatomical diversity, Toro et al 2014 (supplement)

    0.56(0.25)/0.52(0.25) (visual IQ vs performance IQ; mean: 0.54(0.25)); IMAGEN cohort (Ireland, England, Scotland, France, Germany, Norway), mean age 14.5
  12. Genetic contributions to variation in general cognitive function: a meta-analysis of genome-wide association studies in the CHARGE consortium (n=53949), Davies et al 2015

    ARIC (57.2yo, USA, n=6617): 0.29(0.05), HRS (70yo, USA, n=5976): 0.28(0.07); ages from Supplementary Information 2.

    The article reports doing GCTAs only on the ARIC & HRS samples, but Figure 4 shows a forest plot which includes GCTA estimates from two other groups, CAGES (Cognitive Ageing Genetics in England and Scotland Consortium) at ~0.5 & GS (Generation Scotland) at ~0.25. The CAGES datapoint is cited to Davies et al 2011, which did report 0.51, and the GS citation is incorrect; so presumably those two datapoints were previously reported GCTA estimates which Davies et al 2015 was meta-analyzing together with their 2 new ARIC/HRS estimates, and they simply didn’t mention that.
  13. A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence, Spain et al 2015

    0.174(0.017); but on the liability scale for extremely high intelligence, so of unclear relevance to normal variation and I don’t know how it can be converted to a SNP heritability equivalent to the others.
  14. Epigenetic age of the pre-frontal cortex is associated with neuritic plaques, amyloid load, and Alzheimer’s disease related cognitive functioning, Levine et al 2015

    As measures of cognitive function & aging, some sort of IQ test was done, with the GCTAs reported as 0/0, but no standard errors or other measures of precision were included and so it cannot be meta-analyzed. (Although with only n=700, orders of magnitude smaller than some other datapoints, the precision would be extremely poor and it is not much of a loss.)
  15. Genome-wide association study of cognitive functions and educational attainment in UK Biobank (N=112151), Davies et al 2016

    n=30801, 0.31(0.018) for verbal-numerical reasoning (13-item multiple choice, test-retest 0.65) in UK Biobank, mean age 56.91 (Supplementary Table S1)
  16. The genetic architecture of pediatric cognitive abilities in the Philadelphia Neurodevelopmental Cohort, Robinson et al 2015

    n=3689, 0.360(0.108) for the principal factor extracted from their battery of tests, non-twins mean age 13.7
  17. GWAS meta-analysis reveals novel loci and genetic correlates for general cognitive function: a report from the COGENT consortium, Trampush et al 2017:

    n=35298, 0.215(0.0001); not GCTA but LD score regression, with overlap with CHARGE (cohorts: CHS, FHS, HBCS, LBC1936 and NCNG); non-twin, mean age of 45.6

The earlier estimates tend to come from smaller samples and to run higher; since the heritability of intelligence increases with age, it would not be surprising if the GCTA estimates of the SNP contribution also increased with age.

Meta-analysis

Jian Yang says that GCTA estimates can be meta-analytically combined straightforwardly in the usual way. Excluding Chabris et al 2012 (no precision reported) and Spain et al 2015 and the duplicate Trzaskowski and doing a random-effects meta-analysis with mean age as a covariate:

gcta <- read.csv(stdin(), header=TRUE)
Study, N, HSNP, SE, Age.mean, Twin, Country
Davies et al 2011, 2139, 0.17, 0.2, 70.82, FALSE, Scotland
Davies et al 2011, 6063, 0.99, 0.22, 65, FALSE, England
Plomin et al 2013, 3154, 0.35, 0.117, 12, TRUE, England
Benyamin et al 2014, 3376, 0.40, 0.21, 14, TRUE, USA
Benyamin et al 2014, 5517, 0.46, 0.06, 9, FALSE, England
Rietveld et al 2013, 7959, 0.224, 0.042, 57.47, FALSE, international
Marioni et al 2014, 6609, 0.29, 0.05, 57, FALSE, Scotland
Kirkpatrick et al 2014, 3322, 0.35, 0.11, 14.63, FALSE, USA
Toro et al 2014, 1765, 0.54, 0.25, 14.5, FALSE, international
Davies et al 2015, 6617, 0.29, 0.05, 57.2, FALSE, USA
Davies et al 2015, 5976, 0.28, 0.07, 70, FALSE, USA
Davies et al 2016, 30801, 0.31, 0.018, 56.91, FALSE, England
Robinson et al 2015, 3689, 0.36, 0.108, 13.7, FALSE, USA


library(metafor)
## Model as continuous normal variable; heritabilities are ratios 0-1,
## but metafor doesn't support heritability ratios, or correlations with
## standard errors rather than _n_s (which grossly overstates precision)
## so, as is common and safe when the estimates are not near 0/1, we treat it
## as a standardized mean difference
rem <- rma(measure="SMD", yi=HSNP, sei=SE, data=gcta); rem
# ...estimate       se     zval     pval    ci.lb    ci.ub
#  0.3207   0.0253  12.6586   <.0001   0.2711   0.3704
remAge <- rma(yi=HSNP, sei=SE, mods = Age.mean, data=gcta); remAge
# Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#
# tau^2 (estimated amount of residual heterogeneity):     0.0001 (SE = 0.0010)
# tau (square root of estimated tau^2 value):             0.0100
# I^2 (residual heterogeneity / unaccounted variability): 2.64%
# H^2 (unaccounted variability / sampling variability):   1.03
# R^2 (amount of heterogeneity accounted for):            96.04%
#
# Test for Residual Heterogeneity:
# QE(df = 11) = 15.6885, p-val = 0.1531
#
# Test of Moderators (coefficient(s) 2):
# QM(df = 1) = 6.6593, p-val = 0.0099
#
# Model Results:
#
#          estimate      se     zval    pval    ci.lb    ci.ub
# intrcpt    0.4393  0.0523   8.3953  <.0001   0.3368   0.5419
# mods      -0.0025  0.0010  -2.5806  0.0099  -0.0044  -0.0006
remAgeT <- rma(yi=HSNP, sei=SE, mods = ~ Age.mean + Twin, data=gcta); remAgeT
# intrcpt      0.4505  0.0571   7.8929  <.0001   0.3387   0.5624
# Age.mean    -0.0027  0.0010  -2.5757  0.0100  -0.0047  -0.0006
# Twin TRUE   -0.0552  0.1119  -0.4939  0.6214  -0.2745   0.1640
gcta <- gcta[order(gcta$Age.mean),] # sort by age, young to old
forest(rma(yi=HSNP, sei=SE, data=gcta), slab=gcta$Study)
## so estimated heritability at 30yo:
0.4505 + 30*-0.0027
# [1] 0.3695
## Take a look at the possible existence of a quadratic trend as suggested
## by conventional IQ heritability results:
remAgeTQ <- rma(yi=HSNP, sei=SE, mods = ~ I(Age.mean^2) + Twin, data=gcta); remAgeTQ
# Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#
# tau^2 (estimated amount of residual heterogeneity):     0.0000 (SE = 0.0009)
# tau (square root of estimated tau^2 value):             0.0053
# I^2 (residual heterogeneity / unaccounted variability): 0.83%
# H^2 (unaccounted variability / sampling variability):   1.01
# R^2 (amount of heterogeneity accounted for):            98.87%
#
# Test for Residual Heterogeneity:
# QE(df = 10) = 16.1588, p-val = 0.0952
#
# Test of Moderators (coefficient(s) 2,3):
# QM(df = 2) = 6.2797, p-val = 0.0433
#
# Model Results:
#
#                estimate      se     zval    pval    ci.lb    ci.ub
# intrcpt          0.4150  0.0457   9.0879  <.0001   0.3255   0.5045
# I(Age.mean^2)   -0.0000  0.0000  -2.4524  0.0142  -0.0001  -0.0000
# Twin TRUE       -0.0476  0.1112  -0.4285  0.6683  -0.2656   0.1703
## does fit better but enough?

Forest plot for meta-analysis of GCTA estimates of total additive SNPs’ effect on intelligence/cognitive-ability

The regression results, residuals, and funnel plots are generally sensible. The overall estimate of ~0.30 is about what one would have predicted based on prior research: Polderman et al 2015, meta-analyzing thousands of twin studies on hundreds of measurements, finds wide dispersal among traits but an overall grand mean of 0.49, of which most is additive genetic effects, so combined with the usually greater measurement error of GCTA studies compared to twin registries (which can do detailed testing over many years) and the limitation of SNP arrays in measuring a subset of genetic variants, one would guess at a GCTA grand mean of about half that or ~0.25; more directly, Ge et al 2016 runs a GCTA-like SNP heritability algorithm on 551 traits available in the UK Biobank with a grand mean of 16% (supplementary All Tables, worksheet 3 Supp Table 1) and education/fluid-intelligence/numeric-memory/pairs-matching/prospective-memory/reaction-time at 29%/23%/15%/6%/11%/7% respectively.7 Hence, ~0.30 is a plausible result for any trait and for intelligence specifically.

There are two issues with some of the details:

  1. Davies et al 2011’s second sample, with a GCTA estimate of 0.99(0.22), is 3 standard errors away from the overall estimate.

    Nothing about the sample or procedures seem suspicious, so why is the estimate so high? The GCTA paper/manual do warn about the possibility of unstable estimation where parameter values escape to a boundary, and it is suspicious that this outlier is right at a boundary, so I suspect that that might be what happened in this procedure and if the Davies et al 2011 data were rerun, a more sensible value like 0.12 would be estimated.
  2. the estimates decrease with age rather than increase.

    I thought this might be driven by the samples using twins, which have been accused in the past of delivering higher heritability estimates due to higher SES of parents and correspondingly less environmental influence, but when added as a predictor, twin samples are non-statistically-significantly lower. My best guess so far is that the apparent trend is due to a lack of middle-aged samples: the studies jump all the way from 14yo to 57yo, so the usual quadratic curve of increasing heritability could be hidden and look flat, since the high estimates will all be missing from the middle. Testing this, I tried fitting a quadratic model instead, and as expected, it does fit somewhat better, but without using Bayesian methods it is hard to say how much better (a quick information-criterion comparison is sketched below). This question awaits publication of further GCTA intelligence samples with middle-aged subjects.
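One crude way to quantify "how much better" short of a full Bayesian treatment is an information-criterion comparison; this sketch refits the linear-age and quadratic-age models from above with maximum likelihood (rather than the default REML) so that their AICs are comparable across different fixed effects:

remAgeTML  <- rma(yi=HSNP, sei=SE, mods = ~ Age.mean + Twin,      data=gcta, method="ML")
remAgeTQML <- rma(yi=HSNP, sei=SE, mods = ~ I(Age.mean^2) + Twin, data=gcta, method="ML")
AIC(remAgeTML, remAgeTQML) # lower AIC = better penalized fit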

Correcting for measurement error

This meta-analytic summary is an underestimate of the true genetic effect for several reasons, including, as mentioned, measurement error. Using Spearman’s correction for attenuation, we can adjust for it.

Davies et al 2016 is the most convenient and precise GCTA estimate to work with, and reports a test-retest reliability of 0.65 for its 13-question verbal-numerical reasoning test. Its $h^2_{SNP}=0.31$ is a square, so it must be square-rooted to give an $r$: $\sqrt{0.31}=0.556$. We assume the SNP test-retest reliability is ~1, as genome sequencing is highly accurate due to repeated passes.

The correction for attenuation is $r_{x'y'} = \frac{r_{xy}}{\sqrt{r_{xx} \cdot r_{yy}}}$

x/y are IQ/SNPs, so:

$r_{x'y'} = \frac{\sqrt{0.31}}{\sqrt{1 \cdot 0.65}} = 0.691$

So $r_{SNP}$ is 0.691, and converting it back to $h^2_{SNP}$: $0.691^2 = 0.477481 \approx 0.48$, which is substantially larger than the measurement-error-contaminated underestimate of 0.31.

0.48 represents the true underlying genetic contribution recoverable with unlimited amounts of exactly-measured data; but all IQ tests are imperfect, so one may ask what the practical limit is with the best current IQ tests.

One of the best current IQ tests is the WAIS-IV full-scale IQ test, with a 32-day test-retest reliability of 0.93 (Table 2). Knowing the true GCTA estimate, we can work backwards, assuming $r_{yy}=0.93$, to figure out what such a test could deliver:

  1. $r_{x'y'} = \frac{r_{xy}}{\sqrt{r_{xx} \cdot r_{yy}}}$
  2. $0.691 = \frac{\sqrt{x}}{\sqrt{1 \cdot 0.93}}$
  3. $0.691 \cdot \sqrt{1 \cdot 0.93} = \sqrt{x}$
  4. $(0.691 \cdot \sqrt{1 \cdot 0.93})^2 = x$
  5. $0.444 = x$

The better IQ test delivers a gain of 0.444-0.31=0.134 or 43% more possible variance explicable, with 4% still left over compared to a perfect test.
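The same corrections as an R snippet (simply restating the arithmetic above):

h2snp  <- 0.31
relIQ  <- 0.65 # test-retest reliability of the 13-item verbal-numerical reasoning test
relSNP <- 1    # assume genotyping is essentially noiseless
rTrue  <- sqrt(h2snp) / sqrt(relSNP * relIQ)
rTrue^2                         # disattenuated SNP heritability
# [1] 0.4769230769
(rTrue * sqrt(relSNP * 0.93))^2 # what a WAIS-IV-quality test (reliability 0.93) could recover
# [1] 0.4435384615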

Measurement error has considerable implications for how GWASes will be run in years to come. As SNP costs decline from their 2016 cost of ~$50 and whole genomes from ~$1000, sample sizes >500,000 and into the millions will become routine, especially as whole-genome sequencing becomes a routine practice for all babies and for any patients with a serious disease (if nothing else, for pharmacogenomics reasons). Sample sizes in the millions will recover almost the full measured GCTA heritability of ~0.33 (eg Hsu’s argument that sparsity priors will recover all of IQ at ~n=1m); but at that point, additional samples become worthless as they will not be able to pass the measured ceiling of 0.33 and explain the full 0.48. Only better measurements will allow any further progress. Considering that a well-run IQ test will cost <$100, the crossover point may well have been passed with current n=400k datasets, where resources would be better put into fewer but better measured IQ/SNP datapoints rather than more low quality IQ/SNP datapoints.

GCTA-based upper bound on selection gains

Since half of the additive variance will be shared within family, we get 0.33*0.5 = 0.165 within-family variance, which gives sqrt(0.165) = 0.4062019202 SD or ~6.1 IQ points. (Occasionally within-family differences are cited in a format like "siblings have an average difference of 12 IQ points", which corresponds to a within-family SD of ~0.7-0.8 (0.8*15=12); you can also check what SD yields an average difference of 12 via simulation: eg `mean(abs(rnorm(n=10000000, mean=0, sd=0.71) - rnorm(n=10000000, mean=0, sd=0.71))) * 15` ~> 12.01807542.) We don’t care about means since we’re only looking at gains, so the mean of the within-family normal distribution can be set to 0.
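Or as a snippet, restating that arithmetic:

withinFamilyVariance <- 0.33 * 0.5
sqrt(withinFamilyVariance)      # within-family SD, in population-SD units
# [1] 0.4062019202
sqrt(withinFamilyVariance) * 15 # in IQ points
# [1] 6.093028803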

With that, we can write a simulation like Shulman & Bostrom where we generate n samples from $\mathcal{N}(0, 6.1)$, take the max, and return the difference of the max and mean. There are more efficient ways to compute the expected maximum, however, so we’ll use a lookup table computed using the lmomco library; for a discussion of alternative approximations & implementations, see Calculating The Expected Maximum of a Gaussian Sample using Order Statistics.

For generality to other continuous normally-distributed complex traits, we’ll work in standardized units rather than the IQ scale (SD=15), but convert back to points for easier reading:

exactMax <- function (n, mean=0, sd=1) { if(n>200) { library(lmomco) # if outside the 0-200 precomputed range:
              expect.max.ostat(n, para=vec2par(c(mean, sd), type="nor"), cdf=cdfnor, pdf=pdfnor) } else {
    lookup <- c(0,0,0.5641895835,0.8462843753,1.0293753730,1.1629644736,1.2672063606,1.3521783756,1.4236003060,
        1.4850131622,1.5387527308,1.5864363519,1.6292276399,1.6679901770,1.7033815541,1.7359134449,1.7659913931,
        1.7939419809,1.8200318790,1.8444815116,1.8674750598,1.8891679149,1.9096923217,1.9291617116,1.9476740742,
        1.9653146098,1.9821578398,1.9982693020,2.0137069241,2.0285221460,2.0427608442,2.0564640976,2.0696688279,
        2.0824083360,2.0947127558,2.1066094396,2.1181232867,2.1292770254,2.1400914552,2.1505856577,2.1607771781,
        2.1706821847,2.1803156075,2.1896912604,2.1988219487,2.2077195639,2.2163951679,2.2248590675,2.2331208808,
        2.2411895970,2.2490736293,2.2567808626,2.2643186963,2.2716940833,2.2789135645,2.2859833005,2.2929091006,
        2.2996964480,2.3063505243,2.3128762306,2.3192782072,2.3255608518,2.3317283357,2.3377846191,2.3437334651,
        2.3495784520,2.3553229856,2.3609703096,2.3665235160,2.3719855541,2.3773592389,2.3826472594,2.3878521858,
        2.3929764763,2.3980224835,2.4029924601,2.4078885649,2.4127128675,2.4174673530,2.4221539270,2.4267744193,
        2.4313305880,2.4358241231,2.4402566500,2.4446297329,2.4489448774,2.4532035335,2.4574070986,2.4615569196,
        2.4656542955,2.4697004768,2.4736966781,2.4776440650,2.4815437655,2.4853968699,2.4892044318,2.4929674704,
        2.4966869713,2.5003638885,2.5039991455,2.5075936364,2.5111482275,2.5146637581,2.5181410417,2.5215808672,
        2.5249839996,2.5283511812,2.5316831323,2.5349805521,2.5382441196,2.5414744943,2.5446723168,2.5478382097,
        2.5509727783,2.5540766110,2.5571502801,2.5601943423,2.5632093392,2.5661957981,2.5691542321,2.5720851410,
        2.5749890115,2.5778663175,2.5807175211,2.5835430725,2.5863434103,2.5891189625,2.5918701463,2.5945973686,
        2.5973010263,2.5999815069,2.6026391883,2.6052744395,2.6078876209,2.6104790841,2.6130491728,2.6155982225,
        2.6181265612,2.6206345093,2.6231223799,2.6255904791,2.6280391062,2.6304685538,2.6328791081,2.6352710490,
        2.6376446504,2.6400001801,2.6423379005,2.6446580681,2.6469609341,2.6492467445,2.6515157401,2.6537681566,
        2.6560042252,2.6582241720,2.6604282187,2.6626165826,2.6647894763,2.6669471086,2.6690896839,2.6712174028,
        2.6733304616,2.6754290533,2.6775133667,2.6795835873,2.6816398969,2.6836824739,2.6857114935,2.6877271274,
        2.6897295441,2.6917189092,2.6936953850,2.6956591311,2.6976103040,2.6995490574,2.7014755424,2.7033899072,
        2.7052922974,2.7071828562,2.7090617242,2.7109290393,2.7127849375,2.7146295520,2.7164630139,2.7182854522,
        2.7200969934,2.7218977622,2.7236878809,2.7254674700,2.7272366478,2.7289955308,2.7307442335,2.7324828686,
        2.7342115470,2.7359303775,2.7376394676,2.7393389228,2.7410288469,2.7427093423,2.7443805094,2.7460424475)

  return(mean + sd*lookup[n+1]) }}

## select 1 out of N embryos (default: siblings, who are half-related)
embryoSelection <- function(n, variance=0.33, relatedness=0.5) {
    exactMax(n, mean=0, sd=sqrt(variance*relatedness)); }
embryoSelection(n=10) * 15
# [1] 9.375664711
embryoSelection(n=10, variance=0.444) * 15
# [1] 10.87518323

So 1 out of 10 gives a maximal average gain of ~9 IQ points, less than Shulman & Bostrom’s 11.5 because of my lower GCTA estimate, but using better IQ tests like the WAIS, we could go as high as ~11 points.

For comparison, the full genetic heritability of adult IQ (including SNPs, mutation load & de novo mutations, copy-number variation, etc) is generally estimated at ~0.8, in which case the upper bound on selection out of 10 embryos would be ~14.5 IQ points:

embryoSelection(n=10, variance=0.8) * 15
# [1] 14.59789016

Polygenic scores

A SNP-based polygenic score works much the same way: it explains a certain fraction or percentage of the variance, halved due to siblings, and can be plugged in once we know how much less than 0.33 it is. An example of using SNP polygenic scores to identify genetic influences and verify they work within-family and are not confounded would be Domingue et al 2015’s Polygenic Influence on Educational Attainment. Past polygenic scores for intelligence:

  1. Genome-wide association studies establish that human intelligence is highly heritable and polygenic, Davies et al 2011:

    0.5776% of fluid intelligence in the NCNG replication sample, if I’ve understood their analysis correctly.
  2. GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment, Rietveld et al 2013:

    This landmark study providing the first GWAS hits on intelligence also estimated multiple polygenic scores: the full polygenic scores predicted 2% of variance in education, and 2.5% of variance in cognitive function in a Swedish replication sample, and also performed well in within-family settings (0.31% & 0.19% of variance in attending college & years of education, respectively, in Table S25).
  3. Childhood intelligence is heritable, highly polygenic and associated with FNBP1L, Benyamin et al 2014:

    0.5%, 1.2%, 3.5%.
  4. Results of a GWAS Plus: General Cognitive Ability Is Substantially Heritable and Massively Polygenic, Kirkpatrick et al 2014:

    0.55% (maximum in sub-samples: 0.7%)
  5. Common genetic variants associated with cognitive performance identified using the proxy-phenotype method, Rietveld et al 2014 (supplementary)

    Predicts 0.2% to 0.4% of variance in cognitive performance using a small polygenic score of 69 SNPs. (For other very small polygenic score uses, see also Domingue et al 2015 & Zhu et al 2015.)
  6. Genetic contributions to variation in general cognitive function: a meta-analysis of genome-wide association studies in the CHARGE consortium (N=53 949), Davies et al 2015:

    1.2%
  7. Shared genetic aetiology between cognitive functions and physical and mental health in UK Biobank (N=112 151) and 24 GWAS consortia, Hagenaars et al 2016 (supplementary data):

    Supplementary Table 4d reports predictive validity of the educational attainment polygenic score for childhood-cognitive-ability/college-degree/years-of-education in its other samples, yielding R2=0.0042/0.0214/0.0223 or 0.42%/2.14%/2.23% respectively. Particularly intriguing given its investigation of pleiotropy is Supplementary Table 5, which uses polygenic scores constructed for all the diseases in its data (eg type 2 diabetes, ADHD, schizophrenia, coronary artery disease), where all the disease scores & covariates are entered into the model and then the cognitive polygenic scores are able to predict even higher, as high as R2=0.063/0.046/0.064.
  8. Genome-wide association study of cognitive functions and educational attainment in UK Biobank (N=112151), Davies et al 2016:

    The Biobank polygenic score constructed for verbal-numerical reasoning predicted 0.98%/1.32% of g/gf scores in Generation Scotland, and 2.79% in Lothian Birth Cohort 1936 (Figure 2).
  9. GWAS for executive function and processing speed suggests involvement of the CADM2 gene, Ibrahim-Verbaas et al 2016

    Does not report a polygenic score.
  10. Upcoming in 2016 is another SSGAC paper with ~8x hits using n=305,000 but neither talk’s summary gives the polygenic score. (The polygenic score will be much better than the Rietveld et al 2013 polygenic score, and I intend to rerun the numbers when it comes out and becomes the new public state of the art.) In a March 2016 roundtable, a mention was made of an upcoming polygenic score explaining about 8-10% of IQ.
  11. in 2016, a consortium combined the SSGAC dataset with UK Biobank, expanding the combined dataset to >300,000 and yielding a total of 162 education hits; the results were reported in two papers, the latter giving the polygenic scores:

    The polygenic score predicts 3.5% of variance in intelligence, 7% in family SES, and 9% in education in a held-out sample.
  12. GWAS meta-analysis reveals novel loci and genetic correlates for general cognitive function: a report from the COGENT consortium, Trampush et al 2017: no polygenic score reported

Since these scores overlap and are not, like GCTA estimates, repeated measurements of a variable in any sense, there is little point in meta-analyzing them other than to estimate growth over time (even using them as an ensemble wouldn’t be worth the complexity, and in any case, most studies do not provide the full list of beta values making up the polygenic score); for our purpose, the largest polygenic score is the important number. (Emil Kirkegaard notes that the polygenic scores are also inefficient: polygenic scores are not always published, not always based on individual patient data, and generally use maximum-likelihood estimation neglecting our strong priors on the number of hits & distribution of effect sizes. But these published scores are what we have as of January 2016, so we must make do.)

Selzam et al 2016’s reported polygenic score for cognitive performance was 3.5%. Thus:

selzam2016 <- 0.035
embryoSelection(n=10, variance=selzam2016) * 15
# [1] 3.053367791
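For comparison, the same calculation with some of the other published polygenic scores listed above (eg Davies et al 2016’s ~1.3% in Generation Scotland, Rietveld et al 2013’s 2.5%) shows how slowly the gain grows with predictive power; purely illustrative:

sapply(c(0.013, 0.025, selzam2016), function(v) { embryoSelection(n=10, variance=v) * 15 })
## ~> roughly 1.86, 2.58, & 3.05 IQ points respectively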

Cost of embryo selection

PGD is currently legal everywhere in the USA, so there are no criminal or legal costs; even if there were, clinics in other countries will continue to offer it, and the cost of using a Chinese fertility clinic may not be particularly noticeable financially8 and their quality may eventually be higher9.

Cost of polygenic scores

An upper bound is the cost of whole-genome sequencing, which has continuously fallen. My impression is that, historically, a whole-genome has cost ~6x a comprehensive SNP array (500k+ SNPs). The NHGRI Genome Sequencing Program’s DNA Sequencing Cost dataset most recently records an October 2015 whole-genome cost of $1245. Illumina has boasted about a $1000 whole-genome starting around 2014 (under an unspecified cost model), and around December 2015, Veritas Genetics started taking orders for a consumer 20x whole-genome priced at $1000. So if a comprehensive SNP array cost >$1000, it would be cheaper to do a whole-genome, and historically at that price, we would expect a SNP array cost of ~$170.

The date & cost of getting a large selection of SNPs is not collected in any dataset I know of, so here are a few 2010-2016 price quotes. Tur-Kaspa et al 2010 estimates Genetic analyses of oocytes by polar bodies biopsy and embryos by blastomere biopsy at $3000. Hsu 2014 estimates an SNP array costs ~$100 USD and At the time of this writing SNP genotyping costs are below $50 USD per individual, without specifying a source; given the latter is below any 23andMe price offered, it is probably an internal Beijing Genomics Institute cost estimate. The Center for Applied Genomics price list (unspecified date but presumably 2015) lists Affymetrix SNP 6.0 at $355 & the Human Omni Express-24 at $170. 23andMe famously offered its services for $108.95 for >600k SNPs as of October 2014, but that price apparently was substantially subsidized by research & sales as they raised the price to $200 & lowered comprehensiveness in October 2015. NIH CIDR’s price list quotes a full cost of $150-$210 for 1 use of a UK Biobank 821K SNP Axiom Array (capabilities) as of 10 December 2015. (The NIH CIDR price list also says $40 for 96 SNPs, suggesting that it would be a false economy to try to get only the top few SNP hits rather than a comprehensive polygenic score.) Rockefeller University’s 2016 price list quotes a range of $260-$520 for one sample from an Affymetrix GeneChip. Tan et al 2014 note that for PGD purposes, the estimated reagent cost of sequencing for the detection of chromosomal abnormalities is currently less than $100. The price of the array & genotyping can be driven far below this by economies of scale: Hugh Watkins’s talk at the June 2014 UK Biobank conference says that they had reached a cost of ~$45 per SNP array10. (The UK Biobank overall has spent ~$110m 2003-2015, so genotyping 500,000 people at ~$45 each represents a large fraction of its total budget.) Razib Khan reports in October 2016 that for scientific purposes, SNP arrays have fallen to $50.

Overall, SNP genotyping an embryo in 2016 should cost ~$100-$400, more likely ~$200, and we can expect the cost to fall further.

SNP cost forecast

How much will SNP costs drop in the future?

We can extrapolate from the NHGRI Genome Sequencing Program’s DNA Sequencing Cost dataset, but it’s tricky: eyeballing their graph, we can see that historical prices have not followed any single pattern. At first, costs closely track a simple halving every 18 months; then there is an abrupt trend-break to super-exponential drops from mid-2007 to mid-2011, then an equally abrupt reversion to a flat cost trajectory with occasional price increases, and then another abrupt fall in early 2015 (accentuated when one adds in the Veritas Genetics $1k as a datapoint).

Dropping pre-2007 data and fitting an exponential shows a bad fit since 2012 (if it follows the pre-2015 curve, it has large prediction errors on 2015-2016 and vice-versa). It’s probably better to take the last 3 datapoints (the current trend) and fit the curve to them, covering just the past 6 months since July 2015; then, applying the 6x rule of thumb, we can predict SNP costs out ~20 months to October 2017:

# http://www.genome.gov/pages/der/seqcost2015_4.xlsx
genome <- c(9408739,9047003,8927342,7147571,3063820,1352982,752080,342502,232735,154714,108065,70333,46774,
            31512,31125,29092,20963,16712,10497,7743,7666,5901,5985,6618,5671,5826,5550,5096,4008,4920,4905,
            5731,3970,4211,1363,1245,1000)
l <- lm(log(I(tail(genome, 3))) ~ I(1:3)); l
# Coefficients:
# (Intercept)       I(1:3)
#  7.3937180   -0.1548441
exp(sapply(1:10, function(t) { 7.3937180 + -0.1548441*t } )) / 6
# [1] 232.08749421 198.79424215 170.27695028 145.85050092 124.92805739 107.00696553  91.65667754
# [8]  78.50840827  67.24627528  57.59970987

(Even if SNP prices stagnate due to lack of competition or fixed-costs/overhead/small-scales, whole-genomes will simply eat their lunch: at the current trend, whole-genomes will reach $200 ~2019 and $100 ~2020.)
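As a rough check on those dates, we can solve the fitted trend above for when whole-genome costs would cross $200 & $100, assuming (my assumption) that NHGRI datapoints continue to arrive roughly quarterly:

## periods ahead until the fitted whole-genome cost curve crosses a given price:
periodsUntil <- function(price) { (7.3937180 - log(price)) / 0.1548441 }
periodsUntil(200); periodsUntil(100)
## ~> ~13.5 & ~18 datapoints ahead; at ~1 datapoint per quarter from early 2016,
## that is roughly 2019 & 2020 respectively, as stated above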

PGD net costs

An IVF cycle involving PGD will need ~4-5 SNP genotypings (given a median egg count of 9 and half the embryos being abnormal), so I estimate the genetic part costs ~$800-$1000. The net cost of PGD will also include the cell-harvesting part (one needs to extract cells from embryos to sequence) and interpretation (although scoring and checking the genetic data for abnormality should be automatable), so we can compare with current PGD price quotes.

  • The Fertility Institutes say Average costs for the medical and genetic portions of the service provided by the Fertility Institutes approach $27,000 U.S (unspecified date) without breaking out the PGD part.
  • Tur-Kaspa et al 2010, using 2000-2005 data from the Reproductive Genetics Institute (RGI) in Illinois estimates the first PGD cycle at $6k, and subsequent at $4.5k, giving a full table of costs:

    Table 2: Estimated cost of IVF-preimplantation genetic diagnosis (PGD) treatment for cystic fibrosis (CF) carriers
    | Procedure | Subprocedure | Cost (US$) | Notes |
    |---|---|---|---|
    | IVF | Pre-IVF laboratory screening | 1000 | Range $600 to $2000; needs to be performed only once each year |
    | IVF | Medications | 3000 | Range $1500 to $5000 |
    | IVF | Cost of IVF treatment cycle | 12000 | Range $6000 to $18000 |
    | IVF | Total cost, first IVF cycle | 16000 | |
    | IVF | Total cost, each additional IVF cycle | 15000 | |
    | PGD | Genetic system set-up for PGD of a specific couple | 1500 | Range $1000 to $2000; performed once for a specific couple, with or without analysis of second generation, if applicable |
    | PGD | Biopsy of oocytes and embryos | 1500 | |
    | PGD | Genetic analyses of oocytes by polar bodies biopsy and embryos by blastomere biopsy | 3000 | Variable; upper end presented; depends on number of mutations anticipated |
    | PGD | Subtotal: cost of PGD, first cycle | 6000 | |
    | PGD | Subtotal: cost of PGD, each repeated cycle | 4500 | |
    | IVF-PGD | Total cost, first IVF-PGD cycle | 22000 | |
    | IVF-PGD | Total cost, each additional IVF-PGD cycle | 19500 | |

    …Overall, 35.6% of the IVF-PGD cycles yielded a life birth with one or more healthy babies. If IVF-PGD is not successful, the couple must decide whether to attempt another cycle of IVF-PGD (Figure 1) knowing that their probability of having a baby approaches 75% after only three treatment cycles and is predicted to exceed 93% after six treatment cycles (Table 3). If 4000 couples undergo one cycle of IVF-PGD, 1424 deliveries with non-affected children are expected (Table 3). Assuming a similar success rate of 35.6% in subsequent treatment cycles and that couples could elect to undergo between four and six attempts per year yields a cumulative success rate approaching 93%. IVF as performed in the USA typically involves the transfer of two or three embryos. The series yielded 1.3 non-affected babies per pregnancy with an average of about two embryos per transfer (Table 1). Thus, the number of resulting children would be higher than the number of deliveries, perhaps by as much as 30% (Table 3). Nonetheless, to avoid multiple births, which have both medical complications and an additional cost, the outcome was calculated as if each delivery results in the birth of one non-affected child. IVF-PGD cycles can be performed at an experienced centre. The estimated cost of performing the initial IVF cycle with intracytoplasmic sperm injection (ICSI) without PGD was $16,000 including laboratory and imaging screening, cost of medications, monitoring during ovarian stimulation and the IVF procedure per se (Table 2). The cost of subsequent IVF cycles was lower because the initial screening does not need to be repeated until a year later. Estimated PGD costs were $6000 for the initial cycle and $4500 for subsequent cycles. The cost for subsequent PGD cycles would be lower because the initial genetic set-up for couples (parents) and siblings for linked genetic markers and probes needs to be performed only once. These conditions yield an estimated cost of $22,000 for the initial cycle of IVF/ICSI-PGD and $19,500 for each subsequent treatment cycle.

  • Genetic Alliance UK claims (in 2012, based on PDF creation date) that The cost of PGD is typically split into two parts: procedural costs (consultations, laboratory testing, egg collection, embryo transfer, ultrasound scans, and blood tests) and drug costs (for ovarian stimulation and embryo transfer). PGD combined with IVF will cost £6,000 [$8.5k] - £9,000 [$12.8k] per treatment cycle. but doesn’t specify the marginal cost of the PGD rather than IVF part.
  • Reproductive Health Technologies Project (2013?): One round of IVF typically costs around $9,000. PGD adds another $4,000 to $7,500 to the cost of each IVF attempt. A standard round of IVF results in a successful pregnancy only 10-35% of the time (depending on the age and health of the woman), and a woman may need to undergo subsequent attempts to achieve a viable pregnancy.
  • Alzforum (July 2014): In Madison, Wisconsin, genetic counselor Margo Grady at Generations Fertility Care estimated the out-of-pocket price of one IVF cycle at about $12,000, and PGD adds another $3,000.
  • SDFC (2015?): PGD typically costs between $4,000-$10,000 depending on the cost of creating the specific probe used to detect the presence of a single gene.
  • Murugappan et al May 2015: The average cost of PGS was $4,268 (range $3,155-$12,626), citing another study which estimated Average additional cost of PGD procedure: $3,550; Median Cost: $3,200
  • the Advanced Fertility Center of Chicago (current pricing, so 2015?) says IVF costs ~$12k and of that, Aneuploidy testing (for chromosome normality) with PGD is $1800 to $5000…PGD costs in the US vary from about $4000-$8000. AFC usefully breaks down the costs further in a table of Average PGS IVF Costs in USA, saying that:

    • Embryo biopsy charges are about $1000 to $2500 (average: $1500)
    • Embryo freezing costs are usually between $500 to $1000 (average: $750)
    • Aneuploidy testing (for chromosome normality) with PGD is $1800 to $5000
    • For single gene defects (such as cystic fibrosis), there are additional costs involved.
    • PGS test cost average: $3500

    (The wording is unclear about whether these are costs per embryo or per batch of embryos; but the rest of the page implies that it’s per batch, and per embryo would imply that the other PGS cost estimates are either far too low or are being done on only one embryo & likely would fail.)

From the final AFC costs, we can see that the genetic testing makes up a large fraction of the cost. Since custom markers are not necessary and we are only looking at standard SNPs, the $1.8-5k genetic cost is a huge overestimate of the $1k the SNPs should cost now or soon. Their breakdown also implies that the embryo freezing/vitrification cost is counted as part of the PGS cost, but I don’t think this is right since one will need to store embryos regardless of whether one is doing PGS/selection (even if an embryo is going to be implanted right away in a live transfer, the other embryos need to be stored since the first one will probably fail). So the critical number here is that the embryo biopsy step costs $1500; there is probably little prospect of large price decreases here and we can take it as fixed.

Hence we can model the cost of embryo selection as a fixed $1.5k cost plus number of embryos times SNP cost.
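As a trivial helper for later cost-benefit calculations, under the estimates argued for above (a fixed ~$1500 biopsy cost plus ~$200 of SNP genotyping per embryo; the function name & defaults are mine):

embryoSelectionCost <- function(embryos, biopsyCost=1500, snpCost=200) { biopsyCost + embryos*snpCost }
embryoSelectionCost(5)
# [1] 2500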

Modeling embryo selection

In vitro fertilization is a sequential probabilistic process:

  1. harvest x eggs
  2. fertilize them and create x embryos
  3. culture the embryos to either cleavage (2-4 days) or blastocyst (5-6 days) stage; of them, y will still be alive & not grossly abnormal
  4. freeze the embryos
  5. optional: embryo selection using quality and PGS
  6. unfreeze & implant 1 embryo; if no embryos left, return to #1 or give up
  7. if no live birth, go to #6

(Each step is necessary and determines input into the next step; it is a leaky pipeline, whose total yield depends heavily on the least efficient step, so outcomes might be distributed log-normally.) A simulation of this process:

simulateIVF <- function (eggMean, eggSD, polygenicScoreVariance, normalityP=0.5, vitrificationP, liveBirth) {
  eggsExtracted <- max(0, round(rnorm(n=1, mean=eggMean, sd=eggSD)))

  normal        <- rbinom(1, eggsExtracted, prob=normalityP)

  scores        <- embryoScores(n=normal, variance=polygenicScoreVariance)

  survived      <- Filter(function(x){rbinom(1, 1, prob=vitrificationP)}, scores)

  selection <- sort(survived, decreasing=TRUE)
  if (length(selection)>0) {
   for (embryo in 1:length(selection)) {
   if (rbinom(1, 1, prob=liveBirth) == 1) {
     live <- selection[embryo]
     return(live)
     }
     }
    }
  }
simulateIVFs <- function(eggMean, eggSD, polygenicScoreVariance, normalityP, vitrificationP, liveBirth, iters=100000) {
  return(unlist(replicate(iters, simulateIVF(eggMean, eggSD, polygenicScoreVariance, normalityP, vitrificationP, liveBirth)))); }
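As a quick sanity check (embryoScores & selzam2016 are defined earlier on this page), turning off all losses reduces the pipeline to pure order-statistic selection, so the mean returned score should approach the expected maximum of ~10 draws from the polygenic score distribution (×15 to convert to IQ points):

## no abnormality, vitrification, or implantation losses: pure selection out of ~10 embryos
mean(simulateIVFs(10, 0.001, selzam2016, 1, 1, 1, iters=10000)) * 15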

The transition probabilities can be estimated from the flows reported in papers dealing with IVF and PGD. I have used:

  1. Clinical outcome of preimplantation genetic diagnosis and screening using next generation sequencing, Tan et al December 2014:

    395 women, 1512 eggs successfully extracted & fertilized into blastocysts (~3.8 per woman); after genetic testing, 256+590=846 or 55% were abnormal & could not be used, leaving 666 good ones; all were vitrified for storage during analysis and 421 of the normal ones rethawed, leaving 406 useful survivors or ~1.4 per woman; the 406 were implanted into 252 women, yielding 24+75=99 healthy live births or 24% implanted-embryo->birth rate. Excerpts:

    A total of 395 couples participated. They were carriers of either translocation or inversion mutations, or were patients with recurrent miscarriage and/or advanced maternal age. A total of 1,512 blastocysts were biopsied on D5 after fertilization, with 1,058 blastocysts set aside for SNP array testing and 454 blastocysts for NGS testing. In the NGS cycles group, the implantation, clinical pregnancy and miscarriage rates were 52.6% (60/114), 61.3% (49/80) and 14.3% (7/49), respectively. In the SNP array cycles group, the implantation, clinical pregnancy and miscarriage rates were 47.6% (139/292), 56.7% (115/203) and 14.8% (17/115), respectively. The outcome measures of both the NGS and SNP array cycles were the same with insignificant differences. There were 150 blastocysts that underwent both NGS and SNP array analysis, of which seven blastocysts were found with inconsistent signals. All other signals obtained from NGS analysis were confirmed to be accurate by validation with qPCR. The relative copy number of mitochondrial DNA (mtDNA) for each blastocyst that underwent NGS testing was evaluated, and a significant difference was found between the copy number of mtDNA for the euploid and the chromosomally abnormal blastocysts. So far, out of 42 ongoing pregnancies, 24 babies were born in NGS cycles; all of these babies are healthy and free of any developmental problems.

    …The median number of normal/ balanced embryos per couple was 1.76 (range from 0 to 8)…Among the 129 couples in the NGS cycles group, 33 couples had no euploid embryos suitable for transfer; 75 couples underwent embryo transfer and the remaining 21 couples are currently still waiting for transfer. In the SNP array cycles group, 177 couples underwent embryo transfer, 66 couples had no suitable embryos for transfer, and 23 couples are currently still waiting. Of the 666 normal/balanced blastocysts, 421 blastocysts were warmed after vitrification, 406 survived (96.4% of survival rate) and were transferred in 283 cycles. The numbers of blastocysts transferred per cycle were 1.425 (114/80) and 1.438 (292/203) for NGS and SNP array, respectively. The proportion of transferred embryos that successfully implanted was evaluated by ultrasound 6-7 weeks after embryo transfer, indicating that 60 and 139 embryos resulted in a fetal sac, giving implantation rates of 52.6% (60/114) and 47.6% (139/292) for NGS and SNP array, respectively. Prenatal diagnosis with karyotyping of amniocentesis fluid samples did not find any fetus with chromosomal abnormalities. A total of 164 pregnancies were detected, with 129 singletons and 35 twins. The clinical pregnancy rate per transfer cycle was 61.3% (49/80) and 56.7% (115/203) for NGS and SNP array, respectively (Table 3). A total of 24 miscarriages were detected, giving rates of 14.3% (7/49) and 14.8% (17/115) in NGS and SNP array cycles, respectively

    …The ongoing pregnancy rates were 52.5% (42/80) and 48.3% (98/203) in NGS and SNP array cycles, respectively. Out of these pregnancies, 24 babies were delivered in 20 NGS cycles; so far, all the babies are healthy and chromosomally normal according to karyotype analysis. In the SNP array cycles group the outcome of all pregnancies went to full term and 75 healthy babies were delivered (Table 3)…NGS is with a bright prospect. A case report described the use of NGS for PGD recently [33]. Several comments for the application of NGS/MPS in PGD/PGS were published [34,35]. The cost and time of sequencing is already competitive with array tests, and the estimated reagent cost of sequencing for the detection of chromosomal abnormalities is currently less than $100.

  2. Cost-effectiveness analysis of preimplantation genetic screening and in vitro fertilization versus expectant management in patients with unexplained recurrent pregnancy loss, Murugappan et al May 2015:

    Probabilities for clinical outcomes with IVF and PGS in RPL patients were obtained from a 2012 study by Hodes-Wertz et al. (10). This is the single largest study to date of outcomes using 24-chromosome screening by array comparative genomic hybridization in a well-defined RPL population…The Hodes-Wertz study reported on outcomes of 287 cycles of IVF with 24-chromosome PGS with a total of 2,282 embryos followed by fresh day-5 embryo transfer in RPL patients. Of the PGS cycles, 67% were biopsied on day 3, and 33% were biopsied on day 5. The average maternal age was 36.7 years (range: 21-45 years), and the mean number of prior miscarriages was 3.3 (range: 2-7). From 287 PGS cycles, 181 cycles had at least one euploid embryo and proceeded to fresh embryo transfer. There were 52 cycles with no euploid embryos for transfer, four cycles where an embryo transfer had not taken place at the time of analysis, and 51 cycles that were lost to follow-up observation. All patients with a euploid embryo proceeded to embryo transfer, with an average of 1.65 ± 0.65 (range: 1-4) embryos per transfer. Excluding the cycles lost to follow-up evaluation and the cycles without a transfer at the time of analysis, the clinical pregnancy rate per attempt was 44% (n = 102). One attempt at conception was defined as an IVF cycle and oocyte retrieval ± embryo transfer. The live-birth rate per attempt was 40% (n = 94), and the miscarriage rate per pregnancy was 7% (n = 7). Of these seven miscarriages, 57% (n = 4) occurred after detection of fetal cardiac activity (10). Information on the percentage of cycles with surplus embryos was not provided in the Hodes-Wertz study, so we drew from their database of 240 RPL patients with 118 attempts at IVF and PGS (12). The clinical pregnancy, live-birth, and clinical miscarriage rates did not statistically significantly differ between the outcomes published in the Hodes-Wertz study (P = .89, P = .66, P = .61, respectively). We reported that 62% of IVF cycles had at least one surplus embryo (12).

    …The average cost of preconception counseling and baseline RPL workup, including parental karyotyping, maternal antiphospholipid antibody testing, and uterine cavity evaluation, was $4,377 (range: $4,000-$5,000) (16). Because this was incurred by both groups before their entry into the decision tree, it was not included as a cost input in the study. The average cost of IVF was $18,227 (range: $6,920-$27,685) (16) and includes cycle medications, oocyte retrieval, and one embryo transfer. The average cost of PGS was $4,268 (range $3,155-$12,626) (17), and the average cost of a frozen embryo transfer was $6,395 (range: $3,155-$12,626) (13, 16). The average cost of managing a clinical miscarriage with dilation and curettage (D&C) was $1,304 (range: $517-$2,058) (18). Costs incurred in the IVF-PGS strategy include the cost of IVF, PGS, fresh embryo transfer, frozen embryo transfer, and D&C. Costs incurred in the expectant management strategy include only the cost of D&C.

    17: National Infertility Association. The costs of infertility treatment: the Resolve Study Accessed on May 26, 2014: Average additional cost of PGD procedure: $3,550; Median Cost: $3,200 (Note: Medications for IVF are $3,000 - $5,000 per fresh cycle on average.)

  3. Technical Update: Preimplantation Genetic Diagnosis and Screening, Dahdouh et al 2015:

    The number of diseases currently diagnosed via PGD-PCR is approximately 200 and includes some forms of inherited cancers such as retinoblastoma and the breast cancer susceptibility gene (BRCA2). 52 PGD has also been used in new applications such as HLA matching. 53,54 The ESHRE PGD consortium data analysis of the past 10 years’ experience demonstrated a clinical pregnancy rate of 22% per oocyte retrieval and 29% per embryo transfer. 55 Table 4 shows a sample of the different monogenetic diseases for which PGD was carried out between January and December 2009, according to the ESHRE data. 22 In these reports a total of 6160 cycles of IVF cycles with PGD or PGS, including PGS-SS, are presented. Of these, 2580 (41.8%) were carried out for PGD purposes, in which 1597 cycles were performed for single-gene disorders, including HLA typing. An additional 3551 (57.6%) cycles were carried out for PGS purposes and 29 (0.5%) for PGS-SS. 22 Although the ESHRE data represent only a partial record of the PGD cases conducted worldwide, it is indicative of general trends in the field of PGD.

    …At least 40% to 60% of human embryos are abnormal, and that number increases to 80% in women 40 years or older. These abnormalities result in low implantation rates in embryos transferred during IVF procedures, from 30% in women < 35 years to 6% in women ≥ 40 years. 33 In a recent retrospective review of trophectoderm biopsies, aneuploidy risk was evident with increasing female age. A slightly increased prevalence was noted at younger ages, with > 40% aneuploidy in women ≤ 23 years. The risk of having no chromosomally normal blastocyst for transfer (the no-euploid embryo rate) was lowest (2-6%) in women aged 26 to 37, then rose to 33% at age 42 and reached 53% at age 44. 11

  4. Wikipedia reports net success rates:

    | Age | <35yo | 35-37 | 38-40 | 41-42 | >42 |
    |---|---|---|---|---|---|
    | Live birth rate (%) | 40.7 | 31.3 | 22.2 | 11.8 | 3.9 |

    These figures are for non-donor eggs (though donor eggs are better quality & more likely to yield a birth, and hence better for selection purposes). It is common to remove between ten and thirty eggs.
  5. Association between the number of eggs and live birth in IVF treatment: an analysis of 400 135 treatment cycles, Sunkara et al 2011

    The median number of eggs retrieved was 9 [inter-quartile range (IQR) 6-13; Fig. 2a] and the median number of embryos created was 5 (IQR 3-8; Fig. 2b). The overall LBR in the entire cohort was 21.3% [95% confidence interval (CI): 21.2-21.4%], with a gradual rise over the four time periods in this study (14.9% in 1991-1995, 19.8% in 1996-2000, 23.2% in 2001-2005 and 25.6% in 2006-2008).

    Egg retrieval appears normally distributed in Sunkara et al 2011's graph. The SD is not given anywhere in the paper, but an SD of ~4-5 visually fits the graph and is compatible with the 6-13 IQR; AGC reports egg counts for two groups with SDs of 4.5 & 4.7 and means of 10.5 & 9.4 - closely matching the median of 9.
  6. The most nationally representative sample for the USA is the data that fertility clinics are legally required to report to the CDC. The most recent one is the 2013 Assisted Reproductive Technology National Summary Report, which breaks down numbers by age and egg source:

    Total number of cycles : 190,773 (includes 2,655 cycle[s] using frozen eggs)…Donor eggs: 9718 fresh cycles, 10270 frozen [(9718+10270)/190773 ~> 0.1047737363]

    …Of the 190,773 ART cycles performed in 2013 at these reporting clinics, 163,209 cycles (86%) were started with the intent to transfer at least one embryo. These 163,209 cycles resulted in 54,323 live births (deliveries of one or more living infants) and 67,996 infants.

    | Fresh eggs | <35yo | 35-37 | 38-40 | 41-42 | 43-44 | >44 |
    |---|---|---|---|---|---|---|
    | cycles | 40,083 | 19,853 | 18,06 | 19,588 | 4,823 | 1,379 |
    | P(birth\|cycle), % | 23.8 | 19.6 | 13.7 | 7.8 | 3.9 | 1.2 |
    | P(birth\|transfer), % | 28.2 | 24.4 | 18.4 | 11.4 | 6.0 | 2.1 |

    | Frozen eggs | <35 | 35-37 | 38-40 | 41-42 | 43-44 | >44 |
    |---|---|---|---|---|---|---|
    | cycles | 21,627 | 11,140 | 8,354 | 3,344 | 1,503 | 811 |
    | P(birth\|transfer), % | 28.6 | 27.2 | 24.4 | 21.2 | 15.8 | 8.7 |

    …The largest group of women using ART services were women younger than age 35, representing approximately 38% of all ART cycles performed in 2013. About 20% of ART cycles were performed among women aged 35-37, 19% among women aged 38-40, 11% among women aged 41-42, 7% among women aged 43-44, and 5% among women older than age 44. Figure 4 shows that, in 2013, the type of ART cycles varied by the woman’s age. The vast majority (97%) of women younger than age 35 used their own eggs (non-donor), and about 4% used donor eggs. In contrast, 38% of women aged 43-44 and 73% of women older than age 44 used donor eggs.

    …Outcomes of ART Cycles Using Fresh Non-donor Eggs or Embryos, by Stage, 2013:

    1. 93,787 cycles started
    2. 84,868 retrievals
    3. 73,571 transfers
    4. 33,425 pregnancies
    5. 27,406 live-birth deliveries

    The CDC report doesn't specify how many eggs are retrieved on average or the abnormality rate by age, although we can note that ~13% of retrievals didn't lead to any transfers (there were ~85k retrievals but only ~74k transfers), which looks consistent with an overall egg mean & SD of 9 (4.6) and a 50% abnormality rate. We could also try to back out a figure from the average number of embryos per transfer, the number of transfers, and the number of cycles (eg ~1.8 embryos per transfer for <35yos across ~33,750 transfers gives ~60,750 transferred embryos from the 40,083 cycles started, indicating each cycle must have yielded at least 1.5 embryos), but that only gives a loose lower bound since there may be many leftover embryos and the abnormality rate is unknown.

    So for an American model of <35yos (the chance of IVF success declines so drastically with age that it’s not worth considering older age brackets), we could go with a set of parameters like {9, 4.6, 0.5, 0.96, 0.28}, but it’s unclear how accurate a guess that would be.
  7. Tur-Kaspa et al 2010 reports results from an Illinois fertility clinic treating cystic fibrosis carriers who were using PGD:

    Table 1: Outcomes of IVF-preimplantation genetic diagnosis (PGD) cycles for cystic fibrosis (CF) (2000-2005):

    | Parameter | Value (count, %) |
    |---|---|
    | No. of patients (age 42 years) | 74 |
    | No. of cycles for PGD for CF | 104 |
    | Mean no. of IVF-PGD cycles/couple | 1.4 (104/74) |
    | No. of cycles with embryo transfer (%) | 94 (90.4) |
    | No. of embryos transferred | 184 |
    | Mean no. of embryos transferred | 1.96 (184/94) |
    | Total number of pregnancies | 44 |
    | No. of miscarriages (%) | 7 (15.9) |
    | No. of deliveries | 37 |
    | No. of healthy babies born | 49 |
    | No. of babies per delivery | 1.3 |
    | No. of cycles resulting in pregnancy (%) | 44/104 (42.3) |
    | No. of transfer cycles resulting in a pregnancy (%) | 44/94 (46.8) |
    | Take-home baby rate per IVF-PGD cycle (%) | 37/104 |

    For the Tur-Kaspa et al 2010 cost-benefit analysis, the number of eggs and survival rates are not given in the paper, so it can’t be used for simulation, but the overall conditional probabilities look similar to Hodes-Wertz.

With these sets of data, we can fill in parameter values for the simulation and estimate gains.

Using the Tan et al 2014 data:

  1. eggs extracted per person: normal distribution, mean=3, SD=4.6 (discretized into whole numbers)
  2. using previous simulation, SNP test all eggs extracted for polygenic score
  3. P=0.5 that an egg is normal
  4. P=0.96 that it survives vitrification
  5. P=0.24 that an implanted embryo yields a live birth
simulateTan <- function() { return(simulateIVFs(3, 4.6, selzam2016, 0.5, 0.96, 0.24)); }
iqTan <- mean(simulateTan()) * 15; iqTan
# [1] 0.3945506292

That is, the couples in Tan et al 2014 would have seen an average gain of ~0.4 IQ points from embryo selection.

The Murugappan et al 2015 cost-benefit analysis uses data from American fertility clinics reported in Hodes-Wertz et al 2012's Idiopathic recurrent miscarriage is caused mostly by aneuploid embryos: 278 cycles yielding 2282 blastocysts, or ~8.2 per cycle; 35% were normal; there is no mention of losses to cryostorage, so I borrow the 0.96 survival rate from Tan et al 2014; 1.65 embryos were implanted on average in 181 transfers, yielding 40% live births. So:

simulateHodesWertz <- function() { return(simulateIVFs(8.2, 4.6, selzam2016, 0.35, 0.96, 0.40)) }
iqHW <- mean(simulateHodesWertz()) * 15; iqHW
# [1] 0.684226242

Societal effects

One category of effects considered by Shulman & Bostrom is the non-financial social & societal effects mentioned in their Table 3, where embryo selection can perceptibly advantage a minority or, in an extreme case, "Selected dominate ranks of elite scientists, attorneys, physicians, engineers. Intellectual Renaissance?" This is another point which is worth going into a little more; no specific calculations are mentioned by Shulman & Bostrom, and the thin-tail effects of normal distributions are notoriously counterintuitive, with surprisingly large effects out on the tails from small-seeming changes in means or standard deviations - for example, the legendary levels of Western Jewish overperformance despite their tiny population sizes.

As a general rule of thumb, elite groups like scientists, attorneys, physicians, Ivy League students etc are highly selected for intelligence - one can comfortably estimate averages >=130 IQ (+2SD) from past IQ samples, average SAT scores, & the increasingly stringent admissions; and elite performance continues to increase with increasing intelligence as high as can reasonably be measured, as indicated by available data like the SMPY longitudinal study (many papers but see Benbow & Arjmand 1990, Wai et al 2005, Lubinski & Benbow 2006, Park et al 2007, Park et al 2008, Wai et al 2009, Robertson et al 2010, Kell et al 2013) and the TIP longitudinal study (Makel et al 2016), where we might define the cutoff as 160 IQ based on Anne Roe's study of the most eminent available scientists (mean ~150-160)11. So to estimate an impact, one could consider a question like: given an average boost of x IQ points through embryo selection, how much would the odds of being elite (>=130) or extremely elite (>=160) increase for the selected? If a certain fraction of IVFers were selected, what fraction of all people above the cutoff would they make up?

If there are 320 million people in the USA, then about 17m are +2SD and 43k are +4SD:

dnorm((130-100)/15) * 320000000
# [1] 17277109.28
dnorm((160-100)/15) * 320000000
# [1] 42825.67224

Similarly, in 2013, the CDC reports 3,932,181 children born in the USA; and the 2013 CDC annual IVF report says that 67,996 (1.73%) were IVF. This implies that IVFers also make up a very small number of highly gifted children:

## estimated number of people in a (sub)population at a given cutoff (in SDs from the mean),
## using the same normal-density approximation as the +2SD/+4SD calculation above
size <- function(mean, cutoff, populationSize, useFraction=1) {
    if (cutoff > mean) { dnorm(cutoff-mean) * populationSize * useFraction }
    else               { (1 - dnorm(cutoff-mean)) * populationSize * useFraction } }
size(0, (60/15), 67996)
# [1] 9.099920031

So assuming IVF parents average 100IQ, then we can take the embryo selection theoretical upper bound of +9.36 (+0.624SD) corresponding to the aggressive IVF set of scenarios in Table 3 of Shulman & Bostrom, and ask, if 100% of IVF children were selected, how many additional people over 160 would that create?

## by how much does embryo selection among IVFers multiply the number of people past a cutoff,
## relative to the number already past that cutoff in the general population?
eliteGain <- function(ivfMean, ivfGain, ivfFraction, generation, cutoff, ivfPop, genMean, genPop) {

              ivfers      <- size(ivfMean,                      cutoff, ivfPop, 1)
              selected    <- size(ivfMean+(ivfGain*generation), cutoff, ivfPop, ivfFraction)
              nonSelected <- size(ivfMean,                      cutoff, ivfPop, 1-ivfFraction)
              gain        <- (selected+nonSelected) - ivfers

              population <- size(genMean, cutoff, genPop)
              multiplier <- gain / population
              return(multiplier) }
eliteGain(0, (9.36/15), 1, 1, (60/15), 67996, 0, 3932181)
# [1] 0.1554096565

In this example, the +0.624SD boost increases the absolute number by 82 people, an increase of 15.5% in the number of children passing the cutoff; this would mean that IVF overrepresentation would be noticeable if anyone went looking for it, but would not be a major issue nor even as noticeable as Jewish achievement. We would indeed see "Substantial growth in educational attainment, income", but we would not see much effect beyond that.

Is it realistic to assume that IVF children will be distributed around a mean of 100 sans any intervention? That seems unlikely, if only due to the substantial financial cost of using IVF; however, the existing literature is inconsistent, showing both higher & lower education or IQ scores (Hart & Norman 2013), so perhaps the starting point really is 100. The thin-tail effects make the starting mean extremely important; Shulman & Bostrom say, "Second generation manyfold increase at right tail". Let's consider the second generation: with their post-selection mean IQ of 109.36, what second generation is produced in the absence of outbreeding when they use IVF selection?

eliteGain(0, (9.36/15), 1, 2, (60/15), 67996, 0, 3932181)
# [1] 1.151238772
eliteGain(0, (9.36/15), 1, 5, (60/15), 67996, 0, 3932181)
# [1] 34.98100356

Now the IVF children represent a majority. With the third generation, they reach 5x; at the fourth, 17x; at the fifth, 35x; and so on.
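The per-generation multipliers just quoted can be generated in one call (same parameters as above):

sapply(1:5, function(g) { eliteGain(0, (9.36/15), 1, g, (60/15), 67996, 0, 3932181) })
# ~ 0.16 1.15 5.3 16.6 35.0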

In practice, of course, we currently would get much less: 0.13808892057 IQ points in the USA model, which would yield a trivial increase of 0.06% (or 1.6% if the IVF children start from a mean of 115 rather than 100):

eliteGain(0, (0.13808892057/15), 1, 1, (60/15), 67996, 0, 3932181)
# [1] 0.0006478714323
eliteGain((15/15), (0.13808892057/15), 1, 1, (60/15), 67996, 0, 3932181)
# [1] 0.01601047464

Table 3 considers 12 scenarios: 3 adoption fractions of the general population (100% of IVFers, i.e. ~2.5% of the general population; 10%; >90%) vs 4 average gains (4, 12, 19, 100+ IQ points). The descriptions add 2 additional variables: first vs second generation, and elite vs eminent (IQ 130 vs 160), giving 48 relevant estimates total.

scenarios <- expand.grid(c(0.025, 0.1, 0.9), c(4/15, 12/15, 19/15, 100/15), c(1,2), c(30/15, 60/15))
colnames(scenarios) <- c("Adoption.fraction", "IQ.gain", "Generation", "Eliteness")
scenarios$Gain.fraction <- round(do.call(mapply, c(function(adoptionRate, gain, generation, selectiveness) {
                                  eliteGain(0, gain, adoptionRate, generation, selectiveness, 3932181, 0, 3932181) }, unname(scenarios[,1:4]))),
                           digits=2)
| Adoption fraction | IQ gain | Generation | Eliteness (IQ) | Gain fraction |
|---|---|---|---|---|
| 0.025 | 4 | 1 | 130 | 0.02 |
| 0.100 | 4 | 1 | 130 | 0.06 |
| 0.900 | 4 | 1 | 130 | 0.58 |
| 0.025 | 12 | 1 | 130 | 0.06 |
| 0.100 | 12 | 1 | 130 | 0.26 |
| 0.900 | 12 | 1 | 130 | 2.34 |
| 0.025 | 19 | 1 | 130 | 0.12 |
| 0.100 | 19 | 1 | 130 | 0.46 |
| 0.900 | 19 | 1 | 130 | 4.18 |
| 0.025 | 100 | 1 | 130 | 0.44 |
| 0.100 | 100 | 1 | 130 | 1.75 |
| 0.900 | 100 | 1 | 130 | 15.77 |
| 0.025 | 4 | 2 | 130 | 0.04 |
| 0.100 | 4 | 2 | 130 | 0.15 |
| 0.900 | 4 | 2 | 130 | 1.37 |
| 0.025 | 12 | 2 | 130 | 0.15 |
| 0.100 | 12 | 2 | 130 | 0.58 |
| 0.900 | 12 | 2 | 130 | 5.24 |
| 0.025 | 19 | 2 | 130 | 0.28 |
| 0.100 | 19 | 2 | 130 | 1.11 |
| 0.900 | 19 | 2 | 130 | 10.00 |
| 0.025 | 100 | 2 | 130 | 0.44 |
| 0.100 | 100 | 2 | 130 | 1.75 |
| 0.900 | 100 | 2 | 130 | 15.77 |
| 0.025 | 4 | 1 | 160 | 0.05 |
| 0.100 | 4 | 1 | 160 | 0.18 |
| 0.900 | 4 | 1 | 160 | 1.62 |
| 0.025 | 12 | 1 | 160 | 0.42 |
| 0.100 | 12 | 1 | 160 | 1.68 |
| 0.900 | 12 | 1 | 160 | 15.13 |
| 0.025 | 19 | 1 | 160 | 1.75 |
| 0.100 | 19 | 1 | 160 | 7.01 |
| 0.900 | 19 | 1 | 160 | 63.11 |
| 0.025 | 100 | 1 | 160 | 184.65 |
| 0.100 | 100 | 1 | 160 | 738.60 |
| 0.900 | 100 | 1 | 160 | 6647.40 |
| 0.025 | 4 | 2 | 160 | 0.16 |
| 0.100 | 4 | 2 | 160 | 0.63 |
| 0.900 | 4 | 2 | 160 | 5.69 |
| 0.025 | 12 | 2 | 160 | 4.16 |
| 0.100 | 12 | 2 | 160 | 16.63 |
| 0.900 | 12 | 2 | 160 | 149.70 |
| 0.025 | 19 | 2 | 160 | 25.40 |
| 0.100 | 19 | 2 | 160 | 101.58 |
| 0.900 | 19 | 2 | 160 | 914.25 |
| 0.025 | 100 | 2 | 160 | 186.78 |
| 0.100 | 100 | 2 | 160 | 747.12 |
| 0.900 | 100 | 2 | 160 | 6724.04 |

To help capture what might be considered important or disruptive, let’s filter down the scenarios to ones where the embryo-selected now make up an absolute majority of any elite group (a fraction >0.5):

| Adoption fraction | IQ gain | Generation | Eliteness (IQ) | Gain fraction |
|---|---|---|---|---|
| 0.900 | 4 | 1 | 130 | 0.58 |
| 0.900 | 12 | 1 | 130 | 2.34 |
| 0.900 | 19 | 1 | 130 | 4.18 |
| 0.100 | 100 | 1 | 130 | 1.75 |
| 0.900 | 100 | 1 | 130 | 15.77 |
| 0.900 | 4 | 2 | 130 | 1.37 |
| 0.100 | 12 | 2 | 130 | 0.58 |
| 0.900 | 12 | 2 | 130 | 5.24 |
| 0.100 | 19 | 2 | 130 | 1.11 |
| 0.900 | 19 | 2 | 130 | 10.00 |
| 0.100 | 100 | 2 | 130 | 1.75 |
| 0.900 | 100 | 2 | 130 | 15.77 |
| 0.900 | 4 | 1 | 160 | 1.62 |
| 0.100 | 12 | 1 | 160 | 1.68 |
| 0.900 | 12 | 1 | 160 | 15.13 |
| 0.025 | 19 | 1 | 160 | 1.75 |
| 0.100 | 19 | 1 | 160 | 7.01 |
| 0.900 | 19 | 1 | 160 | 63.11 |
| 0.025 | 100 | 1 | 160 | 184.65 |
| 0.100 | 100 | 1 | 160 | 738.60 |
| 0.900 | 100 | 1 | 160 | 6647.40 |
| 0.100 | 4 | 2 | 160 | 0.63 |
| 0.900 | 4 | 2 | 160 | 5.69 |
| 0.025 | 12 | 2 | 160 | 4.16 |
| 0.100 | 12 | 2 | 160 | 16.63 |
| 0.900 | 12 | 2 | 160 | 149.70 |
| 0.025 | 19 | 2 | 160 | 25.40 |
| 0.100 | 19 | 2 | 160 | 101.58 |
| 0.900 | 19 | 2 | 160 | 914.25 |
| 0.025 | 100 | 2 | 160 | 186.78 |
| 0.100 | 100 | 2 | 160 | 747.12 |
| 0.900 | 100 | 2 | 160 | 6724.04 |
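The filtering step itself, not shown above, is just a subset of the scenarios data frame:

subset(scenarios, Gain.fraction > 0.5)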

For many of the scenarios, the impact is not blatant until a second generation builds on the first, but the cumulative effect is real - one of the weakest scenarios, +4 IQ at 10% adoption, can still be seen at the second generation because the effects are easier to spot at the most elite levels; in another example, a boost of 12 points is noticeable in a single generation with as little as 10% adoption in the general population. A boost of 19 points is visible in a fair number of scenarios, and a boost of 100 is visible at almost any adoption rate/generation/elite level. (Indeed, a boost of 100 results in almost meaninglessly large numbers under many scenarios; it's difficult to imagine a society with 100x as many geniuses running around, so it's even more difficult to imagine what it would mean for there to be 6,724x as many - other than that many things will start changing extremely rapidly in unpredictable ways.)

The tables do not attempt to give specific deadlines in years for when some of the effects will manifest, but we could try to extrapolate from the ages at which eminent figures and child prodigies have historically made their first contributions. Chess prodigies have become grandmasters at very early ages, such as Sergey Karjakin's 12.6yo record, with (as of 2016) 24 other chess prodigies reaching grandmaster level before age 15; the record age has dropped rapidly over time, which is often credited to computers & the Internet unlocking chess databases & engines to intensively train against, providing a global pool of opponents 24/7, and intensive tutoring and training programs. William James Sidis is probably the most famous child prodigy, credited with feats such as reading by age 2, writing mathematical papers by age 12 and so on, but he abandoned academia and never produced any major accomplishments; his acquaintance and fellow child prodigy Norbert Wiener, on the other hand, produced his first major work at age 17, John von Neumann at age 19; physicists in the early quantum era were noted for youth, with Bragg/Heisenberg/Pauli/Dirac producing their Nobel prize-winning results at ages 22/23/25/26 (respectively). In mathematics, Évariste Galois made major breakthroughs around age 18, Saul Kripke's first modal logic result was at age 17, Srinivasa Ramanujan likely began making major findings around age 16 and continued up to his youthful death at age 32, and Terence Tao began publishing at age 15; young students making findings is such a trope that the Fields Medal has an age-limit of 39yo for awardees (who thus must've made their discoveries much earlier).

The cliometrics of the ages of scientists and their life-cycles of productivity across time and fields have been studied by Simonton, by Jones, and in Murray's Human Accomplishment; we can also compare to the SMPY/TIP samples, where most took normal schooling paths. The peak age for productivity, and the average age for work that wins major prizes, differs a great deal by field - physics and mathematics are generally younger than fields like medicine or biology. This suggests that different fields place different demands on Gf vs Gc: a field like mathematics dealing in pure abstractions will stress deep thought & fluid intelligence (peaking in the early 20s); while a field like medicine will require a wide variety of experiences and factual knowledge and less raw intelligence, and so may require decades before one can make a major contribution. (In literature, it's often been noted that lyric poets seem to peak young while novelists may continue improving throughout their lifetimes.)

So if we consider scenarios of intelligence enhancement up to 2 or 3 SDs (145), then we can expect that there may be a few initial results within 15 years, heavily biased towards STEM fields with strong Internet presences and traditions of openness in papers/software/data (such as machine learning), followed by a gradual increase in the number of results as the cohort begins reaching their 20s and 30s and their adult careers, and a broadening across fields such as medicine and the humanities. While math and technology results can have outsized impact these days, in a 2-3SD scenario the total number of 2-3SD researchers will increase by only a limited factor, and so the expected impact will be similar to what we already experience in the pace of technological development - quick, but not unmanageable.

In the case of >=4SDs, things are a little different. The most comparable case is Sidis, who as mentioned was writing papers by age 12 after 10 years of reading; in an IES scenario, each member of the cohort might be far beyond Sidis, and so the entire cohort will likely reach the research frontier and begin making contributions before age 12 - although there must be a limit on how fast a human child can develop mentally, in calories consumed if nothing else, there is no good reason to think that Sidis's bound of 12 years is tight, especially given the modern context and the possibilities for accelerated education programs. (With such advantages, there may also be much larger cohorts as parents decide the advantages are so compelling that they want them for their children and are willing to undergo the costs.)

Cost-benefit

As written, the IVF simulator cannot deliver a cost-benefit analysis: the costs depend on internal state (such as how many good embryos were created, and the fact that a cycle ending in no live birth still incurs costs), and we also now want it to report the marginal gain of selection case by case. So it must be augmented:

simulateIVFCB <- function (eggMean, eggSD, polygenicScoreVariance, normalityP=0.5, vitrificationP, liveBirth, fixedCost, embryoCost, traitValue) {
  ## 1. harvest a random number of eggs:
  eggsExtracted <- max(0, round(rnorm(n=1, mean=eggMean, sd=eggSD)))
  ## 2. how many are normal & usable:
  normal        <- rbinom(1, eggsExtracted, prob=normalityP)
  ## selection cost: fixed biopsy cost plus SNP genotyping of each usable embryo:
  totalCost     <- fixedCost + normal * embryoCost
  ## 3. polygenic scores (in trait SDs) for each usable embryo:
  scores        <- embryoScores(n=normal, variance=polygenicScoreVariance)
  ## 4. losses to vitrification/thawing:
  survived      <- Filter(function(x){rbinom(1, 1, prob=vitrificationP)}, scores)

  ## 5-6. implantation: since each implantation succeeds independently, the baby born under
  ## best-first implantation is simply the highest-scoring embryo that would implant
  ## successfully, so iterating in ascending order & keeping the last success is equivalent
  selection <- sort(survived, decreasing=FALSE)
  live <- 0
  gain <- 0
  if (length(selection)>0) {
   for (embryo in 1:length(selection)) {
    if (rbinom(1, 1, prob=liveBirth) == 1) {
      live <- selection[embryo]
      }
   }
  ## gain: born embryo's score minus the average available score (i.e. vs random selection), floored at 0
  gain <- max(0, live - mean(selection))
  }
  return(data.frame(Trait.SD=gain, Cost=totalCost, Net=(traitValue*gain - totalCost)))  }
library(plyr)
simulateIVFCBs <- function(eggMean, eggSD, polygenicScoreVariance, normalityP, vitrificationP, liveBirth, fixedCost, embryoCost, traitValue, iters=20000) {
  ldply(replicate(simplify=FALSE, iters, simulateIVFCB(eggMean, eggSD, polygenicScoreVariance, normalityP, vitrificationP, liveBirth, fixedCost, embryoCost, traitValue))) }

Now we have all our parameters set:

  1. IQ’s value per point or per SD (multiply by 15)
  2. The fixed cost of selection is $1500
  3. per-embryo cost of selection is $200
  4. and the relevant probabilities have been defined already
iqLow <- 3270*15; iqHigh <- 16151*15
## Tan:
summary(simulateIVFCBs(3, 4.6, selzam2016, 0.5, 0.96, 0.24, 1500, 200, iqLow))
#     Trait.SD               Cost              Net
#  Min.   :0.00000000   Min.   :1500.00   Min.   :-4100.0000
#  1st Qu.:0.00000000   1st Qu.:1500.00   1st Qu.:-1700.0000
#  Median :0.00000000   Median :1700.00   Median :-1500.0000
#  Mean   :0.02801017   Mean   :1874.39   Mean   : -500.4911
#  3rd Qu.:0.02927008   3rd Qu.:2100.00   3rd Qu.: -730.6793
#  Max.   :0.44094948   Max.   :4300.00   Max.   :19928.5718
summary(simulateIVFCBs(3, 4.6, selzam2016, 0.5, 0.96, 0.24, 1500, 200, iqHigh))
#     Trait.SD               Cost              Net
#  Min.   :0.00000000   Min.   :1500.00   Min.   : -4300.000
#  1st Qu.:0.00000000   1st Qu.:1500.00   1st Qu.: -1700.000
#  Median :0.00000000   Median :1700.00   Median : -1500.000
#  Mean   :0.02876628   Mean   :1868.35   Mean   :  5100.712
#  3rd Qu.:0.03163196   3rd Qu.:2100.00   3rd Qu.:  5545.998
#  Max.   :0.51011076   Max.   :4900.00   Max.   :120666.277

## Hodes-Wertz:
summary(simulateIVFCBs(8.2, 4.6, selzam2016, 0.35, 0.96, 0.40, 1500, 200, iqLow))
#     Trait.SD                Cost              Net
#  Min.   :0.000000000   Min.   :1500.00   Min.   :-3700.0000
#  1st Qu.:0.000000000   1st Qu.:1700.00   1st Qu.:-1900.0000
#  Median :0.007067949   Median :2100.00   Median :-1500.0000
#  Mean   :0.050269041   Mean   :2077.96   Mean   :  387.7364
#  3rd Qu.:0.087702810   3rd Qu.:2300.00   3rd Qu.: 2048.4342
#  Max.   :0.564267504   Max.   :4300.00   Max.   :25577.3211
summary(simulateIVFCBs(8.2, 4.6, selzam2016, 0.35, 0.96, 0.40, 1500, 200, iqHigh))
#     Trait.SD                Cost             Net
#  Min.   :0.000000000   Min.   :1500.0   Min.   : -3700.000
#  1st Qu.:0.000000000   1st Qu.:1700.0   1st Qu.: -1729.544
#  Median :0.005219926   Median :2100.0   Median :  -895.304
#  Mean   :0.050460809   Mean   :2079.2   Mean   : 10145.688
#  3rd Qu.:0.088175260   3rd Qu.:2300.0   3rd Qu.: 19113.344
#  Max.   :0.568129317   Max.   :4500.0   Max.   :134537.849

## USA, youngest:
summary(simulateIVFCBs(9, 4.6, selzam2016, 0.3, 0.90, 10.8/100, 1500, 200, iqLow))
#     Trait.SD               Cost              Net
#  Min.   :0.00000000   Min.   :1500.00   Min.   :-3900.0000
#  1st Qu.:0.00000000   1st Qu.:1700.00   1st Qu.:-2100.0000
#  Median :0.00000000   Median :1900.00   Median :-1521.1983
#  Mean   :0.03435545   Mean   :2043.96   Mean   : -358.8253
#  3rd Qu.:0.05222450   3rd Qu.:2300.00   3rd Qu.:  389.0717
#  Max.   :0.44004791   Max.   :4100.00   Max.   :19684.3500
summary(simulateIVFCBs(9, 4.6, selzam2016, 0.3, 0.90, 10.8/100, 1500, 200, iqHigh))
#     Trait.SD               Cost              Net
#  Min.   :0.00000000   Min.   :1500.00   Min.   : -3700.000
#  1st Qu.:0.00000000   1st Qu.:1700.00   1st Qu.: -1900.000
#  Median :0.00000000   Median :1900.00   Median : -1500.000
#  Mean   :0.03414784   Mean   :2042.67   Mean   :  6230.156
#  3rd Qu.:0.05215105   3rd Qu.:2300.00   3rd Qu.: 10474.862
#  Max.   :0.49288710   Max.   :4100.00   Max.   :117709.293

In general, embryo selection as of January 2016 is just barely profitable or somewhat unprofitable in each group using the lowest estimate of IQ’s value; it is always profitable on average with the highest estimate.

Value of Information

To get an idea of the value of further research into improving the polygenic score or other parts of the procedure, we can look at the overall population gains in the USA if it was adopted by all potential users.

Public interest in selection

How many people can we expect to use embryo selection as it becomes available?

My belief is that total uptake will be fairly modest as a fraction of the population. A large fraction of the population expresses hostility towards any new fertility-related technology whatsoever, and the people open to the possibility will be deterred by the necessity of advanced family planning, the large financial cost of IVF, and the fact that the IVF process is lengthy and painful. I think that prospective mothers will not undergo it unless the gains are enormous: the difference between having kids or never having kids, or having a normal kid or one who will die young of a genetic disease. A fraction of an IQ point, or even a few points, is not going to cut it. (Perhaps boosts around 20 IQ points, a level with dramatic and visible effects on educational outcomes, would be enough?)

We can see this unwillingness partially expressed in long-standing trends against the wide use of sperm & egg donation. As Matt Ridley points out (Why Eugenics Won’t Come Back), a prospective mother could easily increase traits of her children by eugenic selection of sperm donors, such as eminent scientists, above and beyond the relatively unstringent screening done by current sperm banks and the selectness of sperm buyers:

…we now know from 40 years of experience that without coercion there is little or no demand for genetic enhancement. People generally don’t want paragon babies; they want healthy ones that are like them. At the time test-tube babies were first conceived in the 1970s, many people feared in-vitro fertilization would lead to people buying sperm and eggs off celebrities, geniuses, models and athletes. In fact, the demand for such things is negligible; people wanted to use the new technology to cure infertility - to have their own babies, not other people’s. It is a persistent misconception shared among clever people to assume that everybody wants clever children.

Ignoring that celebrities, models, and athletes are often highly successful sexually (which can be seen as a donation of sorts), this sort of thing was in fact done by the Repository for Germinal Choice; but despite apparently positive results (as expected from selecting for highly intelligent donors), it had a troubled 29-year run (primarily due to a severe donor shortage12) and has no explicit successors.13

So that largely limits the market for embryo selection to those who would already use it: those who must use it.

Will they use it? Ridley’s argument doesn’t prove that they won’t, because the use of sperm/egg donors comes at the cost of reducing relatedness. Non-use of celebrities, geniuses, models, and athletes merely shows that the perceived benefits do not outweigh the costs; it doesn’t tell us what the benefits or costs are. And the cost of reducing relatedness is a severe one - a normal fertile pair of parents will no more be inclined to use a sperm or egg donor (and which one, exactly? who chooses?) than they would be to adopt, and they would be willing to extract sperm from a dead man just for the relatedness.14 A more relevant situation would be how parents act in the infertility situation where avoiding reduced relatedness is impossible.

In that situation, parents are notoriously eugenic in their preferences, demanding of sperm or egg banks that the donor be healthy, well-educated (at the Ivy League, of course, where egg donation is regularly advertised), have particular hair & eye colors (using sperm/eggs exported from Scandinavia, if necessary), be tall (men) and young (Whyte et al 2016), and free of any mental illnesses. This pervasive selection works; Lee 2013 draws on a donor sibling registry, documenting selection in favor of taller sperm donors, and, as predicted by the breeder’s equation, offspring were taller by 1.23 inches.15 Should parents discover that a sperm donor was actually autistic or schizophrenic, allegations of fraud & wrongful birth lawsuits will immediately begin flying, regardless of whether those parents would explicitly acknowledge that most human traits are highly heritable and embryo selection was possible. The practical willingness of parents to make eugenic choices based on donor profiles suggests that advertised correctly, embryo selection could become standard. (For example, given the pervasive Puritanical bias in health towards preventing illness instead of increasing health, embryo selection for intelligence or height can be framed as reducing the risk of developmental delays or shortness; which it would.) Reportedly as of 2016, PGD for hair and eye color is already quietly being offered to parents and accepted, and mentions are made of the potential for selection on other traits.

More drastically, in cases of screening for severe genetic disorders by testing potential carrier parents and fetuses, parents in practice are willing to make use of screening (if they know about it) and use PGD or selective abortions in anywhere up to 95-100% of cases (depending on disease & sample) in diseases such as Down syndrome (eg Choi et al 2012), Tay-Sachs disease (eg Kaback 2000), Thalassemia (eg Liao et al 2005, Scotet et al 2008), Cystic fibrosis (eg Ioannou et al 2015, Sawyer et al 2006, Hale et al 2008, Cunningham & Marshall 1998, Massie et al 2009), and in general (eg Ghiossi et al 2016, Franasiak et al 2016). This willingness is enough to noticeably affect population levels of these disorders (particularly Down’s syndrome, which has dropped dramatically in the USA despite an aging population that should be increasing it). The willingness to use PGD or abort rises with the severity of the disorder, true, but here again there are extenuating factors: parents considerably underestimate their willingness to use PGD/abortion before diagnosis compared to after they are actually diagnosed, and using IVF just for PGD or aborting a pregnancy are expensive & highly undesirable steps to take; so the rates being so high regardless suggest that in other scenarios (like a couple using IVF for fertility reasons), willingness may be high (and higher than people think before being offered the option).

Time will tell whether embryo selection becomes anything more than an exotic novelty, but it looks as though when relatedness is not a cost, parents will tend to accept it. This suggests that Ridley's argument is incorrect when extended to embryo selection/editing; people simply want to both have and eat their cake, and as embryo selection/editing entail little or no loss of relatedness, they are not comparable to sperm/egg donation.

Hence, I suggest the most appropriate target market is simply the total number of IVF users, and not the much smaller number of egg/sperm donation users.

VoI for USA IVF population

Using the high estimate of an average gain of $6230, and noting that there were 67,996 IVF babies in 2013, that suggests an annual gain of up to ~$423m. What is the net present value of that annual gain? Discounted at 5%, it'd be ~$8.7b. (Why a 5% discount rate? This is the highest discount rate I've seen used in health economics; more typical are discount rates like NICE's 3.5%, which would yield a much larger NPV.)
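The arithmetic is just a perpetuity at the 5% discount rate, using the same log(1+discount) convention as the NPV calculations elsewhere on this page:

annualGain <- 6230 * 67996; annualGain
# [1] 423615080
annualGain / log(1+0.05)
# ~8.68e9, i.e. ~$8.7b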

We might also ask: as an upper bound, in the realistic USA IVF model, how much would a perfect SNP polygenic score be worth?

summary(simulateIVFCBs(9, 4.6, 0.33, 0.3, 0.90, 10.8/100, 1500, 200, iqLow))
#     Trait.SD              Cost              Net
#  Min.   :0.0000000   Min.   :1500.00   Min.   :-3700.000
#  1st Qu.:0.0000000   1st Qu.:1700.00   1st Qu.:-2100.000
#  Median :0.0000000   Median :1900.00   Median :-1500.000
#  Mean   :0.1037614   Mean   :2042.24   Mean   : 3047.259
#  3rd Qu.:0.1562492   3rd Qu.:2300.00   3rd Qu.: 5516.869
#  Max.   :1.4293926   Max.   :3900.00   Max.   :68411.709
summary(simulateIVFCBs(9, 4.6, 0.33, 0.3, 0.90, 10.8/100, 1500, 200, iqHigh))
#     Trait.SD              Cost             Net
#  Min.   :0.0000000   Min.   :1500.0   Min.   : -4100.00
#  1st Qu.:0.0000000   1st Qu.:1700.0   1st Qu.: -1900.00
#  Median :0.0000000   Median :1900.0   Median : -1500.00
#  Mean   :0.1030492   Mean   :2037.6   Mean   : 22927.61
#  3rd Qu.:0.1530295   3rd Qu.:2300.0   3rd Qu.: 34652.62
#  Max.   :1.3798166   Max.   :4100.0   Max.   :331981.26
ivfBirths <- 67996; discount <- 0.05
current <- 6230; perfect <- 23650
(ivfBirths * perfect)/(log(1+discount)) - (ivfBirths * current)/(log(1+discount))
# [1] 24277235795

Increasing the polygenic score to its maximum of 33% increases the profit by almost 4x. This increase, over the number of annual IVF births, gives a net present expected value of perfect information (EVPI) for a perfect score of something like $24b. How much would it cost to gain perfect information? Hsu 2014 argues that a sample around 1 million would suffice to reach the GCTA upper bound using a particular algorithm; the largest usable16 sample I know of, SSGAC, is around n=300k, leaving 700k to go; with SNPs costing ~$200, that implies that it would cost $0.14b for perfect SNP information. Hence, the expected value of that information net of cost would be ~$24.1b and safely profitable. From that, we could also estimate the expected value of sample information (EVSI): if the 700k additional samples would be worth that much, then on average17 each additional datapoint is worth ~$34.5k. Aside from the Hsu 2014 estimate, we can use a formula from a model in the Rietveld et al 2013 supplementary materials (pg22-23), where they offer a population genetics-based approximation of how much variance a given sample size & heritability will explain:

  1. $M = \frac{2 \cdot N_e \cdot k \cdot l}{\log(2 \cdot N_e \cdot l)}$; they state that $N_e = 10000; k = 22; l = 1.6$, so $M = \frac{2 \cdot 10000 \cdot 22 \cdot 1.6}{\log(2 \cdot 10000 \cdot 1.6)}$ or M = 67865.
  2. $R^2 = \frac{\frac{N}{M} \cdot h^4}{\frac{N}{M} \cdot h^2 + 1}$. For education (the phenotype variable targeted by the main GWAS, serving as a proxy for intelligence), they estimate $h^2 = 0.2$, or h=0.447 ($h^2$ here being the heritability capturable by their SNP arrays, so equivalent to $h^2_{SNP}$), so for their sample size of 100000, they would expect to explain N=100000; h=sqrt(0.2); ((N / 67865) * h^4) / ((N/67865) * h^2 + 1) ~> 0.045 or 4.5% of variance, while they got 2-3%, suggesting over-estimation.

Using this equation we can work out changes in variance explained with changes in sample sizes, and thus the value of an additional datapoint. For intelligence, the GCTA estimate is $h^2_{SNP} = 0.33$; Rietveld et al 2013 realized a variance explained of 0.025, implying it's equivalent to n=17000 (the N which yields 0.025 at the intelligence heritability), and so we need ~6x more education-phenotype samples to reach the same efficacy in predicting intelligence. We can then ask how much variance is explained by a larger sample and how much that is worth over the annual IVF headcount. Since selection is not profitable under the low IQ estimate and 1 more datapoint will not make it profitable, the EVSI of another education datapoint must be negative and is not worth estimating, so we use the high estimate instead, asking how much an increase of, say, 1,000 datapoints is worth on average:

## Rietveld et al 2013 approximation: variance explained by a polygenic score from a GWAS
## of N samples with SNP heritability h2
gwasSizeToVariance <- function(N, h2) { ((N / 67865) * h2^2) / ((N/67865) * h2 + 1) }
sampleIncrease <- 1000
original     <- gwasSizeToVariance(17000, 0.33)
originalplus <- gwasSizeToVariance(17000+sampleIncrease, 0.33)
originalGain     <- mean(simulateIVFCBs(9, 4.6, original, 0.3, 0.90, 10.8/100, 1500, 200, iqHigh)$Net)
originalplusGain <- mean(simulateIVFCBs(9, 4.6, originalplus, 0.3, 0.90, 10.8/100, 1500, 200, iqHigh)$Net)
originalGain; originalplusGain
((((originalplusGain - originalGain) * ivfBirths) / log(1+discount)) / sampleIncrease) / 6
# [1] 71716.90116

$71k is within an order of magnitude of the Hsu 2014 extrapolation, so reasonable given all the approximations here.

Going back to the lowest IQ value estimate, in the US population model, embryo selection only reaches break-even once the variance explained increases by a factor of ~2.1, to 5.25%. Boosting it 2.1x (to 0.0525) turns out to require n=40000, or 2.35x the intelligence-equivalent sample, suggesting that another Rietveld et al 2013-style education GWAS would be adequate once it reached $n \geq 2.35 \cdot 101000 = 237350$. After that sample size has been exceeded, EVSI will then be closer to $10k.
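As a check on that break-even sample size, we can root-find on the formula above (uniroot is base R; the 0.0525 target is the 2.1x variance figure just derived):

uniroot(function(N) { gwasSizeToVariance(N, 0.33) - 0.0525 }, c(17000, 100000))$root
# ~39000, i.e. roughly the n=40,000 intelligence-equivalent sample used above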

Improvements

Limiting step: eggs or scores?

Embryo selection gains can be improved in a number of ways: harvesting more eggs, having more eggs be normal & successfully fertilized, reducing the cost of SNPing or increasing the predictive power of the polygenic scores, and better implantation success.

There’s no clear way to improve egg quality or implant better, and the cost of SNPs is already dropping as fast as anyone could wish for, which leaves just improving the polygenic scores and harvesting more eggs. Improving the polygenic scores is addressed in the previous Value of Information section and turns out to be doable and profitable but requires a large investment by institutions which may not be interested in researching the matter further.

That leaves egg harvesting; this is limited by each woman's idiosyncratic biology, and also by safety issues, and we can't expect much beyond the median of 9 eggs. There is, however, one possibility for getting many more eggs: coaxing stem cells into using their pluripotency to develop into eggs, possibly yielding hundreds of viable eggs. This method is reportedly being developed18 and, if successful, would enable both powerful embryo selection and also be a major step towards iterated embryo selection (see that section).

How much would getting scores or hundreds of eggs help, and how does the gain scale? Since returns diminish, and we already know that under the low value of IQ embryo selection is not profitable, it follows that no larger number of eggs will be profitable either; so like with EVSI, we look at the high value’s upper bound if we could choose an arbitrary number of eggs for free:

gainByEggcount <- sapply(1:300, function(egg) { mean(simulateIVFCBs(egg, 4.6, selzam2016, 0.3, 0.90, 10.8/100, 1500, 200, iqHigh)$Net) })
max(gainByEggcount); which.max(gainByEggcount)
# [1] 26657.1117
# [1] 281
plot(1:300, gainByEggcount, xlab="Average number of eggs available", ylab="Profit")
summary(simulateIVFCBs(which.max(gainByEggcount), 4.6, selzam2016, 0.3, 0.90, 10.8/100, 1500, 200, iqHigh))
#     Trait.SD              Cost              Net
#  Min.   :0.0000000   Min.   :12300.0   Min.   :-21900.00
#  1st Qu.:0.1284192   1st Qu.:17300.0   1st Qu.: 12711.92
#  Median :0.1817688   Median :18300.0   Median : 25630.74
#  Mean   :0.1845060   Mean   :18369.1   Mean   : 26330.25
#  3rd Qu.:0.2372748   3rd Qu.:19500.0   3rd Qu.: 39162.75
#  Max.   :0.5661427   Max.   :25300.0   Max.   :117856.55
max(gainByEggcount) / which.max(gainByEggcount)
# [1] 94.86516619
Figure: net profit vs average number of eggs available

The maximum is at ~281 eggs, yielding 0.18SD (~2.7 IQ points) & a net profit of ~$26k, indicating that beyond that many eggs, the cost of the additional SNPing exceeds the marginal IQ gain from having 1 more egg available which could turn into an embryo & be selected amongst. With $26k profit vs 281 eggs, we could say that the gain from unlimited eggs compared to the normal yield of ~9 eggs is ~$20k ($26k vs the best current scenario of $6k), and that the average profit from adding each egg was ~$73, giving an idea of the sort of per-egg costs one would need from an egg stem cell technology (small). The optimal number of eggs will decrease with an increase in per-egg costs; if it costs another $200 per embryo, then the optimal number of eggs is around half, and so on.

So with present polygenic-scores & SNP costs, an unlimited number of eggs would only increase profit by 4x, as we are still constrained by the polygenic score. This would be valuable, of course, but it is not a huge change.

Multiple selection

Intelligence is one of the most valuable traits to select on, and one of the easiest to analyze, but we shouldn’t forget that it is neither necessary nor desirable to select only on intelligence. Selecting only on one trait means that most of the genotype information is being ignored; at best, this is a lost opportunity, and at worst, it is harmful - in the very long run (dozens of generations), selection only on one trait, particularly in a small breeding population, will have unintended consequences like greater disease rates, shorter lifespans, etc (see Falconer 1960’s Introduction to Quantitative Genetics, Ch. 19 Correlated Characters).

This is why animal breeders do not select purely on a single valuable trait like egglaying rate but on an index of many traits, from maturity speed to disease resistance to lifespan; when breeding is done out of ignorance or with regard only to a few traits, one may wind up with problematic breeds like some purebred dog breeds which have serious health issues due to inbreeding, small founding populations, no selection against negative mutations popping up, and variants which increase the selected trait at the expense of another trait.

In our case, a weak polygenic score can be strengthened by better GWASes, but it can also be combined with other polygenic scores to do selection on multiple traits by summing the scores per embryo and taking the maximum. This can be done almost for free, since if one did sequencing on a comprehensive SNP array chip to compute 1 polygenic score, one probably has all the information needed. (Indeed, you could see selection on a single trait as a multiple selection procedure where all traits’ values are implausibly set to 0 except for 1 trait.) In reality, while some traits are of much more value than others, there are few traits with no value at all; an embryo which scores mediocrely on our primary trait may still have many other advantages which more than compensate, so why not check? (It is a general principle that more information is better than less.) Intelligence is valuable, but it’s also valuable to live a long time, have less risk for schizophrenia, lower BMI, be happier, and so on.

A quick demonstration of the possible gain is to compare a single normal deviate ($\mathcal{N}(0,1)$) vs picking the most extreme out of several normal deviates. With 1 deviate, our average is 0, and most of the time it will fall within ±1SD. But if we can pick the best out of batches of 10, we can generally get +1.53SD:

mean(replicate(100000, max(rnorm(10, mean = 0))))
# [1] 1.537378753

What if we have 4 different scores (with three downweighted substantially to reflect that they are less valuable)? We get 0.23SD for free:

mean(replicate(100000, max(   1*rnorm(10, mean = 0) +
                           0.33*rnorm(10, mean = 0) +
                           0.33*rnorm(10, mean = 0) +
                           0.33*rnorm(10, mean = 0))))
# [1] 1.769910562

This is just like selecting among multiple embryos: the more we have to pick from, the better the chance one will be particularly good. So in selecting embryos, we want to compute multiple polygenic scores for each embryo, weight them by the overall value of that trait, sum them to get a total score for each embryo, then select the best embryo for implantation.
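As a minimal sketch of that selection rule (the names here are hypothetical: rows of scoreMatrix are embryos, columns are per-trait polygenic score predictions in SD units, and weights are each trait's relative value):

## pick the embryo with the highest value-weighted sum of trait scores
selectEmbryo <- function(scoreMatrix, weights) { which.max(scoreMatrix %*% weights) }
scoreMatrix <- matrix(rnorm(10*3), ncol=3)   # 10 embryos x 3 hypothetical traits
selectEmbryo(scoreMatrix, c(1, 0.5, 0.5))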

The advantage of multiple polygenic scores follows from the fact that the sum of 2 normally-distributed independent random variables X & Y is distributed $\mathcal{N}(\mu_X + \mu_Y, \sigma_X^2 + \sigma_Y^2)$; that is, the variances add, so the standard deviation increases, so our expected maximum sample will increase. Recalling $E[Z] \leq \sigma \cdot \sqrt{2 \cdot \log(n)}$, increasing $\sigma$ beyond 1 will initially yield larger returns than increasing n past 9 (it looks linear rather than square-root, but embryo selection is zero-sum - the gain is shrunk by the weighting of the multiple variables), and so multiple selection should not be neglected. Using such a total score on n uncorrelated traits, as compared to alternative methods like selecting for 1 trait in each generation, is considerably more efficient: ~$\sqrt{n}$ times as efficient (Hazel & Lush 1943, The efficiency of three methods of selection).

We could rewrite simulateIVFCB to accept as parameters a series of polygenic score functions and simulate out each polygenic score and their sums; but we could also use the sum of random variables to create a single composite polygenic score - since the variances simply sum up ($\sigma_X^2 + \sigma_Y^2$), we can take the polygenic scores, weight them, and sum them.

combineScores <- function(polygenicScores, weights) {
    weights <- weights / sum(weights) # normalize to sum to 1
    # add variances, to get variance explained of total polygenic score
    sum(weights*polygenicScores) }

Let’s imagine a US example but with 3 traits now: IQ, plus 2 traits we consider to be roughly half as valuable as IQ but which have better polygenic scores available, of 60% and 5%. What sort of gain can we expect above our starting point?

weights <- c(1, 0.5, 0.5)
polygenicScores <- c(selzam2016, 0.6, 0.05)
summary(simulateIVFCBs(9, 4.6, combineScores(polygenicScores, weights), 0.3, 0.90, 10.8/100, 1500, 200, iqHigh))
#     Trait.SD               Cost              Net
#  Min.   :0.00000000   Min.   :1500.00   Min.   : -3900.00
#  1st Qu.:0.00000000   1st Qu.:1700.00   1st Qu.: -1900.00
#  Median :0.00000000   Median :1900.00   Median : -1500.00
#  Mean   :0.07524308   Mean   :2039.25   Mean   : 16189.51
#  3rd Qu.:0.11491090   3rd Qu.:2300.00   3rd Qu.: 25638.72
#  Max.   :1.00232683   Max.   :4100.00   Max.   :241128.71

So we more than double our gains (a mean net of ~$16.2k vs $6.2k) by considering 3 traits instead of 1.

Multiple selection on independent traits

A more realistic example would be to use some of the existing polygenic scores for complex traits, of which a number are available. Perhaps a little counterintuitively, to maximize the gains, we want to focus on universal traits such as IQ, or common diseases with high prevalence; the more horrifying genetic diseases are rare precisely because they are horrifying (natural selection keeps them rare), so focusing on them will only occasionally pay off.19

Here are 7 I looked up and was able to convert to relatively reasonable gains/losses (the weight arithmetic quoted in each item is collected into a single code block after the list):

  1. IQ (using the previously given value and Selzam et al 2016 polygenic score, and excluding any valuation of the 7% of family SES & 9% of education that the IQ polygenic score comes with for free)
  2. height

    The literature is unclear what the best polygenic score for height is at the moment; let’s assume that it can predict most but not all, like ~60%, of variance with a population standard deviation of ~4 inches; the economics estimate is $800 of annual income per inch or a NPV of $16k per inch or $65k per SD, so we would weight it as a quarter as valuable as the high IQ estimate (((800/log(1.05))*4) / iqHigh ~> 0.27). The causal link is not fully known, but a Mendelian randomization study of height & BMI supports causal estimates of $300/$1616 per SD respectively, which shows the correlations are not solely due to confounding.
  3. BMI/obesity

    Polygenic scores: 2.7%/7.1%/15.3%, population SD for men is 4.67 (women: 5.86). Cost is a little trickier (low BMI can be as bad as high BMI, lots of costs are not paid by individuals, etc) but one could say there’s an average marginal cost of $175 per year per adult for a 1 unit change in BMI for each adult in the U.S. population. Then we’d get a weight of 7% (((175/log(1.05))*4.67) / iqHigh ~> 0.069).
  4. type 2 diabetes

    Morris et al 2012’s supplementary information reports a polygenic score predicting 5.73% on the liability scale.

    Diabetes is not a continuous trait like IQ/height/BMI, but generally treated as a binary disease: you either have good blood sugar control and will not go blind and suffer all the other morbidity caused by diabetes, or you don’t. The underlying genetics is still highly polygenic and mostly additive, though, and in some sense one’s risk is normally distributed.

    The liability threshold model is the usual genetics model for dealing with binary variables like this: one’s latent risk is considered a normal variable (the sum of many individual variables, both genetic and environmental/random), and when one is unlucky enough for this risk to be enough standard deviations out past a threshold, one has the disease. The ‘enough standard deviations’ is set empirically: if 1% of the population will develop schizophrenia, then one has to be +2.33SD (qnorm(0.01)) out to develop schizophrenia, and assuming a mean risk of 0, one can then calculate the effects of an increase or decrease of 1SD.

    For example, if some change decreases one’s risk score by 1SD, such that it would now take another 3.33SD to develop schizophrenia, then one’s probability of developing schizophrenia has decreased from 1% to 0.04%, a fall of ~23x (pnorm(qnorm(0.01)) / pnorm(qnorm(0.01)-1) ~> 22.73), and so whatever one estimated the expected loss of schizophrenia at, it has decreased ~23x, and the change of 1SD can be valued at that. And vice versa for an increase: an increase of 1SD in latent risk will increase the probability of developing schizophrenia several-fold, and the expected loss must be increased accordingly.

    So if we have a polygenic score for schizophrenia which can produce a reduction (out of, say, 10 embryos) of 0.10SDs, a population prevalence of 1%, and a lifetime cost of $1m, then the expected reduction would be from 1% to 0.762%, or from an expected loss of $10000 (1m * 1%) to $7625 (1m * 0.762%), and the value of that roughly one-quarter reduction would be around a quarter of the original expected loss. One consequence of this is that as a disorder becomes rarer, selection becomes worth less; or to put it another way, people with a high risk of passing on schizophrenia (such as a diagnosed schizophrenic) will benefit far more: the child of 1 schizophrenic parent (and no other affected relatives) has a ~10% chance of developing schizophrenia, and of 2 schizophrenic parents, ~40%, implying thresholds of 1.28SDs and 0.25SDs respectively. Putting it together, we can compute the value like this:

    liabilityThresholdValue <- function(populationFraction, gainSD, value) { reducedFraction <- pnorm(qnorm(populationFraction) + gainSD)
                                                                             difference      <- (populationFraction - reducedFraction) * value
                                                                             return(c(reducedFraction, difference)) }
    liabilityThresholdValue(0.01, -0.1, 1000000)
    # [1] 7.625821493e-03 2.374178507e+03
    liabilityThresholdValue(0.10, -0.1, 1000000)
    # [1] 8.355471719e-02 1.644528281e+04
    liabilityThresholdValue(0.40, -0.1, 1000000)
    # [1] 3.619141184e-01 3.808588159e+04
    3.808588159e+04 / 2.374178507e+03
    # [1] 16.04170937
    Similarly for diabetes. We can estimate the NPV of not developing diabetes at as much as $124,600; the lifetime risk of diabetes in the USA is approaching ~40% and has probably exceeded it by now (implying, incidentally, that diabetes is one of the most costly diseases in the world), so the expected loss is $49840 and developing diabetes corresponds to a liability threshold of ~0.25SD; a decrease of 1SD in liability cuts one’s chance of developing diabetes to roughly a quarter of baseline (pnorm(qnorm(0.40)-1) / pnorm(qnorm(0.40)) ~> 0.26), for a savings of ~$36.9k ((124600 * 0.4) - (124600 * 0.4 * 0.26) ~> 36881); finally, at ~$36.9k/SD, compared with IQ, it gets a weight of 15%.
  5. ADHD

    ADHD polygenic scores range from 0.098%/0.4%/0.5%/0.59%/1.5%. Prevalence rates differ based on country & diagnosis method, but most genetics studies were run using DSM diagnoses in the West, so ~7% of children affected. Biederman & Faraone 2006 find large harmful correlations, estimating a -$8900 annual loss from ADHD or ~$182k NPV. So the best score is 1.5%; the liability threshold is 1.47SD; the starting expected loss is ~$12768; a 1SD reduction is then worth ~$11.5k (182000*pnorm(qnorm(0.07)) - 182000*pnorm(qnorm(0.07)-1) ~> 11530) and has a weight of 4.7%.
  6. bipolar disorder

    Scores: 1.2%/1.4%/2.83% (supplement).

    Frequency is ~3%. Ranking after schizophrenia & depression, BPD is likewise expensive, associated with lost work, social stigma, suicide etc. A 1991 study estimates a total annual loss of $45 billion but doesn’t give a lifetime per capita estimate, so to estimate that: in 1991, there were ~253 million people in the USA, life expectancy was ~75 years, and the quoted 1991 lifetime prevalence was 1.3%; if a few million people with BPD generate a total loss of $45b in 1991 dollars each year, and each person lives ~75 years, then that suggests an average lifetime total loss of ~$1026147, which inflation-adjusted to 2016 dollars is $1784953, and this has a NPV at 5% of $87k ((45000000000 / (253000000 * 0.013)) * 75 ~> 1026147; 1784953 * log(1.05) ~> 87088.1499). With a relatively low base-rate, the savings is not huge and it gets a weight of 0.01 ((87088*pnorm(qnorm(0.03)) - 87088*pnorm(qnorm(0.03)-1)) / iqHigh ~> 0.01007).
  7. schizophrenia

    Scores: 3%/3.4%/5.5%/6.35%/18.4%. Frequency is ~1%. Schizophrenia is even more notoriously expensive worldwide than BPD, with 2002 USA costs estimated by Wu et al 2005 at $15464 in direct & $22032 in indirect costs per patient, or a total of $49379 in 2016 dollars, for a weight of 4% (49379 / log(1.05) ~> 1012068; (1012068*pnorm(qnorm(0.01)) - 1012068*pnorm(qnorm(0.01)-1)) ~> 9675.41; 9675/iqHigh ~> 0.039).
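
To keep the arithmetic honest, the weight calculations quoted in the items above can be re-derived in one place (a sanity check only, reusing the numbers already given; iqHigh is the high-end value of an IQ point used throughout and defined again in the code further below):

iqHigh <- 16151*15
## height: $800/year/inch, NPV'd, over a 4-inch SD:
((800/log(1.05)) * 4) / iqHigh                                        # ~0.27
## BMI: $175/year per BMI unit over a 4.67-unit SD:
((175/log(1.05)) * 4.67) / iqHigh                                     # ~0.07
## type 2 diabetes: 40% lifetime risk, $124,600 NPV; value of a 1SD liability reduction:
(124600*pnorm(qnorm(0.40)) - 124600*pnorm(qnorm(0.40)-1)) / iqHigh    # ~0.15
## ADHD: 7% prevalence, ~$182k NPV:
(182000*pnorm(qnorm(0.07)) - 182000*pnorm(qnorm(0.07)-1)) / iqHigh    # ~0.047
## bipolar disorder: 3% prevalence, ~$87k NPV:
(87088*pnorm(qnorm(0.03)) - 87088*pnorm(qnorm(0.03)-1)) / iqHigh      # ~0.01
## schizophrenia: 1% prevalence, ~$1m NPV:
(1012068*pnorm(qnorm(0.01)) - 1012068*pnorm(qnorm(0.01)-1)) / iqHigh  # ~0.04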

The low weights suggest we won’t see a 6x scaling from adding 6 more traits, but we still see a substantial gain from multiple selection - up to $14k, or ~2.35x better than IQ alone:

polygenicScores <- c(selzam2016,   0.6,  0.153, 0.0573, 0.015, 0.0283, 0.184)
weights <-         c(1,            0.27, 0.07,  0.15,   0.047, 0.01,   0.04)
summary(simulateIVFCBs(9, 4.6, combineScores(polygenicScores, weights), 0.3, 0.90, 10.8/100, 1500, 200, iqHigh))
#     Trait.SD               Cost              Net
#  Min.   :0.00000000   Min.   :1500.00   Min.   : -3900.00
#  1st Qu.:0.00000000   1st Qu.:1700.00   1st Qu.: -1900.00
#  Median :0.00000000   Median :1900.00   Median : -1500.00
#  Mean   :0.06892926   Mean   :2045.56   Mean   : 14653.59
#  3rd Qu.:0.10634265   3rd Qu.:2300.00   3rd Qu.: 23516.44
#  Max.   :1.02970634   Max.   :4300.00   Max.   :247561.81

14653 / 6230
# [1] 2.352006421

Note that this gain would be larger under lower values of IQ, as then more emphasis will be put on the other traits. Values may also be substantially underestimated because there are many more traits with polygenic scores than just the 7 used here, and for the mental health traits because they pervasively overlap genetically (indeed, in 1 case for ADHD, the schizophrenia/bipolar polygenic scores were better predictors of ADHD status than the ADHD polygenic score was!); counterbalancing this underestimation is that the long-noted correlation between schizophrenia & creativity is turning out to also be genetic, so the gain from reduced schizophrenia/bipolar/ADHD is a tradeoff coming at some cost to creativity.

In any case, in theory and in practice, selection on multiple traits will be much more effective than selecting on one trait.

Multiple selection on genetically correlated traits

In single selection, the embryo selected is picked from the batch solely based on its polygenic score on 1 trait, even if the gain is small and some of the other embryos have large genetic advantages on other, almost as important, traits. In multiple selection, we take the maximum from the embryos based on all the scores summed together, allowing for excellence on 1 trait or general high quality on a few other traits.

What sort of advantage do we expect? It’s not as simple as generating some random numbers independently from a distribution and then summing them, because the actual genetic scores will turn out to be intercorrelated: a high polygenic score for intelligence will also tend to lower the BMI polygenic score, and a high BMI polygenic score will increase the childhood obesity polygenic score or the smoking polygenic score because they genetically overlap on the same SNPs. In fact, all traits will tend to be a little (or a lot) genetically correlated. If we ignore this, we may badly over or underestimate the advantage of multiple selection: the advantage of selection on a good trait may be partially negated if it drags in a bad trait, or the advantage may be amplified if it comes with other good traits.

We need a dataset giving the pairwise genetic correlations of a lot of important traits, and then we can generate hypothetical multivariate sets of polygenic scores which follow what the real-world distribution of polygenic scores would look like, and then we can sum them up, maximize, and see what sort of gain we have.

A specific genetic correlation can be estimated from twin studies, or as part of GWAS studies using an algorithm like GCTA or LD score regression. LD score regression has the notable advantage of being usable on solely the polygenic scores for individual traits released by GWASes, without requiring the same subjects to be phenotyped or access to subject-level data, and computationally tractable; hence it is possible to collect various publicly released polygenic scores for any traits and calculate the correlations for all pairs of traits.

This has been done by LD Hub (described in Zheng et al 2016), which provides a web interface to an implementation of LD score regression and >100 public polygenic scores which are now also available for estimating SNP heritability or genetic correlations. Zheng et al 2016 describes the initial correlation matrix for 49 traits, a number of which are of practical interest in multiple selection; the spreadsheet can be downloaded, saved as CSV, and the first lines edited to provide a usable file in R.

Several of the traits are redundant or overlapping: it is scientifically useful to know that height as measured in one study is the same thing as measured in a different study (implying that the relevant genetics in the two populations are the same, and that the phenotype data was collected in a similar manner, which is something you might take for granted for a trait like height, but would be in considerable doubt for mental illnesses), but we really don’t need 4 slightly different traits related to tobacco use or ~9 traits about obesity. So before turning it into a correlation matrix, we need to drop those, leaving us with 34 relevant traits:

rg <- read.csv("http://www.gwern.net/docs/genetics/correlation/2016-zheng-ldhub-49x49geneticcorrelation.csv")
# delete redundant/overlapping/obsolete ones:
dupes <- c("BMI 2010", "Childhood Obesity", "Extreme BMI", "Obesity Class 1", "Obesity Class 2",
           "Obesity Class 3", "Overweight", "Waist Circumference", "Waist-Hip Ratio", "Cigarettes per Day",
           "Ever/Never Smoked", "Age at Smoking", "Extreme Height", "Height 2010")
rgClean <- rg[!(rg$Trait1 %in% dupes | rg$Trait2 %in% dupes),]
rgClean <- subset(rgClean, select=c("Trait1", "Trait2", "rg"))

library(reshape2)
rgMatrix <- acast(rgClean, Trait2 ~ Trait1)

# convert from half-matrix to full symmetric matrix
library(psych)
rgMatrix <- lowerUpper(t(rgMatrix), rgMatrix)
rgMatrix[is.na(rgMatrix)] <- 1

For a baseline, let’s revisit the single selection case, in which we have 1 trait where higher=better with a heritability of 0.33, choosing from 10 half-related embryos: we can get +0.62SD in this case. For a multiple selection version, we can consider a correlation matrix for 34 traits in which every trait is uncorrelated, with the same settings (higher=better, 0.33 heritability, 10 half-related siblings): with more traits to sum, the best choice becomes much better, giving us +3.644SDs on an overall index. Finally, what if we populate the correlation matrix with genetic correlations like those in the LD Hub dataset (ignoring the issues of trait-specific heritabilities, losses/gains, and available polygenic scores)? Do we get less or more than 3.644SD because traits cancel out? We get much more, +5.19SDs, because the pattern of correlations turns out to be favorable.

mean(replicate(100000, max(rnorm(10, mean=0, sd=sqrt(0.33*0.5)))))
# [1] 0.6250647743

independent <- matrix(ncol=34, nrow=34, 0)
diag(independent) <- 1

library(mvtnorm)
library(MBESS)
mean(replicate(100000, max(rowSums(rmvnorm(10, sigma=cor2cov(independent, sd=rep(sqrt(0.33*0.5),34)), method="svd")))))
# [1] 3.644660885
mean(replicate(100000, max(rowSums(rmvnorm(10, sigma=cor2cov(rgMatrix, sd=rep(sqrt(0.33*0.5),34)), method="svd")))))
# [1] 5.199043247

Next we consider what happens when we include SNP heritabilities (which set upper bounds on the polygenic scores, but see earlier GCTA discussion on why they’re loose upper bounds in practice). The heritabilities for 173 traits are provided by LD Hub in a different spreadsheet, but the trait names don’t always match up with those in the correlation spreadsheet, so I had to match them manually. (The height heritability is also missing from the heritability page & spreadsheet, so I borrowed a GCTA estimate from Trzaskowski et al 2016.) While we’re at it, I classified the traits by desirability to consistently set higher=better:

utilities <- read.csv(stdin(), header=TRUE, colClasses=c("factor", "factor", "numeric","integer"))
"Trait","Measurement.type","H2_snp","Sign"
"ADHD","d",0.2573,-1
"Age at Menarche","c",0.183,1
"Alzheimer's","d",0.0688,-1
"Anorexia","d",0.559,-1
"Autism Spectrum","d",0.559,-1
"Bipolar","d",0.432,-1
"Birth Length","c",0.1697,-1
"Birth Weight","c",0.1124,1
"BMI","c",0.1855,-1
"Childhood IQ","c",0.2735,1
"College","d",0.0563,1
"Coronary Artery Disease","d",0.0781,-1
"Crohn's Disease","d",0.4799,-1
"Depression","d",0.1745,-1
"Fasting Glucose","c",0.0984,-1
"Fasting Insulin","c",0.0695,-1
"Fasting Proinsulin","c",0.1443,1
"Former/Current Smoker","d",0.0645,-1
"HbA1C","c",0.0656,-1
"HDL","c",0.116,1
"Height","c",0.69,1
"Hip Circumference","c",0.1266,-1
"HOMA-B","c",0.0888,-1
"HOMA-IR","c",0.0686,-1
"Infant Head Circumference","c",0.2352,-1
"LDL","c",0.1347,1
"Lumbar Spine BMD","c",0.2684,1
"Neck BMD","c",0.2977,1
"Rheumatoid Arthritis","d",0.161,-1
"Schizophrenia","d",0.4541,-1
"T2D","d",0.0872,-1
"Total Cholesterol","c",0.1014,-1
"Triglycerides","c",0.1525,-1
"Ulcerative Colitis","d",0.2631,-1


mean(replicate(100000, max(rowSums(rmvnorm(10, sigma=cor2cov(independent, sd=rep(sqrt(0.33*0.5),34)), method="svd") * utilities$Sign))))
# [1] 3.638056955
mean(replicate(100000, max(rowSums(rmvnorm(10, sigma=cor2cov(rgMatrix, sd=rep(sqrt(0.33*0.5),34)), method="svd") * utilities$Sign))))
# [1] 4.159339696

mean(replicate(100000, max(rowSums(rmvnorm(10, sigma=cor2cov(independent, sd=sqrt(utilities$H2_snp * 0.5)), method="svd") * utilities$Sign))))
# [1] 2.936800189
mean(replicate(100000, max(rowSums(rmvnorm(10, sigma=cor2cov(rgMatrix, sd=sqrt(utilities$H2_snp * 0.5)), method="svd") * utilities$Sign))))
# [1] 3.286329413

Re-estimating with higher=better corrected, the original multiple selection gain turns out to have been somewhat overestimated. Adding the real trait heritabilities, we see that the gains to multiple selection remain large compared to single selection (2.9 or 3.3SDs vs 0.6SDs), and that the genetic correlations do not substantially reduce the gains to multiple selection but in fact benefit it, adding +0.35SD.

Continuing onward: if multiple selection is helpful, what sort of net benefit to selection would we get after assigning some reasonable costs to each trait & using current polygenic scores?

Coming up with that information for 34 traits in the same detail as I have for intelligence would be extremely challenging, so I will settle for some quicker and dirtier estimates (the discounting arithmetic used below is collected into a short code block after the list); in cases where the causal impact is not clear or I cannot find reasonably reliable cost estimates, I will simply drop the trait (which will be conservative and underestimate possible gains from multiple selection). The traits:

  • Age at Menarche: Perry et al 2014 reports a polygenic score explaining 15.8% of variance. Day et al 2016 demonstrates a causal impact of early puberty on earlier first sexual intercourse, earlier first birth and lower educational attainment, consistent with the intercorrelations (strong negative correlation with childhood IQ); clearly the sign should be negative, and early puberty has been linked to all sorts of problems (Atlantic: greater risk for breast cancer, teen pregnancy, HPV, heart disease, diabetes, and all-cause mortality, which is the risk of dying from any cause. There are psychological risks as well. Girls who develop early are at greater risk for depression, are more likely to drink, smoke tobacco and marijuana, and tend to have sex earlier.) but no costs are available
  • Alzheimer’s disease: Escott-Price et al 2015 report a polygenic score of 0.021 for AD. The lifetime risk at age 65 is 9% men & 17% women; few people die before age 65 so I’ll take the average 13% as the lifetime risk at birth (since Alzheimer’s rates tend to increase, this should be conservative for future rates). Costs rise steeply before death as the dementia cripples the patient, imposing extraordinary costs for daily care & on families & caregivers. USA total costs have been estimated at >$200b; for dementia, the last 5 years of life can incur $287k in costs. Discounting Alzheimer treatment cost is a little tricky: unlike height/BMI/IQ or BPD which we could treat on an annual cost/gain basis and discount out indefinitely, that $287k of expenses will only be incurred 60+ years after birth on average. We can treat it as a single lump sum expense incurred 70 years in the future, discounted at 5% (as usual to be conservative): 287000 / (1+0.05)^70 ~> 9432
  • Anorexia: ~1% prevalence. Boraska et al 2014 does not report a polygenic score.
  • Autism: ~1.4% prevalence. Clarke et al 2016 report that the earlier PGS results found 17% of liability explained (though this does not seem to be reported in the cited original paper/appendix that I can find). Cost ~$4m.
  • Birth length, weight: skip as difficult to pin down the causal effects
  • College: ~42% of younger Americans have a college degree. Rietveld reports a polygenic score ~2% for both college & years of education. A college degree is worth an estimated $250k+ for an American.
  • Coronary Artery Disease: ~40% lifetime risk. Deloukas et al 2013 report a limited polygenic score explaining 10.6% of the estimated 40% additive heritability, or 4.24% of variance. Cardiovascular diseases are some of the most common, expensive, and fatal diseases, and US costs range into the hundreds of billions of dollars. Birnbaum et al 2003 estimates annual costs of ~$7k up to age 64 but then ~$31k annually afterwards, for total lifetime costs of $599k. Around half of people will be diagnosed by ~age 60, so at a first cut, we might discount it at 599000 / (1+0.05)^60 ~> 32067 or $32k.
  • Crohn’s disease: 0.32% incidence. Jostins et al 2012 reports a polygenic score explaining 13.6% of variance. Crohn’s strikes young and lasts a lifetime; PARA estimates $8330 annually or $374850 over the estimated 45 years after diagnosis around age 20, suggesting a discounting of (8330/log(1.05)) / ((1+0.05)^20) ~> 64346.
  • Major depressive disorder: 17% lifetime rate. Sullivan et al 2013 reports a polygenic score of 0.6%. (Hyde et al 2016’s polygenic score used only the top 17 SNPs, and they don’t report the variance explained of MDD, just the secondary phenotypes.) Another major burden of disease, both common and crippling and frequently fatal, depression has large direct costs for treatment and larger indirect costs from wages, worse health etc. Smith et al finds children with depression have $300k less lifetime income, which doesn’t take into account the medical treatment costs or suicide etc and is a lower bound. I can’t find any lifetime costs so I will guesstimate that as the total cost for adults, starting at age 32, giving ~$63k as the cost.
  • Fasting glucose/insulin/proinsulin, HbA1C: skip as their effects should be covered by diabetes.
  • Former/Current Smoker: ~42% of the American population circa 2005 has smoked >100 cigarettes (although by 2016 currently smoking adults were down to ~15% of the population). Supplemental material for Vink et al 2014 reports a polygenic score for ever smoking of 6.7%. The lifetime cost of tobacco smoking includes the direct cost of tobacco, increased lung cancer risk, lower work output, fires, general worsened health, any second hand or fetal effects, and early mortality; the cost, from various perspectives (individual vs national healthcare systems etc) has been heavily debated, but I think it’s safe to put it at least $100k over a lifetime or $27k discounted.
  • Hip Circumference: should be covered by BMI
  • HOMA-B/HOMA-IR/Lumbar Spine BMD/Neck BMD: I have no idea where to start with these, so skipping
  • Infant Head Circumference: should be covered by IQ and education?
  • Rheumatoid Arthritis: 2.65%. MHC hits explain ~12% and Okada et al 2014 report a polygenic score providing another 5.5%. Cooper 2000 estimates total annual costs for RA at ~$11542/year and cites a Stone 1984 estimate of lifetime cost of $15,504 ($35,909 in 2016); with typical age of onset around 60, the total annual cost might be discounted to $13k.
  • LDL, total cholesterol, triglycerides: harmful effects should be redundant with coronary artery disease
  • Ulcerative colitis: 0.3%. Jostins et al 2012 reports a polygenic score of 7.5%. Cohen et al 2010 & Park & Bass 2011 report $15k medical expenses annually & $5k employment loss. With mean age diagnosis of ~35 (da Silva et al 2014), something like (20000/log(1.05)) / ((1+0.05)^35) ~> 74314.
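
The discounting arithmetic used in the list above can be collected into two small helper functions (a sketch of the two conventions used: a lump sum incurred some years in the future, and a perpetual annual cost starting some years in the future, both discounted at 5%; the helper names are mine):

## lump-sum cost incurred a given number of years in the future:
discountLump   <- function(cost, years, rate=0.05) { cost / (1+rate)^years }
## perpetual annual cost starting a given number of years in the future (the annual/log(1.05) NPV used throughout):
discountAnnual <- function(annual, delay, rate=0.05) { (annual/log(1+rate)) / (1+rate)^delay }
discountLump(287000, 70)    # Alzheimer's: ~9432
discountLump(599000, 60)    # coronary artery disease: ~32067
discountAnnual(8330, 20)    # Crohn's disease: ~64346
discountAnnual(20000, 35)   # ulcerative colitis: ~74314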

This gives us 16 usable traits:

liabilityThresholdValue <- function(populationFraction, gainSD, value) {
    if (value<0) {
          fraction <- pnorm(qnorm(populationFraction) - gainSD)
        } else {
          fraction <- pnorm(qnorm(populationFraction) + gainSD) }
    gain  <- (fraction - populationFraction) * value
    return(gain)
    }
## handle both continuous & dichotomous traits:
polygenicValue <- function(populationFraction, value, polygenicScore, n=10) {
    gainSD <- embryoSelection(n=n, variance=polygenicScore)
    if (populationFraction==1) {
        ## continuous trait: SD gain times the per-SD value (flip the gain's sign for harmful traits)
        if (value<0) { gainSD <- -gainSD }
        return(gainSD*value)
    } else {
        ## dichotomous trait: the value of increasing the healthy fraction
        liabilityThresholdValue(populationFraction, gainSD, value) } }
## examples for single selection: BMI
polygenicValue(1, -16750, 0.153)
# [1] 7125.151193
## example: IQ
iqHigh <- 16151*15
selzam2016 <- 0.035
polygenicValue(1, iqHigh, selzam2016)
# [1] 49347.3885
## example: bipolar
polygenicValue(0.03, -87088, 0.0283)
# [1] 911.28581092779
## example: college
polygenicValue(0.42, 250000, 0.03)
# [1] 18670.466258862

utilitiesScores <- read.csv(stdin(), header=TRUE, colClasses=c("factor","factor","numeric", "numeric","numeric"))
Trait, Measurement.type, Prevalence, Cost, Polygenic.score
"ADHD","d", 0.07, -182000, 0.015
"Age at Menarche","c", 1,0,0.158
"Alzheimer's","d",0.13,-9432,0.021
"Anorexia","d",0.01,0,0
"Autism Spectrum","d",0.014,-4000000, 0.17
"Bipolar","d", 0.03, -87088, 0.0283
"Birth Length","c",1,0,0
"Birth Weight","c",1,0,0
"BMI","c",1, -16750, 0.153
"Childhood IQ","c", 1, 242265, 0.035
"College","d",0.42,250000,0.03
"Coronary Artery Disease","d",0.404,-32000,0.0424
"Crohn's Disease","d",0.0032,-64346,0.136
"Depression","d",0.17,-62959,0.006
"Fasting Glucose","c",1,0,0
"Fasting Insulin","c",1,0,0
"Fasting Proinsulin","c",1,0,0
"Former/Current Smoker","d",0.42,-27327,0.067
"HbA1C","c",1,0,0
"HDL","c",1,0,0
"Height","c",1, 1616, 0.60
"Hip Circumference","c",1,0,0
"HOMA-B","c",1,0,0
"HOMA-IR","c",1,0,0
"Infant Head Circumference","c",1,0,0
"LDL","c",1,0,0
"Lumbar Spine BMD","c",1,0,0
"Neck BMD","c",1,0,0
"Rheumatoid Arthritis","d",0.0265,-12664,0.175
"Schizophrenia","d",0.01, -49379, 0.184
"T2D","d", 0.40, -124600, 0.0573
"Total Cholesterol","c",1,0,0
"Triglycerides","c",1,0,0
"Ulcerative Colitis","d",0.003,-74314,0.075


utilitiesScores$Value <- with(utilitiesScores,
                     ifelse((Measurement.type=="c"),
                           Cost,
                           unlist(Map(liabilityThresholdValue, Prevalence, 1, Cost))))

mean(replicate(10000, max(rowSums(rmvnorm(10, sigma=cor2cov(rgMatrix,
    sd=sqrt(0.000001+utilitiesScores$Polygenic.score * 0.5)), method="svd") * utilitiesScores$Value))))
# [1] 34218.17767

mean(replicate(10000, max(rowSums(rmvnorm(10, sigma=cor2cov(rgMatrix,
    sd=sqrt(utilities$H2_snp * 0.5)), method="svd") * utilitiesScores$Value))))
# [1] 146147.3353

So with current polygenic scores, we could expect a gain of ~$70k out of 10 embryos (at least, before the inevitable losses of the IVF process), which, as expected, is more than IQ on its own ($49k). We could also take a look at the expected gain if we had perfect polygenic scores equal to the SNP heritabilities; then we would get as much as $146k.

Embryo selection versus alternative breeding methods

Genomic selection is increasingly used in animal and plant breeding because it can be used before phenotypes are measurable for faster breeding, and polygenic scores can also correct phenotypic measurements for measurement error & environment. This mention of measurement error understates the value - in the case of a binary or dichotomous or threshold trait, there is only a weak population-wide measurable correlation between genetic liability and whether the trait actually manifests. And the rarer the trait, the worse this is. Returning to schizophrenia as an example, only 1% of the population will develop it, even though it is hugely influenced by genetics; this is because there is a large reservoir of bad variants lurking in the population, and only once in a blue moon do enough bad variants cluster in a single person who, exposed to the wrong nonshared environment, develops full-blown schizophrenia. Any sort of selection based on schizophrenia status will be slow, and will get slower as schizophrenia becomes rarer & cases appear less often. However, if one knew all the variants responsible, one could look directly at the whole population, rank by liability score, and select based on that. What sort of gain might we expect?

First, we could consider the change in liability scores from simple embryo selection on schizophrenia with the Ripke et al 2014 polygenic score of 18.4%:

mean(simulateIVFCBs(3, 4.6, 0.184, 0.5, 0.96, 0.24, 0, 0, 0)$Trait.SD)
# [1] 0.06622272431

So if embryo selection on schizophrenia were applied to the whole population, we could expect to decrease the liability score by ~0.07SDs in the first generation, which would take us from 1% to ~0.84% population prevalence, a ~16% reduction:

0.01 + liabilityThresholdValue(0.01, -0.066, 1)
# [1] 0.008370483248

An alternative to embryo selection would be truncation selection: selecting all members of a population which pass a certain phenotypic threshold and breeding from them (eg letting only people over 110IQ reproduce, or in the other direction, not letting any schizophrenics reproduce). This is one of the most easily implemented breeding methods, and is reasonably efficient.

For a continuous trait, truncation selection’s effect is easy to calculate via the breeder’s equation: the increase (in phenotypic SDs) is given by the selection intensity times the heritability, where the selection intensity of a particular truncation threshold t is given by dnorm(qnorm(t))/(1-t). So if, for example, only the upper third of a population by IQ was allowed to reproduce, this truncation selection would yield an increase of ~13 IQ points:

t=0.66; (dnorm(qnorm(t))/(1-t)) * 0.8 * 15
# [1] 12.93212934

This is noticeably larger than we would get with current polygenic scores for education/intelligence, and shows that for highly heritable continuous traits, it’s hard to beat selection on phenotypes.
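
For comparison, the same expected-maximum simulation used earlier gives the gain from 1-in-10 embryo selection on the Selzam et al 2016 polygenic score (before any of the IVF losses modeled previously):

mean(replicate(100000, max(rnorm(10, sd=sqrt(0.035*0.5))))) * 15
# ~3 IQ points, versus ~13 IQ points from the phenotypic truncation selection above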

The effect of a generation of truncation selection on a binary trait following the liability-threshold model is more complicated but follows a similar spirit. A discussion & formula is on pg6 of Chapter 14: Short-term Changes in the Mean: 2. Truncation and Threshold Selection; I’ve attempted to implement it in R:

threshold_select <- function(fraction_0, heritability) {
    library(VGAM) ## for 'probit'
    fraction_probit_0 = probit(fraction_0)
    ## threshold for not manifesting schizophrenia:
    s_0 = dnorm(fraction_probit_0) / fraction_0
    ## new rate of schizophrenia after one selection where 100% of schizophrenics never reproduce:
    fraction_probit_1 = fraction_probit_0 + heritability * s_0
    fraction_1 = pnorm(fraction_probit_1)
    ## how much did we reduce schizophrenia in percentage terms?
    print(paste0("Start: population fraction: ", fraction_0, "; liability threshold: ", fraction_probit_0, "; Selection intensity: ", s_0))
    print(paste0("End: liability threshold: ", fraction_probit_1, "; population fraction: ", fraction_1, "; Total population reduction: ",
                 fraction_0 - fraction_1, "; Percentage reduction: ", (1-((1-fraction_1) / (1-fraction_0)))*100))
}

Assuming 1% prevalence & 80% heritability, 1 generation of truncation selection would yield ~5% decrease in schizophrenia:

threshold_select(0.99, 0.80)
# [1] "Start: population fraction: 0.99; liability threshold: 2.32634787404084; Selection intensity: 0.0269213557610688"
# [1] "End: liability threshold: 2.3478849586497; population fraction: 0.99055982415415; Total population reduction: -0.000559824154150346; Percentage reduction: 5.59824154150346"

(This ignores that schizophrenics already reproduce less and there should be ongoing selection against schizophrenia, and in a sense, truncation selection is already being done, so the ~5% is a bit of an overestimate.)

Thus, for rare binary traits, genomic selection methods can do much better than phenotypic selection methods. Which one works better will depend on the details of how rare a trait is, the heritability, available polygenic scores, available embryos etc. Of course, there’s no reason that they can’t both be used, and both methods can be improved by taking into account other information like family histories.

Iterated embryo selection

Aside from regular embryo selection, Shulman & Bostrom note the possibility of iterated embryo selection, where after the selection step, the highest-scoring embryo’s cells are regressed back to stem cells, to be turned into fresh embryos which can again be sequenced & selected on, and so on for as many cycles as feasible. The benefit here is that in exchange for the additional work, one can combine the effects of many generations of embryo selection to produce a live baby which is equivalent to selecting out of hundreds or thousands or millions of embryos. (10 cycles is much more effective than selecting on, say, 10x the number of embryos because it acts like a ratchet: each new batch of embryos is distributed around the genetic mean of the previous iteration, not the original embryo, and so the 1 or 2 IQ points accumulate.) As they summarize it:

Stem-cell derived gametes could produce much larger effects: The effectiveness of embryo selection would be vastly increased if multiple generations of selection could be compressed into less than a human maturation period. This could be enabled by advances in an important complementary technology: the derivation of viable sperm and eggs from human embryonic stem cells. Such stem-cell derived gametes would enable iterated embryo selection (henceforth, IES):

  1. Genotype and select a number of embryos that are higher in desired genetic characteristics;
  2. Extract stem cells from those embryos and convert them to sperm and ova, maturing within 6 months or less (Sparrow, 2013);
  3. Cross the new sperm and ova to produce embryos;
  4. Repeat until large genetic changes have been accumulated.

Iterated embryo selection has recently drawn attention from bioethics (Sparrow, 2013; see also Miller, 2012; Machine Intelligence Research Institute, 2009 [and Suter 2015]) in light of rapid scientific progress. Since the Hinxton Group (2008) predicted that human stem cell-derived gametes would be available within ten years, the techniques have been used to produce fertile offspring in mice, and gamete-like cells in humans. However, substantial scientific challenges remain in translating animal results to humans, and in avoiding epigenetic abnormalities in the stem cell lines. These challenges might delay human application 10 or even 50 years in the future (Cyranoski, 2013). Limitations on research in human embryos may lead to IES achieving major applications in commercial animal breeding before human reproduction. If IES becomes feasible, it would radically change the cost and effectiveness of enhancement through selection. After the fixed investment of IES, many embryos could be produced from the final generation, so that they could be provided to parents at low cost.

IES probably will work, the concept is extremely promising, and progress is still being made on it (eg Irie et al 2015, Zhou et al 2016, Zhang et al 2016), but it suffers from two main problems as far as a cost-benefit evaluation goes:

  1. application to human cells remains largely hypothetical, and it is difficult for any outsider to understand how effective current induced pluripotency methods for pluripotent stem cell-derived gametes are: how much will the mouse research transfer to human cells? How reliable is the induction? What might be the long-term effects - or in the case of iterating it, what may be the short-term effects? Is this 5 years or 20 years away from practicality? What does the process cost at the moment, and what sort of lower limit on materials & labor costs can we expect from a mature process?
  2. IES, considered as an extension to per-individual embryo selection like above, suffers from the same weaknesses: presumably the additional steps of inducing pluripotency and re-fertilizing will be complicated & very expensive (especially given that the proposed timelines for a single cycle run 4-6 months) compared to a routine sequencing & implantation, and this makes the costs explode: if the iteration costs $10k extra per cycle and each cycle of embryo selection is only gaining ~1.13 IQ points due to the inherent weakness of polygenic scores, then each cycle may well be a loss, and the entire process colossally expensive. The ability to create large numbers of eggs from stem cells would boost the n but that still runs into diminishing returns and, as shown above, does not drastically change matters. (If one is already spending $10k on IVF and the SNP sequencing for each embryo costs $100, then to get a respectable amount like 1 standard deviation through IES requires (15/1.13) * (100+10000) + 10000 ~> 144070.7965, which at almost $10k a point is far beyond the ability to pay of almost everyone except multi-millionaires or governments who may have other reasons justifying use of the process; see the cost sketch after this list.)
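
A back-of-the-envelope version of that cost argument, wrapping the guesses from the item above ($10k per iteration cycle plus $100 sequencing, ~1.13 IQ points per cycle, $10k base IVF cost) into a hypothetical helper:

iesCost <- function(targetPoints, pointsPerCycle=1.13, costPerCycle=10000+100, baseIVF=10000) {
    cycles <- targetPoints / pointsPerCycle
    total  <- cycles*costPerCycle + baseIVF
    c(Cycles=cycles, Total=total, PerPoint=total/targetPoints) }
iesCost(15)   # 1SD = 15 points: ~13 cycles, ~$144k total, ~$9.6k per IQ point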

So it’s difficult to see when IES will ever be practical or cost-effective as a simple drop-in replacement for embryo selection.

The real value of IES is as a radically different paradigm than embryo selection. Instead of selecting on a few embryos, done separately for each set of parents, IES would instead be a total replacement for the sperm/egg donation industry. This is what Shulman & Bostrom mean by that final line about fixed investment: a single IES program doing selection through dozens of generations might be colossally expensive compared to a single round of embryo selection, but the cost of creating that final generation of enhanced stem cells can then be amortized indefinitely by creating sperm & egg cells and giving them to all parents who need sperm or egg donations. (If it costs $100m and is amortized over only 52k IVF births in the first year, then it costs a mere $2k for what could be gains of many standard deviations on many traits.) The offspring may only be related to one of the parents, but that has proven to be acceptable to many couples in the past opting for egg/sperm donation or adoption; and the expected genetic gain will also be halved, but half of a large gain may still be very large. Sparrow et al 2013 points towards further refinements based on agricultural practices: since we are not expecting the final stem cells to be related to the parents using them for eggs/sperms, we can start with a seed population of stem cells which is maximally diverse and contains as many rare variants as possible, and do multiple selection on it for many generations. (We can even cross IES with other approaches like CRISPR gene editing: CRISPR can be used to target known causal variants to speed things up, or be used to repair any mutations arising from the long culturing or selection process.)

We can say that while IES still looks years away and is not possible or cost-effective at the moment, it definitely has the potential to be a game-changer, and a close eye should be kept on in vitro gametogenesis-related research.

Limits to iterated selection

One might wonder: how many generations of selection could IES do now, considering that the polygenic scores explain only a few percentage points of variance and we’ve already seen that in 1 step of selection we get a small amount? Is it possible that after 2 or 3 rounds of selection, the polygenic score will peter out and one will run out of variance?

No. We can observe that in animal and plant breeding, it is almost never the case that selection on a complex trait gives increases for a few generations and then stops cold - unless it’s a simple trait governed by one or two genes, in which case they might’ve been driven to fixation that fast. In practice, breeding programs can operate for many generations without running out of genetic variation to select on, as the maize oil, domesticated fox, milk cow, or thoroughbred horse racing20 have demonstrated. Paradoxically, the more genes involved and thus the worse our polygenic scores are at a given fraction of heritability, the longer selection can operate and the greater the potential gains.

It’s true that a polygenic score might be able to predict only a small fraction of variance, but this is in part because of the Central Limit Theorem: with thousands of genes with additive effects, they sum up to a tight bell curve, and it’s 5001 steps forward, 4999 steps backwards, which gives little hint as to what would happen if we could take all 10000 steps forward. So the total potential gain has more to do with the heritability vs number of alleles, which makes sense - if a trait is mostly caused by a single gene which half the population already has, we would not expect to be able to make much difference; but if it’s mostly caused by a few dozen genes, then few people will have the maximal value; and if by a few hundred or a few thousand, then probably no one will have ever had the maximal value and the gain could be enormous. Consider a simple binomial model of 10000 alleles with 1/0 unit weights at 50% frequency, explaining 80% of variance; the mean sum will be 10000*0.5=5000 with an SD of sqrt(10000*0.5*0.5)=50; if we observe a population IQ SD of 15, and each +SD is due 80% to having +50 beneficial variants, then each allele is worth ~0.26 points, and then, regardless of any polygenic score we might’ve constructed explaining a few percentage of the 10000 alleles’ influence, the maximal gain over the average person is 0.26*(10000-5000)=1300 points/86SDs. A more realistic model with exponentially distributed weights gives a similar estimate21
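
Spelling out the binomial arithmetic above (a restatement only, using the text’s rounded ~0.26 points per allele):

nAlleles <- 10000; p <- 0.5; h2 <- 0.8; sdIQ <- 15
meanCount <- nAlleles*p                   # 5000 beneficial alleles on average
sdCount   <- sqrt(nAlleles*p*(1-p))       # SD of 50 alleles
sqrt(h2) * sdIQ / sdCount                 # ~0.27 IQ points per allele (rounded to 0.26 in the text)
0.26 * (nAlleles - meanCount)             # ~1300 points of headroom over the average person
0.26 * (nAlleles - meanCount) / sdIQ      # ~86 SDs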

We could also ask what the upper limit is by looking at an existing polygenic score and seeing what it would predict for a hypothetical individual who had the better version of each one. The Rietveld et al 2013 polygenic score for education-years is available and can be adjusted into intelligence, but for clarity I’ll use the Benyamin et al 2014 polygenic score on intelligence (codebook):

benyamin <- read.table("CHIC_Summary_Benyamin2014.txt", header=TRUE)
nrow(benyamin); summary(benyamin)
# [1] 1380158
#          SNP               CHR               BP            A1         A2
#  rs1000000 :      1   chr2   :124324   Min.   :     9795   A:679239   C:604939
#  rs10000010:      1   chr1   :107143   1st Qu.: 34275773   C: 96045   G:699786
#  rs10000012:      1   chr6   :100400   Median : 70967101   T:604874   T: 75433
#  rs10000013:      1   chr3   : 98656   Mean   : 79544497
#  rs1000002 :      1   chr5   : 93732   3rd Qu.:114430446
#  rs10000023:      1   chr4   : 89260   Max.   :245380462
#  (Other)   :1380152   (Other):766643
#     FREQ_A1            EFFECT_A1                  SE                   P
#  Min.   :0.0000000   Min.   :-1.99100e-01   Min.   :0.01260000   Min.   :0.00000361
#  1st Qu.:0.2330000   1st Qu.:-1.12000e-02   1st Qu.:0.01340000   1st Qu.:0.23060000
#  Median :0.4750000   Median : 0.00000e+00   Median :0.01480000   Median :0.48370000
#  Mean   :0.4860482   Mean   : 2.30227e-06   Mean   :0.01699674   Mean   :0.48731746
#  3rd Qu.:0.7330000   3rd Qu.: 1.12000e-02   3rd Qu.:0.01830000   3rd Qu.:0.74040000
#  Max.   :1.0000000   Max.   : 2.00000e-01   Max.   :0.06760000   Max.   :1.00000000

Many of these estimates come with large p-values reflecting the relatively large standard error compared to the unbiased MLE estimate of its average additive effect on IQ points, and are definitely not genome-wide statistically-significant. Does this mean we cannot use them? Of course not! From a Bayesian perspective, many of these SNPs have high posterior probabilities; from a predictive perspective, even the tiny effects are gold because there are so many of them; from a decision perspective, the expected value is still non-zero as on average each will have its predicted effect - selecting on all the 0.05 variants will increase by that many 0.05s etc. (It’s at the extremes that the MLE estimate is biased.)

We can see that over a million have non-zero point-estimates and that the overall distribution of effects looks roughly exponentially distributed. The Benyamin SNP data includes all the SNPs which passed quality-checking, but is not identical to the polygenic score used in the paper as that removed SNPs which were in linkage disequilibrium; leaving such SNPs in leads to double-counting of effects (two SNPs in LD may reflect just 1 SNP’s causal effect). I took the top 1000 SNPs and used SNAP to get a list of SNPs with an r2>0.2 & within 250-KB, which yielded ~1800 correlated SNPs, suggesting that a full pruning would leave around a third of the SNPs, which we can mimic by selecting a third at random.

The sum of effects (corresponding to our imagined population which has been selected on for so many generations that the polygenic score no longer varies because everyone has all the maximal variants) is the thoroughly absurd estimate of ~+7k SD over all SNPs, +5.6k SD filtering down to p<0.5, and +3k adjusting for existing frequencies (going from minimum to maximum); halving for symmetry, that is still thousands of possible SDs:

## simulate removing the 2/3 in LD
benyamin <- benyamin[sample(nrow(benyamin), nrow(benyamin)*0.357),]
sum(abs(benyamin$EFFECT_A1)>0)
# [1] 491497
sum(abs(benyamin$EFFECT_A1))
# [1] 6940.7508
with(benyamin[benyamin$P<0.5,], sum(abs(EFFECT_A1)))
# [1] 5614.1603
with(benyamin[benyamin$P<0.5,], sum(abs(EFFECT_A1)*FREQ_A1))
# [1] 2707.063157
with(benyamin[benyamin$EFFECT_A1>0,], sum(EFFECT_A1*FREQ_A1)) + with(benyamin[benyamin$EFFECT_A1<0,], abs(sum(EFFECT_A1*(1-FREQ_A1))))
# [1] 3475.532912
hist(abs(benyamin$EFFECT_A1), xlab="SNP intelligence estimates (SDs)", main="Benyamin et al 2014 polygenic score")
The betas/effect-sizes of the Benyamin et al 2014 polygenic score for intelligence, illustrating the many thousands of variants available for selection on.

One might wonder: what if we were to start with the genome of someone extremely intelligent, such as a John von Neumann, perhaps cloning cells obtained from grave-robbing the Princeton Cemetery? Would selection or editing then be ineffective because one is starting with such an excellent baseline? Such clones would be equivalent to an identical twin raised apart, sharing 100% of genetics but none of the shared-environment or non shared-environment, and thus the usual ~80% of variance in the clones’ intelligence would be predictable from the original’s intelligence; however, since the donor is chosen for his intelligence, regression to the mean will kick in and the clones will not be as intelligent as the original. How much less? If we suppose von Neumann was 170 (+4.6SDs), then his identical-twin/embryos would regress to the genetic mean of 4.6 \cdot 0.8 = 3.68 SDs, or IQ 155. (His siblings would’ve been lower still than this, of course, as they would only be 50% related even if they did have the same shared-environment.) With <0.2 IQ points per beneficial allele and a genetic contribution of +55, von Neumann would’ve only needed \frac{155-100}{<0.2} = >275 positive variants compared to the average person; but he would still have had thousands of negative variants left for selection to act against. Having gone through the polygenic scores and binomial/gamma models, this conclusion will not come as a surprise: since existing differences in intelligence are driven so much by the effects of thousands of variants, the CLT/standard deviation of a binomial/gamma distribution implies that those differences represent a net difference of only a few extra variants, as almost everyone has, say, 4990 or 5001 or 4970 or 5020 good variants and no one has extremes like 9000 or 3000 variants - even a von Neumann only had slightly better genes than everyone else, probably no more than a few hundred extra. Hence, anyone who does get thousands of extra good variants will be many SDs beyond what we currently see.
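
Restating that regression-to-the-mean arithmetic (using the <0.2-points-per-allele figure from the text):

donorSD <- 4.6; h2 <- 0.8
cloneSD <- donorSD * h2        # 3.68 SDs above the mean
100 + cloneSD*15               # genetic mean IQ of ~155 for the clones
(cloneSD*15) / 0.2             # >275 extra beneficial variants compared to the average person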

The major challenge to IES is how far the polygenic scores will remain valid before breaking down.

Polygenic scores from GWASes draw most of their predictive power not from identifying the exact causal variants, but from identifying SNPs which are correlated with causal variants and can be used to predict their absence or presence. With a standard GWAS and without special measures like fine-mapping, only perhaps 10% of SNPs identified by GWASes will themselves be causal. For the other 90%, since genes are inherited in blocks, a SNP might almost always be inherited along with an unknown causal variant; the SNPs are in linkage disequilibrium (LD) with the causal variants and are said to tag them. However, across many generations, the blocks are gradually broken up by chromosomal recombination and a SNP will gradually lose its correlation with its causal variant; this causes the original polygenic score to lose overall predictive power as more selection power is spent on increasing the frequency of SNPs which no longer tag their causal variant and are simply noise. This is unimportant for single selection steps because a single generation will change LD patterns only slightly, and in normal breeding programs, fresh data will continue to be collected and used to update the GWAS results and maintain the polygenic score’s efficacy while an unchanged polygenic score loses efficacy (eg Neyhart et al 2016 show this in barley simulations); but in an IES program, one doesn’t want to stop every, say, 5 generations and wait a decade for the embryos to grow up and provide fresh data, so the polygenic score’s predictive power will degrade down to that lower bound and the genetic value will hit the corresponding ceiling. (So at a rough guess, a human intelligence GWAS polygenic score would degrade down to ~10% efficacy within 5-10 generations of selection, and the total gains would be upper bounded at 10% of the theoretical limit, so perhaps hundreds of SDs at most.)
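
To illustrate the tagging-decay mechanism with a toy model (an illustration only, not the author’s estimate: LD between a tag SNP and its causal variant decays by a factor of (1-c) each generation of recombination, so the tag’s r^2 - and hence its share of usable score variance - decays as (1-c)^(2t); the recombination fraction c and the 10% causal floor are assumed parameters here):

## fraction of a score's variance still usable after t generations, if 10% comes from
## directly causal SNPs (no decay) and 90% from tag SNPs decaying as (1-c)^(2t):
usableVariance <- function(t, c, causalFraction=0.10) {
    causalFraction + (1 - causalFraction) * (1-c)^(2*t) }
round(usableVariance(c(0, 1, 5, 10, 20), c=0.01), 2)   # tightly-linked tags decay slowly
round(usableVariance(c(0, 1, 5, 10, 20), c=0.10), 2)   # loosely-linked tags fall toward the 10% floor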

Secondly, if all the causal variants were maxed out and driven to fixation, it’s unclear how much gain there would be because variants with additive effects within the normal human range may become non-additive beyond that. Thousands of SDs is meaningless, since intelligence reflects neurobiological traits like nerve conduction velocity, brain size, white matter integrity, metabolic demands etc, all of which must have inherent biological limits (although considerations from the scaling of the primate brain architecture suggest that the human brain could be increased substantially, similar to the increase from Australopithecus to humans, before gains disappear; see Hofman 2015); so while it’s reasonable to talk about boosting to 5-10SDs based on additive variants, beyond that there’s no reason to expect additivity to hold. Since the polygenic score only becomes uninformative hundreds of SDs beyond where other issues will take over, we can safely say that the polygenic scores will not run dry during an IES project, much less normal embryo selection - additivity will run out before the polygenic score’s information does.

See also

Appendix

IQ/income bibliography

The Genius Factory, Plotz 2005

Excerpts from The Genius Factory: The Curious History of the Nobel Prize Sperm Bank, Plotz 2005 (eISBN: 978-1-58836-470-8), about the Repository for Germinal Choice:

I asked Beth why she called. She said she wanted to dispel the notion that the women who went to the genius sperm bank were crazies seeking über-children. She told me she had gone to the Repository not because she wanted a genius baby but because she wanted a healthy one. The Repository was the only bank that would tell her the donor’s health history. She had picked Donor White. Her daughter Joy, she said, was just what she had hoped for, a healthy, sweet, warm little girl. (That’s why Beth asked me to call her daughter Joy.) My daughter is not a little Nazi. She’s just a lovely, happy girl. She described Joy to me, how she loved horseback riding and Harry Potter. She read me a note from Joy’s teacher: Wow, it is a pleasure to have her smiling face and interest in the classroom.…Beth was so desperate to conceive that she quit her job for one with better health insurance. After six months of failure, she gave up on regular insemination. She spent almost all her savings on in vitro fertilization, trying to have a test-tube baby with Donor White’s sperm. This was 1989, when IVF success rates were very low and the cost was very high. But the pregnancy took.

…More important, Graham had learned that his customers didn’t share his enthusiasm for brainiacs. The Nobelists had afflicted Graham with three problems he hadn’t anticipated: first, there were too few of them to meet the demand; second, they were too old, which raised the risk of genetic abnormalities and cut their sperm counts (a key reason why their seed didn’t get anyone pregnant); third, they were too eggheaded. Even the customers of the Nobel sperm bank sought more than just big brains from their donors. Sure, sometimes his applicants asked how smart a donor was. But they usually asked how good-looking he was. And they always asked how tall he was. Nobody, Graham saw, ever chose the short sperm. Graham realized he could make a virtue of necessity. He could take advantage of his Nobel drought to shed what he called the bank’s little bald professor reputation. Graham began to hunt for Renaissance men instead - donors who were younger, taller, and better - looking than the laureates. Those Nobelists, he would say scornfully, they could never win a basketball game.

…After Mary mentioned the divorce, I told the Legares about one of the odd things I had noticed in my reporting on the genius sperm bank: in most of the two dozen families I had dealt with, the father was notably absent from family life. I knew I had a skewed sample: divorced mothers tended to contact me because they were more open about their secret - not needing to protect the father anymore - and because they were seeking new relatives for their kids. I had heard from only a couple of intact families with attentive dads. While good studies on DI families don’t seem to exist (at least I have not found them), anecdotes about them suggest that there is frequently a gap between fathers and their putative children. Social fathers - the industry term for the nonbiological dads - have it tough, I told the Legares. They are drained by having to pretend that children are theirs when they aren’t; it takes a good actor and an extraordinary man to overlook the fact that his wife has picked another man to father his child. It’s no wonder that the paternal bond can be hard to maintain. When a couple adopts a child, both parents share a genetic distance from the kid. But in DI families, the relationships tend to be asymmetric: the genetically connected mothers are close to their kids, the unconnected fathers are distant. I suspected that the Nobel sperm bank had exaggerated this asymmetry, since donors had been chosen because mothers thought they were better than their husbands - Nobelists, Olympians, men at the top of their field, men with no health blemishes, with good looks, with high IQs. Of course sterile, disappointed husbands would have a hard time competing with all that. Robert Graham had miscalculated human nature. He had assumed that sterile husbands would be eager to have their wives impregnated with great sperm donors, that they would think more about their children than their own egos. But they weren’t all eager, of course. How could they have been eager? Some were angry at themselves (for their infertility), their wives (for seeking a genius sperm donor), and their kids (for being not quite their kids). Graham had limited his genius sperm to married couples in the belief that such families would be stronger, because the husbands would be so supportive. In fact, Graham’s brilliant sperm may have had the opposite effect; I told the Legares about a mom I knew who said the Repository had broken up her marriage. Her husband had felt as though he couldn’t compete with the donor and had walked out.

…Fairfax Cryobank was located beyond the Washington Beltway in The Land of Wretched Office Parks. The cryobank was housed in the dreariest of all office developments….She asked me where I had gone to college. I said Harvard. She was delighted. She continued, And have you done some graduate work? I said no. She looked disappointed. But surely you are planning to do some graduate work? Again I said no. She was deflated and told me why. Fairfax has something it calls - I’m not kidding - its doctorate program. For a premium, mothers can buy sperm from donors who have doctoral degrees or are pursuing them. What counts as a doctor? I asked. Medicine, dentistry, pharmacy, optometry, law (lawyers are doctors? yes - the juris doctorate), and chiropractic. Don’t say you weren’t warned: your premium doctoral sperm may have come from a student chiropractor.

…But, immoral or not, AID was real, and it was useful, because it was the first effective fertility treatment. AID established the moral arc that all fertility treatments since - egg donation, in vitro fertilization, sex selection, surrogacy - have followed.

  1. First, Denial: This is physically impossible.
  2. Then Revulsion: This is an outrage against God and nature.
  3. Then Silent Tolerance: You can do it, but please don’t talk about it.
  4. Finally, Popular Embrace: Do it, talk about it, brag about it. You are having test-tube triplets carried by a surrogate? So am I!

…Robert Graham strolled into the world of dictatorial doctors and cowed patients and accidentally launched a revolution. The difference between Robert Graham and everyone else doing sperm banking in 1980 was that Robert Graham had built a $70 million company. He had sold eyeglasses, store to store. He had developed marketing plans, written ad copy, closed deals. So when he opened the Nobel Prize sperm bank in 1980, he listened to his customers. All he wanted to do was propagate genius. But he knew that his grand experiment would flop unless women wanted to shop with him. What made people buy at the supermarket? Brand names. Appealing advertising. Endorsements. What would make women buy at the sperm market? The very same things.

So Graham did what no one in the business had ever done: he marketed his men. Graham’s catalog did for sperm what Sears, Roebuck did for housewares. His Repository catalog was very spare - just a few photocopied sheets and a cover page - but it thrilled his customers. Women who saw it realized, for the first time, that they had a genuine choice. Graham couldn’t guarantee his product, of course, but he came close: he vouched that all donors were men of outstanding accomplishment, fine appearance, sound health, and exceptional freedom from genetic impairment. (Graham put his men through so much testing and paperwork that it annoyed them: Nobel Prize winner Kary Mullis said he had rejected Graham’s invitation because he’d thought that by the time he was done with the red tape, he wouldn’t have any energy left to masturbate.)…Thanks to its attentiveness to consumers, the Repository upended the hierarchy of the fertility industry. Before the Repository, fertility doctors had ordered, women had accepted. Graham cut the doctors out of the loop and sold directly to the consumer. Graham disapproved of the women’s movement and even banned unmarried women from using his bank, yet he became an inadvertent feminist pioneer. Women were entranced. Mother after mother said the same thing to me: she had picked the Repository because it was the only place that let her select what she wanted.

…Unlike most other sperm bankers, Broder acknowledges his debt to Graham. When the Nobel sperm bank opened in 1980, Broder said, it changed everything. At the time, the California Cryobank had one line about a donor: height, weight, eye color, blood typing, ethnic group, college major. But when we saw what Graham was doing, how much information about the donor he put on a single page, we decided to do the same. Other sperm banks, recognizing that they were in a consumer business, were soon publicizing their ultrahigh safety standards, rigorous testing of donors, and choice, choice, choice. This is the model that guides all sperm banks today.

…With two hundred-plus men available, California Cryobank probably has the world’s largest selection. It dwarfs the Repository, which never had more than a dozen donors at once. California Cryobank produces more pregnancies in a single month than the Repository did in nineteen years. Other sperm banks range from 150-plus donors to only half a dozen. In the basic catalog, donors are coded by ethnicity, blood type, hair color and texture, eye color, and college major or occupation. Searching for an Armenian international businessman? How about Mr. 3291? Or an Italian-French filmmaker, your own little Truffaullini? Try Mr. 5269. But the basic catalog is just a start. For $12, you can see the long profile of any donor - his twenty-six-page handwritten application. Fifteen bucks more gets you the results of a psychological test called the Keirsey Temperament Sorter. Another $25 buys a baby photo. Yet another $25, and you can listen to an audio interview. Still more, and you can read the notes that Cryobank staff members took when they met the donor. For $50, a bank employee will even select the donor who looks most like your husband. …To get a sense of what this man-shopping feels like, I asked Broder if I could see a complete donor package. Broder gave me the entire folder for Donor 3498. I began with the baby photo. In it, 3498 was dark blond and cute, arms flung open to the world. At the bottom, where a parent would write, Jimmy at his second birthday party, the Cryobank had printed, 3498. I leafed through 3498’s handwritten application. His writing was fast and messy. He was twenty-six years old, of Spanish and English descent. His eyes were blue-gray, hair brown, blood B-positive. He was tall, of course. (California Cryobank rarely accepts anyone under five feet, nine inches tall.) Donor 3498 had been a college philosophy major, with a 3.5 GPA, and he had earned a Master of Fine Arts graduate degree. He spoke basic Thai. I was a national youth chess champion, and I have written a novel. His favorite food was pasta. He worked as a freelance journalist (I wondered if I knew him). He said his favorite color was black, wryly adding, which I am told is technically not a color. He described himself as highly self-motivated, obsessive about writing and learning and travel. . . . My greatest flaw is impatience. His life goal was to become a famous novelist. His SAT scores were 1270, but he noted that he got that score when he was only twelve years old, the only time he took the test. He suffered from hay fever; his dad had high blood pressure. Otherwise, the family had no serious health problems. Both parents were lawyers. His mom was assertive, controlling, and optimistic; his dad was assertive and easygoing. I checked 3498’s Keirsey Temperament Sorter. He was classified as an idealist and a Champion. Champions see life as an exciting drama, pregnant with possibilities for both good and evil. . . . Fiercely individualistic, Champions strive toward a kind of personal authenticity. . . . Champions are positive exuberant people. I played 3498’s audio interview. He sounded serious, intense, extremely smart. I could hear that he clicked his lips together before every sentence. He clearly loved his sister - a pretty amazing, vivacious woman - but didn’t think much of his younger brother, whom he dismissed as less serious. He did indeed seem to be an idealist: I’d like to be involved in the establishment of an alternative living community, one that is agriculturally oriented. 
By then I felt I knew 3498, and that was the point. I knew more about him than I had known about most girls I dated in high school and college. I knew more about his health than I knew about my wife’s or even my own. Unfortunately, I didn’t really like him. His seriousness seemed oppressive: I disliked the way he put down his brother. He sounded rigid and chilly. If I were shopping for a husband, he wouldn’t be it, and if I were shopping for a sperm donor, he wouldn’t be it, either. And that was fine. I thought about it in economic terms: If I were a customer, I would have dropped only a hundred bucks on 3498, which is no more than a couple of cheap dates. I could go right back to the catalog and find someone better. One of the implications of 3498’s huge file - one that banks themselves hate to admit - is that all sperm banks have become eugenic sperm banks. When the Nobel Prize sperm bank disappeared, it left no void, because other banks have become as elitist as it ever was. Once the customer, not the doctor, started picking the donor, banks had to raise their standards, providing the most desirable men possible and imposing the most stringent health requirements. The consumer revolution also changed sperm banking in ways that Robert Graham would have grumbled about. Graham limited his customers to wives, but married couples have less need to resort to donor sperm these days. Vasectomies are often reversible, and a treatment called ICSI can harvest a single sperm cell from the testes and use it to fertilize an in vitro egg. …That means that lesbians and single mothers increasingly drive sperm banking. They now make up 40% of the customers at California Cryobank and 75% at some other banks. Their prevalence is altering how sperm banks treat confidentiality. Lesbians and single mothers can’t deceive their children about their origins, so they don’t. They tell their kids the truth. As a result, they’re clamoring for ever more information about the donors to pass on to their kids. Increasingly, they are even demanding that sperm banks open their records so that children can learn the name of their donor. (Lesbians and single moms have also pioneered the practice of known donors, in which they recruit a sperm provider from among their friends. The known donor, so nice in theory, can be a legal nightmare: known donors, unlike anonymous donors, don’t automatically shed their paternal obligations. The state still considers them legal fathers. So mothers and donors have to write elaborate contracts to try to eliminate those rights.)

…From the beginning, sperm banking had a comic aspect to it. In July 1976, a prankster named Joey Skaggs announced that he would be auctioning rock star sperm from his Celebrity Sperm Bank in Greenwich Village. We’ll have sperm from the likes of Mick Jagger, Bob Dylan, John Lennon, Paul McCartney, and vintage sperm from Jimi Hendrix, he declared. On the morning of the auction, Skaggs and his lawyer appeared to announce that the sperm had been kidnapped. They read a ransom note: Caught you with your pants down. A sperm in the hand is worth a million in a Swiss bank. And that’s what it will cost you. More to cum. [signed] Abbie. Hundreds of women called the nonexistent sperm bank asking if they could buy; radio and TV shows reported the aborted auction without realizing it had been a joke. And at the end of the year, Gloria Steinem - presumably unaware that it had been a hoax - appeared on an NBC special to give the Celebrity Sperm Bank an award for bad taste.

…The Repository sustained its popularity during the early and mid-1990s. The waiting list reached eighteen months, because there were never enough donors. Usually, Anita could supply only fifteen women at a time with sperm. California Cryobank, by contrast, could supply hundreds of customers at once. Demand at the Repository remained strong even when Graham started charging for sperm. In the mid-1990s, the bank collected a $3,500 flat fee per client, a lot more than other banks. Ever the economic rationalist, Graham had concluded that customers would value his product more if they had to pay for it…Neff wasn’t nostalgic when she recounted the end of the bank. Sperm banking will be a blip in history, she said. The Nobel sperm bank, she implied, would be a blip on that blip. And in some ways, she is clearly right. The Repository for Germinal Choice pioneered sperm banking but ended up in a fertility cul-de-sac. Other sperm banks took Graham’s best ideas - donor choice, donor testing, and high-achieving donors - and did them better. They offered more choice, more testing, more men. And they managed to do so without Graham’s peculiar eugenics theories, implicit racism, and distaste for single women and lesbians. The Repository died because no one needed it anymore.


  1. The Repository for Germinal Choice sperm bank (1980-1999) was an American sperm bank which advertised that it used only men with high IQs, preferably Nobel laureates or men with other major accomplishments. Media coverage, including the investigative book The Genius Factory: The Curious History of the Nobel Prize Sperm Bank (based on the Slate articles), emphasized the idea that it would create genius kids.

    The problem with this idea of manufacturing geniuses with (just) sperm donation immediately appears: even if you falsely believed that genetics 100% determined winning a Nobel, and used a Nobelist’s sperm, the kids would only be ~50% genetically related to him because of the relatively average mother! Any sperm donation procedure will be a disappointment if one expects all the children to be the equal of the donor, since even a clone of the donor would not be as intelligent nor as accomplished (due to regression to the mean). A more realistic analysis: assume the mother has an average IQ, the Nobelist has an IQ of ~140 (roughly consistent with Anne Roe’s results), and the usual broad-sense heritability is 0.8; then the Nobelist could be expected to have a genetic IQ of 132 (100+40*0.8), and as the offspring are half-related to him, the predicted mean adult IQ of such sperm-bank offspring would be 116, implying that perhaps 5% would reach the Nobelist’s IQ and of course far fewer would actually win a Nobel or anything like that. The mothers are likely above-average themselves, which does not much change the marginal benefit of the donor but would raise the kids’ IQs accordingly (e.g. if the mothers averaged IQ 110, the offspring would average ~120 instead of 116).
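
    A minimal sketch of that arithmetic (the 0.8 heritability, IQ-140 donor, IQ-100 mother, and ~15-point offspring SD are the assumptions stated above):

    h2      <- 0.8                     # assumed broad-sense heritability
    donor   <- 140                     # assumed Nobelist IQ
    donor_g <- 100 + (donor - 100)*h2  # donor's expected genetic IQ: 132
    (donor_g + 100) / 2                # predicted mean offspring IQ with an average (IQ 100) mother
    # [1] 116
    ## fraction of offspring expected to reach the donor's IQ of 140,
    ## assuming the offspring phenotypic SD is still ~15:
    pnorm(140, mean=116, sd=15, lower.tail=FALSE)
    # [1] 0.05479929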

    The sperm bank was unable to live up to its advertising of Nobelists for several reasons (criticism of its eugenic policies and the inherent old age & ensuing low sperm quality of actual Nobel laureates being major problems); the donors Slate was able to reach were described as follows:

    Seven men recruited by Graham contacted me. Of them, five donated successfully. Graham dropped the other two men for unspecified medical reasons. The five successful donors seem to account for about 30 of the 215 kids born to the repository…The Slate sample reflects Graham’s constrained ambitions. The Slate Seven were bright but not Olympian when Graham tapped them. Two were child prodigies who had earned advanced scientific degrees at precocious ages. Two were promising graduate students. One was a rising businessman. (See The Entrepreneur Speaks.) Another was a political activist who shared Graham’s conservative views. One counseled troubled kids. They were impressive, but certainly not the most celebrated and accomplished men of the age…I can solve relatively complex problems. If there is a better chance that offspring of mine will be able to solve problems, that’s a good thing. So I was happy to help parents, says a donor who is now a professor. I like the idea of producing more intelligent people. After all, if you could produce one person who could change the world as much as Shockley did, that would be worth it.… What are they now? There are no Nobels and no criminals. All of them seem smart and engaged in the world. Most write a good e-mail and talk a good game on the phone. Two are quite prominent. The rising young businessman became a fabulously successful middle-aged businessman. The emerging political activist has become a semi-famous, sometimes controversial political activist. The two promising graduate students are now junior professors at decent universities. One of the prodigies has retired from a successful career in the intelligence trade to do consulting and muck about with high I.Q. organizations (groups like Mensa, but higher I.Q.’s required). The Average Guy has returned to grad school, where he’s finishing a degree in environmental policy. Most of the Slate Seven remain connected to hard science, which would please Graham, who valued science and scorned just about everything else. The second child prodigy, who has abandoned hard science, has transformed most radically. He donated in the early ’80s when he was a math whiz. Today he writes, In many respects I feel I am a failure. The closest I have come to conventional success was when I made my living writing term papers for rich kids at Columbia, NYU, etc. But I don’t think he really feels like a failure: He has just discarded the notion that intelligence, especially analytical intelligence, is an important measure of life. He has abandoned math and academia to become an artisan. I have gone from being an intellectual whore to … I dunno what … I will never win a Nobel Prize, but I don’t care. I will never make any great contribution to science. No matter. I have come to terms with myself and who I am. This is the best part of growing old.

    The donors are all well above average in intelligence and talent (including the so-called Average Guy), but from this description I would hesitate to guess a mean IQ greater than the 130s, which implies an expected IQ boost to the offspring of ~12 points. How had the kids done by the time Slate found them?

    Is Jon what Graham dreamed of when he built his genius sperm bank? Jon doesn’t adore school, but he’s still going to graduate a year early. He’s pretty good at math but not at science. He favors history and English. He likes music, which in his case means rap. (He’s writing lyrics for a group that he started with some friends.) He says learning about his genetic head-start has made him concentrate a bit more on school work. Before I thought I didn’t have the potential. Now I think I have got the potential and that I’m just lazy, he says, half-joking…Jon’s biography is echoed in the other repository kids Slate located. They show very much promise, but they are very much children. I have interviewed nine families with 15 children conceived through the repository. (I have also corresponded some with three other families that have four kids and e-mailed cursorily with another child.) These 15-or, counting the brief contacts, 20-kids are a fraction of the entire repository crop of 219 kids… The Slate 15 range in age from 6 to 19, with most falling between 10 and 16. The group consists of eight boys and seven girls. The 15 represent eight different donors, but there is a bizarre bias toward one donor…. I have no idea how the Slate 15 compare to the entire repository group. I also have no way to test these kids for mental acuity or IQ or anything else. What I gathered is anecdote, not data. So have the super-babies grown into superkids? The Slate 15 seem to be an accomplished bunch. Half a dozen parents credit their kids with 4.0 GPAs. Five parents told me that their kids tested at the top of their school and that their school was the best in the area. Are they prodigies? That’s harder to know. Doron Blake was touted as a prodigy as a kid: He has grown up to be a very smart but not supernatural college student. The two teenage girls in the Ramm family - the only other family besides the Blakes that is public - are artistically precocious: one an outstanding singer, the other an outstanding dancer. A 14-year-old out West, Sam, is touted by his parents as a math-science genius with Olympic potential in skiing. A 14-year-old in California, Gage, is trading stocks and researching international business at a precocious age. Another teenager in California, Jacob, is a musical whiz who is already studying quantum theory. There’s a curious difference between how parents describe sons and daughters. The Slate 15 includes a cluster of five girls between 10 and 13. Their parents give them a very different kind of rave review than the boys’ parents do. The girls’ parents marvel that their daughters are wonderful yet normal. All are socially well-adjusted, athletic, and enthusiastic, and all are excellent students. They are, as one mom puts it about her daughters, Renaissance kids.…Do the children resemble their genetic fathers? Three offspring of Olympic gold medalist Donor Fuchsia are reportedly amazing athletes. Gage shares a love of economics with his donor. Several of the science/math enthusiasts were fathered by science/math professors. Three moms who explicitly chose happy donors report that their kids have sunny personalities. All the Slate 15 are in good health, except one. The Ramm’s 9-year-old son Logan -a most happy, wonderful boy, says his mother Adrienne - has a developmental disability…The 219 repository kids may grow up to be the essential men and women of the land. They may not. Many have made a stellar start, but they haven’t arrived yet. Graham’s question goes unanswered.

    Overall, since the sample is probably selected towards the more successful of the 219 kids, their giftedness sounds consistent with what one would predict purely from the heritabilities & mean IQs: well above average, bright and gifted, many at the tops of their classes or schools - but few prodigies or future Nobelists. The method works; it is the expectations that were unrealistic.

  2. Brooks-Gunn et al 2009 doesn’t report those specific figures; they are inferred from pg27, Table A1.2, Present value for lifetime earnings of individuals with a high school degree or some college, estimated at birth, which is based on the 2006 March Current Population Survey sample with the average level of education. The estimated net present value of lifetime income in the worst case (4% discounting, 0% wage growth) is $278,430, and in the best case (2% discounting, 2% wage growth), $1,374,375. Using the earlier estimate that 1 IQ point = +1% annual wages, 1 IQ point then translates on average to a NPV gain of roughly $2,784 or $13,744 respectively.
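
    For reference, the arithmetic (a sketch; the two NPV figures are the Table A1.2 values just quoted):

    npv_low  <- 278430           # 4% discounting, 0% wage growth
    npv_high <- 1374375          # 2% discounting, 2% wage growth
    c(npv_low, npv_high) * 0.01  # NPV of 1 IQ point at +1% annual wages
    # [1]  2784.30 13743.75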

  3. More specifically, combining the College/Adult IQ/Child IQ polygenic scores and including the non-statistically-significant ones, I get out of Figure 1:

    1. Better: g / Raven’s Progressive Matrices / Mill Hill Vocabulary / GCSE English / GCSE maths / GCSE science / PISA Attitudes to School / PISA homework Behaviour / PISA maths self-efficacy / PISA maths interest / PISA time spent on maths / Academic self-concept / Agreeableness / Conscientiousness / Extraversion / Neuroticism / Openness / Curiosity Flow / Chaos at Home / Attachment total / SDQ Total (Strengths and Difficulties Questionnaire; behavioral problems) / SDQ Conduct / SDQ Hyperactivity / SDQ Prosocial / Peer Victimization / Conners: Inattention / Conners: Impulsivity / Autism Quotient: Attention Switching / Autism Quotient: Imagination / Autism Quotient: Attention to Detail / Callous Unemotional Traits / ARBQ Anxiety / SANS (negative symptoms) / Grandiosity / Cognitive Disorganization / BMI / Height
    2. Neutral: PISA Homework total / Menarche / Puberty
    3. Worse: PISA Homework Feedback [?] / PISA time spent on maths [?] / GRIT / Curiosity Flow / Parental Control / Autism Quotient: Attention Switching / Grandiosity / Paranoid / Sleep total / Insomnia

    (In a few cases it’s not clear how to interpret the sign: is less time spent on math a good or a bad thing, given that the PISA & GCSE math scores are still higher?) The most notable exception to the trend of intelligence positively correlating with other desirable traits & inversely correlating with undesirable traits is the correlation of intelligence with autism in both analyses; this is interesting because phenotypically, autism is usually associated with much lower average intelligence (with the exception of high-functioning autists), and because the causes of many cases of autism have been traced to de novo mutations with catastrophic effects on the mind. I speculate that this may reflect a tradeoff in canalization, similar to X-linked intellectual disability, where common variants which cause higher intelligence do so at the cost of lower robustness and greater vulnerability to the environment/developmental problems/mutations.

  4. Hagenaars focused more on medical & psychiatric issues; breaking it down similarly across the 4 measures of cognitive functioning:

    1. Better: Stroke: Cardioembolic / ADHD / Major depressive disorder / Blood pressure: diastolic / Blood pressure: systolic
    2. Mixed: Coronary artery disease / Stroke: ischaemic / Stroke: large vessel disease / Type 2 diabetes / Alzheimer’s disease / Autism / Bipolar disorder / Schizophrenia / Hippocampal volume / Intracranial volume / Infant head circumference / BMI / Height / Longevity / FEV1
    3. Worse: none
  5. Why not sperm or eggs? Why do we have to select on embryos instead? While sperm and eggs do vary genetically, they vary mostly in which member of each chromosome pair they happen to inherit from the parent’s body, so a single gamete carries less genetic variance than an embryo does after fertilization; selecting only on eggs or only on sperm leaves out half the genetic variance, substantially weakening prediction; and finally, as far as I know, all feasible genetic sequencing methods are destructive & require at least 1 cell, so a sequenced gamete could not be used afterwards anyway. So embryo selection is the only feasible approach.
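
    A toy simulation of the variance point (a sketch, not from the text: it assumes a purely additive standardized genetic score in which each gamete contributes half the embryo’s variance, and compares selecting the best of 10 embryos with selecting the best of 10 eggs fertilized by random sperm):

    n <- 10; iter <- 20000
    egg    <- matrix(rnorm(n*iter, 0, sqrt(0.5)), ncol=n)  # maternal gamete scores
    sperm  <- matrix(rnorm(n*iter, 0, sqrt(0.5)), ncol=n)  # paternal gamete scores
    embryo <- egg + sperm                                   # embryo scores, SD ~1
    mean(apply(embryo, 1, max))  # select the best of 10 embryos
    # ~1.54 (E[max of 10 standard normals] ~ 1.539)
    mean(apply(egg, 1, max))     # select the best of 10 eggs; the random sperm adds nothing on average
    # ~1.09 = 1.539 * sqrt(0.5), ie. only ~71% of the embryo-selection gain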

  6. The supplementary information for the UMN/University of Minnesota Study says (pg15) that the UMN sample combines 3 samples, 2 of them twin samples, and that IQs were available for 3376 offspring, 2909 of whom were twins.

  7. Overall & specific SNP heritabilities from Ge et al 2016:

    ## http://biorxiv.org/content/early/2016/08/18/070177
    ## http://biorxiv.org/highwire/filestream/19425/field_highwire_adjunct_files/0/070177-1.xlsx
    ge <- read.csv("http://www.gwern.net/docs/genetics/2016-ge-ukbiobank-snpheritability.csv")
    subset(ge[ge$Trait.Domain=="Cognitive Function",], select=c("Category.Name", "h2", "SE"))
    #              Category.Name     h2     SE
    # 49 Fluid intelligence test 0.2327 0.0115
    # 50     Numeric memory test 0.1515 0.0349
    # 51     Pairs matching test 0.0643 0.0038
    # 52 Prospective memory test 0.1101 0.0228
    # 53      Reaction time test 0.0732 0.0038
    subset(ge[ge$Trait.Domain=="Sociodemographics",], select=c("Category.Name", "h2", "SE"))
    #                      Category.Name     h2     SE
    # 250                      Education 0.2939 0.0066
    # 251                      Household 0.0358 0.0059
    # 252                      Household 0.0904 0.0068
    # 253 Other sociodemographic factors 0.0709 0.0199
    # 269       Baseline characteristics 0.0662 0.0038
    library(metafor)
    rem <- rma(measure="SMD", yi=h2, sei=SE, data=ge); rem
    # ...Random-Effects Model (k = 269; tau^2 estimator: REML)
    #
    # tau^2 (estimated amount of total heterogeneity): 0.0167 (SE = 0.0015)
    # tau (square root of estimated tau^2 value):      0.1293
    # I^2 (total heterogeneity / total variability):   99.72%
    # H^2 (total variability / sampling variability):  356.64
    #
    # Test for Heterogeneity:
    # Q(df = 268) = 82514.3218, p-val < .0001
    #
    # Model Results:
    #
    # estimate       se     zval     pval    ci.lb    ci.ub
    #   0.1559   0.0080  19.5261   <.0001   0.1402   0.1715
  8. We could consider a hypothetical trip to China to make use of fertility clinics there. (Or potentially South Korea, whose fertility clinics are close to US levels of expertise and benefit from state subsidies intended to boost the national birthrate.) How much extra would it cost?

    Costs:

    • US passport costs ~$110
    • a Chinese visa costs $140
    • a Shanghai to San Francisco round-trip flight prices out currently in the range $500-$1000 (lower than I expected, perhaps due to the 2015 oil glut)
    • IVF is a long and involved process, so one should budget for at least 3 months’ residence: Lonely Planet suggests that even on a shoestring budget one needs >$30/day, so at least 3*31*30=$2790.
    • IVF is also unpleasant enough that it would be wrong to consider those 3 months a vacation, and one has to consider the impact of being away from one’s job for such an extended period of time (as remote working is not an option for most people & the Great Firewall would impede it as well); since 2013 US per capita income was ~$53.8k, those 3 months would cost on average ~$13.4k in foregone income
    • Summing up: 110+140+750+2790+13400 ≈ $17.2k (taking ~$750 as the midpoint of the flight price range).

    On the plus side:

    • Chinese IVF fertility clinics are presumably cheaper as well as more willing to do PGD. A 2011 NPR article quotes a Chinese fertility clinic as offering 1 cycle at $4.5k, which is around half of what a US clinic might charge (this halving likely also holds for clinics in South Korea, Taiwan, Thailand etc); so the total penalty of going to China depends on how many cycles are necessary. For 1 cycle, it’s ~$10k in the USA vs $17.2k+$4.5k in China, a loss of about -$11.7k; for 2, -$6.2k; for 3, -$0.7k; for 4, +$4.8k, and so on.

      With IVF success rates of <41% per cycle, the number of cycles follows roughly a geometric/negative binomial distribution: more often than not, 2 or more cycles will be required, and often 3 or 4, to get 1 live birth (after which the couple stops) - as sketched below. One issue is that if a woman leaves as soon as pregnancy is confirmed with a fetal heartbeat, there may still be a miscarriage and she would have to decide whether to fly back to China and start another cycle, while if she stays for the full pregnancy, that triples the food/hotel costs.
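
      A compact version of that arithmetic (a sketch; the ~$10k per US cycle and the ~$750 flight midpoint are assumptions taken from the figures above):

      us_cycle    <- 10000                   # assumed cost per US IVF cycle
      china_cycle <- 4500                    # quoted Chinese per-cycle price
      china_fixed <- 110+140+750+2790+13400  # passport, visa, flight, lodging, foregone income
      cycles <- 1:5
      (china_fixed + china_cycle*cycles) - us_cycle*cycles  # extra cost of China (negative = savings)
      # [1]  11690   6190    690  -4810 -10310
      1/0.41  # expected cycles to a live birth at ~41% success per cycle (geometric distribution)
      # [1] 2.439024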

  9. Quality-wise, Chinese fertility clinics may be behind Western clinics, but we can expect them to rapidly improve: the Chinese population is aging and becoming more affluent, the one-child policy is dead, and the Chinese government has made biology & genetic engineering a national strategic priority and bankrolled R&D and institutes like the Beijing Genomics Institute. To quote one article:

    Dozens, if not hundreds, of Chinese institutions in both research hubs like Beijing and far-flung provincial outposts have enthusiastically deployed CRISPR. It’s a priority area for the Chinese Academy of Sciences, says Minhua Hu, a geneticist at the Guangzhou General Pharmaceutical Research Institute and one of the beagle researchers. A colleague, Liangxue Lai of the Guangzhou Institutes of Biomedicine and Health, adds that China’s government has allocated a lot of financial support in genetically modified animals in both [the] agriculture field [and the] biomedicine field.…The level and sophistication of work in China using CRISPR is already about the same as in Europe and the U.S., says George Church, a professor of genetics at Harvard Medical School. An analysis by Thomson Innovation, a division of London-based Thomson Reuters, found that more than 50 Chinese research institutions have filed gene-editing patents.

    Bloomberg News:

    Chinese scientists say they were among the first in using Crispr to make wheat resistant to a common fungal disease, dogs more muscular and pigs with leaner meat…Programs funded by Beijing are, among other things, working on disease-resistant tomatoes, breast cancer treatments and increasing the oil content in soy beans…I would rank the U.S. and China as first and second Crispr-Cas9 research countries, respectively, at this time. Both countries have much strength in this area, said Paul Knoepfler…The U.S. currently gets the edge in high-profile papers, Crispr biotech and intellectual property. China has published a lot in Crispr animals.…Most of China’s funding for Crispr research is coming from the government, with very few private companies putting money into gene modification work, said Lai Liangxue, deputy director of the Southern China Institute of Stem Cell Biology and Regenerative Medicine. Whether it’s animal or plant, our country has special funds for this aspect of work. Last year, the National Natural Science Foundation of China, a prominent government backed institution that funds research, awarded more than 23 million yuan ($3.5 million) to at least 42 Crispr projects, more than double the previous year. It is just one of several government institutions providing Crispr funding in China. China is also aided by a large pool of internationally trained scientists, many of whom have returned home after working overseas…Crispr has already boosted a new industry in China that supplies genetically edited animals to foreign research labs and pharmaceutical companies. Researchers in the U.S. and China also see its potential in agriculture – to potentially create disease resistant grains or better quality meat. Raising a genetically engineered pig with Crispr technology is already cheaper in China at about 700,000 yuan, while in the U.S. it could cost four or five times as much, said Lai. Labor and other costs are lower in China.

    On future autism work:

    Next, the researchers plan to use the CRISPR gene-editing technique to knock out the extra MECP2 copies in cells in those regions and then check whether the autism-like symptoms stop…If non-human primates prove to be a useful model for psychiatric disorders, China and other countries that are investing heavily in research on monkeys, such as Japan, could gain an edge in brain research. Muotri says that such studies probably wouldn’t be done in the United States, for example, where research on non-human primates is more expensive and controversial than it is in Japan or China. China and Japan have a clear advantage over the US on this area, he says.

    They are expected to approve agricultural-related modifications before the USA; Nature:

    Kim hopes to market the edited pig sperm to farmers in China, where demand for pork is on the rise. The regulatory climate there may favour his plan. China is investing heavily in gene editing and historically has a lax regulatory system, says Ishii. Regulators will be cautious, he says, but some might exempt genetic engineering that does not involve gene transfer from strict regulations. I think China will go first, says Kim.

    So overall, I think we can say that on average, doing IVF in China is not as financially costly as it seems up front and could even be profitable. Which leaves the more psychological issues of trust and planning and willingness to go overseas, which are hard to evaluate in the absence of any present incentive to do Chinese genetic-engineering birth tourism, but medical tourism and birth tourism are both already phenomena.

  10. See starting 2:15: This was a huge project. You take it for granted now but it was almost unimaginable a year or two ago…So the array that I will come to actually came in at about £32 for the array and the genotyping per person. An astonishingly low sum compared to what it would have been three years ago. 3:55: The economies of scale are astonishing. This was a bespoke new array, I’ll explain more in subsequent slides, and the fact that the order was coming in for half a million arrays drove the price down below anything you might have expected…The industrial providers really worked hard for something of this scale, very competitive tenders came in from different providers and the one that was selected was unsurprisingly the same for both, Affymetrix. Which I might not have expected but they made a truly impressive effort.

  11. Some of Roe’s relevant papers

  12. Plotz 2005: From the beginning, the Repository had more applicants than sperm - a state that would persist until it closed nineteen years later. Every sperm donation produced only a handful of usable semen straws. But each client needed a couple of straws every month, and it usually took several months to get pregnant. Graham couldn’t sign up donors fast enough and couldn’t get them to donate often enough to meet the demand. They were busy men, and Graham didn’t pay them, so it was hard to persuade them to donate frequently. He and his employees ended up taking a triage approach, supplying what little sperm they had to the women who seemed most likely, or most desperate, to get pregnant.

  13. The Repository introduced into the American sperm bank market the use of detailed health histories & testing, and of biographical information on which prospective mothers could base their choice of donor; naturally, they are picky about which ones they choose. For more details, see the appendix.

  14. Today, Rothman is co-founder and medical director of California Cryobank, the largest sperm bank in the US. He estimates that the practice has performed close to 200 post-mortem sperm extractions. Their records show just three extractions in the 1980s and 15 in the 1990s. But from 2000 to 2014, they performed 130: an average of just under nine a year. Rothman’s is by no means the only clinic that offers this service. Recent statistics are scarce, but surveys of US fertility centres in 1997 and 2002 found increasing numbers of requests for post-mortem sperm retrieval, although from a very low base. (The Moral Maze of using a dead man’s sperm)

  15. Children in the DSR were taller than the median growth curve by R = 1.23 inches, averaged across all ages for both sexes (Figure 2.3B). The selection differential, S, measures the strength of selection and is defined as the difference between the mean height of the selected parents and the mean height of the population. Biological mothers were taller than the median Caucasian female by 0.7 inches and selected donors were taller than the median Caucasian male by 2 inches, resulting in a selection differential of S = 1.35 inches (Table 2.3) (18). The response to selection is related to the selection differential by the realized heritability $h^2 = \frac{R}{S}$. Assisted reproduction created a rare natural experiment to study artificial selection for height in humans; the effect of selection as described by the realized heritability $h^2 = 0.91$ was consistent with the heritability of adult height calculated using traditional methods (19).
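
    The quoted arithmetic, spelled out:

    R <- 1.23            # response: DSR children's average excess height (inches)
    S <- (0.7 + 2) / 2   # selection differential: midparent excess height = 1.35 inches
    R / S                # realized heritability h^2
    # [1] 0.9111111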

  16. 23andMe has a much larger sample with education phenotyping, n>700k, but has declined to use it.

  17. I say on average because in the Hsu 2014 simulations, the compressed-sensing algorithm exhibits an interesting phase-transition behavior in which the accuracy of the inferences abruptly jumps once past a certain sample size, rather than smoothly increasing with each datapoint as in most methods.

  18. Technology Review, 5 March 2015:

    Scientists at several centers, including Church’s, think they will soon be able to use stem cells to produce eggs and sperm in the laboratory. Unlike embryos, stem cells can be grown and multiplied. Thus they could offer a vastly improved way to create edited offspring with CRISPR. The recipe goes like this: First, edit the genes of the stem cells. Second, turn them into an egg or sperm. Third, produce an offspring. Some investors got an early view of the technique on December 17, at the Benjamin Hotel in Manhattan, during commercial presentations by OvaScience. The company, which was founded four years ago, aims to commercialize the scientific work of David Sinclair, who is based at Harvard, and Jonathan Tilly, an expert on egg stem cells and the chairman of the biology department at Northeastern University (see 10 Emerging Technologies: Egg Stem Cells, May/June 2012). It made the presentations as part of a successful effort to raise $132 million in new capital during January. During the meeting, Sinclair, a velvet-voiced Australian whom Time last year named one of the 100 Most Influential People in the World, took the podium and provided Wall Street with a peek at what he called truly world-changing developments. People would look back at this moment in time and recognize it as a new chapter in how humans control their bodies, he said, because it would let parents determine when and how they have children and how healthy those children are actually going to be.

    The company has not perfected its stem-cell technology - it has not reported that the eggs it grows in the lab are viable - but Sinclair predicted that functional eggs were a when, and not an if. Once the technology works, he said, infertile women will be able to produce hundreds of eggs, and maybe hundreds of embryos. Using DNA sequencing to analyze their genes, they could pick among them for the healthiest ones. Genetically improved children may also be possible. Sinclair told the investors that he was trying to alter the DNA of these egg stem cells using gene editing, work he later told me he was doing with Church’s lab. We think the new technologies with genome editing will allow it to be used on individuals who aren’t just interested in using IVF to have children but have healthier children as well, if there is a genetic disease in their family, Sinclair told the investors. He gave the example of Huntington’s disease, caused by a gene that will trigger a fatal brain condition even in someone who inherits only one copy. Sinclair said gene editing could be used to remove the lethal gene defect from an egg cell. His goal, and that of OvaScience, is to correct those mutations before we generate your child, he said. It’s still experimental, but there is no reason to expect it won’t be possible in coming years.… Tilly also said his lab was trying to edit egg stem cells with CRISPR right now to rid them of an inherited genetic disease that he didn’t want to name. Tilly emphasized that there are two pieces of the puzzle-one being stem cells and the other gene editing. The ability to create large numbers of egg stem cells is critical, because only with sizable quantities can genetic changes be stably introduced using CRISPR, characterized using DNA sequencing, and carefully studied to check for mistakes before producing an egg. Tilly predicted that the whole end-to-end technology-cells to stem cells, stem cells to sperm or egg and then to offspring-would end up being worked out first in animals, such as cattle, either by his lab or by companies such as eGenesis, the spinoff from the Church lab working on livestock.

  19. This is much like how the global burden of disease shows that the largest losses of DALYs are due to such ordinary causes as respiratory infections, heart disease, diabetes, car accidents, back+neck injuries, smoking etc, and not as much as one might think to HIV or malaria.

    For example, cystic fibrosis is easily screened for in PGD and is one of the most expensive genetic diseases around: treating CF in the USA costs ~$63k annually (more when lung transplants are required), or >$2.3m over a CF patient’s life expectancy of 37 years. However, CF is quite rare, with ~30,000 CF patients in the USA and thus <1000 born each year. Tur-Kaspa et al 2010 estimates that using PGD to prevent 1 year’s worth of 929 CF patients has an undiscounted profit of $2.17 billion. (Discounted at 5%, that would be more like $1.2b: (929*63000) / log(1.05).) Which is less than selection on a commoner trait like IQ would yield. CF, while disabling and extraordinarily expensive, is thankfully rare.
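
    A quick check of those figures (a sketch; the 37-year annuity in the last line is an alternative discounting added for comparison, not a figure from Tur-Kaspa et al 2010):

    63000 * 37                 # undiscounted lifetime cost per CF patient
    # [1] 2331000
    annual <- 929 * 63000      # annual cost of one year's cohort of CF births
    annual / log(1.05)         # the continuous-perpetuity approximation used above
    # ~1.2 billion
    sum(annual / 1.05^(1:37))  # discounting only over the 37-year life expectancy
    # ~0.98 billion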

  20. Thoroughbred horse racing is particularly interesting because, in sharp contrast to human sports in the 1900s-2000s where records are regularly broken, horse racing speeds have plateaued since ~1950 (Gardner 2006), which is why there are so few records broken at Belmont/Preakness/Kentucky, and why there has been only one Triple Crown winner in my lifetime.

    Thoroughbred horsebreeding began in earnest in England perhaps around 1680; the breed is heavily inbred, with something like half the genes coming from 5 major sires: Byerley Turk (~1680), Curwen’s Bay Barb (~1690), Darley Arabian (~1704), Godolphin Arabian (1724), & Eclipse (1764). If the serious breeding can be said to have begun 1680 and improvements stopped 1950, then it took 270 years to exhaust the available genetics. All horses are sexually mature by 2 years of age, but a stud stallion might reproduce for a decade (or more, posthumously, using stored sperm straws & artificial insemination), implying total effective generations ranging from 27 to 137 generations of selection. (Feedback on racing performance for breeders is swift - even the Triple Crown is limited to 3yo horses.) Gardner 2006 notes that for the oldest race in his dataset, the Epsom Derby, the 1846-2006 reduction was 175s to 155s (had it been run before 1680, the difference would have been larger still); most horse races are won by fractions of seconds, with standard deviations of finishing times from half a second to a few seconds. So the effect of all that selection was to improve horse racing ability by as much as >40 SDs. Thus, it is entirely possible that the slowest horse running in any post-1950 race is faster than the fastest horse in the world in 1680.
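
    The underlying arithmetic (a sketch using the figures quoted above):

    c(270/10, 270/2)         # effective generations over ~270 years, at ~10-year vs ~2-year generation times
    # [1]  27 135     (roughly the 27-137 range given above)
    (175 - 155) / c(0.5, 2)  # Epsom Derby improvement measured in per-race SDs of 0.5s vs 2s
    # [1] 40 10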

  21. Taking the binomial model from before, we replace the 1/0 unit weights with weights which are either 0 or drawn from an exponential distribution. A fit to the Benyamin et al 2014 betas gives an exponential distribution with a rate of λ=1063 or, since the betas were in SDs, λ=71. We can then simulate the consequences of going from 5000 null alleles & 5000 positive alleles to all 10000 positive alleles:

    mean(replicate(1000, sum(sample(rexp(10000, rate=71), 5000))))
    # [1] 70.41815521
    sd(  replicate(1000, sum(sample(rexp(10000, rate=71), 5000))))
    # [1] 0.9788748364
    
    mean(replicate(1000, sum(sample(rexp(10000, rate=71), 10000))))
    # [1] 140.8316512
    sd(  replicate(1000, sum(sample(rexp(10000, rate=71), 10000))))
    # [1] 1.42341606
    
    (140.8316512 - 70.41815521) / 0.9788748364
    # [1] 71.93309438
    
    gains <- function (n) { upper <- n; lower <- upper*0.5; rate <- 70;
                            (upper*rate^-1 - lower*rate^-1) / sqrt(lower * (rate^-1)^2) }
    plot(sapply(seq(1, 100000, by=10), gains))
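    # note the rate parameter cancels out of `gains`: it simplifies to sqrt(0.5*n),
    # so gains(10000) = sqrt(5000) ~ 70.7, matching the ~71 SD figures above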

    Another way would be to calculate it: on average, one will have half of the possible 10000 variants; the sum of k=5000 exponential variates of λ=71 is $\text{Gamma}(5000, 71^{-1})$, and the mean of that gamma distribution is $5000 \cdot 71^{-1} = 70.4$ with a SD of $\sqrt{5000 \cdot (71^{-1})^2} = 0.99$; while the upper bound with all 10000 is 140.8 (1.4) respectively. So if existing genetic variation, leading to the differences in intelligence we can observe, represents the contribution of a genetic SD of 0.99 on an absolute scale, then the possible increase in going from ~5000 good variants to all 10000 variants represents an increase of $\frac{140.8 - 70.4}{0.99} = 71.1$ genetic SDs, which after weakening by a heritability of 80% is still large and fairly similar to the binomial result.

  22. See also Chua 2002, World on Fire.