newsletter/2018/13 (Link Bibliography)

“newsletter/​2018/​13” links:

  1. 13


  3. newsletter

  4. 01

  5. 02

  6. 03

  7. 04

  8. 05

  9. 06

  10. 07

  11. 08

  12. 09

  13. 10

  14. 11

  15. 12

  16. 13

  17. 13

  18. 13

  19. Changelog#2018

  20. Danbooru2020#danbooru2017

  21. Embryo-selection#overview-of-major-approaches

  22. Embryo-selection#faq-frequently-asked-questions

  23. Embryo-selection#multi-stage-selection

  24. Embryo-selection#gamete-selection

  25. Embryo-selection#optimal-stoppingsearch

  26. Embryo-selection#robustness-of-utility-weights

  27. Complement

  28. SMPY

  29. Cat-Sense

  30. Bakewell

  31. MLP

  32. Computers

  33. Language

  34. Traffic#july-2018january-2019

  35. 20170101-20181231-annualcomparison.pdf: “2018 Site Traffic (Comparison with 2017)”⁠, Gwern Branwen

  36. 13#overview





  41. ⁠, Nick Bostrom, Thomas Douglas, Anders Sandberg (2016-01-26):

    In some situations a number of agents each have the ability to undertake an initiative that would have substantial effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy.

    To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.

    [Keywords: The Winner’s Curse, Disagreement, Rationality, Aumann, informative priors, shrinkage, bid shading]
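    [The mechanism is easy to see in a toy Monte Carlo simulation (all numbers below are illustrative assumptions, not from the paper): if each of several well-meaning agents acts whenever her own noisy estimate of a harmful initiative’s value is positive, the chance that *someone* acts grows rapidly with the number of agents.]

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = -0.5      # the initiative is actually (mildly) harmful -- assumed
n_agents = 5           # independent well-meaning agents -- assumed
n_trials = 100_000

# Each agent forms an unbiased but noisy estimate of the initiative's value.
estimates = true_value + rng.normal(size=(n_trials, n_agents))

# A lone agent undertakes the initiative iff her own estimate is positive.
p_single = (estimates[:, 0] > 0).mean()

# Under unilateralism, the initiative happens if ANY agent's estimate is positive.
p_unilateral = (estimates > 0).any(axis=1).mean()

print(p_single, p_unilateral)  # roughly 0.31 vs 0.84
```

    [Even a lone agent errs ~31% of the time here, but five unilateralists jointly err ~84% of the time — the curse: the initiative is undertaken far more often than is optimal.]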

  42. 2018-lee.pdf: ⁠, James J. Lee, Robbee Wedow, Aysu Okbay, Edward Kong, Omeed Maghzian, Meghan Zacher, Tuan Anh Nguyen-Viet, Peter Bowers, Julia Sidorenko, Richard Karlsson Linnér, Mark Alan Fontana, Tushar Kundu, Chanwook Lee, Hui Li, Ruoxi Li, Rebecca Royer, Pascal N. Timshel, Raymond K. Walters, Emily A. Willoughby, Loïc Yengo, 23andMe Research Team, COGENT (Cognitive Genomics Consortium), Social Science Genetic Association Consortium, Maris Alver, Yanchun Bao, David W. Clark, Felix R. Day, Nicholas A. Furlotte, Peter K. Joshi, Kathryn E. Kemper, Aaron Kleinman, Claudia Langenberg, Reedik Mägi, Joey W. Trampush, Shefali Setia Verma, Yang Wu, Max Lam, Jing Hua Zhao, Zhili Zheng, Jason D. Boardman, Harry Campbell, Jeremy Freese, Kathleen Mullan Harris, Caroline Hayward, Pamela Herd, M. Kumari, Todd Lencz, Jian’an Luan, Anil K. Malhotra, Andres Metspalu, Lili Milani, Ken K. Ong, John R. B. Perry, David J. Porteous, Marylyn D. Ritchie, Melissa C. Smart, Blair H. Smith, Joyce Y. Tung, Nicholas J. Wareham, James F. Wilson, Jonathan P. Beauchamp, Dalton C. Conley, Tõnu Esko, Steven F. Lehrer, Patrik K. E. Magnusson, Sven Oskarsson, Tune H. Pers, Matthew R. Robinson, Kevin Thom, Chelsea Watson, Christopher F. Chabris, Michelle N. Meyer, David I. Laibson, Jian Yang, Magnus Johannesson, Philipp D. Koellinger, Patrick Turley, Peter M. Visscher, Daniel J. Benjamin, David Cesarini (2018-07-23; iq):

    Here we conducted a large-scale genetic association analysis of educational attainment in a sample of approximately 1.1 million individuals and identify 1,271 independent genome-wide-significant SNPs. For the SNPs taken together, we found evidence of heterogeneous effects across environments. The SNPs implicate genes involved in brain-development processes and neuron-to-neuron communication. In a separate analysis of the X chromosome, we identify 10 independent genome-wide-significant SNPs and estimate a heritability of around 0.3% in both men and women, consistent with partial dosage compensation. A joint (multi-phenotype) analysis of educational attainment and three related cognitive phenotypes generates polygenic scores that explain 11–13% of the variance in educational attainment and 7–10% of the variance in cognitive performance. This prediction accuracy substantially increases the utility of polygenic scores as tools in research.

  43. ⁠, A. G. Allegrini, S. Selzam, K. Rimfeld, S. von Stumm, J. B. Pingault, R. Plomin (2018-09-17):

    Recent advances in genomics are producing powerful DNA predictors of complex traits, especially cognitive abilities. Here, we leveraged summary statistics from the most recent genome-wide association studies of intelligence and educational attainment to build prediction models of general cognitive ability and educational achievement. To this end, we compared the performances of multi-trait genomic and polygenic scoring methods. In a representative UK sample of 7,026 children at age 12 and 16, we show that we can now predict up to 11 percent of the variance in intelligence and 16 percent in educational achievement. We also show that predictive power increases from age 12 to age 16 and that genomic predictions do not differ for girls and boys. Multivariate genomic methods were effective in boosting predictive power and, even though prediction accuracy varied across polygenic score approaches, results were similar using different multivariate genomic and polygenic scoring methods. Polygenic scores for educational attainment and intelligence are the most powerful predictors in the behavioural sciences and exceed predictions that can be made from parental phenotypes such as educational attainment and occupational status.

  44. 2018-allegrini-figure2-iqpgs-mtaggsem.png

  45. 2020-barth.pdf: ⁠, Daniel Barth, Nicholas W. Papageorge, Kevin Thom (2020-04-01; economics):

    We show that genetic endowments linked to educational attainment strongly and robustly predict wealth at retirement. The estimated relationship is not fully explained by flexibly controlling for education and labor income. We therefore investigate a host of additional mechanisms that could account for the gene-wealth gradient, including inheritances, mortality, risk preferences, portfolio decisions, beliefs about the probabilities of macroeconomic events, and planning horizons. We provide evidence that genetic endowments related to human capital accumulation are associated with wealth not only through educational attainment and labor income but also through a facility with complex financial decision-making.

  46. 2018-papageorge.pdf: ⁠, Nicholas W. Papageorge, Kevin Thom (2018-09; genetics  /​ ​​ ​heritable):

    Recent advances have led to the discovery of specific genetic variants that predict educational attainment. We study how these variants, summarized as a linear index—known as a polygenic score—are associated with human capital accumulation and labor market outcomes in the Health and Retirement Study (HRS). We present two main sets of results. First, we find evidence that the genetic factors measured by this score interact strongly with childhood socioeconomic status (SES) in determining educational outcomes. In particular, while the polygenic score predicts higher rates of college graduation on average, this relationship is substantially stronger for individuals who grew up in households with higher socioeconomic status relative to those who grew up in poorer households. Second, the polygenic score predicts labor earnings even after adjusting for completed education, with larger returns in more recent decades. These patterns suggest that the genetic traits that promote education might allow workers to better accommodate ongoing skill biased technological change. Consistent with this interpretation, we find a positive association between the polygenic score and non-routine analytic tasks that have benefited from the introduction of new technologies. Nonetheless, the college premium remains the dominant determinant of earnings differences at all levels of the polygenic score. Given the role of childhood SES in predicting college attainment, this raises concerns about wasted potential arising from limited household resources.

  47. ⁠, Aldo Rustichini, William G. Iacono, James Lee, Matt McGue (2019-09-17):

    A genome-wide association study (GWAS) estimates the size and statistical significance of the effect of common genetic variants on a phenotype of interest. A Polygenic Score (PGS) is a score, computed for each individual, summarizing the predicted value of a phenotype on the basis of the individual’s genotype. The PGS is computed as a weighted sum of the values of the individual’s genetic variants, using as weights the GWAS estimated coefficients from a training sample. Thus, the PGS carries information on the genotype, and only on the genotype, of an individual. In our case, the phenotypes of interest are measures of educational achievement, such as having a college degree or years of education, in a sample of approximately 2700 adult twins and their parents.

    We set up the analysis in a standard model of optimal parental investment and intergenerational mobility, extended to include a fully specified genetic analysis of skill transmission, and show that the model’s predictions on mobility differ substantially from those of the standard model. For instance, the coefficient of intergenerational income elasticity may be larger, and may differ across countries because the distribution of the genotype is different, completely independently of any differences in institutions, technology, or preferences.

    We then study how much of the educational achievement is explained by the PGS for education, thus estimating how much of the variance of education can be explained by genetic factors alone. We find a substantial effect of PGS on performance in school, years of education and college.

    Finally, we study the channels between the PGS and educational achievement, distinguishing how much is due to cognitive skills and how much to personality traits. We show that the effect of the PGS is substantially stronger on Intelligence than on other traits, like Constraint, which would seem natural explanatory factors of educational success. For educational achievement, both cognitive and non-cognitive skills are important, although the larger fraction of success is channeled through Intelligence.
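    [The weighted-sum construction of a PGS described above can be sketched in a few lines (simulated genotypes and weights for illustration; real analyses use published GWAS summary statistics and far more SNPs):]

```python
import numpy as np

rng = np.random.default_rng(1)

n_individuals, n_snps = 1_000, 5_000

# Genotypes coded as minor-allele counts: 0, 1, or 2 copies per SNP.
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))

# Per-SNP effect sizes estimated by a GWAS in an independent training sample
# (simulated here; normally taken from published summary statistics).
gwas_betas = rng.normal(0.0, 0.01, size=n_snps)

# Polygenic score: weighted sum of allele counts over all scored SNPs.
pgs = genotypes @ gwas_betas

assert pgs.shape == (n_individuals,)
```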

  48. ⁠, Abdel Abdellaoui, David Hugh-Jones, Kathryn E. Kemper, Yan Holtz, Michel G. Nivard, Laura Veul, Loic Yengo, Brendan P. Zietsch, Timothy M. Frayling, Naomi Wray, Jian Yang, Karin J. H. Verweij, Peter M. Visscher (2018-10-30):

    Human DNA varies across geographic regions, with most variation observed so far reflecting distant ancestry differences. Here, we investigate the geographic clustering of genetic variants that influence complex traits and disease risk in a sample of ~450,000 individuals from Great Britain. Out of 30 traits analyzed, 16 show significant geographic clustering at the genetic level after controlling for ancestry, likely reflecting recent migration driven by socio-economic status (SES). Alleles associated with educational attainment (EA) show most clustering, with EA-decreasing alleles clustering in lower SES areas such as coal mining areas. Individuals that leave coal mining areas carry more EA-increasing alleles on average than the rest of Great Britain. In addition, we leveraged the geographic clustering of complex trait variation to further disentangle regional differences in socio-economic and cultural outcomes through genome-wide association studies on publicly available regional measures, namely coal mining, religiousness, 1970/​​​​2015 general election outcomes, and Brexit referendum results.

  49. ⁠, Daniel W. Belsky, Benjamin W. Domingue, Robbee Wedow, Louise Arseneault, Jason D. Boardman, Avshalom Caspi, Dalton Conley, Jason M. Fletcher, Jeremy Freese, Pamela Herd, Terrie E. Moffitt, Richie Poulton, Kamil Sicinski, Jasmin Wertz, Kathleen Mullan Harris (2018-07-31):

    Genome-wide association study (GWAS) discoveries about educational attainment have raised questions about the meaning of the genetics of success. These discoveries could offer clues about biological mechanisms or, because children inherit genetics and social class from parents, education-linked genetics could be spurious correlates of socially transmitted advantages. To distinguish between these hypotheses, we studied social mobility in five cohorts from three countries. We found that people with more education-linked genetics were more successful compared with parents and siblings. We also found mothers’ education-linked genetics predicted their children’s attainment over and above the children’s own genetics, indicating an environmentally mediated genetic effect. Findings reject pure social-transmission explanations of education GWAS discoveries. Instead, genetics influences attainment directly through social mobility and indirectly through family environments.

    A summary genetic measure, called a “polygenic score”, derived from a genome-wide association study (GWAS) of education can modestly predict a person’s educational and economic success. This prediction could signal a biological mechanism: Education-linked genetics could encode characteristics that help people get ahead in life. Alternatively, prediction could reflect social history: People from well-off families might stay well-off for social reasons, and these families might also look alike genetically. A key test to distinguish biological mechanism from social history is if people with higher education polygenic scores tend to climb the social ladder beyond their parents’ position. Upward mobility would indicate education-linked genetics encodes characteristics that foster success. We tested if education-linked polygenic scores predicted social mobility in >20,000 individuals in five longitudinal studies in the United States, Britain, and New Zealand. Participants with higher polygenic scores achieved more education and career success and accumulated more wealth. However, they also tended to come from better-off families. In the key test, participants with higher polygenic scores tended to be upwardly mobile compared with their parents. Moreover, in sibling-difference analysis, the sibling with the higher polygenic score was more upwardly mobile. Thus, education GWAS discoveries are not mere correlates of privilege; they influence social mobility within a life. Additional analyses revealed that a mother’s polygenic score predicted her child’s attainment over and above the child’s own polygenic score, suggesting parents’ genetics can also affect their children’s attainment through environmental pathways. Education GWAS discoveries affect socioeconomic attainment through influence on individuals’ family-of-origin environments and their social mobility.

    [Keywords: genetics, social class, social mobility, sociogenomics, polygenic score]

  50. 2016-belsky.pdf: ⁠, Daniel W. Belsky, Terrie E. Moffitt, David L. Corcoran, Benjamin Domingue, HonaLee Harrington, Sean Hogan, Renate Houts, Sandhya Ramrakha, Karen Sugden, Benjamin S. Williams, Richie Poulton, Avshalom Caspi (2016-06-01; genetics  /​ ​​ ​correlation):

    A previous genome-wide association study (GWAS) of more than 100,000 individuals identified molecular-genetic predictors of educational attainment.

    We undertook in-depth life-course investigation of the polygenic score derived from this GWAS using the 4-decade Dunedin Study (N = 918). There were 5 main findings.

    1. polygenic scores predicted adult economic outcomes even after accounting for educational attainments.
    2. genes and environments were correlated: Children with higher polygenic scores were born into better-off homes.
    3. children’s polygenic scores predicted their adult outcomes even when analyses accounted for their social-class origins; social-mobility analysis showed that children with higher polygenic scores were more upwardly mobile than children with lower scores.
    4. polygenic scores predicted behavior across the life course, from early acquisition of speech and reading skills through geographic mobility and mate choice and on to financial planning for retirement.
    5. polygenic-score associations were mediated by psychological characteristics, including intelligence, self-control, and interpersonal skill. Effect sizes were small.

    Factors connecting GWAS sequence with life outcomes may provide targets for interventions to promote population-wide positive development.

    [Keywords: genetics, behavior genetics, intelligence, personality, adult development]

  51. ⁠, Daniel W. Belsky, Avshalom Caspi, Louise Arseneault, David L. Corcoran, Benjamin W. Domingue, Kathleen Mullan Harris, Renate Houts, Jonathan Mill, Terrie E. Moffitt, Joseph Prinz, Karen Sugden, Jasmin Wertz, Benjamin Williams, Candice Odgers (2018-07-25):

    People’s life chances can be predicted by their neighborhoods. This observation is driving efforts to improve lives by changing neighborhoods. Some neighborhood effects may be causal, supporting neighborhood-level interventions. Other neighborhood effects may reflect selection of families with different characteristics into different neighborhoods, supporting interventions that target families/​​​​individuals directly. To test how selection affects different neighborhood-linked problems, we linked neighborhood data with genetic, health, and social-outcome data for >7,000 European-descent UK and US young people in the E-Risk and Add Health Studies. We tested selection/​​​​concentration of genetic risks for obesity, schizophrenia, teen-pregnancy, and poor educational outcomes in high-risk neighborhoods, including genetic analysis of neighborhood mobility. Findings argue against genetic selection/​​​​concentration as an explanation for neighborhood gradients in obesity and mental-health problems, suggesting neighborhoods may be causal. In contrast, modest genetic selection/​​​​concentration was evident for teen-pregnancy and poor educational outcomes, suggesting neighborhood effects for these outcomes should be interpreted with care.



  54. ⁠, Michael Inouye, Gad Abraham, Christopher P. Nelson, Angela M. Wood, Michael J. Sweeting, Frank Dudbridge, Florence Y. Lai, Stephen Kaptoge, Marta Brozynska, Tingting Wang, Shu Ye, Thomas R. Webb, Martin K. Rutter, Ioanna Tzoulaki, Riyaz S. Patel, Ruth J. F. Loos, Bernard Keavney, Harry Hemingway, John Thompson, Hugh Watkins, Panos Deloukas, Emanuele Di Angelantonio, Adam S. Butterworth, John Danesh, Nilesh J. Samani, for The CardioMetabolic Consortium CHD Working Group (2018-01-19):

    Background: Coronary artery disease (CAD) has substantial heritability and a polygenic architecture; however, genomic risk scores have not yet leveraged the totality of genetic information available nor been externally tested at population-scale to show potential utility in primary prevention.

    Methods: Using a meta-analytic approach to combine large-scale genome-wide and targeted genetic association data, we developed a new genomic risk score for CAD (metaGRS), consisting of 1.7 million genetic variants. We externally tested metaGRS, individually and in combination with available conventional risk factors, in 22,242 CAD cases and 460,387 non-cases from UK Biobank.


    Findings: In UK Biobank, a standard deviation increase in metaGRS had a hazard ratio (HR) of 1.71 (95% CI 1.68–1.73) for CAD, greater than any other externally tested genetic risk score. Individuals in the top 20% of the metaGRS distribution had an HR of 4.17 (95% CI 3.97–4.38) compared with those in the bottom 20%. The metaGRS had a higher C-index (C = 0.623, 95% CI 0.615–0.631) for incident CAD than any of four conventional factors (smoking, diabetes, hypertension, and body mass index), and addition of the metaGRS to a model of conventional risk factors increased the C-index by 3.7%. In individuals on lipid-lowering or anti-hypertensive medications at recruitment, the metaGRS hazard for incident CAD was significantly but only partially attenuated, with an HR of 2.83 (95% CI 2.61–3.07) between the top and bottom 20% of the metaGRS distribution.


    Interpretation: Recent genetic association studies have yielded enough information to meaningfully stratify individuals using the metaGRS for CAD risk in both early and later life, thus enabling targeted primary intervention in combination with conventional risk factors. The metaGRS effect was partially attenuated by lipid and blood pressure-lowering medication; however, other prevention strategies will be required to fully benefit from earlier genomic risk stratification.


    Funding: National Health and Medical Research Council of Australia, British Heart Foundation, Australian Heart Foundation.
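    [The top-vs-bottom-quintile comparison reported for the metaGRS can be illustrated with a toy liability-threshold simulation (score effect, prevalence, and sample size are all assumed here; the paper estimates Cox hazard ratios, not this simple risk ratio):]

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
score = rng.normal(size=n)                   # standardized genomic risk score

# Liability-threshold model: disease occurs when score + noise crosses a cutoff.
beta = 0.3                                   # assumed score effect on liability
liability = beta * score + rng.normal(size=n)
disease = liability > np.quantile(liability, 0.95)   # ~5% prevalence (assumed)

# Compare disease rates in the top vs bottom 20% of the score distribution.
q20, q80 = np.quantile(score, [0.20, 0.80])
risk_top = disease[score >= q80].mean()
risk_bottom = disease[score <= q20].mean()
risk_ratio = risk_top / risk_bottom
```

    [Even a modest per-SD effect on liability concentrates risk several-fold between the score extremes, which is why quintile contrasts look so much larger than the per-SD hazard ratio.]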

  55. 2018-inouye-cad-riskprediction.png

  56. 2018-khera.pdf: “Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations”⁠, Amit V. Khera, Mark Chaffin, Krishna G. Aragam, Mary E. Haas, Carolina Roselli, Seung Hoan Choi, Pradeep Natarajan, Eric S. Lander, Steven A. Lubitz, Patrick T. Ellinor, Sekar Kathiresan

  57. 2018-khera-fig23-pgsprediction.png

  58. 2018-torkamani.pdf: “The personal and clinical utility of polygenic risk scores”⁠, Ali Torkamani, Nathan E. Wineinger, Eric J. Topol


  60. ⁠, Sebastian M. Sodini, Kathryn E. Kemper, Naomi R. Wray, Maciej Trzaskowski (2018-03-30):

    Accurate estimation of genetic correlations requires large sample sizes and access to genetically informative data, which are not always available. Accordingly, phenotypic correlations are often assumed to reflect genotypic correlations in evolutionary biology. Cheverud’s conjecture asserts that the use of phenotypic correlations as proxies for genetic correlations is appropriate. Empirical evidence of the conjecture has been found across plant and animal species, with results suggesting that there is indeed a robust relationship between the two. Here, we investigate the conjecture in human populations, an analysis made possible by recent developments in availability of human genomic data and computing resources. A sample of 108,035 British European individuals from the UK Biobank was split equally into discovery and replication datasets. 17 traits were selected based on sample size, distribution and heritability. Genetic correlations were calculated using LD score regression applied to the genome-wide association summary statistics of pairs of traits, and compared within and across datasets. Strong and statistically-significant correlations were found for the between-dataset comparison, suggesting that the genetic correlations from one independent sample were able to predict the phenotypic correlations from another independent sample within the same population. Designating the selected traits as morphological or non-morphological indicated little difference in correlation. The results of this study support the existence of a relationship between genetic and phenotypic correlations in humans. This finding is of specific interest in anthropological studies, which use measured phenotypic correlations to make inferences about the genetics of ancient human populations.
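    [A toy two-trait simulation (heritability and genetic correlation assumed) shows why the conjecture holds when environmental effects are uncorrelated: the phenotypic correlation is then just the genetic correlation scaled by heritability.]

```python
import numpy as np

rng = np.random.default_rng(3)

n = 50_000
h2 = 0.5     # assumed heritability of both traits
rg = 0.6     # assumed genetic correlation between the traits

# Correlated additive-genetic values for the two traits.
cov_g = h2 * np.array([[1.0, rg], [rg, 1.0]])
g = rng.multivariate_normal([0.0, 0.0], cov_g, size=n)

# Independent (uncorrelated) environmental deviations.
e = rng.normal(scale=np.sqrt(1.0 - h2), size=(n, 2))

pheno = g + e
r_p = np.corrcoef(pheno[:, 0], pheno[:, 1])[0, 1]

# With uncorrelated environments, r_p ~= h2 * rg (= 0.30 here), so the
# phenotypic correlation tracks the genetic one up to a heritability factor.
```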

  61. 1988-cheverud.pdf

  62. 1995-roff.pdf

  63. 2008-kruuk.pdf

  64. 2011-dochtermann.pdf








  72. 2018-yamashiro.pdf: “Generation of human oogonia from induced pluripotent stem cells in vitro”⁠, Chika Yamashiro, Kotaro Sasaki, Yukihiro Yabuta, Yoji Kojima, Tomonori Nakamura, Ikuhiro Okamoto, Shihori Yokobayashi, Yusuke Murase, Yukiko Ishikura, Kenjiro Shirane, Hiroyuki Sasaki, Takuya Yamamoto, Mitinori Saitou



  75. 2017-wiggans.pdf: ⁠, George R. Wiggans, John B. Cole, Suzanne M. Hubbard, Tad S. Sonstegard (2017; genetics  /​ ​​ ​selection):

    Genomic selection has revolutionized dairy cattle breeding. Since 2000, assays have been developed to genotype large numbers of single-nucleotide polymorphisms (SNPs) at relatively low cost. The first commercial SNP genotyping chip was released with a set of 54,001 SNPs in December 2007. Over 15,000 genotypes were used to determine which SNPs should be used in genomic evaluation of US dairy cattle. Official USDA genomic evaluations were first released in January 2009 for Holsteins and Jerseys, in August 2009 for Brown Swiss, in April 2013 for Ayrshires, and in April 2016 for Guernseys. Producers have accepted genomic evaluations as accurate indications of a bull’s eventual daughter-based evaluation. The integration of DNA marker technology and genomics into the traditional evaluation system has doubled the rate of genetic progress for traits of economic importance, decreased generation interval, increased selection accuracy, reduced previous costs of progeny testing, and allowed identification of recessive lethals.

    [Keywords: genetic evaluation, single-nucleotide polymorphism, SNP, reliability, imputation, haplotype, genotype]
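    [The claimed doubling of annual genetic gain follows directly from the breeder’s equation: genomic selection trades some accuracy for a much shorter generation interval. The parameter values below are illustrative assumptions, not the paper’s estimates.]

```python
def annual_gain(intensity: float, accuracy: float, genetic_sd: float,
                generation_interval_years: float) -> float:
    """Breeder's equation: expected genetic gain per year,
    delta_G = (i * r * sigma_g) / L."""
    return intensity * accuracy * genetic_sd / generation_interval_years

# Illustrative (assumed) parameters for dairy bulls:
# progeny testing: very accurate, but ~7-year generation interval.
progeny_test = annual_gain(2.0, 0.90, 1.0, 7.0)
# genomic selection of young bulls: less accurate, ~2.5-year interval.
genomic_sel = annual_gain(2.0, 0.70, 1.0, 2.5)

print(genomic_sel / progeny_test)  # ~2.2x faster annual gain
```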

  76. 2017-wiggans-dairyselection-figure1-genotypingincreasesince2009.png

  77. 2017-wiggans-dairyselection-table4-predictionaccuracyincreases.png

  78. 2017-wiggans-dairyselection-figure5-increasingeconomicvalue.png


  80. ⁠, Lyudmila N. Trut (1999-03):

    [Popular review of the farm-fox domestication experiment by the lead researcher. Trut gives the history of Belyaev’s founding of the experiment in 1959, and how the results gradually proved his theory about ‘domestication syndrome’: that domestication produces multiple simultaneous effects like floppy ears despite the foxes being bred solely for being willing to approach a strange human, suggesting an underlying common genetic mechanism]

    Forty years into our unique lifelong experiment, we believe that Dmitry Belyaev would be pleased with its progress. By intense selective breeding, we have compressed into a few decades an ancient process that originally unfolded over thousands of years. Before our eyes, “the Beast” has turned into “Beauty”, as the aggressive behavior of our herd’s wild progenitors entirely disappeared. We have watched new morphological traits emerge, a process previously known only from archaeological evidence. Now we know that these changes can burst into a population early in domestication, triggered by the stresses of captivity, and that many of them result from changes in the timing of developmental processes. In some cases the changes in timing, such as earlier sexual maturity or retarded growth of somatic characters, resemble pedomorphosis. Some long-standing puzzles remain. We believed at the start that foxes could be made to reproduce twice a year and all year round, like dogs. We would like to understand why this has turned out not to be quite so. We are also curious about how the vocal repertoire of foxes changes under domestication. Some of the calls of our adult foxes resemble those of dogs and, like those of dogs, appear to be holdovers from puppyhood, but only further study will reveal the details. The biggest unanswered question is just how much further our selective-breeding experiment can go. The domestic fox is not a domestic dog, but we believe that it has the genetic potential to become more and more doglike.



  83. Tryon

  84. 1972-wahlsten.pdf

  85. 1940-tryon-figure4-mazebrightdullrats-distributions.png

  86. 2010-kean.pdf: {#linkBibliography-(science)-2010 .docMetadata doi=“10.1126/​​science.328.5976.301”}, Sam Kean (Science) (2010-04-16; genetics  /​ ​​ ​selection):

    [Review of modern apple breeding techniques: genome sequencing enables selecting on seeds rather than trees by predicting taste & robustness, saving years of delay; this also allows avoiding the ‘GMO’ stigma by crossbreeding (quickly moving genes into new apple trees without direct genetic editing using ), such as a “fast-flowering gene” to accelerate maturation during evaluation but then select it out for the final tree; the creation of “The Gauntlet”, a greenhouse deliberately stocked with as many pathogens as possible, provides a stress test to weed out weak saplings as quickly as possible; and buds can be cryogenically preserved to cut down storage costs by more than an order of magnitude.]

    Until recently, geneticists, their skills honed on Arabidopsis and other quick-breeding flora, avoided fruit-tree research like a blight. Of the 11,000 U.S. field tests on plants with transgenic genes between 1987 and 2004, just 1% focused on fruit trees. That’s partly because of the slow pace. Whereas vegetables like corn might produce two harvests each summer, apple trees need eons—around 5 years—to produce their first fruit, most of which will be discarded as ugly, bitter, or squishy. But everything in apple breeding is about to change. An Italian team plans to publish the decoded apple genome this summer, and scientists are starting to single out complex genetic markers for taste and heartiness. In some cases the scientists even plan, by inserting genes from other species, to eliminate the barren juvenile stage and push fruit trees to mature rapidly, greatly reducing generation times.


  88. {#linkBibliography-o’connor-et-al-2018 .docMetadata doi=“10.1101/​​420497”}, Luke J. O’Connor, Armin P. Schoech, Farhad Hormozdiari, Steven Gazal, Nick Patterson, Alkes L. Price (2018-09-18):

    Complex traits and common disease are highly polygenic: thousands of common variants are causal, and their effect sizes are almost always small. Polygenicity could be explained by negative selection, which constrains common-variant effect sizes and may reshape their distribution across the genome. We refer to this phenomenon as flattening, as genetic signal is flattened relative to the underlying biology. We introduce a mathematical definition of polygenicity, the effective number of associated SNPs, and a robust statistical method to estimate it. This definition of polygenicity differs from the number of causal SNPs, a standard definition; it depends strongly on SNPs with large effects. In analyses of 33 complex traits (average n = 361k), we determined that common variants are ~4× more polygenic than low-frequency variants, consistent with pervasive flattening. Moreover, functionally important regions of the genome have increased polygenicity in proportion to their increased heritability, implying that heritability enrichment reflects differences in the number of associations rather than their magnitude (which is constrained by selection). We conclude that negative selection constrains the genetic signal of biologically important regions and genes, reshaping genetic architecture.

  89. 2018-pardinas.pdf: ⁠, Antonio F. Pardiñas, Peter Holmans, Andrew J. Pocklington, Valentina Escott-Price, Stephan Ripke, Noa Carrera, Sophie E. Legge, Sophie Bishop, Darren Cameron, Marian L. Hamshere, Jun Han, Leon Hubbard, Amy Lynham, Kiran Mantripragada, Elliott Rees, James H. MacCabe, Steven A. McCarroll, Bernhard T. Baune, Gerome Breen, Enda M. Byrne, Udo Dannlowski, Thalia C. Eley, Caroline Hayward, Nicholas G. Martin, Andrew M. McIntosh, Robert Plomin, David J. Porteous, Naomi R. Wray, Armando Caballero, Daniel H. Geschwind, Laura M. Huckins, Douglas M. Ruderfer, Enrique Santiago, Pamela Sklar, Eli A. Stahl, Hyejung Won, Esben Agerbo, Thomas D. Als, Ole A. Andreassen, Marie Bækvad-Hansen, Preben Bo Mortensen, Carsten Bøcker Pedersen, Anders D. Børglum, Jonas Bybjerg-Grauholm, Srdjan Djurovic, Naser Durmishi, Marianne Giørtz Pedersen, Vera Golimbet, Jakob Grove, David M. Hougaard, Manuel Mattheisen, Espen Molden, Ole Mors, Merete Nordentoft, Milica Pejovic-Milovancevic, Engilbert Sigurdsson, Teimuraz Silagadze, Christine Søholm Hansen, Kari Stefansson, Hreinn Stefansson, Stacy Steinberg, Sarah Tosato, Thomas Werge, GERAD1 Consortium, CRESTAR Consortium, David A. Collier, Dan Rujescu, George Kirov, Michael J. Owen, Michael C. O’Donovan and James T. R. Walters (2018; genetics  /​ ​​ ​selection):

    Schizophrenia is a debilitating psychiatric condition often associated with poor quality of life and decreased life expectancy. Lack of progress in improving treatment outcomes has been attributed to limited knowledge of the underlying biology, although large-scale genomic studies have begun to provide insights. We report a new genome-wide association study of schizophrenia (11,260 cases and 24,542 controls), and through meta-analysis with existing data we identify 50 novel associated loci and 145 loci in total. Through integrating genomic fine-mapping with brain expression and chromosome conformation data, we identify candidate causal genes within 33 loci. We also show for the first time that the common variant association signal is highly enriched among genes that are under strong selective pressures. These findings provide new insights into the biology and genetic architecture of schizophrenia, highlight the importance of mutation-intolerant genes and suggest a mechanism by which common risk variants persist in the population.

  90. ⁠, Felix C. Tropf, Renske M. Verweij, Peter J. van der Most, Gert Stulp, Andrew Bakshi, Daniel A. Briley, Matthew Robinson, Anastasia Numan, Tõnu Esko, Andres Metspalu, Sarah E. Medland, Nicholas G. Martin, Harold Snieder, S. Hong Lee, Melinda C. Mills (2016-05-02; genetics  /​ ​​ ​heritable⁠, genetics  /​ ​​ ​selection  /​ ​​ ​dysgenics):

    Family and twin studies suggest that up to 50% of individual differences in human fertility within a population might be heritable. However, it remains unclear whether the genes associated with fertility outcomes such as number of children ever born (NEB) or age at first birth (AFB) are the same across geographical and historical environments. By not taking this into account, previous genetic studies implicitly assumed that the genetic effects are constant across time and space. We conduct a mega-analysis applying whole genome methods on 31,396 unrelated men and women from six Western countries. Across all individuals and environments, common single-nucleotide polymorphisms (SNPs) explained only ~4% of the variance in NEB and AFB. We then extend these models to test whether genetic effects are shared across different environments or unique to them. For individuals belonging to the same population and demographic cohort (born before or after the 20th century fertility decline), SNP-based heritability was almost five times higher at 22% for NEB and 19% for AFB. We also found no evidence suggesting that genetic effects on fertility are shared across time and space. Our findings imply that the environment strongly modifies genetic effects on the tempo and quantum of fertility, that currently ongoing natural selection is heterogeneous across environments, and that gene-environment interactions may partly account for missing heritability in fertility. Future research needs to combine efforts from genetic research and from the social sciences to better understand human fertility.

    Author Summary

    Fertility behavior—such as age at first birth and number of children—varies strongly across historical time and geographical space. Yet, family and twin studies, which suggest that up to 50% of individual differences in fertility are heritable, implicitly assume that the genes important for fertility are the same across both time and space. Using molecular genetic data (SNPs) from over 30,000 unrelated individuals from six different countries, we show that different genes influence fertility in different time periods and different countries, and that the genetic effects consistently related to fertility are presumably small. The fact that genetic effects on fertility appear not to be universal could have tremendous implications for research in the area of reproductive medicine, social science and evolutionary biology alike.



  93. ⁠, Stéphane Aris-Brosou (2018-04-25):

    The role played by natural selection in shaping present-day human populations has received extensive scrutiny [1, 2, 3], especially in the context of local adaptations [4]. However, most studies to date assume, either explicitly or not, that populations have been in their current locations long enough to adapt to local conditions [5], and that population sizes were large enough to allow for the action of selection [6]. If these conditions were satisfied, not only should selection be effective at promoting local adaptations, but deleterious alleles should also be eliminated over time. To assess this prediction, the genomes of 2,062 individuals, including 1,179 ancient humans, were reanalyzed to reconstruct how frequencies of risk alleles and their homozygosity changed through space and time in Europe. While the overall deleterious homozygosity consistently decreased through space and time, risk alleles have shown a steady increase in frequency. Even the mutations that are predicted to be most deleterious fail to exhibit any significant decrease in frequency. These conclusions do not deny the existence of local adaptations, but highlight the limitations imposed by drift and range expansions on the strength of selection in purging the mutational load affecting human populations.

  94. ⁠, Andrew Brock, Jeff Donahue, Karen Simonyan (2018-09-28):

    Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator’s input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6.
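    The “truncation trick” amounts to sampling the generator’s latent from a truncated normal at test time: components whose magnitude exceeds a chosen threshold are resampled, trading sample variety for fidelity. A minimal plain-Python sketch of that sampling step (illustrative only, not the authors’ code):

    ```python
    import random

    def truncated_z(n, threshold, rng=random.Random(0)):
        """Draw n latent components from N(0,1), resampling any draw whose
        magnitude exceeds `threshold` (the BigGAN "truncation trick").
        Lower thresholds trade sample variety for higher fidelity."""
        z = []
        while len(z) < n:
            x = rng.gauss(0.0, 1.0)
            if abs(x) <= threshold:
                z.append(x)
        return z

    # A 128-dimensional latent with a fairly aggressive truncation of 0.5:
    latent = truncated_z(128, threshold=0.5)
    ```

    At threshold → ∞ this recovers ordinary Gaussian sampling; at small thresholds the generator sees only high-density latents, which is what concentrates samples on high-fidelity modes.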






  100. ⁠, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (2018-10-11):

    We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.

    BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
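    The bidirectional pre-training is implemented as a masked language model: a fraction of input tokens is corrupted and the model must reconstruct them from context on both sides. A toy sketch of the corruption step, using the paper’s 15% masking rate and 80/10/10 (mask/random/unchanged) split; the helper below is illustrative, not from the BERT codebase:

    ```python
    import random

    def mask_for_mlm(tokens, vocab, mask_rate=0.15, rng=random.Random(0)):
        """Toy sketch of BERT's masked-LM corruption: pick ~15% of positions
        as prediction targets; of those, 80% become [MASK], 10% a random
        vocabulary token, 10% stay unchanged. The model must predict the
        original tokens using context from both left and right."""
        out, targets = list(tokens), {}
        for i in range(len(tokens)):
            if rng.random() < mask_rate:
                targets[i] = tokens[i]   # remember the original for the loss
                r = rng.random()
                if r < 0.8:
                    out[i] = "[MASK]"
                elif r < 0.9:
                    out[i] = rng.choice(vocab)
                # else: leave the token unchanged but still predict it
        return out, targets
    ```

    Because the loss is computed only at the corrupted positions, every layer can condition on both directions without the label leaking through, which is what distinguishes this from a left-to-right language model.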


  102. 2018-poplin.pdf: ⁠, Ryan Poplin, Avinash V. Varadarajan, Katy Blumer, Yun Liu, Michael V. McConnell, Greg S. Corrado, Lily Peng, Dale R. Webster (2018-01-01; ai):

    Traditionally, medical discoveries are made by observing associations, making hypotheses from them and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can often be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real data. Here, we show that deep learning can extract new knowledge from retinal fundus images. Using deep-learning models trained on data from 284,335 patients and validated on 2 independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70). We also show that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction. [Sex detection replicated in ⁠.]

  103. ⁠, Brendan Shillingford, Yannis Assael, Matthew W. Hoffman, Thomas Paine, Cían Hughes, Utsav Prabhu, Hank Liao, Hasim Sak, Kanishka Rao, Lorrayne Bennett, Marie Mulville, Ben Coppin, Ben Laurie, Andrew Senior, Nando de Freitas (2018-07-13):

    This work presents a scalable solution to open-vocabulary visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information. Our approach significantly improves on other lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8% and 76.8% WER respectively.

  104. ⁠, Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, Dario Amodei (2018-11-15):

    To solve complex real-world problems with reinforcement learning, we cannot rely on manually specified reward functions. Instead, we can have humans communicate an objective to the agent directly. In this work, we combine two approaches to learning from human feedback: expert demonstrations and trajectory preferences. We train a deep neural network to model the reward function and use its predicted reward to train an -based deep reinforcement learning agent on 9 Atari games. Our approach beats the imitation learning baseline in 7 games and achieves strictly superhuman performance on 2 games without using game rewards. Additionally, we investigate the goodness of fit of the reward model, present some reward hacking problems, and study the effects of noise in the human labels.
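    Trajectory preferences in this line of work are fit with a Bradley-Terry model: the probability that the human prefers one clip is a softmax over the clips’ summed predicted rewards, and the reward network is trained to minimize the resulting cross-entropy. A minimal sketch (illustrative function, not the authors’ code):

    ```python
    import math

    def pref_loss(r1, r2, human_prefers_first=True):
        """Bradley-Terry preference loss for reward learning from trajectory
        comparisons: P(clip1 preferred) = softmax over summed predicted
        per-step rewards. The reward model is trained by minimizing this
        cross-entropy over a dataset of human comparisons."""
        s1, s2 = sum(r1), sum(r2)
        p1 = math.exp(s1) / (math.exp(s1) + math.exp(s2))
        return -math.log(p1 if human_prefers_first else 1.0 - p1)

    # A clip whose predicted rewards are higher should give low loss
    # when the human indeed preferred it:
    loss = pref_loss([1.0, 2.0], [0.5, 0.1], human_prefers_first=True)
    ```

    Demonstrations enter separately (e.g. as an imitation term for the policy); the preference loss above only shapes the learned reward model.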

  105. ⁠, Lvmin Zhang, Chengze Li, Tien-Tsin Wong, Yi Ji, Chunping Liu (2018-05-04):

    Github repo with screenshot samples of style2paints, a neural network for colorizing anime-style illustrations (trained on Danbooru2018), with or without user color hints, which was available as an online service in 2018. style2paints produces high-quality colorizations often on par with human colorizations. Many examples can be seen on Twitter or the repo:

    Example style2paints colorization of a character from Prison School

    style2paints has been described in more detail in ⁠, Zhang et al 2018:

    Sketch or line art colorization is a research field with substantial market demand. Different from photo colorization, which strongly relies on texture information, sketch colorization is more challenging as sketches may not have texture. Even worse, color, texture, and gradient have to be generated from the abstract sketch lines. In this paper, we propose a semi-automatic learning-based framework to colorize sketches with proper color, texture as well as gradient. Our framework consists of two stages. In the first drafting stage, our model guesses color regions and splashes a rich variety of colors over the sketch to obtain a color draft. In the second refinement stage, it detects the unnatural colors and artifacts, and tries to fix and refine the result. Comparing to existing approaches, this two-stage design effectively divides the complex colorization task into two simpler and goal-clearer subtasks. This eases the learning and raises the quality of colorization. Our model resolves the artifacts such as water-color blurring, color distortion, and dull textures.

    We build an interactive software based on our model for evaluation. Users can iteratively edit and refine the colorization. We evaluate our learning model and the interactive system through an extensive user study. Statistics show that our method outperforms the state-of-the-art techniques and industrial applications in several aspects, including the visual quality, the ability of user control, user experience, and other metrics.

  106. 2018-zhang.pdf: ⁠, Lvmin Zhang, Chengze Li, Tien-Tsin Wong, Yi Ji, Chunping Liu (2018; anime):

    Sketch or line art colorization is a research field with substantial market demand. Different from photo colorization, which strongly relies on texture information, sketch colorization is more challenging as sketches may not have texture. Even worse, color, texture, and gradient have to be generated from the abstract sketch lines. In this paper, we propose a semi-automatic learning-based framework to colorize sketches with proper color, texture as well as gradient. Our framework consists of two stages. In the first drafting stage, our model guesses color regions and splashes a rich variety of colors over the sketch to obtain a color draft. In the second refinement stage, it detects the unnatural colors and artifacts, and tries to fix and refine the result. Comparing to existing approaches, this two-stage design effectively divides the complex colorization task into two simpler and goal-clearer subtasks. This eases the learning and raises the quality of colorization. Our model resolves the artifacts such as water-color blurring, color distortion, and dull textures. We build an interactive software based on our model for evaluation. Users can iteratively edit and refine the colorization. We evaluate our learning model and the interactive system through an extensive user study. Statistics show that our method outperforms the state-of-the-art techniques and industrial applications in several aspects, including the visual quality, the ability of user control, user experience, and other metrics.




  110. ⁠, Reddit ():

    Subreddit devoted to discussion of reinforcement learning research and projects, particularly deep reinforcement learning (more specialized than /r/MachineLearning). Major themes include deep learning, model-based vs model-free RL, robotics, multi-agent RL, exploration, meta-reinforcement learning, imitation learning, the psychology of RL in biological organisms such as humans, and safety/AI risk. Moderate activity level (as of 2019-09-11): ~10k subscribers, ~2k pageviews daily.









  119. 2018-silver.pdf#deepmind: ⁠, David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2018-12-07; reinforcement-learning):

    The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.


  121. ⁠, David Ha, Jürgen Schmidhuber (2018-03-27):

    We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.

    An interactive version of this paper is available at https:/​​​​/​​​​​​​​


  123. ⁠, Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, Wojciech Jaśkowski (2016-05-06):

    The recent advances in deep neural networks have led to effective vision-based reinforcement learning methods that have been employed to obtain human-level controllers in Atari 2600 games from pixel data. Atari 2600 games, however, do not resemble real-world tasks since they involve non-realistic 2D environments and the third-person perspective. Here, we propose a novel test-bed platform for reinforcement learning research from raw visual information which employs the first-person perspective in a semi-realistic 3D world. The software, called ViZDoom, is based on the classical first-person shooter video game, Doom. It allows developing bots that play the game using the screen buffer. ViZDoom is lightweight, fast, and highly customizable via a convenient mechanism of user scenarios. In the experimental part, we test the environment by trying to learn bots for two scenarios: a basic move-and-shoot task and a more complex maze-navigation problem. Using convolutional deep neural networks with Q-learning and experience replay, for both scenarios, we were able to train competent bots, which exhibit human-like behaviors. The results confirm the utility of ViZDoom as an AI research platform and imply that visual reinforcement learning in 3D realistic first-person perspective environments is feasible.

  124. 2018-segler.pdf: “Planning chemical syntheses with deep neural networks and symbolic AI”⁠, Marwin H. S. Segler, Mike Preuss, Mark P. Waller

  125. ⁠, OpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, Jonas Schneider, Szymon Sidor, Josh Tobin, Peter Welinder, Lilian Weng, Wojciech Zaremba (2018-08-01):

    We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed in a simulated environment in which we randomize many of the physical properties of the system like friction coefficients and an object’s appearance. Our policies transfer to the physical robot despite being trained entirely in simulation. Our method does not rely on any human demonstrations, but many behaviors found in human manipulation emerge naturally, including finger gaiting, multi-finger coordination, and the controlled use of gravity. Our results were obtained using the same distributed RL system that was used to train OpenAI Five. We also include a video of our results: https://jwSbzNHGflM






  131. ⁠, Daniel J. Mankowitz, Augustin Žídek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver, Tom Schaul (2018-02-22):

    Some real-world domains are best characterized as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent’s competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents have become more powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual learning capabilities we consider a challenging 3D domain with an implicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.

  132. ⁠, Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu (2018-02-05):

    In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach.
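    The V-trace correction can be computed as a backward recursion over a trajectory. Below is a plain-Python sketch of the published recursion (clipped importance ratios, with clipping constants ρ̄ = c̄ = 1 by default), not DeepMind’s implementation; in the on-policy case (all ratios 1) it reduces to ordinary n-step returns.

    ```python
    def vtrace_targets(rewards, values, bootstrap, rhos, gamma=0.99,
                       rho_bar=1.0, c_bar=1.0):
        """Backward-recursive V-trace targets:
            delta_t = clip(rho_t) * (r_t + gamma * V_{t+1} - V_t)
            v_t     = V_t + delta_t + gamma * clip(c_t) * (v_{t+1} - V_{t+1})
        where rho_t are importance ratios pi/mu between the learner (target)
        policy and the actors' behaviour policy, clipped at rho_bar / c_bar."""
        T = len(rewards)
        vs = [0.0] * T
        next_v = bootstrap       # v_{T} := bootstrap value V(x_T)
        next_value = bootstrap   # V_{T}
        for t in reversed(range(T)):
            rho = min(rho_bar, rhos[t])
            c = min(c_bar, rhos[t])
            delta = rho * (rewards[t] + gamma * next_value - values[t])
            vs[t] = values[t] + delta + gamma * c * (next_v - next_value)
            next_v, next_value = vs[t], values[t]
        return vs
    ```

    The clipping of ρ bounds the fixed point (which value function the targets converge to), while clipping c bounds the variance of the correction; decoupling the two is the paper’s key design choice.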


  134. ⁠, Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, Hado van Hasselt (2018-09-12):

    The reinforcement learning community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at a time, each new task requiring the training of a brand-new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequential-decision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent’s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state-of-the-art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy—with a single set of weights—that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state-of-the-art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab.
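    The adaptive rescaling here is PopArt normalization: each task keeps running statistics of its value targets, and updates are computed against the normalized targets so that no task dominates by sheer reward magnitude. A minimal sketch of the running-statistics half (the real method also rescales the network’s output layer so predictions are preserved when the statistics move; class and parameter names below are illustrative):

    ```python
    class PopArtStats:
        """Per-task running mean/variance for normalizing value targets,
        so every task contributes at a similar scale to the agent's
        updates. Exponential moving averages with step size `beta`."""
        def __init__(self, beta=0.1):
            self.beta, self.mean, self.sq = beta, 0.0, 1.0
        def update(self, target):
            self.mean = (1 - self.beta) * self.mean + self.beta * target
            self.sq = (1 - self.beta) * self.sq + self.beta * target * target
        def normalize(self, target):
            var = max(self.sq - self.mean ** 2, 1e-8)  # floor for stability
            return (target - self.mean) / var ** 0.5
    ```

    With one such tracker per task, a game whose scores are in the hundreds of thousands and one whose scores are single digits both feed the learner targets of roughly unit scale.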


  136. ⁠, Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Beslon, David M. Bryson, Patryk Chrabaszcz, Nick Cheney, Antoine Cully, Stephane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni Le Goff, Laura M. Grabowski, Babak Hodjat, Frank Hutter, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Nguyen, Charles Ofria, Marc Parizeau, David Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Shulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, Jason Yosinski (2018-03-09):

    Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution’s creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.

  137. ⁠, Gwern Branwen (2018-10-20):

    Description of emerging machine learning paradigm identified by commentator starspawn0: discussions of building artificial brains typically presume either learning a brain architecture & parameters from scratch (AGI) or laboriously ‘scanning’ and reverse-engineering a biological brain in its entirety to get a functioning artificial brain.

    However, the rise of deep learning’s transfer learning & meta-learning shows a wide variety of intermediate approaches, where ‘side data’ from natural brains can be used as scaffolding to guide & constrain standard deep learning methods. Such approaches do not seek to ‘upload’ or ‘emulate’ any specific brain, they merely seek to imitate an average brain. A simple example would be training a CNN to imitate saliency data: what a human looks at while playing a video game or driving is the important part of a scene, and the CNN doesn’t have to learn importance from scratch. A more complex example would be using EEG as a ‘description’ of music in addition to the music itself. fMRI data could be used to guide a NN to have a similar modularized architecture with similar activation patterns given a particular stimulus as a human brain, which presumably is related to human abilities to zero-shot/few-shot learn and generalize.

    While a highly marginal approach at the moment compared to standard approaches like scaling up models & datasets, it is largely untapped, and progress in VR with eyetracking capabilities (intended for foveated rendering but usable for many other purposes), brain imaging methods & BCIs has been more rapid than generally appreciated—in part thanks to breakthroughs using DL itself, suggesting the potential for a positive feedback loop where a BCI breakthrough enables a better NN for BCIs and so on.

  138. ⁠, Richard Klein, Michelangelo Vianello, Fred Hasselman, Byron Adams, Reginald B. Adams, Jr., Sinan Alper, Mark Aveyard, Jordan Axt, Mayowa Babalola, Štěpán Bahník, Mihaly Berkics, Michael Bernstein, Daniel Berry, Olga Bialobrzeska, Konrad Bocian, Mark Brandt, Robert Busching, Huajian Cai, Fanny Cambier, Katarzyna Cantarero, Cheryl Carmichael, Zeynep Cemalcilar, Jesse Chandler, Jen-Ho Chang, Armand Chatard, Eva Chen, Winnee Cheong, David, Sharon Coen, Jennifer Coleman, Brian Collisson, Morgan Conway, Katherine Corker, Paul Curran, Fiery Cushman, Ilker Dalgar, William Davis, Maaike de Bruijn, Marieke de Vries, Thierry Devos, Canay Doğulu, Nerisa Dozo, Kristin Dukes, Yarrow Dunham, Kevin Durrheim, Matthew Easterbrook, Charles Ebersole, John Edlund, Alexander English, Anja Eller, Carolyn Finck, Miguel-Ángel Freyre, Mike Friedman, Natalia Frankowska, Elisa Galliani, Tanuka Ghoshal, Steffen Giessner, Tripat Gill, Timo Gnambs, Angel Gomez, Roberto Gonzalez, Jesse Graham, Jon Grahe, Ivan Grahek, Eva Green, Kakul Hai, Matthew Haigh, Elizabeth Haines, Michael Hall, Marie Heffernan, Joshua Hicks, Petr Houdek, Marije van der Hulst, Jeffrey Huntsinger, Ho Huynh, Hans IJzerman, Yoel Inbar, Åse Innes-Ker, William Jimenez-Leal, Melissa-Sue John, Jennifer Joy-Gaba, Roza Kamiloglu, Andreas Kappes, Heather Kappes, Serdar Karabati, Haruna Karick, Victor Keller, Anna Kende, Nicolas Kervyn, Goran Knezevic, Carrie Kovacs, Lacy Krueger, German Kurapov, Jaime Kurtz, Daniel Lakens, Ljiljana Lazarevic, Carmel Levitan, Neil Lewis, Samuel Lins, Esther Maassen, Angela Maitner, Winfrida Malingumu, Robyn Mallett, Satia Marotta, Jason McIntyre, Janko Međedović, Taciano Milfont, Wendy Morris, Andriy Myachykov, Sean Murphy, Koen Neijenhuijs, Anthony Nelson, Felix Neto, Austin Nichols, Susan O'Donnell, Masanori Oikawa, Gabor Orosz, Malgorzata Osowiecka, Grant Packard, Rolando Pérez, Boban Petrovic, Ronaldo Pilati, Brad Pinter, Lysandra Podesta, Monique Pollmann, Anna Dalla Rosa, Abraham Rutchick, Patricio Saavedra, Airi Sacco, Alexander Saeri, Erika Salomon, Kathleen Schmidt, Felix Schönbrodt, Maciek Sekerdej, David Sirlopu, Jeanine Skorinko, Michael Smith, Vanessa Smith-Castro, Agata Sobkow, Walter Sowden, Philipp Spachtholz, Troy Steiner, Jeroen Stouten, Chris Street, Oskar Sundfelt, Ewa Szumowska, Andrew Tang, Norbert Tanzer, Morgan Tear, Jordan Theriault, Manuela Thomae, David Torres-Fernández, Jakub Traczyk, Joshua Tybur, Adrienn Ujhelyi, Marcel van Assen, Anna van 't Veer, Alejandro Vásquez-Echeverría, Leigh Ann Vaughn, Alexandra Vázquez, Diego Vega, Catherine Verniers, Mark Verschoor, Ingrid Voermans, Marek Vranka, Cheryl Welch, Aaron Wichman, Lisa Williams, Julie Woodzicka, Marta Wronska, Liane Young, John Zelenski, Brian Nosek (2019-11-19):

    We conducted replications of 28 classic and contemporary published findings with protocols that were peer reviewed in advance to examine variation in effect magnitudes across sample and setting. Each protocol was administered to approximately half of 125 samples and 15,305 total participants from 36 countries and territories. Using the conventional statistical-significance criterion (p < 0.05), fifteen (54%) of the replications provided statistically-significant evidence in the same direction as the original finding. With a strict statistical-significance criterion (p < 0.0001), fourteen (50%) provided such evidence, reflecting the extremely high-powered design. Seven (25%) of the replications had effect sizes larger than the original finding and 21 (75%) had effect sizes smaller than the original finding. The median comparable Cohen’s d effect size for original findings was 0.60 and for replications was 0.15. Sixteen replications (57%) had small effect sizes (< 0.20) and 9 (32%) were in the opposite direction from the original finding. Across settings, 11 (39%) showed statistically-significant heterogeneity using the Q statistic and most of those were among the findings eliciting the largest overall effect sizes; only one effect that was near zero in the aggregate showed heterogeneity. Only one effect showed a Tau > 0.20, indicating moderate heterogeneity. Nine others had a Tau near or slightly above 0.10, indicating slight heterogeneity. In moderation tests, very little heterogeneity was attributable to task order, administration in lab versus online, and exploratory WEIRD versus less-WEIRD culture comparisons. Cumulatively, variability in observed effect sizes was more attributable to the effect being studied than the sample or setting in which it was studied.
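    The heterogeneity quantities reported above (the Q statistic and Tau) are standard meta-analytic statistics. A minimal sketch of Cochran’s Q and the DerSimonian-Laird τ² estimator, as generic formulas rather than the authors’ exact analysis pipeline:

    ```python
    def q_and_tau2(effects, variances):
        """Cochran's Q heterogeneity statistic and the DerSimonian-Laird
        tau^2 (between-site variance) estimate for k site-level effect
        sizes with known sampling variances. Tau is sqrt(tau^2)."""
        w = [1.0 / v for v in variances]                # inverse-variance weights
        ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
        Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
        k = len(effects)
        denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / denom)          # truncate at zero
        return Q, tau2
    ```

    Under homogeneity Q is distributed roughly χ² with k−1 degrees of freedom, so Q much larger than k−1 (and hence τ² > 0) signals that sites genuinely differ beyond sampling error.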



  141. ⁠, Houshmand Shirani-Mehr, David Rothschild, Sharad Goel, Andrew Gelman (2018-07-25):

    It is well known among researchers and practitioners that election polls suffer from a variety of sampling and nonsampling errors, often collectively referred to as total survey error. Reported margins of error typically only capture sampling variability, and in particular, generally ignore nonsampling errors in defining the target population (e.g., errors due to uncertainty in who will vote).

    Here, we empirically analyze 4221 polls for 608 state-level presidential, senatorial, and gubernatorial elections between 1998 and 2014, all of which were conducted during the final three weeks of the campaigns. Comparing to the actual election outcomes, we find that average survey error as measured by root mean square error is approximately 3.5 percentage points, about twice as large as that implied by most reported margins of error. We decompose survey error into election-level bias and variance terms. We find that average absolute election-level bias is about 2 percentage points, indicating that polls for a given election often share a common component of error. This shared error may stem from the fact that polling organizations often face similar difficulties in reaching various subgroups of the population, and that they rely on similar screening rules when estimating who will vote. We also find that average election-level variance is higher than implied by simple random sampling, in part because polling organizations often use complex sampling designs and adjustment procedures.

    We conclude by discussing how these results help explain polling failures in the 2016 U.S. presidential election, and offer recommendations to improve polling practice.

    [Keywords: margin of error, non-sampling error, polling bias, total survey error]
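    The decomposition the abstract describes (total survey error split into a shared election-level bias and a within-election variance) can be sketched in a few lines; the poll-error numbers below are hypothetical, and errors are signed poll-minus-outcome margins in percentage points:

```python
from statistics import mean

def decompose_poll_error(polls_by_election):
    """Given {election: [signed poll errors]}, return total error (RMSE),
    average absolute election-level bias, and average within-election variance."""
    all_errors = [e for errs in polls_by_election.values() for e in errs]
    rmse = mean(e**2 for e in all_errors) ** 0.5
    biases = [mean(errs) for errs in polls_by_election.values()]   # per-election bias
    avg_abs_bias = mean(abs(b) for b in biases)
    avg_var = mean(mean((e - mean(errs))**2 for e in errs)
                   for errs in polls_by_election.values())
    return rmse, avg_abs_bias, avg_var

# Hypothetical poll errors (percentage points) for three elections:
data = {"A": [3.0, 2.0, 4.0], "B": [-1.0, -2.0, 0.0], "C": [2.5, 1.5, 3.5]}
rmse, bias, var = decompose_poll_error(data)
```

    The point of the decomposition is visible even in toy data: when polls within an election share a common error, the average absolute bias stays large relative to the within-election spread, so the RMSE exceeds what the reported margins of error (which reflect only sampling variance) would suggest.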



  144. 2005-jussim.pdf: ⁠, Lee Jussim, Kent D. Harber (2005; statistics  /​ ​​ ​bias):

    This article shows that 35 years of empirical research on teacher expectations justifies the following conclusions: (a) Self-fulfilling prophecies in the classroom do occur, but these effects are typically small, they do not accumulate greatly across perceivers or over time, and they may be more likely to dissipate than accumulate; (b) powerful self-fulfilling prophecies may selectively occur among students from stigmatized social groups; (c) whether self-fulfilling prophecies affect intelligence, and whether they in general do more harm than good, remains unclear, and (d) teacher expectations may predict student outcomes more because these expectations are accurate than because they are self-fulfilling. Implications for future research, the role of self-fulfilling prophecies in social problems, and perspectives emphasizing the power of erroneous beliefs to create social reality are discussed.

    [Jussim discusses the famous ‘Pygmalion effect’. It demonstrates the Replication crisis: an initial extraordinary finding indicating that teachers could raise student IQs by dozens of points gradually shrunk over repeated replications to essentially zero net long-term effect. The original finding was driven by statistical malpractice bordering on research fraud: some students had “pretest IQ scores near zero, and others had post-test IQ scores over 200”! Rosenthal further maintained the Pygmalion effect by statistical trickery, such as his ‘fail-safe N’, which attempted to show that hundreds of studies would have to have not been published in order for the Pygmalion effect to be true—except this assumes zero publication bias in those unpublished studies and begs the question.]




  148. ⁠, Lars G. Hemkens, Despina G. Contopoulos-Ioannidis, John P. A. Ioannidis (2016-02-08):

    Objective: To assess differences in estimated treatment effects for mortality between observational studies with routinely collected health data (RCD; that are published before trials are available) and subsequent evidence from randomized controlled trials on the same clinical question.

    Design: Meta-epidemiological survey.

    Data sources: searched up to November 2014.

    Methods: Eligible RCD studies were published up to 2010 that used to address bias and reported comparative effects of interventions for mortality. The analysis included only RCD studies conducted before any trial was published on the same topic. The direction of treatment effects, confidence intervals, and effect sizes (odds ratios) were compared between RCD studies and ⁠. The relative odds ratio (that is, the summary odds ratio of trial(s) divided by the RCD study estimate) and the summary relative odds ratio were calculated across all pairs of RCD studies and trials. A summary relative odds ratio greater than one indicates that RCD studies gave more favorable mortality results.

    Results: The evaluation included 16 eligible RCD studies, and 36 subsequent published randomized controlled trials investigating the same clinical questions (with 17,275 patients and 835 deaths). Trials were published a median of three years after the corresponding RCD study. For five (31%) of the 16 clinical questions, the direction of treatment effects differed between RCD studies and trials. Confidence intervals in nine (56%) RCD studies did not include the RCT effect estimate. Overall, RCD studies showed mortality estimates that were statistically-significantly more favorable, by 31%, than those of subsequent trials (summary relative odds ratio 1.31 (95% confidence interval 1.03 to 1.65; I² = 0%)).

    Conclusions: Studies of routinely collected health data could give different answers from subsequent randomized controlled trials on the same clinical questions, and may substantially overestimate treatment effects. Caution is needed to prevent misguided clinical decision making.
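    The relative odds ratio defined in the Methods (trial estimate divided by the RCD-study estimate, with ROR > 1 meaning the RCD study was more favorable) is simple arithmetic; a sketch with hypothetical (trial OR, RCD OR) pairs, summarized here by an unweighted geometric mean as a simplification of the paper’s inverse-variance meta-analysis of log-RORs:

```python
import math

def relative_odds_ratio(trial_or, rcd_or):
    """ROR = trial OR / RCD-study OR; ROR > 1 means the RCD study
    gave more favorable mortality results than the trial did."""
    return trial_or / rcd_or

def summary_ror(pairs):
    """Geometric mean of per-question RORs (unweighted simplification
    of meta-analyzing log-RORs across clinical questions)."""
    logs = [math.log(relative_odds_ratio(t, r)) for t, r in pairs]
    return math.exp(sum(logs) / len(logs))

# Hypothetical (trial OR, RCD-study OR) pairs for mortality:
pairs = [(0.90, 0.70), (1.05, 0.80), (0.95, 0.75)]
ror = summary_ror(pairs)  # > 1: the RCD studies were more favorable on average
```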

  149. Correlation

  150. 2001-collins.pdf: ⁠, H. M. Collins (2001-01-01; philosophy):

    Russian measurements of the quality factor (Q) of sapphire, made 20 years ago, have only just been repeated in the West. Shortfalls in have been partly responsible for this delay. The idea of ‘tacit knowledge’, first put forward by the physical chemist Michael Polanyi, has been studied and analysed over the last two decades. A new classification of tacit knowledge (broadly construed) is offered here and applied to the case of sapphire. The importance of personal contact between scientists is brought out and the sources of trust described. It is suggested that the reproduction of scientific findings could be aided by a small addition to the information contained in experimental reports. The analysis is done in the context of fieldwork conducted in the USA and observations of experimental work at Glasgow University.


  152. ⁠, James C. Davies (1962-02):

    Revolutions are most likely to occur when a prolonged period of objective economic and social development is followed by a short period of sharp reversal. People then subjectively fear that ground gained with great effort will be quite lost; their mood becomes revolutionary. The evidence from Dorr’s Rebellion, the Russian Revolution, and the Egyptian Revolution supports this notion; tentatively, so do data on other civil disturbances. Various statistics—as on rural uprisings, industrial strikes, unemployment, and cost of living—may serve as crude indexes of popular mood. More useful, though less easy to obtain, are direct questions in cross-sectional interviews. The goal of predicting revolution is conceived but not yet born or mature.


  154. 2018-wood.pdf: “The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence”⁠, Thomas Wood, Ethan Porter




  158. ⁠, Dan Wang (2017-06-25; sociology  /​ ​​ ​preference-falsification):

    …competition is fiercer the more that competitors resemble each other. When we’re not so different from people around us, it’s irresistible to become obsessed about beating others.

    It’s hard to construct a more perfect incubator for mimetic contagion than the American college campus. Most 18-year-olds are not super differentiated from each other. By construction, whatever distinctions any does have are usually earned through brutal, zero-sum competitions. These tournament-type distinctions include: SAT scores at or near perfection; being a top player on a sports team; gaining master status from chess matches; playing first instrument in state orchestra; earning high rankings in Math Olympiad; and so on, culminating in gaining admission to a particular college. Once people enter college, they get socialized into group environments that usually continue to operate in zero-sum competitive dynamics. These include orchestras and sport teams; fraternities and sororities; and many types of clubs. The biggest source of mimetic pressures are the classes. Everyone starts out by taking the same intro classes; those seeking distinction throw themselves into the hardest classes, or seek tutelage from star professors, and try to earn the highest grades.

    There’s very little external intermediation, instead all competitive dynamics are internally mediated…Once internal rivalries are sorted out, people coalesce into groups united against something foreign. These tendencies help explain why events on campus so often make the news—it seems like every other week we see some campus activity being labeled a “witch hunt”, “riot”, or something else that involves violence, implied or explicit. I don’t care to link to these events, they’re so easy to find. It’s interesting to see that academics are increasingly becoming the target of student activities. The terror devours its children first, who have tolerated or fanned mimetic contagion for so long.

    …I’ll end with a quote from I See Satan Fall Like Lightning: “Mimetic desire enables us to escape from the animal realm. It is responsible for the best and the worst in us, for what lowers us below the animal level as well as what elevates us above it. Our unending discords are the ransom of our freedom.”


  160. ⁠, Scott Alexander (2018-10-30):

    [Contemporary SF short story; inspired by NN text generation, social media dynamics, clickbait, and debates like ⁠; imagines AI natural language processing systems run amok after being trained to maximize user reactions to create clickbait, leading to learning ‘scissor statements’, claims which are maximally controversial and divide the population 50-50 between those who find the statement obviously correct and moral, and those who find it equally obviously false and immoral, leading to intractable polarizing debates, contempt, and warfare.]

  161. 2018-norpoth.pdf




  165. 2011-hunt-ch10-whatuseisintelligence.pdf: “_Human Intelligence_: chapter 10, What Use Is Intelligence?”⁠, Earl Hunt



  168. 2018-warne-2.pdf: ⁠, Russell T. Warne (2018; iq):

    is widely seen as the “father of gifted education”, yet his work is controversial. Terman’s “mixed legacy” includes the pioneering work in the creation of intelligence tests, the first large-scale ⁠, and the earliest discussions of gifted identification, curriculum, ability grouping, acceleration, and more. However, since the 1950s, Terman has been viewed as a sloppy thinker at best and a racist, sexist, and/​​​​or classist at worst.

    This article explores the most common criticisms of Terman’s legacy: an overemphasis on IQ, support for the meritocracy, and emphasizing genetic explanations for the origin of intelligence differences over environmental ones. Each of these criticisms is justified to some extent by the historical record, and each is relevant today. Frequently overlooked, however, is Terman’s willingness to form a strong opinion based on weak data.

    The article concludes with a discussion of the important lessons that Terman’s work has for modern educators and psychologists, including his contributions to psychometrics and gifted education, his willingness to modify his opinions in the face of new evidence, and his inventiveness and inclination to experiment. Terman’s legacy is complex, but one that provides insights that can enrich modern researchers and practitioners in these areas.

  169. 2018-gensowski.pdf: ⁠, Miriam Gensowski (2018-04; iq):

    [Published version of ]

    • This paper estimates the effects of personality traits and IQ on lifetime earnings, both as a sum and individually by age.
    • The payoffs to personality traits display a concave life-cycle pattern, with the largest effects between the ages of 40 and 60.
    • The largest effects on earnings are found for Conscientiousness, Extraversion, and Agreeableness (negative).
    • An interaction of traits with education reveals that personality matters most for highly educated men.
    • The overall effect of Conscientiousness operates partly through education, which also has substantial returns.

    This paper estimates the effects of personality traits and IQ on lifetime earnings of the men and women of the Terman study, a high-IQ U.S. sample. Age-by-age earnings profiles allow a study of when personality traits affect earnings most, and for whom the effects are strongest. I document a concave life-cycle pattern in the payoffs to personality traits, with the largest effects between the ages of 40 and 60. An interaction of traits with education reveals that personality matters most for highly educated men. The largest effects are found for Conscientiousness, Extraversion, and Agreeableness (negative), where Conscientiousness operates partly through education, which also has substantial returns.

    [Keywords: Personality traits, Socio-emotional skills, Cognitive skills, Returns to education, Lifetime earnings, Big Five, Human capital, ]


  171. 1987-rossi


  173. 2018-williams-2.pdf

  174. ⁠, Gideon Nave, Wi Hoon Jung, Richard Karlsson Linnér, Joseph W. Kable, Philipp D. Koellinger (2018-11-30):

    A positive relationship between brain volume and intelligence has been suspected since the 19th century, and empirical studies seem to support this hypothesis. However, this claim is controversial because of concerns about publication bias and the lack of systematic control for critical confounding factors (eg., height, population structure). We conducted a preregistered study of the relationship between brain volume and cognitive performance using a new sample of adults from the United Kingdom that is about 70% larger than the combined samples of all previous investigations on this subject (N = 13,608). Our analyses systematically controlled for sex, age, height, socioeconomic status, and population structure, and our analyses were free of publication bias. We found a robust association between total brain volume and fluid intelligence (r = 0.19), which is consistent with previous findings in the literature after controlling for measurement quality of intelligence in our data. We also found a positive relationship between total brain volume and educational attainment (r = 0.12). These relationships were mainly driven by gray matter (rather than white matter or fluid volume), and effect sizes were similar for both sexes and across age groups.

    [Keywords: intelligence, educational attainment, brain volume, preregistered analysis, UK Biobank, open data, open materials, preregistered]







  181. ⁠, Jason Huang, David H. Reiley, Nickolai M. Riabov (2018-04-21; advertising):

    A randomized experiment with almost 35 million Pandora listeners enables us to measure the sensitivity of consumers to advertising, an important topic of study in the era of ad-supported digital content provision. The experiment randomized listeners into 9 treatment groups, each of which received a different level of audio advertising interrupting their music listening, with the highest treatment group receiving more than twice as many ads as the lowest treatment group. By keeping consistent treatment assignment for 21 months, we are able to measure long-run demand effects, with three times as much ad-load sensitivity as we would have obtained if we had run a month-long experiment. We estimate a demand curve that is strikingly linear, with the number of hours listened decreasing linearly in the number of ads per hour (also known as the price of ad-supported listening). We also show the negative impact on the number of days listened and on the probability of listening at all in the final month. Using an experimental design that separately varies the number of commercial interruptions per hour and the number of ads per commercial interruption, we find that neither makes much difference to listeners beyond their impact on the total number of ads per hour. Lastly, we find that increased ad load causes a substantial increase in the number of paid ad-free subscriptions to Pandora, particularly among older listeners.
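    The “strikingly linear” demand curve the abstract describes (hours listened decreasing linearly in ads per hour) can be recovered from the treatment-group means by ordinary least squares. A sketch with hypothetical group means, constructed to be exactly linear for illustration:

```python
def ols_line(x, y):
    """Least-squares intercept a and slope b for y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx)**2 for xi in x))
    return my - b * mx, b

# Hypothetical means for 9 treatment groups: ads/hour vs. hours listened/month.
ads = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]
hours = [22.0 - 0.75 * a for a in ads]   # exactly linear, for illustration
a, b = ols_line(ads, hours)              # recovers intercept 22 and slope -0.75
```

    With a linear demand curve, the slope b is directly interpretable as the hours of listening lost per additional ad per hour, i.e., the price sensitivity of ad-supported listening.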


  183. Ads

  184. ⁠, Ben Miroglio, David Zeber, Jofish Kaye, Rebecca Weiss (2018-04-23):

    Web users are increasingly turning to ad blockers to avoid ads, which are often perceived as annoying or an invasion of privacy. While there has been substantial research into the factors driving ad blocker adoption and the detrimental effect to ad publishers on the Web, the resulting effects of ad blocker usage on Web users’ browsing experience is not well understood. To approach this problem, we conduct a retrospective natural field experiment using Firefox browser usage data, with the goal of estimating the effect of adblocking on user engagement with the Web. We focus on new users who installed an ad blocker after a baseline observation period, to avoid comparing different populations. Their subsequent browser activity is compared against that of a control group, whose members do not use ad blockers, over a corresponding observation period, controlling for prior baseline usage. In order to estimate causal effects, we employ propensity score matching on a number of other features recorded during the baseline period. In the group that installed an ad blocker, we find substantial increases in both active time spent in the browser (+28% over control) and the number of pages viewed (+15% over control), while seeing no change in the number of searches. Additionally, by reapplying the same methodology to other popular Firefox browser extensions, we show that these effects are specific to ad blockers. We conclude that ad blocking has a positive impact on user engagement with the Web, suggesting that any costs of using ad blockers to users’ browsing experience are largely drowned out by the utility that they offer.
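    The propensity-score-matching step the abstract relies on can be sketched once scores are in hand: match each ad-blocker installer to the control user with the nearest score, then average the outcome differences. A minimal greedy 1:1 version with hypothetical (score, outcome) pairs; the real study’s matching procedure and features are more elaborate:

```python
def match_and_compare(treated, controls):
    """Greedy 1:1 nearest-neighbor matching on a precomputed propensity
    score, without replacement; returns the average treated-minus-matched-
    control outcome difference. Each unit is (propensity_score, outcome)."""
    available = list(controls)
    diffs = []
    for score, outcome in treated:
        # index of the closest remaining control by propensity score
        j = min(range(len(available)), key=lambda i: abs(available[i][0] - score))
        diffs.append(outcome - available.pop(j)[1])
    return sum(diffs) / len(diffs)

# Hypothetical (propensity score, active hours) for installers vs. controls:
treated  = [(0.8, 30.0), (0.6, 26.0), (0.4, 24.0)]
controls = [(0.81, 23.0), (0.59, 21.0), (0.42, 19.0), (0.1, 10.0)]
att = match_and_compare(treated, controls)  # average effect on the treated
```

    Matching on the score rather than on raw features is what lets the comparison group mimic the installers’ baseline usage: the unmatched low-score control (0.1) is simply never used.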

  185. 2010-kelly-whattechnologywants-ch11-lessonsofamishhackers.pdf: “What Technology Wants: Chapter 11, Lessons of Amish Hackers”⁠, Kevin Kelly


  187. 2018-07-25-johnbackus-howdecentralizationevolves.html

  188. 2018-teblunthuis.pdf: “Revisiting ‘The Rise and Decline’ in a Population of Peer Production Projects”⁠, Nathan TeBlunthuis, Aaron Shaw, Benjamin Mako Hill




  192. 1996-dempsey.pdf: ⁠, Paul Stephen Dempsey (1996; economics):

    During the last fifteen years, Congress has deregulated, wholly or partly, a number of infrastructure industries, including most modes of transport—airlines, motor carriers, railroads, and intercity bus companies. Deregulation emerged in a comprehensive ideological movement which abhorred governmental pricing and entry controls as manifestly causing waste and inefficiency, while denying consumers the range of price and service options they desire.

    In a nation dedicated to free market capitalism, governmental restraints on the freedom to enter into a business or allowing the competitive market to set the price seem fundamentally at odds with immutable notions of economic liberty. While in the late 19th and early 20th Century, market failure gave birth to economic regulation of infrastructure industries, today, we live in an era where the conventional wisdom is that government can do little good and the market can do little wrong.

    Despite this passionate and powerful contemporary political/​​​​economic ideological movement, one mode of transportation has come full circle from regulation, through deregulation, and back again to regulation—the taxi industry. American cities began regulating local taxi firms in the 1920s. Beginning a half century later, more than 20 cities, most located in the Sunbelt, totally or partially deregulated their taxi companies. However, the experience with taxicab deregulation was so profoundly unsatisfactory that virtually every city that embraced it has since jettisoned it in favor of resumed economic regulation.

    Today, nearly all large and medium-sized communities regulate their local taxicab companies. Typically, regulation of taxicabs involves: (1) limited entry (restricting the number of firms, and/​​​​or the ratio of taxis to population), usually under a standard of “public convenience and necessity”, [PC&N] (2) just, reasonable, and non-discriminatory fares, (3) service standards (eg., vehicular and driver safety standards, as well as a common carrier obligation of non-discriminatory service, 24-hour radio dispatch capability, and a minimum level of response time), and (4) financial responsibility standards (eg., insurance).

    This article explores the legal, historical, economic, and philosophical bases of regulation and deregulation in the taxi industry, as well as the empirical results of taxi deregulation. The paradoxical metamorphosis from regulation, to deregulation, and back again, to regulation is an interesting case study of the collision of economic theory and ideology, with empirical reality. We begin with a look at the historical origins of taxi regulation.

    [Keywords: Urban Transportation, Taxi Industry, Common Carrier, Mass Transit, Taxi Industry Regulation, Taxi Deregulation, Reregulation, Taxicab Ordinance, PUC, Open Entry, Regulated Entry, Operating Efficiency, Destructive Competition, Regulated Competition, Cross Subsidy, Cream Skimming, PC&N, Pollution, Cabs]


  194. 2000-west.pdf



  197. 2018-biasi.pdf

  198. Copyright-deadweight

  199. 1995-tengs.pdf




  203. 1983-wolfe-thecitadeloftheautarch-thejustman


  205. ⁠, Franco Moretti (2000-03-01):

    The history of the world is the slaughterhouse of the world, reads a famous Hegelian aphorism; and of literature. The majority of books disappear forever—and “majority” actually misses the point: if we set today’s canon of nineteenth-century British novels at two hundred titles (which is a very high figure), they would still be only about 0.5 percent of all published novels.

    [Literature paper by ⁠. Moretti considers the vast production of literature of which only the slightest fraction is still read and studied as part of a ‘canon’. Canons are formed by market forces, leading to preservation and reading in a feedback loop—far from academics selecting the best based on esthetic grounds. Moretti offers a case study of Arthur Conan Doyle’s Sherlock Holmes by comparing it with all the now-forgotten competing detective fiction, to study the evolution of the idea of a ‘clue’; his competitors reveal its difficult evolution and how everyone groped towards it. Surprisingly, clues were neither obvious nor popular, nor did they show any clear evolution towards success. This raises puzzling questions about how to create and interpret ‘literary history’.]

  206. 2005-moretti-graphsmapstrees-3-trees.pdf: ⁠, Franco Moretti (2005; culture):

    After the quantitative diagrams of the first chapter, and the spatial ones of the second, evolutionary trees constitute morphological diagrams, where history is systematically correlated with form. And indeed, in contrast to literary studies—where theories of form are usually blind to history, and historical work blind to form—for evolutionary thought morphology and history are truly the two dimensions of the same tree: where the vertical axis charts, from the bottom up, the regular passage of time (every interval, writes Darwin, ‘one thousand generations’), while the horizontal one follows the formal diversification (‘the little fans of diverging dotted lines’) that will eventually lead to ‘well-marked varieties’, or to entirely new species.

    The horizontal axis follows formal diversification . . . But Darwin’s words are stronger: he speaks of ‘this rather perplexing subject’—elsewhere, ‘perplexing & unintelligible’—whereby forms don’t just ‘change’, but change by always diverging from each other (remember, we are in the section on ‘Divergence of Character’). Whether as a result of historical accidents, then, or under the action of a specific ‘principle’, the reality of divergence pervades the history of life, defining its morphospace—its space-of-forms: an important concept, in the pages that follow—as an intrinsically expanding one.

    From a single common origin, to an immense variety of solutions: it is this incessant growing-apart of life forms that the branches of a morphological tree capture with such intuitive force. ‘A tree can be viewed as a simplified description of a matrix of distances’, write Cavalli-Sforza, Menozzi and Piazza in the methodological prelude to their History and Geography of Human Genes; and figure 29, with its mirror-like alignment of genetic groups and linguistic families drifting away from each other (in a ‘correspondence [that] is remarkably high but not perfect’, as they note with aristocratic aplomb), makes clear what they mean: a tree is a way of sketching how far a certain language has moved from another one, or from their common point of origin.

    And if language evolves by diverging, why not literature too?











  217. Hyperbolic-Time-Chamber

  218. 2009-fortugno.pdf: “Losing Your Grip: Futility and Dramatic Necessity in _Shadow of the Colossus_”⁠, Nick Fortugno













  231. Books#the-vaccinators-jannetta-2007


  233. ⁠, Alex Roland, Philip Shiman (2002):

    Between 1983 and 1993, the Defense Advanced Research Projects Agency (DARPA) spent an extra $1 billion (in 1993 dollars) on computer research to achieve machine intelligence. The Strategic Computing Initiative (SCI) was conceived at the outset as an integrated plan to promote computer chip design and manufacturing, computer architecture, and artificial intelligence software. These technologies seemed ripe in the early 1980s. If only DARPA could connect them, it might achieve what Pamela McCorduck called “machines who think.” What distinguishes Strategic Computing (SC) from other stories of modern, large-scale technological development is that the program self-consciously set about advancing an entire research front. Instead of focusing on one problem after another, or of funding a whole field in hopes that all would prosper, SC treated intelligent machines as a single problem composed of interrelated subsystems. The strategy was to develop each of the subsystems cooperatively and map out the mechanisms by which they would connect. While most research programs entail tactics or strategy, SC boasted grand strategy, a master plan for an entire campaign.

  234. ARPA

  235. 1984-tidman-theoperationsevaluationgroup.pdf

  236. 1978-brower-fujiwarateikas100poemsequence.pdf

  237. Movies#kedi

  238. Movies#a-quiet-place

  239. Movies#conan-the-barbarian

  240. 08#ant-man-and-the-wasp

  241. Anime#kurozuka