newsletter/2017/13 (Link Bibliography)

“newsletter/2017/13” links:

  1. https://gwern.substack.com

  2. newsletter

  3. 01

  4. 02

  5. 03

  6. 04

  7. 05

  8. 06

  9. 07

  10. 08

  11. 09

  12. 10

  13. 11

  14. 12

  15. 13

  16. 13

  17. Coin-flip

  18. Story-Of-Your-Life

  19. Ads

  20. Tanks

  21. Order-statistics

  22. Traffic#july-2017---january-2018

  23. 2017-silver.pdf#deepmind: “Mastering the Game of Go without Human Knowledge”, David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis (2017-10-19; reinforcement-learning):

    A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
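
    The self-play loop is simple enough to sketch in Python. Below is a minimal, heavily-stubbed outline of the cycle the abstract describes; the game state, MCTS, and network training are placeholders, not the published system:

        import random

        def mcts_policy(net, state):
            # Stub: the real system runs many network-guided MCTS simulations
            # and returns visit-count-based move probabilities.
            moves = state.legal_moves()
            return {m: 1.0 / len(moves) for m in moves}   # uniform placeholder

        def self_play_game(net, state):
            # Play one game against itself, recording (state, search policy) pairs.
            history = []
            while not state.is_terminal():
                pi = mcts_policy(net, state)
                history.append((state, pi))
                move = random.choices(list(pi), weights=list(pi.values()))[0]
                state = state.play(move)
            z = state.winner()   # game outcome: +1 / -1 / 0
            return [(s, pi, z) for (s, pi) in history]

        def train(net, examples):
            # Stub: minimize (z - v)^2 - pi . log p (value MSE + policy cross-entropy).
            ...

        # Each iteration, the freshly-trained network generates the next round of
        # self-play data -- the "own teacher" loop (new_game() is a placeholder):
        # for _ in range(iterations):
        #     data = [ex for _ in range(games) for ex in self_play_game(net, new_game())]
        #     train(net, data)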

  24. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”, David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017-12-05):

    The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

  25. “Wasserstein GAN”, Martin Arjovsky, Soumith Chintala, Léon Bottou (2017-01-26):

    We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to other distances between distributions.
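
    For concreteness, here is a minimal PyTorch sketch of one WGAN training step following the paper’s recipe: no log in the loss, several critic updates per generator update, and weight clipping as a crude Lipschitz constraint. G, D, the optimizers, and the flat noise input are assumed placeholders:

        import torch

        def wgan_step(G, D, real_batch, opt_G, opt_D, z_dim=100, n_critic=5, clip=0.01):
            for _ in range(n_critic):                      # train critic more often than G
                z = torch.randn(real_batch.size(0), z_dim)
                fake = G(z).detach()
                # maximize the Wasserstein-distance estimate E[D(real)] - E[D(fake)]
                loss_D = -(D(real_batch).mean() - D(fake).mean())
                opt_D.zero_grad(); loss_D.backward(); opt_D.step()
                for p in D.parameters():                   # crude Lipschitz constraint
                    p.data.clamp_(-clip, clip)
            z = torch.randn(real_batch.size(0), z_dim)
            loss_G = -D(G(z)).mean()                       # generator ascends critic score
            opt_G.zero_grad(); loss_G.backward(); opt_G.step()
            return loss_D.item(), loss_G.item()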

  26. https://junyanz.github.io/CycleGAN/

  27. “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen (2017-10-27):

    We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
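
    The key mechanism is the fade-in: while new higher-resolution layers are being introduced, their output is linearly blended against a plain upsampling of the previous stage, so training never sees an abrupt architecture change. A toy NumPy illustration (shapes and the nearest-neighbor upsampler are illustrative assumptions):

        import numpy as np

        def nearest_upsample(x):                    # (H, W, C) -> (2H, 2W, C)
            return x.repeat(2, axis=0).repeat(2, axis=1)

        def fade_in(old_stage_out, new_stage_out, alpha):
            # alpha ramps 0 -> 1 over training; at 1 the new layers fully
            # replace the upsampled skip connection.
            return (1.0 - alpha) * nearest_upsample(old_stage_out) + alpha * new_stage_out

        low  = np.random.rand(16, 16, 3)            # output of the already-trained stage
        high = np.random.rand(32, 32, 3)            # output of the newly added 32x32 layers
        for alpha in (0.0, 0.5, 1.0):
            print(alpha, fade_in(low, high, alpha).shape)   # always (32, 32, 3)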

  28. “Generative Adversarial Imitation Learning”, Jonathan Ho, Stefano Ermon (2016-06-10):

    Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert’s cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.

  29. https://old.reddit.com/r/reinforcementlearning/comments/7noder/meta_rrl_subreddit_traffic_substantial_increase/

  30. https://old.reddit.com/r/reinforcementlearning/search?q=flair%3AMetaRL&restrict_sr=on&sort=top&t=year

  31. https://old.reddit.com/r/reinforcementlearning/search?q=flair%3ARobot&sort=top&restrict_sr=on&t=year

  32. https://old.reddit.com/r/reinforcementlearning/search?q=flair%3AI&restrict_sr=on&t=year

  33. https://old.reddit.com/r/reinforcementlearning/search?q=flair%3AM+flair%3ARobot&restrict_sr=on&sort=top&t=year

  34. 2018-plomin.pdf: “The new genetics of intelligence”, Robert Plomin, Sophie von Stumm (2018; iq):

    Intelligence—the ability to learn, reason and solve problems—is at the forefront of behavioural genetic research. Intelligence is highly heritable and predicts important educational, occupational and health outcomes better than any other trait. Recent genome-wide association studies have successfully identified inherited genome sequence differences that account for 20% of the 50% heritability of intelligence. These findings open new avenues for research into the causes and consequences of intelligence using genome-wide polygenic scores that aggregate the effects of thousands of genetic variants.
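
    A genome-wide polygenic score is mechanically simple: a weighted sum of allele counts, with the weights taken from GWAS effect-size estimates. A toy NumPy version on simulated genotypes (all numbers invented):

        import numpy as np

        rng = np.random.default_rng(0)
        n_people, n_snps = 5, 1000
        genotypes = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 effect-allele counts
        betas = rng.normal(0, 0.01, size=n_snps)                  # per-SNP GWAS effect sizes

        polygenic_scores = genotypes @ betas                      # one score per person
        print(polygenic_scores)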

  35. “A combined analysis of genetically correlated traits identifies 107 loci associated with intelligence”, W. D. Hill, G. Davies, A. M. McIntosh, C. R. Gale, I. J. Deary (2017-07-07):

    Intelligence, or general cognitive function, is phenotypically and genetically correlated with many traits, including many physical and mental health variables. Both education and household income are strongly genetically correlated with intelligence, at rg = 0.73 and rg = 0.70 respectively. This allowed us to utilize a novel approach, Multi-Trait Analysis of Genome-wide association studies (MTAG; Turley et al 2017), to combine two large genome-wide association studies (GWASs) of education and household income to increase power in the largest GWAS on intelligence so far (Sniekers et al 2017). This study had four goals: firstly, to facilitate the discovery of new genetic loci associated with intelligence; secondly, to add to our understanding of the biology of intelligence differences; thirdly, to examine whether combining genetically correlated traits in this way produces results consistent with the primary phenotype of intelligence; and, finally, to test how well this new meta-analytic data sample on intelligence predicts phenotypic intelligence in an independent sample. We apply MTAG to three large GWAS: Sniekers et al 2017 on intelligence, Okbay et al 2016 on educational attainment, and Hill et al 2016 on household income. By combining these three samples our functional sample size increased from 78,308 participants to 147,194. We found 107 independent loci associated with intelligence, implicating 233 genes, using both SNP-based and gene-based GWAS. We find evidence that neurogenesis may explain some of the biological differences in intelligence as well as genes expressed in the synapse and those involved in the regulation of the nervous system. We show that the results of our combined analysis demonstrate the same pattern of genetic correlations as a single measure of intelligence, providing support for the meta-analysis of these genetically-related phenotypes. We find that our MTAG meta-analysis of intelligence shows similar genetic correlations to 26 other phenotypes when compared with a GWAS consisting solely of cognitive tests. Finally, using an independent sample of 6,844 individuals we were able to predict 7% of intelligence using SNP data alone.

  36. “A combined analysis of genetically correlated traits identifies 187 loci and a role for neurogenesis and myelination in intelligence”, William D. Hill, Robert E. Marioni, O. Maghzian, Stuart J. Ritchie, Sarah P. Hagenaars, A. M. McIntosh, C. R. Gale, G. Davies, Ian J. Deary (2018-01-11):

    Intelligence, or general cognitive function, is phenotypically and genetically correlated with many traits, including a wide range of physical, and mental health variables. Education is strongly genetically correlated with intelligence (rg = 0.70). We used these findings as foundations for our use of a novel approach—multi-trait analysis of genome-wide association studies (MTAG; Turley et al. 2017)—to combine two large genome-wide association studies (GWASs) of education and intelligence, increasing statistical power and resulting in the largest GWAS of intelligence yet reported. Our study had four goals: first, to facilitate the discovery of new genetic loci associated with intelligence; second, to add to our understanding of the biology of intelligence differences; third, to examine whether combining genetically correlated traits in this way produces results consistent with the primary phenotype of intelligence; and, finally, to test how well this new meta-analytic data sample on intelligence predicts phenotypic intelligence in an independent sample. By combining datasets using MTAG, our functional sample size increased from 199,242 participants to 248,482. We found 187 independent loci associated with intelligence, implicating 538 genes, using both SNP-based and gene-based GWAS. We found evidence that neurogenesis and myelination—as well as genes expressed in the synapse, and those involved in the regulation of the nervous system—may explain some of the biological differences in intelligence. The results of our combined analysis demonstrated the same pattern of genetic correlations as those from previous GWASs of intelligence, providing support for the meta-analysis of these genetically-related phenotypes.
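
    MTAG’s actual update uses the estimated genetic and sampling covariances across traits; as a far simpler stand-in that conveys the basic idea of pooling summary statistics for power, here is plain fixed-effect inverse-variance meta-analysis of one SNP’s effect across two studies (all numbers invented):

        import math

        def ivw_meta(betas, ses):
            # Combine per-study effect estimates, weighting each by 1/SE^2.
            weights = [1.0 / se**2 for se in ses]
            beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
            se = math.sqrt(1.0 / sum(weights))
            return beta, se, beta / se      # pooled effect, its SE, and z-score

        print(ivw_meta(betas=[0.021, 0.018], ses=[0.006, 0.004]))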

  37. “Accurate Genomic Prediction of Human Height”, Louis Lello, Steven G. Avery, Laurent Tellier, Ana I. Vazquez, Gustavo de los Campos, Stephen D. H. Hsu (2017-09-19):

    We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, ~40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate ~0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction. The variance captured for height is comparable to the estimated SNP heritability from GCTA (GREML) analysis, and seems to be close to its asymptotic value (i.e., as sample size goes to infinity), suggesting that we have captured most of the heritability for the SNPs used. Thus, our results resolve the common SNP portion of the “missing heritability” problem—i.e., the gap between prediction R-squared and SNP heritability. The ~20k activated SNPs in our height predictor reveal the genetic architecture of human height, at least for common SNPs. Our primary dataset is the UK Biobank cohort, comprised of almost 500k individual genotypes with multiple phenotypes. We also use other datasets and SNPs found in earlier GWAS for out-of-sample validation of our results.

  38. “Accurate Genomic Prediction of Human Height”, Louis Lello, Steven G. Avery, Laurent Tellier, Ana I. Vazquez, Gustavo de los Campos, Stephen D. H. Hsu (2017-10-07):

    We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, ~40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate ~0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction. The variance captured for height is comparable to the estimated SNP heritability from GCTA (GREML) analysis, and seems to be close to its asymptotic value (i.e., as sample size goes to infinity), suggesting that we have captured most of the heritability for the SNPs used. Thus, our results resolve the common SNP portion of the “missing heritability” problem—i.e., the gap between prediction R-squared and SNP heritability. The ~20k activated SNPs in our height predictor reveal the genetic architecture of human height, at least for common SNPs. Our primary dataset is the UK Biobank cohort, comprised of almost 500k individual genotypes with multiple phenotypes. We also use other datasets and SNPs found in earlier GWAS for out-of-sample validation of our results.
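
    The predictors here are L1-penalized linear models fit to hundreds of thousands of SNPs; a toy version of the same idea on simulated genotypes, using scikit-learn’s Lasso (assumed installed), where only a handful of SNPs are truly causal and the penalty should zero out most of the rest:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n, p, causal = 2000, 500, 20
        X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotypes
        true_beta = np.zeros(p)
        true_beta[:causal] = rng.normal(0, 0.5, causal)
        y = X @ true_beta + rng.normal(0, 1.0, n)           # additive trait + noise

        model = Lasso(alpha=0.05).fit(X[:1500], y[:1500])   # train on first 1500 people
        r = np.corrcoef(model.predict(X[1500:]), y[1500:])[0, 1]
        print(f"held-out correlation ~{r:.2f}; nonzero SNPs: {np.sum(model.coef_ != 0)}")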

  39. “Genomic analysis of family data reveals additional genetic effects on intelligence and personality”, W. David Hill, Ruben C. Arslan, Charley Xia, Michelle Luciano, Carmen Amador, Pau Navarro, Caroline Hayward, Reka Nagy, David J. Porteous, Andrew M. McIntosh, Ian J. Deary, Chris S. Haley, Lars Penke (2017-06-05):

    Pedigree-based analyses of intelligence have reported that genetic differences account for 50–80% of the phenotypic variation. For personality traits these effects are smaller, with 34–48% of the variance being explained by genetic differences. However, molecular genetic studies using unrelated individuals typically report a heritability estimate of around 30% for intelligence and between 0% and 15% for personality variables. Pedigree-based estimates and molecular genetic estimates may differ because current genotyping platforms are poor at tagging causal variants, variants with low minor allele frequency, copy number variants, and structural variants. Using ~20,000 individuals in the Generation Scotland family cohort genotyped for ~700,000 single nucleotide polymorphisms (SNPs), we exploit the high levels of linkage disequilibrium (LD) found in members of the same family to quantify the total effect of genetic variants that are not tagged in GWASs of unrelated individuals. In our models, genetic variants in low LD with genotyped SNPs explain over half of the genetic variance in intelligence, education, and neuroticism. By capturing these additional genetic effects our models closely approximate the heritability estimates from twin studies for intelligence and education, but not for neuroticism and extraversion. We then replicated our finding using imputed molecular genetic data from unrelated individuals to show that ~50% of differences in intelligence, and ~40% of the differences in education, can be explained by genetic effects when a larger number of rare SNPs are included. From an evolutionary genetic perspective, a substantial contribution of rare genetic variants to individual differences in intelligence and education is consistent with mutation-selection balance.

  40. 2017-sniekers.pdf: “Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence”, Suzanne Sniekers, Sven Stringer, Kyoko Watanabe, Philip R. Jansen, Jonathan R. I. Coleman, Eva Krapohl, Erdogan Taskesen, Anke R. Hammerschlag, Aysu Okbay, Delilah Zabaneh, Najaf Amin, Gerome Breen, David Cesarini, Christopher F. Chabris, William G. Iacono, M. Arfan Ikram, Magnus Johannesson, Philipp Koellinger, James J. Lee, Patrik K. E. Magnusson, Matt McGue, Mike B. Miller, William E. R. Ollier, Antony Payton, Neil Pendleton, Robert Plomin, Cornelius A. Rietveld, Henning Tiemeier, Cornelia M. van Duijn, Danielle Posthuma (2017-05-22; iq):

    Intelligence is associated with important economic and health-related life outcomes. Despite intelligence having substantial heritability (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL p < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA p < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive p = 3.5 × 10−6). Despite the well-known difference in twin-based heritability for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression p = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.

  41. 2018-turley.pdf: “Multi-trait analysis of genome-wide association summary statistics using MTAG”, Patrick Turley, Raymond K. Walters, Omeed Maghzian, Aysu Okbay, James J. Lee, Mark Alan Fontana, Tuan Anh Nguyen-Viet, Robbee Wedow, Meghan Zacher, Nicholas A. Furlotte, 23andMe Research Team, Social Science Genetic Association Consortium, Patrik Magnusson, Sven Oskarsson, Magnus Johannesson, Peter M. Visscher, David Laibson, David Cesarini, Benjamin M. Neale, Daniel J. Benjamin (2017-10-23; genetics/correlation):

    We introduce multi-trait analysis of GWAS (MTAG), a method for joint analysis of summary statistics from genome-wide association studies (GWAS) of different traits, possibly from overlapping samples. We apply MTAG to summary statistics for depressive symptoms (Neff = 354,862), neuroticism (n = 168,105), and subjective well-being (n = 388,538). As compared to the 32, 9, and 13 genome-wide loci identified in the single-trait GWAS (most of which are themselves novel), MTAG increases the number of associated loci to 64, 37, and 49, respectively. Moreover, association statistics from MTAG yield more informative bioinformatics analyses and increase the variance explained by polygenic scores by approximately 25%, matching theoretical expectations.

  42. “Genome-wide genetic data on ~500,000 UK Biobank participants”, Clare Bycroft, Colin Freeman, Desislava Petkova, Gavin Band, Lloyd T. Elliott, Kevin Sharp, Allan Motyer, Damjan Vukcevic, Olivier Delaneau, Jared O’Connell, Adrian Cortes, Samantha Welsh, Gil McVean, Stephen Leslie, Peter Donnelly, Jonathan Marchini (2017-07-20):

    The UK Biobank project is a large prospective cohort study of ~500,000 individuals from across the United Kingdom, aged between 40 and 69 at recruitment. A rich variety of phenotypic and health-related information is available on each participant, making the resource unprecedented in its size and scope. Here we describe the genome-wide genotype data (~805,000 markers) collected on all individuals in the cohort and its quality control procedures. Genotype data on this scale offers novel opportunities for assessing quality issues, although the wide range of ancestries of the individuals in the cohort also creates particular challenges. We also conducted a set of analyses that reveal properties of the genetic data—such as population structure and relatedness—that can be important for downstream analyses. In addition, we phased and imputed genotypes into the dataset, using computationally efficient methods combined with the Haplotype Reference Consortium (HRC) and UK10K haplotype resource. This increases the number of testable variants by over 100× to ~96 million variants. We also imputed classical allelic variation at 11 human leukocyte antigen (HLA) genes, and as a quality control check of this imputation, we replicate signals of known associations between HLA alleles and many common diseases. We describe tools that allow efficient genome-wide association studies (GWAS) of multiple traits and fast phenome-wide association studies (PheWAS), which work together with a new compressed file format that has been used to distribute the dataset. As a further check of the genotyped and imputed datasets, we performed a test-case genome-wide association scan on a well-studied human trait, standing height.

  43. “An atlas of genetic associations in UK Biobank”, Oriol Canela-Xandri, Konrad Rawlik, Albert Tenesa (2017-08-16):

    Genome-wide association studies have revealed many loci contributing to the variation of complex traits, yet the majority of loci that contribute to the heritability of complex traits remain elusive. Large study populations with sufficient statistical power are required to detect the small effect sizes of the yet unidentified genetic variants. However, the analysis of huge cohorts, like UK Biobank, is complicated by incidental structure present when collecting such large cohorts. For instance, UK Biobank comprises 107,162 third degree or closer related participants. Traditionally, GWAS have removed related individuals because they comprised an insignificant proportion of the overall sample size; however, removing related individuals in UK Biobank would entail a substantial loss of power. Furthermore, modelling such structure using linear mixed models is computationally expensive, which requires a computational infrastructure that may not be accessible to all researchers. Here we present an atlas of genetic associations for 118 non-binary and 599 binary traits of 408,455 related and unrelated UK Biobank participants of White-British descent. Results are compiled in a publicly accessible database that allows querying genome-wide association summary results for 623,944 genotyped and HapMap2 imputed SNPs, as well as downloading whole GWAS summary statistics for over 30 million imputed SNPs from the Haplotype Reference Consortium panel. Our atlas of associations (GeneAtlas, http://geneatlas.roslin.ed.ac.uk) will help researchers to query UK Biobank results in an easy way without the need to incur high computational costs.

  44. http://geneatlas.roslin.ed.ac.uk/

  45. “Phenome-wide heritability analysis of the UK Biobank”, Tian Ge, Chia-Yen Chen, Benjamin M. Neale, Mert R. Sabuncu, Jordan W. Smoller (2016-08-18):

    Heritability estimation provides important information about the relative contribution of genetic and environmental factors to phenotypic variation, and provides an upper bound for the utility of genetic risk prediction models. Recent technological and statistical advances have enabled the estimation of additive heritability attributable to common genetic variants (SNP heritability) across a broad phenotypic spectrum. However, assessing the comparative heritability of multiple traits estimated in different cohorts may be misleading due to the population-specific nature of heritability. Here we report the SNP heritability for 551 complex traits derived from the large-scale, population-based UK Biobank, comprising both quantitative phenotypes and disease codes, and examine the moderating effect of three major demographic variables (age, sex and socioeconomic status) on the heritability estimates. Our study represents the first comprehensive phenome-wide heritability analysis in the UK Biobank, and underscores the importance of considering population characteristics in comparing and interpreting heritability.

  46. “Common risk variants identified in autism spectrum disorder”, Jakob Grove, Stephan Ripke, Thomas D. Als, Manuel Mattheisen, Raymond Walters, Hyejung Won, Jonatan Pallesen, Esben Agerbo, Ole A. Andreassen, Richard Anney, Rich Belliveau, Francesco Bettella, Joseph D. Buxbaum, Jonas Bybjerg-Grauholm, Marie Bækved-Hansen, Felecia Cerrato, Kimberly Chambert, Jane H. Christensen, Claire Churchhouse, Karin Dellenvall, Ditte Demontis, Silvia De Rubeis, Bernie Devlin, Srdjan Djurovic, Ashle Dumont, Jacqueline Goldstein, Christine S. Hansen, Mads Engel Hauberg, Mads V. Hollegaard, Sigrun Hope, Daniel P. Howrigan, Hailiang Huang, Christina Hultman, Lambertus Klei, Julian Maller, Joanna Martin, Alicia R. Martin, Jennifer Moran, Mette Nyegaard, Terje Nærland, Duncan S. Palmer, Aarno Palotie, Carsten B. Pedersen, Marianne G. Pedersen, Timothy Poterba, Jesper B. Poulsen, Beate St Pourcain, Per Qvist, Karola Rehnström, Avi Reichenberg, Jennifer Reichert, Elise B. Robinson, Kathryn Roeder, Panos Roussos, Evald Saemundsen, Sven Sandin, F. Kyle Satterstrom, George D. Smith, Hreinn Stefansson, Kari Stefansson, Stacy Steinberg, Christine Stevens, Patrick F. Sullivan, Patrick Turley, G. Bragi Walters, Xinyi Xu, Autism Spectrum Disorders Working Group of The Psychiatric Genomics Consortium, BUPGEN, Major Depressive Disorder Working Group of the Psychiatric Genomics Consortium, 23andMe Research Team, Daniel Geschwind, Merete Nordentoft, David M. Hougaard, Thomas Werge, Ole Mors, Preben Bo Mortensen, Benjamin M. Neale, Mark J. Daly, Anders D. Børglum (2017-11-25):

    Autism spectrum disorder (ASD) is a highly heritable and heterogeneous group of neurodevelopmental phenotypes diagnosed in more than 1% of children. Common genetic variants contribute substantially to ASD susceptibility, but to date no individual variants have been robustly associated with ASD. With a marked sample size increase from a unique Danish population resource, we report a genome-wide association meta-analysis of 18,381 ASD cases and 27,969 controls that identifies five genome-wide statistically-significant loci. Leveraging GWAS results from three phenotypes with significantly overlapping genetic architectures (schizophrenia, major depression, and educational attainment), seven additional loci shared with other traits are identified at equally strict significance levels. Dissecting the polygenic architecture we find both quantitative and qualitative polygenic heterogeneity across ASD subtypes, in contrast to what is typically seen in other complex disorders. These results highlight biological insights, particularly relating to neuronal function and corticogenesis and establish that GWAS performed at scale will be much more productive in the near term in ASD, just as it has been in a broad range of important psychiatric and diverse medical phenotypes.

  47. ⁠, Andrea Ganna, F. Kyle Satterstrom, Seyedeh M. Zekavat, Indraniel Das, Mitja I. Kurki, Claire Churchhouse, Jessica Alfoldi, Alicia R. Martin, Aki S. Havulinna, Andrea Byrnes, Wesley K. Thompson, Philip R. Nielsen, Konrad J. Karczewski, Elmo Saarentaus, Manuel A. Rivas, Namrata Gupta, Olli Pietiläinen, Connor A. Emdin, Francesco Lescai, Jonas Bybjerg-Grauholm, Jason Flannick, on behalf of GoT2D/​​T2D-GENES consortium, Josep Mercader, Miriam Udlerg, on behalf of SIGMA consortium, Helmsley IBD Sequencing Project, FinMetSeq Consortium, iPSYCH-Broad Consortium, Markku Laakso, Veikko Salomaa, Christina Hultman, Samuli Ripatti, Eija Hämäläinen, Jukka S. Moilanen, Jarmo Körkkö, Outi Kuismin, Merete Nordentoft, David M. Hougaard, Ole Mors, Thomas Werge, Preben Bo Mortensen, Daniel MacArthur, Mark J. Daly, Patrick F. Sullivan, Adam E. Locke, Aarno Palotie, Anders D. Børglum, Sekar Kathiresan, Benjamin M. Neale (2017-06-09):

    Protein truncating variants (PTVs) are likely to modify gene function and have been linked to hundreds of Mendelian disorders. However, the impact of PTVs on complex traits has been limited by the available sample size of whole-exome sequencing studies (WES). Here we assemble WES data from 100,304 individuals to quantify the impact of rare PTVs on 13 quantitative traits and 10 diseases. We focus on those PTVs that occur in PTV-intolerant (PI) genes, as these are more likely to be pathogenic. Carriers of at least one PI-PTV were found to have an increased risk of autism, schizophrenia, bipolar disorder, intellectual disability and ADHD (p-value (p) range: 5×10−3−9×10−12). In controls, without these disorders, we found that this burden associated with increased risk of mental, behavioral and neurodevelopmental disorders as captured by electronic health record information. Furthermore, carriers of PI-PTVs tended to be shorter (p = 2×10−5), have fewer years of education (p = 2×10−4) and be younger (p = 2×10−7); the latter observation possibly reflecting reduced survival or study participation. While other gene-sets derived from in vivo experiments did not show any associations with PTV-burden, gene sets implicated in GWAS of cardiovascular-related traits and inflammatory bowel disease showed a significant PTV-burden association with corresponding traits, mainly driven by established genes involved in familial forms of these disorders. We leveraged population health registries from 14,117 individuals to study the phenome-wide impact of PI-PTVs and identified an increase in the number of hospital visits among PI-PTV carriers. In conclusion, we provide the most thorough investigation to date of the impact of rare deleterious coding variants on complex traits, suggesting widespread pleiotropic risk.

    Collaborators

    Helmsley IBD Exome Sequencing Project: Dermot McGovern, Judy H Cho, Ann Pulver, Vincent Plagnol, Tony Segal, Gil Atzmon, Dan Turner, Ben Glaser, Inga Peter, Ramnik Xavier, Harry Sokol, Rinse Weersma, Andre Franke, John Rioux, Tariq Ahmad, Martti Färkkilä, Kimmo Kontula.

    FinMetSeq Consortium: Haley J Abel, Michael Boehnke, Lei Chen, Charleston WK Chiang, Colby C Chiang, Susan K Dutcher, Nelson B Freimer, Robert S Fulton, Liron Ganel, Ira M Hall, Anne U Jackson, Krishna L Kanchi, Chul Joo Kang, Daniel C Koboldt, Hannele Laivuori, David E Larson, Karyn Meltz Steinberg, Joanne Nelson, Thomas J Nicholas, Arto Pietilä, Matti Pirinen, Vasily Ramensky, Debashree Ray, Chiara Sabatti, Laura J Scott, Susan Service, Laurel Stell, Nathan O Stitziel, Heather M Stringham, Ryan Welch, Richard K Wilson, Pranav Yajnik.

    iPSYCH-Broad Consortium: Marianne G Pedersen, Marie Bækvad-Hansen, Christine S Hansen.

  48. “The nature of nurture: effects of parental genotypes”, Augustine Kong, Gudmar Thorleifsson, Michael L. Frigge, Bjarni J. Vilhjálmsson, Alexander I. Young, Thorgeir E. Thorgeirsson, Stefania Benonisdottir, Asmundur Oddsson, Bjarni V. Halldórsson, Gísli Masson, Daniel F. Gudbjartsson, Agnar Helgason, Gyda Bjornsdottir, Unnur Thorsteinsdottir, Kari Stefansson (2017-11-14):

    Sequence variants in the parental genomes that are not transmitted to a child/proband are often ignored in genetic studies. Here we show that non-transmitted alleles can impact a child through their effects on the parents and other relatives, a phenomenon we call genetic nurture. Using results from a meta-analysis of educational attainment, the polygenic score computed for the non-transmitted alleles of 21,637 probands with at least one parent genotyped has an estimated effect on the educational attainment of the proband that is 29.9% (P = 1.6×10−14) of that of the transmitted polygenic score. Genetic nurturing effects of this polygenic score extend to other traits. Paternal and maternal polygenic scores have similar effects on educational attainment, but mothers contribute more than fathers to nutrition/health-related traits.

    One Sentence Summary

    Nurture has a genetic component, i.e. alleles in the parents affect the parents’ phenotypes and through that influence the outcomes of the child.
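
    The design is easy to make concrete: split each parent’s genome into the haplotype transmitted to the child and the one not transmitted, and score both. A polygenic score built from the non-transmitted alleles can only affect the child through the environment the parents create. A toy NumPy construction (phasing and real effect sizes assumed away):

        import numpy as np

        rng = np.random.default_rng(2)
        n_snps = 1000
        betas = rng.normal(0, 0.01, n_snps)                 # invented effect sizes

        # each parent has two haplotypes (rows); one is transmitted to the child
        mother = rng.integers(0, 2, size=(2, n_snps))
        father = rng.integers(0, 2, size=(2, n_snps))
        mt, ft = rng.integers(0, 2), rng.integers(0, 2)     # which haplotype is passed on

        transmitted     = mother[mt] + father[ft]           # the child's own genotype
        non_transmitted = mother[1 - mt] + father[1 - ft]   # never inherited

        print("PGS_transmitted     =", transmitted @ betas)
        print("PGS_non_transmitted =", non_transmitted @ betas)
        # regressing child outcomes on both scores separates direct genetic
        # effects from "genetic nurture" via the parents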

  49. “Relatedness disequilibrium regression estimates heritability without environmental bias”, Alexander I. Young, Michael L. Frigge, Daniel F. Gudbjartsson, Gudmar Thorleifsson, Gyda Bjornsdottir, Patrick Sulem, Gisli Masson, Unnur Thorsteinsdottir, Kari Stefansson, Augustine Kong (2017-11-14):

    Heritability measures the proportion of trait variation that is due to genetic inheritance. Measurement of heritability is of importance to the nature-versus-nurture debate. However, existing estimates of heritability could be biased by environmental effects. Here we introduce relatedness disequilibrium regression (RDR), a novel method for estimating heritability. RDR removes environmental bias by exploiting variation in relatedness due to random segregation. We use a sample of 54,888 Icelanders with both parents genotyped to estimate the heritability of 14 traits, including height (55.4%, S.E. 4.4%) and educational attainment (17.0%, S.E. 9.4%). Our results suggest that some other estimates of heritability could be inflated by environmental effects.
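
    RDR itself regresses pairs’ phenotypic products on several relatedness components (proband-proband, parent-parent, and proband-parent) to strip out environmental bias; as a much simpler cousin, classic Haseman-Elston regression recovers heritability from the slope of phenotype products on genomic relatedness, sketched here on simulated data:

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, h2 = 500, 2000, 0.5
        G = rng.integers(0, 3, size=(n, m)).astype(float)
        Z = (G - G.mean(0)) / G.std(0)              # standardized genotypes
        K = Z @ Z.T / m                             # genomic relatedness matrix

        g = Z @ rng.normal(0, np.sqrt(h2 / m), m)   # genetic values
        y = g + rng.normal(0, np.sqrt(1 - h2), n)   # phenotype, variance ~ 1
        y = (y - y.mean()) / y.std()

        iu = np.triu_indices(n, k=1)                # unique pairs, diagonal excluded
        products = (y[:, None] * y[None, :])[iu]
        slope = np.polyfit(K[iu], products, 1)[0]   # E[y_i y_j] = h2 * K_ij
        print(f"HE-regression h2 estimate: {slope:.2f} (true {h2})")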

  50. 2017-visscher.pdf: “10 Years of GWAS Discovery: Biology, Function, and Translation”, Peter M. Visscher, Naomi R. Wray, Qian Zhang, Pamela Sklar, Mark I. McCarthy, Matthew A. Brown, and Jian Yang (2017-07-06; genetics/heritable):

    Application of the experimental design of genome-wide association studies (GWASs) is now 10 years old (young), and here we review the remarkable range of discoveries it has facilitated in population and complex-trait genetics, the biology of diseases, and translation toward new therapeutics. We predict the likely discoveries in the next 10 years, when GWASs will be based on millions of samples with array data imputed to a large fully sequenced reference panel and on hundreds of thousands of samples with whole-genome sequencing data.

  51. https://www.technologyreview.com/s/608350/first-human-embryos-edited-in-us/

  52. 2017-tang.pdf: “CRISPR/Cas9-mediated gene editing in human zygotes using Cas9 protein”, Lichun Tang

  53. “Correction of a pathogenic gene mutation in human embryos”, Hong Ma, Nuria Marti-Gutierrez, Sang-Wook Park, Jun Wu, Yeonmi Lee, Keiichiro Suzuki, Amy Koski, Dongmei Ji, Tomonari Hayama, Riffat Ahmed, Hayley Darby, Crystal Van Dyken, Ying Li, Eunju Kang, A.-Reum Park, Daesik Kim, Sang-Tae Kim, Jianhui Gong, Ying Gu, Xun Xu, David Battaglia, Sacha A. Krieg, David M. Lee, Diana H. Wu, Don P. Wolf, Stephen B. Heitner, Juan Carlos Izpisua Belmonte, Paula Amato, Jin-Soo Kim, Sanjiv Kaul, Shoukhrat Mitalipov (2017-08-02):

    Genome editing has potential for the targeted correction of germline mutations. Here we describe the correction of the heterozygous MYBPC3 mutation in human preimplantation embryos with precise CRISPR-Cas9-based targeting accuracy and high homology-directed repair efficiency by activating an endogenous, germline-specific DNA repair response. Induced double-strand breaks (DSBs) at the mutant paternal allele were predominantly repaired using the homologous wild-type maternal gene instead of a synthetic DNA template. By modulating the cell cycle stage at which the DSB was induced, we were able to avoid mosaicism in cleaving embryos and achieve a high yield of embryos carrying the wild-type MYBPC3 gene without evidence of off-target mutations. The efficiency, accuracy and safety of the approach presented suggest that it has potential to be used for the correction of heritable mutations in human embryos by complementing preimplantation genetic diagnosis. However, much remains to be considered before clinical applications, including the reproducibility of the technique with other heterozygous mutations.

  54. 2017-chari.pdf: “Beyond editing to writing large genomes”, Raj Chari, George M. Church

  55. https://www.technologyreview.com/2017/08/07/105540/a-new-way-to-reproduce/

  56. 2017-normile.pdf: “Science Magazine”

  57. https://www.nature.com/articles/548272a

  58. 2017-scheufele.pdf: “U.S. attitudes on human genome editing”, Dietram A. Scheufele

  59. https://www.technologyreview.com/s/609204/eugenics-20-were-at-the-dawn-of-choosing-embryos-by-health-height-and-more/

  60. 2017-weigel.pdf: “A 100-Year Review: Methods and impact of genetic selection in dairy cattle - From daughter-dam comparisons to deep learning algorithms”, K. A. Weigel, P. M. VanRaden, H. D. Norman, H. Grosu

  61. 2017-kong.pdf: “Selection against variants in the genome associated with educational attainment”, Augustine Kong, Michael L. Frigge, Gudmar Thorleifsson, Hreinn Stefansson, Alexander I. Young, Florian Zink, Gudrun A. Jonsdottir, Aysu Okbay, Patrick Sulem, Gisli Masson, Daniel F. Gudbjartsson, Agnar Helgason, Gyda Bjornsdottir, Unnur Thorsteinsdottir, Kari Stefansson (2017-01-11; genetics/selection/dysgenics):

    Epidemiological studies suggest that educational attainment is affected by genetic variants. Results from recent genetic studies allow us to construct a score from a person’s genotypes that captures a portion of this genetic component. Using data from Iceland that include a substantial fraction of the population we show that individuals with high scores tend to have fewer children, mainly because they have children later in life. Consequently, the average score has been decreasing over time in the population. The rate of decrease is small per generation but marked on an evolutionary timescale. Another important observation is that the association between the score and fertility remains highly statistically-significant after adjusting for the educational attainment of the individuals.

    Epidemiological and genetic association studies show that genetics play an important role in the attainment of education. Here, we investigate the effect of this genetic component on the reproductive history of 109,120 Icelanders and the consequent impact on the gene pool over time. We show that an educational attainment polygenic score, POLYEDU, constructed from results of a recent study is associated with delayed reproduction (p < 10−100) and fewer children overall. The effect is stronger for women and remains highly statistically-significant after adjusting for educational attainment. Based on 129,808 Icelanders born between 1910 and 1990, we find that the average POLYEDU has been declining at a rate of ~0.010 standard units per decade, which is substantial on an evolutionary timescale. Most importantly, because POLYEDU only captures a fraction of the overall underlying genetic component the latter could be declining at a rate that is two to three times faster.

  62. 2017-kong-iceland-education-dysgenics.png

  63. https://www.theguardian.com/science/2017/jan/16/natural-selection-making-education-genes-rarer-says-icelandic-study

  64. “Mortality selection in a genetic sample and implications for association studies”, Benjamin W. Domingue, Daniel W. Belsky, Amal Harrati, Dalton Conley, David Weir, Jason Boardman (2016-04-21; genetics/heritable, genetics/selection/dysgenics):

    Mortality selection is a general concern in the social and health sciences. Recently, existing health and social science cohorts have begun to collect genomic data. Causes of selection into a genomic dataset can influence results from genomic analyses. Selective non-participation, which is specific to a particular study and its participants, has received attention in the literature. But mortality selection—the very general phenomenon that genomic data collected at a particular age represents selective participation by only the subset of birth cohort members who have survived to the time of data collection—has been largely ignored. Here we test the hypothesis that such mortality selection may significantly alter estimates in polygenic association studies of both health and non-health traits. We demonstrate mortality selection into genome-wide SNP data collection at older ages using the U.S.-based Health and Retirement Study (HRS). We then model the selection process. Finally, we test whether mortality selection alters estimates from genetic association studies. We find evidence for mortality selection. Healthier and more socioeconomically advantaged individuals are more likely to survive to be eligible to participate in the genetic sample of the HRS. Mortality selection leads to modest drift in estimating time-varying genetic effects, a drift that is enhanced when estimates are produced from data that has additional mortality selection. There is no general solution for correcting for mortality selection in a birth cohort prior to entry into a longitudinal study. We illustrate how genetic association studies using HRS data can adjust for mortality selection from study entry to time of genetic data collection by including probability weights that account for mortality selection. Mortality selection should be investigated more broadly in genetically-informed samples from other cohort studies.

  65. 2016-domingue-usa-education-dysgenics.png

  66. 2018-zeng.pdf: “Signatures of negative selection in the genetic architecture of human complex traits”, Jian Zeng, Ronald de Vlaming, Yang Wu, Matthew R. Robinson, Luke R. Lloyd-Jones, Loic Yengo, Chloe X. Yap, Angli Xue, Julia Sidorenko, Allan F. McRae, Joseph E. Powell, Grant W. Montgomery, Andres Metspalu, Tonu Esko, Greg Gibson, Naomi R. Wray, Peter M. Visscher, Jian Yang

  67. “Patterns of shared signatures of recent positive selection across human populations”, Kelsey Elizabeth Johnson, Benjamin F. Voight (2017-02-17):

    Scans for positive selection in human populations have identified hundreds of sites across the genome with evidence of recent adaptation. These signatures often overlap across populations, but the question of how often these overlaps represent a single ancestral event remains unresolved. If a single positive selection event spread across many populations, the same sweeping haplotype should appear in each population and the selective pressure could be common across diverse populations and environments. Identifying such shared selective events would be of fundamental interest, pointing to genomic loci and human traits important in recent history across the globe. Additionally, genomic annotations that recently became available could help attach these signatures to a potential gene and molecular phenotype that may have been selected across multiple populations. We performed a scan for positive selection using the integrated haplotype score on 20 populations, and compared sweeping haplotypes using the haplotype-clustering capability of fastPHASE to create a catalog of shared and unshared overlapping selective sweeps in these populations. Using additional genomic annotations, we connect these multi-population sweep overlaps with potential biological mechanisms at several loci, including potential new sites of adaptive introgression, the glycophorin locus associated with malarial resistance, and the alcohol dehydrogenase cluster associated with alcohol dependency.

  68. “The genomic health of ancient hominins”, Ali J. Berens, Taylor L. Cooper, Joseph Lachance (2017-06-02):

    The genomes of ancient humans, Neandertals, and Denisovans contain many alleles that influence disease risks. Using genotypes at 3180 disease-associated loci, we estimated the disease burden of 147 ancient genomes. After correcting for missing data, genetic risk scores were generated for nine disease categories and the set of all combined diseases. These genetic risk scores were used to examine the effects of different types of subsistence, geography, and sample age on the number of risk alleles in each ancient genome. On a broad scale, hereditary disease risks are similar for ancient hominins and modern-day humans, and the GRS percentiles of ancient individuals span the full range of what is observed in present day individuals. In addition, there is evidence that ancient pastoralists may have had healthier genomes than hunter-gatherers and agriculturalists. We also observed a temporal trend whereby genomes from the recent past are more likely to be healthier than genomes from the deep past. This calls into question the idea that modern lifestyles have caused genetic load to increase over time. Focusing on individual genomes, we find that the overall genomic health of the Altai Neandertal is worse than 97% of present day humans and that Ötzi the Tyrolean Iceman had a genetic predisposition to gastrointestinal and cardiovascular diseases. As demonstrated by this work, ancient genomes afford us new opportunities to diagnose past human health, which has previously been limited by the quality and completeness of remains.

  69. https://phys.org/news/2017-08-cavemen-genetic-checkup.html

  70. “Detecting polygenic adaptation in admixture graphs”, Fernando Racimo, Jeremy J. Berg, Joseph K. Pickrell (2017-06-07):

    An open question in human evolution is the importance of polygenic adaptation: adaptive changes in the mean of a multifactorial trait due to shifts in allele frequencies across many loci. In recent years, several methods have been developed to detect polygenic adaptation using loci identified in genome-wide association studies (GWAS). Though powerful, these methods suffer from limited interpretability: they can detect which sets of populations have evidence for polygenic adaptation, but are unable to reveal where in the history of multiple populations these processes occurred. To address this, we created a method to detect polygenic adaptation in an admixture graph, which is a representation of the historical divergences and admixture events relating different populations through time. We developed a Markov chain Monte Carlo (MCMC) algorithm to infer branch-specific parameters reflecting the strength of selection in each branch of a graph. Additionally, we developed a set of summary statistics that are fast to compute and can indicate which branches are most likely to have experienced polygenic adaptation. We show via simulations that this method—which we call PolyGraph—has good power to detect polygenic adaptation, and applied it to human population genomic data from around the world. We also provide evidence that variants associated with several traits, including height, educational attainment, and self-reported unibrow, have been influenced by polygenic adaptation in different human populations.

  71. “Soft sweeps are the dominant mode of adaptation in the human genome”, Daniel R. Schrider, Andrew D. Kern (2017-04-27):

    The degree to which adaptation in recent human evolution shapes genetic variation remains controversial. This is in part due to the limited evidence in humans for classic “hard selective sweeps,” wherein a novel beneficial mutation rapidly sweeps through a population to fixation. However, positive selection may often proceed via “soft sweeps” acting on mutations already present within a population. Here we examine recent positive selection across six human populations using a powerful machine learning approach that is sensitive to both hard and soft sweeps. We found evidence that soft sweeps are widespread and account for the vast majority of recent human adaptation. Surprisingly, our results also suggest that linked positive selection affects patterns of variation across much of the genome, and may increase the frequencies of deleterious mutations. Our results also reveal insights into the role of sexual selection, cancer risk, and central nervous system development in recent human evolution.

  72. ⁠, Lawrence H. Uricchio, Hugo C. Kitano, Alexander Gusev, Noah A. Zaitlen (2017-08-08):

    Selection alters human genetic variation, but the evolutionary mechanisms shaping complex traits and the extent of selection’s impact on polygenic trait evolution remain largely unknown. Here, we develop a novel polygenic selection inference method (Polygenic Ancestral Selection Test Encompassing Linkage, or PASTEL) relying on GWAS summary data from a single population. We use model-based simulations of complex traits that incorporate human demography, stabilizing selection, and polygenic adaptation to show how shifts in the fitness landscape generate distinct signals in GWAS summary data. Our test retains power for relatively ancient selection events and controls for potential confounding from linkage disequilibrium. We apply PASTEL to nine complex traits, and find evidence for selection acting on five of them (height, ⁠, schizophrenia, Crohn’s disease, and educational attainment). This study provides evidence that selection modulates the relationship between frequency and effect size of trait-altering alleles for a wide range of traits, and provides a flexible framework for future investigations of selection on complex traits using GWAS data.

  73. 2017-gazal.pdf: “Linkage disequilibrium-dependent architecture of human complex traits shows action of negative selection”, Steven Gazal, Hilary K. Finucane, Nicholas A. Furlotte, Po-Ru Loh, Pier Francesco Palamara, Xuanyao Liu, Armin Schoech, Brendan Bulik-Sullivan, Benjamin M. Neale, Alexander Gusev, Alkes L. Price (2017-09-11; genetics/selection):

    Recent work has hinted at the linkage disequilibrium (LD)-dependent architecture of human complex traits, where SNPs with low levels of LD (LLD) have larger per-SNP heritability. Here we analyzed summary statistics from 56 complex traits (average n = 101,401) by extending stratified LD score regression to continuous annotations. We determined that SNPs with low LLD have statistically-significantly larger per-SNP heritability and that roughly half of this effect can be explained by functional annotations negatively correlated with LLD, such as DNase I hypersensitivity sites (DHSs). The remaining signal is largely driven by our finding that more recent common variants tend to have lower LLD and to explain more heritability (p = 2.38 × 10−104); the youngest 20% of common SNPs explain 3.9 times more heritability than the oldest 20%, consistent with the action of negative selection. We also inferred jointly statistically-significant effects of other LD-related annotations and confirmed via forward simulations that they jointly predict deleterious effects.

  74. “Quantification of frequency-dependent genetic architectures and action of negative selection in 25 UK Biobank traits”, Armin P. Schoech, Daniel Jordan, Po-Ru Loh, Steven Gazal, Luke O’Connor, Daniel J. Balick, Pier F. Palamara, Hilary K. Finucane, Shamil R. Sunyaev, Alkes L. Price (2017-09-13):

    Understanding the role of rare variants is important in elucidating the genetic basis of human diseases and complex traits. It is widely believed that negative selection can cause rare variants to have larger per-allele effect sizes than common variants. Here, we develop a method to estimate the minor allele frequency (MAF) dependence of SNP effect sizes. We use a model in which per-allele effect sizes have variance proportional to [p(1−p)]α, where p is the MAF and negative values of α imply larger effect sizes for rare variants. We estimate α by maximizing its profile likelihood in a linear mixed model framework using imputed genotypes, including rare variants (MAF >0.07%). We applied this method to 25 UK Biobank diseases and complex traits (n = 113,851). All traits produced negative α estimates with 20 significantly negative, implying larger rare variant effect sizes. The inferred best-fit distribution of true α values across traits had mean −0.38 (s.e. 0.02) and standard deviation 0.08 (s.e. 0.03), with statistically-significant heterogeneity across traits (p = 0.0014). Despite larger rare variant effect sizes, we show that for most traits analyzed, rare variants (MAF <1%) explain less than 10% of total SNP-heritability. Using evolutionary modeling and forward simulations, we validated the α model of MAF-dependent trait effects and estimated the level of coupling between fitness effects and trait effects. Based on this analysis an average genome-wide negative selection coefficient on the order of 10−4 or stronger is necessary to explain the α values that we inferred.
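
    The α model is easy to simulate: draw per-allele effect sizes with variance proportional to [p(1−p)]^α, so that negative α makes rare variants’ effects larger. A quick NumPy illustration using the paper’s inferred mean α (all other numbers invented):

        import numpy as np

        rng = np.random.default_rng(4)
        alpha = -0.38                                  # the paper's inferred mean alpha
        p = rng.uniform(0.001, 0.5, size=10000)        # minor allele frequencies
        sd = np.sqrt((p * (1 - p)) ** alpha)           # per-allele effect-size SD
        beta = rng.normal(0, sd)

        for lo, hi in [(0.001, 0.01), (0.01, 0.1), (0.1, 0.5)]:
            band = (p >= lo) & (p < hi)
            print(f"MAF {lo}-{hi}: mean |beta| = {np.abs(beta[band]).mean():.2f}")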

  75. “Unsupervised Machine Translation Using Monolingual Corpora Only”, Guillaume Lample, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato (2017-10-31):

    Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.

  76. “CycleGAN, a Master of Steganography”, Casey Chu, Andrey Zhmoginov, Mark Sandler (2017-12-08):

    CycleGAN (Zhu et al 2017) is one recent successful approach to learn a transformation between two image distributions. In a series of experiments, we demonstrate an intriguing property of the model: CycleGAN learns to “hide” information about a source image into the images it generates in a nearly imperceptible, high-frequency signal. This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic. We connect this phenomenon with adversarial attacks by viewing CycleGAN’s training procedure as training a generator of adversarial examples and demonstrate that the cyclic consistency loss causes CycleGAN to be especially vulnerable to adversarial attacks.
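
    The cyclic consistency requirement in question is an L1 reconstruction penalty in both directions. A minimal PyTorch sketch of that term (G: X→Y and F: Y→X are placeholder generator networks; the full CycleGAN objective adds adversarial losses for both):

        import torch
        import torch.nn.functional as F_nn   # renamed to avoid clashing with generator F

        def cycle_consistency_loss(G, F, x, y):
            # L1 reconstruction both ways: F(G(x)) ~ x and G(F(y)) ~ y.
            return (F_nn.l1_loss(F(G(x)), x) +
                    F_nn.l1_loss(G(F(y)), y))

        # full objective: adversarial losses for G and F plus lambda * this term;
        # the paper shows the generator can satisfy it by hiding the source image
        # in a near-imperceptible high-frequency signal rather than preserving it.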

  77. “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen (2017-10-27):

    We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.

  78. https://github.com/tkarras/progressive_growing_of_gans

  79. https://www.youtube.com/watch?v=XOxxPcy5Gr4

  80. “StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks”, Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris Metaxas (2017-10-19):

    Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aiming at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on given text description, yielding low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and discriminators in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images.

  81. https://github.com/martinarjovsky/WassersteinGAN

  82. “Learning human behaviors from motion capture by adversarial imitation”, Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, Nicolas Heess (2017-07-07):

    Rapid progress in deep reinforcement learning has made it increasingly feasible to train controllers for high-dimensional humanoid bodies. However, methods that use pure reinforcement learning with simple reward functions tend to produce non-humanlike and overly stereotyped movement behaviors. In this work, we extend generative adversarial imitation learning to enable training of generic neural network policies to produce humanlike movement patterns from limited demonstrations consisting only of partially observed state features, without access to actions, even when the demonstrations come from a body with different and unknown physical parameters. We leverage this approach to build sub-skill policies from motion capture data and show that they can be reused to solve tasks when controlled by a higher level controller.

  83. “Deep Q-learning from Demonstrations”, Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys (2017-04-12):

    Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
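
    DQfD’s distinctive ingredient is a large-margin supervised term that forces the demonstrator’s action to score above all alternatives by at least a margin; the full loss adds 1-step and n-step TD errors and L2 regularization. A PyTorch sketch of just the margin term (tensor shapes and the margin value are assumptions):

        import torch

        def large_margin_loss(q_values, expert_actions, margin=0.8):
            # q_values: (batch, n_actions); expert_actions: (batch,) long tensor.
            # loss = max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E), where l = margin
            # for a != a_E and 0 for the expert's own action.
            penalties = torch.full_like(q_values, margin)
            penalties.scatter_(1, expert_actions.unsqueeze(1), 0.0)
            augmented_max = (q_values + penalties).max(dim=1).values
            q_expert = q_values.gather(1, expert_actions.unsqueeze(1)).squeeze(1)
            return (augmented_max - q_expert).mean()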

  84. ⁠, Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, Ross Girshick (2017-05-10):

    Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model substantially outperforms strong baselines and generalizes better in a variety of settings.

  85. ⁠, Sam Gross, Marc'Aurelio Ranzato, Arthur Szlam (2017-04-20):

    Training convolutional networks (CNNs) that fit on a single GPU with minibatch stochastic gradient descent has become effective in practice. However, there is still no effective method for training large CNNs that do not fit in the memory of a few cards, or for parallelizing CNN training. In this work we show that a simple hard mixture of experts model can be efficiently trained to good effect on large scale hashtag (multilabel) prediction tasks. Mixture of experts models are not new (Jacobs et. al. 1991, Collobert et. al. 2003), but in the past, researchers have had to devise sophisticated methods to deal with data fragmentation. We show empirically that modern weakly supervised data sets are large enough to support naive partitioning schemes where each data point is assigned to a single expert. Because the experts are independent, training them in parallel is easy, and evaluation is cheap for the size of the model. Furthermore, we show that we can use a single decoding layer for all the experts, allowing a unified feature embedding space. We demonstrate that it is feasible (and in fact relatively painless) to train far larger models than could be practically trained with standard CNN architectures, and that the extra capacity can be well used on current datasets.
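
    A toy sketch of the “hard” assignment idea (assumptions mine; the paper’s actual partitioning scheme differs): each example is routed to exactly one expert, so experts train independently and in parallel, while a single shared decoding layer keeps one unified label-embedding space.

        import numpy as np
        from sklearn.cluster import KMeans

        def hard_partition(features, n_experts):
            """Hard-assign each data point to one expert, here by k-means
            clustering of trunk features (one of several possible schemes)."""
            km = KMeans(n_clusters=n_experts, n_init=10).fit(features)
            return km.labels_  # expert id per example

        # Training sketch: each expert sees only its own shard; `make_expert`
        # and `shared_decoder` are hypothetical model constructors.
        # for e in range(n_experts):
        #     experts[e] = train(make_expert(), shared_decoder, data[labels == e])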

  86. ⁠, Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, Timothy Lillicrap (2017-06-05):

    Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
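
    The module itself is compact: RN(O) = f_φ(Σ_{i,j} g_θ(o_i, o_j)), where g_θ scores each ordered pair of objects and f_φ aggregates the summed pair codes. A minimal PyTorch version (mine; layer sizes assumed):

        import torch
        import torch.nn as nn

        class RelationNetwork(nn.Module):
            def __init__(self, obj_dim, hidden=256, out_dim=10):
                super().__init__()
                self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
                self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, out_dim))

            def forward(self, objects):                       # objects: (B, N, obj_dim)
                B, N, D = objects.shape
                oi = objects.unsqueeze(2).expand(B, N, N, D)  # all ordered pairs (o_i, o_j)
                oj = objects.unsqueeze(1).expand(B, N, N, D)
                pairs = torch.cat([oi, oj], dim=-1).reshape(B, N * N, 2 * D)
                relations = self.g(pairs).sum(dim=1)          # sum over pairs
                return self.f(relations)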

  87. https://deepmind.com/blog/neural-approach-relational-reasoning/

  88. ⁠, Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin (2017-06-12):

    The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring substantially less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
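
    The heart of the architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, which takes only a few lines (sketch mine; the full model adds multi-head projections, positional encodings, and masking):

        import math
        import torch

        def attention(Q, K, V, mask=None):
            # Q: (B, n_q, d_k), K: (B, n_k, d_k), V: (B, n_k, d_v)
            scores = Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1))
            if mask is not None:
                scores = scores.masked_fill(mask == 0, float("-inf"))
            return torch.softmax(scores, dim=-1) @ V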

  89. ⁠, Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, Jakob Uszkoreit (2017-06-16):

    Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all.

  90. http://www.themtank.org/a-year-in-computer-vision

  91. ⁠, Reddit ():

    Subreddit devoted to discussion of reinforcement learning research and projects, particularly deep reinforcement learning (more specialized than /r/MachineLearning). Major themes include deep learning, model-based vs model-free RL, robotics, multi-agent RL, exploration, meta-reinforcement learning, imitation learning, the psychology of RL in biological organisms such as humans, and safety/AI risk. Moderate activity level (as of 2019-09-11): ~10k subscribers, 2k pageviews/daily

  92. https://old.reddit.com/r/DecisionTheory/

  93. https://old.reddit.com/r/reinforcementlearning/comments/778vbk/mastering_the_game_of_go_without_human_knowledge/

  94. ⁠, Thomas Anthony, Zheng Tian, David Barber (2017-05-23):

    Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalize plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.
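
    Schematically, the loop alternates an “expert” (tree search, guided by the current network) that produces improved move targets, and an “apprentice” (the network) trained to imitate them. All helper names below are hypothetical; this is my paraphrase of the algorithm, not the authors’ code.

        def expert_iteration(net, n_iters, n_games):
            for _ in range(n_iters):
                dataset = []
                for _ in range(n_games):
                    for state in self_play_states(net):        # hypothetical helper
                        # Expert: tree search strengthened by the apprentice's policy
                        visit_counts = tree_search(state, prior_policy=net)
                        dataset.append((state, normalize(visit_counts)))
                # Apprentice: generalize the expert's plans by imitation
                net = train_to_imitate(net, dataset)
            return net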

  95. https://davidbarber.github.io/blog/2017/11/07/Learning-From-Scratch-by-Thinking-Fast-and-Slow-with-Deep-Learning-and-Tree-Search/

  96. ⁠, Edward Groshev, Maxwell Goldstein, Aviv Tamar, Siddharth Srivastava, Pieter Abbeel (2017-08-24):

    We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision making policies and powerful heuristic functions with minimal human input. Videos of our results are available at goo.gl/Hpy4e3.

  97. ⁠, David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017-12-05):

    The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

  98. https://old.reddit.com/r/reinforcementlearning/comments/7hvin5/mastering_chess_and_shogi_by_selfplay_with_a/

  99. 2003-lagoudakis.pdf: “Reinforcement Learning as Classification: Leveraging Modern Classifiers”⁠, Michail Lagoudakis, Ronald Parr

  100. ⁠, Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racanière, David Reichert, Théophane Weber, Daan Wierstra, Peter Battaglia (2017-07-19):

    Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the “Imagination-based Planner”, the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a “plan context” which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex “imagination tree” by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.

  101. ⁠, Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra (2017-07-19):

    We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.

  102. https://deepmind.com/blog/agents-imagine-and-plan/

  103. ⁠, Masashi Okada, Luca Rigazio, Takenobu Aoshima (2017-06-29):

    In this paper, we introduce Path Integral Networks (PI-Net), a recurrent network representation of the Path Integral optimal control algorithm. The network includes both system dynamics and cost models, used for optimal control based planning. PI-Net is fully differentiable, learning both dynamics and cost models end-to-end by back-propagation and stochastic gradient descent. Because of this, PI-Net can learn to plan. PI-Net has several advantages: it can generalize to unseen states thanks to planning, it can be applied to continuous control tasks, and it allows for a wide variety of learning schemes, including imitation and reinforcement learning. Preliminary experiment results show that PI-Net, trained by imitation learning, can mimic control demonstrations for two simulated problems: a linear system and a pendulum swing-up problem. We also show that PI-Net is able to learn dynamics and cost models latent in the demonstrations.

  104. ⁠, Junhyuk Oh, Satinder Singh, Honglak Lee (2017-07-11):

    This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation.

  105. ⁠, Nikhil Mishra, Pieter Abbeel, Igor Mordatch (2017-03-12):

    We introduce a method for learning the dynamics of complex nonlinear systems based on deep generative models over temporal segments of states and actions. Unlike dynamics models that operate over individual discrete timesteps, we learn the distribution over future state trajectories conditioned on past state, past action, and planned future action trajectories, as well as a latent prior over action trajectories. Our approach is based on convolutional autoregressive models and variational autoencoders. It makes stable and accurate predictions over long horizons for complex, stochastic systems, effectively expressing uncertainty and modeling the effects of collisions, sensory noise, and action delays. The learned dynamics model and action prior can be used for end-to-end, fully differentiable trajectory optimization and model-based policy optimization, which we use to evaluate the performance and sample-efficiency of our method.

  106. ⁠, Anusha Nagabandi, Gregory Kahn, Ronald S. Fearing, Sergey Levine (2017-08-08):

    Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance. Model-based algorithms, in principle, can provide for much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits to accomplish various complex locomotion tasks. We also propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency, and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks, achieving sample efficiency gains of 3–5× on swimmer, cheetah, hopper, and ant agents. Videos can be found at https://sites.google.com/view/mbmf
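
    The planning half of the recipe is simple enough to sketch (mine, not the authors’ code): fit a neural dynamics model s′ = f(s, a) on logged transitions, then at each step do random-shooting model-predictive control through the model. `f` and `reward_fn` are assumed to be batched callables.

        import numpy as np

        def mpc_action(f, state, reward_fn, horizon=15, n_candidates=1000, act_dim=6):
            """Sample random action sequences, roll them out through the learned
            model f, and return the first action of the best sequence."""
            seqs = np.random.uniform(-1, 1, size=(n_candidates, horizon, act_dim))
            returns = np.zeros(n_candidates)
            states = np.repeat(state[None], n_candidates, axis=0)
            for t in range(horizon):
                returns += reward_fn(states, seqs[:, t])
                states = f(states, seqs[:, t])   # batched next-state prediction
            return seqs[np.argmax(returns), 0]   # execute first action, replan next step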

  107. ⁠, Nir Baram, Oron Anschel, Shie Mannor (2016-12-07):

    Generative adversarial learning is a popular new approach to training generative models which has been proven successful for other related problems as well. The general idea is to maintain an oracle D that discriminates between the expert’s data distribution and that of the generative model G. The generative model is trained to capture the expert’s distribution by maximizing the probability of D misclassifying the data it generates. Overall, the system is differentiable end-to-end and is trained using basic backpropagation.

    This type of learning was successfully applied to the problem of policy imitation in a model-free setup. However, a model-free approach does not allow the system to be differentiable, which requires the use of high-variance gradient estimations. In this paper we introduce the Model-based Adversarial Imitation Learning (MAIL) algorithm, a model-based approach to adversarial imitation learning. We show how to use a forward model to make the system fully differentiable, which enables us to train policies using the (stochastic) gradient of D. Moreover, our approach requires relatively few environment interactions, and fewer hyper-parameters to tune.

    We test our method on the MuJoCo physics simulator and report initial results that surpass the current state-of-the-art.

  108. ⁠, Chelsea Finn, Sergey Levine (2016-10-03):

    A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments, without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation—pushing objects—and can handle novel objects not seen during training.

  109. ⁠, Silvia Chiappa, Sébastien Racaniere, Daan Wierstra, Shakir Mohamed (2017-04-07):

    Models that can simulate how environments change in response to actions can be used by agents to plan and act efficiently. We improve on previous environment simulators from high-dimensional pixel observations by introducing recurrent neural networks that are able to make temporally and spatially coherent predictions for hundreds of time-steps into the future. We present an in-depth analysis of the factors affecting performance, providing the most extensive attempt to advance the understanding of the properties of these models. We address the issue of computational inefficiency with a model that does not need to generate a high-dimensional image at each time-step. We show that our approach can be used to improve exploration and is adaptable to many diverse environments, namely 10 Atari games, a 3D car racing environment, and complex 3D mazes.

  110. ⁠, Jeff Dean (2017):

    Slide deck for Google Brain presentation on Machine Learning and the future of ML development processes. Conclusions: ML hardware is in its infancy. Even faster systems and wider deployment will lead to many more breakthroughs across a wide range of domains. Learning in the core of all of our computer systems will make them better/more adaptive. There are many opportunities for this.

  111. ⁠, Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, Neoklis Polyzotis (2017-12-04):

    Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. The key idea is that a model can learn the sort order or structure of lookup keys and use this signal to effectively predict the position or existence of records. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show that, by using neural nets, we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order-of-magnitude in memory over several real-world data sets. More importantly though, we believe that the idea of replacing core components of a data management system through learned models has far reaching implications for future systems designs and that this work just provides a glimpse of what might be possible.
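
    A toy version of the idea (mine; real learned indexes use staged models rather than a single linear fit): a model of the key distribution’s CDF predicts a record’s position, and lookups only search within the model’s worst-case error bound instead of traversing a B-Tree.

        import numpy as np

        class LearnedIndex:
            def __init__(self, sorted_keys):
                self.keys = np.asarray(sorted_keys, dtype=float)
                n = len(self.keys)
                # Simplest possible "model": linear fit of position vs. key.
                self.a, self.b = np.polyfit(self.keys, np.arange(n), deg=1)
                preds = np.rint(self.a * self.keys + self.b).astype(int)
                self.err = int(np.abs(preds - np.arange(n)).max())  # error bound

            def lookup(self, key):
                guess = int(round(self.a * key + self.b))
                lo = max(0, guess - self.err)
                hi = min(len(self.keys), guess + self.err + 1)
                # Search only within the guaranteed window around the guess.
                return lo + int(np.searchsorted(self.keys[lo:hi], key))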

  112. ⁠, Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei (2017-06-12):

    For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.

  113. https://deepmind.com/blog/learning-through-human-feedback/

  114. ⁠, Dario Amodei, Paul Christiano, Alex Ray (OpenAI) (2017-06-13):

    One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better.

    We present a learning algorithm that uses small amounts of human feedback to solve modern RL environments. Machine learning systems with human feedback have been explored before, but we’ve scaled up the approach to be able to work on much more complicated tasks. Our algorithm needed 900 bits of feedback from a human evaluator to learn to backflip—a seemingly simple task which is simple to judge but challenging to specify.

    The overall training process is a 3-step feedback cycle between the human, the agent’s understanding of the goal, and the RL training.

    [Figure: the preference-learning architecture]

    Our AI agent starts by acting randomly in the environment. Periodically, two video clips of its behavior are given to a human, and the human decides which of the two clips is closest to fulfilling its goal—in this case, a backflip. The AI gradually builds a model of the goal of the task by finding the reward function that best explains the human’s judgments. It then uses RL to learn how to achieve that goal. As its behavior improves, it continues to ask for human feedback on trajectory pairs where it’s most uncertain about which is better, and further refines its understanding of the goal.
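
    The reward-model fit at the center of this loop is a logistic (Bradley–Terry) model over clip pairs: the preferred clip should have higher total predicted reward. A minimal PyTorch sketch (mine; the reward network’s output shape and the data handling are assumed):

        import torch
        import torch.nn.functional as F

        def preference_loss(reward_net, clip_a, clip_b, human_prefers_a):
            # clip_*: (B, T, obs_dim) trajectory segments; reward_net -> (B, T, 1)
            r_a = reward_net(clip_a).sum(dim=1).squeeze(-1)  # total reward per clip
            r_b = reward_net(clip_b).sum(dim=1).squeeze(-1)
            # P(a preferred) = exp(r_a) / (exp(r_a) + exp(r_b))
            return F.binary_cross_entropy_with_logits(r_a - r_b,
                                                      human_prefers_a.float())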

  115. ⁠, Andrew Brock, Theodore Lim, J. M. Ritchie, Nick Weston (2017-08-17):

    Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model’s architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32×32, achieving competitive performance with similarly-sized hand-designed networks. Our code is available at https://github.com/ajbrock/SMASH

  116. ⁠, Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar (2016-09-14):

    Bayesian methods for machine learning have been widely investigated, yielding principled methods for incorporating prior information into inference algorithms. In this survey, we provide an in-depth review of the role of Bayesian methods for the reinforcement learning (RL) paradigm. The major incentives for incorporating Bayesian reasoning in RL are: 1) it provides an elegant approach to action-selection (exploration/exploitation) as a function of the uncertainty in learning; and 2) it provides a machinery to incorporate prior knowledge into the algorithms. We first discuss models and methods for Bayesian inference in the simple single-step bandit model. We then review the extensive recent literature on Bayesian methods for model-based RL, where prior information can be expressed on the parameters of the Markov model. We also present Bayesian methods for model-free RL, where priors are expressed over the value function or policy class. The objective of the paper is to provide a comprehensive survey on Bayesian RL algorithms and their theoretical and empirical properties.

  117. ⁠, Yuxi Li (2017-01-25):

    We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions.

    Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update.

  118. ⁠, Gregory Farquhar, Tim Rocktäschel, Maximilian Igl, Shimon Whiteson (2017-10-31):

    Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al 2017) on multiple Atari games. Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.

  119. http://repository.essex.ac.uk/4117/1/MCTS-Survey.pdf

  120. ⁠, Daniel Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, Zheng Wen (2017-07-07):

    Thompson sampling is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance between exploiting what is known to maximize immediate performance and investing to accumulate new information that may improve future performance. The algorithm addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use. This tutorial covers the algorithm and its application, illustrating concepts through a range of examples, including Bernoulli bandit problems, shortest path problems, product recommendation, assortment, active learning with neural networks, and reinforcement learning in Markov decision processes. Most of these problems involve complex information structures, where information revealed by taking an action informs beliefs about other actions. We will also discuss when and why Thompson sampling is or is not effective and relations to alternative algorithms.
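
    For the Bernoulli bandit, the whole algorithm fits in a few lines (my minimal implementation): keep a Beta posterior per arm, draw one sample from each posterior, and pull the arm with the largest draw.

        import numpy as np

        def thompson_bernoulli(pull, n_arms, n_rounds, rng=np.random.default_rng()):
            alpha = np.ones(n_arms)   # Beta(1, 1) uniform priors
            beta = np.ones(n_arms)
            for _ in range(n_rounds):
                theta = rng.beta(alpha, beta)   # one posterior sample per arm
                arm = int(np.argmax(theta))
                reward = pull(arm)              # environment callback returning 0 or 1
                alpha[arm] += reward
                beta[arm] += 1 - reward
            return alpha, beta                  # posterior success/failure counts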

  121. ⁠, Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger (2017-09-19):

    In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
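
    One concrete habit the paper’s guidelines point toward (sketch mine): evaluate over many random seeds and report a bootstrap confidence interval on the mean return, rather than a single best run.

        import numpy as np

        def bootstrap_ci(returns_per_seed, n_boot=10_000, alpha=0.05,
                         rng=np.random.default_rng()):
            returns_per_seed = np.asarray(returns_per_seed)
            means = [rng.choice(returns_per_seed, size=len(returns_per_seed),
                                replace=True).mean() for _ in range(n_boot)]
            lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
            return returns_per_seed.mean(), (lo, hi)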

  122. 2016-pica.pdf

  123. https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science

  124. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.649.2804&rep=rep1&type=pdf

  125. ⁠, Gordon J. Lithgow, Monica Driscoll, Patrick Phillips (2017-08-22):

    About 15 years ago, one of us (G.J.L.) got an uncomfortable phone call from a colleague and collaborator. After nearly a year of frustrating experiments, this colleague was about to publish a paper chronicling his team’s inability to reproduce the results of our high-profile paper in a mainstream journal. Our study was the first to show clearly that a drug-like molecule could extend an animal’s lifespan. We had found over and over again that the treatment lengthened the life of a roundworm by as much as 67%. Numerous phone calls and e-mails failed to identify why this apparently simple experiment produced different results between the labs. Then another lab failed to replicate our study. Despite more experiments and additional publications, we couldn’t work out why the labs were getting different lifespan results. To this day, we still don’t know. A few years later, the same scenario played out with different compounds in other labs…In another, now-famous example, two cancer labs spent more than a year trying to understand inconsistencies. It took scientists working side by side on the same tumour biopsy to reveal that small differences in how they isolated cells—vigorous stirring versus prolonged gentle rocking—produced different results. Subtle tinkering has long been important in getting biology experiments to work. Before researchers purchased kits of reagents for common experiments, it wasn’t unheard of for a team to cart distilled water from one institution when it moved to another. Lab members would spend months tweaking conditions until experiments with the new institution’s water worked as well as before. Sources of variation include the quality and purity of reagents, daily fluctuations in microenvironment and the idiosyncratic techniques of investigators. With so many ways of getting it wrong, perhaps we should be surprised at how often experimental findings are reproducible.

    …Nonetheless, scores of publications continued to appear with claims about compounds that slow ageing. There was little effort at replication. In 2013, the three of us were charged with that unglamorous task…Our first task, to develop a protocol, seemed straightforward.

    But subtle disparities were endless. In one particularly painful teleconference, we spent an hour debating the proper procedure for picking up worms and placing them on new agar plates. Some batches of worms lived a full day longer with gentler technicians. Because a worm’s lifespan is only about 20 days, this is a big deal. Hundreds of e-mails and many teleconferences later, we converged on a technique but still had a stupendous three-day difference in lifespan between labs. The problem, it turned out, was notation—one lab determined age on the basis of when an egg hatched, others on when it was laid. We decided to buy shared batches of reagents from the start. Coordination was a nightmare; we arranged with suppliers to give us the same lot numbers and elected to change lots at the same time. We grew worms and their food from a common stock and had strict rules for handling. We established protocols that included precise positions of flasks in autoclave runs. We purchased worm incubators at the same time, from the same vendor. We also needed to cope with a large amount of data going from each lab to a single database. We wrote an iPad app so that measurements were entered directly into the system and not jotted on paper to be entered later. The app prompted us to include full descriptors for each plate of worms, and ensured that data and metadata for each experiment were proofread (the strain names MY16 and my16 are not the same). This simple technology removed small recording errors that could disproportionately affect statistical analyses.

    Once this system was in place, variability between labs decreased. After more than a year of pilot experiments and discussion of methods in excruciating detail, we almost completely eliminated systematic differences in worm survival across our labs (see ‘Worm wonders’)…Even in a single lab performing apparently identical experiments, we could not eliminate run-to-run differences.

    …We have found one compound that lengthens lifespan across all strains and species. Most do so in only two or three strains, and often show detrimental effects in others.

  126. ⁠, Mark Lucanic, W. Todd Plummer, Esteban Chen, Jailynn Harke, Anna C. Foulger, Brian Onken, Anna L. Coleman-Hulbert, Kathleen J. Dumas, Suzhen Guo, Erik Johnson, Dipa Bhaumik, Jian Xue, Anna B. Crist, Michael P. Presley, Girish Harinath, Christine A. Sedore, Manish Chamoli, Shaunak Kamat, Michelle K. Chen, Suzanne Angeli, Christina Chang, John H. Willis, Daniel Edgar, Mary Anne Royal, Elizabeth A. Chao, Shobhna Patel, Theo Garrett, Carolina Ibanez-Ventoso, June Hope, Jason L. Kish, Max Guo, Gordon J. Lithgow, Monica Driscoll, Patrick C. Phillips (2017-02-21):

    Limiting the debilitating consequences of ageing is a major medical challenge of our time. Robust pharmacological interventions that promote healthy ageing across diverse genetic backgrounds may engage conserved longevity pathways. Here we report results from the Caenorhabditis Intervention Testing Program in assessing longevity variation across 22 Caenorhabditis strains spanning 3 species, using multiple replicates collected across three independent laboratories. Reproducibility between test sites is high, whereas individual trial reproducibility is relatively low. Of ten pro-longevity chemicals tested, six statistically-significantly extend lifespan in at least one strain. Three reported dietary restriction mimetics are mainly effective across C. elegans strains, indicating species and strain-specific responses. In contrast, the amyloid dye ThioflavinT is both potent and robust across the strains. Our results highlight promising pharmacological leads and demonstrate the importance of assessing lifespans of discrete cohorts across repeat studies to capture biological variation in the search for reproducible ageing interventions.

  127. https://slate.com/cover-stories/2017/05/daryl-bem-proved-esp-is-real-showed-science-is-broken.html

  128. https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/

  129. https://sappingattention.blogspot.com/2017/09/peer-review-is-younger-than-you-think.html

  130. https://www.the-tls.co.uk/articles/public/the-end-of-an-error-peer-review/

  131. 2017-adams.pdf

  132. ⁠, Michael Betancourt (2017-01-10):

    Hamiltonian Monte Carlo has proven a remarkable empirical success, but only recently have we begun to develop a rigorous understanding of why it performs so well on difficult problems and how it is best applied in practice. Unfortunately, that understanding is confined within the mathematics of differential geometry which has limited its dissemination, especially to the applied communities for which it is particularly important. In this review I provide a comprehensive conceptual account of these theoretical foundations, focusing on developing a principled intuition behind the method and its optimal implementations rather than any exhaustive rigor. Whether a practitioner or a statistician, the dedicated reader will acquire a solid grasp of how Hamiltonian Monte Carlo works, when it succeeds, and, perhaps most importantly, when it fails.
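
    For intuition, the mechanics reduce to simulating Hamiltonian dynamics with a leapfrog integrator and correcting with a Metropolis test (sketch mine; `logp`/`grad_logp` are the target log-density and its gradient, and the step size and path length are assumed tuned elsewhere):

        import numpy as np

        def hmc_step(q, logp, grad_logp, eps=0.1, n_leapfrog=20,
                     rng=np.random.default_rng()):
            p = rng.standard_normal(q.shape)        # resample momentum
            q_new, p_new = q.copy(), p.copy()
            p_new += 0.5 * eps * grad_logp(q_new)   # half step in momentum
            for _ in range(n_leapfrog - 1):
                q_new += eps * p_new                # full step in position
                p_new += eps * grad_logp(q_new)     # full step in momentum
            q_new += eps * p_new
            p_new += 0.5 * eps * grad_logp(q_new)   # final half step
            # Metropolis correction on the Hamiltonian H = -log p(q) + |p|^2/2
            h_old = -logp(q) + 0.5 * p @ p
            h_new = -logp(q_new) + 0.5 * p_new @ p_new
            return q_new if np.log(rng.uniform()) < h_old - h_new else q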

  133. Correlation

  134. ⁠, Dean Eckles, Eytan Bakshy (2017-06-14):

    Peer effects, in which the behavior of an individual is affected by the behavior of their peers, are posited by multiple theories in the social sciences. Other processes can also produce behaviors that are correlated in networks and groups, thereby generating debate about the credibility of observational (i.e. nonexperimental) studies of peer effects. Randomized field experiments that identify peer effects, however, are often expensive or infeasible. Thus, many studies of peer effects use observational data, and prior evaluations of causal inference methods for adjusting observational data to estimate peer effects have lacked an experimental “gold standard” for comparison. Here we show, in the context of information and media diffusion on Facebook, that high-dimensional adjustment of a nonexperimental control group (677 million observations) using propensity score models produces estimates of peer effects statistically indistinguishable from those from using a large randomized experiment (220 million observations). Naive observational estimators overstate peer effects by 320% and commonly used variables (e.g., demographics) offer little bias reduction, but adjusting for a measure of prior behaviors closely related to the focal behavior reduces bias by 91%. High-dimensional models adjusting for over 3,700 past behaviors provide additional bias reduction, such that the full model reduces bias by over 97%. This experimental evaluation demonstrates that detailed records of individuals’ past behavior can improve studies of social influence, information diffusion, and imitation; these results are encouraging for the credibility of some studies but also cautionary for studies of rare or new behaviors. More generally, these results show how large, high-dimensional data sets and statistical learning techniques can be used to improve causal inference in the behavioral sciences.
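
    A stylized version of the adjustment (mine; the paper’s estimators are more elaborate, but the ingredients are the same): fit a high-dimensional propensity score for peer exposure from past behaviors, then compare exposed and unexposed users with, for example, inverse-propensity weighting.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ipw_peer_effect(X, exposed, outcome):
            """X: (n, k) covariates (e.g. thousands of past behaviors);
            exposed: 0/1 peer-exposure indicator; outcome: 0/1 adoption."""
            ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]
            ps = np.clip(ps, 1e-3, 1 - 1e-3)   # trim extreme scores
            treated = np.average(outcome, weights=exposed / ps)
            control = np.average(outcome, weights=(1 - exposed) / (1 - ps))
            return treated - control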

  135. https://pdfs.semanticscholar.org/2576/fd36efa9be01a26269e94925283de306cd83.pdf

  136. Ads#discussion

  137. 1994-loury.pdf: ⁠, Glenn C. Loury (1994-10-01; sociology⁠, sociology  /​ ​​ ​preference-falsification):

    Uncertainty about what motivates “senders” of public messages leads “receivers” to “read between the lines” to discern the sender’s deepest commitments. Anticipating this, senders “write between the lines,” editing their expressions so as to further their own ends. I examine how this interactive process of inference and deceit affects the quality and extent of public deliberations on sensitive issues. A principal conclusion is that genuine moral discourse on difficult social issues can become impossible when the risks of upsetting some portion of one’s audience are too great. Reliance on euphemism and platitude should be expected in this strategic climate. Groups may embark on a tragic course of action, believed by many at the outset to be ill-conceived, but that has become impossible to criticize.

  138. https://pdfs.semanticscholar.org/4fb9/ff8e9affa466078ff5b6d23500f75d3e5079.pdf

  139. https://www.theatlantic.com/international/archive/2017/10/red-famine-anne-applebaum-ukraine-soviet-union/542610/

  140. https://status451.com/2017/01/20/days-of-rage/

  141. https://www.theguardian.com/education/2017/feb/23/ppe-oxford-university-degree-that-rules-britain

  142. https://dominiccummings.files.wordpress.com/2017/02/201702-effective-action-2-systems-engineering-to-systems-politics.pdf

  143. http://www.thedarkenlightenment.com/the-dark-enlightenment-by-nick-land/

  144. 1969-brooks-businessadventures-ch7-noncommunicationge.pdf

  145. https://www.amazon.com/Business-Adventures-Twelve-Classic-Street/dp/1497644895

  146. http://behavioralscientist.org/mindware-the-high-cost-of-not-doing-experiments/

  147. https://www.theatlantic.com/magazine/archive/2018/01/whats-college-good-for/546590/

  148. 2016-caro.html

  149. http://80000hours.org/blog/66-social-interventions-gone-wrong

  150. http://bwc.thelab.dc.gov/

  151. https://www.nytimes.com/2017/10/20/upshot/a-big-test-of-police-body-cameras-defies-expectations.html

  152. 1987-rossi

  153. 1997-gottfredson.pdf

  154. ⁠, Roy F. Baumeister, Jennifer D. Campbell, Joachim I. Krueger, Kathleen D. Vohs (2003-05-01):

    Self-esteem has become a household word. Teachers, parents, therapists, and others have focused efforts on boosting self-esteem, on the assumption that high self-esteem will cause many positive outcomes and benefits—an assumption that is critically evaluated in this review.

    Appraisal of the effects of self-esteem is complicated by several factors. Because many people with high self-esteem exaggerate their successes and good traits, we emphasize objective measures of outcomes. High self-esteem is also a heterogeneous category, encompassing people who frankly accept their good qualities along with narcissistic, defensive, and conceited individuals.

    The modest correlations between self-esteem and school performance do not indicate that high self-esteem leads to good performance. Instead, high self-esteem is partly the result of good school performance. Efforts to boost the self-esteem of pupils have not been shown to improve academic performance and may sometimes be counterproductive. Job performance in adults is sometimes related to self-esteem, although the correlations vary widely, and the direction of causality has not been established. Occupational success may boost self-esteem rather than the reverse. Alternatively, self-esteem may be helpful only in some job contexts. Laboratory studies have generally failed to find that self-esteem causes good task performance, with the important exception that high self-esteem facilitates persistence after failure.

    People high in self-esteem claim to be more likable and attractive, to have better relationships, and to make better impressions on others than people with low self-esteem, but objective measures disconfirm most of these beliefs. Narcissists are charming at first but tend to alienate others eventually. Self-esteem has not been shown to predict the quality or duration of relationships.

    High self-esteem makes people more willing to speak up in groups and to criticize the group’s approach. Leadership does not stem directly from self-esteem, but self-esteem may have indirect effects. Relative to people with low self-esteem, those with high self-esteem show stronger in-group favoritism, which may increase prejudice and discrimination.

    Neither high nor low self-esteem is a direct cause of violence. Narcissism leads to increased aggression in retaliation for wounded pride. Low self-esteem may contribute to externalizing behavior and delinquency, although some studies have found that there are no effects or that the effect of self-esteem vanishes when other variables are controlled. The highest and lowest rates of cheating and bullying are found in different subcategories of high self-esteem.

    Self-esteem has a strong relation to happiness. Although the research has not clearly established causation, we are persuaded that high self-esteem does lead to greater happiness. Low self-esteem is more likely than high to lead to depression under some circumstances. Some studies support the buffer hypothesis, which is that high self-esteem mitigates the effects of stress, but other studies come to the opposite conclusion, indicating that the negative effects of low self-esteem are mainly felt in good times. Still others find that high self-esteem leads to happier outcomes regardless of stress or other circumstances.

    High self-esteem does not prevent children from smoking, drinking, taking drugs, or engaging in early sex. If anything, high self-esteem fosters experimentation, which may increase early sexual activity or drinking, but in general effects of self-esteem are negligible. One important exception is that high self-esteem reduces the chances of bulimia in females.

    Overall, the benefits of high self-esteem fall into two categories: enhanced initiative and pleasant feelings. We have not found evidence that boosting self-esteem (by therapeutic interventions or school programs) causes benefits. Our findings do not support continued widespread efforts to boost self-esteem in the hope that it will by itself foster improved outcomes. In view of the heterogeneity of high self-esteem, indiscriminate praise might just as easily promote narcissism, with its less desirable consequences. Instead, we recommend using praise to boost self-esteem as a reward for socially desirable behavior and self-improvement.

  155. 2014-02-25-matter-themanwhodestroyedamericasego.html

  156. https://www.theguardian.com/lifeandstyle/2017/jun/03/quasi-religious-great-self-esteem-con

  157. https://www.nature.com/articles/548268a

  158. https://www.nature.com/articles/tp2017148

  159. 2011-gensowski.pdf: ⁠, Miriam Gensowski, James Heckman, Peter Savelyev (2011-01-24; iq):

    [Preprint version of the published paper]

    This paper estimates the internal rate of return (IRR) to education for men and women of the Terman study, a 70-year long prospective cohort study of high-ability individuals. The Terman data is unique in that it not only provides full working-life earnings histories of the participants, but it also includes detailed profiles of each subject, including IQ and measures of latent personality traits. Having information on latent personality traits is important as it allows us to measure the importance of personality on educational attainment and lifetime earnings.

    Our analysis addresses two problems of the literature on returns to education: First, we establish causality of the treatment effect of education on earnings by implementing generalized matching on a full set of observable individual characteristics and unobserved personality traits. Second, since we observe lifetime earnings data, our estimates of the IRR are direct and do not depend on the assumptions that are usually made in order to justify the interpretation of regression coefficients as rates of return.

    For the males, the returns to education beyond high school are sizeable. For example, the IRR for obtaining a bachelor’s degree over a high school diploma is 11.1%, and for a doctoral degree over a bachelor’s degree it is 6.7%. These results are unique because they highlight the returns to high-ability and high-education individuals, who are not well-represented in regular data sets.

    Our results highlight the importance of personality and intelligence on our outcome variables. We find that personality traits similar to the Big Five personality traits are statistically-significant factors that help determine educational attainment and lifetime earnings. Even holding the level of education constant, measures of personality traits have statistically-significant effects on earnings. Similarly, IQ is rewarded in the labor market, independently of education. Most of the effect of personality and IQ on life-time earnings arises late in life, during the prime working years. Therefore, estimates from samples with shorter durations underestimate the treatment effects.

  160. https://www.theatlantic.com/magazine/archive/2017/06/when-your-child-is-a-psychopath/524502/

  161. ⁠, Örjan Falk, Märta Wallinius, Sebastian Lundström, Thomas Frisell, Henrik Anckarsäter, Nóra Kerekes (2014):

    Purpose: Population-based studies on violent crime and background factors may provide an understanding of the relationships between susceptibility factors and crime. We aimed to determine the distribution of violent crime convictions in the Swedish population 1973-2004 and to identify criminal, academic, parental, and psychiatric risk factors for persistence in violent crime.

    Method: The nationwide multi-generation register was used with many other linked nationwide registers to select participants. All individuals born in 1958-1980 (2,393,765 individuals) were included. Persistent violent offenders (those with a lifetime history of three or more violent crime convictions) were compared with individuals having one or two such convictions, and to matched non-offenders. Independent variables were gender, age of first conviction for a violent crime, nonviolent crime convictions, and diagnoses for major mental disorders, personality disorders, and substance use disorders.

    Results: A total of 93,642 individuals (3.9%) had at least one violent conviction. The distribution of convictions was highly skewed; 24,342 persistent violent offenders (1.0% of the total population) accounted for 63.2% of all convictions. Persistence in violence was associated with male sex (OR 2.5), personality disorder (OR 2.3), violent crime conviction before age 19 (OR 2.0), drug-related offenses (OR 1.9), nonviolent criminality (OR 1.9), substance use disorder (OR 1.9), and major mental disorder (OR 1.3).

    Conclusions: The majority of violent crimes are perpetrated by a small number of persistent violent offenders, typically males, characterized by early onset of violent criminality, substance abuse, personality disorders, and nonviolent criminality.

  162. 2015-pettersson.pdf: “Common psychiatric disorders share the same genetic origin: a multivariate sibling study of the Swedish population”⁠, E Pettersson, H. Larsson, P. Lichtenstein

  163. ⁠, Brian Moriarty (1999-03-17):

    [March 1999 talk (video) by video game designer Brian Moriarty on conspiracy theories, gamification, art, and human psychology, in the same vein as his "The Secret of Psalm 46" talk.

    Moriarty discusses the “Paul is dead” conspiracy theory: that Paul McCartney of the Beatles has in fact been dead for the past 54 years, replaced by a lookalike to keep the Beatles media empire going.

    The theory began as a rumor, and spread among young obsessive students in early Beatlemania circles, who began analyzing songs (playing them backwards as necessary) to discover hidden messages, developing an elaborate symbolic mythology where it is held that the lookalike & the Beatles themselves are covertly alluding to their coverup through coded messages (possibly out of guilt): the positioning of stars, garbled lyrics, which hand a cigarette is held in, hands held up as benedictions/wardings, scenes interpreted as funeral processions, and so on. No amount of denials or interviews with Paul McCartney could kill the theory. Most of these ‘clues’ can be debunked, given the wealth of documentation about the most minute details of the production of Beatles albums. A few oddities remain, but Moriarty suggests they are covert messages or allusions to other things, like the ‘walrus’ references, and may even have been the Beatles playing along with the theorists! What is the point of discussing this? See QAnon:]

    This silly event, which happened way back when I was a kid, made a really big impression on me. It was so eerie, so deliciously creepy. And so… consuming! Clue hunting occupied me and my friends constantly for nearly six weeks! It was all we ever talked about! We spent every school night and entire weekends going over every square millimeter of these five records. We destroyed every copy we had, spinning them backwards on our cheap record players. It drove our parents nuts! “Turn me on, dead man! Turn me on, dead man!” And they hated it even more when they heard it again on the evening news!

    I can’t remember the last time I had so much fun.

    And, although I didn’t appreciate it at the time, something wonderful happened as we scoured these records, backwards and forwards, line by line. We memorized them. “Who Buried Paul?” is one of the best games I ever played. This ridiculous rumor sucked my entire generation into a massively multiplayer adventure. A morbid treasure hunt in which accomplices were connected by word-of-mouth, college newspapers, the alternative press and underground radio. We can only wonder what would happen if something like this were to happen today, in the age of the World Wide Web. Imagine how such a thing might get started, by accident…

    …put something like this in front of people, and all kinds of evocative coincidences become likely. Why is this useful for us as entertainers? Because that moment when you peer into the mirror of chaos and discover yourself is satisfying in a uniquely personal sense. You get a little oomph when you make a connection that way. Those little oomphs are what make good stories and puzzles and movies so compelling. And those little oomphs are what made the Paul-is-dead rumor so much fun…Let your players employ their own imaginative intelligence to fill in the gaps in your worlds you can’t afford to close. Chances are, they’ll paint the chaos in exactly the colors they want to see. What’s more, they’ll enjoy themselves doing it. But the credit will be yours.

  164. http://journals.sagepub.com/doi/full/10.1177/0963721417712760

  165. 2014-mosing.pdf: ⁠, Miriam A. Mosing, Guy Madison, Nancy L. Pedersen, Ralf Kuja-Halkola, Fredrik Ullén (2014-07-30; genetics  /​ ​​ ​correlation):

    The relative importance of nature and nurture for various forms of expertise has been intensely debated. Music proficiency is viewed as a general model for expertise, and associations between deliberate practice and music proficiency have been interpreted as supporting the prevailing idea that long-term deliberate practice inevitably results in increased music ability.

    Here, we examined the associations (rs = 0.18–0.36) between music practice and music ability (rhythm, melody, and pitch discrimination) in 10,500 Swedish twins. We found that music practice was substantially heritable (40%–70%). Associations between music practice and music ability were predominantly genetic, and, contrary to the causal hypothesis, nonshared environmental influences did not contribute. There was no difference in ability within monozygotic twin pairs differing in their amount of practice, so that when genetic predisposition was controlled for, more practice was no longer associated with better music skills.

    These findings suggest that music practice may not causally influence music ability and that genetic variation among individuals affects both ability and inclination to practice.

    [Keywords: training, expertise, music ability, practice, heritability, twin, causality]
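
    The co-twin control logic of the key result can be illustrated with a toy sketch (my own, on hypothetical data; not the authors’ analysis): within monozygotic pairs, genes and rearing environment are matched, so a causal effect of practice should appear as a within-pair association between practice differences and ability differences.

    ```python
    # Toy illustration of the co-twin control design (hypothetical data).
    import numpy as np
    from scipy import stats

    # One row per MZ pair; columns are twin 1 and twin 2.
    practice_hours = np.array([[3000, 1200], [5000, 4800], [900, 2500], [100, 700]])
    ability_score  = np.array([[52, 50],     [61, 60],     [47, 49],    [40, 41]])

    d_practice = practice_hours[:, 0] - practice_hours[:, 1]
    d_ability  = ability_score[:, 0]  - ability_score[:, 1]

    # Mosing et al. report a near-zero within-pair association: practice
    # differences do not predict ability differences once genes are fixed.
    slope, intercept, r, p, stderr = stats.linregress(d_practice, d_ability)
    print(f"within-pair slope = {slope:.5f} (p = {p:.2f})")
    ```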

  166. ⁠, Andrew Drucker (2011):

    In this informal article, I’ll describe the “recognition method”—a simple, powerful technique for memorization and mental calculation. Compared to traditional memorization techniques, which use elaborate encoding and visualization processes [1], the recognition method is easy to learn and requires relatively little effort…The method works: using it, I was able to mentally multiply two random 10-digit numbers, by the usual grade-school algorithm, on my first attempt! I have a normal, untrained memory, and the task would have been impossible by a direct approach. (I can’t claim I was speedy: I worked slowly and carefully, using about 7 hours plus rest breaks. I practiced twice with 5-digit numbers beforehand.)

    …It turns out that ordinary people are incredibly good at this task [recognizing whether a photograph has been seen before]. In one of the most widely-cited studies on ⁠, Standing [2] showed participants an epic 10,000 photographs over the course of 5 days, with 5 seconds’ exposure per image. He then tested their familiarity, essentially as described above. The participants showed an 83% success rate, suggesting that they had become familiar with about 6,600 images during their ordeal. Other volunteers, trained on a smaller collection of 1,000 images selected for vividness, had a 94% success rate.
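
    [The 6,600 figure is presumably the standard guessing correction for a two-alternative forced-choice test (my inference): a participant who genuinely recognizes a fraction f of the images and guesses on the rest scores p = f + (1 − f)⁄2, so f = 2p − 1 = 2(0.83) − 1 = 0.66, and 0.66 × 10,000 = 6,600 images.]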

  167. https://www.samharris.org/blog/item/the-fireplace-delusion

  168. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.510.4296&rep=rep1&type=pdf

  169. https://vizhub.healthdata.org/gbd-compare/#settings=1c5235ed0e4875665b1e2005644f6a7694b65e96

  170. ⁠, Aleksandra Jakubowski, Sally C. Stearns, Margaret E. Kruk, Gustavo Angeles, Harsha Thirumurthy (2017-05-09):

    Background: Despite substantial financial contributions by the United States President’s Malaria Initiative (PMI) since 2006, no studies have carefully assessed how this program may have affected important population-level health outcomes. We utilized multiple publicly available data sources to evaluate the association between the introduction of PMI and child mortality rates in sub-Saharan Africa (SSA).

    Methods and findings: We used difference-in-differences analyses to compare trends in the primary outcome of under-5 mortality rates and secondary outcomes reflecting population coverage of malaria interventions in 19 PMI-recipient and 13 non-recipient countries between 1995 and 2014. The analyses controlled for presence and intensity of other large funding sources, individual and household characteristics, and country and year fixed effects.

    PMI program implementation was associated with a statistically-significant reduction in the annual risk of under-5 child mortality (adjusted risk ratio [RR] 0.84, 95% CI 0.74–0.96). Each dollar of per-capita PMI expenditures in a country, a measure of PMI intensity, was also associated with a reduction in child mortality (RR 0.86, 95% CI 0.78–0.93). We estimated that the under-5 mortality rate in PMI countries was reduced from 28.9 to 24.3 per 1,000 person-years. Population coverage of insecticide-treated nets increased by 8.34 percentage points (95% CI 0.86–15.83) and coverage of indoor residual spraying increased by 6.63 percentage points (95% CI 0.79–12.47) after PMI implementation. Per-capita PMI spending was also associated with a modest increase in artemisinin-based combination therapy coverage (3.56 percentage point increase, 95% CI −0.07–7.19), though this association was only marginally statistically-significant (p = 0.054). Our results were robust to several sensitivity analyses. Because our study design leaves open the possibility of unmeasured confounding, we cannot definitively interpret these results as causal.

    Conclusions: PMI may have statistically-significantly contributed to reducing the burden of malaria in SSA and reducing the number of child deaths in the region. Introduction of PMI was associated with increased coverage of malaria prevention technologies, which are important mechanisms through which child mortality can be reduced. To our knowledge, this study is the first to assess the association between PMI and all-cause child mortality in SSA with the use of appropriate comparison groups and adjustments for regional trends in child mortality.

    Author summary:

    • Why was this study done?

      • Despite the considerable investment the US government has made in the President’s Malaria Initiative (PMI) since 2006, no studies to date have evaluated its association with population health outcomes.
      • Previous evaluations have documented decreasing child mortality and increasing use of key malaria interventions in PMI-recipient countries. Our study sought to determine whether the trends in health outcomes in PMI-recipient countries differed statistically-significantly from the trends in these outcomes in PMI non-recipient countries in sub-Saharan Africa (SSA) over the past 2 decades.
    • What did the researchers do and find?

      • We used a study design that leveraged multiple publicly available data sources from countries throughout SSA, spanning the years before and after PMI introduction, in order to estimate the association between the introduction of PMI and child mortality rates.
      • Our dataset included 7,752,071 child-year observations from 2,112,951 individual children who lived in 32 sub-Saharan countries, including all 19 PMI countries.
      • We found that after adjusting for baseline differences between countries, overall time trends, other funding sources, and individual characteristics, PMI was associated with a 16% annual risk reduction in child mortality and increased population coverage of key malaria prevention and treatment technologies.
      • We tested the robustness of our results with a series of sensitivity analyses.
    • What do these findings mean?

      • The study provides evidence that introduction of PMI was associated with statistically-significant reductions in child mortality in SSA, primarily through increased access to malaria prevention technologies.
      • Evidence from this study can be used to inform policy decisions about future funding levels for malaria interventions.
      • The interpretation of our study results rests on the assumption that there were no important unmeasured variables that differentially affected mortality rates in PMI and comparison countries during the study period.
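
    A minimal sketch of the two-way fixed-effects difference-in-differences specification described above (my own illustration with hypothetical column names, not the authors’ code; a linear-probability simplification, whereas the paper reports risk ratios):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical child-year panel, 1995-2014: `died` (0/1), `pmi` (1 if the
    # country is a PMI recipient), `post` (1 if the year falls after PMI
    # introduction there), plus `country` and `year` identifiers.
    df = pd.read_csv("child_year_panel.csv")

    # Country fixed effects absorb baseline cross-country differences; year
    # fixed effects absorb common trends; pmi:post is the DiD estimate.
    model = smf.ols("died ~ pmi:post + C(country) + C(year)", data=df)
    fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
    print(fit.params["pmi:post"], fit.conf_int().loc["pmi:post"])
    ```
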
  171. https://slate.com/technology/2017/02/secondhand-smoke-isnt-as-bad-as-we-thought.html

  172. ⁠, Nikoline N. Knudsen, Jörg Schullehner, Birgitte Hansen, Lisbeth F. Jørgensen, Søren M. Kristiansen, Denitza D. Voutchkova, Thomas A. Gerds, Per K. Andersen, Kristine Bihrmann, Morten Grønbæk, Lars V. Kessing, Annette K. Ersbøll (2017-06-10):

    Suicide is a major public health concern. High-dose lithium is used to stabilize mood and prevent suicide in patients with affective disorders. Lithium occurs naturally in drinking water worldwide in much lower doses, but with large geographical variation. Several studies conducted at an aggregate level have suggested an association between lithium in drinking water and a reduced risk of suicide; however, a causal relation is uncertain.

    Individual-level register-based data on the entire Danish adult population (3.7 million individuals) from 1991 to 2012 were linked with a moving 5-year time-weighted average (TWA) lithium exposure level from drinking water, hypothesizing an inverse relationship. The mean lithium level was 11.6 μg/L, ranging from 0.6 to 30.7 μg/L. The suicide rate decreased from 29.7 per 100,000 person-years at risk in 1991 to 18.4 per 100,000 person-years in 2012.

    We found no statistically-significant indication of an association between increasing 5-year TWA lithium exposure level and decreasing suicide rate. The comprehensiveness of using individual-level data and spatial analyses with 22 years of follow-up makes a pronounced contribution to previous findings.

    Our findings suggest that there is no protective effect of exposure to lithium on the incidence of suicide at levels below 31 μg/L in drinking water.

    [Keywords: drinking water, lithium, suicide, individual-level data, spatial analysis, Denmark, exposure assessment]
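
    A toy sketch of how a time-weighted-average exposure like the one above can be computed (my own illustration of the general TWA formula, not the paper’s code): each residence spell contributes its water lithium level, weighted by the days lived there within the 5-year window.

    ```python
    from datetime import date

    def twa_lithium(spells, window_start, window_end):
        """spells: iterable of (move_in, move_out, lithium_ug_per_L) records."""
        weighted = days = 0
        for move_in, move_out, level in spells:
            overlap = (min(move_out, window_end) - max(move_in, window_start)).days
            if overlap > 0:
                weighted += level * overlap
                days += overlap
        return weighted / days if days else float("nan")

    # ~3 years at 10 ug/L then ~2 years at 25 ug/L -> TWA of ~16 ug/L.
    spells = [(date(2000, 1, 1), date(2003, 1, 1), 10.0),
              (date(2003, 1, 1), date(2005, 1, 1), 25.0)]
    print(twa_lithium(spells, date(2000, 1, 1), date(2005, 1, 1)))
    ```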

  173. 2017-kessing.pdf

  174. ⁠, Sebastiaan Bol, Jana Caspers, Lauren Buckingham, Gail Denise Anderson-Shelton, Carrie Ridgway, C. A. Tony Buffington, Stefan Schulz, Evelien M. Bunnik (2017-03-16; cat  /​ ​​ ​catnip⁠, cat  /​ ​​ ​silvervine⁠, cat  /​ ​​ ​tartarian-honeysuckle⁠, cat  /​ ​​ ​valerian):

    Background: Olfactory stimulation is an often overlooked method of environmental enrichment for cats in captivity. The best known example of olfactory enrichment is the use of catnip, a plant that can cause an apparently euphoric reaction in domestic cats and most of the Pantherinae. It has long been known that some domestic cats and most tigers do not respond to catnip. Although many anecdotes exist of other plants with similar effects, data are lacking about the number of cats that respond to these plants, and if cats that do not respond to catnip respond to any of them. Furthermore, much is still unknown about which chemicals in these plants cause this response.

    Methods: We tested catnip, silver vine, Tatarian honeysuckle, and valerian root on 100 domestic cats and observed their response. Each cat was offered all four plant materials and a control, multiple times. Catnip and silver vine also were offered to nine tigers. The plant materials were analyzed by gas chromatography coupled with mass spectrometry to quantify concentrations of compounds believed to exert stimulating effects on cats.

    Results: Nearly all domestic cats responded positively to olfactory enrichment. In agreement with previous studies, one out of every three cats did not respond to catnip. Almost 80% of the domestic cats responded to silver vine and about 50% to Tatarian honeysuckle and valerian root. Although cats predominantly responded to fruit galls of the silver vine plant, some also responded positively to its wood. Of the cats that did not respond to catnip, almost 75% did respond to silver vine and about one out of three to Tatarian honeysuckle. Unlike domestic cats, tigers were either not interested in silver vine or responded disapprovingly. The amount of nepetalactone was highest in catnip and only present at marginal levels in the other plants. Silver vine contained the highest concentrations of all other compounds tested.

    Conclusions: Olfactory enrichment for cats may have great potential. Silver vine powder from dried fruit galls and catnip were most popular among domestic cats. Silver vine and Tatarian honeysuckle appear to be good alternatives to catnip for domestic cats that do not respond to catnip.

  175. http://alliance.nautil.us/article/242/mapping-the-human-exposome

  176. 2017-kowarsky.pdf

  177. https://www.sciencedaily.com/releases/2017/08/170823090945.htm

  178. http://agingfree.org/Portals/0/xBlog/uploads/2017/5/24/Metformin%20alters%20the%20gut%20microbiome%20of%20individuals.pdf

  179. ⁠, Stuart J. Ritchie, Simon R. Cox, Xueyi Shen, Michael V. Lombardo, Lianne M. Reus, Clara Alloza, Matthew A. Harris, Helen L. Alderson, Stuart Hunter, Emma Neilson, David C. M. Liewald, Bonnie Auyeung, Heather C. Whalley, Stephen M. Lawrie, Catharine R. Gale, Mark E. Bastin, Andrew M. McIntosh, Ian J. Deary (2017-04-04):

    Sex differences in human brain structure and function are of substantial scientific interest because of sex-differential susceptibility to psychiatric disorders [1,2,3] and because of the potential to explain sex differences in psychological traits [4]. Males are known to have larger brain volumes, though the patterns of differences across brain subregions have typically only been examined in small, inconsistent studies [5]. In addition, despite common findings of greater male variability in traits like intelligence [6], personality [7], and physical performance [8], variance differences in the brain have received little attention. Here we report the largest single-sample study of structural and functional sex differences in the human brain to date (2,750 female and 2,466 male participants aged 44–77 years). Males had higher cortical and sub-cortical volumes, cortical surface areas, and white matter diffusion directionality; females had thicker cortices and higher white matter tract complexity. Considerable overlap between the distributions for males and females was common, and subregional differences were smaller after accounting for global differences. There was generally greater male variance across structural measures. The modestly higher male score on two cognitive tests was partly mediated via structural differences. Functional connectome organization showed stronger connectivity for males in unimodal sensorimotor cortices, and stronger connectivity for females in the default mode network. This large-scale characterisation of neurobiological sex differences provides a foundation for attempts to understand the causes of sex differences in brain structure and function, and their associated psychological and psychiatric consequences.

  180. http://rspb.royalsocietypublishing.org/content/284/1851/20162562

  181. https://www.wired.com/story/mirai-botnet-minecraft-scam-brought-down-the-internet/

  182. http://www.scottaaronson.com/papers/pnp.pdf

  183. 1940-sciam-harrington-nuclearweapons-dontworryitcanthappen.pdf: {#linkBibliography-american)-1940 .docMetadata doi=“10.2307/​​24988773”}, Jean Harrington (Scientific American) (1940-05-01; existential-risk):

    …Early last summer, in the midst of all this research, a chilly sensation began tingling up and down the spines of the experimenters. These extra neutrons that were being erupted—could they not in turn become involuntary bullets, flying from one exploding uranium nucleus into the heart of another, causing another fission which would itself cause still others? Wasn’t there a dangerous possibility that the uranium would at last become explosive? That the samples being bombarded in the laboratories at Columbia University, for example, might blow up the whole of New York City? To make matters more ominous, news of fission research from Germany, plentiful in the early part of 1939, mysteriously and abruptly stopped for some months. Had government censorship been placed on what might be a secret of military importance? The press and populace, getting wind of these possibly lethal goings-on, raised a hue and cry. Nothing daunted, however, the physicists worked on to find out whether or not they would be blown up, and the rest of us along with them. Now, a year after the original discovery, word comes from Paris that we don’t have to worry.

    …With typical French—and scientific—caution, they added that this was perhaps true only for the particular conditions of their own experiment, which was carried out on a large mass of uranium under water. But most scientists agreed that it was very likely true in general.

    …Readers made insomnious by “newspaper talk” of terrific atomic war weapons held in reserve by dictators may now get sleep.

  184. https://people.cs.uchicago.edu/~teutsch/papers/truebit.pdf

  185. 06#links

  186. https://eprint.iacr.org/2002/160.pdf

  187. http://leipper.org/manuals/zip-fill/safelocks_for_compscientist.pdf

  188. ⁠, Matt Blaze (2004-03-06):

    This position paper initiates and advocates the study of “Human-Scale Security Protocols” as a core activity of computing and network security research. The Human-Scale Security Protocols (HSSP) project treats “human scale” security problems and protocols as a central part of computer science. Our aim is to identify, stimulate research on, analyze, and improve “non-traditional” protocols that might either have something to teach us or be susceptible to improvement via the techniques and tools of computer security. There are compelling security problems across a wide spectrum of areas that do not outwardly involve computers or electronic communication and yet are remarkably similar in structure to the systems computer scientists routinely study. Interesting and relevant problem spaces that computer security has traditionally ignored range from the very serious (preventing terrorists from subverting aviation security) to the trivial and personal (ensuring that a restaurant serves the same wine that was ordered and charged for).

  189. https://arstechnica.com/science/2017/11/when-will-the-earth-try-to-kill-us-again/

  190. http://www.demarcken.org/carl/papers/ITA-software-travel-complexity/ITA-software-travel-complexity.pdf

  191. https://blog.archive.org/2017/10/10/books-from-1923-to-1941-now-liberated/

  192. 2017-gard.pdf: ⁠, Elizabeth Townsend Gard (2017-10-02; economics):

    [IA blog] Section 108(h) has not been utilized by libraries and archives, in part because of the uncertainty over definitions (eg. “normal commercial exploitation”), determination of the eligibility window (last 20 years of the copyright term of published works), and how to communicate the information in the record to the general public.

    This paper seeks to explore the elements necessary to implement the Last Twenty exception, otherwise known as Section 108(h) and create a Last Twenty (L20) collection. In short, published works in the last 20 years of the copyright may be digitized and distributed by libraries, archives, and museums, as long as there is no commercial sale of the works and no reasonably priced copy is available. This means that Section 108(h) is available for the forgotten and neglected works, 1923-1941, including millions of foreign works restored by ⁠. Section 108(h) is less effective for big, commercially available works.

    In many ways, that is the dividing line created by Section 108(h): allow for commercial exploitation of works throughout their term, but allow libraries to rescue works that had no commercial exploitation or copies available for sale and make them available through copying and distribution for research, scholarship, and preservation. In fact, when Section 108(h) was being debated in Congress, it was labeled the “Last Twenty” exception. This paper suggests ways to think about the requirements of Section 108(h) and to make it more usable for libraries. Essentially, by confidently using Section 108(h) we can continue to make the past usable one query at a time.

    The paper ends with an evaluation of the recent Discussion Paper by the U.S. Copyright Office on Section 108 and suggests changes/​​​​recommendations related to the proposed changes to Section 108(h).

    [Keywords: copyright, ⁠, library, archives, museum, Section 108(h), ⁠, orphan works]

  193. Copyright-deadweight

  194. https://github.com/kdeldycke/awesome-falsehood

  195. http://lukemuehlhauser.com/industrial-revolution/

  196. https://web.archive.org/web/20171030232756/https://medium.com/stubborn-attachments/stubborn-attachments-full-text-8fc946b694d

  197. 2000-delong.pdf: ⁠, J. Bradford DeLong (2000-03-01; economics):

    There is one central fact about the economic history of the twentieth century: above all, the century just past has been the century of increasing material wealth and economic productivity. No previous era and no previous economy has seen material wealth and productive potential grow at such a pace. The bulk of America’s population today achieves standards of material comfort and capabilities that were beyond the reach of even the richest of previous centuries. Even lower middle-class households in relatively poor countries have today material standards of living that would make them, in many respects, the envy of the powerful and lordly of past centuries.

  198. https://medium.com/future-crunch/99-reasons-2017-was-a-good-year-d119d0c32d19

  199. https://www.adamsmith.org/research/back-in-the-ussr

  200. http://www.law.nyu.edu/sites/default/files/upload_documents/Property%20Monopoly.pdf

  201. ⁠, Lynn M. LoPucki (2018):

    In a 2014 article, Professor Shawn Bayern demonstrated that anyone can confer legal personhood on an autonomous computer algorithm by putting it in control of a limited liability company. Bayern’s demonstration coincided with the development of “autonomous” online businesses that operate independently of their human owners—accepting payments in online currencies and contracting with human agents to perform the off-line aspects of their businesses. About the same time, leading technologists Elon Musk, Bill Gates, and Stephen Hawking said that they regard human-level artificial intelligence as an existential threat to the human race.

    This Article argues that algorithmic entities—legal entities that have no human controllers—greatly exacerbate the threat of artificial intelligence. Algorithmic entities are likely to prosper first and most in criminal, terrorist, and other anti-social activities because that is where they have their greatest comparative advantage over human-controlled entities. Control of legal entities will contribute to the threat algorithms pose by providing them with identities. Those identities will enable them to conceal their algorithmic natures while they participate in commerce, accumulate wealth, and carry out anti-social activities.

    Four aspects of corporate law make the human race vulnerable to the threat of algorithmic entities. First, algorithms can lawfully have exclusive control of not just American LLCs but also a large majority of the entity forms in most countries. Second, entities can change regulatory regimes quickly and easily through migration. Third, governments—particularly in the United States—lack the ability to determine who controls the entities they charter and so cannot determine which have non-human controllers. Lastly, corporate charter competition, combined with ease of entity migration, makes it virtually impossible for any government to regulate algorithmic control of entities.

  202. 2016-bayern.pdf: ⁠, Shawn Bayern (2016-06; ai):

    Nonhuman autonomous systems are not legal persons under current law. The history of organizational law, however, demonstrates that agreements can, with increasing degrees of autonomy, direct the actions of legal persons. Agreements are isomorphic with algorithms; that is, a legally enforceable agreement can give legal effect to the arbitrary discernible states of an algorithm or other process. As a result, autonomous systems may end up being able, at least, to emulate many of the private-law rights of legal persons. This essay demonstrates a technique by which this is possible by means of limited liability companies (LLCs), a very flexible modern type of business organization. The techniques that this essay describes are not just futuristic possibilities; as this essay argues, they are already possible under current law.

  203. https://www.yalelawjournal.org/note/amazons-antitrust-paradox

  204. 2017-houde.pdf

  205. 2020-bloom.pdf: ⁠, Nicholas Bloom, Charles I. Jones, John Van Reenen, Michael Webb (2020-04-01; economics):

    Long-run growth in many models is the product of two terms: the effective number of researchers and their research productivity. We present evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply. A good example is Moore’s Law. The number of researchers required today to achieve the famous doubling of computer chip density is more than 18 times larger than the number required in the early 1970s. More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find.
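
    [A rough implication (my arithmetic, assuming a steady decline over the ~45 years from the early 1970s): holding the growth rate fixed while requiring 18× the researchers means semiconductor research productivity fell by a factor of ~18, an average decline of ln(18)/45 ≈ 6.4% per year.]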

  206. ⁠, Timur Kuran (2018):

    This essay critically evaluates the analytic literature concerned with causal connections between Islam and economic performance. It focuses on works since 1997, when this literature was last surveyed. Among the findings are the following: Ramadan fasting by pregnant women harms prenatal development; Islamic charities mainly benefit the middle class; Islam affects educational outcomes less through Islamic schooling than through structural factors that handicap learning as a whole; Islamic finance hardly affects Muslim financial behavior; and low generalized trust depresses Muslim trade. The last feature reflects the Muslim world’s delay in transitioning from personal to impersonal exchange. The delay resulted from the persistent simplicity of the private enterprises formed under Islamic law. Weak property rights reinforced the private sector’s stagnation by driving capital out of commerce and into rigid waqfs. Waqfs limited economic development through their inflexibility and democratization by restraining the development of civil society. Parts of the Muslim world conquered by Arab armies are especially undemocratic, which suggests that early Islamic institutions, including slave-based armies, were particularly critical to the persistence of authoritarian patterns of governance. States have contributed themselves to the persistence of authoritarianism by treating Islam as an instrument of governance. As the world started to industrialize, non-Muslim subjects of Muslim-governed states pulled ahead of their Muslim neighbors by exercising the choice of law they enjoyed under Islamic law in favor of a Western legal system.

  207. 2017-glitz.pdf: “Industrial Espionage and Productivity”⁠, Albrecht Glitz, Erik Meyersson

  208. https://www.bloomberg.com/news/features/2017-01-05/when-big-business-happens-to-your-pet

  209. https://corpgov.law.harvard.edu/2017/01/31/the-common-law-corporation-the-power-of-the-trust-in-anglo-american-business-history/

  210. ⁠, Ben Garfinkel, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, ⁠, Andrew Snyder-Beattie, Max Tegmark (2017-03-31):

    In recent years, a number of prominent computer scientists, along with academics in fields such as philosophy and physics, have lent credence to the notion that machines may one day become as large as humans. Many have further argued that machines could even come to exceed human size by a significant margin. However, there are at least seven distinct arguments that preclude this outcome. We show that it is not only implausible that machines will ever exceed human size, but in fact impossible.

  211. https://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/

  212. http://dresdencodak.com/2009/05/15/a-thinking-apes-critique-of-trans-simianism-repost/

  213. https://www.newyorker.com/magazine/2011/09/05/how-to-be-good

  214. 2016-lipsitch.pdf

  215. https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/

  216. https://philpapers.org/archive/bouwdp

  217. https://web.archive.org/web/20020312064520/http://www.humanistictexts.org/carvaka.htm

  218. https://www.lesswrong.com/r/discussion/lw/e8u/mike_darwin_on_animal_research_moral_cowardice/

  219. https://qualiacomputing.com/2016/08/20/wireheading_done_right/

  220. 1996-berman.pdf

  221. https://www.lesswrong.com/posts/Fy2b55mLtghd4fQpx/the-zombie-preacher-of-somerset

  222. http://hoaxes.org/archive/permalink/the_great_moon_hoax

  223. https://www.lrb.co.uk/v27/n17/steven-shapin/what-did-you-expect

  224. http://silkandhornheresy.blogspot.co.uk/2012/08/this-warrior-of-dead-world-gene-wolfes.html

  225. https://llamasandmystegosaurus.blogspot.com/2017/05/alpha.html

  226. https://slate.com/articles/arts/prog_spring/features/2012/prog_rock/history_of_prog_the_nice_emerson_lake_palmer_and_other_bands_of_the_1970s_.html

  227. 1998-smits.pdf

  228. https://wondersinthedark.wordpress.com/2012/09/01/if-you-read-closely-aoi-hiragis-whisper-of-the-heart-on-page-and-screen/

  229. 2011-yvain-iliadaslawsuit.html

  230. https://www.amazon.com/Borges-Selected-Non-Fictions-Jorge-Luis/dp/0140290117

  231. 07#links

  232. https://sre.google/sre-book/table-of-contents/

  233. 01#books

  234. https://www.amazon.com/002-Interview-Barry-G-Golson/dp/0399507698

  235. http://www.stevenlevy.com/index.php/books/artificial-life

  236. http://www.fadedpage.com/showbook.php?pid=20160325

  237. http://www.berkshirehathaway.com/letters/letters.html

  238. 1997-tsuzuki-tokyoacertainstyle.pdf: ⁠, Kyoichi Tsuzuki (1997; japanese):

    Writer-photographer Kyoichi Tsuzuki visited a hundred apartments, condos, and houses, documenting what he saw in more than 400 color photos that show the real Tokyo style—a far cry from the serene gardens, shoji screens, and Zen minimalism usually associated with Japanese dwellings.

    In this Tokyo, necessities such as beds, bathrooms, and kitchens vie for space with electronic gadgets, musical instruments, clothes, books, records, and kitschy collectibles. Candid photos vividly capture the dizzying “cockpit effect” of living in a snug space crammed floor to ceiling with stuff. And it’s not just bohemian types and students who must fit their lives and work into tight quarters, but professionals and families with children, too. In descriptive captions, the inhabitants discuss the ingenious ways they’ve adapted their home environments to suit their diverse lifestyles.

  239. https://www.amazon.com/Grand-Strategy-Roman-Empire-Century/dp/1421419459/

  240. https://www.amazon.com/Moondust-Search-Men-Fell-Earth/dp/0007155425

  241. ⁠, Scott Alexander (2015-12-28):

    [Unsong is a finished (2015–2017) online web serial fantasy “kabbalah-punk” novel written by Scott Alexander (SSC). Summary:

    Aaron Smith-Teller works in a kabbalistic sweatshop in Silicon Valley, where he and hundreds of other minimum-wage workers try to brute-force the Holy Names of God. All around him, vast forces have been moving their pieces into place for the final confrontation. An overworked archangel tries to debug the laws of physics. Henry Kissinger transforms the ancient conflict between Heaven and Hell into a US-Soviet proxy war. A Mexican hedge wizard with no actual magic wreaks havoc using the dark art of placebomancy. The Messiah reads a book by Peter Singer and starts wondering exactly what it would mean to do as much good as possible…

    Aaron doesn’t care about any of this. He and his not-quite-girlfriend Ana are engaged in something far more important—griping about magical intellectual property law. But when a chance discovery brings them into conflict with mysterious international magic-intellectual-property watchdog UNSONG, they find themselves caught in a web of plots, crusades, and prophecies leading inexorably to the end of the world.

    TVTropes⁠; my review of Unsong: ★★★★☆.]

  242. 1997-carter-shotetsu-unforgottendreams.pdf: {#linkBibliography-shōtetsu-(translator)-1997 .docMetadata}, Shōtetsu, Steven D. Carter (translator) (1997; japanese):

    [This volume presents translations of over 200 poems by Shōtetsu, who is generally considered to be the last great poet of the uta form. Includes an introduction, a glossary of important names and places and a list of sources of the poems.]

    The Zen monk (1381–1459) suffered several rather serious misfortunes in his life: he lost all the poems of his first thirty years—more than 30,000 of them—in a fire; his estate revenues were confiscated by an angry shogun; and rivals refused to allow his work to appear in the only imperially commissioned poetry anthology of his time. Undeterred by these obstacles, he still managed to make a living from his poetry and won recognition as a true master, widely considered to be the last great poet of the classical uta, or waka, tradition. Shōtetsu viewed his poetry as both a professional and religious calling, and his extraordinarily prolific corpus comprised more than 11,000 poems—the single largest body of work in the Japanese canon.

    The first major collection of Shōtetsu’s work in English, Unforgotten Dreams presents beautifully rendered translations of more than two hundred poems. The book opens with Steven Carter’s generous introduction on Shōtetsu’s life and work and his importance in Japanese literature, and includes a glossary of important names and places and a list of sources of the poems. Revealing as never before the enduring creative spirit of one of Japan’s greatest poets, this fine collection fills a major gap in the English translations of medieval Japanese literature.

  243. https://www.amazon.com/Sunset-Spider-Poetry-Ancient-Korea/dp/0030120713/

  244. Movies#amy

  245. Movies#the-great-happiness-space

  246. Movies#breaking-bad

  247. Movies#blade-runner-2049

  248. Movies#hackers

  249. 02#one-upon-a-time-in-the-west

  250. Movies#all-about-eve

  251. 03#books

  252. Anime#the-tale-of-the-princess-kaguya

  253. Anime#kubo-and-the-two-strings

  254. Anime#fma-brotherhood