October 2018 gwern.net newsletter with 5 new posts, links on genetics/human evolution/AI/meta-science/history of tech, 2 book reviews, 2 movie reviews, 1 series review. (2018-09-23–2021-01-04; finished; certainty: log; importance: 0)
Among the genetic disorders whose phenotype risk can routinely be diagnosed by EPⓖT are:
300+ common single-gene disorders, such as Cystic Fibrosis, Thalassemia, BRCA, Sickle Cell Anemia, and Gaucher Disease
Polygenic Disease Risk, such as risk for Type 1 and Type 2 diabetes, Dwarfism, Hypothyroidism, Mental Disability, Atrial Fibrillation and other Cardiovascular Diseases like CAD, Inflammatory Bowel Disease, and Breast Cancer.
“R2D2: Recurrent Experience Replay in Distributed Reinforcement Learning”, Kapturowski et al 2018 (new ALE/DM Lab-30 SOTA: “exceeds human-level in 52/57 ALE” games; large improvement over Ape-X using an RNN. Just 4 years after DQN, ALE borders on being solved, like ImageNet, with relatively minor tweaks to NNs. Montezuma’s Revenge remains an exploration problem but is starting to crack under various deep exploration & intrinsic-reward approaches, without requiring elaborate memory, hierarchical RL, or transfer learning from humans.)
Two examples of underestimating the Roman technology/economy:
The Nemi ships were over 70 meters long, their full scale only discovered in 1928, and used advanced technology like ball-bearings, bilge pumps, indoor plumbing, lead anchor stocks, and hulls indicative of standardized (industrial?) design & production
Some years ago, a company in Boston began marketing Simulated Presence Therapy, which involved making a prerecorded audiotape to simulate one side of a phone conversation. A relative or someone close to the patient would put together an “asset inventory” of the patient’s cherished memories, anecdotes, and subjects of special interest; a chatty script was developed from the inventory, and a tape was recorded according to the script, with pauses every now and then to allow time for replies. When the tape was ready, the patient was given headphones to listen to it and told that they were talking to the person over the phone. Because patients’ memories were short, they could listen to the same tape over and over, even daily, and find it newly comforting each time. There was a séance-like quality to these sessions: they were designed to simulate the presence of someone who was merely not there, but they could, in principle, continue even after that person was dead.
On “doing enough” (Yoshinori Kitase: “Death comes suddenly and there is no notion of good or bad. It leaves, not a dramatic feeling but great emptiness. When you lose someone you loved very much you feel this big empty space and think, ‘If I had known this was coming I would have done things differently.’”)
Newsletter tag: archive of all issues back to 2013 for the gwern.net newsletter (monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews.)
This page is a changelog for Gwern.net: a monthly reverse chronological list of recent major writings/changes/additions.
Following my writing can be a little difficult because it is often so incremental. So every month, in addition to my regular /r/Gwern subreddit submissions, I write up reasonably-interesting changes and send it out to the mailing list in addition to a compilation of links & reviews (archives).
A subreddit for posting links of interest and for announcing updates to gwern.net (which can be used as an RSS feed). Submissions are categorized similarly to the monthly newsletter and typically will be collated there.
I watch the 2010 Western animated series My Little Pony: Friendship is Magic (seasons 1–9), delving deep into it and the MLP fandom, and reflect on it. What makes it good and powers its fandom subculture, producing a wide array of fanfictions, music, and art? Focusing on fandom, plot, development, and meaning of bronydom, I conclude that, among other things, it has surprisingly high-quality production & aesthetics which are easily adapted to fandom and which power a Westernized shonen anime—which depicts an underappreciated plausibly-contemporary capitalist utopian perspective on self-actualization, reminiscent of other more explicitly self-help-oriented pop culture movements such as the recent Jordan B. Peterson movement. Included are my personal rankings of characters, seasons, episodes, and official & fan music.
Like anything else, the idea of “breeding” had to be invented. That traits are genetically influenced roughly equally by both parents, subject to considerable randomness, and can be selected for over many generations to create large average population-wide increases had to be discovered the hard way, with many wildly wrong theories discarded along the way. Animal breeding is a case in point, as reviewed by an intellectual history of animal breeding, Like Engend’ring Like, which covers mistaken theories of conception & inheritance from the ancient Greeks to perhaps the first truly successful modern animal breeder, Robert Bakewell (1725–1795).
Why did it take thousands of years to begin developing useful animal breeding techniques, a topic of interest to almost all farmers everywhere, a field which has no prerequisites such as advanced mathematics or special chemicals or mechanical tools, and seemingly requires only close observation and patience? This question can be asked of many innovations early in the Industrial Revolution, such as the flying shuttle.
Some veins of research in economic history and sociology suggest that at least one ingredient is an improving attitude: a detached outsider’s attitude which asks whether there is any way to optimize something, in defiance of ‘the wisdom of tradition’, and looks for improvements. A relevant English example is the Royal Society of Arts, founded around Bakewell’s time, specifically to spur competition, imitation, and new inventions. Psychological barriers may matter as much as per capita wealth or peace in innovation.
Ericsson 1993 notes that many major writers or researchers prioritized writing by making it the first activity of their day, often getting up early in the morning. This is based largely on writers anecdotally reporting that they write best first thing in the morning, apparently even if they are not morning people, although there is some additional survey/software-logging evidence of morning writing being effective. I compile all the anecdotes I have come across thus far of writers discussing their writing times. Do they in fact write in the morning, and why?
Recent advances have led to the discovery of specific genetic variants that predict educational attainment. We study how these variants, summarized as a linear index—known as a polygenic score—are associated with human capital accumulation and labor market outcomes in the Health and Retirement Study (HRS). We present two main sets of results. First, we find evidence that the genetic factors measured by this score interact strongly with childhood socioeconomic status in determining educational outcomes. In particular, while the polygenic score predicts higher rates of college graduation on average, this relationship is substantially stronger for individuals who grew up in households with higher socioeconomic status relative to those who grew up in poorer households. Second, the polygenic score predicts labor earnings even after adjusting for completed education, with larger returns in more recent decades. These patterns suggest that the genetic traits that promote education might allow workers to better accommodate ongoing skill biased technological change. Consistent with this interpretation, we find a positive association between the polygenic score and non-routine analytic tasks that have benefited from the introduction of new technologies. Nonetheless, the college premium remains the dominant determinant of earnings differences at all levels of the polygenic score. Given the role of childhood SES in predicting college attainment, this raises concerns about wasted potential arising from limited household resources.
A Genome-wide association study (GWAS) estimates size and significance of the effect of common genetic variants on a phenotype of interest. A Polygenic Score (PGS) is a score, computed for each individual, summarizing the expected value of a phenotype on the basis of the individual’s genotype. The PGS is computed as a weighted sum of the values of the individual’s genetic variants, using as weights the GWAS estimated coefficients from a training sample. Thus, PGS carries information on the genotype, and only on the genotype, of an individual. In our case phenotypes of interest are measures of educational achievement, such as having a college degree, or the education years, in a sample of approximately 2700 adult twins and their parents.
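The weighted-sum construction described above can be sketched in a few lines (a toy illustration with made-up numbers, not the paper’s actual pipeline):

```python
import numpy as np

# Hypothetical illustration: a polygenic score (PGS) is a weighted sum of an
# individual's variant dosages, using GWAS effect-size estimates as weights.
# The SNP count and coefficients here are invented for the example.
gwas_betas = np.array([0.04, -0.02, 0.07, 0.01])  # GWAS coefficients, one per SNP
genotype   = np.array([2, 0, 1, 1])               # allele counts (0/1/2), one per SNP

pgs = float(np.dot(genotype, gwas_betas))  # weighted sum = the individual's PGS
```

A real score sums over hundreds of thousands of SNPs, but the computation is exactly this dot product, so the PGS carries information on the genotype and only the genotype.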
We set up the analysis in a standard model of optimal parental investment and intergenerational mobility, extended to include a fully specified genetic analysis of skill transmission, and show that the model’s predictions on mobility differ substantially from those of the standard model. For instance, the coefficient of intergenerational income elasticity may be larger, and may differ across countries because the distribution of the genotype is different, completely independently of any difference in institutions, technology, or preferences.
We then study how much of the educational achievement is explained by the PGS for education, thus estimating how much of the variance of education can be explained by genetic factors alone. We find a substantial effect of PGS on performance in school, years of education and college.
Finally we study the channels between PGS and the educational achievement, distinguishing how much is due to cognitive skills and to personality traits. We show that the effect of PGS is substantially stronger on Intelligence than on other traits, like Constraint, which seem natural explanatory factors of educational success. For educational achievement, both cognitive and non cognitive skills are important, although the larger fraction of success is channeled by Intelligence.
Human DNA varies across geographic regions, with most variation observed so far reflecting distant ancestry differences. Here, we investigate the geographic clustering of genetic variants that influence complex traits and disease risk in a sample of ~450,000 individuals from Great Britain. Out of 30 traits analyzed, 16 show significant geographic clustering at the genetic level after controlling for ancestry, likely reflecting recent migration driven by socio-economic status (SES). Alleles associated with educational attainment (EA) show most clustering, with EA-decreasing alleles clustering in lower SES areas such as coal mining areas. Individuals that leave coal mining areas carry more EA-increasing alleles on average than the rest of Great Britain. In addition, we leveraged the geographic clustering of complex trait variation to further disentangle regional differences in socio-economic and cultural outcomes through genome-wide association studies on publicly available regional measures, namely coal mining, religiousness, 1970/2015 general election outcomes, and Brexit referendum results.
Organ transplantation is a medical procedure in which an organ is removed from one body and placed in the body of a recipient, to replace a damaged or missing organ. The donor and recipient may be at the same location, or organs may be transported from a donor site to another location. Organs and/or tissues that are transplanted within the same person's body are called autografts. Transplants performed between two members of the same species are called allografts. Allografts can either be from a living or cadaveric source.
Female reproductive behaviors have important implications for evolutionary fitness and offspring health. Previous studies have shown that age at first birth of women (AFB) is genetically associated with schizophrenia (SCZ). However, for most other psychiatric disorders and reproductive traits, the latent shared genetic architecture is largely unknown. Here we used the second wave of UK Biobank data (N = 220,685) to evaluate the association between five female reproductive traits and polygenic risk scores (PRS) projected from genome-wide association study summary statistics of six psychiatric disorders (N = 429,178). We found that the PRS of attention-deficit/hyperactivity disorder (ADHD) were strongly associated with AFB (genetic correlation of −0.68 ± 0.03 with p-value = 1.86E-89), age at first sexual intercourse (AFS) (−0.56 ± 0.03 with p-value = 3.42E-60), number of live births (NLB) (0.36 ± 0.04 with p-value = 4.01E-17) and age at menopause (−0.27 ± 0.04 with p-value = 5.71E-13). There were also robustly significant associations between the PRS of eating disorder (ED) and AFB (genetic correlation of 0.35 ± 0.06), ED and AFS (0.19 ± 0.06), Major depressive disorder (MDD) and AFB (−0.27 ± 0.07), MDD and AFS (−0.27 ± 0.03) and SCZ and AFS (−0.10 ± 0.03). Our findings reveal the shared genetic architecture between the five reproductive traits in women and six psychiatric disorders, which has potential implications for improving reproductive health in women and hence child outcomes. Our findings may also explain, at least in part, an evolutionary hypothesis that causal mutations underlying psychiatric disorders have positive effects on reproductive success.
Polygenic selection is likely to target some human traits, but the specific evolutionary mechanisms driving complex trait variation are largely unknown. We developed an evolutionary compass for detecting selection and mutational bias that uses polarized GWAS summary statistics from a single population. We found evidence for selection and mutational bias acting on variation in five traits (BMI, schizophrenia, Crohn’s disease, educational attainment, and height). We then used model-based analyses to show that these signals can be explained by stabilizing selection with shifts in the fitness-phenotype relationship. We additionally provide evidence that selection has acted on Neanderthal alleles for height, schizophrenia, and depression, and discuss potential sources of confounding. Our results provide a flexible and powerful framework for evolutionary analysis of complex phenotypes in humans and other organisms, and provide insights into the evolutionary mechanisms driving variation in human polygenic traits.
With genetic predictors of a phenotypic trait, it is possible to select embryos during an in vitro fertilization process to increase or decrease that trait. Extending the work of Shulman & Bostrom 2014/Hsu 2014, I consider the case of human intelligence using SNP-based genetic prediction, finding:
a meta-analysis of GCTA results indicates that SNPs can explain >33% of variance in current intelligence scores, and >44% with better-quality phenotype testing
this sets an upper bound on the effectiveness of SNP-based selection: a gain of 9 IQ points when selecting the top embryo out of 10
the best 2016 polygenic score could achieve a gain of ~3 IQ points when selecting out of 10
the marginal cost of embryo selection (assuming IVF is already being done) is modest, at $1500 + $200 per embryo, with the sequencing cost projected to drop rapidly
a model of the IVF process, incorporating number of extracted eggs, losses to abnormalities & vitrification & failed implantation & miscarriages from 2 real IVF patient populations, estimates feasible gains of 0.39 & 0.68 IQ points
embryo selection is currently unprofitable (mean: −$358) in the USA under the lowest estimate of the value of an IQ point, but profitable under the highest (mean: $6230). The main constraint on selection profitability is the polygenic score; under the highest value, the NPV EVPI of a perfect SNP predictor is $24b and the EVSI per education/SNP sample is $71k
under the worst-case estimate, selection can be made profitable with a better polygenic score, which would require n > 237,300 using education phenotype data (and much less using fluid intelligence measures)
selection can be made more effective by selecting on multiple phenotype traits: considering an example using 7 traits (IQ/height/BMI/diabetes/ADHD/bipolar/schizophrenia), there is a substantial gain over selecting on IQ alone; the outperformance of multiple selection remains after adjusting for genetic correlations & polygenic scores, and when using a broader set of 16 traits.
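The order-statistics logic behind the headline estimate above (~9 IQ points from a perfect SNP predictor when choosing the best of 10 embryos) can be sketched with a Monte Carlo toy. This is a simplified illustration, not the full model: it keeps only the expected maximum of n draws, scaled by predictor accuracy and by the halving of additive variance among full siblings, and ignores IVF losses and the other corrections above.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n_embryos, var_explained, sd_iq=15, sims=200_000):
    """Expected IQ gain from picking the top-scoring of n sibling embryos.

    Simplification: gain ~= E[max of n standard normals]
                            * sqrt(var_explained / 2)  # siblings share half
                            * population SD (15 IQ points).
    """
    scores = rng.standard_normal((sims, n_embryos))
    e_max = scores.max(axis=1).mean()  # expected maximum of n std normals
    return e_max * np.sqrt(var_explained / 2) * sd_iq

# A predictor capturing 33% of variance, selecting the best of 10 embryos:
gain = expected_gain(10, 0.33)
```

Under these simplifications the toy reproduces a gain of roughly 9 IQ points for the best-of-10, >33%-of-variance case; the realized gains of 0.39–0.68 points above come from applying the same logic with realistic embryo counts, losses, and the much weaker 2016 polygenic score.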
Mutations in the gene encoding dystrophin, a protein that maintains muscle integrity and function, cause Duchenne muscular dystrophy (DMD). The deltaE50-MD dog model of DMD harbors a mutation corresponding to a mutational “hotspot” in the human DMD gene. We used adeno-associated viruses to deliver CRISPR gene editing components to four dogs and examined dystrophin protein expression 6 weeks after intramuscular delivery (n = 2) or 8 weeks after systemic delivery (n = 2). After systemic delivery in skeletal muscle, dystrophin was restored to levels ranging from 3 to 90% of normal, depending on muscle type. In cardiac muscle, dystrophin levels in the dog receiving the highest dose reached 92% of normal. The treated dogs also showed improved muscle histology. These large-animal data support the concept that, with further development, gene editing approaches may prove clinically useful for the treatment of DMD.
Abstract: Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN (R2D2), quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games.
The reinforcement learning community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at a time, each new task requiring training a brand-new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequential-decision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent’s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state of the art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy—with a single set of weights—that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state of the art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab.
We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma’s Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator’s input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6.
Description of emerging machine learning paradigm identified by commentator starspawn0: discussions of building artificial brains typically presume either learning a brain architecture & parameters from scratch (AGI) or laboriously ‘scanning’ and reverse-engineering a biological brain in its entirety to get a functioning artificial brain.
However, the rise of deep learning’s transfer learning & meta-learning shows a wide variety of intermediate approaches, where ‘side data’ from natural brains can be used as scaffolding to guide & constrain standard deep learning methods. Such approaches do not seek to ‘upload’ or ‘emulate’ any specific brain; they merely seek to imitate an average brain. A simple example would be training a CNN to imitate eyetracking saliency data: what a human looks at while playing a video game or driving is the important part of a scene, and the CNN doesn’t have to learn importance from scratch. A more complex example would be using EEG as a ‘description’ of music in addition to the music itself. fMRI data could be used to guide a NN to have a similar modularized architecture with similar activation patterns given a particular stimulus as a human brain, which presumably is related to human abilities to zero-shot/few-shot learn and generalize.
While a highly marginal approach at the moment compared to standard approaches like scaling up models & datasets, it is largely untapped, and progress in VR headsets with eyetracking capabilities (intended for foveated rendering but usable for many other purposes), brain imaging methods & BCIs has been more rapid than generally appreciated—in part thanks to breakthroughs using DL itself, suggesting the potential for a positive feedback loop where a BCI breakthrough enables a better NN for BCIs and so on.
It is well known among researchers and practitioners that election polls suffer from a variety of sampling and nonsampling errors, often collectively referred to as total survey error. Reported margins of error typically only capture sampling variability, and in particular, generally ignore nonsampling errors in defining the target population (e.g., errors due to uncertainty in who will vote). Here, we empirically analyze 4221 polls for 608 state-level presidential, senatorial, and gubernatorial elections between 1998 and 2014, all of which were conducted during the final three weeks of the campaigns. Comparing to the actual election outcomes, we find that average survey error as measured by root mean square error is approximately 3.5 percentage points, about twice as large as that implied by most reported margins of error. We decompose survey error into election-level bias and variance terms. We find that average absolute election-level bias is about 2 percentage points, indicating that polls for a given election often share a common component of error. This shared error may stem from the fact that polling organizations often face similar difficulties in reaching various subgroups of the population, and that they rely on similar screening rules when estimating who will vote. We also find that average election-level variance is higher than implied by simple random sampling, in part because polling organizations often use complex sampling designs and adjustment procedures. We conclude by discussing how these results help explain polling failures in the 2016 U.S. presidential election, and offer recommendations to improve polling practice. [Keywords: Margin of error, Non-sampling error, Polling bias, Total survey error]
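The paper’s decomposition of survey error into a shared election-level bias plus poll-to-poll variance is the standard bias-variance identity, which a toy example (invented numbers, not the paper’s data) makes concrete:

```python
import numpy as np

# Toy polls of a single election: if all polls share a common bias
# (e.g. from similar likely-voter screens), the mean squared error
# decomposes exactly as MSE = bias^2 + variance.
polls   = np.array([51.0, 49.5, 50.5, 52.0, 51.5])  # poll estimates, %
outcome = 48.0                                      # actual vote share, %

errors   = polls - outcome
bias     = errors.mean()        # shared, election-level bias
variance = errors.var()         # spread of polls around their own mean
mse      = (errors ** 2).mean() # equals bias**2 + variance
```

In these invented numbers the bias term dominates, mirroring the paper’s finding that polls of the same election share a common ~2-point component of error which reported margins of error ignore.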
Many people have argued that computer programming should strive to become more like mathematics. Maybe so, but not in the way they seem to think. The aim of program verification, an attempt to make programming more mathematics-like, is to increase dramatically one’s confidence in the correct functioning of a piece of software, and the device that verifiers use to achieve this goal is a long chain of formal, deductive logic. In mathematics, the aim is to increase one’s confidence in the correctness of a theorem, and it’s true that one of the devices mathematicians could in theory use to achieve this goal is a long chain of formal logic. But in fact they don’t. What they use is a proof, a very different animal. Nor does the proof settle the matter; contrary to what its name suggests, a proof is only one step in the direction of confidence. We believe that, in the end, it is a social process that determines whether mathematicians feel confident about a theorem—and we believe that, because no comparable social process can take place among program verifiers, program verification is bound to fail. We can’t see how it’s going to be able to affect anyone’s confidence about programs.
…competition is fiercer the more that competitors resemble each other. When we’re not so different from people around us, it’s irresistible to become obsessed about beating others.
It’s hard to construct a more perfect incubator for mimetic contagion than the American college campus. Most 18-year-olds are not super differentiated from each other. By construction, whatever distinctions any does have are usually earned through brutal, zero-sum competitions. These tournament-type distinctions include: SAT scores at or near perfection; being a top player on a sports team; gaining master status from chess matches; playing first instrument in state orchestra; earning high rankings in Math Olympiad; and so on, culminating in gaining admission to a particular college. Once people enter college, they get socialized into group environments that usually continue to operate in zero-sum competitive dynamics. These include orchestras and sport teams; fraternities and sororities; and many types of clubs. The biggest source of mimetic pressures are the classes. Everyone starts out by taking the same intro classes; those seeking distinction throw themselves into the hardest classes, or seek tutelage from star professors, and try to earn the highest grades.
There’s very little external mediation; instead, all competitive dynamics are internally mediated…Once internal rivalries are sorted out, people coalesce into groups united against something foreign. These tendencies help explain why events on campus so often make the news—it seems like every other week we see some campus activity being labeled a “witch hunt,” “riot,” or something else that involves violence, implied or explicit. I don’t care to link to these events, they’re so easy to find. It’s interesting to see that academics are increasingly becoming the target of student activities. The Girardian terror devours its children first, who have tolerated or fanned mimetic contagion for so long.
…I’ll end with a quote from I See Satan Fall Like Lightning: “Mimetic desire enables us to escape from the animal realm. It is responsible for the best and the worst in us, for what lowers us below the animal level as well as what elevates us above it. Our unending discords are the ransom of our freedom.”
[Contemporary SF short story; inspired by NN text generation, social media dynamics, clickbait, and debates like ‘the dress’; imagines AI natural language processing systems run amok after being trained to maximize user reactions to create clickbait, leading to learning ‘scissor statements’, claims which are maximally controversial and divide the population 50-50 between those who find the statement obviously correct and moral, and those who find it equally obviously false and immoral, leading to intractable polarizing debates, contempt, and warfare.]
Lewis Terman is widely seen as the “father of gifted education,” yet his work is controversial. Terman’s “mixed legacy” includes the pioneering work in the creation of intelligence tests, the first large-scale longitudinal study, and the earliest discussions of gifted identification, curriculum, ability grouping, acceleration, and more. However, since the 1950s, Terman has been viewed as a sloppy thinker at best and a racist, sexist, and/or classist at worst. This article explores the most common criticisms of Terman’s legacy: an overemphasis on IQ, support for the meritocracy, and emphasizing genetic explanations for the origin of intelligence differences over environmental ones. Each of these criticisms is justified to some extent by the historical record, and each is relevant today. Frequently overlooked, however, is Terman’s willingness to form a strong opinion based on weak data. The article concludes with a discussion of the important lessons that Terman’s work has for modern educators and psychologists, including his contributions to psychometrics and gifted education, his willingness to modify his opinions in the face of new evidence, and his inventiveness and inclination to experiment. Terman’s legacy is complex, but one that provides insights that can enrich modern researchers and practitioners in these areas.
The Nemi ships were two ships, one larger than the other, built under the reign of the Roman emperor Caligula in the 1st century AD at Lake Nemi. Although the purpose of the ships is only speculated upon, the larger ship was essentially an elaborate floating palace, which contained quantities of marble, mosaic floors, heating and plumbing and amenities such as baths. Both ships featured technology thought to have been developed historically much later. It has been stated that the emperor was influenced by the lavish lifestyles of the Hellenistic rulers of Syracuse and Ptolemaic Egypt. Recovered from the lake bed in 1929, the ships were destroyed by fire during World War II in 1944.
The history of the world is the slaughterhouse of the world, reads a famous Hegelian aphorism; and of literature. The majority of books disappear forever—and “majority” actually misses the point: if we set today’s canon of nineteenth-century British novels at two hundred titles (which is a very high figure), they would still be only about 0.5 percent of all published novels.
[Literature paper by Franco Moretti. Moretti considers the vast production of literature of which only the slightest fraction is still read and studied as part of a ‘canon’. Canons are formed by market forces, leading to preservation and reading in a feedback loop—far from academics selecting the best based on esthetic grounds. Moretti offers a case study of Arthur Conan Doyle’s Sherlock Holmes by comparing to all the now-forgotten competing detective fiction, to study the evolution of the idea of a ‘clue’; his competitors reveal its difficult evolution and how everyone groped towards it. Surprisingly, clues were neither obvious nor popular nor showed any clear evolution towards success. This raises puzzling questions about how to create and interpret ‘literary history’.]
After the quantitative diagrams of the first chapter, and the spatial ones of the second, evolutionary trees constitute morphological diagrams, where history is systematically correlated with form. And indeed, in contrast to literary studies—where theories of form are usually blind to history, and historical work blind to form—for evolutionary thought morphology and history are truly the two dimensions of the same tree: where the vertical axis charts, from the bottom up, the regular passage of time (every interval, writes Darwin, ‘one thousand generations’), while the horizontal one follows the formal diversification (‘the little fans of diverging dotted lines’) that will eventually lead to ‘well-marked varieties’, or to entirely new species.
The horizontal axis follows formal diversification . . . But Darwin’s words are stronger: he speaks of ‘this rather perplexing subject’—elsewhere, ‘perplexing & unintelligible’—whereby forms don’t just ‘change’, but change by always diverging from each other (remember, we are in the section on ‘Divergence of Character’). Whether as a result of historical accidents, then, or under the action of a specific ‘principle’, the reality of divergence pervades the history of life, defining its morphospace—its space-of-forms: an important concept, in the pages that follow—as an intrinsically expanding one.
From a single common origin, to an immense variety of solutions: it is this incessant growing-apart of life forms that the branches of a morphological tree capture with such intuitive force. ‘A tree can be viewed as a simplified description of a matrix of distances’, write Cavalli-Sforza, Menozzi and Piazza in the methodological prelude to their History and Geography of Human Genes; and figure 29, with its mirror-like alignment of genetic groups and linguistic families drifting away from each other (in a ‘correspondence [that] is remarkably high but not perfect’, as they note with aristocratic aplomb), makes clear what they mean: a tree is a way of sketching how far a certain language has moved from another one, or from their common point of origin.
And if language evolves by diverging, why not literature too?
Bushido Blade is a 3D fighting video game developed by Light Weight and published by Square and Sony Computer Entertainment for the PlayStation. The game features one-on-one armed combat. Its name refers to the Japanese warrior code of honor Bushidō.
Shadow of the Vampire is a 2000 metafiction horror film directed by E. Elias Merhige, written by Steven Katz, and starring John Malkovich and Willem Dafoe. The film is a fictionalized documentary account of the making of the classic vampire film Nosferatu, eine Symphonie des Grauens, directed by F. W. Murnau, during which the film crew began to have disturbing suspicions about their lead actor.
My Little Pony: Friendship Is Magic is a Canadian-American animated fantasy television series based on Hasbro's My Little Pony line of toys and animated works and is often referred to by collectors as the fourth generation of the franchise. The series premiered on The Hub (rebranded Discovery Family in 2014) on October 10, 2010, and concluded on October 12, 2019. Hasbro selected animator Lauren Faust as the creative director and executive producer for the show. Faust sought to challenge the established nature of the existing My Little Pony line, creating more in-depth characters and adventurous settings; she left the series during season 2, to be replaced by Meghan McCarthy as showrunner for the remainder of the series.
Incredibles 2 is a 2018 American computer-animated superhero film produced by Pixar Animation Studios and released by Walt Disney Pictures. Written and directed by Brad Bird, it is the sequel to The Incredibles (2004) and the second full-length installment of the franchise. The story follows the Parr family as they try to restore the public's trust in superheroes while balancing their family life, only to combat a new foe who seeks to turn the populace against all superheroes. Craig T. Nelson, Holly Hunter, Sarah Vowell and Samuel L. Jackson reprise their roles from the first film; newcomers to the cast include Huckleberry Milner, Bob Odenkirk, Catherine Keener and Jonathan Banks. Michael Giacchino returned to compose the score.
Subscription page for the monthly gwern.net newsletter. There are monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews. You can also browse the archives since December 2013.
Robert Bakewell was an English agriculturalist, now recognized as one of the most important figures in the British Agricultural Revolution. In addition to work in agronomy, Bakewell is particularly notable as the first to implement systematic selective breeding of livestock. His advancements not only led to specific improvements in sheep, cattle and horses, but contributed to general knowledge of artificial selection.
How do genes affect cognitive ability or other human quantitative traits such as height or disease risk? Progress on this challenging question is likely to be significant in the near future. I begin with a brief review of psychometric measurements of intelligence, introducing the idea of a "general factor" or g score. The main results concern the stability, validity (predictive power), and heritability of adult g. The largest component of genetic variance for both height and intelligence is additive (linear), leading to important simplifications in predictive modeling and statistical estimation. Due mainly to the rapidly decreasing cost of genotyping, it is possible that within the coming decade researchers will identify loci which account for a significant fraction of total g variation. In the case of height analogous efforts are well under way. I describe some unpublished results concerning the genetic architecture of height and cognitive ability, which suggest that roughly 10k moderately rare causal variants of mostly negative effect are responsible for normal population variation. Using results from Compressed Sensing (L1-penalized regression), I estimate the statistical power required to characterize both linear and nonlinear models for quantitative traits. The main unknown parameter s (sparsity) is the number of loci which account for the bulk of the genetic variation. The required sample size is of order 100s, or roughly a million in the case of cognitive ability.
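The compressed-sensing claim above can be illustrated with a toy simulation (all parameters here—sample size, number of variants, effect sizes, penalty—are illustrative assumptions, not values from the talk): L1-penalized regression recovers a sparse set of causal variants once the sample size is a sufficient multiple of the sparsity s.

```python
import numpy as np

# Toy sketch: recovering sparse "causal variants" with L1-penalized
# regression (Lasso), solved by iterative soft-thresholding (ISTA).
rng = np.random.default_rng(0)
n, p, s = 400, 1000, 10          # samples, variants, true causal variants
X = rng.standard_normal((n, p))  # stand-in for a standardized genotype matrix
beta = np.zeros(p)
causal = rng.choice(p, s, replace=False)
beta[causal] = -0.5              # mostly-negative effects, per the abstract
y = X @ beta + 0.1 * rng.standard_normal(n)

def lasso_ista(X, y, lam, iters=500):
    """Minimize ||y - Xb||^2 / (2n) + lam * ||b||_1 by soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n
        b = b - step * grad
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)
    return b

b_hat = lasso_ista(X, y, lam=0.05)
support = set(np.nonzero(np.abs(b_hat) > 0.05)[0])
# With n a large enough multiple of s, the causal set is recovered.
```

Here n = 40·s suffices because the toy design is noiseless-ish and uncorrelated; real genotype matrices have linkage disequilibrium, which is part of why the abstract's estimate is on the order of 100·s.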
Genome-wide complex trait analysis (GCTA), also known as genome-based restricted maximum likelihood (GREML), is a statistical method for variance component estimation in genetics which quantifies the total narrow-sense (additive) contribution to a trait's heritability of a particular subset of genetic variants. This is done by directly quantifying the chance genetic similarity of unrelated individuals and comparing it to their measured similarity on a trait; if two unrelated individuals are relatively similar genetically and also have similar trait measurements, then the measured genetics are likely to causally influence that trait, and the correlation can to some degree tell how much. This can be illustrated by plotting the squared pairwise trait differences between individuals against their estimated degree of relatedness. The GCTA framework can be applied in a variety of settings. For example, it can be used to examine changes in heritability over aging and development. It can also be extended to analyse bivariate genetic correlations between traits. There is an ongoing debate about whether GCTA generates reliable or stable estimates of heritability when used on current SNP data, and critics have argued that the method rests on an outdated dichotomy of genes versus environment and suffers from methodological weaknesses, such as susceptibility to population stratification.
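The "trait similarity tracks chance genetic similarity" intuition can be sketched in a toy simulation. This uses a simple Haseman–Elston-style regression of pairwise trait cross-products on relatedness rather than the REML fitting GCTA actually performs, and every parameter here is an illustrative assumption:

```python
import numpy as np

# Toy GCTA-style heritability estimate: regress trait similarity of
# "unrelated" pairs on their chance genetic similarity (off-diagonal GRM).
rng = np.random.default_rng(1)
n, m, h2 = 500, 500, 0.5          # individuals, SNPs, true SNP heritability
freqs = rng.uniform(0.1, 0.9, m)
G = rng.binomial(2, freqs, (n, m)).astype(float)
Z = (G - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))  # standardized genotypes
A = Z @ Z.T / m                                          # genetic relationship matrix

effects = rng.standard_normal(m) * np.sqrt(h2 / m)
g = Z @ effects                                          # additive genetic values
y = g + rng.standard_normal(n) * np.sqrt(1 - h2)
y = (y - y.mean()) / y.std()

# For standardized y, E[y_i * y_j] = h2 * A_ij, so the regression slope
# over all pairs estimates the SNP heritability (up to sampling noise).
iu = np.triu_indices(n, k=1)
slope = np.polyfit(A[iu], (y[:, None] * y[None, :])[iu], 1)[0]
```

The slope lands near the simulated h2 of 0.5; GREML replaces this crude pairwise regression with maximum-likelihood fitting of the variance components, which uses the same relatedness matrix but is statistically more efficient.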
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
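The Q-learning variant at the core of this result can be shown with a minimal tabular sketch on a toy chain MDP (a deliberately tiny stand-in for Atari: the paper's contribution is approximating the Q function with a convolutional network over raw pixels, which this sketch omits):

```python
import numpy as np

# Minimal tabular Q-learning on a 5-state chain: move left/right,
# reward 1 only for reaching the rightmost state.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.ones((n_states, n_actions))   # optimistic init drives early exploration
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    """Action 0 moves left, action 1 moves right; terminal at the last state."""
    s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for _ in range(500):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning target: r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

greedy = [int(np.argmax(Q[s])) for s in range(n_states - 1)]
# greedy == [1, 1, 1, 1]: the learned policy moves right toward the reward.
```

DQN keeps exactly this bootstrapped target but replaces the table with a convnet trained by gradient descent, stabilized by experience replay and (in later work) a frozen target network.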
Eye tracking is the process of measuring either the point of gaze or the motion of an eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in psycholinguistics, in marketing, as an input device for human-computer interaction, and in product design. Eye trackers are also being increasingly used for rehabilitative and assistive applications. There are a number of methods for measuring eye movement. The most popular variant uses video images from which the eye position is extracted. Other methods use search coils or are based on the electrooculogram.
A virtual reality headset is a head-mounted device that provides virtual reality for the wearer. Virtual reality (VR) headsets are widely used with video games but they are also used in other applications, including simulators and trainers. They comprise a stereoscopic head-mounted display, stereo sound, and head motion tracking sensors. Some VR headsets also have eye tracking sensors and gaming controllers.
Foveated rendering is a rendering technique which uses an eye tracker integrated with a virtual reality headset to reduce the rendering workload by greatly reducing the image quality in the peripheral vision.
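A minimal sketch of the idea, with hypothetical falloff parameters (the function and its constants are illustrative assumptions, not any engine's actual API): render at full resolution inside a foveal radius around the tracked gaze point, and drop the shading rate as angular eccentricity grows, mirroring the retina's falling acuity.

```python
import numpy as np

def shading_scale(eccentricity_deg, fovea_deg=5.0, falloff=0.05):
    """Fraction of full resolution to shade at a given angular distance
    from the gaze point; 1.0 inside the fovea, decaying hyperbolically."""
    excess = np.maximum(eccentricity_deg - fovea_deg, 0.0)
    return 1.0 / (1.0 + falloff * excess)

center = shading_scale(0.0)    # 1.0: full quality at the gaze point
edge = shading_scale(45.0)     # ~0.33: far periphery shaded coarsely
```

The savings come from the quadratic growth of screen area with eccentricity: most pixels lie in the periphery, where this schedule shades at a third of full resolution or less.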
A brain–computer interface (BCI), sometimes called a neural control interface (NCI), mind–machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
René Noël Théophile Girard was a French historian, literary critic, and philosopher of social science whose work belongs to the tradition of anthropological philosophy. Girard was the author of nearly thirty books, with his writings spanning many academic domains. Although the reception of his work is different in each of these areas, there is a growing body of secondary literature on his work and his influence on disciplines such as literary criticism, critical theory, anthropology, theology, psychology, mythology, sociology, economics, cultural studies, and philosophy.
The dress is a photograph that became a viral internet sensation on 26 February 2015, when viewers disagreed over whether the dress pictured was coloured black and royal blue, or white and gold. The phenomenon revealed differences in human colour perception, which have been the subject of ongoing scientific investigations into neuroscience and vision science, with a number of papers published in peer-reviewed science journals.
Franco Moretti is an Italian literary historian and theorist. He graduated in Modern Literatures from the University of Rome in 1972. He has taught at the universities of Salerno (1979–1983) and Verona (1983–1990); in the US, at Columbia (1990–2000) and Stanford (2000–2016), where in 2000 he founded the Center for the Study of the Novel, and in 2010, with Matthew Jockers, the Stanford Literary Lab. Moretti has given the Gauss Seminars at Princeton, the Beckman Lectures at Berkeley, the Carpenter Lectures at the University of Chicago, and has been a lecturer and visiting professor in many countries, including, until the end of 2019, the Digital Humanities Institute at the École Polytechnique Fédérale de Lausanne.