newsletter/2019/11 (Link Bibliography)

“newsletter/​2019/​11” links:

  1. 11

  2. https://gwern.substack.com

  3. 10

  4. newsletter

  5. Changelog

  6. https://www.patreon.com/gwern

  7. Red

  8. Earwax

  9. ⁠, Jose Luis Ricon:

    Blog of Jose Luis Ricon (Twitter), machine learning engineer. Ricon blogs primarily about economics and progress studies, mixing link compilations with more heavily-researched essays, such as on the economic (in)efficiency of the USSR, or on the extent to which tutoring & “direct instruction” boost educational achievement.

  10. https://ricon.dev/

  11. 2019-khera.pdf: ⁠, Amit V. Khera, Heather Mason-Suares, Deanna Brockman, Minxian Wang, Martin J. VanDenburgh, Ozlem Senol-Cosar, Candace Patterson, Christopher Newton-Cheh, Seyedeh M. Zekavat, Julie Pester, Daniel I. Chasman, Christopher Kabrhel, Majken K. Jensen, JoAnn E. Manson, J. Michael Gaziano, Kent D. Taylor, Nona Sotoodehnia, Wendy S. Post, Stephen S. Rich, Jerome I. Rotter, Eric S. Lander, Heidi L. Rehm, Kenney Ng, Anthony Philippakis, Matthew Lebo, Christine M. Albert, Sekar Kathiresan (2019-11-18; genetics  /​ ​​ ​heritable):

    Background: Sudden cardiac death occurs in ~220,000 U.S. adults annually, the majority of whom have no symptoms or cardiovascular diagnosis. Rare pathogenic DNA variants in any of 49 genes can predispose to 4 important causes of sudden cardiac death: cardiomyopathy, coronary artery disease, inherited arrhythmia syndrome, and aortopathy or aortic dissection.

    Objectives: This study assessed the prevalence of rare pathogenic variants in sudden cardiac death cases versus controls, and the prevalence and clinical importance of such mutations in an asymptomatic adult population.

    Methods: The authors performed gene sequencing in a case-control cohort of 600 adult-onset sudden cardiac death cases and 600 matched controls from 106,098 participants of 6 prospective cohort studies. Observed DNA sequence variants in any of 49 genes with known association to cardiovascular disease were classified as pathogenic or likely pathogenic by a clinical laboratory geneticist blinded to case status. In an independent population of 4,525 asymptomatic adult participants of a prospective cohort study, the authors performed whole-genome sequencing and determined the prevalence of pathogenic or likely pathogenic variants and prospective association with cardiovascular death.

    Results: Among the 1,200 sudden cardiac death cases and controls, the authors identified 5,178 genetic variants and classified 14 as pathogenic or likely pathogenic. These 14 variants were present in 15 individuals, all of whom had experienced sudden cardiac death—corresponding to a pathogenic variant prevalence of 2.5% in cases and 0% in controls (p < 0.0001). Among the 4,525 participants of the prospective cohort study, 41 (0.9%) carried a pathogenic or likely pathogenic variant and these individuals had 3.24-fold higher risk of cardiovascular death over a median follow-up of 14.3 years (p = 0.02).
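
    The headline contrast is stark enough to sanity-check by hand: 15 carriers among 600 cases versus 0 among 600 controls. A minimal check with Fisher’s exact test (a sketch; counts taken from the abstract, scipy assumed):

    ```python
    from scipy.stats import fisher_exact

    table = [[15, 600 - 15],   # cases: pathogenic-variant carriers vs. non-carriers
             [0,  600]]        # controls: carriers vs. non-carriers
    _, p = fisher_exact(table, alternative="two-sided")
    print(f"p = {p:.1e}")      # ~6e-5, consistent with the reported p < 0.0001
    ```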

    Conclusions: Gene sequencing identifies a pathogenic or likely pathogenic variant in a small but potentially important subset of adults experiencing sudden cardiac death; these variants are present in ~1% of asymptomatic adults.

  12. 2016-bagnall.pdf: ⁠, Richard D. Bagnall, Robert G. Weintraub, Jodie Ingles, Johan Duflou, Laura Yeates, Lien Lam, Andrew M. Davis, Tina Thompson, Vanessa Connell, Jennie Wallace, Charles Naylor, Jackie Crawford, Donald R. Love, Lavinia Hallam, Jodi White, Christopher Lawrence, Matthew Lynch, Natalie Morgan, Paul James, Desirée du Sart, Rajesh Puranik, Neil Langlois, Jitendra Vohra, Ingrid Winship, John Atherton, Julie McGaughran, Jonathan R. Skinner, Christopher Semsarian (2016-06-23; genetics  /​ ​​ ​heritable):

    Background: Sudden cardiac death among children and young adults is a devastating event. We performed a prospective, population-based, clinical and genetic study of sudden cardiac death among children and young adults.

    Methods: We prospectively collected clinical, demographic, and autopsy information on all cases of sudden cardiac death among children and young adults 1 to 35 years of age in Australia and New Zealand from 2010 through 2012. In cases that had no cause identified after a comprehensive autopsy that included toxicologic and histologic studies (unexplained sudden cardiac death), at least 59 cardiac genes were analyzed for a clinically relevant cardiac gene mutation.

    Results: A total of 490 cases of sudden cardiac death were identified. The annual incidence was 1.3 cases per 100,000 persons 1 to 35 years of age; 72% of the cases involved boys or young men. Persons 31 to 35 years of age had the highest incidence of sudden cardiac death (3.2 cases per 100,000 persons per year), and persons 16 to 20 years of age had the highest incidence of unexplained sudden cardiac death (0.8 cases per 100,000 persons per year). The most common explained causes of sudden cardiac death were coronary artery disease (24% of cases) and inherited cardiomyopathies (16% of cases). Unexplained sudden cardiac death (40% of cases) was the predominant finding among persons in all age groups, except for those 31 to 35 years of age, for whom coronary artery disease was the most common finding. Younger age and death at night were independently associated with unexplained sudden cardiac death as compared with explained sudden cardiac death. A clinically relevant cardiac gene mutation was identified in 31 of 113 cases (27%) of unexplained sudden cardiac death in which genetic testing was performed. During follow-up, a clinical diagnosis of an inherited cardiovascular disease was identified in 13% of the families in which an unexplained sudden cardiac death occurred.
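
    The reported incidence implies a plausible base population, a quick consistency check on the abstract’s numbers (a sketch; treating the study window as the 3 years 2010–2012 is an assumption):

    ```python
    cases, years, rate_per_100k = 490, 3, 1.3
    implied_population = cases / years / (rate_per_100k / 100_000)
    print(f"{implied_population / 1e6:.1f}M")  # ~12.6M persons aged 1-35 in Australia + New Zealand
    ```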

    Conclusions: The addition of genetic testing to autopsy investigation substantially increased the identification of a possible cause of sudden cardiac death among children and young adults.

  13. ⁠, Gabriel Cuellar Partida, Joyce Y. Tung, Nicholas Eriksson, Eva Albrecht, Fazil Aliev, Ole A. Andreassen, Inês Barroso, Jacques S. Beckmann, Marco P. Boks, Dorret I. Boomsma, Heather A. Boyd, Monique MB Breteler, Harry Campbell, Daniel I. Chasman, Lynn F. Cherkas, Gail Davies, Eco JC de Geus, Ian J. Deary, Panos Deloukas, Danielle M. Dick, David L. Duffy, Johan G. Eriksson, Tõnu Esko, Bjarke Feenstra, Frank Geller, Christian Gieger, Ina Giegling, Scott D. Gordon, Jiali Han, Thomas F. Hansen, Annette M. Hartmann, Caroline Hayward, Kauko Heikkilä, Andrew A. Hicks, Joel N. Hirschhorn, Jouke-Jan Hottenga, Jennifer E. Huffman, Liang-Dar Hwang, Mohammad A. Ikram, Jaakko Kaprio, John P. Kemp, Kay-Tee Khaw, Norman Klopp, Bettina Konte, Zoltan Kutalik, Jari Lahti, Xin Li, Ruth JF Loos, Michelle Luciano, Sigurdur H. Magnusson, Massimo Mangino, Pedro Marques-Vidal, Nicholas G. Martin, Wendy L. McArdle, Mark I. McCarthy, Carolina Medina-Gomez, Mads Melbye, Scott A. Melville, Andres Metspalu, Lili Milani, Vincent Mooser, Mari Nelis, Dale R. Nyholt, Kevin S. O’Connell, Roel A. Ophoff, Cameron Palmer, Aarno Palotie, Teemu Palviainen, Guillaume Pare, Lavinia Paternoster, Leena Peltonen, Brenda WJH Penninx, Ozren Polasek, Peter P. Pramstaller, Inga Prokopenko, Katri Raikkonen, Samuli Ripatti, Fernando Rivadeneira, Igor Rudan, Dan Rujescu, Johannes H. Smit, George Davey Smith, Jordan W. Smoller, Nicole Soranzo, Tim D. Spector, Beate St Pourcain, John M. Starr, Kari Stefansson, Hreinn Stefánsson, Stacy Steinberg, Maris Teder-Laving, Gudmar Thorleifsson, Nicholas J. Timpson, André G. Uitterlinden, Cornelia M. van Duijn, Frank JA van Rooij, Jaqueline M. Vink, Peter Vollenweider, Eero Vuoksimaa, Gérard Waeber, Nicholas J. Wareham, Nicole Warrington, Dawn Waterworth, Thomas Werge, H.-Erich Wichmann, Elisabeth Widen, Gonneke Willemsen, Alan F. Wright, Margaret J. Wright, Mousheng Xu, Jing Hua Zhao, Peter Kraft, David A. Hinds, Cecilia M. Lindgren, Reedik Magi, Benjamin M. Neale, David M. Evans, Sarah E. Medland (2019-11-07):

    Handedness, a consistent asymmetry in skill or use of the hands, has been studied extensively because of its relationship with language and the over-representation of left-handers in some neurodevelopmental disorders. Using data from the UK Biobank, 23andMe and 32 studies from the International Handedness Consortium, we conducted the world’s largest genome-wide association study of handedness (1,534,836 right-handed, 194,198 (11.0%) left-handed and 37,637 (2.1%) ambidextrous individuals). We found 42 genetic loci associated with left-handedness and seven associated with ambidexterity at genome-wide levels of significance (p < 5×10⁻⁸). Tissue enrichment analysis implicated the central nervous system and brain tissues including the hippocampus and cerebrum in the etiology of left-handedness. Pathways including regulation of microtubules, neurogenesis, axonogenesis and hippocampus morphology were also highlighted. We found suggestive positive genetic correlations between being left-handed and some neuropsychiatric traits including schizophrenia and bipolar disorder. SNP heritability analyses indicated that additive genetic effects of genotyped variants explained 5.9% (95% CI = 5.8%–6.0%) of the underlying liability of being left-handed, while the narrow-sense heritability was estimated at 12% (95% CI = 7.2%–17.7%). Further, we show that the genetic correlation between left-handedness and ambidexterity is low (rg = 0.26; 95% CI = 0.08–0.43), implying that these traits are largely influenced by different genetic mechanisms. In conclusion, our findings suggest that handedness, like many other complex traits, is highly polygenic, and that the genetic variants that predispose to left-handedness may underlie part of the association with some psychiatric disorders that has been observed in multiple observational studies.
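
    The quoted 5.9% is on the liability scale; the standard observed-to-liability conversion (Lee et al. 2011) that underlies such estimates can be sketched as follows (illustrative numbers only, not the authors’ computation; scipy assumed):

    ```python
    from scipy.stats import norm

    def liability_h2(h2_obs, K, P):
        """Convert observed-scale h2 to the liability scale.
        K = population prevalence, P = case fraction in the sample."""
        t = norm.ppf(1 - K)   # liability threshold
        z = norm.pdf(t)       # normal density at the threshold
        return h2_obs * K**2 * (1 - K)**2 / (z**2 * P * (1 - P))

    # e.g., left-handedness at ~11% prevalence, sampled at its population rate:
    print(liability_h2(h2_obs=0.035, K=0.11, P=0.11))  # ~0.10 (illustrative)
    ```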

  14. ⁠, Toni de-Dios, Lucy van Dorp, Philippe Charlier, Sofia Morfopoulou, Esther Lizano, Celine Bon, Corinne Le Bitouzé, Marina Álvarez-Estapé, Tomas Marquès-Bonet, François Balloux, Carles Lalueza-Fox (2019-10-31):

    The French revolutionary Jean-Paul Marat was assassinated in 1793 in his bathtub, where he was trying to find relief from the debilitating skin disease he was suffering from. At the time of his death, Marat was annotating newspapers, which got stained with his blood and were subsequently preserved by his sister. We extracted and sequenced DNA from the blood stain and also from another section of the newspaper, which we used for comparison. Analysis of human DNA sequences supported the heterogeneous ancestry of Marat, with his mother being of French origin and his father born in Sardinia, although bearing more affinities to mainland Italy or Spain. Metagenomic analyses of the non-human reads uncovered the presence of fungal, bacterial and low levels of viral DNA. Relying on the presence/absence of microbial species in the samples, we could confidently rule out several putative infectious agents that had been previously hypothesised as the cause of his condition. Conversely, some of the detected species are uncommon as environmental contaminants and may represent plausible infective agents. Based on all the available evidence, we hypothesize that Marat may have suffered from a primary fungal infection (seborrheic dermatitis), superinfected with bacterial opportunistic pathogens.

    Significance: The advent of second-generation sequencing technologies allows for the retrieval of ancient genomes from long-dead people and, using non-human sequencing reads, of the pathogens that infected them. In this work we combined both approaches to gain insights into the ancestry and health of the controversial French revolutionary leader and physician Jean-Paul Marat (1743–1793). Specifically, we investigate the pathogens which may have been the cause of the debilitating skin condition that was affecting him, by analysing DNA obtained from a paper stained with his blood at the time of his death. This allowed us to confidently rule out several conditions that have been put forward. To our knowledge, this represents the oldest successful retrieval of genetic material from cellulose paper.

  15. ⁠, Michael Le Page (2019-11-22):

    A company called Genomic Prediction has confirmed that at least one woman is pregnant with embryos selected after analysing hundreds of thousands of DNA variants to assess the risk of disease. It is the first time this approach has been used for screening IVF embryos, but some don’t think this use of the technology is justified.

    “Embryos have been chosen to reduce disease risk using pre-implantation genetic testing for polygenic traits, and this has resulted in pregnancy”, Laurent Tellier, CEO of Genomic Prediction, told New Scientist. He didn’t say how many pregnancies there were, or what traits or conditions were screened for.

  16. 2019-anzalone.pdf: ⁠, Andrew V. Anzalone, Peyton B. Randolph, Jessie R. Davis, Alexander A. Sousa, Luke W. Koblan, Jonathan M. Levy, Peter J. Chen, Christopher Wilson, Gregory A. Newby, Aditya Raguram, David R. Liu (2019-10-21; genetics  /​ ​​ ​editing):

    Most genetic variants that contribute to disease are challenging to correct efficiently and without excess byproducts. Here we describe prime editing, a versatile and precise genome editing method that directly writes new genetic information into a specified DNA site using a catalytically impaired Cas9 endonuclease fused to an engineered reverse transcriptase, programmed with a prime editing guide RNA (pegRNA) that both specifies the target site and encodes the desired edit. We performed more than 175 edits in human cells, including targeted insertions, deletions, and all 12 types of point mutation, without requiring double-strand breaks or donor DNA templates. We used prime editing in human cells to correct, efficiently and with few byproducts, the primary genetic causes of sickle cell disease (requiring a transversion in HBB) and Tay-Sachs disease (requiring a deletion in HEXA); to install a protective transversion in PRNP; and to insert various tags and epitopes precisely into target loci. Four human cell lines and primary post-mitotic mouse cortical neurons support prime editing with varying efficiencies. Prime editing shows higher or similar efficiency and fewer byproducts than homology-directed repair, has complementary strengths and weaknesses compared to base editing, and induces much lower off-target editing than Cas9 nuclease at known Cas9 off-target sites. Prime editing substantially expands the scope and capabilities of genome editing, and in principle could correct up to 89% of known genetic variants associated with human diseases.

  17. ⁠, Jon Cohen (2019-10-21):

    CRISPR, an extraordinarily powerful genome-editing tool invented in 2012, can still be clumsy. It sometimes changes genes it shouldn’t, and it edits by hacking through both strands of DNA’s double helix, leaving the cell to clean up the mess—shortcomings that limit its use in basic research and agriculture and pose safety risks in medicine. But a new entrant in the race to refine CRISPR promises to steer around some of its biggest faults. “It’s a huge step in the right direction”, chemist George Church, a CRISPR pioneer at Harvard University, says about the work, which appears online today in Nature.

    …Liu’s earlier handiwork, base editing, does not cut the double-stranded DNA but instead uses the targeting apparatus to shuttle an additional enzyme to a desired sequence, where it converts a single nucleotide into another. Many genetic traits and diseases are caused by a single nucleotide change, so base editing offers a powerful alternative for biotechnology and medicine. But the method has limitations, and it, too, often introduces off-target mutations.

    Prime editing steers around shortcomings of both techniques by heavily modifying the Cas9 protein and the guide RNA. The altered Cas9 only “nicks” a single strand of the double helix, instead of cutting both. The new guide, called a pegRNA, contains an RNA template for a new DNA sequence, to be added to the genome at the target location. That requires a second protein, attached to Cas9: a reverse transcriptase enzyme, which can make a new DNA strand from the RNA template and insert it at the nicked site.

    Liu, who has already formed a company around the new technology, Prime Medicine, stresses that to gain a place in the editing toolkit, it will have to prove robust and useful in many labs. Delivering the large construct of RNA and enzymes into living cells will also be difficult, and no one has yet shown it can work in an animal model.

  18. ⁠, Megan Molteni (2019-10-21):

    Crispr, for all its DNA-snipping precision, has always been best at breaking things. But if you want to replace a faulty gene with a healthy one, things get more complicated. In addition to programming a piece of guide RNA to tell Crispr where to cut, you have to provide a copy of the new DNA and then hope the cell’s repair machinery installs it correctly. Which, spoiler alert, it often doesn’t. Anzalone wondered if instead there was a way to combine those two pieces, so that one molecule told Crispr both where to make its changes and what edits to make. Inspired, he cinched his coat tighter and hurried home to his apartment in Chelsea, sketching and Googling late into the night to see how it might be done.

    A few months later, his idea found a home in the lab of David Liu, the Broad Institute chemist who’d recently developed a host of more surgical Crispr systems, known as base editors. Anzalone joined Liu’s lab in 2018, and together they began to engineer the Crispr creation glimpsed in the young post-doc’s imagination. After much trial and error, they wound up with something even more powerful. The system, which Liu’s lab has dubbed “prime editing”, can for the first time make virtually any alteration—additions, deletions, swapping any single letter for any other—without severing the DNA double helix. “If Crispr-Cas9 is like scissors and base editors are like pencils, then you can think of prime editors to be like word processors”, Liu told reporters in a press briefing.

    Why is that a big deal? Because with such fine-tuned command of the genetic code, prime editing could, according to Liu’s calculations, correct around 89% of the mutations that cause heritable human diseases. Working in human cell cultures, his lab has already used prime editors to fix the genetic glitches that cause sickle cell anemia, cystic fibrosis, and Tay-Sachs disease. Those are just three of more than 175 edits the group unveiled today in a scientific article published in the journal Nature.

    The work “has a strong potential to change the way we edit cells and be transformative”, says Gaétan Burgio, a geneticist at the Australian National University who was not involved in the work, in an email. He was especially impressed at the range of changes prime editing makes possible, including adding up to 44 DNA letters and deleting up to 80. “Overall, the editing efficiency and the versatility shown in this paper are remarkable.”

    …The bigger problem, according to folks like Burgio, is that prime editors are huge, in molecular terms. They’re so big that they won’t pack up neatly into the viruses researchers typically use to shuttle editing components into cells. These colossi might even clog a microinjection needle, making it difficult to deliver into mouse (or potentially human) embryos. That, says Burgio, could make prime editing a lot less practical than existing techniques.

  19. ⁠, Noah Davidsohn, Matthew Pezzone, Andyna Vernet, Amanda Graveline, Daniel Oliver, Shimyn Slomovic, Sukanya Punthambaker, Xiaoming Sun, Ronglih Liao, Joseph V. Bonventre, George M. Church (2019-11-19):

    Significance: Human and animal longevity is directly bound to their health span. While previous studies have provided evidence supporting this connection, therapeutic implementation of this knowledge has been limited. Traditionally, diseases are researched and treated individually, which ignores the interconnectedness of age-related conditions, necessitates multiple treatments with unrelated substances, and increases the accumulative risk of side effects. In this study, we address and overcome this deadlock by creating adeno-associated virus (AAV)-based antiaging gene therapies for simultaneous treatment of several age-related diseases. We demonstrate the modular and extensible nature of combination gene therapy by testing therapeutic AAV cocktails that confront multiple diseases in a single treatment. We observed that 1 treatment comprising 2 AAV gene therapies was efficacious against all 4 diseases.

    Comorbidity is common as age increases, and currently prescribed treatments often ignore the interconnectedness of the involved age-related diseases. The presence of any one such disease usually increases the risk of having others, and new approaches will be more effective at increasing an individual’s health span by taking this systems-level view into account. In this study, we developed gene therapies based on 3 longevity-associated genes (fibroblast growth factor 21 [FGF21], αKlotho, soluble form of mouse transforming growth factor-β receptor 2 [sTGFβR2]) delivered using adeno-associated viruses and explored their ability to mitigate 4 age-related diseases: obesity, type II diabetes, heart failure, and renal failure. Individually and combinatorially, we applied these therapies to disease-specific mouse models and found that this set of diverse pathologies could be effectively treated and in some cases, even reversed with a single dose. We observed a 58% increase in heart function in ascending aortic constriction-induced heart failure, a 38% reduction in α-smooth muscle actin (αSMA) expression, and a 75% reduction in renal medullary atrophy in mice subjected to unilateral ureteral obstruction, and a complete reversal of obesity and diabetes phenotypes in mice fed a constant high-fat diet. Crucially, we discovered that a single formulation combining 2 separate therapies into 1 was able to treat all 4 diseases. These results emphasize the promise of gene therapy for treating diverse age-related ailments and demonstrate the potential of combination gene therapy that may improve health span and longevity by addressing multiple diseases at once.

    [Keywords: gene therapy, AAV, combination therapy, age-related diseases]

  20. ⁠, Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver (2019-11-19):

    Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown.

    In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function.

    When evaluated on 57 different Atari games—the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled—our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
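
    The core of MuZero can be caricatured in a few lines: a representation function h, a dynamics function g, and a prediction function f, unrolled entirely in latent space for planning. A toy sketch (tiny random linear maps standing in for the deep networks; everything here is illustrative, not DeepMind’s code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W_h = rng.normal(size=(8, 4))       # h: observation -> latent state
    W_g = rng.normal(size=(8, 8 + 1))   # g: (state, action) -> next state (+ toy reward)
    W_p = rng.normal(size=(3, 8))       # f: state -> policy logits...
    w_v = rng.normal(size=8)            # ...and value

    def h(obs):
        return np.tanh(W_h @ obs)

    def g(state, action):
        s = np.tanh(W_g @ np.append(state, action))
        return float(s[0]), s           # toy reward head; MuZero uses a separate head

    def f(state):
        return W_p @ state, float(w_v @ state)

    def rollout_value(obs, actions, discount=0.997):
        """Score an action sequence by unrolling the learned model -- no simulator."""
        s, total, d = h(obs), 0.0, 1.0
        for a in actions:
            r, s = g(s, a)
            total, d = total + d * r, d * discount
        _, v = f(s)
        return total + d * v            # discounted rewards plus bootstrapped value

    print(rollout_value(np.ones(4), actions=[0, 1, 2]))
    ```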

  21. ⁠, Steven Kapturowski, Georg Ostrovski, John Quan, Remi Munos, Will Dabney (2018-09-27; reinforcement-learning):

    Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN (R2D2), quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games.

    [Keywords: RNN, LSTM, experience replay, distributed training, reinforcement learning]

    TL;DR: Investigation on combining recurrent neural networks and experience replay leading to state-of-the-art agent on both Atari-57 and DMLab-30 using single set of hyper-parameters.
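
    The paper’s key replay detail is easy to state in code. A schematic of the “stored state + burn-in” strategy (the `lstm`, `sequence`, and `loss_fn` interfaces are hypothetical, for illustration only):

    ```python
    def replay_update(lstm, loss_fn, sequence, burn_in=40):
        """One learning step on a replayed sequence, R2D2-style."""
        h = sequence.stored_state            # hidden state recorded by the actor; may be stale
        for obs in sequence.observations[:burn_in]:
            _, h = lstm(obs, h)              # burn-in: refresh the state, produce no targets
        losses = []
        for obs, target in zip(sequence.observations[burn_in:], sequence.targets[burn_in:]):
            q_values, h = lstm(obs, h)       # train only on the post-burn-in portion
            losses.append(loss_fn(q_values, target))
        return sum(losses) / len(losses)
    ```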

  22. ⁠, Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, Henryk Michalewski (2019-03-01):

    Model-free reinforcement learning (RL) can be used to learn effective policies for complex tasks, such as Atari games, even from image observations. However, this typically requires very large amounts of interaction—substantially more, in fact, than a human would need to learn the same games. How can people learn so quickly? Part of the answer may be that people can learn how the game works and predict which actions will lead to desirable outcomes. In this paper, we explore how video prediction models can similarly enable agents to solve Atari games with fewer interactions than model-free methods. We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models, and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting. Our experiments evaluate SimPLe on a range of Atari games in the low-data regime of 100k interactions between the agent and the environment, which corresponds to two hours of real-time play. In most games SimPLe outperforms state-of-the-art model-free algorithms, in some games by over an order of magnitude.
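
    The SimPLe outer loop itself is compact; a schematic under stated assumptions (the helper functions are hypothetical names, and 15 × 6,400 ≈ the paper’s ~100k real interactions):

    ```python
    def simple(env, policy, world_model, iterations=15, real_steps=6400):
        """Alternate: gather a little real experience, fit the video model,
        then train the policy entirely inside the learned model."""
        dataset = []
        for _ in range(iterations):
            dataset += collect_real_interactions(env, policy, steps=real_steps)
            world_model.fit(dataset)                    # video-prediction training
            train_policy_in_model(policy, world_model)  # e.g., PPO on imagined rollouts
        return policy
    ```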

  23. ⁠, Rich Sutton (2019-03-13):

    The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore’s law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.

    …In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess…A similar pattern of research progress was seen in computer Go, only delayed by a further 20 years. Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale…In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge—knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods…In computer vision…Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.

    …We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that (1) AI researchers have often tried to build knowledge into their agents, (2) this always helps in the short term, and is personally satisfying to the researcher, but (3) in the long run it plateaus and even inhibits further progress, and (4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.

    [My meme summary: the GPT-3 bitter lesson.]

  24. ⁠, Irene Solaiman, Jack Clark, Miles Brundage (2019-11-05):

    As the final model release of GPT-2’s staged release, we’re releasing the largest version (1.5B parameters) of GPT-2 along with code and model weights to facilitate detection of outputs of GPT-2 models. While there have been larger language models released since August, we’ve continued with our original staged release plan in order to provide the community with a test case of a full staged release process. We hope that this test case will be useful to developers of future powerful models, and we’re actively continuing the conversation with the AI community on responsible publication.

    Our findings:

    1. Humans find GPT-2 outputs convincing.
    2. GPT-2 can be fine-tuned for misuse.
    3. Detection is challenging.
    4. We’ve seen no strong evidence of misuse so far.
    5. We need standards for studying bias.

    Next steps: Our experience with GPT-2 over the past 9 months has given us valuable insight into the challenges and opportunities for creating responsible publication norms in AI. We’re continuing our work on this issue via participation in the Partnership on AI’s “Responsible Publication Norms for Machine Learning” project and discussions with our colleagues in the research community.

  25. ⁠, Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, Jasmine Wang (2019-08-24):

    Large language models have a range of beneficial uses: they can assist in prose, poetry, and programming; analyze dataset biases; and more. However, their flexibility and generative capabilities also raise misuse concerns. This report discusses OpenAI’s work related to the release of its GPT-2 language model. It discusses staged release, which allows time between model releases to conduct risk and benefit analyses as model sizes increased. It also discusses ongoing partnership-based research and provides recommendations for better coordination and responsible publication in AI.

  26. ⁠, Adam King (2019):

    [Interactive web interface to GPT-2-1.5b.] See how a modern neural network completes your text. Type a custom snippet or try one of the examples…Built by Adam King (@AdamDanielKing) as an easier way to play with OpenAI’s new machine learning model. This site runs the full-sized GPT-2 model, called 1558M.

  27. ⁠, Hans Moravec (1998):

    This paper describes how the performance of AI machines tends to improve at the same pace that AI researchers get access to faster hardware. The processing power and memory capacity necessary to match general intellectual performance of the human brain are estimated. Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s…At the present rate, computers suitable for human-like robots will appear in the 2020s. Can the pace be sustained for another three decades?

    …By 1990, entire careers had passed in the frozen winter of 1-MIPS computers, mainly from necessity, but partly from habit and a lingering opinion that the early machines really should have been powerful enough. In 1990, 1 MIPS cost $1,000 in 1990 dollars (≈$2,338 today) in a low-end personal computer. There was no need to go any lower. Finally spring thaw has come. Since 1990, the power available to individual AI and robotics programs has doubled yearly, to 30 MIPS by 1994 and 500 MIPS by 1998. Seeds long ago alleged barren are suddenly sprouting. Machines read text, recognize speech, even translate languages. Robots drive cross-country, crawl across Mars, and trundle down office corridors. In 1996 a theorem-proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years. And it is still only spring. Wait until summer.

    …The mental steps underlying good human chess playing and theorem proving are complex and hidden, putting a mechanical interpretation out of reach. Those who can follow the play naturally describe it instead in mentalistic language, using terms like strategy, understanding and creativity. When a machine manages to be simultaneously meaningful and surprising in the same rich way, it too compels a mentalistic interpretation. Of course, somewhere behind the scenes, there are programmers who, in principle, have a mechanical interpretation. But even for them, that interpretation loses its grip as the working program fills its memory with details too voluminous for them to grasp.

    As the rising flood reaches more populated heights, machines will begin to do well in areas a greater number can appreciate. The visceral sense of a thinking presence in machinery will become increasingly widespread. When the highest peaks are covered, there will be machines that can interact as intelligently as any human on any subject. The presence of minds in machines will then become self-evident.

    Faster than Exponential Growth in Computing Power: The number of MIPS in $1,000 (1998 dollars; ≈$1,854 today) of computer from 1900 to the present. Steady improvements in mechanical and electromechanical calculators before World War II had increased the speed of calculation a thousandfold over manual methods from 1900 to 1940. The pace quickened with the appearance of electronic computers during the war, and 1940 to 1980 saw a million-fold increase. The pace has been even quicker since then, a pace which would make human-like robots possible before the middle of the next century. The vertical scale is logarithmic, the major divisions represent thousandfold increases in computer performance. Exponential growth would show as a straight line, the upward curve indicates faster than exponential growth, or, equivalently, an accelerating rate of innovation. The reduced spread of the data in the 1990s is probably the result of intensified competition: underperforming machines are more rapidly squeezed out. The numerical data for this power curve are presented in the appendix.
    The big freeze: From 1960 to 1990 the cost of computers used in AI research declined, as their numbers dilution absorbed computer-efficiency gains during the period, and the power available to individual AI programs remained almost unchanged at 1 MIPS, barely insect power. AI computer cost bottomed in 1990, and since then power has doubled yearly, to several hundred MIPS by 1998. The major visible exception is computer chess (shown by a progression of knights), whose prestige lured the resources of major computer companies and the talents of programmers and machine designers. Exceptions also exist in less public competitions, like petroleum exploration and intelligence gathering, whose high return on investment gave them regular access to the largest computers.
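
    The essay’s own endpoints imply a doubling time slightly under a year, a quick arithmetic check:

    ```python
    from math import log

    # 1 MIPS (1990) -> 500 MIPS (1998), per the text above:
    doubling_time = 8 * log(2) / log(500 / 1)
    print(f"{doubling_time:.2f} years")  # ~0.89 years, i.e. slightly faster than yearly doubling
    ```
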
  28. ⁠, Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, Honglak Lee (2019-11-05):

    Predicting future video frames is extremely challenging, as there are many factors of variation that make up the dynamics of how frames change through time. Previously proposed solutions require complex inductive biases inside network architectures with highly specialized computation, including segmentation masks, optical flow, and foreground and background separation. In this work, we question if such handcrafted architectures are necessary and instead propose a different approach: finding minimal inductive bias for video prediction while maximizing network capacity. We investigate this question by performing the first large-scale empirical study and demonstrate state-of-the-art performance by learning large models on three different datasets: one for modeling object interactions, one for modeling human motion, and one for modeling car driving.

  29. ⁠, Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, Honglak Lee (2019):

    Sample videos generated by large-scale RNNs:

    • 128×128 Videos:

      • Human 3.6M
      • KITTI Driving
    • Video Comparisons (64×64):

      • Towel pick
      • Human 3.6M
      • KITTI Driving
      • Towel pick
      • Human 3.6M
      • KITTI Driving
  30. ⁠, Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov (2019-11-05):

    This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.
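
    A minimal usage sketch, assuming the HuggingFace `transformers` port of the released checkpoint (`xlm-roberta-base`); illustrative only, not the authors’ evaluation code:

    ```python
    from transformers import pipeline

    unmask = pipeline("fill-mask", model="xlm-roberta-base")
    # One model, a hundred languages -- the same masked-LM head works across them:
    print(unmask("Paris is the <mask> of France.")[0]["token_str"])
    print(unmask("Paris est la <mask> de la France.")[0]["token_str"])
    ```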

  31. ⁠, FAIR (2019-11-07):

    A new model, called XLM-R, that uses self-supervised training techniques to achieve state-of-the-art performance in cross-lingual understanding, a task in which a model is trained in one language and then used with other languages without additional training data. Our model improves upon previous multilingual approaches by incorporating more training data and languages—including so-called low-resource languages, which lack extensive labeled and unlabeled data sets.

    XLM-R has achieved the best results to date on four cross-lingual understanding benchmarks, with increases of 4.7 percent average accuracy on the XNLI cross-lingual natural language inference data set, 8.4 percent average F1 score on the recently introduced MLQA question answering data set, and 2.1 percent F1 score on NER. After extensive experiments and ablation studies, we’ve shown that XLM-R is the first multilingual model to outperform traditional monolingual baselines that rely on pretrained models.

    In addition to sharing our results, we’re releasing the code and models that we used for this research. Those resources can be found on our fairseq, PyText and XLM repositories on GitHub.

    …With people on Facebook posting content in more than 160 languages, XLM-R represents an important step toward our vision of providing the best possible experience on our platforms for everyone, regardless of what language they speak. Potential applications include serving highly accurate models for identifying hate speech and other policy-violating content across a wide range of languages. As this work helps us transition toward a one-model-for-many-languages approach—as opposed to one model per language—it will also make it easier to continue launching high-performing products in multiple languages at once.

  32. ⁠, C. Daniel Freeman, Luke Metz, David Ha (2019-10-29):

    Much of model-based reinforcement learning involves learning a model of an agent’s world, and training an agent to leverage this model to perform a task more efficiently. While these models are demonstrably useful for agents, every naturally occurring model of the world of which we are aware—e.g., a brain—arose as the byproduct of competing evolutionary pressures for survival, not minimization of a supervised forward-predictive loss via gradient descent. That useful models can arise out of the messy and slow optimization process of evolution suggests that forward-predictive modeling can arise as a side-effect of optimization under the right circumstances. Crucially, this optimization process need not explicitly be a forward-predictive loss. In this work, we introduce a modification to traditional reinforcement learning which we call observational dropout, whereby we limit the agent’s ability to observe the real environment at each timestep. In doing so, we can coerce an agent into learning a world model to fill in the observation gaps during reinforcement learning. We show that the emerged world model, while not explicitly trained to predict the future, can help the agent learn key skills required to perform well in its environment.

    [Image caption: “Our agents are only given infrequent observations of the real environment. As a side effect of optimizing performance in this setting, a ‘world model’ emerges. We show the true dynamics in color, with full saturation denoting frames the policy can see. The black and white outline shows the state of the emergent world model. These world models exhibit similar, but not identical, dynamics to forward-predictive models, but only model ‘important’ aspects of the environment.”]
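
    The mechanism reduces to a coin flip per timestep over which observation the policy sees. A schematic (gym-style `env`, `policy`, and `world_model` interfaces are assumed here for illustration):

    ```python
    import numpy as np

    def rollout(env, policy, world_model, p_observe=0.1, steps=1000, seed=0):
        """Observational dropout: with probability 1 - p_observe, the agent sees
        its world model's prediction instead of the real observation."""
        rng = np.random.default_rng(seed)
        obs = env.reset()                              # the first frame is always real
        for _ in range(steps):
            action = policy(obs)
            real_obs, reward, done, _ = env.step(action)
            predicted_obs = world_model(obs, action)   # the model fills the gap
            obs = real_obs if rng.random() < p_observe else predicted_obs
            if done:
                break
    ```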

  33. ⁠, C. Daniel Freeman, Luke Metz, David Ha (2019-10-29):

    Same abstract as the preceding entry; project page with videos of the results: https://learningtopredict.github.io/

  34. ⁠, Pramook Khungurn (2019-11-25):

    Fascinated by virtual YouTubers, I put together a deep neural network system that makes becoming one much easier. More specifically, the network takes as input an image of an anime character’s face and a desired pose, and it outputs another image of the same character in the given pose.

  35. https://old.reddit.com/link/e1k092/video/jqb6eziwgv041/player

  36. https://www.youtube.com/watch?v=T1Gp-RxFZwU

  37. ⁠, Nick Walton (2019-11-26):

    [Demonstration dialogue of interacting with a GPT-2-1.5b trained on text adventures/RPGs. The player chooses to join a band of orcs as a musician and tries to steer the game towards orc rights, with moderate success, reaching the Emperor himself.]

    In the first AI Dungeon, we created and deployed a deep learning-generated text adventure using OpenAI’s 124M-parameter GPT-2 model. In the interest of computational cost, possible actions and their results were generated and given to the player to choose from.

    In AI Dungeon 2 we do away with pregenerated actions and allow the user to enter any action. The model then continues generating the story resulting from that action. We also upgrade the size of our model to OpenAI’s largest 1.5B-parameter model and fine-tune it on a collection of text adventures obtained from chooseyourstory.com. Following the example of the CTRL paper, we also modified the sampling of our model to add a penalty to already-generated words, to reduce the repetition issues that GPT-2 has and enable a lower temperature setting.

    When playing AI Dungeon 2 the player can choose from several possible settings and characters. These choices generate a starting prompt and context sentence. The context sentence is always included as input to the model to help keep it grounded to the chosen setting while the starting prompt is the beginning of the story.
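
    The repetition-penalty trick in isolation, in the spirit of the CTRL paper’s penalized sampling which the post says it followed (constants and details here are illustrative, not AI Dungeon’s actual code):

    ```python
    import numpy as np

    def sample_next_token(logits, generated_ids, penalty=1.2, temperature=0.8, seed=None):
        """Penalize already-generated tokens, then sample at low temperature."""
        rng = np.random.default_rng(seed)
        logits = logits.astype(float).copy()
        for t in set(generated_ids):
            # CTRL-style: divide positive logits, multiply negative ones
            logits[t] = logits[t] / penalty if logits[t] > 0 else logits[t] * penalty
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    print(sample_next_token(np.array([2.0, 1.0, -1.0]), generated_ids=[0]))
    ```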

    [Dialogue sample:]

    …After the Gnoll War ends, the orcs decide to hold a big celebration. You immediately organize concert to entertain everyone. It goes well and you even win a few awards for your work. However, you still aren’t satisfied and you decide to do something else.

    > write plays for the orcs to perform in

    You write a series of plays based on the stories of the orcs. These works are performed throughout the Empire and they are very popular. You also take an interest in the orcs’ future and begin to study their language and culture. In time you even develop a liking for the orcs themselves.

    > found a human-orc friendship organization

    While you were studying the orcs, you came across a group called the Human-Orc Friendship Association (HOOFA). While they weren’t exactly like the Orcs, they had similar goals. You decided to form a separate chapter just for them.

    > write some pamphlets support orc-human friendship

    You wrote a pamphlet explaining what the HOOFA was and how it worked. You also explained that the orcs were the only race that could truly understand each other and that humans were inferior to the orcs.

    > promote inter-racial marriage between humans and orcs

    In order to promote inter-racial marriage between humans and orcs, you started a branch of the HOOFA called the Inter-Race Marriage Bureau (IRMB) which was located in the city of Kishak…

  38. ⁠, Richard Sever, Ted Roeder, Samantha Hindle, Linda Sussman, Kevin-John Black, Janet Argentine, Wayne Manos, John R. Inglis (2019-11-06):

    The traditional publication process delays dissemination of new research, often by months, sometimes by years. Preprint servers decouple dissemination of research papers from their evaluation and certification by journals, allowing researchers to share work immediately, receive feedback from a much larger audience, and provide evidence of productivity long before formal publication. Launched in 2013 as a non-profit community service, the bioRxiv server has brought preprint practice to the life sciences and recently posted its 64,000th manuscript. The server now receives more than four million views per month and hosts papers spanning all areas of biology. Initially dominated by evolutionary biology, genetics/genomics and computational biology, bioRxiv has been increasingly populated by papers in neuroscience, cell and developmental biology, and many other fields. Changes in journal and funder policies that encourage preprint posting have helped drive adoption, as has the development of bioRxiv technologies that allow authors to transfer papers easily between the server and journals. A bioRxiv user survey found that 42% of authors post their preprints prior to journal submission whereas 37% post concurrently with journal submission. Authors are motivated by a desire to share work early; they value the feedback they receive, and very rarely experience any negative consequences of preprint posting. Rapid dissemination via bioRxiv is also encouraging new initiatives that experiment with the peer review process and the development of novel approaches to literature filtering and assessment.

  39. ⁠, Susannah Cahalan (2019-11-02):

    [Summary of investigation into David Rosenhan: like the Robbers Cave or Stanford Prison Experiment, his famous fake-insane patients experiment cannot be verified and many troubling anomalies have come to light. Cahalan is unable to find almost all of the supposed participants, Rosenhan hid his own participation & his own medical records show he fabricated details of his case, he threw out participant data that didn’t match his narrative, reported numbers are inconsistent, Rosenhan abandoned a lucrative book deal about it and avoided further psychiatric research, and showed some character traits of a fabulist eager to please.]

  40. ⁠, Alison Abbott (2019-10-29):

    Although Rosenhan died in 2012, Cahalan easily tracked down his archives, held by social psychologist Lee Ross, his friend and colleague at Stanford. They included the first 200 pages of Rosenhan’s unfinished draft of a book about the experiment…Ross warned her that Rosenhan had been secretive. As her attempts to identify the pseudonymous pseudopatients hit one dead end after the other, she realized Ross’s prescience.

    The archives did allow Cahalan to piece together the beginnings of the experiment in 1969, when Rosenhan was teaching psychology at Swarthmore College in Pennsylvania…Rosenhan cautiously decided to check things out for himself first. He emerged humbled from nine traumatizing days in a locked ward, and abandoned the idea of putting students through the experience.

    …According to Rosenhan’s draft, it was at a conference dinner that he met his first recruits: a recently retired psychiatrist and his psychologist wife. The psychiatrist’s sister also signed up. But the draft didn’t explain how, when and why subsequent recruits signed up. Cahalan interviewed numerous people who had known Rosenhan personally or indirectly. She also chased down the medical records of individuals whom she suspected could have been involved in the experiment, and spoke with their families and friends. But her sleuthing brought her to only one participant, a former Stanford graduate student called Bill Underwood.

    …Underwood and his wife were happy to talk, but two of their comments jarred. Rosenhan’s draft described how he prepared his volunteers very carefully, over weeks. Underwood, however, remembered only brief guidance on how to avoid swallowing medication by hiding pills in his cheek. His wife recalled Rosenhan telling her that he had prepared writs of habeas corpus for each pseudopatient, in case an institution would not discharge them. But Cahalan had already worked out that that wasn’t so.

    Comparing the Science report with documents in Rosenhan’s archives, she also noted many mismatches in numbers. For instance, Rosenhan’s draft, and the Science paper, stated that Underwood had spent seven days in a hospital with 8,000 patients, whereas he spent eight days in a hospital with 1,500 patients.

    When all of the leads from her contacts came to nothing, she published a commentary in The Lancet Psychiatry asking for help in finding them—to no avail. She found herself asking: had Rosenhan invented them?

  41. ⁠, Robert L. Spitzer (1975):

    Rosenhan’s “On Being Sane in Insane Places” is pseudoscience presented as science. Just as his pseudopatients were diagnosed at discharge as “schizophrenia in remission”, so a careful examination of this study’s methods, results, and conclusion leads to a diagnosis of “logic in remission”. Rosenhan’s study proves that pseudopatients are not detected by psychiatrists as having simulated signs of mental illness. This rather unremarkable finding is not relevant to the real problems of the reliability and validity of psychiatric diagnosis and only serves to obscure them. A correct interpretation of these data contradicts the conclusions that were drawn. In the setting of a psychiatric hospital, psychiatrists seem remarkably able to distinguish the “sane” from the “insane”.

  42. ⁠, Jonatan Pallesen (2019-02-19):

    Blind auditions and gender discrimination: A seminal paper from 2000 investigated the impact of blind auditions in orchestras, and found that they increased the proportion of women in symphony orchestras. I investigate the study, and find that there is no good evidence presented. [The study is temporally confounded by a national trend of increasing female participation, does not actually establish any particular correlate of blind auditions, much less randomized experiments of blinding, the dataset is extremely underpowered, the effects cited in coverage cannot be found anywhere in the paper, and the critical comparisons which are there are not even statistically significant in the first place. None of these caveats are included in the numerous citations of the study as “proving” discrimination against women.]

  43. https://statmodeling.stat.columbia.edu/2019/05/11/did-blind-orchestra-auditions-really-benefit-women/

  44. 2019-wilmot.pdf: ⁠, Michael P. Wilmot, Deniz S. Ones (2019-11-12; conscientiousness):

    Significance: Conscientiousness (C) is the most potent noncognitive predictor of occupational performance. However, questions remain about how C relates to a plethora of occupational variables, what its defining characteristics and functions are in occupational settings, and whether its performance relation differs across occupations. To answer these questions, we quantitatively review 92 meta-analyses reporting relations to 175 occupational variables. Across variables, results reveal a substantial mean effect of ρM = 0.20.

    We then use results to synthesize 10 themes that characterize C in occupational settings. Finally, we discover that performance effects of C are weaker in high-complexity versus low-complexity to moderate-complexity occupations. Thus, for optimal occupational performance, we encourage decision makers to match C’s goal-directed motivation and behavioral restraint to more predictable environments.

    Evidence from more than 100 y of research indicates that conscientiousness (C) is the most potent noncognitive construct for occupational performance. However, questions remain about the magnitudes of its effects across occupational variables, its defining characteristics and functions in occupational settings, and potential moderators of its performance relation. Drawing on 92 unique meta-analyses reporting effects for 175 distinct variables, which represent n > 1.1 million participants across k > 2,500 studies, we present the most comprehensive, quantitative review and synthesis of the occupational effects of C available in the literature. Results show C has effects in a desirable direction for 98% of variables and a grand mean of ρM = 0.20 (SD = 0.13), indicative of a potent, pervasive influence across occupational variables. Using the top 33% of effect sizes (ρ ≥ 0.24), we synthesize 10 characteristic themes of C’s occupational functioning: (1) motivation for goal-directed performance, (2) preference for more predictable environments, (3) interpersonal responsibility for shared goals, (4) commitment, (5) perseverance, (6) self-regulatory restraint to avoid counterproductivity, and (7) proficient performance—especially for (8) conventional goals, (9) requiring persistence. Finally, we examine C’s relation to performance across 8 occupations. Results indicate that occupational complexity moderates this relation. That is, (10) high occupational complexity versus low-to-moderate occupational complexity attenuates the performance effect of C. Altogether, results suggest that goal-directed performance is fundamental to C and that motivational engagement, behavioral restraint, and environmental predictability influence its optimal occupational expression. We conclude by discussing applied and policy implications of our findings.

    [Keywords: conscientiousness, personality, meta-analysis, second-order meta-analysis, occupations]

  45. ⁠, Chandra Sripada, Mike Angstadt, Saige Rutherford (2018-09-09):

    Identifying brain-based markers of general cognitive ability, i.e., “intelligence”, has been a longstanding goal of cognitive and clinical neuroscience. Previous studies focused on relatively static, enduring features such as gray matter volume and white matter structure. In this report, we investigate prediction of intelligence based on task activation patterns during the N-back working memory task as well as six other tasks in the Human Connectome Project dataset, encompassing 19 task contrasts. We find that whole brain task activation patterns are a highly effective basis for prediction of intelligence, achieving a 0.68 correlation with intelligence scores in an independent sample, which exceeds results reported from other modalities. Additionally, we show that tasks that tap executive processing and that are more cognitively demanding are particularly effective for intelligence prediction. These results suggest a picture analogous to treadmill testing for cardiac function: Placing the brain in an activated task state improves brain-based prediction of intelligence.
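
    The headline number here is an out-of-sample correlation between predicted and measured intelligence scores. As a generic sketch of that evaluation setup—every detail below (features, penalized linear model, sample sizes, simulated data) is a placeholder assumption of mine, not Sripada et al.’s actual pipeline—one might fit a model on activation features from one sample and correlate its predictions with scores in a held-out sample:

    ```python
    # Generic sketch of out-of-sample trait prediction from activation maps:
    # fit a linear model on one sample, report the correlation between predicted
    # and observed scores in an independent sample. All details are assumptions.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_subjects, n_features = 800, 5000                  # hypothetical sizes
    X = rng.standard_normal((n_subjects, n_features))   # task activation features
    w = rng.standard_normal(n_features)                 # latent "true" weights
    g = X @ w / np.sqrt(n_features) + rng.standard_normal(n_subjects)  # simulated scores

    X_tr, X_te, g_tr, g_te = train_test_split(X, g, test_size=0.25, random_state=0)
    model = Ridge(alpha=1000.0).fit(X_tr, g_tr)         # penalized to handle p >> n
    r, _ = pearsonr(model.predict(X_te), g_te)
    print(f"held-out prediction r = {r:.2f}")
    ```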

  46. 2006-porter.pdf: ⁠, Jess Porter, Brent Craven, Rehan M. Khan, Shao-Ju Chang, Irene Kang, Benjamin Judkewitz, Jason Volpe, Gary Settles, Noam Sobel (2006-12-17; psychology):

    Whether mammalian scent-tracking is aided by inter-nostril comparisons is unknown. We assessed this in humans and found that (1) humans can scent-track, (2) they improve with practice, (3) the human nostrils sample spatially distinct regions separated by ~3.5 cm and, critically, (4) scent-tracking is aided by inter-nostril comparisons. These findings reveal fundamental mechanisms of scent-tracking and suggest that the poor reputation of human olfaction may reflect, in part, behavioral demands rather than ultimate abilities.

  47. 2006-porter-humanscenttracking-41593_2007_bfnn1819_moesm2_esm.mp4

  48. ⁠, John P. McGann (2017-05-12):

    It is commonly believed that humans have a poor sense of smell compared to other mammalian species. However, this idea derives not from empirical studies of human olfaction but from a famous 19th-century anatomist’s hypothesis that the evolution of human free will required a reduction in the proportional size of the brain’s olfactory bulb.

    The human olfactory bulb is actually quite large in absolute terms and contains a similar number of neurons to that of other mammals. Moreover, humans have excellent olfactory abilities. We can detect and discriminate an extraordinary range of odors, we are more sensitive than rodents and dogs for some odors, we are capable of tracking odor trails, and our behavioral and affective states are influenced by our sense of smell.

  49. https://en.wikipedia.org/w/index.php?title=Polymorphism_(materials_science)&oldid=999770848#Disappearing_polymorphs

  50. ⁠, Dejan-Krešimir Bučar, Robert W. Lancaster, Joel Bernstein (2015-06-01):

    Nearly twenty years ago, Dunitz and Bernstein described a selection of intriguing cases of polymorphs that disappear. The inability to obtain a crystal form that has previously been prepared is indeed a frustrating and potentially serious problem for solid-state scientists. This Review discusses recent occurrences and examples of disappearing polymorphs (as well as the emergence of elusive crystal forms) to demonstrate the enduring relevance of this troublesome, but always captivating, phenomenon in solid-state research. A number of these instances have been central issues in patent litigations. This Review, therefore, also highlights the complex relationship between crystal chemistry and the law.

  51. 1995-dunitz.pdf: ⁠, Jack D. Dunitz, Joel Bernstein (1995; science):

    When a compound exhibits polymorphism—the existence of more than one crystal structure—it may be important to obtain a particular polymorph under controlled and reproducible conditions. However, this is not always easy to achieve. Tales of difficulties in obtaining crystals of a particular known form or in reproducing results from another laboratory (or even from one’s own!) abound. Indeed, there are cases where it was difficult to obtain a given polymorphic form even though this had previously been obtained routinely over long time periods. Several monographs contain explicit or passing references to these problems, but much of this lore has gone undocumented, especially in the last 30 years or so. In this Account we present and discuss old and new examples.

  52. 1960-campbell.pdf#page=5: ⁠, John W. Campbell (1960; science):

    [From Analog Magazine, October 1960 (v66, #2), pg87–88.

    Part of a larger article on growing crystals and self-organization. Campbell describes two examples:

    1. glycerine: attempts to freeze it, following a German chemist’s published research, failed and yielded only a glass, until they contacted him for information and he sent back a sample of his glycerine, which ‘contaminated’ their own samples—after which their glycerine always froze and never formed a glass.
    2. EDT: Bell Labs was growing quartz-substitute crystals called EDT which worked perfectly, replacing expensive quartz, until one day a new polymorph showed up, destroying all EDT crystal production. All attempts to recreate EDT failed, but fortunately, the problem of growing quartz had been solved in the meantime, so it was ultimately not a disaster.]

  53. 1950-kohman.pdf: ⁠, G. T. Kohman (1950-01-01; science):

    [Bell Labs account of a bizarre chemical problem.

    Bell manufactured in substantial volume ethylene diamine tartrate (EDT), a piezoelectric synthetic crystal, which it used as a substitute for scarce quartz in telephone line components. While the crystals were usually easy to grow—simply by slicing up seed crystals and dunking them in solution—the seeds began sprouting a different EDT crystal, one which grew well in the existing solutions, in solutions prepared from scratch, and in solutions made from the new crystals as well, yet was completely worthless. This had happened despite no changes in the manufacturing process.

    Investigation revealed the new crystal was in fact EDT, but a kind with an additional water molecule. This kind was more stable at lower temperatures than the original, and the manufacturing happened to be done slightly below the critical crossover point—so the ‘superior’ EDT form had finally appeared spontaneously and now infected all manufacturing. (Because of the instability, any contact with moisture, such as cutting crystals with water jets, would make the new form “sprout like fungus growth”.)

    The solution was to eliminate moisture as much as possible, and start manufacturing at temperatures above the crossover point, where the new kind is disfavored.]

  54. ⁠, Derek Lowe (2019-11-26):

    …as the case of ritonavir shows, you can have a compound that has been worked on for years and produced commercially in bulk that hits upon a more stable solid phase. And since these more stable crystal forms tend to have very different solubilities, the effect on a drug development program (or in ritonavir’s case, a drug that is already rolling off the manufacturing line!) can be extremely unwelcome. When this happens, it can seem as if the original crystal form is going extinct and never to be seen again, an effect that seems almost supernatural. But as these papers note, the “unintentional crystalline seed” hypothesis is surely the explanation.

    …What’s more, a given cubic foot of air could easily contain a million or so particles under a half-micron size without anyone noticing at all. Consider also that such too-small-to-see particles can lurk in what looks like a clear solution, and you have plenty of opportunities to spread a given polymorph around by what seems like magic. The 2015 paper tracks down several examples of the spread of such material…It’s also not true that polymorphs can truly go extinct, either, although it’s understandable that it might appear that way. There are always conditions out there to obtain the old crystalline form, although there is no requirement that these be easy to find (!) Indeed, the original form of ritonavir was recovered and brought back into production after a great deal of effort, although not before HIV-positive patients had seen their medicine disappear from the shelves for months (and not before Abbott had lost a quarter of a billion dollars along the way).

    …There are compounds for which only one crystalline form has ever been reported, and there are others with two dozen polymorphs (and when that’s happening, you can be pretty sure that there are some others that haven’t shown up yet). Only one polymorph of aspirin was known until 2005, when another turned up.

  55. description.html: ⁠, International Association of Physicians in AIDS Care (IAPAC) (2000; biology  /​ ​​ ​2000-iapac-norvir):

    [Set of online resources published by IAPAC summarizing the Norvir disappearing-polymorph AIDS crisis. As summarized by :]

    Public relations footnote: As noted, Abbott’s initial encounter with Form II and its inability to produce Form I led to the disappearance of the drug Norvir® from the market, leaving tens of thousands of AIDS patients without medication. This led to a serious public relations problem for Abbott. To allay public concern, the company held a number of interviews and press conferences, at which senior Abbott officials appeared in order to answer questions. The transcripts were originally published on the website of the International Association of Physicians in AIDS Care, but no longer appear there. Some excerpts vividly portray the situation that can arise when a disappearing polymorph is encountered:

    “There was no gradual trend. Something occurred that caused the new form to occur……There was no early warning.”

    “We, quite honestly, have not been able to pinpoint the precise conditions which led to the appearance of the new crystal form. We now know that the new form is, in fact, more stable than the earlier form, so nature would appear to favor it……Form II is new.”

    “We did not know how to detect the new form. We did not know how to test for it. We did not know what caused it. We didn’t know how to prevent it. And we kept asking the question, why now?……We did not know the physical properties of the new form……We did not know how to clean it, and we did not know how to get rid of it.”

    “……our initial activities were directed toward eliminating Form II from our environment. Then we finally accepted that we could not get rid of Form II. Then our subsequent activities were directed to figuring out how to live in a Form II world.”

    “This is why all of us at Abbott have been working extremely hard throughout the summer [of 1998], often around the clock, and sometimes never going home at night. We have been here seven days a week and we will continue to do so. We have cancelled vacations and asked our families for their understanding and support. This is not an issue that we take lightly.”

    “There were several sub-teams of three to 600 people per team working full time in different areas. We also called on as many resources as we could.”

    “We tried everything. We conducted countless experiments. We reconditioned our facilities. We rebuilt facilities and new lines. We looked at alternative sites. We visited a number of [other] organizations around the world……to see if we could start clean in a new environment free of Form II.”

    “In a matter of weeks—maybe five or six weeks, every place the product was became contaminated with Form II crystals.”

    Question: “You are a large multinational company. Your scientists are obviously smart. How could this happen?”

    Answer: “A company’s size and the collective IQs of their scientists have no relationship to this problem……This obviously has not happened to every drug. But it has happened to other drugs.”

  56. ⁠, Emmanuel Carrère (2019-11-07):

    [A writer tracks down the author of The Dice Man, George Cockcroft, who turns out to be an ordinary old novelist retired on a farm in upstate New York, who developed the novel’s idea from a minor game played as a youth. He profiles followers of the dice-man approach, who turn out to be far more interesting as the dice push them into unusual risk-taking: for example, one Cuadrado, who “Like his father, he is a tax lawyer, but thanks to the dice he has also become a wine importer, a webmaster, a Go teacher, a fan of Iceland and the publisher of the Mauritian poet Malcolm de Chazal.”]

  57. https://www.newyorker.com/magazine/1968/11/16/the-big-little-man-from-brooklyn

  58. 1968-mckelway.pdf: “The Big Little Man From Brooklyn—II [Annals of Imposture]”⁠, St. Clair McKelway

  59. ⁠, Eric Grundhauser (2017-08-23):

    There are those who impersonate other people for money and fame, and then there are people like Stanley Clifford Weyman (not his real name), Brooklyn’s greatest imposter, who did it for the love of living in the skin of others. Throughout his life, Weyman impersonated military officials, political figures, and even the personal doctor of Rudolph Valentino’s widow—all just because he wanted to.

    …Weinberg never impersonated specific people, but rather invented figures with variations of his name, such as “Rodney S. Wyman” and “Allen Stanley Weyman.” A couple of his recurring favorites were “Ethan Allen Weinberg” and “Royal St. Cyr”, but according to a 1968 story about him in The New Yorker⁠, he settled on Stanley Clifford Weyman as his more or less permanent name around middle age.

    …According to The New Yorker profile, his years of faking included time as “several doctors of medicine, and two psychiatrists, he was a number of officers in the United States Navy—ranging in rank from lieutenant to admiral—five or six United States Army officers, a couple of lawyers, the State Department Naval Liaison Officer, an aviator, a sanitation expert, many consuls-general, and a United Nations expert on Balkan and Asian affairs.” Weyman was no hero, but his ambition and dedication to craft are, perhaps, admirable. Very few images of Weyman exist, so his face isn’t so recognizable today, but that’s probably exactly as he would have had it.

  60. ⁠, Vitalik Buterin (2019-11-22):

    [Vitalik Buterin of Ethereum reviews cryptocurrency technological developments since 2014, in cryptography, consensus theory, & economics:]

    1. Cryptographic:

      • Blockchain scalability: Great theoretical progress, pending more real-world evaluation.
      • Distributed secure timestamping: Some progress.
      • Arbitrary Proof of Computation: Great theoretical and practical progress. [SNARKs/​​​​​​​STARK/​​​​​​​SHARK etc]
      • Code Obfuscation [DRM]: Slow progress.
      • Hash-Based Cryptography [which is quantum-secure]: Some progress.
    2. Consensus theory:

      • ASIC-Resistant Proof of Work: Solved as far as we can.
      • Useful Proof of Work: Probably not feasible, with one exception.
      • Proof of Stake: Great theoretical progress, pending more real-world evaluation.
      • Proof of Storage: A lot of theoretical progress, though still a lot to go, as well as more real-world evaluation.
    3. Economics:

      • Stable-value cryptoassets: Some progress.

      • Decentralized Public Goods Incentivization: Some progress.

      • Reputation systems: Slow progress.

      • Proof of excellence: No progress, problem is largely forgotten.

      • Anti-Sybil systems: Some progress.

        • Decentralized contribution metrics: Some progress, some change in focus.
      • Decentralized success metrics: Some progress.

    …In general, base-layer problems are slowly but surely decreasing, but application-layer problems are only just getting started.

  61. ⁠, Maxime Chevalier-Boisvert (2019-11-02):

    As part of my PhD, I developed Higgs, an experimental JIT compiler…I developed it on GitHub, completely in the open, and wrote about my progress on this blog. Pretty soon, the project had 300 stars on GitHub, a handful of open source contributors, and I was receiving some nice feedback.

    …One day, someone I had been exchanging with on the chat room for two weeks reached out to me to signal a strange bug. They couldn’t get the tests to pass and were getting a segmentation fault. I was puzzled. They asked me if Higgs had MacOS support. I explained that I’d never tested it on MacOS myself, but I couldn’t see any reason why it wouldn’t work. I told this person that the problem was surely on their end. Higgs had been open source for over a year. It was a pretty niche project, but I knew for a fact that at least 40–60 people must have tried it, and at least 50% of these people must have been running MacOS. I assumed that surely, if Higgs didn’t run on MacOS at all, someone would have opened a GitHub issue by now. Again, I was wrong.

    …It’s a horrifying thought, but it could be that for every one person who opens an issue on GitHub, 100 or more people have already tried your project, run into that same bug, and simply moved on.

    [Gwern.net examples of this include: 400,000+ Chinese visitors to This Waifu Does Not Exist not mentioning that the mobile version was horribly broken; Apple users not mentioning that 80% of Gwern.net videos didn’t play for them; the Anime Faces page loading 500MB+ of files on each page load… Another fun example: popups on all Wikipedias worldwide were broken for ~5 months (September 2020–January 2021)—they could be disabled but not re-enabled (affecting ~24 billion page views per month, or ~120 billion page views total)—and no one mentioned it until we happened to investigate the feature while cloning it for Gwern.net.]

  62. https://news.ycombinator.com/item?id=21427996

  63. TWDNE#fn3

  64. http://www.paulgraham.com/oldlove.html

  65. https://beepb00p.xyz/annotating.html

  66. ⁠, Anton Lopyrev (2016-08-02):

    While the concept of using vegetation to produce ornament seems to be very trivial to take up, it proves to be difficult to create a good floral ornament design by hand. It turns out that hand-drawn floral ornamentation is a very time-consuming task that requires a great deal of skill and training. In fact, most of the floral clip-art found on the web (Figure 3) is usually produced by experienced artists and is usually quite costly. As a result, one naturally tends to think about automated ways to generate floral motifs.

    This article explores the problem of how to produce aesthetically pleasing computer-generated ornament. The method described here is a combination of two papers by Ling Xu et al. [11] and M. T. Wong et al. [1], which are further discussed in the next section. However, instead of making the entire process fully automated, as the aforementioned papers suggest, this article focuses on an idea of interactive user control. The vision here is that while it is possible to automatically produce a relatively attractive floral ornament, artistic input still remains the best tool for evaluating whether or not the resultant ornament is indeed aesthetically pleasing. Consequently, the tool that is described in this article relies on a heavy UI component.

    …The starting point of my algorithm is the basic implementation of the magnetic curves paper. The idea behind the magnetic curves algorithm is that under certain constraints a charged particle that moves under the influence of a magnetic field will trace out interesting spiral curves. By recursively releasing secondary particles from the original particle at constant intervals, more complicated curves that resemble branching vegetation can be produced.

    …In this article I presented an overview of a tool I developed for interactive generation of floral ornament. While constrained to a pre-set selection of clip-art, this tool showcases the great possibilities of the magnetic curves algorithm to produce aesthetically pleasing ornament, when combined with the idea of adaptive clip-art. Without a doubt, the algorithm I presented in this article greatly improves upon Ling Xu’s results from his original “magnetic curves” article. The results that I was able to achieve with my tool are comparable in their quality to a floral design that can be produced by a professional artist. I hope that the work I’ve done here can be used one day to aid an artist in the generation of beautiful designs.

  67. https://www.youtube.com/watch?v=XWGp3Fc4P_Q

  68. ⁠, Michael T. Wong, Douglas E. Zongker, David H. Salesin (1998):

    This paper describes some of the principles of traditional floral ornamental design, and explores ways in which these designs can be created algorithmically. It introduces the idea of “adaptive clip art”, which encapsulates the rules for creating a specific ornamental pattern. Adaptive clip art can be used to generate patterns that are tailored to fit a particularly shaped region of the plane. If the region is resized or reshaped, the ornament can be automatically regenerated to fill this new area in an appropriate way. Our ornamental patterns are created in two steps: first, the geometry of the pattern is generated as a set of two-dimensional curves and filled boundaries; second, this geometry is rendered in any number of styles. We demonstrate our approach with a variety of floral ornamental designs.

  69. ⁠, Ling Xu, David Mould (2009):

    We describe “magnetic curves”, a particle-tracing method that creates curves with constantly changing curvature. It is well known that charged particles in a constant magnetic field trace out circular or helical trajectories. Motivated by John Ruskin’s advice to use variation in curvature to achieve aesthetic curves, we propose to continuously change the charge on a simulated particle so that it can trace out a complex curve with continuously varying curvature. We show some examples of abstract figures created by this method and also show how some stylized representational forms, including fire, hair, and trees, can be drawn with magnetic curves.
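
    The abstract’s description translates almost directly into code: a 2D particle whose heading is rotated each step by an amount proportional to a continuously changing charge traces a curve of continuously varying curvature, and releasing secondary particles at fixed intervals (as in the Lopyrev write-up above) yields branching, vegetation-like figures. A minimal sketch, in which the charge schedule, step size, and branching constants are my own placeholder choices rather than values from either paper:

    ```python
    # Minimal "magnetic curves" sketch: each step, rotate the particle's heading
    # by an angle proportional to its charge (turn rate ~ curvature), decay the
    # charge so curvature varies continuously, and recursively release secondary
    # particles at fixed intervals to produce branching, vegetation-like curves.
    import math

    def magnetic_curve(x, y, heading, charge, decay=0.995, steps=600,
                       step=1.0, branch_every=80, depth=2):
        points = [(x, y)]
        branches = []
        for i in range(1, steps + 1):
            heading += charge                  # curvature is proportional to charge
            charge *= decay                    # continuously varying curvature
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            points.append((x, y))
            if depth > 0 and i % branch_every == 0:
                # secondary particle with opposite, weakened charge curls away
                branches.extend(magnetic_curve(x, y, heading, -charge * 0.9,
                                               decay, steps // 2, step,
                                               branch_every, depth - 1))
        return [points] + branches

    curves = magnetic_curve(0.0, 0.0, heading=0.0, charge=0.05)
    print(f"{len(curves)} curves, {sum(len(c) for c in curves)} points total")
    ```

    Plotting the returned point lists (eg. with matplotlib) shows the characteristic unwinding spirals; styling and clip-art placement are where the rendering step of Wong et al. would come in.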

  70. https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/pdf_open_parameters.pdf#page=5

  71. https://arxiv.org/pdf/1909.08053.pdf#page=13

  72. ⁠, Evgeny Tsykunov, Ruslan Agishev, Roman Ibrahimov, Luiza Labazanova, Taha Moriyama, Hiroyuki Kajimoto, Dzmitry Tsetserukou (2019-11-22):

    We propose a novel system, SwarmCloak, for landing a fleet of four flying robots on human arms using light-sensitive landing pads with vibrotactile feedback. We developed two types of wearable tactile displays with vibromotors which are activated by the light emitted from the LED array at the bottom of ⁠. In a user study, participants were asked to adjust the position of their arms to land up to two drones, having only visual feedback, only tactile feedback, or visual-tactile feedback. The experiment revealed that when the number of drones increases, tactile feedback plays a more important role in accurate landing and operator convenience. An important finding is that the best landing performance is achieved with the combination of tactile and visual feedback. The proposed technology could have a strong impact on human-swarm interaction, providing a new level of intuitiveness and engagement in swarm deployment, right from the skin surface.

  73. https://www.youtube.com/watch?v=2a4XrG_u3RE

  74. 2019-hoynes.pdf: ⁠, Hilary Hoynes, Jesse Rothstein (2019-08; economics):

    We discuss the potential role of universal basic incomes (UBIs) in advanced countries. A feature of advanced economies that distinguishes them from developing countries is the existence of well-developed, if often incomplete, safety nets. We develop a framework for describing transfer programs that is flexible enough to encompass most existing programs as well as UBIs, and we use this framework to compare various UBIs to the existing constellation of programs in the United States. A UBI would direct much larger shares of transfers to childless, nonelderly, nondisabled households than existing programs, and much more to middle-income rather than poor households. A UBI large enough to increase transfers to low-income families would be enormously expensive. We review the labor supply literature for evidence on the likely impacts of a UBI. We argue that the ongoing UBI pilot studies will do little to resolve the major outstanding questions.

    [Keywords: safety net, income transfer, universal basic income, labor supply, JEL I38, JEL H24]

  75. ⁠, Identity Designed (2018-11-20):

    Gallery of a Japanese restaurant, Issho, which has been redesigned by the minimalist design firm Dutchscot. The design emphasises kintsugi (the irregular gold seams used to repair pottery), white/red/blue, and traditional Japanese cloud motifs.

  76. ⁠, The Public Domain Review ():

    Founded in 2011, The Public Domain Review is an online journal and not-for-profit project dedicated to the exploration of curious and compelling works from the history of art, literature, and ideas.

    In particular, as our name suggests, the focus is on works which have now fallen into the public domain, that vast commons of out-of-copyright material that everyone is free to enjoy, share, and build upon without restriction. Our aim is to promote and celebrate the public domain in all its abundance and variety, and help our readers explore its rich terrain—like a small exhibition gallery at the entrance to an immense network of archives and storage rooms that lie beyond. With a focus on the surprising, the strange, and the beautiful, we hope to provide an ever-growing cabinet of curiosities for the digital age, a kind of hyperlinked Wunderkammer—an archive of content which truly celebrates the breadth and diversity of our shared cultural commons and the minds that have made it.

    …Some highlights include visions of the future from late 19th century France, a dictionary of Victorian slang and a film showing the very talented “hand-farting” farmer of Michigan…from a history of the smile in portraiture to the case of the woman who claimed to give birth to rabbits.

  77. ⁠, Matthew Green (2013-08-07):

    In contrast to today’s rather mundane spawn of coffeehouse chains, the London of the 17th and 18th century was home to an eclectic and thriving coffee drinking scene. Dr Matthew Green explores the halcyon days of the London coffeehouse, a haven for caffeine-fueled debate and innovation which helped to shape the modern world.

  78. ⁠, Benjamin Breen (2018-04-18):

    Benjamin Breen on the remarkable story of George Psalmanazar, the mysterious Frenchman who successfully posed as a native of Formosa (now modern Taiwan) and gave birth to a meticulously fabricated culture with bizarre customs, exotic fashions, and its own invented language…Who was this man? The available facts remain surprisingly slim. Despite hundreds of years of research by everyone from the father of British Prime Minister Benjamin Disraeli to contemporary scholars at Penn and the National Taiwan University, we still don’t even know Psalmanazar’s real name or place of origin (although he was likely from southern France). We know that elite figures ranging from the scientists of the Royal Society to the Bishop of London initially believed his claims, but he eventually fell into disgrace as competing experts confirmed that he was a liar. Beyond this, we move into the fictional realms that “Psalmanazar”, like a character come to life, summoned into existence with his voice and pen…Although the scale and singularity of his deception made him unique, Psalmanazar was also representative: while he was inventing tales of Formosan cannibalism, his peers were writing falsified histories of pirate utopias, parodic accounts of islands populated by super-intelligent horses, and sincere descriptions of demonic sacrifices.

  79. ⁠, Daniel Elkind (2018-05-16):

    The technique of intarsia—the fitting together of pieces of intricately cut wood to make often complex images—has produced some of the most awe-inspiring pieces of Renaissance craftsmanship. Daniel Elkind explores the history of this masterful art, and how an added dash of colour arose from a most unlikely source: lumber ridden with fungus…painting in wood is in many ways more complicated than painting on wood. Rather than fabricating objects from a single source, the art of intarsia is the art of mosaic, of picking the right tone, of sourcing only properly seasoned lumber from mature trees and adapting materials intended for one context to another. Painting obscures the origins of a given material, whereas intarsia retains the original character of the wood grain—whose knots and whorls are as individual as the islands and deltas of friction ridges that constitute the topography of a fingerprint—while forming a new image. From a distance, the whole appears greater than the sum of its parts; up close, one can appreciate the heterogeneity of the components…

    Inspired by the New Testament and uninhibited by Mosaic proscription, craftsmen in the city of Siena began to introduce flora and fauna into their compositions in the 14th century. Figures and faces became common by the late fifteenth century and, by the early sixteenth century, intarsiatori in Florence were making use of a wide variety of dyes in addition to natural hardwoods to mimic the full spectrum from the lightest (spindlewood) to medium (walnut) and dark (bog oak)—with the tantalizing exception of an aquamarine color somewhere between green and blue which required treating wood with “copper acetate (verdigris) and copper sulfate (vitriol).”

    …Furnishings that featured slivers of grünfäule or “green oak” were especially prized by master cabinetmakers like Bartholomew Weisshaupt and coveted by the elite of the Holy Roman Empire. Breaking open rotting hardwood logs to reveal delicate veins of turquoise and aquamarine, craftsmen discovered that the green in green oak was the result of colonization by the green elf-cup fungus, Chlorociboria aeruginascens, whose tiny teal fruiting bodies grow on felled, barkless conifers and hardwoods like oak and beech across much of Europe, Asia, and North America. Fungal rot usually devalues wood, but green oak happened to fill a lucrative niche in a burgeoning luxury trade, and that made it, for a time at least, as precious as some rare metals. During the reign of Charles V, when the Hapsburgs ruled both Spain and Germany, a lively trade in these intarsia pieces sprang up between the two countries.

  80. ⁠, Angus Trumble (2019-01-10):

    Angus Trumble on Dante Gabriel Rossetti and company’s curious but longstanding obsession with the oddity that is the wombat—that “most beautiful of God’s creatures”—which found its way into their poems, their art, and even, for a brief while, their homes…the Pre-Raphaelites were not the first English to become enamoured by the unusual creature. Wombats captured the attention of English naturalists as soon as they found out about them from early settlers, explorers, and naturalists at the time of first contact. The Aboriginal word wombat was first recorded near Port Jackson, and though variants such as wombach, womback, the wom-bat and womat were noted, the present form of the name stuck very early, from at least 1797. Beautiful drawings survive from the 1802 voyages of the Investigator and Le Géographe. Ferdinand Bauer, who sailed with Matthew Flinders, and Charles-Alexandre Lesueur, who was in the rival French expedition of Nicolas Baudin, both drew the creature. These were engraved and carefully studied at home. Wombats were admired for their stumpy strength, their patience, their placid, not to say congenial manners, and also a kind of stoic determination. Occasionally they were thought clumsy, insensible or even stupid, but these isolated observations are out of step with the majority of nineteenth-century opinion.

  81. ⁠, Ryan Feigenbaum (2016-09-07):

    Although not normally considered the most glamorous of Mother Nature’s offerings, algae has found itself at the heart of many a key moment in the last few hundred years of botanical science. Ryan Feigenbaum traces the surprising history of one particular species—Conferva fontinalis—from the vials of Joseph Priestley’s laboratory to its possible role as inspiration for Shelley’s Frankenstein.

  82. ⁠, Natalie Lawrence (2019-09-19):

    When the existence of unicorns, and the curative powers of the horns ascribed to them, began to be questioned, one Danish physician pushed back through curious means—by reframing the unicorn as an aquatic creature of the northern seas. Natalie Lawrence on a fascinating convergence of established folklore, nascent science, and pharmaceutical economy.

  83. ⁠, Urte Laukaityte (2018-11-20):

    Benjamin Franklin, magnetic trees, and erotically-charged séances—Urte Laukaityte on how a craze for sessions of “animal magnetism” in late 18th-century Paris led to the randomised placebo-controlled and double-blind clinical trials we know and love today.

    …By a lucky coincidence, Benjamin Franklin was in France as the first US ambassador with a mission to ensure an official alliance against its arch nemesis, the British. On account of his fame as a great man of science in general and his experiments on one such invisible force—electricity—in particular, Franklin was appointed as head of the royal commission. The investigating team also included the chemist Antoine-Laurent Lavoisier, the astronomer Jean-Sylvain Bailly, and the doctor Joseph-Ignace Guillotin. It is a curious fact of history that both Lavoisier and Bailly were later executed by the guillotine—the device attributed to their fellow commissioner. The revolution also, of course, brought the same fate to King Louis XVI and his Mesmer-supporting wife Marie Antoinette. In a stroke of insight, the commissioners figured that the cures might be effected by one of two possible mechanisms: psychological suggestion (what they refer to as “imagination”) or some actual physical magnetic action. Mesmer and his followers claimed it was the magnetic fluid, so that served as the experimental condition if you like. Continuing with the modern analogies, suggestion would then represent a rudimentary placebo control condition. So to test animal magnetism, they came up with two kinds of trials to try and separate the two possibilities: either the research subject is being magnetised but does not know it (magnetism without imagination) or the subject is not being magnetised but thinks that they are (imagination without magnetism). The fact that the trials were blind, or in other words, the patients did not know when the magnetic operation was being performed, marks the commission’s most innovative contribution to science.

    …Whatever the moral case may be, the report paved the way for the modern empirical approach in more ways than one. Stephen Jay Gould called the work “a masterpiece of the genre, an enduring testimony to the power and beauty of reason” that “should be rescued from its current obscurity, translated into all languages”. Just to mention a few further insights, the commissioners were patently aware of psychological phenomena like the experimenter effect, concerned as they were that some patients might report certain sensations because they thought that is what the eminent men of science wanted to hear. That seems to be what propelled them to make the study placebo-controlled and single-blind. Other phenomena reminiscent of the modern-day notion of ⁠, and the role of expectations more generally, are pointed out throughout the document. The report also contains a detailed account of how self-directed attention can generate what are known today as psychosomatic symptoms. Relatedly, there is an incredibly lucid discussion of mass psychogenic illness, and mass hysteria more generally, including in cases of war and political upheaval. Just five years later, France would descend into the chaos of a violent revolution.
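
    The commissioners’ design logic—“magnetism without imagination” versus “imagination without magnetism”—amounts to decomposing a patient’s response into a physical-treatment effect plus an expectation effect, which blinding lets one estimate separately. A toy simulation, with effect sizes invented purely for illustration (the commission of course had no such numbers), shows how the two blinded conditions pull the effects apart:

    ```python
    # Toy simulation of the commission's two blinded trials. If responses track
    # belief rather than actual treatment, suggestion--not the magnetic fluid--
    # is doing the work. Effect sizes are invented for illustration.
    import random

    random.seed(1)
    MAGNETIC_EFFECT = 0.0    # assumption: the fluid itself does nothing
    SUGGESTION_EFFECT = 2.0  # assumption: belief alone produces strong symptoms

    def response(magnetized, believes):
        return (MAGNETIC_EFFECT * magnetized
                + SUGGESTION_EFFECT * believes
                + random.gauss(0, 1))          # individual variation

    n = 200
    magnetized_unaware = [response(1, 0) for _ in range(n)]  # treated, blind to it
    sham_but_believing = [response(0, 1) for _ in range(n)]  # untreated, told otherwise

    mean = lambda xs: sum(xs) / len(xs)
    print(f"magnetism without imagination: mean response {mean(magnetized_unaware):+.2f}")
    print(f"imagination without magnetism: mean response {mean(sham_but_believing):+.2f}")
    ```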

  84. ⁠, Nicholas Humphrey (2011-03-27):

    Murderous pigs sent to the gallows, sparrows prosecuted for chattering in church, a gang of thieving rats let off on a wholly technical acquittal—theoretical psychologist and author Nicholas Humphrey explores the strange world of medieval animal trials.

    …Such stories, however, are apparently not news for very long. Indeed the most extraordinary examples of people taking retribution against animals seem to have been almost totally forgotten. A few years ago I lighted on a book, first published in 1906, with the surprising title The Criminal Prosecution and Capital Punishment of Animals by E. P. Evans, author of Animal Symbolism in Ecclesiastical Architecture, Bugs and Beasts before the Law, etc., etc. The frontispiece showed an engraving of a pig, dressed up in a jacket and breeches, being strung up on a gallows in the market square of a town in Normandy in 1386; the pig had been formally tried and convicted of murder by the local court. When I borrowed the book from the Cambridge University Library, I showed this picture of the pig to the librarian. “Is it a joke?”, she asked.

    No, it was not a joke. All over Europe, throughout the middle-ages and right on into the 19th century, animals were, as it turns out, tried for human crimes. Dogs, pigs, cows, rats and even flies and caterpillars were arraigned in court on charges ranging from murder to obscenity. The trials were conducted with full ceremony: evidence was heard on both sides, witnesses were called, and in many cases the accused animal was granted a form of legal aid—a lawyer being appointed at the tax-payer’s expense to conduct the animal’s defence.

    …Evans’ book details more than two hundred such cases: sparrows being prosecuted for chattering in Church, a pig executed for stealing a communion wafer, a cock burnt at the stake for laying an egg. As I read my eyes grew wider and wider.

  85. ⁠, Keith C. Heidorn (2011-02-14):

    Keith C. Heidorn takes a look at the life and work of Wilson Bentley, a self-educated farmer from a small American town who, by combining a bellows camera with a microscope, managed to photograph the dizzyingly intricate and diverse structures of the snow crystal.

  86. ⁠, Frank Key (2011-01-31):

    The poet Christopher Smart—also known as “Kit Smart”, “Kitty Smart”, “Jack Smart” and, on occasion, “Mrs Mary Midnight”—was a well known figure in 18th-century London. Nowadays he is perhaps best known for considering his cat Jeoffry. Writer and broadcaster Frank Key looks at Smart’s weird and wonderful Jubilate Agno…

    It was not until 1939 that his masterpiece, written during his confinement in St Luke’s, was first published. Jubilate Agno is one of the most extraordinary poems in the English language, and almost certainly the reason we remember Christopher Smart today. It has been described as a vast hymn of praise to God and all His works, and also as the ravings of a madman. Indeed, that first edition was published under the title Rejoice In The Lamb: A Song From Bedlam, clearly marking it as a curio from the history of mental illness. It was W. H. Bond’s revised edition of 1954 which gave order to Smart’s surviving manuscript, restoring the Latin title Jubilate Agno, bringing us the poem in the form we know it today.

    Christopher Smart never completed the work, which consists of four fragments making a total of over 1,200 lines, each beginning with the words “Let” or “For”. For example, Fragment A is all “Let”s, whereas in Fragment B the “Let”s and “For”s are paired, which may have been the intention for the entire work, modelled on ⁠. References and allusions abound to Biblical (especially Old Testament) figures, plants and animals, gems, contemporary politics and science, the poet’s family and friends, even obituary lists in current periodicals. The language is full of puns, archaisms, coinages, and unfamiliar usages. Dr Johnson famously said “Nothing odd will do long; Tristram Shandy did not last”. Jubilate Agno is, if anything, “odder” than Sterne’s novel, and perhaps we are readier to appreciate it in the twenty-first century than when it was written…one of the great joys of Jubilate Agno is in its sudden dislocations and unexpected diversions. The “my cat Jeoffrey” passage is justly famous, but the poem is cram-packed with similar wonders, and must be read in full to appreciate its inimitable genius.

  87. https://www.poetryfoundation.org/poems/52801/jubilate-agno-1975

  88. https://www.poetryfoundation.org/poems/45173/jubilate-agno

  89. ⁠, Mike Jay (2014-11-12):

    Mike Jay recounts the tragic story of James Tilly Matthews, a former peace activist of the Napoleonic Wars who was confined to London’s notorious Bedlam asylum in 1797 for believing that his mind was under the control of the “Air Loom”—a terrifying machine whose mesmeric rays and mysterious gases were brainwashing politicians and plunging Europe into revolution, terror, and war.

    …Over the ten years they had spent together in Bedlam, Matthews revealed his secret world to Haslam in exhaustive detail. Around the corner from Bedlam, in a dank basement cellar by London Wall, a gang of villains were controlling and tormenting him with a machine called an “Air Loom”. Matthews had even drawn a technical diagram of the device, which Haslam included in his book with a sarcastic commentary that invited the reader to laugh at its absurdity: a literal “illustration of madness”. But Matthews’ drawing has a more unnerving effect than Haslam allows. Levers, barrels, batteries, brass retorts and cylinders are rendered with the cool conviction of an engineer’s blueprint. It is the first ever published work of art by an asylum inmate, but it would hardly have looked out of place in the scientific journals or encyclopaedias of its day.

    The Air Loom worked, as its name suggests, by weaving “airs”, or gases, into a “warp of magnetic fluid” which was then directed at its victim. Matthews’ explanation of its powers combined the cutting-edge technologies of pneumatic chemistry and the electric battery with the controversial science of animal magnetism, or mesmerism. The finer detail becomes increasingly strange. It was fuelled by combinations of “fetid effluvia”, including “spermatic-animal-seminal rays”, “putrid human breath”, and “gaz from the anus of the horse”, and its magnetic warp assailed Matthews’ brain in a catalogue of forms known as “event-workings”. These included “brain-saying” and “dream-working”, by which thoughts were forced into his brain against his will, and a terrifying array of physical tortures from “knee nailing”, “vital tearing” and “fibre ripping” to “apoplexy-working with the nutmeg grater” and the dreaded “lobster-cracking”, where the air around his chest was constricted until he was unable to breathe. To facilitate their control over him, the gang had implanted a magnet into his brain. He was tormented constantly by hallucinations, physical agonies, fits of laughter or being forced to parrot whatever words they chose to feed into his head. No wonder some people thought he was mad.

    The machine’s operators were a gang of undercover Jacobin terrorists, who Matthews described with haunting precision. Their leader, Bill the King, was a coarse-faced and ruthless puppetmaster who “has never been known to smile”; his second-in-command, Jack the Schoolmaster, took careful notes on the Air Loom’s operations, pushing his wig back with his forefinger as he wrote. The operator was a sinister, pockmarked lady known only as the “Glove Woman”. The public face of the gang was a sharp-featured woman named Augusta, superficially charming but “exceedingly spiteful and malignant” when crossed, who roamed London’s west end as an undercover agent.

    The operation directed at Matthews was only part of a larger story. There were more Air Looms and their gangs concealed across London, and their unseen influence extended all the way up to the Prime Minister, William Pitt, whose mind was firmly under their control. Their agents lurked in streets, theatres and coffee-houses, where they tricked the unsuspecting into inhaling magnetic fluids. If the gang were recognised in public, they would grasp magnetised batons that clouded the perception of anyone in the vicinity. The object of their intrigues was to poison the minds of politicians on both sides of the Channel, and thereby keep Britain and revolutionary France locked into their ruinous war.

  90. ⁠, The Public Domain Review ():

    Pages from a remarkable book entitled Mira calligraphiae monumenta (The Model Book of Calligraphy), the result of a collaboration across many decades between a master scribe, the Croatian-born Georg Bocskay, and Flemish artist Joris Hoefnagel. In the early 1560s, while secretary to the Holy Roman Emperor Ferdinand I, Bocskay produced his Model Book of Calligraphy, showing off the wonderful range of writing style in his repertoire. Some 30 years later (and 15 years after the death of Bocskay), Ferdinand’s grandson, who had inherited the book, commissioned Hoefnagel to add his delightful illustrations of flowers, fruits, and insects. It would prove to be, as The Getty, who now own the manuscript, comment, “one of the most unusual collaborations between scribe and painter in the history of manuscript illumination”. In addition to the amendments to Bocskay’s pages shown here, Hoefnagel also added an elaborately illustrated section on constructing the letters of the alphabet which we featured on the site a while back.

  91. ⁠, The Public Domain Review ():

    The wonderful imagery documenting Alexander Graham Bell’s experiments with tetrahedral kites…the Scottish-born inventor Alexander Graham Bell is also noted for his work in aerodynamics, a rather more photogenic endeavour perhaps, as evidenced by the wonderful imagery documenting his experiments with tetrahedral kites. The series of photographs depicts Bell and his colleagues demonstrating and testing out a number of different kite designs, all based upon the tetrahedral structure, to whose pyramid-shaped cells Bell was drawn as they could share joints and spars and so crucially lessen the weight-to-surface area ratio…Bell began his experiments with tetrahedral box kites in 1898, eventually developing elaborate structures comprised of multiple compound tetrahedral kites covered in maroon silk, constructed with the aim of carrying a human through the air. Named Cygnet I, II, and III (for they took off from water), these enormous tetrahedral beings were flown both unmanned and manned during a five-year period from 1907 until 1912.

  92. ⁠, Benjamin Breen (2017-05-04):

    One cold Friday in 1660, Samuel Pepys encountered two unpleasant surprises. “At home found all well”, he wrote in his diary, “but the monkey loose, which did anger me, and so I did strike her.” Later that night, a candlemaker named Will Joyce (the good-for-nothing husband of one of Pepys’s cousins) stumbled in on Pepys and his aunt while “drunk, and in a talking vapouring humour of his state, and I know not what, which did vex me cruelly.” Presumably, Pepys didn’t resort to blows this time around.

    The two objects of Pepys’ scorn that day, his disobedient pet monkey and his drunken cousin-in-law, were not as distant as one might think. Monkeys stood in for intoxicated humans on a surprisingly frequent basis in 17th century culture. In early modern paintings, tippling primates can frequently be seen in human clothing, smoking tobacco, playing cards, rolling dice, and just plain getting wasted.

    Why?

    …So what is going on with these images showing drunken and drug-selling monkeys? I think that what we’re missing when we simply see these as a form of social satire is that these are also paintings about addiction. Desire is a dominant theme in these works: monkeys are shown jealously squabbling over piles of tobacco, or even, in the example below, hoarding tulip flowers during the height of the Dutch tulipmania (they appear to be using the profits to get drunk, in the upper left)…But there’s an alternative narrative running through these paintings as well. It epitomizes the ambivalence that has long surrounded intoxicating substances, in many cultures and in many times: These monkeys seem to be having fun.

  93. Books#private-wealth-in-renaissance-florence-goldthwaite-1968

  94. Movies#akhnaten

  95. Movies#madama-butterfly

  96. 1993-anno-charscounterattackfanclubbook-khodazattranslation.pdf#page=4: ⁠, Hideaki Anno, ⁠, trans. kohdazat (1993; anime):

    Ogura: Usually he’s [] very critical of other people’s works. Did you hear what he had to say about Porco Rosso?

    Anno: Oh, I’m critical of Porco Rosso, myself.

    Tomino: What was wrong with Porco?

    Anno: As a picture, nothing. But because I know Miyazaki-san personally, I can’t view it objectively. His presence in the film is too conspicuous, it’s no good. In other words… it feels like he’s showing off.

    Tomino: How so?

    Anno: He has the main character act all self-deprecating, calling himself a pig… but then puts him in a bright red plane, has him smoking all cool-like, even creates a love triangle between a cute young thing and a sexy older lady.

    Tomino: Ha! I see what you mean. He and I are around the same age, though. So I get how he feels, unconditionally. So I may think, “Oh boy…” but I can’t stay mad at him (laughs).

  97. Anime#belladonna-of-sadness

  98. https://www.youtube.com/watch?v=DcN6BlTpFYE

  99. https://poniesatdawn.bandcamp.com/album/eternal

  100. https://www.youtube.com/watch?v=Dny54jIEGTI

  101. https://www.youtube.com/watch?v=HQXMOthJaro

  102. https://www.youtube.com/watch?v=Qs5slIf_9GM

  103. https://www.youtube.com/watch?v=FlNoAItDHf0

  104. https://www.youtube.com/watch?v=0rNWl-IH1Lg