newsletter/2019/13 (Link Bibliography)

“newsletter/​2019/​13” links:

  1. 13


  3. newsletter

  4. 01

  5. 02

  6. 03

  7. 04

  8. 05

  9. 06

  10. 07

  11. 08

  12. 09

  13. 10

  14. 11

  15. 12

  16. 13

  17. 13

  18. 13

  19. 13

  20. Changelog#2019

  21. Faces

  22. GPT-2

  23. Danbooru2020#danbooru2018

  24. Research-criticism

  25. Modus

  26. Timing

  27. Everything

  28. Unseeing

  29. Clone

  30. Clone#nba-screening-scenario

  31. Red

  32. Design

  33. Inflation.hs: ⁠, Gwern Branwen (2019-03-27):

    Experimental Pandoc module for implementing automatic inflation adjustment of nominal date-stamped dollar or Bitcoin amounts to provide real prices; Bitcoin’s exchange rate has moved by multiple orders of magnitude over its early years (rendering nominal amounts deeply unintuitive), and adjustment is particularly critical in any economics or technology discussion, where a nominal price from 1950 corresponds to a ~11×-larger real price in 2019 dollars!

    Years/dates are specified in a variant of my interwiki link syntax; for example: $50 or [₿0.5](₿2017-01-01), giving link adjustments which compile to something like <span class="inflationAdjusted" data-originalYear="2017-01-01" data-originalAmount="50.50" data-currentYear="2019" data-currentAmount="50,500">₿50.50<span class="math inline"><sub>2017</sub><sup>$50,500</sup></span></span>.

    Dollar amounts use the year, and Bitcoin amounts use full dates, as the greater temporal resolution is necessary. Inflation rates/exchange rates are specified as constants and need to be manually updated every once in a while; if out of date, the last available rate is carried forward for future adjustments.
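    The adjustment logic described above can be sketched in a few lines. (A hypothetical Python illustration, not the actual Haskell module; the index table, its values, and the function names are all invented for the example.)

```python
# Hypothetical sketch of date-stamped inflation adjustment with carry-forward
# of the last available rate (table values illustrative only, not real CPI data).

# Price index by year; a nominal amount converts to a target year by the
# ratio of index values.
CPI = {1950: 24.1, 1990: 130.7, 2017: 245.1, 2019: 255.7}

def adjust(amount: float, year: int, target: int = 2019) -> float:
    """Convert a nominal `amount` from `year` into `target`-year dollars.
    If a year is missing from the table, carry the last available index forward."""
    def index(y: int) -> float:
        known = [k for k in sorted(CPI) if k <= y]
        return CPI[known[-1]] if known else CPI[min(CPI)]
    return amount * index(target) / index(year)

# With these illustrative index values, a 1950 nominal dollar is worth
# roughly 10x more in 2019 terms, and a post-2019 date falls back to the
# 2019 index (the carry-forward behavior described above).
ratio_1950 = adjust(1.0, 1950)
carried = adjust(1.0, 2025)
```

    The carry-forward rule means stale tables degrade gracefully: adjustments simply stop changing past the last known year rather than failing.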

  34. popups.js: ⁠, Said Achmiz (2019-08-21; wikipedia):

    popups.js: standalone JavaScript library for creating ‘popups’ which display link metadata (typically, title/author/date/summary), for extremely convenient reference/abstract reading, with mobile and YouTube support. Whenever any such link is moused over by the user, popups.js will pop up a large tooltip-like square with the contents of the attributes. This is particularly intended for references, where it is extremely convenient to auto-populate links such as Pubmed/PLOS/Wikipedia links with the link’s title/author/date/abstract, so the reader can see it instantly.

    popups.js parses an HTML document and looks for <a> links which have the docMetadata class and the attributes data-popup-title, data-popup-author, data-popup-date, data-popup-doi, and data-popup-abstract. (These attributes are expected to be populated already by the HTML document’s compiler; however, they can also be filled in dynamically, as some libraries do for Wikipedia links on page load.)

    For an example of a Hakyll library which generates annotations for Wikipedia/Biorxiv/Arxiv/PDFs/arbitrarily-defined links, see LinkMetadata.hs.
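    The markup convention is simple enough to sketch. (A hypothetical Python scan over the same attribute scheme, for illustration only — the real library is client-side JavaScript operating on the live DOM.)

```python
# Hypothetical illustration of the markup popups.js consumes: find <a> links
# carrying the docMetadata class and collect their data-popup-* attributes.
from html.parser import HTMLParser

class PopupScanner(HTMLParser):
    FIELDS = ("data-popup-title", "data-popup-author", "data-popup-date",
              "data-popup-doi", "data-popup-abstract")

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Only annotated links: <a class="docMetadata" ...>
        if tag == "a" and "docMetadata" in (a.get("class") or "").split():
            self.links.append({k: a[k] for k in self.FIELDS if k in a})

html = ('<a class="docMetadata" href="/x" data-popup-title="MuZero" '
        'data-popup-date="2019-11-19">paper</a>')
scanner = PopupScanner()
scanner.feed(html)
```

    Any link lacking the class or the attributes is simply skipped, which is what lets the attributes be populated either at compile time or dynamically.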

  35. sidenotes.js

  36. Sidenotes

  37. Traffic#july-2019january-2020

  38. 20180101-20191231-annualcomparison.pdf: “2019 Site Traffic (Comparison with 2018)”⁠, Gwern Branwen

  39. ⁠, US Department of Justice (2019-05-08):

    The alleged owners and operators of a website known as DeepDotWeb (DDW) have been indicted by a federal grand jury sitting in Pittsburgh, Pennsylvania, for money laundering conspiracy, relating to millions of dollars in kickbacks they received for purchases of fentanyl, heroin, and other illegal contraband by individuals referred to Darknet marketplaces by DDW. The website has now been seized by court order…In an indictment unsealed today, Tal Prihar, 37, an Israeli citizen residing in Brazil, and Michael Phan, 34, an Israeli citizen residing in Israel, were charged on April 24, 2019, in a one-count indictment by a federal grand jury in Pittsburgh. Prihar was arrested on May 6, 2019 by French law enforcement authorities in Paris, pursuant to a provisional arrest request by the United States in connection with the indictment. Phan was arrested in Israel on May 6 pursuant to charges in Israel. Further, the FBI seized DDW, pursuant to a court order issued by the U.S. District Court for the Western District of Pennsylvania.

    …DDW provided users with direct access to numerous online Darknet marketplaces, not accessible through traditional search engines, at which vendors offered for sale illegal narcotics such as fentanyl, carfentanil, cocaine, heroin, and crystal methamphetamine; firearms, including assault rifles; malicious software and hacking tools; stolen financial information and payment cards and numbers; access device-making equipment; and other illegal contraband.

    Prihar and Phan received kickback payments, representing commissions on the proceeds from each purchase of the illegal goods made by individuals referred to a Darknet marketplace from the DDW site. These kickback payments were made in virtual currency, such as bitcoin, and paid into a DDW-controlled bitcoin “wallet.” To conceal and disguise the nature and source of the illegal proceeds, totaling over $15 million, Prihar and Phan transferred their illegal kickback payments from their DDW bitcoin wallet to other bitcoin accounts and to bank accounts they controlled in the names of shell companies.

    …During the time period relevant to this Indictment, DDW’s referral links were widely used by users in the Western District of Pennsylvania and elsewhere to access and then create accounts on many Darknet marketplaces, including AlphaBay Market, Agora Market, Abraxas Market, Dream Market, Valhalla Market, Hansa Market, TradeRoute Market, Dr. D’s, Wall Street Market, and Tochka Market. When AlphaBay was seized by law enforcement in 2017, it was one of the largest Darknet markets that offered illegal drugs, fraudulent identification materials, counterfeit goods, hacking tools, malware, firearms, and toxic chemicals. Approximately 23.6% of all orders completed on AlphaBay were associated with an account created through a DDW referral link, meaning that DDW received a referral fee for 23.6% of all orders made on AlphaBay.

    Over the course of the conspiracy, the defendants referred hundreds of thousands of users to Darknet marketplaces. These users in turn completed hundreds of millions of dollars’ worth of transactions, including purchases of illegal narcotics such as fentanyl, carfentanil, cocaine, heroin, and crystal methamphetamine; firearms, including assault rifles; malicious software and hacking tools; stolen financial information and payment cards and numbers; access device-making equipment; and other illegal contraband. Through the use of the referral links, the defendants received kickbacks worth millions of dollars, generated from the illicit sales conducted on Darknet marketplace accounts created through the site.

    …Between in and around November 2014 and April 10, 2019, DDW received approximately 8,155 bitcoin in kickback payments from Darknet marketplaces, worth approximately $8,414,173 when adjusted for the trading value of bitcoin at the time of each transaction. The bitcoin was transferred to DDW’s bitcoin wallet, controlled by the defendants, in a series of more than 40,000 deposits and was subsequently withdrawn to various destinations both known and unknown to the grand jury through over 2,700 transactions. Due to bitcoin’s fluctuating exchange rate, the value of the bitcoin at the time of the withdrawals from the DDW bitcoin wallet equated to approximately $15,489,415. In seeking to conceal their illicit activities and protect their criminal enterprise and the illegal proceeds it generated, the defendants set up numerous shell companies around the world. The defendants used these companies to move their ill-gotten gains and conduct other activity related to DDW. These companies included WwwCom Ltd., M&T Marketing, Imtech, O.T.S.R. Biztech, and Tal Advanced Tech.

  40. /docs/psychology/okcupid

  41. index.html: (2019-10-05; library):

    Old Internet users will remember Rotten.com. I didn’t much care for the main site, but I enjoyed their writeups in the ‘Rotten Library’ section. The website has been offline for years now and shows no sign of coming back, so I have put up a mirror of the Library (What’s New).

    You can now enjoy such classic entries as Penis Cakes⁠, LSD blotters⁠, the Mountain Meadows Massacre⁠, Kellogg cornflakes⁠, Kinderhook plates⁠, Lucky Luciano⁠, Kevin Mitnick⁠, on banned cartoons⁠, & Steve Wozniak⁠.

    (I used zscole’s archive, compressed the JPEGs, rewrote all the absolute links to make the mirror work on this site, and fixed a few errors I found along the way—principally broken links and links to entries which appear to’ve never been written.)

  42. “Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model”⁠, Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver (2019-11-19):

    Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown.

    In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function.

    When evaluated on 57 different Atari games—the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled—our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
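    The three learned functions the abstract names can be sketched abstractly. (A hypothetical toy illustration in Python, not DeepMind’s implementation: the three “networks” here are trivial stand-ins, invented so the control flow of a model rollout is visible; real MuZero uses deep networks inside Monte Carlo tree search.)

```python
# Toy sketch of MuZero's learned-model rollout: a representation function h
# encodes the observation into a hidden state; a dynamics function g steps
# the hidden state and predicts reward; a prediction function f outputs a
# policy and a value. Planning searches over such rollouts.

def h(observation):            # representation: observation -> hidden state
    return sum(observation)

def g(state, action):          # dynamics: (state, action) -> (next state, reward)
    return state + action, float(state * 0.1)

def f(state):                  # prediction: state -> (policy, value)
    policy = [0.5, 0.5]
    return policy, float(state)

def rollout(observation, actions, discount=0.997):
    """Unroll the learned model along a fixed action sequence, accumulating
    discounted predicted reward plus the bootstrapped value at the leaf."""
    s = h(observation)
    total, factor = 0.0, 1.0
    for a in actions:
        s, r = g(s, a)
        total += factor * r
        factor *= discount
    _, v = f(s)
    return total + factor * v
```

    The key point the abstract makes is visible here: nothing in the rollout ever consults the environment’s true dynamics — only the learned g — so the same planner applies whether or not a simulator exists.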

  43. Scaling-hypothesis#blessings-of-scale

  44. “The Bitter Lesson”⁠, Rich Sutton (2019-03-13):

    The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore’s law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.

    …In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess…A similar pattern of research progress was seen in computer Go, only delayed by a further 20 years. Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale…In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge—knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods…In computer vision…Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.

    …We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that (1) AI researchers have often tried to build knowledge into their agents, (2) this always helps in the short term, and is personally satisfying to the researcher, but (3) in the long run it plateaus and even inhibits further progress, and (4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.

    [My meme summary:

    The GPT-3 bitter lesson.]

  45. “AI-GAs: AI-Generating Algorithms, an Alternate Paradigm for Producing General Artificial Intelligence”⁠, Jeff Clune (2019-05-27):

    Perhaps the most ambitious scientific quest in human history is the creation of general artificial intelligence, which roughly means AI that is as smart or smarter than humans. The dominant approach in the machine learning community is to attempt to discover each of the pieces required for intelligence, with the implicit assumption that some future group will complete the Herculean task of figuring out how to combine all of those pieces into a complex thinking machine. I call this the “manual AI approach”. This paper describes another exciting path that ultimately may be more successful at producing general AI. It is based on the clear trend in machine learning that hand-designed solutions eventually are replaced by more effective, learned solutions. The idea is to create an AI-generating algorithm (AI-GA), which automatically learns how to produce general AI. Three Pillars are essential for the approach: (1) meta-learning architectures, (2) meta-learning the learning algorithms themselves, and (3) generating effective learning environments. I argue that either approach could produce general AI first, and both are scientifically worthwhile irrespective of which is the fastest path. Because both are promising, yet the ML community is currently committed to the manual approach, I argue that our community should increase its research investment in the AI-GA approach. To encourage such research, I describe promising work in each of the Three Pillars. I also discuss AI-GA-specific safety and ethical considerations. Because it may be the fastest path to general AI and because it is inherently scientifically interesting to understand the conditions in which a simple algorithm can produce general AI (as happened on Earth where Darwinian evolution produced human intelligence), I argue that the pursuit of AI-GAs should be considered a new grand challenge of computer science research.


  47. 1943-hazel.pdf

  48. “What Intellectual Progress Did I Make In The 2010s?”⁠, Scott Alexander (2020-01-08):

    [Scott Alexander looks back on how his ideas/beliefs evolved over the past decade of blogging at Jackdaws/LessWrong/SlateStarCodex. Primary topics:

    1. Bayesian predictive coding as a unified theory of brain perception, control, behavior, and psychiatric disorders as bad priors/​​​​​​updates

      • Psychedelic use as modifying brain priors, explaining how psychedelics affect and sometimes benefit their users
      • trauma/​​​​​​​attachment disorder
    2. Philosophy of mental disease

    3. efficacy of SSRIs

    4. Genetics of psychiatric disorders, especially autism/​​​​​​transsexuals: ???

    5. Willpower: also predictive coding???

    6. Diet/​​​​​​weight loss: setpoints, somehow

    7. Existential risk: dissolving the Great Filter, raising AI risk awareness

    8. Secular stagnation: progress is slowing, perhaps because human populations aren’t growing exponentially

      • Baumol’s cost disease as core cause of economic stagnation and political backlash
    9. The Replication Crisis: even worse than he thought

    10. Psychological effects:

      • Placebo effect: much more powerless than he thought
      • Birth order effects: much more powerful than he thought
    11. Utilitarianism: still confused, but more towards rule-utilitarianism

    12. Politics: social media turbocharging tribalism/​​​​​​outgroup-bias

    13. Ideology of liberalism and SJWism

    14. Coordination problems as core problem of politics

    15. Enlightenment: not actually that great, possibly wireheading]


  50. ⁠, Sanjay M. Sisodiya, Pamela J. Thompson, Anna Need, Sarah E. Harris, Michael E. Weale, Susan E. Wilkie, Michel Michaelides, Samantha L. Free, Nicole Walley, Curtis Gumbs, Dianne Gerrelli, Piers Ruddle, Lawrence J. Whalley, John M. Starr, David M. Hunt, David B. Goldstein, Ian J. Deary, Anthony T. Moore (2007):

    Background: The genetic basis of variation in human cognitive abilities is poorly understood. RIMS1 encodes a synapse active-zone protein with important roles in the maintenance of normal synaptic function: mice lacking this protein have greatly reduced learning ability and memory function.

    Objective: An established paradigm examining the structural and functional effects of mutations in genes expressed in the eye and the brain was used to study a kindred with an inherited retinal dystrophy due to RIMS1 mutation.

    Materials and Methods: Neuropsychological tests and high-resolution MRI brain scanning were undertaken in the kindred. In a population cohort, neuropsychological scores were associated with common variation in RIMS1. Additionally, RIMS1 was sequenced in top-scoring individuals. Evolution of RIMS1 was assessed, and its expression in developing human brain was studied.

    Results: Affected individuals showed significantly enhanced cognitive abilities across a range of domains. Analysis suggests that factors other than the RIMS1 mutation were unlikely to explain the enhanced cognition. No association between common variation and verbal IQ was found in the population cohort, and no other mutations in RIMS1 were detected in the highest-scoring individuals from this cohort. RIMS1 protein is expressed in developing human brain, but RIMS1 does not seem to have been subjected to accelerated evolution in man.

    Conclusions: A possible role for RIMS1 in the enhancement of cognitive function at least in this kindred is suggested. Although further work is clearly required to explore these findings before a role for RIMS1 in human cognition can be formally accepted, the findings suggest that genetic mutation may enhance human cognition in some cases.

  51. Bitcoin-is-Worse-is-Better

  52. “When Will Computer Hardware Match the Human Brain?”⁠, Hans Moravec (1998):

    This paper describes how the performance of AI machines tends to improve at the same pace that AI researchers get access to faster hardware. The processing power and memory capacity necessary to match general intellectual performance of the human brain are estimated. Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s…At the present rate, computers suitable for human-like robots will appear in the 2020s. Can the pace be sustained for another three decades?

    …By 1990, entire careers had passed in the frozen winter of 1-MIPS computers, mainly from necessity, but partly from habit and a lingering opinion that the early machines really should have been powerful enough. In 1990, 1 MIPS cost $1,000 (1990; ≈$2,338 in 2019 dollars) in a low-end personal computer. There was no need to go any lower. Finally spring thaw has come. Since 1990, the power available to individual AI and robotics programs has doubled yearly, to 30 MIPS by 1994 and 500 MIPS by 1998. Seeds long ago alleged barren are suddenly sprouting. Machines read text, recognize speech, even translate languages. Robots drive cross-country, crawl across Mars, and trundle down office corridors. In 1996 a theorem-proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years. And it is still only spring. Wait until summer.

    …The mental steps underlying good human chess playing and theorem proving are complex and hidden, putting a mechanical interpretation out of reach. Those who can follow the play naturally describe it instead in mentalistic language, using terms like strategy, understanding and creativity. When a machine manages to be simultaneously meaningful and surprising in the same rich way, it too compels a mentalistic interpretation. Of course, somewhere behind the scenes, there are programmers who, in principle, have a mechanical interpretation. But even for them, that interpretation loses its grip as the working program fills its memory with details too voluminous for them to grasp.

    As the rising flood reaches more populated heights, machines will begin to do well in areas a greater number can appreciate. The visceral sense of a thinking presence in machinery will become increasingly widespread. When the highest peaks are covered, there will be machines that can interact as intelligently as any human on any subject. The presence of minds in machines will then become self-evident.

    Faster than Exponential Growth in Computing Power: The number of MIPS available per $1,000 (1998; ≈$1,854 in 2019 dollars) of computer from 1900 to the present. Steady improvements in mechanical and electromechanical calculators before World War II had increased the speed of calculation a thousandfold over manual methods from 1900 to 1940. The pace quickened with the appearance of electronic computers during the war, and 1940 to 1980 saw a million-fold increase. The pace has been even quicker since then, a pace which would make human-like robots possible before the middle of the next century. The vertical scale is logarithmic; the major divisions represent thousandfold increases in computer performance. Exponential growth would show as a straight line; the upward curve indicates faster than exponential growth, or, equivalently, an accelerating rate of innovation. The reduced spread of the data in the 1990s is probably the result of intensified competition: underperforming machines are more rapidly squeezed out. The numerical data for this power curve are presented in the appendix.

    The big freeze: From 1960 to 1990 the cost of computers used in AI research declined, as dilution across their growing numbers absorbed computer-efficiency gains during the period, and the power available to individual AI programs remained almost unchanged at 1 MIPS, barely insect power. AI computer cost bottomed in 1990, and since then power has doubled yearly, to several hundred MIPS by 1998. The major visible exception is computer chess (shown by a progression of knights), whose prestige lured the resources of major computer companies and the talents of programmers and machine designers. Exceptions also exist in less public competitions, like petroleum exploration and intelligence gathering, whose high return on investment gave them regular access to the largest computers.

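    Moravec’s “doubled yearly” claim can be sanity-checked against the MIPS figures quoted above. (A quick illustrative computation; the fitted growth rate is mine, not Moravec’s.)

```python
# Sanity-check of the quoted trajectory: ~1 MIPS in 1990, 30 MIPS by 1994,
# 500 MIPS by 1998. Steady doubling each year gives the right order of
# magnitude, and the implied annual growth factor is slightly above 2x.

points = {1990: 1.0, 1994: 30.0, 1998: 500.0}  # MIPS per AI program

def doubling_projection(mips0, years):
    """MIPS after `years` of exact yearly doubling."""
    return mips0 * 2 ** years

# Implied annual growth factor over 1990-1998 (~2.2x per year):
growth = (points[1998] / points[1990]) ** (1 / 8)

# Exact doubling projects 256 MIPS by 1998, versus 500 observed --
# i.e. slightly *faster* than doubling, matching the text's
# "faster than exponential" theme.
proj_1998 = doubling_projection(points[1990], 8)
```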
  53. Scaling-hypothesis


  55. Sparsity

  56. Complement

  57. “Reddit and the Struggle to Detoxify the Internet”⁠, Andrew Marantz (The New Yorker) (2018-03-12):

    Although redditors didn’t yet know it, Huffman could edit any part of the site. He wrote a script that would automatically replace his username with those of The_Donald’s most prominent members, directing the insults back at the insulters in real time: in one comment, “Fuck u/​​​​Spez” became “Fuck u/​​​​Trumpshaker”; in another, “Fuck u/​​​​Spez” became “Fuck u/​​​​MAGAdocious.” The_Donald’s users saw what was happening, and they reacted by spinning a conspiracy theory that, in this case, turned out to be true. “Manipulating the words of your users is fucked”, a commenter wrote. “Even Facebook and Twitter haven’t stooped this low.” “Trust nothing.”

    …In October, on the morning the new policy was rolled out, Ashooh sat at a long conference table with a dozen other employees. Before each of them was a laptop, a mug of coffee, and a few hours’ worth of snacks. “Welcome to the Policy Update War Room”, she said. “And, yes, I’m aware of the irony of calling it a war room when the point is to make Reddit less violent, but it’s too late to change the name.” The job of policing Reddit’s most pernicious content falls primarily to three groups of employees—the community team, the trust-and-safety team, and the anti-evil team—which are sometimes described, respectively, as good cop, bad cop, and RoboCop. Community stays in touch with a cross-section of redditors, asking them for feedback and encouraging them to be on their best behavior. When this fails and redditors break the rules, trust and safety punishes them. Anti-evil, a team of back-end engineers, makes software that flags dodgy-looking content and sends that content to humans, who decide what to do about it.

    Ashooh went over the plan for the day. All at once, they would replace the old policy with the new policy, post an announcement explaining the new policy, warn a batch of subreddits that they were probably in violation of the new policy, and ban another batch of subreddits that were flagrantly, irredeemably in violation. I glanced at a spreadsheet with a list of the hundred and nine subreddits that were about to be banned (r/​​​​KKK, r/​​​​KillAllJews, r/​​​​KilltheJews, r/​​​​KilltheJoos), followed by the name of the employee who would carry out each deletion, and, if applicable, the reason for the ban (“mostly just swastikas?”). “Today we’re focusing on a lot of Nazi stuff and bestiality stuff”, Ashooh said. “Context matters, of course, and you shouldn’t get in trouble for posting a swastika if it’s a historical photo from the 1936 Olympics, or if you’re using it as a Hindu symbol. But, even so, there’s a lot that’s clear-cut.” I asked whether the same logic—that the Nazi flag was an inherently violent symbol—would apply to the Confederate flag, or the Soviet flag, or the flag under which King Richard fought the Crusades. “We can have those conversations in the future”, Ashooh said. “But we have to start somewhere.”

    At 10AM, the trust-and-safety team posted the announcement and began the purge. “Thank you for letting me do DylannRoofInnocent”, one employee said. “That was one of the ones I really wanted.”

    “What is ReallyWackyTicTacs?” another employee asked, looking down the list. “Trust me, you don’t want to know”, Ashooh said. “That was the most unpleasant shit I’ve ever seen, and I’ve spent a lot of time looking into Syrian war crimes.”

    Some of the comments on the announcement were cynical. “They don’t actually want to change anything”, one redditor wrote, arguing that the bans were meant to appease advertisers. “It was, in fact, never about free speech, it was about money.” One trust-and-safety manager, a young woman wearing a leather jacket and a ship captain’s cap, was in charge of monitoring the comments and responding to the most relevant ones. “Everyone seems to be taking it pretty well so far”, she said. “There’s one guy, freespeechwarrior, who seems very pissed, but I guess that makes sense, given his username.” “People are making lists of all the Nazi subs getting banned, but nobody has noticed that we’re banning bestiality ones at the same time”, Ashooh said…“I’m going to get more cheese sticks”, the woman in the captain’s cap said, standing up. “How many cheese sticks is too many in one day? At what point am I encouraging or glorifying violence against my own body?” “It all depends on context”, Ashooh said.

    I understood why other companies had been reluctant to let me see something like this. Never again would I be able to read a lofty phrase about a social-media company’s shift in policy—“open and connected”, or “encouraging meaningful interactions”—without imagining a group of people sitting around a conference room, eating free snacks and making fallible decisions. Social networks, no matter how big they get or how familiar they seem, are not ineluctable forces but experimental technologies built by human beings. We can tell ourselves that these human beings aren’t gatekeepers, or that they have cleansed themselves of all bias and emotion, but this would have no relation to reality. “I have biases, like everyone else”, Huffman told me once. “I just work really hard to make sure that they don’t prevent me from doing what’s right.”

  58. Google-shutdowns

  59. ⁠, Pierrick Wainschtein, Deepti P. Jain, Loic Yengo, Zhili Zheng, TOPMed Anthropometry Working Group, Trans-Omics for Precision Medicine Consortium, L. Adrienne Cupples, Aladdin H. Shadyab, Barbara McKnight, Benjamin M. Shoemaker, Braxton D. Mitchell, Bruce M. Psaty, Charles Kooperberg, Dan Roden, Dawood Darbar, Donna K. Arnett, Elizabeth A. Regan, Eric Boerwinkle, Jerome I. Rotter, Matthew A. Allison, Merry-Lynn N. McDonald, Mina K. Chung, Nicholas L. Smith, Patrick T. Ellinor, Ramachandran S. Vasan, Rasika A. Mathias, Stephen S. Rich, Susan R. Heckbert, Susan Redline, Xiuqing Guo, Y.-D Ida Chen, Ching-Ti Liu, Mariza de Andrade, Lisa R. Yanek, Christine M. Albert, Ryan D. Hernandez, Stephen T. McGarvey, Kari E. North, Leslie A. Lange, Bruce S. Weir, Cathy C. Laurie, Jian Yang, Peter M. Visscher (2019-03-25):

    Heritability, the proportion of phenotypic variance explained by genetic factors, can be estimated from pedigree data, but such estimates are uninformative with respect to the underlying genetic architecture. Analyses of data from genome-wide association studies (GWAS) on unrelated individuals have shown that for human traits and disease, approximately one-third to two-thirds of heritability is captured by common SNPs. It is not known whether the remaining heritability is due to the imperfect tagging of causal variants by common SNPs, in particular if the causal variants are rare, or other reasons such as over-estimation of heritability from pedigree data. Here we show that pedigree heritability for height and body mass index (BMI) appears to be fully recovered from whole-genome sequence (WGS) data on 21,620 unrelated individuals of European ancestry. We assigned 47.1 million genetic variants to groups based upon their minor allele frequencies (MAF) and linkage disequilibrium (LD) with variants nearby, and estimated and partitioned variation accordingly. The estimated heritability was 0.79 (SE 0.09) for height and 0.40 (SE 0.09) for BMI, consistent with pedigree estimates. Low-MAF variants in low LD with neighbouring variants were enriched for heritability, to a greater extent for protein-altering variants, consistent with the action of negative selection thereon. Cumulatively, variants in the MAF range of 0.0001 to 0.1 explained 0.54 (SE 0.05) and 0.51 (SE 0.11) of heritability for height and BMI, respectively. Our results imply that the still-missing heritability of complex traits and disease is accounted for by rare variants, in particular those in regions of low LD.

  60. 2019-lee.pdf: ⁠, James J. Lee, Matt McGue, William G. Iacono, Andrew M. Michael, Christopher F. Chabris (2019-07; iq):

    There exists a moderate correlation between MRI-measured brain size and the general factor of IQ performance (g), but the question of whether the association reflects a theoretically important causal relationship or a spurious one remains somewhat open. Previous small studies (n < 100) looking for the persistence of this correlation within families failed to find a tendency for the sibling with the larger brain to obtain a higher test score. We studied the within-family relationship between brain volume and intelligence in the much larger sample provided by the Human Connectome Project (n = 1022) and found a highly statistically-significant correlation (disattenuated ρ = 0.18, p < 0.001). We replicated this result in the Minnesota Center for Twin and Family Research (n = 2698), finding a highly statistically-significant within-family correlation between head circumference and intelligence (disattenuated ρ = 0.19, p < 0.001). We also employed novel methods of causal inference relying on summary statistics from genome-wide association studies (GWAS) of head size (n ≈ 10,000) and measures of cognition (257,000 < n < 767,000). Using bivariate LD Score regression, we found a genetic correlation between intracranial volume (ICV) and years of education (EduYears) of 0.41 (p < 0.001). Using the latent causal variable (LCV) method, we found a genetic causality proportion of 0.72 (p < 0.001); thus the genetic correlation arises from an asymmetric pattern, extending to sub-significant loci, of genetic variants associated with ICV also being associated with EduYears but many genetic variants associated with EduYears not being associated with ICV. This is the pattern of genetic results expected from a causal effect of brain size on intelligence. These findings give reason to take up the hypothesis that the dramatic increase in brain volume over the course of human evolution has been the result of selection favoring general intelligence.

  61. ⁠, W. David Hill, Neil M. Davies, Stuart J. Ritchie, Nathan G. Skene, Julien Bryois, Steven Bell, Emanuele Di Angelantonio, David J. Roberts, Shen Xueyi, Gail Davies, David C. M. Liewald, David J. Porteous, Caroline Hayward, Adam S. Butterworth, Andrew M. McIntosh, Catharine R. Gale, Ian J. Deary (2019-03-12):

    Socio-economic position (SEP) is a multi-dimensional construct reflecting (and influencing) multiple socio-cultural, physical, and environmental factors. Previous genome-wide association studies (GWAS) using household income as a marker of SEP have shown that common genetic variants account for 11% of its variation. Here, in a sample of 286,301 participants from UK Biobank, we identified 30 independent genome-wide statistically-significant loci, 29 of them novel, that are associated with household income. Using a recently-developed method to meta-analyze data that leverages power from genetically-correlated traits, we identified an additional 120 income-associated loci. These loci showed clear evidence of functional enrichment, with transcriptional differences identified across multiple cortical tissues, in addition to links with GABAergic and serotonergic neurotransmission. We identified neurogenesis and the components of the synapse as candidate biological systems that are linked with income. By combining our GWAS on income with data from eQTL studies and chromatin interactions, 24 genes were prioritized for follow-up, 18 of which were previously associated with cognitive ability. Using ⁠, we identified cognitive ability as one of the causal, partly-heritable phenotypes that bridges the gap between molecular genetic inheritance and phenotypic consequence in terms of income differences. Statistically-significant differences between genetic correlations indicated that the genetic variants associated with income are related to better mental health than those linked to educational attainment (another commonly-used marker of SEP). Finally, we were able to predict 2.5% of income differences using genetic data alone in an independent sample. These results are important for understanding the observed socioeconomic inequalities in Great Britain today.

  62. ⁠, W. David Hill, Neil M. Davies, Stuart J. Ritchie, Nathan G. Skene, Julien Bryois, Steven Bell, Emanuele Di Angelantonio, David J. Roberts, Shen Xueyi, Gail Davies, David C. M. Liewald, David J. Porteous, Caroline Hayward, Adam S. Butterworth, Andrew M. McIntosh, Catharine R. Gale, Ian J. Deary (2019-12-16):

    Socioeconomic position (SEP) is a multi-dimensional construct reflecting (and influencing) multiple socio-cultural, physical, and environmental factors. In a sample of 286,301 participants from ⁠, we identify 30 (29 previously unreported) independent loci associated with income. Using a method to meta-analyze data from genetically-correlated traits, we identify an additional 120 income-associated loci. These loci show clear evidence of functionality, with transcriptional differences identified across multiple cortical tissues, and links to GABA-ergic and serotonergic neurotransmission. By combining our genome-wide association study on income with data from eQTL studies and chromatin interactions, 24 genes are prioritized for follow-up, 18 of which were previously associated with intelligence. We identify intelligence as one of the likely causal, partly-heritable phenotypes that might bridge the gap between molecular genetic inheritance and phenotypic consequence in terms of income differences. These results indicate that, in modern-era Great Britain, genetic effects contribute towards some of the observed socioeconomic inequalities.

  63. ⁠, Saskia Selzam, Stuart J. Ritchie, Jean-Baptiste Pingault, Chandra A. Reynolds, Paul F. O’Reilly, Robert Plomin (2019-04-10):

    Polygenic scores are a popular tool for prediction of complex traits. However, prediction estimates in samples of unrelated participants can include effects of population stratification, assortative mating and environmentally mediated parental genetic effects, a form of genotype-environment correlation (rGE). Comparing genome-wide polygenic score (GPS) predictions in unrelated individuals with predictions between siblings in a within-family design is a powerful approach to identify these different sources of prediction.

    Here, we compared within-family to between-family GPS predictions of eight life outcomes (anthropometric, cognitive, personality and health) for eight corresponding GPSs. The outcomes were assessed in up to 2,366 dizygotic (DZ) twin pairs from the Twins Early Development Study from age 12 to age 21. To account for family clustering, we used mixed-effects modelling, simultaneously estimating within-family and between-family effects for target-trait and cross-trait GPS prediction of the outcomes.

    There were three main findings: (1) DZ twin GPS differences predicted DZ differences in height, BMI, intelligence, educational achievement and ADHD symptoms; (2) target and cross-trait analyses indicated that GPS prediction estimates for cognitive traits (intelligence and educational achievement) were on average 60% greater between families than within families, but this was not the case for non-cognitive traits; and (3) this within-family and between-family difference for cognitive traits disappeared after controlling for family socioeconomic status (SES), suggesting that SES is a source of between-family prediction through rGE mechanisms.

    These results provide novel insights into the patterns by which rGE contributes to GPS prediction, while ruling out confounding due to population stratification and assortative mating.
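    The within-family versus between-family decomposition can be illustrated with a toy simulation (all effect sizes hypothetical; the study itself fit mixed-effects models to real twin data): when the family-mean polygenic score also carries an environmentally mediated effect (e.g. via SES), the between-family coefficient is inflated relative to the within-family one.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pairs = 5000

    # Simulate DZ twin pairs: polygenic scores (GPS) plus an outcome in which
    # the family-mean GPS carries an extra, environmentally mediated effect (rGE).
    gps = rng.normal(size=(n_pairs, 2))
    fam_mean = gps.mean(axis=1, keepdims=True)   # between-family component
    dev = gps - fam_mean                         # within-family component
    beta_direct, beta_rge = 0.3, 0.2             # hypothetical effect sizes
    y = beta_direct * gps + beta_rge * fam_mean + rng.normal(size=(n_pairs, 2))

    # Fixed-effects version of the decomposition: regress the outcome jointly on
    # the between-family (pair-mean) and within-family (deviation) GPS components.
    X = np.column_stack([np.ones(2 * n_pairs),
                         np.repeat(fam_mean.ravel(), 2),
                         dev.ravel()])
    b = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
    # b[1] ≈ beta_direct + beta_rge (between-family estimate, inflated by rGE)
    # b[2] ≈ beta_direct (within-family estimate, free of rGE)
    ```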

  64. ⁠, Andrea Ganna, Karin J. H. Verweij, Michel G. Nivard, Robert Maier, Robbee Wedow, Alexander S. Busch, Abdel Abdellaoui, Shengru Guo, J. Fah Sathirapongsasuti, 23andMe Research Team, Paul Lichtenstein, Sebastian Lundström, Niklas Långström, Adam Auton, Kathleen Mullan Harris, Gary W. Beecham, Eden R. Martin, Alan R. Sanders, John R. B. Perry, Benjamin M. Neale, Brendan P. Zietsch (2019-08-29):

    Twin studies and other analyses of inheritance of sexual orientation in humans have indicated that same-sex sexual behavior has a genetic component. Previous searches for the specific genes involved have been underpowered and thus unable to detect genetic signals. Ganna et al. perform a genome-wide association study on 493,001 participants from the United States, the United Kingdom, and Sweden to study genes associated with sexual orientation (see the Perspective by Mills). They find multiple loci implicated in same-sex sexual behavior, indicating that, like other behavioral traits, nonheterosexual behavior is polygenic.

    Introduction: Across human societies and in both sexes, some 2 to 10% of individuals report engaging in sex with same-sex partners, either exclusively or in addition to sex with opposite-sex partners. Twin and family studies have shown that same-sex sexual behavior is partly genetically influenced, but previous searches for the specific genes involved have been underpowered to detect the realistic effect sizes expected for complex traits.

    Rationale: For the first time, new large-scale datasets afford sufficient statistical power to identify genetic variants associated with same-sex sexual behavior (ever versus never had a same-sex partner), estimate the proportion of variation in the trait accounted for by all variants in aggregate, estimate the genetic correlation of same-sex sexual behavior with other traits, and probe the biology and complexity of the trait. To these ends, we performed genome-wide association discovery analyses on 477,522 individuals from the United Kingdom and United States, replication analyses in 15,142 individuals from the United States and Sweden, and follow-up analyses using different aspects of sexual preference.

    Results: In the discovery samples (UK Biobank and 23andMe), 5 autosomal loci were statistically-significantly associated with same-sex sexual behavior. Follow-up of these loci suggested links to biological pathways that involve sex hormone regulation and olfaction. 3 of the loci were replicated in a meta-analysis of smaller, independent replication samples. Although only a few loci passed the stringent statistical corrections for genome-wide multiple testing and were replicated in other samples, our analyses show that many loci underlie same-sex sexual behavior in both sexes. In aggregate, all tested genetic variants accounted for 8 to 25% of variation in male and female same-sex sexual behavior, and the genetic influences were positively but imperfectly correlated between the sexes [genetic correlation coefficient (rg) = 0.63; 95% confidence intervals, 0.48 to 0.78]. These aggregate genetic influences partly overlapped with those on a variety of other traits, including externalizing behaviors such as smoking, cannabis use, risk-taking, and the personality trait “openness to experience.” Additional analyses suggested that sexual behavior, attraction, identity, and fantasies are influenced by a similar set of genetic variants (rg > 0.83); however, the genetic effects that differentiate heterosexual from same-sex sexual behavior are not the same as those that differ among nonheterosexuals with lower versus higher proportions of same-sex partners, which suggests that there is no single continuum from opposite-sex to same-sex preference.

    Conclusion: Same-sex sexual behavior is influenced by not one or a few genes but many. Overlap with genetic influences on other traits provides insights into the underlying biology of same-sex sexual behavior, and analysis of different aspects of sexual preference underscores its complexity and calls into question the validity of bipolar continuum measures such as the Kinsey scale. Nevertheless, many uncertainties remain to be explored, including how sociocultural influences on sexual preference might interact with genetic influences. To help communicate our study to the broader public, we organized workshops in which representatives of the public, activists, and researchers discussed the rationale, results, and implications of our study.

  65. 2019-lakhani.pdf: ⁠, Chirag M. Lakhani, Braden T. Tierney, Arjun K. Manrai, Jian Yang, Peter M. Visscher, Chirag J. Patel (2019-01-14; genetics  /​ ​​ ​heritable):

    We analysed a large health insurance dataset to assess the genetic and environmental contributions of 560 disease-related phenotypes in 56,396 twin pairs and 724,513 sibling pairs out of 44,859,462 individuals that live in the United States. We estimated the contribution of environmental risk factors (socioeconomic status (SES), air pollution and climate) in each phenotype. Mean heritability (h2 = 0.311) and shared environmental variance (c2 = 0.088) were higher than variance attributed to specific environmental factors such as zip-code-level SES (varSES = 0.002), daily air quality (varAQI = 0.0004), and average temperature (vartemp = 0.001) overall, as well as for individual phenotypes. We found statistically-significant heritability and shared environment for a number of comorbidities (h2 = 0.433, c2 = 0.241) and average monthly cost (h2 = 0.290, c2 = 0.302). All results are available using our Claims Analysis of Twin Correlation and Heritability (CaTCH) web application.
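    The h2/c2 decomposition reported here can be illustrated with the textbook twin-design arithmetic (Falconer's formulas; the paper itself fits twin and sibling correlations jointly). The MZ/DZ twin correlations below are hypothetical values back-solved to reproduce the mean estimates above:

    ```python
    def falconer_ace(r_mz, r_dz):
        """Classic twin decomposition: additive-genetic (h2), shared-environment
        (c2), and unique-environment (e2) variance from MZ/DZ twin correlations."""
        h2 = 2 * (r_mz - r_dz)   # MZ twins share 2x the additive variance of DZ
        c2 = 2 * r_dz - r_mz     # shared environment contributes equally to both
        e2 = 1 - r_mz            # whatever even MZ twins do not share
        return h2, c2, e2

    # Hypothetical twin correlations reproducing the mean estimates above:
    h2, c2, e2 = falconer_ace(r_mz=0.399, r_dz=0.2435)
    print(round(h2, 3), round(c2, 3))   # → 0.311 0.088
    ```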

  66. ⁠, Loic Yengo, Naomi R. Wray, Peter M. Visscher (2019-09-03):

    In most human societies, there are taboos and laws banning mating between first-degree and second-degree relatives, but actual prevalence and effects on health and fitness are poorly quantified. Here, we leverage a large observational study of ~450,000 participants of European ancestry from the UK Biobank (UKB) to quantify extreme inbreeding (EI) and its consequences. We use genotyped SNPs to detect large runs of homozygosity (ROH) and call EI when >10% of an individual’s genome comprise ROHs. We estimate a prevalence of EI of ~0.03%, ie., ~1⁄3652. EI cases have phenotypic means between 0.3 and 0.7 standard deviation below the population mean for 7 traits, including stature and cognitive ability, consistent with inbreeding depression estimated from individuals with low levels of inbreeding. Our study provides DNA-based quantification of the prevalence of EI in a European ancestry sample from the UK and measures its effects on health and fitness traits.
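    The EI call described above reduces to thresholding F_ROH, the fraction of the genome covered by runs of homozygosity; a minimal sketch, with genome length and ROH segments illustrative rather than from the paper:

    ```python
    GENOME_MB = 2881   # approximate autosomal genome length in megabases

    def f_roh(roh_segments_mb):
        """Fraction of the genome covered by runs of homozygosity (ROH)."""
        return sum(roh_segments_mb) / GENOME_MB

    def is_extreme_inbreeding(roh_segments_mb, threshold=0.10):
        """Call extreme inbreeding (EI) when >10% of the genome lies in ROHs."""
        return f_roh(roh_segments_mb) > threshold

    # Offspring of first-degree relatives have expected F_ROH ≈ 0.25,
    # well above the 10% cutoff:
    print(is_extreme_inbreeding([150, 220, 180, 95, 130]))   # → True
    ```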

  67. ⁠, Michael Le Page (2019-11-22):

    A company called Genomic Prediction has confirmed that at least one woman is pregnant with embryos selected after analysing hundreds of thousands of DNA variants to assess the risk of disease. It is the first time this approach has been used for screening IVF embryos, but some don’t think this use of the technology is justified.

    “Embryos have been chosen to reduce disease risk using pre-implantation genetic testing for polygenic traits, and this has resulted in pregnancy”, Laurent Tellier, CEO of Genomic Prediction, told New Scientist. He didn’t say how many pregnancies there were, or what traits or conditions were screened for.

  68. 2019-fredens.pdf: ⁠, Julius Fredens, Kaihang Wang, Daniel de la Torre, Louise F. H. Funke, Wesley E. Robertson, Yonka Christova, Tiongsun Chia, Wolfgang H. Schmied, Daniel L. Dunkelmann, Václav Beránek, Chayasith Uttamapinant, Andres Gonzalez Llamazares, Thomas S. Elliott, Jason W. Chin (2019-05-15; genetics  /​ ​​ ​editing):

    Nature uses 64 codons to encode the synthesis of proteins from the genome, and chooses 1 sense codon—out of up to 6 synonyms—to encode each amino acid. Synonymous codon choice has diverse and important roles, and many synonymous substitutions are detrimental. Here we demonstrate that the number of codons used to encode the canonical amino acids can be reduced, through the genome-wide substitution of target codons by defined synonyms. We create a variant of Escherichia coli with a four-megabase synthetic genome through a high-fidelity convergent total synthesis. Our synthetic genome implements a defined recoding and refactoring scheme—with simple corrections at just seven positions—to replace every known occurrence of two sense codons and a stop codon in the genome. Thus, we recode 18,214 codons to create an organism with a 61-codon genome; this organism uses 59 codons to encode the 20 amino acids, and enables the deletion of a previously essential transfer RNA.

  69. 2019-ostrov.pdf: ⁠, Nili Ostrov, Jacob Beal, Tom Ellis, D. Benjamin Gordon, Bogumil J. Karas, Henry H. Lee, Scott C. Lenaghan, Jeffery A. Schloss, Giovanni Stracquadanio, Axel Trefzer, Joel S. Bader, George M. Church, Cintia M. Coelho, J. William Efcavitch, Marc Güell, Leslie A. Mitchell, Alec A. K. Nielsen, Bill Peck, Alexander C. Smith, C. Neal Stewart Jr., Hille Tekotte (2019-10-18; genetics  /​ ​​ ​editing):

    Engineering biology with recombinant DNA, broadly called synthetic biology, has progressed tremendously in the last decade, owing to continued industrialization of DNA synthesis, discovery and development of molecular tools and organisms, and increasingly sophisticated modeling and analytic tools. However, we have yet to understand the full potential of engineering biology because of our inability to write and test whole genomes, which we call synthetic genomics. Substantial improvements are needed to reduce the cost and increase the speed and reliability of genetic tools. Here, we identify emerging technologies and improvements to existing methods that will be needed in four major areas to advance synthetic genomics within the next 10 years: genome design, DNA synthesis, genome editing, and chromosome construction (see table). Similar to other large-scale projects for responsible advancement of innovative technologies, such as the Human Genome Project, an international, cross-disciplinary effort consisting of public and private entities will likely yield maximal return on investment and open new avenues of research and biotechnology.

  70. ⁠, Yanan Yue, Yinan Kan, Weihong Xu, Hong-Ye Zhao, Yixuan Zhou, Xiaobin Song, Jiajia Wu, Juan Xiong, Dharmendra Goswami, Meng Yang, Lydia Lamriben, Mengyuan Xu, Qi Zhang, Yu Luo, Jianxiong Guo, Shenyi Mao, Deling Jiao, Tien Dat Nguyen, Zhuo Li, Jacob V. Layer, Malin Li, Violette Paragas, Michele E. Youd, Zhongquan Sun, Yuan Ding, Weilin Wang, Hongwei Dou, Lingling Song, Xueqiong Wang, Lei Le, Xin Fang, Haydy George, Ranjith Anand, Shi Yun Wang, William F. Westlin, Marc Guell, James Markmann, Wenning Qin, Yangbin Gao, Hongjiang Wei, George M. Church, Luhan Yang (2019-12-19):

    Xenotransplantation, specifically the use of porcine organs for human transplantation, has long been sought after as an alternative for patients suffering from organ failure. However, clinical application of this approach has been impeded by two main hurdles: (1) risk of transmission of porcine endogenous retroviruses (PERVs) and (2) molecular incompatibilities between donor pigs and humans which culminate in rejection of the graft. We previously demonstrated that all 25 copies of the PERV elements in the pig genome could be inactivated and live pigs successfully generated. In this study, we improved the scale of porcine germline editing from targeting a single repetitive locus with CRISPR to engineering 18 different loci using multiple genome engineering methods: we engineered the pig genome at 42 alleles using CRISPR-Cas9 and transposon and produced PERVKO·3KO·9TG pigs which carry PERV inactivation, xeno-antigen KO and 9 effective human transgenes. The engineered pigs exhibit normal physiology, fertility, and germline transmission of the edited alleles. In vitro assays demonstrated that these pigs gain significant resistance to human humoral and cell-mediated damage, and to coagulation dysregulation, similar to that of allotransplantation. Successful creation of PERVKO·3KO·9TG pigs represents a significant step forward towards safe and effective porcine xenotransplantation, which also represents a synthetic biology accomplishment of engineering novel functions in a living organism.

    One Sentence Summary

    Extensive genome engineering is applied to modify pigs to provide safe and immune-compatible organs for human transplantation.

  71. ⁠, Cory J. Smith, Oscar Castanon, Khaled Said, Verena Volf, Parastoo Khoshakhlagh, Amanda Hornick, Raphael Ferreira, Chun-Ting Wu, Marc Güell, Shilpa Garg, Hannu Myllykallio, George M. Church (2019-03-15):

    To extend the frontier of genome editing and enable the radical redesign of mammalian genomes, we developed a set of dead-Cas9 base editor (dBE) variants that allow editing at tens of thousands of loci per cell by overcoming the cell death associated with DNA double-strand breaks (DSBs) and single-strand breaks (SSBs). We used a set of gRNAs targeting repetitive elements—ranging in target copy number from about 31 to 124,000 per cell. dBEs enabled survival after large-scale base editing, allowing targeted mutations at up to ~13,200 and ~2610 loci in 293T and human induced pluripotent stem cells (hiPSCs), respectively, three orders of magnitude greater than previously recorded. These dBEs can overcome current on-target mutation and toxicity barriers that prevent cell survival after large-scale genome engineering.

    One Sentence Summary

    Base editing with reduced DNA nicking allows for the simultaneous editing of >10,000 loci in human cells.

  72. ⁠, Chenglei Tian, Linlin Liu, Xiaoying Ye, Haifeng Fu, Xiaoyan Sheng, Lingling Wang, Huasong Wang, Dai Heng, Lin Liu (2019-12-24):

    • Granulosa cells can be reprogrammed to form oocytes by chemical reprogramming
    • Rock inhibition and crotonic acid facilitate the chemical induction of gPSCs from GCs
    • PGCLCs derived from gPSCs exhibit longer telomeres and high genomic stability

    The generation of genomically stable and functional oocytes has great potential for preserving fertility and restoring ovarian function. It remains elusive whether functional oocytes can be generated from adult female somatic cells through reprogramming to germline-competent pluripotent stem cells (gPSCs) by chemical treatment alone. Here, we show that somatic granulosa cells isolated from adult mouse ovaries can be robustly induced to generate gPSCs by a purely chemical approach, with additional Rock inhibition and critical reprogramming facilitated by sodium crotonate or crotonic acid. These gPSCs acquired high germline competency and could consistently be directed to differentiate into primordial-germ-cell-like cells and form functional oocytes that produce fertile mice. Moreover, gPSCs promoted by crotonylation and the derived germ cells exhibited longer telomeres and high genomic stability like PGCs in vivo, providing additional evidence supporting the safety and effectiveness of chemical induction, which is particularly important for germ cells in genetic inheritance.

    [Keywords: chemical reprogramming, pluripotent stem cell, oocyte, granulosa cell]

  73. 2019-zheng.pdf: ⁠, Yi Zheng, Xufeng Xue, Yue Shao, Sicong Wang, Sajedeh Nasr Esfahani, Zida Li, Jonathon M. Muncie, Johnathon N. Lakins, Valerie M. Weaver, Deborah L. Gumucio, Jianping Fu (2019-09-11; genetics  /​ ​​ ​editing):

    Early human embryonic development involves extensive lineage diversification, cell-fate specification and tissue patterning. Despite its basic and clinical importance, early human embryonic development remains relatively unexplained owing to interspecies divergence and limited accessibility to human embryo samples. Here we report that human pluripotent stem cells (hPSCs) in a microfluidic device recapitulate, in a highly controllable and scalable fashion, landmarks of the development of the epiblast and amniotic ectoderm parts of the conceptus, including lumenogenesis of the epiblast and the resultant pro-amniotic cavity, formation of a bipolar embryonic sac, and specification of primordial germ cells and primitive streak cells. We further show that amniotic ectoderm-like cells function as a signalling centre to trigger the onset of gastrulation-like events in hPSCs. Given its controllability and scalability, the microfluidic model provides a powerful experimental system to advance knowledge of human embryology and reproduction. This model could assist in the rational design of differentiation protocols of hPSCs for disease modelling and cell therapy, and in high-throughput drug and toxicity screens to prevent pregnancy failure and birth defects.

  74. 2019-05-06-theexpresstribune-80percentofsouthkoreassnifferdogsarecloned.html: {#linkBibliography-tribune)-2019 .docMetadata}, APP (The Express Tribune) (2019-05-06; genetics  /​ ​​ ​editing):

    Some 80% of active sniffer dogs deployed by South Korea’s quarantine agency are cloned, data showed Monday, as activists express their concerns over potential animal abuse. According to the Animal and Plant Quarantine Agency, 42 of its 51 sniffer dogs were cloned from parent animals as of April, indicating such cloned detection dogs are already making substantial contributions to the country’s quarantine activities. The number of cloned dogs first outpaced their naturally born counterparts in 2014, the agency said. Of the active cloned dogs, 39 are currently deployed at Incheon International Airport, the country’s main gateway.

    Deploying cloned dogs can save time and money over training naturally born puppies as they maintain the outstanding traits of their parents, whose capabilities have already been verified in the field, according to experts. While the average cost of raising one detection dog is over 100 million won (US$85,600), it is less than half that when utilising cloned puppies, they said.

  75. 2019-south.pdf: ⁠, Paul F. South, Amanda P. Cavanagh (2019-01-01; genetics  /​ ​​ ​editing):

    Photorespiration is required in C3 plants to metabolize toxic glycolate formed when ribulose-1,5-bisphosphate carboxylase-oxygenase oxygenates rather than carboxylates ribulose-1,5-bisphosphate. Depending on growing temperatures, photorespiration can reduce yields by 20 to 50% in C3 crops. Inspired by earlier work, we installed into tobacco chloroplasts synthetic glycolate metabolic pathways that are thought to be more efficient than the native pathway. Flux through the synthetic pathways was maximized by inhibiting glycolate export from the chloroplast. The synthetic pathways tested improved photosynthetic quantum yield by 20%. Numerous homozygous transgenic lines increased biomass productivity between 19 and 37% in replicated field trials. These results show that engineering alternative glycolate metabolic pathways into crop chloroplasts while inhibiting glycolate export into the native pathway can drive increases in C3 crop yield under agricultural field conditions.

  76. 2019-grunwald.pdf: ⁠, Hannah A. Grunwald, Valentino M. Gantz, Gunnar Poplawski, Xiang-Ru S. Xu, Ethan Bier & Kimberly L. Cooper (2019-01-23; genetics  /​ ​​ ​editing):

    A gene drive biases the transmission of one of the two copies of a gene such that it is inherited more frequently than by random segregation. Highly efficient gene drive systems have recently been developed in insects, which leverage the sequence-targeted DNA cleavage activity of CRISPR-Cas9 and endogenous homology-directed repair mechanisms to convert heterozygous genotypes to homozygosity. If implemented in laboratory rodents, similar systems would enable the rapid assembly of currently impractical genotypes that involve multiple homozygous genes (for example, to model multigenic human diseases). To our knowledge, however, such a system has not yet been demonstrated in mammals. Here we use an active genetic element that encodes a guide RNA, which is embedded in the mouse tyrosinase (Tyr) gene, to evaluate whether targeted gene conversion can occur when CRISPR-Cas9 is active in the early embryo or in the developing germline. Although Cas9 efficiently induces double-stranded DNA breaks in the early embryo and male germline, these breaks are not corrected by homology-directed repair. By contrast, Cas9 expression limited to the female germline induces double-stranded breaks that are corrected by homology-directed repair, which copies the active genetic element from the donor to the receiver chromosome and increases its rate of inheritance in the next generation. These results demonstrate the feasibility of CRISPR-Cas9-mediated systems that bias inheritance of desired alleles in mice and that have the potential to transform the use of rodent models in basic and biomedical research.

  77. ⁠, Christine Tait-Burkard, Andrea Doeschl-Wilson, Mike J. McGrew, Alan L. Archibald, Helen M. Sang, Ross D. Houston, C. Bruce Whitelaw, Mick Watson (2018):

    The human population is growing, and as a result we need to produce more food whilst reducing the impact of farming on the environment. Selective breeding and genomic selection have had a transformational impact on livestock productivity, and now transgenic and genome-editing technologies offer exciting opportunities for the production of fitter, healthier and more-productive livestock. Here, we review recent progress in the application of genome editing to farmed animal species and discuss the potential impact on our ability to produce food.

  78. {#linkBibliography-(nyt)-2019 .docMetadata}, Sui-Lee Wee (NYT) (2019-12-30):

    A court in China on Monday sentenced He Jiankui, the researcher who shocked the global scientific community when he claimed that he had created the world’s first genetically edited babies, to three years in prison for carrying out “illegal medical practices.” In a surprise announcement from a trial that was closed to the public, the court in the southern city of Shenzhen found Dr. He guilty of forging approval documents from ethics review boards to recruit couples in which the man had H.I.V. and the woman did not, Xinhua, China’s official news agency, reported. Dr. He had said he was trying to prevent H.I.V. infections in newborns, but the state media on Monday said he deceived the subjects and the medical authorities alike.

    Dr. He, 35, sent the scientific world into an uproar last year when he announced at a conference in Hong Kong that he had created the world’s first genetically edited babies—twin girls. On Monday, China’s state media said his work had resulted in a third genetically edited baby, who had been previously undisclosed.

    Dr. He pleaded guilty and was also fined $430,000, according to Xinhua. In a brief trial, the court also handed down prison sentences to two other scientists who it said had “conspired” with him: Zhang Renli, who was sentenced to two years in prison, and Qin Jinzhou, who got a suspended sentence of one and a half years…The court said the trial had to be closed to the public to guard the privacy of the people involved.

  79. ⁠, Guillaume Laval, Etienne Patin, Pierre Boutillier, Lluis Quintana-Murci (2019-12-23):

    Over the last 100,000 years, humans have spread across the globe and encountered a highly diverse set of environments to which they have had to adapt. Genome-wide scans of selection are powerful tools for detecting selective sweeps. However, because of unknown fractions of undetected sweeps and false discoveries, the numbers of detected sweeps often poorly reflect actual numbers of selective sweeps in populations. The thousands of soft sweeps on standing variation recently evidenced in humans have also been interpreted as largely mis-classified neutral regions. In such a context, the extent of human adaptation remains little understood. We present a new rationale to estimate the actual numbers of sweeps expected over the last 100,000 years (denoted by X) from genome-wide population data, considering both hard sweeps and selective sweeps on standing variation. We implemented an approximate Bayesian computation framework and showed, based on computer simulations, that such a method can properly estimate X. We then jointly estimated the number of selective sweeps, their mean intensity and age in several 1000G African, European and Asian populations. Our estimates of X, found to be weakly sensitive to demographic misspecifications, revealed very limited numbers of sweeps regardless of the frequency of the selected alleles at the onset of selection and the completion of sweeps. We estimated ~80 sweeps on average across fifteen 1000G populations when assuming incomplete sweeps only, and ~140 selective sweeps in non-African populations when incorporating complete sweeps in our simulations. The method proposed may help to address controversies on the number of selective sweeps in populations, guiding further genome-wide investigations of recent positive selection.
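    The core approximate-Bayesian-computation idea can be sketched in toy form (detection and false-positive rates hypothetical, far simpler than the paper's simulations): the latent number of sweeps X is inferred from a detected-sweep count that both under-counts true sweeps and includes false positives.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy generative model: each of X true sweeps is detected with probability
    # p_detect, and the scan adds a Poisson number of false positives, so the
    # observed count poorly reflects X (hypothetical rates, not from the paper).
    p_detect, fp_rate = 0.4, 10

    def simulate_detected(x):
        return rng.binomial(x, p_detect) + rng.poisson(fp_rate)

    observed = simulate_detected(80)   # stand-in for the real genome-scan count

    # ABC rejection sampling: draw X from a flat prior, simulate the detected
    # count, and keep the draws whose count exactly matches the observation.
    prior = rng.integers(0, 300, size=200_000)
    detected = rng.binomial(prior, p_detect) + rng.poisson(fp_rate, prior.size)
    posterior = prior[detected == observed]
    print(round(posterior.mean()))     # posterior mean estimate of X
    ```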

  80. ⁠, Evan L. MacLean, Noah Snyder-Mackler, Bridgett M. vonHoldt, James A. Serpell (2019-01-01):

    Variation across dog breeds presents a unique opportunity for investigating the evolution and biological basis of complex behavioral traits. We integrated behavioral data from more than 17,000 dogs from 101 breeds with breed-averaged genotypic data (n = 5,697 dogs) from over 100,000 loci in the dog genome. Across 14 traits, we found that breed differences in behavior are highly heritable, and that clustering of breeds based on behavior accurately recapitulates genetic relationships. We identify 131 single nucleotide polymorphisms associated with breed differences in behavior, which are found in genes that are highly expressed in the brain and enriched for neurobiological functions and developmental processes. Our results provide insight into the heritability and genetic architecture of complex behavioral traits, and suggest that dogs provide a powerful model for these questions.

  81. 2019-horschler.pdf: ⁠, Daniel J. Horschler, Brian Hare, Josep Call, Juliane Kaminski, Ádám Miklósi, Evan L. MacLean (2019-01-03; iq):

    Large-scale phylogenetic studies of animal cognition have revealed robust links between absolute brain volume and species differences in executive function. However, past comparative samples have been composed largely of primates, which are characterized by evolutionarily derived neural scaling rules. Therefore, it is currently unknown whether positive associations between brain volume and executive function reflect a broad-scale evolutionary phenomenon, or alternatively, a unique consequence of primate brain evolution. Domestic dogs provide a powerful opportunity for investigating this question due to their close genetic relatedness, but vast intraspecific variation. Using citizen science data on more than 7000 purebred dogs from 74 breeds, and controlling for genetic relatedness between breeds, we identify strong relationships between estimated absolute brain weight and breed differences in cognition. Specifically, larger-brained breeds performed statistically-significantly better on measures of short-term memory and self-control. However, the relationships between estimated brain weight and other cognitive measures varied widely, supporting domain-specific accounts of cognitive evolution. Our results suggest that evolutionary increases in brain size are positively associated with taxonomic differences in executive function, even in the absence of primate-like neuroanatomy. These findings also suggest that variation between dog breeds may present a powerful model for investigating correlated changes in neuroanatomy and cognition among closely related taxa.

  82. ⁠, Milla Salonen, Katariina Vapalahti, Katriina Tiira, Asko Mäki-Tanila, Hannes Lohi (2019-05-28):

    Cat domestication and selective breeding have resulted in tens of breeds with major morphological differences. These breeds may also show distinctive behaviour differences, which, however, have been poorly studied. To improve the understanding of feline behaviour, we examined whether behavioural differences exist among breeds and whether behaviour is heritable. For these aims, we utilized our extensive health and behaviour questionnaire directed to cat owners and collected survey data on 5726 cats. Firstly, for studying breed differences, we utilized models with multiple environmental factors and discovered behaviour differences in 19 breeds and breed groups in ten different behaviour traits. Secondly, the studied cat breeds grouped into four clusters, with the Turkish Van and Angora cats alone forming one of them. These findings indicate that cat breeds have diverged not only morphologically but also behaviourally. Thirdly, we estimated heritability in three breeds and obtained moderate heritability estimates in seven studied traits, varying from 0.40 to 0.53, as well as phenotypic and genetic correlations for several trait pairs. Our results show that it is possible to partition the observed variation in behaviour traits into genetic and environmental components, and that substantial genetic variation exists within breed populations.

  83. 2018-keller.pdf: ⁠, Matthew C. Keller (2018-05; genetics  /​ ​​ ​selection):

    Evolutionary medicine uses evolutionary theory to help elucidate why humans are vulnerable to disease and disorders. I discuss two different types of evolutionary explanations that have been used to help understand human psychiatric disorders.

    First, a consistent finding is that psychiatric disorders are moderately to highly heritable, and many, such as schizophrenia, are also highly disabling and appear to decrease Darwinian fitness. Models used in evolutionary genetics to understand why genetic variation exists in fitness-related traits can be used to understand why risk alleles for psychiatric disorders persist in the population. The usual explanation for species-typical adaptations—natural selection—is less useful for understanding individual differences in genetic risk to disorders. Rather, two other types of models, mutation-selection-drift and balancing selection, offer frameworks for understanding why genetic variation in risk to psychiatric (and other) disorders exists, and each makes predictions that are now testable using whole-genome data.

    Second, species-typical capacities to mount reactions to negative events are likely to have been crafted by natural selection to minimize fitness loss. The pain reaction to tissue damage is almost certainly such an example, but it has been argued that the capacity to experience depressive symptoms such as sadness, anhedonia, crying, and fatigue in the face of adverse life situations may have been crafted by natural selection as well. I review the rationale and strength of evidence for this hypothesis.

    Evolutionary hypotheses of psychiatric disorders are important not only for offering explanations for why psychiatric disorders exist, but also for generating new, testable hypotheses and understanding how best to design studies and analyze data.

    [Keywords: evolution, psychiatric disorders, genetics, schizophrenia, depression]

  84. 2019-sella.pdf: ⁠, Guy Sella, Nicholas H. Barton (2019-06-21; genetics  /​ ​​ ​selection):

    Many traits of interest are highly heritable and genetically complex, meaning that much of the variation they exhibit arises from differences at numerous loci in the genome. Complex traits and their evolution have been studied for more than a century, but only in the last decade have genome-wide association studies (GWASs) in humans begun to reveal their genetic basis. Here, we bring these threads of research together to ask how findings from GWASs can further our understanding of the processes that give rise to heritable variation in complex traits and of the genetic basis of complex trait evolution in response to changing selection pressures (i.e., of polygenic adaptation). Conversely, we ask how evolutionary thinking helps us to interpret findings from GWASs and informs related efforts of practical importance.

    [Keywords: evolution, genome-wide association study, GWAS, quantitative genetics, complex traits, polygenic adaptation, genetic architecture]

  85. ⁠, Samantha L. Cox, Christopher B. Ruff, Robert M. Maier, Iain Mathieson (2019-07-02):

    The relative contributions of genetics and environment to temporal and geographic variation in human height remain largely unknown. Ancient DNA has identified changes in genetic ancestry over time, but it is not clear whether those changes in ancestry are associated with changes in height. Here, we directly test whether changes over the past 38,000 years in European height predicted using DNA from 1071 ancient individuals are consistent with changes observed in 1159 skeletal remains from comparable populations. We show that the observed decrease in height between the Early Upper Paleolithic and the Mesolithic is qualitatively predicted by genetics. Similarly, both skeletal and genetic height remained constant between the Mesolithic and Neolithic and increased between the Neolithic and Bronze Age. Sitting height changes much less than standing height–consistent with genetic predictions–although genetics predicts a small Bronze Age increase that is not observed in skeletal remains. Geographic variation in stature is also qualitatively consistent with genetic predictions, particularly with respect to latitude. We find that the changes in genetic height between the Neolithic and Bronze Age may be driven by polygenic adaptation. Finally, we hypothesize that an observed decrease in genetic heel bone mineral density in the Neolithic reflects adaptation to the decreased mobility indicated by decreased femoral bending strength. This study provides a model for interpreting phenotypic changes predicted from ancient DNA and demonstrates how they can be combined with phenotypic measurements to understand the relative contribution of genetic and developmentally plastic responses to environmental change.
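
    The "genetic height" predictions here rest on polygenic scores: summing per-locus GWAS effect sizes weighted by each individual's allele dosages. A toy sketch, with random numbers standing in for real genotypes and effect sizes:

```python
import numpy as np

# Predicted "genetic height" as a polygenic score: for each individual,
# sum allele dosages (0/1/2 copies) weighted by per-locus GWAS effect
# sizes. All numbers below are random stand-ins, not real genetics.
rng = np.random.default_rng(1)
n_individuals, n_loci = 5, 100
effect_sizes = rng.normal(0.0, 0.1, n_loci)            # cm per allele copy
dosages = rng.integers(0, 3, (n_individuals, n_loci))  # genotypes

mean_height = 165.0                                    # population baseline
predicted_height = mean_height + dosages @ effect_sizes
```

    Comparing such genotype-only predictions against measured skeletal stature is what lets the paper separate genetic change from developmentally plastic responses.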

  86. ⁠, Alec Radford, Jeffrey Wu, Dario Amodei, Daniela Amodei, Jack Clark, Miles Brundage, Ilya Sutskever () (2019-02-14):

    Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.

    GPT-2 is a large -based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10× the parameters and trained on more than 10× the amount of data.

    GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data. While scores on these downstream tasks are far from state-of-the-art, they suggest that the tasks can benefit from unsupervised techniques, given sufficient (unlabeled) data and compute.
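
    The training objective is just next-word prediction. A drastically simplified sketch, with a bigram counter standing in for GPT-2's Transformer, shows the quantity being minimized (average negative log-likelihood of each word given its context):

```python
import math
from collections import Counter, defaultdict

# Tiny corpus standing in for the 40GB of Internet text; a bigram
# counter stands in (very crudely) for the Transformer.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(prev):
    """Predicted distribution over the next word given the previous one."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The training objective: average negative log-likelihood of each word
# given its context (for GPT-2, the context is *all* previous words).
nll = -sum(math.log(next_word_probs(p)[n])
           for p, n in zip(corpus, corpus[1:])) / (len(corpus) - 1)
```

    Sampling from `next_word_probs` repeatedly is, in miniature, how conditional text generation works; GPT-2's capabilities come from doing this with a vastly larger context and model.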

  87. 2019-radford.pdf#openai: ⁠, Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever (2019-02-14; ai):

    Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

    We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset—matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples.

    The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text.

    These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.

  88. ⁠, NVIDIA ADLR (2019-08-13):

    Larger language models are dramatically more useful for NLP tasks such as article completion, question answering, and dialog systems. Training the largest neural language model has recently been the best way to advance the state of the art in NLP applications. Two recent papers, BERT and GPT-2, demonstrate the benefits of large scale language modeling. Both papers leverage advances in compute and available text corpora to substantially surpass state of the art performance in natural language understanding, modeling, and generation. Training these models requires hundreds of exaflops of compute and clever memory management to trade recomputation for a reduced memory footprint. However, for very large models beyond a billion parameters, the memory on a single GPU is not enough to fit the model along with the parameters needed for training, requiring model parallelism to split the parameters across multiple GPUs. Several approaches to model parallelism exist, but they are difficult to use, either because they rely on custom compilers, or because they scale poorly or require changes to the optimizer.

    In this work, we implement a simple and efficient model parallel approach by making only a few targeted modifications to existing PyTorch transformer implementations. Our code is written in native Python, leverages mixed precision training, and utilizes the NCCL library for communication between GPUs. We showcase this approach by training an 8.3 billion parameter transformer language model with 8-way model parallelism and 64-way data parallelism on 512 GPUs, making it the largest transformer based language model ever trained at 24× the size of BERT and 5.6× the size of GPT-2. We have published the code that implements this approach at our GitHub repository⁠.

    Our experiments are conducted on NVIDIA’s DGX SuperPOD⁠. Without model parallelism, we can fit a baseline model of 1.2B parameters on a single 32GB V100 GPU, and sustain 39 TeraFLOPS during the overall training process, which is 30% of the theoretical peak FLOPS for a single GPU in a DGX-2H server. Scaling the model to 8.3 billion parameters on 512 GPUs with 8-way model parallelism, we achieved up to 15.1 PetaFLOPS sustained performance over the entire application and reached 76% scaling efficiency compared to the single GPU case.
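
    The core of the model-parallel approach, splitting a layer's weight matrix across GPUs, can be sketched in NumPy with array shards standing in for per-GPU memory; this illustrates column-parallel matrix multiplication in general, not Megatron's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))   # activations: batch of 4, hidden size 6
A = rng.standard_normal((6, 8))   # weight matrix to split across "GPUs"

n_gpus = 4
# Column-parallel split: each simulated GPU holds a slice of A's columns.
shards = np.split(A, n_gpus, axis=1)

# Each "GPU" computes its partial output independently; a column split
# needs no communication during this forward matmul...
partials = [X @ shard for shard in shards]

# ...and an all-gather (here: concatenate) reassembles the full output.
Y_parallel = np.concatenate(partials, axis=1)
Y_serial = X @ A                   # single-device reference
assert np.allclose(Y_parallel, Y_serial)
```

    Choosing splits so that communication is deferred to a few all-reduce/all-gather points per layer is what lets the approach work with "only a few targeted modifications" to an existing Transformer implementation.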

  89. ⁠, Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu (2019-10-23):

    Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
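
    The unifying move is representing every task as text in, text out. A sketch of what such a conversion looks like (the task prefixes below are illustrative; T5's exact prefixes may differ):

```python
def to_text_to_text(task, **fields):
    """Render heterogeneous NLP tasks as (input text, target text) pairs,
    in the spirit of T5's unified format. Prefixes are illustrative."""
    if task == "translate":
        return (f"translate English to German: {fields['text']}",
                fields["translation"])
    if task == "classify":
        return (f"sentiment: {fields['text']}", fields["label"])
    if task == "summarize":
        return (f"summarize: {fields['text']}", fields["summary"])
    raise ValueError(f"unknown task: {task}")

src, tgt = to_text_to_text("translate",
                           text="That is good.",
                           translation="Das ist gut.")
```

    Once every task is a string-to-string pair, a single sequence model, loss, and decoding procedure covers translation, classification, and summarization alike, which is what makes the systematic comparison in the paper possible.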

  90. ⁠, Huggingface ():

    🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, …) for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch.


    • As easy to use as pytorch-transformers
    • As powerful and concise as Keras
    • High performance on NLU and NLG tasks
    • Low barrier to entry for educators and practitioners

    State-of-the-art NLP for everyone:

    • Deep learning researchers
    • Hands-on practitioners
    • AI/​​​​​​ML/​​​​​​NLP teachers and educators

    Lower compute costs, smaller carbon footprint:

    • Researchers can share trained models instead of always retraining
    • Practitioners can reduce compute time and production costs
    • 10 architectures with over 30 pretrained models, some in more than 100 languages

    Choose the right framework for every part of a model’s lifetime:

    • Train state-of-the-art models in 3 lines of code
    • Deep interoperability between TensorFlow 2.0 and PyTorch models
    • Move a single model between TF2.0/​​​​​​PyTorch frameworks at will
    • Seamlessly pick the right framework for training, evaluation, production

  91. ⁠, Mingxing Tan, Quoc V. Le (2019-05-28):

    Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/​​​​width/​​​​resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.

    To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4× smaller and 6.1× faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
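
    The compound scaling rule can be made concrete: depth, width, and resolution grow as α^φ, β^φ, γ^φ, constrained so α·β²·γ² ≈ 2, meaning FLOPs roughly double per unit of φ. A sketch using the coefficients reported in the paper (the baseline dimensions are made up for illustration):

```python
# Compound scaling: depth ∝ alpha^phi, width ∝ beta^phi,
# resolution ∝ gamma^phi, with alpha * beta**2 * gamma**2 ≈ 2,
# i.e. total FLOPs grow approximately as 2^phi.
alpha, beta, gamma = 1.2, 1.1, 1.15   # coefficients reported in the paper

def compound_scale(phi, base_depth=18, base_width=64, base_res=224):
    """Scale a baseline network's dimensions by compound coefficient phi
    (the baseline numbers here are made up for illustration)."""
    return (round(base_depth * alpha ** phi),
            round(base_width * beta ** phi),
            round(base_res * gamma ** phi))

flops_growth = alpha * beta ** 2 * gamma ** 2   # ≈ 1.92, close to target 2
```

    The point of the joint constraint is that scaling any one dimension alone saturates; balancing all three with a single knob φ is what produces the B0–B7 family.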

  92. ⁠, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry (2019-05-06):

    Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
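
    Adversarial examples of the kind discussed can be constructed with the classic fast gradient sign method (FGSM, from earlier work by Goodfellow et al, not this paper's contribution), shown here on a toy logistic model:

```python
import numpy as np

# A toy logistic "classifier" on 2 features; FGSM is used purely to
# illustrate adversarial perturbations, not this paper's experiments.
w = np.array([2.0, -3.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y):
    """Gradient of the logistic loss with respect to the *input* x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

x = np.array([1.0, 1.0])           # clean input with true label y = 1
y = 1.0
eps = 0.25                         # L-infinity perturbation budget
# FGSM: move each feature eps in the direction that increases the loss.
x_adv = x + eps * np.sign(loss_grad_x(x, y))

p_clean = sigmoid(w @ x + b)       # confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)     # confidence after the tiny perturbation
assert p_adv < p_clean
```

    The perturbation exploits exactly the kind of highly predictive but brittle input directions the paper calls non-robust features: each coordinate moves only a little, yet the model's confidence drops sharply.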


  94. ⁠, Christine Payne (OpenAI) (2019-04-25):

    We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.

    [See also: “Generating Long Sequences with Sparse Transformers”, Child et al 2019

    Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to 𝒪(n ⋅ √n). We also introduce (a) a variation on architecture and initialization to train deeper networks, (b) the recomputation of attention matrices to save memory, and (c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more. ]

  95. ⁠, Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever (2019-04-23):

    Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length.

    In this paper we introduce sparse factorizations of the attention matrix which reduce this to 𝒪(n ⋅ √n). We also introduce (1) a variation on architecture and initialization to train deeper networks, (2) the recomputation of attention matrices to save memory, and (3) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers.

    We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64.

    We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
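
    The factorized attention idea can be sketched as a sparsity mask: each position attends to a local window plus strided "summary" positions, so with stride ≈ √n the total number of attended pairs is 𝒪(n ⋅ √n) rather than 𝒪(n²). The pattern below is a simplification of the paper's strided attention, not OpenAI's kernels:

```python
import numpy as np

def strided_sparse_mask(n, stride):
    """Boolean causal attention mask in the spirit of the Sparse
    Transformer's strided pattern: position i attends to the previous
    `stride` positions and to every stride-th "summary" position."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):                       # causal: j <= i only
            local = (i - j) < stride                 # recent positions
            summary = (j % stride) == (stride - 1)   # strided columns
            mask[i, j] = local or summary
    return mask

n, stride = 64, 8                 # stride ≈ sqrt(n)
mask = strided_sparse_mask(n, stride)
dense = n * (n + 1) // 2          # connections in dense causal attention
sparse = int(mask.sum())          # grows like n * sqrt(n), not n**2
assert sparse < dense
```

    Because any position can reach any earlier position in two hops (local window, then a strided column), expressivity is largely preserved while memory and compute drop enough to handle sequences tens of thousands of timesteps long.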

  96. ⁠, Sizigi Studios (2019-07-23):

    [Waifu Labs is an interactive website for generating (1024px?) anime faces using a customized StyleGAN trained on Danbooru2018. Similar to ⁠, it supports face exploration and face editing, and at the end, a user can purchase prints of a particular face.]

    We taught a world-class artificial intelligence how to draw anime. All the drawings you see were made by a non-human artist! Wild, right? It turns out machines love waifus almost as much as humans do. We proudly present the next chapter of human history: lit waifu commissions from the world’s smartest AI artist. In less than 5 minutes, the artist learns your preferences to make the perfect waifu just for you.

  97. ⁠, Sagar Savla (2019-02-04):

    The World Health Organization (WHO) estimates that there are 466 million people globally who are deaf and hard of hearing. A crucial technology in empowering communication and inclusive access to the world’s information to this population is automatic speech recognition (ASR), which enables computers to detect audible languages and transcribe them into text for reading. Google’s ASR is behind automated captions in YouTube, presentations in Slides and also phone calls…Today, we’re announcing Live Transcribe, a free Android service that makes real-world conversations more accessible by bringing the power of automatic captioning into everyday, conversational use. Powered by Google Cloud, Live Transcribe captions conversations in real-time, supporting over 70 languages and more than 80% of the world’s population. You can launch it with a single tap from within any app, directly from the accessibility icon on the system tray.

    …Relying on cloud ASR provides us greater accuracy, but we wanted to reduce the network data consumption that Live Transcribe requires. To do this, we implemented an on-device neural network-based speech detector, built on our previous work with AudioSet. This network is an image-like model, similar to our published VGGish model, which detects speech and automatically manages network connections to the cloud ASR engine, minimizing data usage over long periods of use.

    …Known as the cocktail party problem, understanding a speaker in a noisy room is a major challenge for computers. To address this, we built an indicator that visualizes the volume of user speech relative to background noise. This also gives users instant feedback on how well the microphone is receiving the incoming speech from the speaker, allowing them to adjust the placement of the phone…Potential future improvements in mobile-based automatic speech transcription include on-device recognition, speaker-separation, and speech enhancement. Relying solely on transcription can have pitfalls that can lead to miscommunication. Our research with Gallaudet University shows that combining it with other auditory signals like speech detection and a loudness indicator, makes a tangibly meaningful change in communication options for our users.

  98. ⁠, Reddit ():

    Subreddit devoted to discussion of reinforcement learning research and projects, particularly deep reinforcement learning (more specialized than /r/MachineLearning). Major themes include deep learning, model-based vs model-free RL, robotics, multi-agent RL, exploration, meta-learning, imitation learning, the psychology of RL in biological organisms such as humans, and safety/​​​​AI risk. Moderate activity level (as of 2019-09-11): ~10k subscribers, ~2k pageviews daily.

  99. 2019-vinyals.pdf#deepmind: ⁠, Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, David Silver (2019-10-30; reinforcement-learning):

    Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional e-sports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions, the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.


  101. ⁠, OpenAI, Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, Jonas Schneider, Nikolas Tezak, Jerry Tworek, Peter Welinder, Lilian Weng, Qiming Yuan, Wojciech Zaremba, Lei Zhang (2019-10-16):

    We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot. This is made possible by two key components: a novel algorithm, which we call automatic domain randomization (ADR), and a robot platform built for machine learning. ADR automatically generates a distribution over randomized environments of ever-increasing difficulty. Control policies and vision state estimators trained with ADR exhibit vastly improved sim2real transfer. For control policies, memory-augmented models trained on an ADR-generated distribution of environments show clear signs of emergent meta-learning at test time. The combination of ADR with our custom robot platform allows us to solve a Rubik’s cube with a humanoid robot hand, which involves both control and state estimation problems. Videos summarizing our results are available: https://openai.com/blog/solving-rubiks-cube/
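
    ADR's core loop, widening randomization ranges whenever the policy succeeds at the current difficulty, can be sketched in a few lines; the thresholds, step size, and the stand-in "policy evaluation" below are illustrative, not OpenAI's values:

```python
def adr_update(bound, success_rate, widen=0.1,
               hi_threshold=0.8, lo_threshold=0.2):
    """One simplified ADR step: widen a randomization bound when the
    policy succeeds often at the current difficulty, narrow it when the
    policy struggles. Thresholds and step size are illustrative only."""
    if success_rate >= hi_threshold:
        return bound + widen              # make the environment harder
    if success_rate <= lo_threshold:
        return max(0.0, bound - widen)    # back off
    return bound                          # middling performance: hold

# Example: a bound on, say, randomized cube friction. The stand-in
# "policy evaluation" assumes success gets harder as the bound grows.
bound = 0.0
history = []
for step in range(50):
    success_rate = max(0.0, 1.0 - bound)
    bound = adr_update(bound, success_rate)
    history.append(bound)
# The bound ratchets up until the success rate falls below the widening
# threshold, then stabilizes at the policy's frontier of competence.
```

    This automatic curriculum is the "ever-increasing difficulty" in the abstract: the randomization distribution expands exactly as fast as the policy can keep up.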

  102. ⁠, OpenAI (2019-10-15):

    [On ⁠.]

    We’ve trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. The neural networks are trained entirely in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). The system can handle situations it never saw during training, such as being prodded by a stuffed giraffe. This shows that reinforcement learning isn’t just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.

    …Since May 2017, we’ve been trying to train a human-like robotic hand to solve the Rubik’s Cube. We set this goal because we believe that successfully training such a robotic hand to do complex manipulation tasks lays the foundation for general-purpose robots. We solved the Rubik’s Cube in simulation in July 2017. But as of July 2018, we could only manipulate a block on the robot. Now, we’ve reached our initial goal. Solving a Rubik’s Cube one-handed is a challenging task even for humans, and it takes children several years to gain the dexterity required to master it. Our robot still hasn’t perfected its technique though, as it solves the Rubik’s Cube 60% of the time (and only 20% of the time for a maximally difficult scramble).

  103. ⁠, OpenAI: Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique P. d. O. Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang (2019-12-13):

    On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.

  104. ⁠, OpenAI (2019-12-13):

    At OpenAI, we’ve used the multiplayer video game Dota 2 as a research platform for general-purpose AI systems. Our Dota 2 AI, called OpenAI Five, learned by playing over 10,000 years of games against itself. It demonstrated the ability to achieve expert-level performance, learn human-AI cooperation, and operate at internet scale.

    [OpenAI final report on OA5: timeline, training curve, index of blog posts.]

  105. ⁠, Noam Brown, Tuomas Sandholm (2019-07-11):

    In recent years there have been great strides in artificial intelligence (AI), with games often serving as challenge problems, benchmarks, and milestones for progress. Poker has served for decades as such a challenge problem. Past successes in such benchmarks, including poker, have been limited to two-player games. However, poker in particular is traditionally played with more than two players. Multiplayer games present fundamental additional issues beyond those in two-player games, and multiplayer poker is a recognized AI milestone.

    In this paper we present Pluribus, an AI that we show is stronger than top human professionals in six-player no-limit Texas hold’em poker, the most popular form of poker played by humans.

    [Keywords: Monte Carlo CFR, state abstraction, Nash equilibrium]

  106. ⁠, Pedro A. Ortega, Jane X. Wang, Mark Rowland, Tim Genewein, Zeb Kurth-Nelson, Razvan Pascanu, Nicolas Heess, Joel Veness, Alex Pritzel, Pablo Sprechmann, Siddhant M. Jayakumar, Tom McGrath, Kevin Miller, Mohammad Azar, Ian Osband, Neil Rabinowitz, András György, Silvia Chiappa, Simon Osindero, Yee Whye Teh, Hado van Hasselt, Nando de Freitas, Matthew Botvinick, Shane Legg (2019-05-08):

    In this report we review memory-based meta-learning as a tool for building sample-efficient strategies that learn from past experience to adapt to any task within a target class. Our goal is to equip the reader with the conceptual foundations of this tool for building new, scalable agents that operate on broad domains. To do so, we present basic algorithmic templates for building near-optimal predictors and reinforcement learners which behave as if they had a probabilistic model that allowed them to efficiently exploit task structure. Furthermore, we recast memory-based meta-learning within a Bayesian framework, showing that the meta-learned strategies are near-optimal because they amortize Bayes-filtered data, where the adaptation is implemented in the memory dynamics as a state-machine of sufficient statistics. Essentially, memory-based meta-learning translates the hard problem of probabilistic sequential inference into a regression problem.
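
    The claim that meta-learned strategies "amortize Bayes-filtered data" can be checked in a toy setting (an illustration, not from the paper): for coin-flip prediction tasks whose biases are drawn from a uniform prior, the regression-optimal next-flip prediction given the observed history is exactly the Bayesian posterior mean, ie. Laplace's rule of succession. Any sufficiently flexible memory-based learner trained across such tasks is therefore regressing onto the Bayes-optimal predictor:

```python
import random

def laplace_rule(heads, n):
    """Bayes posterior-mean prediction of the next flip under a uniform (Beta(1,1)) prior."""
    return (heads + 1) / (n + 2)

def empirical_regression_target(n_tasks=200000, seq_len=5, seed=0):
    """Monte Carlo estimate of E[next flip | heads seen so far]: the target
    that a flexible memory-based predictor trained across tasks regresses onto."""
    rng = random.Random(seed)
    counts = {}  # heads_so_far -> (num_sequences, num_next-flip heads)
    for _ in range(n_tasks):
        bias = rng.random()  # each task draws its coin bias from the Uniform(0,1) prior
        flips = [rng.random() < bias for _ in range(seq_len + 1)]
        h = sum(flips[:seq_len])
        c, s = counts.get(h, (0, 0))
        counts[h] = (c + 1, s + flips[seq_len])
    return {h: s / c for h, (c, s) in counts.items()}

est = empirical_regression_target()
for h in sorted(est):
    # the empirical regression target matches the Bayes-optimal (Laplace) prediction
    print(h, round(est[h], 3), round(laplace_rule(h, 5), 3))
```

    The memory here is just the running count of heads: a sufficient statistic, matching the report's picture of adaptation implemented "in the memory dynamics as a state-machine of sufficient statistics".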

  107. ⁠, Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell, Demis Hassabis (2019-05-16):

    Recent AI research has given rise to powerful techniques for deep reinforcement learning. In their combination of representation learning with reward-driven behavior, deep reinforcement learning would appear to have inherent interest for psychology and neuroscience.

    One reservation has been that deep reinforcement learning procedures demand large amounts of training data, suggesting that these algorithms may differ fundamentally from those underlying human learning.

    While this concern applies to the initial wave of deep RL techniques, subsequent AI work has established methods that allow deep RL systems to learn more quickly and efficiently. Two particularly interesting and promising techniques center, respectively, on episodic memory and meta-learning. Alongside their interest as AI techniques, deep RL methods leveraging episodic memory and meta-learning have direct and interesting implications for psychology and neuroscience. One subtle but critically important insight which these techniques bring into focus is the fundamental connection between fast and slow forms of learning.

    Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. However, the concern has been raised that deep RL may be too sample-inefficient—that is, it may simply be too slow—to provide a plausible model of how humans learn. In the present review, we counter this critique by describing recently developed techniques that allow deep RL to operate more nimbly, solving problems much more quickly than previous methods. Although these techniques were developed in an AI context, we propose that they may have rich implications for psychology and neuroscience. A key insight, arising from these AI methods, concerns the fundamental connection between fast RL and slower, more incremental forms of learning.

  108. ⁠, Neil C. Rabinowitz (2019-05-03):

    Meta-learning is a tool that allows us to build sample-efficient learning systems. Here we show that, once meta-trained, LSTM Meta-Learners aren’t just faster learners than their sample-inefficient deep learning (DL) and reinforcement learning (RL) brethren, but that they actually pursue fundamentally different learning trajectories. We study their learning dynamics on three sets of structured tasks for which the corresponding learning dynamics of DL and RL systems have been previously described: linear regression (Saxe et al., 2013), nonlinear regression (Rahaman et al., 2018; Xu et al., 2018), and contextual bandits (Schaul et al., 2019). In each case, while sample-inefficient DL and RL Learners uncover the task structure in a staggered manner, meta-trained Meta-Learners uncover almost all task structure concurrently, congruent with the patterns expected from Bayes-optimal inference algorithms. This has implications for research areas wherever the learning behaviour itself is of interest, such as safety, curriculum design, and human-in-the-loop machine learning.

  109. ⁠, Tom Schaul, Diana Borsa, Joseph Modayil, Razvan Pascanu (2019-04-25):

    Rather than proposing a new method, this paper investigates an issue present in existing learning algorithms. We study the learning dynamics of reinforcement learning (RL), specifically a characteristic coupling between learning and data generation that arises because RL agents control their future data distribution. In the presence of function approximation, this coupling can lead to a problematic type of ‘ray interference’, characterized by learning dynamics that sequentially traverse a number of performance plateaus, effectively constraining the agent to learn one thing at a time even when learning in parallel is better. We establish the conditions under which ray interference occurs, show its relation to saddle points and obtain the exact learning dynamics in a restricted setting. We characterize a number of its properties and discuss possible remedies.

  110. ⁠, Lilian Weng (2019-06-23):

    [Review/​​​​discussion] Meta-RL is meta-learning on reinforcement learning tasks. After being trained over a distribution of tasks, the agent is able to solve a new task by developing a new RL algorithm with its internal activity dynamics. This post starts with the origin of meta-RL and then dives into three key components of meta-RL… A good meta-learning model is expected to generalize to new tasks or new environments that have never been encountered during training. The adaptation process, essentially a mini learning session, happens at test time with limited exposure to the new configurations. Even without any explicit fine-tuning (no gradient updates on trainable variables), the meta-learning model autonomously adjusts its internal hidden states to learn.

  111. ⁠, C. Daniel Freeman, Luke Metz, David Ha (2019-10-29):

    [HTML version of Freeman et al 2019, with videos.]

    Much of model-based reinforcement learning involves learning a model of an agent’s world, and training an agent to leverage this model to perform a task more efficiently. While these models are demonstrably useful for agents, every naturally occurring model of the world of which we are aware—eg., a brain—arose as the byproduct of competing evolutionary pressures for survival, not minimization of a supervised forward-predictive loss via gradient descent. That useful models can arise out of the messy and slow optimization process of evolution suggests that forward-predictive modeling can arise as a side-effect of optimization under the right circumstances. Crucially, this optimization process need not explicitly be a forward-predictive loss. In this work, we introduce a modification to traditional reinforcement learning which we call observational dropout, whereby we limit the agent’s ability to observe the real environment at each timestep. In doing so, we can coerce an agent into learning a world model to fill in the observation gaps during reinforcement learning. We show that the emerged world model, while not explicitly trained to predict the future, can help the agent learn key skills required to perform well in its environment.
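
    The observational-dropout mechanism is simple to sketch (a hypothetical minimal loop, not the authors' code): at each timestep the agent sees the real observation only with some small probability, and otherwise must act on its own world model's rolled-forward prediction, so that the model is forced to fill the gaps.

```python
import random

def observational_dropout_rollout(env_step, world_model, policy, obs0,
                                  p_observe=0.1, steps=50, rng=None):
    """Run one episode in which the agent only 'peeks' at the real observation
    with probability p_observe; otherwise it acts on the world model's prediction."""
    rng = rng or random.Random(0)
    real_obs, agent_obs = obs0, obs0
    peeks = []
    for _ in range(steps):
        action = policy(agent_obs)
        real_obs = env_step(real_obs, action)       # true environment dynamics
        predicted = world_model(agent_obs, action)  # model's forward prediction
        peek = rng.random() < p_observe
        agent_obs = real_obs if peek else predicted
        peeks.append(peek)
    return agent_obs, peeks

# Tiny worked example: a 1-D walk environment with an (illustrative) perfect model.
env = lambda s, a: s + a
model = lambda s, a: s + a   # perfect model: prediction matches true dynamics
policy = lambda s: 1         # always step right
final, peeks = observational_dropout_rollout(env, model, policy, obs0=0)
print(final)  # 50: with a perfect model, dropout loses no information
```

    With an imperfect model, the agent's belief drifts between peeks, and only models that capture the true dynamics let the policy keep performing; that pressure, rather than a predictive loss, is what shapes the emergent world model.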

  112. ⁠, C. Daniel Freeman, Luke Metz, David Ha (2019-10-29):

    Much of model-based reinforcement learning involves learning a model of an agent’s world, and training an agent to leverage this model to perform a task more efficiently. While these models are demonstrably useful for agents, every naturally occurring model of the world of which we are aware—e.g., a brain—arose as the byproduct of competing evolutionary pressures for survival, not minimization of a supervised forward-predictive loss via gradient descent. That useful models can arise out of the messy and slow optimization process of evolution suggests that forward-predictive modeling can arise as a side-effect of optimization under the right circumstances. Crucially, this optimization process need not explicitly be a forward-predictive loss. In this work, we introduce a modification to traditional reinforcement learning which we call observational dropout, whereby we limit the agent’s ability to observe the real environment at each timestep. In doing so, we can coerce an agent into learning a world model to fill in the observation gaps during reinforcement learning. We show that the emerged world model, while not explicitly trained to predict the future, can help the agent learn key skills required to perform well in its environment. Videos of our results available at https:/​​​​/​​​​​​​​

  113. ⁠, David Abel (2019-06):

    The 2019 ICML edition of David Abel’s famous conference notes: he goes to as many presentations and talks as possible, jotting down opinionated summaries & equations, with a particular focus on DRL. Topics covered:

    Tutorial: PAC-Bayes Theory (Part II) · PAC-Bayes Theory · PAC-Bayes and Task Awareness · Tutorial: Meta-Learning · Two Ways to View Meta-Learning · Meta-Learning Algorithms · Meta-Reinforcement Learning · Challenges and Frontiers in Meta Learning · Tuesday June: Main Conference Best Paper Talk: Challenging Assumptions in Learning Disentangled Representations Contributed Talks: Deep RL · and Time Discretization · Nonlinear Distributional Gradient TD Learning · Composing Entropic Policies using Divergence Correction · TibGM: A Graphical Model Approach for RL · Multi-Agent Adversarial IRL · Policy Consolidation for Continual RL · Off-Policy Evaluation Deep RL w/o Exploration · Random Expert Distillation · Revisiting the Softmax Bellman Operator · Contributed Talks: RL Theory · Distributional RL for Efficient Exploration · Optimistic Policy Optimization via Importance Sampling · Neural Logic RL · Learning to Collaborate in MDPs · Predictor-Corrector Policy Optimization · Learning a Prior over Intent via Meta IRL · DeepMDP: Learning Latent Space Models for RL · Importance Sampling Policy Evaluation · Learning from a Learner · Separating Value Functions Across Time-Scales · Learning Action Representations in RL · Bayesian Counterfactual Risk Minimization · Per-Decision Option Counting · Problem Dependent Regret Bounds in RL · A Theory of Regularized MDPs · Discovering Options for Exploration by Minimizing Cover Time · Policy Certificates: Towards Accountable RL · Action Robust RL · The Value Function Polytope · Wednesday June: Main Conference Contributed Talks: Multitask and Lifelong Learning · Domain Agnostic Learning with Disentangled Representations · Composing Value Functions in RL · CAVIA: Fast Context Adaptation via Meta Learning · Gradient Based Meta-Learning · Towards Understanding Knowledge Distillation · Transferable Adversarial Training · Contributed Talks: RL Theory · Provably Efficient Imitation Learning from Observation Alone · Dead Ends and Secure Exploration · Statistics and Samples in Distributional RL · Hessian Aided Policy Gradient · Maximum Entropy Exploration · Combining Multiple Models for Off-Policy Evaluation · Sample-Optimal Parametric Q-Learning Using Linear Features · Transfer of Samples in Policy Search · Exploration Conscious RL Revisited · Kernel Based RL in Robust MDPs · Thursday June: Main Conference Contributed Talks: RL · Batch Policy Learning under Constraints · Quantifying Generalization in RL · Learning Dynamics for Planning from Pixels · Projections for Approximate Policy Iteration · Learning Structured Decision Problems with Unawareness · Calibrated Model-Based Deep RL · RL in Configurable Continuous Environments · Target-Based Temporal-Difference Learning · Linearized Control: Stable Algorithms and Complexity Guarantees · Contributed Talks: Deep Learning Theory · Why do Larger Models Generalize Better? · On the Spectral Bias of Neural Nets · Recursive Sketches for Modular Deep Learning · Zero-Shot Knowledge Distillation in Deep Networks · Convergence Theory for Deep Learning via Over-Parameterization · Best Paper Award: Rates of Convergence for Sparse Gaussian Process Regression · Friday June: Workshops Workshop: AI for Climate Change · John Platt on What ML can do to help Climate Change · Jack Kelly: Why It’s Hard to Mitigate Climate Change, and How to Do Better · Andrew Ng: Tackling Climate Change with AI through Collaboration · Workshop: RL for Real Life · Panel Discussion · Workshop: Real World Sequential Decision Making · Emma Brunskill on Efficient RL When Data is Costly · Miro Dudik: Doubly Robust Off-Policy Evaluation via Shrinkage

  114. ⁠, Tom Everitt, Marcus Hutter, Ramana Kumar, Victoria Krakovna (2019-08-13):

    Can humans get arbitrarily capable reinforcement learning (RL) agents to do their bidding? Or will sufficiently capable RL agents always find ways to bypass their intended objectives by shortcutting their reward signal? This question impacts how far RL can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we study when an RL agent has an instrumental goal to tamper with its reward process, and describe design principles that prevent instrumental goals for two different types of reward tampering (reward function tampering and RF-input tampering). Combined, the design principles can prevent both types of reward tampering from being instrumental goals. The analysis benefits from causal influence diagrams to provide intuitive yet precise formalizations.

  115. 2019-lortieforgues.pdf: ⁠, Hugues Lortie-Forgues, Matthew Inglis (2019-03-11; sociology):

    There are a growing number of large-scale educational randomized controlled trials (RCTs). Considering their expense, it is important to reflect on the effectiveness of this approach. We assessed the magnitude and precision of effects found in those large-scale RCTs commissioned by the UK-based Education Endowment Foundation and the U.S.-based National Center for Educational Evaluation and Regional Assistance, which evaluated interventions aimed at improving academic achievement in K–12 (141 RCTs; 1,222,024 students). The mean effect size was 0.06 standard deviations. These sat within relatively large confidence intervals (mean width = 0.30 SDs), which meant that the results were often uninformative (the median Bayes factor was 0.56). We argue that our field needs, as a priority, to understand why educational RCTs often find small and uninformative effects.

    [Keywords: educational policy, evaluation, ⁠, program evaluation.]
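
    The "uninformative" verdict follows largely from the arithmetic of effect-size estimation: the large-sample standard error of Cohen's d in a two-arm trial is roughly sqrt(4/n), so even trials enrolling thousands of students yield 95% CIs a few tenths of an SD wide, large relative to a 0.06 SD mean effect. (The approximation below is the standard one, not taken from the paper.)

```python
import math

def ci_width_cohens_d(n_total):
    """Approximate 95% CI width for Cohen's d in a two-arm trial with
    n_total participants split evenly, using the large-sample SE(d) ≈ sqrt(4/n)."""
    se = math.sqrt(4.0 / n_total)
    return 2 * 1.96 * se

for n in (200, 1000, 5000):
    print(n, round(ci_width_cohens_d(n), 2))  # 0.55, 0.25, 0.11 SDs respectively
```

    A trial of ~1,000 students thus cannot distinguish a 0.06 SD benefit from zero, consistent with the paper's observed mean CI width of 0.30 SDs.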

  116. ⁠, Diana Herrera-Perez, Alyson Haslam, Tyler Crain, Jennifer Gill, Catherine Livingston, Victoria Kaestner, Michael Hayes, Dan Morgan, Adam S. Cifu, Vinay Prasad (2019-06-11):

    The ability to identify medical reversals and other low-value medical practices is an essential prerequisite for efforts to reduce spending on such practices. Through an analysis of more than 3000 randomized controlled trials (RCTs) published in three leading medical journals (the Journal of the American Medical Association, the Lancet, and the New England Journal of Medicine), we have identified 396 medical reversals. Most of the studies (92%) were conducted on populations in high-income countries, cardiovascular disease was the most common medical category (20%), and medication was the most common type of intervention (33%).

  117. 2019-kvarven.pdf: ⁠, Amanda Kvarven, Eirik Strømland, Magnus Johannesson (2019-12-23; statistics  /​ ​​ ​bias):

    Many researchers rely on meta-analysis to summarize research evidence. However, there is a concern that publication bias and selective reporting may lead to biased meta-analytic effect sizes. We compare the results of meta-analyses to large-scale preregistered replications in psychology carried out at multiple laboratories. The multiple-laboratory replications provide precisely estimated effect sizes that do not suffer from publication bias or selective reporting. We searched the literature and identified 15 meta-analyses on the same topics as multiple-laboratory replications. We find that meta-analytic effect sizes are statistically-significantly different from replication effect sizes for 12 out of the 15 meta-replication pairs. These differences are systematic and, on average, meta-analytic effect sizes are almost 3 times as large as replication effect sizes. We also implement 3 methods of correcting meta-analysis for bias, but these methods do not substantively improve the meta-analytic results.
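
    This inflation is the expected signature of selection on statistical significance. A toy simulation (illustrative assumptions, not the paper's data) makes the mechanism concrete: simulate many underpowered studies of a small true effect, "publish" only those crossing the significance threshold, and the published mean effect is several times the truth.

```python
import random, statistics

def simulate(true_d=0.1, n_per_arm=30, n_studies=20000, seed=1):
    rng = random.Random(seed)
    se = (2.0 / n_per_arm) ** 0.5  # approximate SE of the estimated effect size
    estimates = [rng.gauss(true_d, se) for _ in range(n_studies)]
    # keep only studies crossing z = 1.96 (one-sided, for simplicity)
    published = [d for d in estimates if d / se > 1.96]
    return statistics.mean(estimates), statistics.mean(published)

all_mean, pub_mean = simulate()
# the published literature's mean effect is several times the true 0.1
print(round(all_mean, 2), round(pub_mean, 2))
```

    Preregistered multi-lab replications see the full `estimates` distribution; a meta-analysis of the literature sees only `published`, which is why bias-correction methods have so little raw material to work with.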

  118. ⁠, David Goodstein (1994):

    [On the end to the post-WWII Vannevar Bushian exponential growth of academia and consequences thereof: growth can’t go on forever, and it didn’t.]

    According to modern cosmology, the universe began with a big bang about 10 billion years ago, and it has been expanding ever since. If the density of mass in the universe is great enough, its gravitational force will cause that expansion to slow down and reverse, causing the universe to fall back in on itself. Then the universe will end in a cataclysmic event known as ‘the Big Crunch’. I would like to present to you a vaguely analogous theory of the history of science. The upper curve on Figure 1 was first made by historian Derek de Solla Price, sometime in the 1950s. It is a semilog plot of the cumulative number of scientific journals founded worldwide as a function of time…the growth of the profession of science, the scientific enterprise, is bound to reach certain limits. I contend that these limits have now been reached.

    …But after about 1970 and the Big Crunch, the gleaming gems produced at the end of the vast mining-and-sorting operation produced less often from American ore. Research professors and their universities, using ore imported from across the oceans, kept the machinery humming.

    …Let me finish by summarizing what I’ve been trying to tell you. We stand at an historic juncture in the history of science. The long era of exponential expansion ended decades ago, but we have not yet reconciled ourselves to that fact. The present social structure of science, by which I mean institutions, education, funding, publications and so on all evolved during the period of exponential expansion, before The Big Crunch. They are not suited to the unknown future we face. Today’s scientific leaders, in the universities, government, industry and the scientific societies are mostly people who came of age during the golden era, 1950–1970. I am myself part of that generation. We think those were normal times and expect them to return. But we are wrong. Nothing like it will ever happen again. It is by no means certain that science will even survive, much less flourish, in the difficult times we face. Before it can survive, those of us who have gained so much from the era of scientific elites and scientific illiterates must learn to face reality, and admit that those days are gone forever.

  119. ⁠, Scott Alexander (2019-04-22):

    [On the relationship between absolute population size, population growth, economic growth (absolute and per capita), innovation, ideas, and science: is the long exponential history of the progress of science, technology, and computing merely due to the accompanying exponential growth of the human population size after reaching a critical point where the Malthusian trap could be escaped and a new higher equilibrium sought, creating more possible researchers and enabling positive externalities? If so, then the end of exponential global population growth in the 1960s–1970s was also the end of the exponential era in human progress… At least until a new mode of exponential growth, such as artificial intelligence or brain emulations, begins.]

  120. ⁠, Patrick Collison, Tyler Cowen (2019-07-30):

    Progress itself is understudied. By ‘progress,’ we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries. For a number of reasons, there is no broad-based intellectual movement focused on understanding the dynamics of progress, or targeting the deeper goal of speeding it up. We believe that it deserves a dedicated field of study. We suggest inaugurating the discipline of ‘Progress Studies.’

    Before digging into what Progress Studies would entail, it’s worth noting that we still need a lot of progress. We haven’t yet cured all diseases; we don’t yet know how to solve climate change; we’re still a very long way from enabling most of the world’s population to live as comfortably as the wealthiest people do today; we don’t yet understand how best to predict or mitigate all kinds of natural disasters; we aren’t yet able to travel as cheaply and quickly as we’d like; we could be far better than we are at educating young people. The list of opportunities for improvement is still extremely long.

    …Plenty of existing scholarship touches on these topics, but it takes place in a highly fragmented fashion and fails to directly confront some of the most important practical questions.

    Imagine you want to know how to most effectively select and train the most talented students. While this is an important challenge facing educators, policy makers, and philanthropists, knowledge about how best to do so is dispersed across a very long list of different fields. Psychometrics literature investigates which tests predict success. Sociologists consider how networks are used to find talent. Anthropologists investigate how talent depends on circumstances, and a historiometric literature studies clusters of artistic creativity. There’s a lively debate about when and whether ‘10,000 hours of practice’ are required for truly excellent performance. The education literature studies talent-search programs such as the Center for Talented Youth. Personality psychologists investigate the extent to which openness or conscientiousness affect earnings. More recently, there’s work in sportometrics, looking at which numerical variables predict athletic success. In economics, Raj Chetty and his co-authors have examined the backgrounds and communities liable to best encourage innovators. Thinkers in these disciplines don’t necessarily attend the same conferences, publish in the same journals, or work together to solve shared problems.

    When we consider other major determinants of progress, we see insufficient engagement with the central questions. For example, there’s a growing body of evidence suggesting that management practices determine a great deal of the difference in performance between organizations. One recent study found that a particular intervention—teaching better management practices to firms in Italy—improved productivity by 49 percent over 15 years when compared with peer firms that didn’t receive the training. How widely does this apply, and can it be repeated? Economists have been learning that firm productivity commonly varies within a given sector by a factor of two or three, which implies that a priority in management science and organizational psychology should be understanding the drivers of these differences. In a related vein, we’re coming to appreciate more and more that organizations with higher levels of trust can delegate authority more effectively, thereby boosting their responsiveness and ability to handle problems. Organizations as varied as Y Combinator, MIT’s Radiation Lab, and ARPA have astonishing track records in catalyzing progress far beyond their confines. While research exists on all of these fronts, we’re underinvesting considerably. These examples collectively indicate that one of our highest priorities should be figuring out interventions that increase the efficacy, productivity, and innovative capacity of human organizations…

  121. ⁠, Stefan Torges (2019-05-16):

    This post tries to answer the question of what qualities make some research teams more effective than others. I was particularly interested in learning more about “disruptive” research teams, ie. research teams that have an outsized impact on (1) the research landscape itself (eg. by paving the way for new fields or establishing a new paradigm), and/​​​​or (2) society at large (eg. by shaping technology or policy). However, I expect the conclusions to be somewhat relevant for all research teams…

    Key findings: excellent researchers have individual qualities and diversity, with shared direction, purposeful vision, concrete goals, leadership, and removal of inconveniences. Their organizations emphasize autonomy & self-organization, organic decentralized collaboration (possibly with metrics, goal-setting, and incentives), spaces for interaction, shared physical space, shared ‘psychological spaces’ and forced interaction combined with psychological safety. Teams are small, seek external input and feedback, and value immaterial rewards.

    …Based on the findings above, these are the most important takeaways for our research team at the Foundational Research Institute (FRI) as I see them: (1) We should continue to apply a high bar for hiring researchers…(2) Currently, we have staff who either excel at leadership or at research but nobody who combines both skill sets. We would likely benefit substantially from such an addition to our team…(3) We should continue to provide our research staff with as much freedom and operational support as possible…(4) Currently, many of our researchers work remotely which seems to have higher costs than I previously thought. As a consequence, I have become more convinced that we should try to create a research office geared toward the needs of our research staff…(5) We should invest more time into creating psychological safety for our research staff. I’m not yet sure how to best proceed here…(6) It was worth it to invest time into developing a theory of change, ie., thinking about how exactly our research would lead to real-world changes when it comes to AI designs and deployment…(7) Organizing research workshops with other organizations focused on similar questions is worth it. We should also look into other formats of high-intensity in-person interaction.

  122. ⁠, AI Impacts (2019-02-15):

    Figure 0: The four main determinants of forecasting accuracy. GJP's tools: averaging of forecasts; selecting 'superforecasters' to over-weight in averaging; training of superforecasters; teaming up forecasters; aggregation algorithms to reweight further.

    Figure 0: The “four main determinants of forecasting accuracy.” This graph can be found here, in the GJP’s list of academic literature on this topic. The graph illustrates approximate relative effects. It will be discussed more in Section 2.

    Experience and data from the Good Judgment Project (GJP) provide important evidence about how to make accurate predictions. For a concise summary of the evidence and what we learn from it, see this page⁠. For a review of Superforecasting⁠, the popular book written on the subject, see this blog⁠.

    This post explores the evidence in more detail, drawing from the book, the academic literature, the older Expert Political Judgment book, and an interview with a superforecaster.

    …Tetlock describes how superforecasters go about making their predictions. Here is an attempt at a summary:

    1. Sometimes a question can be answered more rigorously if it is first “Fermi-ized”, ie. broken down into sub-questions for which more rigorous methods can be applied.
    2. Next, use the outside view on the sub-questions (and/​​​​​or the main question, if possible). You may then adjust your estimates using other considerations (‘the inside view’), but do this cautiously.
    3. Seek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself.
    4. Repeat steps 1–3 until you hit ⁠.
    5. Your final prediction should be based on an aggregation of various models, reference classes, other experts, etc.
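
    The aggregation in step 5 (and the “aggregation algorithms” determinant in Figure 0) can be sketched with one well-known recipe studied by GJP researchers: average the individual forecasts in log-odds space, then “extremize” the pooled forecast away from 0.5 to compensate for each forecaster holding only part of the available evidence. (The exponent below is illustrative, not a fitted GJP parameter.)

```python
import math

def extremized_mean(probs, a=2.5):
    """Average forecasts in log-odds space, then push the pooled forecast
    away from 0.5 by exponent a (a > 1 extremizes; a = 1 is a plain pool)."""
    logodds = [math.log(p / (1 - p)) for p in probs]
    pooled = sum(logodds) / len(logodds)
    return 1 / (1 + math.exp(-a * pooled))

forecasts = [0.65, 0.7, 0.6, 0.75]
print(round(extremized_mean(forecasts, a=1.0), 3))  # plain log-odds pool
print(round(extremized_mean(forecasts, a=2.5), 3))  # extremized: pushed toward 1
```

    When several forecasters independently lean the same way, the extremized pool is substantially more confident than any individual, which is the intuition behind reweighting aggregation beating simple averaging.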

  123. , Sharon Begley (STAT) (2019-06-25; sociology / preference-falsification):

    In the 30 years that biomedical researchers have worked determinedly to find a cure for Alzheimer’s disease, their counterparts have developed drugs that helped cut deaths from cardiovascular disease by more than half, and cancer drugs able to eliminate tumors that had been incurable. But for Alzheimer’s, not only is there no cure, there is not even a disease-slowing treatment.

    …In more than two dozen interviews, scientists whose ideas fell outside the dogma recounted how, for decades, believers in the dominant hypothesis suppressed research on alternative ideas: They influenced what studies got published in top journals, which scientists got funded, who got tenure, and who got speaking slots at reputation-buffing scientific conferences. The scientists described the frustrating, even career-ending, obstacles that they confronted in pursuing their research. A top journal told one that it would not publish her paper because others hadn’t. Another got whispered advice to at least pretend that the research for which she was seeking funding was related to the leading idea—that a protein fragment called beta-amyloid accumulates in the brain, creating neuron-killing clumps that are both the cause of Alzheimer’s and the key to treating it. Others could not get speaking slots at important meetings, a key showcase for research results. Several who tried to start companies to develop Alzheimer’s cures were told again and again by venture capital firms and major biopharma companies that they would back only an amyloid approach.

    …For all her regrets about the amyloid hegemony, Neve is an unlikely critic: She co-led the 1987 discovery of mutations in a gene called APP that increases amyloid levels and causes Alzheimer’s in middle age, supporting the then-emerging orthodoxy. Yet she believes that one reason Alzheimer’s remains incurable and untreatable is that the amyloid camp “dominated the field”, she said. Its followers were influential “to the extent that they persuaded the National Institute of Neurological Disorders and Stroke [part of the National Institutes of Health] that it was a waste of money to fund any Alzheimer’s-related grants that didn’t center around amyloid.” To be sure, NIH did fund some Alzheimer’s research that did not focus on amyloid. In a sea of amyloid-focused grants, there are tiny islands of research on oxidative stress, neuroinflammation, and, especially, a protein called tau. But Neve’s NINDS program officer, she said, “told me that I should at least collaborate with the amyloid people or I wouldn’t get any more NINDS grants.” (She hoped to study how neurons die.) A decade after her APP discovery, a disillusioned Neve left Alzheimer’s research, building a distinguished career in gene editing. Today, she said, she is “sick about the millions of people who have needlessly died from” the disease.

    Dr. Daniel Alkon, a longtime NIH neuroscientist who started a company to develop an Alzheimer’s treatment, is even more emphatic: “If it weren’t for the near-total dominance of the idea that amyloid is the only appropriate drug target”, he said, “we would be 10 or 15 years ahead of where we are now.”

    Making it worse is that the empirical support for the amyloid hypothesis has always been shaky. There were numerous red flags over the decades that targeting amyloid alone might not slow or reverse Alzheimer’s. “Even at the time the amyloid hypothesis emerged, 30 years ago, there was concern about putting all our eggs into one basket, especially the idea that ridding the brain of amyloid would lead to a successful treatment”, said neurobiologist Susan Fitzpatrick, president of the James S. McDonnell Foundation. But research pointing out shortcomings of the hypothesis was relegated to second-tier journals, at best, a signal to other scientists and drug companies that the criticisms needn’t be taken too seriously. Zaven Khachaturian spent years at NIH overseeing its early Alzheimer’s funding. Amyloid partisans, he said, “came to permeate drug companies, journals, and NIH study sections”, the groups of mostly outside academics who decide what research NIH should fund. “Things shifted from a scientific inquiry into an almost religious belief system, where people stopped being skeptical or even questioning.”

    …“You had a whole industry going after amyloid, hundreds of clinical trials targeting it in different ways”, Alkon said. Despite success in millions of mice, “none of it worked in patients.”

    Scientists who raised doubts about the amyloid model suspected why. Amyloid deposits, they thought, are a response to the true cause of Alzheimer’s and therefore a marker of the disease—again, the gravestones of neurons and synapses, not the killers. The evidence? For one thing, although the brains of elderly Alzheimer’s patients had amyloid plaques, so did the brains of people the same age who died with no signs of dementia, a pathologist discovered in 1991. Why didn’t amyloid rob them of their memories? For another, mice engineered with human genes for early Alzheimer’s developed both amyloid plaques and dementia, but there was no proof that the much more common, late-onset form of Alzheimer’s worked the same way. And yes, amyloid plaques destroy synapses (the basis of memory and every other brain function) in mouse brains, but there is no correlation between the degree of cognitive impairment in humans and the amyloid burden in the memory-forming hippocampus or the higher-thought frontal cortex. “There were so many clues”, said neuroscientist Nikolaos Robakis of the Icahn School of Medicine at Mount Sinai, who also discovered a mutation for early-onset Alzheimer’s. “Somehow the field believed all the studies supporting it, but not those raising doubts, which were very strong. The many weaknesses in the theory were ignored.”

  124. 2019-thomas.pdf: ⁠, Kelsey R. Thomas, Katherine J. Bangen, Alexandra J. Weigand, Emily C. Edmonds, Christina G. Wong, Shanna Cooper, Lisa Delano-Wood, Mark W. Bondi, for the Alzheimer's Disease Neuroimaging Initiative (2019-12-30; biology):

    Objective: To determine the temporal sequence of objectively defined subtle cognitive difficulties (Obj-SCD) in relation to amyloidosis and neurodegeneration, the current study examined the trajectories of amyloid PET and medial temporal neurodegeneration in participants with Obj-SCD relative to cognitively normal (CN) and mild cognitive impairment (MCI) groups.

    Method: A total of 747 Alzheimer’s Disease Neuroimaging Initiative participants (305 CN, 153 Obj-SCD, 289 MCI) underwent neuropsychological testing and serial amyloid PET and structural MRI examinations. We examined the 4-year rate of change in cortical 18F-florbetapir PET, entorhinal cortex thickness, and hippocampal volume in those classified as Obj-SCD and MCI relative to CN.

    Result: Amyloid accumulation was faster in the Obj-SCD group than in the CN group; the MCI and CN groups did not statistically-significantly differ from each other. The Obj-SCD and MCI groups both demonstrated faster entorhinal cortical thinning relative to the CN group; only the MCI group exhibited faster hippocampal atrophy than CN participants.

    Conclusion: Relative to CN participants, Obj-SCD was associated with faster amyloid accumulation and selective vulnerability of entorhinal cortical thinning, whereas MCI was associated with faster entorhinal and hippocampal atrophy. Findings suggest that Obj-SCD, operationally defined using sensitive neuropsychological measures, can be identified prior to or during the preclinical stage of amyloid deposition. Further, consistent with the Braak neurofibrillary staging scheme, Obj-SCD status may track with early entorhinal pathologic changes, whereas MCI may track with more widespread medial temporal change. Thus, Obj-SCD may be a sensitive and noninvasive predictor of encroaching amyloidosis and neurodegeneration, prior to frank cognitive impairment associated with MCI.

  125. ⁠, Richard Wiseman, Caroline Watt, Diana Kornbrot (2019-01-16):

    The recent ‘replication crisis’ in psychology has focused attention on ways of increasing methodological rigor within the behavioral sciences. Part of this work has involved promoting ‘Registered Reports’, wherein journals peer review papers prior to data collection and publication. Although this approach is usually seen as a relatively recent development, we note that a prototype of this publishing model was initiated in the mid-1970s by parapsychologist Martin Johnson in the European Journal of Parapsychology (EJP). A retrospective and observational comparison of Registered and non-Registered Reports published in the EJP during a seventeen-year period provides circumstantial evidence to suggest that the approach helped to reduce questionable research practices. This paper aims both to bring Johnson’s pioneering work to a wider audience, and to investigate the positive role that Registered Reports may play in helping to promote higher methodological and statistical standards.

    …The final dataset contained 60 papers: 25 RRs and 35 non-RRs. The RRs described 31 experiments that tested 131 hypotheses, and the non-RRs described 60 experiments that tested 232 hypotheses.

    28.4% of the statistical tests reported in non-RRs were statistically-significant (66⁄232: 95% CI [21.5%–36.4%]); compared to 8.4% of those in the RRs (11⁄131: 95% CI [4.0%–16.8%]). A simple 2 × 2 contingency analysis showed that this difference is highly statistically-significant (Fisher’s exact test: p < 0.0005, Pearson chi-square = 20.1, Cohen’s d = 0.48).
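    The reported 2 × 2 contingency analysis can be reproduced from the counts above (66⁄232 statistically-significant tests in non-RRs vs 11⁄131 in RRs); a minimal sketch using `scipy` (assumed dependency; the Pearson chi-square without continuity correction matches the reported 20.1):

    ```python
    from scipy.stats import chi2_contingency, fisher_exact

    # Rows: non-RRs, RRs; columns: statistically-significant, not
    table = [[66, 232 - 66],
             [11, 131 - 11]]

    # Pearson chi-square without Yates continuity correction
    chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
    print(round(chi2, 1))  # → 20.1, as reported

    # Fisher's exact test (two-sided)
    odds_ratio, p_fisher = fisher_exact(table)
    print(p_fisher < 0.0005)  # → True, consistent with p < 0.0005
    ```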

    …Parapsychologists investigate the possible existence of phenomena that, for many, have a low a priori likelihood of being genuine (see, eg., Wagenmakers et al 2011). This has often resulted in their work being subjected to a considerable amount of critical attention (from both within and outwith the field) that has led to them pioneering several methodological advances prior to their use within mainstream psychology, including the development of randomisation in experimental design (Hacking, 1988), the use of blinds (Kaptchuk, 1998), explorations into randomisation and statistical inference (Fisher, 1924), advances in replication issues (Rosenthal, 1986), the need for pre-specification in meta-analysis (Akers, 1985; Milton, 1999; Kennedy, 2004), and the creation of a formal study registry (Watt, 2012; Watt & Kennedy, 2015). Johnson’s work on RRs provides another striking illustration of this principle at work.

  126. ⁠, Richard Sever, Ted Roeder, Samantha Hindle, Linda Sussman, Kevin-John Black, Janet Argentine, Wayne Manos, John R. Inglis (2019-11-06):

    The traditional publication process delays dissemination of new research, often by months, sometimes by years. Preprint servers decouple dissemination of research papers from their evaluation and certification by journals, allowing researchers to share work immediately, receive feedback from a much larger audience, and provide evidence of productivity long before formal publication. Launched in 2013 as a non-profit community service, the bioRxiv server has brought preprint practice to the life sciences and recently posted its 64,000th manuscript. The server now receives more than four million views per month and hosts papers spanning all areas of biology. Initially dominated by evolutionary biology, genetics/​​​​genomics and computational biology, bioRxiv has been increasingly populated by papers in neuroscience, cell and developmental biology, and many other fields. Changes in journal and funder policies that encourage preprint posting have helped drive adoption, as has the development of bioRxiv technologies that allow authors to transfer papers easily between the server and journals. A bioRxiv user survey found that 42% of authors post their preprints prior to journal submission whereas 37% post concurrently with journal submission. Authors are motivated by a desire to share work early; they value the feedback they receive, and very rarely experience any negative consequences of preprint posting. Rapid dissemination via bioRxiv is also encouraging new initiatives that experiment with the peer review process and the development of novel approaches to literature filtering and assessment.

  127. 2004-tushnet.pdf: ⁠, Mark Tushnet (2004; sociology):

    For the past several years I have been noticing a phenomenon that seems to me new in my lifetime as a scholar of constitutional law. I call the phenomenon constitutional hardball. This Essay develops the idea that there is such a practice, that there is a sense in which it is new, and that its emergence (or re-emergence) is interesting because it signals that political actors understand that they are in a position to put in place a new set of deep institutional arrangements of a sort I call a “constitutional order”. A shorthand sketch of constitutional hardball is this: it consists of political claims and practices-legislative and executive initiatives-that are without much question within the bounds of existing constitutional doctrine and practice but that are nonetheless in some tension with existing pre-constitutional understandings. It is hardball because its practitioners see themselves as playing for keeps in a special kind of way; they believe the stakes of the political controversy their actions provoke are quite high, and that their defeat and their opponents’ victory would be a serious, perhaps permanent setback to the political positions they hold.

  128. ⁠, Paul Fussell (1989-08):

    [Hard-hitting longform piece on WWII about demystifying the ‘good war’ and bringing home the chaos, folly, incompetence, suffering, death, destruction visited on soldiers, and propaganda or silence which covered it all up.]

    On its fiftieth anniversary, how should we think of the Second World War? What is its contemporary meaning? One possible meaning, reflected in every line of what follows, is obscured by that oddly minimizing term “conventional war.” With our fears focused on nuclear destruction, we tend to be less mindful of just what conventional war between modern industrial powers is like. This article describes such war, in a stark, unromantic manner.

  129. ⁠, William Broyles, Jr. (1984-11-01):

    “What people can’t understand”, Hiers said, gently picking up each tiny rabbit and placing it in the nest, “is how much fun Vietnam was. I loved it. I loved it, and I can’t tell anybody.” Hiers loved war. And as I drove back from Vermont in a blizzard, my children asleep in the back of the car, I had to admit that for all these years I also had loved it, and more than I knew. I hated war, too. Ask me, ask any man who has been to war about his experience, and chances are we’ll say we don’t want to talk about it—implying that we hated it so much, it was so terrible, that we would rather leave it buried. And it is no mystery why men hate war. War is ugly, horrible, evil, and it is reasonable for men to hate all that. But I believe that most men who have been to war would have to admit, if they are honest, that somewhere inside themselves they loved it too, loved it as much as anything that has happened to them before or since. And how do you explain that to your wife, your children, your parents, or your friends?

    …I spent most of my combat tour in Vietnam trudging through its jungles and rice paddies without incident, but I have seen enough of war to know that I never want to fight again, and that I would do everything in my power to keep my son from fighting. Then why, at the oddest times—when I am in a meeting or running errands, or on beautiful summer evenings, with the light fading and children playing around me—do my thoughts turn back fifteen years to a war I didn’t believe in and never wanted to fight? Why do I miss it?

    I miss it because I loved it, loved it in strange and troubling ways. When I talk about loving war I don’t mean the romantic notion of war that once mesmerized generations raised on Walter Scott. What little was left of that was ground into the mud at Verdun and Passchendaele: honor and glory do not survive the machine gun. And it’s not the mindless bliss of martyrdom that sends Iranian teenagers armed with sticks against Iraqi tanks. Nor do I mean the sort of hysteria that can grip a whole country, the way during the Falklands war the English press inflamed the lust that lurks beneath the cool exterior of Britain. That is vicarious war, the thrill of participation without risk, the lust of the audience for blood. It is easily fanned, that lust; even the invasion of a tiny island like Grenada can do it. Like all lust, for as long as it lasts it dominates everything else; a nation’s other problems are seared away, a phenomenon exploited by kings, dictators, and presidents since civilization began.

  130. ⁠, Scott Alexander (2019-06-04):

    [Book review of an anthropologist text arguing for imitation and extensive cultural evolution as the driving force of human civilization, with imitation of other humans being the unique human cognitive skill that gave us the edge over other primates and all animals, with any kind of raw intelligence being strictly minor. Further, this extensive multi-level group selectionism implies that most knowledge is embodied in apparently-arbitrary cultural practices, such as traditional food preparation or divination or hunting rituals, which are effective despite lacking any observable rationale and the actual reasons for their efficacy are inaccessible to mere reason (except possibly by a far more advanced science).]

  131. 2017-mercier.pdf: ⁠, Hugo Mercier (2017-05-18; psychology⁠, sociology⁠, philosophy⁠, advertising):

    A long tradition of scholarship, from ancient Greece to Marxism or some contemporary social psychology, portrays humans as strongly gullible—wont to accept harmful messages by being unduly deferent. However, if humans are reasonably well adapted, they should not be strongly gullible: they should be vigilant toward communicated information. Evidence from experimental psychology reveals that humans are equipped with well-functioning mechanisms of epistemic vigilance. They check the plausibility of messages against their background beliefs, calibrate their trust as a function of the source’s competence and benevolence, and critically evaluate arguments offered to them. Even if humans are equipped with well-functioning mechanisms of epistemic vigilance, an adaptive lag might render them gullible in the face of new challenges, from clever marketing to omnipresent propaganda. I review evidence from different cultural domains often taken as proof of strong gullibility: religion, demagoguery, propaganda, political campaigns, advertising, erroneous medical beliefs, and rumors. Converging evidence reveals that communication is much less influential than often believed—that religious proselytizing, propaganda, advertising, and so forth are generally not very effective at changing people’s minds. Beliefs that lead to costly behavior are even less likely to be accepted. Finally, it is also argued that most cases of acceptance of misguided communicated information do not stem from undue deference, but from a fit between the communicated information and the audience’s preexisting beliefs.

    [Keywords: epistemic vigilance, gullibility, trust]

  132. ⁠, Charles Homans (2017-06-14):

    In her first season, the Slava caught just 386 whales. But by the fifth—before which the fleet’s crew wrote a letter to Stalin pledging to bring home more than 500 tons of whale oil—the Slava’s annual catch was approaching 2,000. The next year it was 3,000…The Soviet fleets killed almost 13,000 humpback whales in the 1959–60 season and nearly as many the next, when the Slava and Sovetskaya Ukraina were joined by a third factory ship, the Yuriy Dolgorukiy. It was grueling work: One former whaler, writing years later in a Moscow newspaper, claimed that five or six Soviet crewmen died on the Southern Hemisphere expeditions each year, and that a comparable number went mad.

    …“In five years of intensive whaling by first one, then two, three, and finally four fleets”, he wrote, the populations of humpback whales off the coasts of Australia and New Zealand “were so reduced in abundance that we can now say that they are completely destroyed!”…The Soviet Union was a party to the International Convention for the Regulation of Whaling, a 1946 treaty that limited countries to a set quota of whales each year. By the time a ban on commercial whaling went into effect, in 1986, the Soviets had reported killing a total of 2,710 humpback whales in the Southern Hemisphere. In fact, the country’s fleets had killed nearly 18 times that many, along with thousands of unreported whales of other species. It had been an elaborate and audacious deception: Soviet captains had disguised ships, tampered with scientific data, and misled international authorities for decades. In the estimation of the marine biologists Yulia Ivashchenko, Phillip Clapham, and Robert Brownell, it was “arguably one of the greatest environmental crimes of the 20th century.”

    …It was also a perplexing one…Unlike Norway and Japan, the other major whaling nations of the era, the Soviet Union had little real demand for whale products. Once the blubber was cut away for conversion into oil, the rest of the animal, as often as not, was left in the sea to rot or was thrown into a furnace and reduced to bone meal—a low-value material used for agricultural fertilizer, made from the few animal byproducts that slaughterhouses and fish canneries can’t put to more profitable use. “It was a good product”, Dmitri Tormosov, a scientist who worked on the Soviet fleets, wryly recalls, “but maybe not so important as to support a whole whaling industry.” This was the riddle the Soviet ships left in their wake: Why did a country with so little use for whales kill so many of them?

  133. 2010-dobelli.pdf: ⁠, Rolf Dobelli (2010; culture):

    This article is the antidote to news. It is long, and you probably won’t be able to skim it. Thanks to heavy news consumption, many people have lost the reading habit and struggle to absorb more than four pages straight. This article will show you how to get out of this trap—if you are not already too deeply in it.

  134. ⁠, Xavier Marquez (2013-11-21):

    A footnote on Inga Clendinnen’s extraordinary Aztecs: An Interpretation. If there’s a better book on the Aztecs than this, I want to read it…Consider this passage Clendinnen quotes from the Florentine Codex (one of the main sources for pre-conquest Mexica thought and culture), coming after the speech with which the Mexica greeted a new tlatoani (ruler; literally, the “Great Speaker”) and exhorted him to good behaviour:

    Those early and anxious exhortations to benevolent behaviour were necessary, ‘for it was said when we replaced one, when we selected someone…he was already our lord, our executioner and our enemy.’ (p. 80; the quote is from Book 6, chapter 10, in Dibble and Anderson’s translation from the Nahuatl).

    It’s an arresting thought: “he was already our lord, our executioner, and our enemy.” (Clendinnen comments on the “desolate cadence” of these words). The ruler is not understood by the Mexica as normally benevolent though potentially dangerous; he is the enemy, and yet as the enemy he is indispensable. There is something profoundly alien in this thought, with its unsettling understanding of “legitimacy”, something I do not find anywhere in the classical Western tradition of political thought…But Aztec cosmology, it turns out, goes much further than this. The ruler embodies or channels Tezcatlipoca, who is often vaguely characterized as a god of “fate and war” (and normally downplayed in favor of Huizilopochtli, eg., in the current Te Papa exhibit on the Aztecs here in Wellington, who is more understandable as a straightforward god of war, and is viewed as the “patron” of the Tenochtitlan Mexica). But Tezcatlipoca is the more important deity: he is described at the beginning of Book 6 of the Florentine Codex as “the principal god” of the Mexica. And he is not a merciful or benevolent god; on the contrary, he represents a kind of arbitrary malice that is visited on all alike, and is variously addressed as the Enemy on Both Sides, the Mocker, He Whose Slaves We Are, and the Lord of the Smoking Mirror (for the smoky reflections in dark obsidian mirrors used by the shamans, “obscure intimations of what was to come endlessly dissolving back into obscurity”, as Clendinnen puts it [p. 148])…Clendinnen notes many other examples of the “shared and steady vision common to the different social groupings in Tenochtitlan” concerning “the casual, inventive, tireless malice of the only sacred force concerned with the fates of men”, p. 148

    …When reading these passages, I cannot help but think: how could the Mexica be reconciled to their social and natural worlds with such an arbitrary, even malignant conception of divine and political authority? How can a ruler or a deity who is simultaneously seen as an enemy inspire support and commitment? As Clendinnen puts it, the puzzle is that “submission to a power which is caprice embodied is a taxing enterprise, yet it is that which the most devoted Mexica appear to have striven to achieve” (p. 76). Yet she hits on the right answer, I think, when she interprets these statements in the context of the rituals of Mexica society. In particular, she shows the Aztec state as an extraordinary example of what Clifford Geertz, referring to pre-colonial Bali, once called the “theatre state.”

    I mentioned earlier that human sacrifice was one of the central practices of Mexica society. But this does not quite capture what was going on. Human sacrifice was the most intense part of the pervasive ritual practices that structured Mexica society, but it was never merely sacrifice. Sacrifice was the culminating act of a set of amazing spectacles, enormously powerful intensifiers of emotion that made use of the entire register of Aztec symbols and pharmacopeia, and drew on the full resources of the empire.

  135. ⁠, William Buckner (Traditions of Conflict) (2019-02-23):

    During an undetermined time period preceding European contact, a gargantuan, humanoid spirit-God conquered parts of the Sepik region of Papua New Guinea. With a voracious appetite for pork and yams—and occasional demands of ritual murder—Nggwal was the tutelary spirit for a number of Sepik horticulturalist societies…what specific demands does Nggwal make? The first is for food. Nggwal must be fed, and while it is the men who are his most devoted servants and the keepers of his great secrets, it is often the responsibility of the women to provide for his subsistence, “Women are well aware of Nggwal’s hunger, for to them falls much of the gardening, hauling and cooking needed to feed him”, Tuzin writes. But how does Nggwal consume the food offered to him? “Needless to say, it is not the Tambaran [Nggwal himself] which eats the pork but the men themselves, in secret conclaves”, and Tuzin continues describing the “feasts among Tambaran Cult members in secret seclusion, during which non-members are under the impression that the food is being given directly to the spirits.”

    …Despite the playful, Halloween-like aspects of this practice, the hangahiwa wandafunei [violent spirits] were a much more serious matter. 10% of the male masks portrayed hangahiwa wandafunei, and they were associated with the commission of ritually sanctioned murder. These murders committed by the violent spirits were always attributed to Nggwal.

    …Traditionally, hangahiwa wandafunei sought out victims who were alone in their garden or on the forest paths at dusk. Pigs, dogs and chickens were also fair game. After spearing the victim, the offending hangamu’w would escape back to its spirit house. The wearer would replace it with the other costumes and emerge without fear of detection—in time to join the general alarm aroused by the discovery of the body.

    Sometimes the wearer would not put the mask away, however, and instead he would take it to a nearby enemy village, where a relative or other acquaintance of his would take the mask and keep it in their own community’s spirit house, until it was time to be used and transferred once more. Through these ritual killings and the passage of costumes between communities, Nggwal impels cooperation between men of even hostile villages, and unites them in cult secrecy.

    Nggwal, who travels in structures of fiber and bone atop rivers of blood.


  137. 2019-horowitz.pdf: ⁠, Mark Horowitz, William Yaworsky, Kenneth Kickham (2019-10; sociology⁠, sociology  /​ ​​ ​preference-falsification):

    In recent decades the field of anthropology has been characterized as sharply divided between pro-science and anti-science factions. The aim of this study is to empirically evaluate that characterization. We survey anthropologists in graduate programs in the United States regarding their views of science and advocacy, moral and epistemic relativism, and the merits of evolutionary biological explanations. We examine anthropologists’ views in concert with their varying appraisals of major controversies in the discipline (⁠, ⁠, and ). We find that disciplinary specialization and especially gender and political orientation are statistically-significant predictors of anthropologists’ views. We interpret our findings through the lens of an intuitionist social psychology that helps explain the dynamics of such controversies as well as ongoing ideological divisions in the field.


  139. ⁠, Caroline Fraser (2019-08-06):

    [‘Caroline Fraser, herself raised in a Scientist household, traces the growth of the Church from a small, eccentric sect into a politically powerful and socially respectable religion. She takes us into the closed world of Eddy’s followers, who reject modern medicine even at the cost of their children’s lives. And she reveals just how Christian Science managed to gain extraordinary legal and congressional approval for its dubious practices.’

    Memoir of a former Christian Scientist, a Christian cult which believes all illness is spiritual and that medicine is useless/​​​​sinful and so whose adherents refuse medical treatment, describing her father’s slow decay from injuries and eventual death from a spreading gangrene that could have been treated. Author describes how (akin to Scientology) Christian Science is in decay itself, with rapidly declining numbers despite healthy financials and real estate assets from better days. While Christian Science may soon shrivel away, it leaves a toxic and literally infectious legacy: to profit off offering ‘treatment’ and enable its members to avoid real medical treatment for their children and themselves, Christian Science spearheaded the legislation of ‘religious exemptions’ to vaccines, empowering the current anti-vax movement, which may kill more children than Christian Science ever did.]

  140. 2019-wilmot.pdf: ⁠, Michael P. Wilmot, Deniz S. Ones (2019-11-12; conscientiousness):

    Significance: Conscientiousness (C) is the most potent noncognitive predictor of occupational performance. However, questions remain about how C relates to a plethora of occupational variables, what its defining characteristics and functions are in occupational settings, and whether its performance relation differs across occupations. To answer these questions, we quantitatively review 92 meta-analyses reporting relations to 175 occupational variables. Across variables, results reveal a substantial mean effect of ρM = 0.20.

    We then use results to synthesize 10 themes that characterize C in occupational settings. Finally, we discover that performance effects of C are weaker in high-complexity versus low-complexity to moderate-complexity occupations. Thus, for optimal occupational performance, we encourage decision makers to match C’s goal-directed motivation and behavioral restraint to more predictable environments.

    Evidence from more than 100 y of research indicates that conscientiousness (C) is the most potent noncognitive construct for occupational performance. However, questions remain about the magnitudes of its effect sizes across occupational variables, its defining characteristics and functions in occupational settings, and potential moderators of its performance relation. Drawing on 92 unique meta-analyses reporting effects for 175 distinct variables, which represent n > 1.1 million participants across k > 2,500 studies, we present the most comprehensive, quantitative review and synthesis of the occupational effects of C available in the literature. Results show C has effects in a desirable direction for 98% of variables and a grand mean of ρM = 0.20 (SD = 0.13), indicative of a potent, pervasive influence across occupational variables. Using the top 33% of effect sizes (ρ≥0.24), we synthesize 10 characteristic themes of C’s occupational functioning: (1) motivation for goal-directed performance, (2) preference for more predictable environments, (3) interpersonal responsibility for shared goals, (4) commitment, (5) perseverance, (6) self-regulatory restraint to avoid counterproductivity, and (7) proficient performance—especially for (8) conventional goals, (9) requiring persistence. Finally, we examine C’s relation to performance across 8 occupations. Results indicate that occupational complexity moderates this relation. That is, (10) high occupational complexity versus low-to-moderate occupational complexity attenuates the performance effect of C. Altogether, results suggest that goal-directed performance is fundamental to C and that motivational engagement, behavioral restraint, and environmental predictability influence its optimal occupational expression. We conclude by discussing applied and policy implications of our findings.

    [Keywords: conscientiousness, personality, meta-analysis, second-order meta-analysis, occupations]

  141. 2019-soto.pdf: ⁠, Christopher J. Soto (2019-01-01; psychology):

    The personality traits have been linked to dozens of life outcomes. However, metascientific research has raised questions about the replicability of behavioral science. The Life Outcomes of Personality Replication (LOOPR) Project was therefore conducted to estimate the replicability of the personality-outcome literature. Specifically, I conducted preregistered, high-powered (median n = 1,504) replications of 78 previously published trait–outcome associations. Overall, 87% of the replication attempts were statistically-significant in the expected direction. The replication effects were typically 77% as strong as the corresponding original effects, which represents a significant decline in effect size. The replicability of individual effects was predicted by the effect size and design of the original study, as well as the sample size and statistical power of the replication. These results indicate that the personality-outcome literature provides a reasonably accurate map of trait–outcome associations but also that it stands to benefit from efforts to improve replicability.

  142. 2011-kampfe.pdf: ⁠, Juliane Kämpfe, Peter Sedlmeier, Frank Renkewitz (2010-11-08; music-distraction):

    Background music has been found to have beneficial, detrimental, or no effect on a variety of behavioral and psychological outcome measures.

    This article reports a meta-analysis that attempts to summarize the impact of background music. A global analysis shows a null effect, but a detailed examination of the studies that allow the calculation of effect sizes reveals that this null effect is most probably due to averaging out specific effects. In our analysis, the probability of detecting such specific effects was not very high as a result of the scarcity of studies that allowed the calculation of respective effect sizes.

    Nonetheless, we could identify several such cases: a comparison of studies that examined background music compared to no music indicates that background music disturbs the reading process, has some small detrimental effects on memory, but has a positive impact on emotional reactions and improves achievements in sports. A comparison of different types of background music reveals that the tempo of the music influences the tempo of activities that are performed while being exposed to background music.

    It is suggested that effort should be made to develop more specific theories about the impact of background music and to increase the methodological quality of relevant studies.

    [Keywords: background music, effects of music, healthy adults, meta-analysis, methodological problems]

  143. ⁠, José Luis Ricón (2019-07-28):

    Is Bloom’s “Two Sigma” phenomenon real? If so, what do we do about it?

    Educational psychologist Benjamin Bloom found that one-on-one tutoring using mastery learning led to a two sigma(!) improvement in student performance. The results were replicated. He asks in his paper that identified the “2 Sigma Problem”: how do we achieve these results in conditions more practical (ie., more scalable) than one-to-one tutoring?

    In a related vein, this large-scale meta-analysis shows large (>0.5 Cohen’s d) effects from direct instruction using mastery learning. “Yet, despite the very large body of research supporting its effectiveness, DI has not been widely embraced or implemented.”

    • The literatures examined here are full of small sample, non-randomized trials, and highly heterogeneous results.
    • Tutoring in general, most likely, does not reach the 2-sigma level that Bloom suggested. Likewise, it’s unlikely that mastery learning provides a 1-sigma improvement.
      • But high quality tutors, and high quality software are likely able to reach a 2-sigma improvement and beyond.
    • All the methods (mastery learning, direct instruction, tutoring, software tutoring, deliberate practice, and spaced repetition) studied in this essay are found to work to various degrees, outlined below.

    • This essay covers many kinds of subjects being taught, and likewise many groups (special education vs regular schools, college vs K-12). The effect sizes reported here are averages that serve as general guidance.
    • The methods studied tend to be more effective for lower skilled students relative to the rest.
    • The methods studied work at all levels of education, with the exception of direct instruction: There is no evidence to judge its effectiveness at the college level.
    • The methods work substantially better when clear objectives and facts to be learned are set. There is little evidence of learning transfer: Practicing or studying subject X does not much improve performance outside of X.
    • There is some suggestive evidence that the underlying reasons these methods work are increased and repeated exposure to the material, the testing effect, and fine-grained feedback on performance in the case of tutoring.
    • Long-term studies tend to find evidence of a fade-out effect: effect sizes decrease over time. This is likely due to the skills learned not being practiced.

    Bloom noted that mastery learning had an effect size of around 1 (one sigma), while tutoring leads to d = 2. This is mostly an outlier case.

    Nonetheless, Bloom was on to something: Tutoring and mastery learning do have a degree of experimental support, and fortunately it seems that carefully designed software systems can completely replace the instructional side of traditional teaching, achieving better results, on par with one-to-one tutoring. However, designing them is a hard endeavour, and teachers provide a motivational component that may not be as easily replicated purely by software.

    Overall, it’s good news that the effects are present for younger and older students, and across subjects, but the effect sizes of tutoring, mastery learning, or DI are not as good as they would seem from Bloom’s paper. That said, it is true that tutoring does have large effect sizes, and that properly designed software does as well. The DARPA case study shows what is possible with software tutoring; in that case the effect sizes went even beyond Bloom’s paper.
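
    The essay’s units can be made concrete: “two sigma” means a standardized mean difference (Cohen’s d) of 2, ie. the average tutored student scores about 2 pooled standard deviations above the average control student. A minimal sketch, with made-up illustrative scores rather than Bloom’s data:

    ```python
    from statistics import mean, stdev

    def cohens_d(group1, group2):
        """Cohen's d: standardized mean difference, using the pooled SD."""
        n1, n2 = len(group1), len(group2)
        s1, s2 = stdev(group1), stdev(group2)
        # Pool each group's variance, weighted by its degrees of freedom.
        pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
        return (mean(group1) - mean(group2)) / pooled_sd

    # Hypothetical scores: the tutored group averages 2 pooled SDs higher,
    # ie. Bloom's claimed d = 2.
    tutored = [110, 90, 110, 90, 100]
    control = [90, 70, 90, 70, 80]
    print(cohens_d(tutored, control))  # → 2.0
    ```

    On this scale, the meta-analytic estimates above (>0.5 for DI, ~1 for mastery learning) are still large by the standards of education research, just not Bloom-sized.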

  144. 2019-shewach.pdf: ⁠, Oren R. Shewach, Paul R. Sackett, Sander Quint (2019; psychology):

    The stereotype threat literature primarily comprises lab studies, many of which involve features that would not be present in high-stakes testing settings. We meta-analyze the effect of stereotype threat on cognitive ability tests, focusing on both laboratory and operational studies with features likely to be present in high stakes settings. First, we examine the features of cognitive ability test metric, stereotype threat cue activation strength, and type of non-threat control group, and conduct a focal analysis removing conditions that would not be present in high stakes settings. We also take into account a previously unrecognized methodological error in how data are analyzed in studies that control for scores on a prior cognitive ability test, which resulted in a biased estimate of stereotype threat. The focal sample, restricting the database to samples utilizing operational testing-relevant conditions, displayed a threat effect of d = −0.14 (k = 45, N = 3,532, SDδ = 0.31). Second, we present a comprehensive meta-analysis of stereotype threat. Third, we examine a small subset of studies in operational test settings and studies utilizing motivational incentives, which yielded d-values ranging from 0.00 to −0.14. Fourth, the meta-analytic database is subjected to tests of publication bias, finding nontrivial evidence for publication bias. Overall, results indicate that the size of the stereotype threat effect that can be experienced on tests of cognitive ability in operational scenarios such as college admissions tests and employment testing may range from negligible to small.

  145. 2019-letexier.pdf: ⁠, Thibault Le Texier (2019-08-05; psychology):

    The Stanford Prison Experiment (SPE) is one of psychology’s most famous studies. It has been criticized on many grounds, and yet a majority of textbook authors have ignored these criticisms in their discussions of the SPE, thereby misleading both students and the general public about the study’s questionable scientific validity.

    Data collected from a thorough investigation of the SPE archives and interviews with 15 of the participants in the experiment further question the study’s scientific merit. These data are not only supportive of previous criticisms of the SPE, such as the presence of demand characteristics, but provide new criticisms of the SPE based on heretofore unknown information. These new criticisms include the biased and incomplete collection of data, the extent to which the SPE drew on a prison experiment devised and conducted by students in one of Zimbardo’s classes 3 months earlier, the fact that the guards received precise instructions regarding the treatment of the prisoners, the fact that the guards were not told they were subjects, and the fact that participants were almost never completely immersed in the situation.

    Possible explanations of the inaccurate textbook portrayal and general misperception of the SPE’s scientific validity over the past 5 decades, in spite of its flaws and shortcomings, are discussed.

    [Keywords: Stanford Prison Experiment, Zimbardo, epistemology]

  146. ⁠, Susannah Cahalan (2019-11-02):

    [Summary of investigation into David Rosenhan: like the Robbers Cave or Stanford Prison Experiment, his famous fake-insane-patients experiment cannot be verified, and many troubling anomalies have come to light. Cahalan is unable to find almost any of the supposed participants; Rosenhan hid his own participation & his own medical records show he fabricated details of his case; he threw out participant data that didn’t match his narrative; reported numbers are inconsistent; Rosenhan abandoned a lucrative book deal about it and avoided further psychiatric research; and he showed some character traits of a fabulist eager to please.]

  147. ⁠, David Shariatmadari (2018-04-16):

    In 50s Middle Grove, things didn’t go according to plan either, though the surprise was of a different nature. Despite his pretence of leaving the 11-year-olds to their own devices, Sherif and his research staff, posing as camp counsellors and caretakers, interfered to engineer the result they wanted. He believed he could make the two groups, called the Pythons and the Panthers, sworn enemies via a series of well-timed “frustration exercises”. These included his assistants stealing items of clothing from the boys’ tents and cutting the rope that held up the Panthers’ homemade flag, in the hope they would blame the Pythons. One of the researchers crushed the Panthers’ tent, flung their suitcases into the bushes and broke a boy’s beloved ukulele. To Sherif’s dismay, however, the children just couldn’t be persuaded to hate each other…The robustness of the boys’ “civilised” values came as a blow to Sherif, making him angry enough to want to punch one of his young academic helpers. It turned out that the strong bonds forged at the beginning of the camp weren’t easily broken. Thankfully, he never did start the forest fire—he aborted the experiment when he realised it wasn’t going to support his hypothesis.

    But the Rockefeller Foundation had given Sherif $38,000 in 1953 ($306,948 in 2019 dollars). In his mind, perhaps, if he came back empty-handed, he would face not just their anger but the ruin of his reputation. So, within a year, he had recruited boys for a second camp, this time in Robbers Cave state park in Oklahoma. He was determined not to repeat the mistakes of Middle Grove.

    …At Robbers Cave, things went more to plan. After a tug-of-war in which they were defeated, the Eagles burned the Rattler’s flag. Then all hell broke loose, with raids on cabins, vandalism and food fights. Each moment of confrontation, however, was subtly manipulated by the research team. They egged the boys on, providing them with the means to provoke one another—who else, asks Perry in her book, could have supplied the matches for the flag-burning?

    …Sherif was elated. And, with the publication of his findings that same year, his status as world-class scholar was confirmed. The “Robbers Cave experiment” is considered seminal by social psychologists, still one of the best-known examples of “realistic conflict theory”. It is often cited in modern research. But was it scientifically rigorous? And why were the results of the Middle Grove experiment—where the researchers couldn’t get the boys to fight—suppressed? “Sherif was clearly driven by a kind of a passion”, Perry says. “That shaped his view and it also shaped the methods he used. He really did come from that tradition in the 30s of using experiments as demonstrations—as a confirmation, not to try to find something new.” In other words, think of the theory first and then find a way to get the results that match it. If the results say something else? Bury them…“I think people are aware now that there are real ethical problems with Sherif’s research”, she tells me, “but probably much less aware of the backstage [manipulation] that I’ve found. And that’s understandable because the way a scientist writes about their research is accepted at face value.” The published report of Robbers Cave uses studiedly neutral language. “It’s not until you are able to compare the published version with the archival material that you can see how that story is shaped and edited and made more respectable in the process.” That polishing up still happens today, she explains. “I wouldn’t describe him as a charlatan…every journal article, every textbook is written to convince, persuade and to provide evidence for a point of view. So I don’t think Sherif is unusual in that way.”

  148. 1991-lykken.pdf: ⁠, David T. Lykken (1991; psychology):

    [Lykken’s (1991) classic criticisms of psychology’s dominant research tradition, from the perspective of the Minnesotan psychometrics school, in association with Paul Meehl: psychology’s replication crisis, the constant fading-away of trendy theories, and inability to predict the real world; the measurement problem, null-hypothesis statistical-significance testing, and the granularity of research methods.]

    I shall argue the following theses:

    1. Psychology isn’t doing very well as a scientific discipline and something seems to be wrong somewhere.
    2. This is due partly to the fact that psychology is simply harder than physics or chemistry, and for a variety of reasons. One interesting reason is that people differ structurally from one another and, to that extent, cannot be understood in terms of the same theory since theories are guesses about structure.
    3. But the problems of psychology are also due in part to a defect in our research tradition; our students are carefully taught to behave in the same obfuscating, self-deluding, pettifogging ways that (some of) their teachers have employed.
  149. {#linkBibliography-(bbc)-2019 .docMetadata}, Kelly Oakes (BBC) (2019-08-20):

    Psychologist Russell Hurlburt at the University of Nevada, Las Vegas, has spent the last few decades training people to see inside their own minds more clearly in an attempt to learn something about our inner experiences at large. Though many individual studies on inner speech include only a small number of participants, making it hard to know whether their results apply more widely, Hurlburt estimates he’s been able to peek inside the minds of hundreds of people since he began his research. What he’s found suggests that the thoughts running through our heads are a lot more varied than we might suppose.

    For one, words don’t seem to feature as heavily in our day-to-day thoughts as many of us think they do. “Most people think that they think in words, but many people are mistaken about that”, he says. In one small study, for example, Hurlburt sampled participants’ thoughts at random moments to find out what they were thinking during the course of reading. Only a quarter of their sampled thoughts featured words at all, and just 3% involved internal narration.

    …If people aren’t constantly talking to themselves, what are they doing?

    In his years of studying the inner workings of people’s minds, Hurlburt has come up with five categories of inner experiences: inner speaking, which comes in a variety of forms; inner seeing, which could feature images of things you’ve seen in real life or imaginary visuals; feelings, such as anger or happiness; sensory awareness, like being aware of the scratchiness of the carpet under your feet; and unsymbolised thinking, a trickier concept to get your head around, but essentially a thought that doesn’t manifest as words or images, but is undoubtedly present in your mind. But those categories leave room for variation, too. Take inner speaking, which can come in the form of a single word, a sentence, some kind of monologue, or even a conversation. The idea of an internal dialogue—rather than a monologue—will be familiar to anyone who’s ever rehearsed an important conversation, or rehashed an argument, in their mind. But the person we talk to inside our head is not always a stand in for someone else—often, that other voice is another aspect of ourselves.

    …Famira Racy, co-ordinator of the Inner Speech Lab at Mount Royal University, Canada, and her colleagues recently used a method called thought listing—which, unsurprisingly, involves getting participants to list their thoughts at certain times—to take a broader look at inner speech.

    They found that the students in the study were talking to themselves about everything from school to their emotions, other people, and themselves, while they were doing everyday tasks like walking and getting in and out of bed. Though it has the same limitations as much research on inner speech—namely, you can’t always trust people to know what or how they were really thinking—the results appear consistent with previous work.

    “I can’t say for sure if it’s any more important [than other kinds of inner experience], but there’s been enough research done to show that inner speech plays an important role in self-regulation behaviour, problem solving, critical thinking and reasoning and future thinking”, Racy says…“It gives you a way to communicate with yourself using a meaningful structure”, says Racy. Or as one of her colleagues sometimes puts it: “Inner speech is your flashlight in the dark room that is your mind.”

  150. 2013-hurlburt.pdf: ⁠, Russell T. Hurlburt, Christopher L. Heavey, Jason M. Kelsey (2013-12-01; psychology):


    • Inner speaking is a common but not ubiquitous phenomenon of inner experience.
    • There are large individual differences in the frequency of inner speaking (from near 0% to near 100%).
    • There is substantial variability in the phenomenology of naturally occurring moments of inner speaking.
    • Use of an appropriate method is critical to the study of inner experience.
    • Descriptive Experience Sampling is designed to apprehend high fidelity descriptions of inner experience.

    Abstract: Inner speaking is a common and widely discussed phenomenon of inner experience. Based on our studies of inner experience using Descriptive Experience Sampling (a qualitative method designed to produce high fidelity descriptions of randomly selected pristine inner experience), we advance an initial phenomenology of inner speaking. Inner speaking does occur in many, though certainly not all, moments of pristine inner experience. Most commonly it is experienced by the person as speaking in his or her own naturally inflected voice but with no sound being produced. In addition to prototypical instances of inner speaking, there are wide-ranging variations that fit the broad category of inner speaking and large individual differences in the frequency with which individuals experience inner speaking. Our observations are discrepant from what many have said about inner speaking, which we attribute to the characteristics of the methods different researchers have used to examine inner speaking.

  151. 2018-hennecke.pdf: ⁠, Marie Hennecke, Thomas Czikmantori, Veronika Brandstätter (2018-12-10; psychology):

    We investigated the self-regulatory strategies people spontaneously use in their everyday lives to regulate their persistence during aversive activities. In pilot studies (pooled N = 794), we identified self-regulatory strategies from self-reports and generated hypotheses about individual differences in trait self-control predicting their use. Next, deploying ambulatory assessment (N = 264, 1940 reports of aversive/​​​​challenging activities), we investigated predictors of the strategies’ self-reported use and effectiveness (trait self-control and demand types). The popularity of strategies varied across demands. In addition, people higher in trait self-control were more likely to focus on the positive consequences of a given activity, set goals, and use emotion regulation. Focusing on positive consequences, focusing on negative consequences (of not performing the activity), thinking of the near finish, and emotion regulation increased perceived self-regulatory success across demands, whereas distracting oneself from the aversive activity decreased it. None of these strategies, however, accounted for the beneficial effects of trait self-control on perceived self-regulatory success. Hence, trait self-control and strategy use appear to represent separate routes to good self-regulation. By considering trait-approaches and process-approaches these findings promote a more comprehensive understanding of self-regulatory success and failure during people’s daily attempts to regulate their persistence.

  152. ⁠, John Perry (1996-02-23):

    All procrastinators put off things they have to do. Structured procrastination is the art of making this bad trait work for you. The key idea is that procrastinating does not mean doing absolutely nothing. Procrastinators seldom do absolutely nothing; they do marginally useful things, like gardening or sharpening pencils or making a diagram of how they will reorganize their files when they get around to it. Why does the procrastinator do these things? Because they are a way of not doing something more important. If all the procrastinator had left to do was to sharpen some pencils, no force on earth could get him to do it. However, the procrastinator can be motivated to do difficult, timely and important tasks, as long as these tasks are a way of not doing something more important.

    Structured procrastination means shaping the structure of the tasks one has to do in a way that exploits this fact. The list of tasks one has in mind will be ordered by importance. Tasks that seem most urgent and important are on top. But there are also worthwhile tasks to perform lower down on the list. Doing these tasks becomes a way of not doing the things higher up on the list. With this sort of appropriate task structure, the procrastinator becomes a useful citizen. Indeed, the procrastinator can even acquire, as I have, a reputation for getting a lot done.

  153. 2019-obrien.pdf: {#linkBibliography-o’brien-2019 .docMetadata doi=“10.1037/​​pspa0000147”}, E. O'Brien (2019; culture):

    What would it be like to revisit a museum, restaurant, or city you just visited? To rewatch a movie you just watched? To replay a game you just played? People often have opportunities to repeat hedonic activities. Seven studies (total N = 3,356) suggest that such opportunities may be undervalued: Many repeat experiences are not as dull as they appear. Studies 1–3 documented the basic effect. All participants first completed a real-world activity once in full (Study 1, museum exhibit; Study 2, movie; Study 3, video game). Then, some predicted their reactions to repeating it whereas others actually repeated it. Predictors underestimated Experiencers’ enjoyment, even when experienced enjoyment indeed declined. Studies 4 and 5 compared mechanisms: neglecting the pleasurable byproduct of continued exposure to the same content (eg., fluency) versus neglecting the new content that manifests by virtue of continued exposure (eg., discovery), both of which might dilute uniform dullness. We found stronger support for the latter: The misprediction was moderated by stimulus complexity (Studies 4 and 5) and mediated by the amount of novelty discovered within the stimulus (Study 5), holding exposure constant. Doing something once may engender an inflated sense that one has now seen “it”, leaving people naïve to the missed nuances remaining to enjoy. Studies 6 and 7 highlighted consequences: Participants incurred costs to avoid repeats so to maximize enjoyment, in specific contexts for which repetition would have been as enjoyable (Study 6) or more enjoyable (Study 7) as the provided novel alternative. These findings warrant a new look at traditional assumptions about hedonic adaptation and novelty preferences. Repetition too could add an unforeseen spice to life.

  154. 2019-vrselja.pdf: ⁠, Zvonimir Vrselja, Stefano G. Daniele, John Silbereis, Francesca Talpo, Yury M. Morozov, André M. M. Sousa, Brian S. Tanaka, Mario Skarica, Mihovil Pletikos, Navjot Kaur, Zhen W. Zhuang, Zhao Liu, Rafeed Alkawadri, Albert J. Sinusas, Stephen R. Latham, Stephen G. Waxman, Nenad Sestan (2019-05-17; longevity):

    The brains of humans and other mammals are highly vulnerable to interruptions in blood flow and decreases in oxygen levels. Here we describe the restoration and maintenance of microcirculation and molecular and cellular functions of the intact pig brain under ex vivo normothermic conditions up to four hours post-mortem. We have developed an extracorporeal pulsatile-perfusion system and a haemoglobin-based, acellular, non-coagulative, echogenic, and cytoprotective perfusate that promotes recovery from anoxia, reduces reperfusion injury, prevents oedema, and metabolically supports the energy requirements of the brain. With this system, we observed preservation of cytoarchitecture; attenuation of cell death; and restoration of vascular dilatory and glial inflammatory responses, spontaneous synaptic activity, and active cerebral metabolism in the absence of global electrocorticographic activity. These findings demonstrate that under appropriate conditions the isolated, intact large mammalian brain possesses an underappreciated capacity for restoration of microcirculation and molecular and cellular activity after a prolonged post-mortem interval.

  155. {#linkBibliography-aging)-2019 .docMetadata}, Reason (Fight Aging) (2019-12-31):

    [Aging research over the past year, 2019. Categories include: The State of Funding, Conferences and Community, Clinical Development, Cellular Mitochondria in Aging, Nuclear DNA Damage, Cross-Links, Neurodegeneration, Upregulation of Cell Maintenance, In Vivo Cell Reprogramming, Parabiosis, The Gut Microbiome in Aging, Biomarkers of Aging, Cancer, The Genetics of Longevity, Regenerative Medicine, Odds and Ends, Short Articles, and In Conclusion.]

    As has been the case for a few years now, progress towards the implementation of rejuvenation therapies is accelerating dramatically, ever faster with each passing year. While far from everyone is convinced that near term progress in addressing human aging is plausible, it is undeniable that we are far further ahead than even a few years ago. Even the public at large is beginning to catch on. While more foresightful individuals of past generations could do little more than predict a future of rejuvenation and extended healthy lives, we are in a position to make it happen.

  156. ⁠, Baptiste Couvy-Duchesne, Lachlan T. Strike, Futao Zhang, Yan Holtz, Zhili Zheng, Kathryn E. Kemper, Loic Yengo, Olivier Colliot, Margaret J. Wright, Naomi R. Wray, Jian Yang, Peter M. Visscher (2019-07-09):

    The recent availability of large-scale neuroimaging cohorts (here the UK Biobank [UKB] and the Human Connectome Project [HCP]) facilitates deeper characterisation of the relationship between phenotypic and brain architecture variation in humans. We tested the association between 654,386 vertex-wise measures of cortical and subcortical morphology (from T1w and T2w MRI images) and behavioural, cognitive, psychiatric and lifestyle data. We found a statistically-significant association of grey-matter structure with 58 out of 167 UKB phenotypes spanning substance use, blood assay results, education or income level, diet, depression, being a twin as well as cognition domains (UKB discovery sample: n = 9,888). Twenty-three of the 58 associations replicated (UKB replication sample: n = 4,561; HCP, n = 1,110). In addition, differences in body size (height, weight, BMI, waist and hip circumference, body fat percentage) could account for a substantial proportion of the association, providing possible insight into previous MRI studies for psychiatric disorders where case status is associated with body mass index. Using the same linear mixed model, we showed that most of the associated characteristics (e.g. age, sex, body size, diabetes, being a twin, maternal smoking) could be significantly predicted using all the brain measurements in out-of-sample prediction. Finally, we demonstrated other applications of our approach including a Region Of Interest (ROI) analysis that retains the vertex-wise complexity and ranking of the information contained across MRI processing options.

    Highlights:

    • Our linear mixed model approach unifies association and prediction analyses for highly dimensional vertex-wise MRI data.
    • Grey-matter structure is associated with measures of substance use, blood assay results, education or income level, diet, depression, being a twin, as well as cognition domains.
    • Body size (height, weight, BMI, waist and hip circumference) is an important source of covariation between the phenome and grey-matter structure.
    • Grey-matter scores quantify grey-matter-based risk for the associated traits and allow studying phenotypes not collected.
    • The most general cortical processing (“fsaverage” mesh with no smoothing) maximises the brain-morphometricity for all UKB phenotypes.

  157. {#linkBibliography-magazine)-2011 .docMetadata}, Sy Montgomery (Orion Magazine) (2011-10-25):

    [Discussion of the remarkable abilities & intelligence of octopuses, despite being small, fragile, asocial beings. With hundreds of millions of neurons (most in its arms, which appear to be able to think and act independently, coordinating with the other arms/​​​​mouth, with their immensely-strong suckers), octopuses are able to recognize individuals and bear grudges (squirting water at the foe), somehow imitate color despite being color-blind, use tools, solve puzzles, and manipulate rocks to create shelters; they are noted escape artists: one octopus was found breaking out of its aquarium at night to feast in other tanks, sneaking back before humans returned.]

  158. 2019-delguidice.pdf: ⁠, Marco Del Giudice (2019-09; genetics  /​ ​​ ​selection):

    The ability of parasites to manipulate host behavior to their advantage has been studied extensively, but the impact of parasite manipulation on the evolution of neural and endocrine mechanisms has remained virtually unexplored. If selection for countermeasures has shaped the evolution of nervous systems, many aspects of neural functioning are likely to remain poorly understood until parasites—the brain’s invisible designers—are included in the picture.

    This article offers the first systematic discussion of brain evolution in light of parasite manipulation. After reviewing the strategies and mechanisms employed by parasites, the paper presents a taxonomy of host countermeasures with four main categories, namely: restrict access to the brain; increase the costs of manipulation; increase the complexity of signals; and increase robustness. For each category, possible examples of countermeasures are explored, and the likely evolutionary responses by parasites are considered.

    The article then discusses the metabolic, computational, and ecological constraints that limit the evolution of countermeasures. The final sections offer suggestions for future research and consider some implications for basic neuroscience and psychopharmacology.

    The paper aims to present a novel perspective on brain evolution, chart a provisional way forward, and stimulate research across the relevant disciplines.

    [Keywords: behavior, brain evolution, hormones, neurobiology, parasite-host interactions, parasite manipulation]

  159. {#linkBibliography-magazine)-2019 .docMetadata}, Matthew Shaer (Smithsonian Magazine) (2019-05):

    [Profile of the Marsili family, an Italian family with a genetic mutation which renders pain far less painful but still felt: outright pain insensitivity is often fatal, but the Marsili condition is more moderate, and so they are all alive and healthy, albeit much more injury-prone, as during skiing, sunbathing, or childhood. In their condition, acute pain is felt, but then it fades and no chronic pain lingers. Scientists who had previously discovered a pain-insensitivity mutation in a Pakistani family (some of whom had died) examined the Marsilis next; after years of testing candidate mutations, they finally found a hit: a gene whose mutations, when genetically engineered into mice, produced dramatically different pain responses, with the Marsili mutation specifically increasing pain tolerance.]

    The broad import of their analysis is that it showed that ZFHX2 was crucially involved in pain perception in a way nobody had previously understood. Unlike more frequently documented cases of pain insensitivity, for instance, the Marsili family’s mutation didn’t prevent the development of pain-sensing neurons; those were still there in typical numbers. Yet it was also different from the Pakistani family’s mutation, whose genetic anomaly disabled a single function in pain-sensing neurons. Rather, ZFHX2 appeared to regulate how other genes operated, including several genes already linked to pain processing and active throughout the nervous system, including in the brain—a sort of “master regulator”, in the words of Alexander Chesler, a neurobiologist specializing in the sensory nervous system at the National Institutes of Health, in Bethesda, Maryland, who was not involved in the study.

    “What’s so exciting is that this is a completely different class of pain insensitivity”, Chesler says. “It tells you that this particular pathway is important in humans. And that’s what gets people in the industry excited. It suggests that there are changes that could be made to somebody to make them insensitive to chronic pain.”

  160. 2018-banziger.pdf: ⁠, Hans Bänziger (2018-05-04; biology):

    Wild Lisotrigona cacciae (Nurse) and L. furva Engel were studied in their natural forest habitat at three sites in northern Thailand, May 2013–November 2014. The author, both experimenter and tear source, marked the minute bees while they drank from his eyes viewed in a mirror. All marked workers, 34 L. cacciae and 23 L. furva, came repeatedly to engorge, 34 and 27 times on average, respectively. The maximum number of times the same L. cacciae and L. furva came was 78 and 144 visits in one day, respectively; the maximum over two days was 145 visits by one L. cacciae; the maximum number of visiting days by the same bee was four over seven days by one L. furva which made 65 visits in total. The same forager may collect tears for more than 10h in a day, on average for 3h15min and 2h14min for L. cacciae and L. furva, respectively. Engorging from the inner eye corner averaged 3.1 and 2.2 min, respectively, but only 1.3 and 0.9 min when settled on the lower eye lid/ciliae. The interval between consecutive visits averaged 3.3 min and 3.8 min, respectively. Lachryphagy occurred during all months of the year, with 91–320 foragers a day during the hot season and 6–280 foragers during the rainy season; tear collecting resumed after a downpour. During the cold season eye visitation was reduced to 3–64 foragers, but none left her nest when the temperature was below 22°C. Flying ranges were greater than in comparable non-lachryphagous meliponines. It is proposed that Lisotrigona colonies have workers that are, besides nectar and pollen foragers, specialized tear collectors. Tears are 200 times richer in proteins than sweat, a secretion well-known to be imbibed by many meliponines. Digestion of proteins dissolved in tears is not hampered by an exine wall as in pollen, and they have bactericidal properties. These data corroborate the inference that Lisotrigona, which also visit other mammals, birds and reptiles, harvest lachrymation mainly for its content of proteins rather than only for salt and water.

  161. ⁠, Yoshiki Ohshima, Dan Amelang, Ted Kaehler, Bert Freudenberg, Aran Lunzer, ⁠, Ian Piumarta, Takashi Yamamiya, Alan Borning, Hesam Samimi, Bret Victor, Kim Rose (2012):

    [Technical report from a research project aiming at writing a GUI OS in 20k LoC; tricks include ASCII art networking DSLs & generic optimization for text layout⁠, which let them implement a full OS, sound, GUI desktops, Internet networking & web browsers, a text/document editor etc., all in fewer lines of code than most OSes need for small parts of any of those.]

    …Many software systems today are made from millions to hundreds of millions of lines of program code that is too large, complex and fragile to be improved, fixed, or integrated. (One hundred million lines of code at 50 lines per page is 5000 books of 400 pages each! This is beyond human scale.) What if this could be made literally 1000 times smaller—or more? And made more powerful, clear, simple and robust?

    STEPS Aims At ‘Personal Computing’: STEPS takes as its prime focus the dynamic modeling of ‘personal computing’ as most people think of it…word processor, spreadsheet, Internet browser, other productivity SW; User Interface and Command Listeners: windows, menus, alerts, scroll bars and other controls, etc.; Graphics and Sound Engine: physical display, sprites, fonts, compositing, rendering, sampling, playing; Systems Services: development system, database query languages, etc.; Systems Utilities: file copy, desk accessories, control panels, etc.; Logical Level of OS: e.g. file management, Internet, and networking facilities, etc.; Hardware Level of OS: e.g. memory manager, process manager, device drivers, etc.

  162. ⁠, Andy Matuschak, Michael Nielsen (2019-10):

    [Long writeup by Andy Matuschak and Michael Nielsen on an experiment in integrating spaced-repetition systems with a tutorial on quantum computing, Quantum Country: Quantum Computing For The Very Curious. By combining explanation with spaced testing, a notoriously thorny subject may be learned more easily and then actually remembered, demonstrating a possible ‘tool for thought’. Early results indicate users do indeed remember the quiz answers, and feedback has been positive.]

    Part I: Memory systems

    • Introducing the mnemonic medium
    • The early impact of the prototype mnemonic medium
    • Expanding the scope of memory systems: what types of understanding can they be used for?
    • Improving the mnemonic medium: making better cards
    • Two cheers for mnemonic techniques
    • How important is memory, anyway?
    • How to invent Hindu-Arabic numerals?

    Part II: Exploring tools for thought more broadly:

    • Mnemonic video

    • Why isn’t there more work on tools for thought today?

    • Questioning our basic premises

      • What if the best tools for thought have already been discovered?
      • Isn’t this what the tech industry does? Isn’t there a lot of ongoing progress on tools for thought?
      • Why not work on AGI or BCI instead?
    • Executable books

      • Serious work and the aspiration to canonical content
      • Stronger emotional connection through an inverted writing structure

    Summary and Conclusion

    … in Quantum Country an expert writes the cards, an expert who is skilled not only in the subject matter of the essay, but also in strategies which can be used to encode abstract, conceptual knowledge. And so Quantum Country provides a much more scalable approach to using memory systems to do abstract, conceptual learning. In some sense, Quantum Country aims to expand the range of subjects users can comprehend at all. In that, it has very different aspirations to all prior memory systems.

    More generally, we believe memory systems are a far richer space than has previously been realized. Existing memory systems barely scratch the surface of what is possible. We’ve taken to thinking of Quantum Country as a memory laboratory. That is, it’s a system which can be used both to better understand how memory works, and also to develop new kinds of memory system. We’d like to answer questions such as:

    • What are new ways memory systems can be applied, beyond the simple, declarative knowledge of past systems?
    • How deep can the understanding developed through a memory system be? What patterns will help users deepen their understanding as much as possible?
    • How far can we raise the human capacity for memory? And with how much ease? What are the benefits and drawbacks?
    • Might it be that one day most human beings will have a regular memory practice, as part of their everyday lives? Can we make it so memory becomes a choice; is it possible to in some sense solve the problem of memory?
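    The expanding review schedule such memory systems rely on can be sketched as a minimal spaced-repetition scheduler (a hypothetical simplification: the doubling rule and reset-to-one-day here are illustrative, not Quantum Country's actual algorithm): each successful recall roughly doubles the delay until the next review, while a failure resets the card to a short interval.

```python
from dataclasses import dataclass

@dataclass
class Card:
    """One embedded question, with its current review interval."""
    prompt: str
    interval_days: int = 1  # days until the next scheduled review

def review(card: Card, remembered: bool) -> Card:
    # Expanding schedule: success doubles the delay, failure restarts it.
    if remembered:
        card.interval_days *= 2
    else:
        card.interval_days = 1
    return card

card = Card("What is a qubit?")
for outcome in [True, True, True, False, True]:
    review(card, outcome)
print(card.interval_days)  # intervals ran 1, 2, 4, 8, reset to 1, then 2
```

    The point of any exponential schedule of this shape is that total review effort per card grows only logarithmically with the length of time a fact is retained.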


  165. 2000-cook.pdf: ⁠, Richard I. Cook (2000; technology):

    1. Complex systems are intrinsically hazardous systems.
    2. Complex systems are heavily and successfully defended against failure.
    3. Catastrophe requires multiple failures—single point failures are not enough.
    4. Complex systems contain changing mixtures of failures latent within them.
    5. Complex systems run in degraded mode.
    6. Catastrophe is always just around the corner.
    7. Post-accident attribution to a ‘root cause’ is fundamentally wrong.
    8. Hindsight biases post-accident assessments of human performance.
    9. Human operators have dual roles: as producers & as defenders against failure.
    10. All practitioner actions are gambles.
    11. Actions at the sharp end resolve all ambiguity.
    12. Human practitioners are the adaptable element of complex systems.
    13. Human expertise in complex systems is constantly changing.
    14. Change introduces new forms of failure.
    15. Views of ‘cause’ limit the effectiveness of defenses against future events.
    16. Safety is a characteristic of systems and not of their components.
    17. People continuously create safety.
    18. Failure free operations require experience with failure.
  166. ⁠, Terence Tao (2010-10):

    [Slideshow presentation on the “cosmic ladder”: how to calculate the distances between planets and stars by using geometry, brightness, radar, and progressively estimating further and further, solving one unknown at a time, from the Ancient Greeks to today.]
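    Two classic rungs of that ladder reduce to one-line formulas (standard textbook relations, not taken from the slides themselves): trigonometric parallax anchors nearby distances, and the inverse-square law for a standard candle then extends them, solving one unknown at a time.

```python
import math

def parallax_distance_pc(p_arcsec: float) -> float:
    # Geometry rung: a star with parallax angle p arcseconds
    # (baseline = Earth's orbit) lies at d = 1/p parsecs.
    return 1.0 / p_arcsec

def standard_candle_distance(d_ref_pc: float, b_ref: float, b_other: float) -> float:
    # Brightness rung: two stars of equal luminosity obey the
    # inverse-square law, so d_other = d_ref * sqrt(b_ref / b_other).
    return d_ref_pc * math.sqrt(b_ref / b_other)

d_near = parallax_distance_pc(0.1)  # 10 parsecs
# A star of the same luminosity appearing 10,000x fainter is 100x farther:
d_far = standard_candle_distance(d_near, 10000.0, 1.0)
print(d_near, d_far)  # 10.0 1000.0
```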

  167. ⁠, Jason Crawford (2019-07-13):

    The bicycle, as we know it today, was not invented until the late 1800s. Yet it was a simple mechanical invention. It would seem to require no brilliant inventive insight, and certainly no scientific background.

    …Technology factors are more convincing to me. They may have been necessary for bicycles to become practical and cheap enough to take off. But they weren’t needed for early experimentation. Frames can be built of wood. Wheels can be rimmed with metal. Gears can be omitted. Chains can be replaced with belts; some early designs even used treadles instead of pedals, and at least one design drove the wheels with levers, as on a steam locomotive. So what’s the real explanation?

    First, the correct design was not obvious. For centuries, progress was stalled because inventors were all trying to create multi-person four-wheeled carriages, rather than single-person two-wheeled vehicles. It’s unclear why this was; certainly inventors were copying an existing mode of transportation, but why would they draw inspiration only from the horse-and-carriage, and not from the horse-and-rider? (Some commenters have suggested that it was not obvious that a two-wheeled vehicle would balance, but I find this unconvincing given how many other things people have learned to balance on, from dugout canoes to horses themselves.) It’s possible (I’m purely speculating here) that early mechanical inventors had a harder time realizing the fundamental impracticability of the carriage design because they didn’t have much in the way of mathematical engineering principles to go on, but then again it’s unclear what led to Drais’s breakthrough. And even after Drais hit on the two-wheeled design, it took multiple iterations, which happened over decades, to get to a design that was efficient, comfortable, and safe.

    …But we can go deeper, and ask the questions that inspired my intense interest in this question in the first place. Why was no one even experimenting with two-wheeled vehicles until the 1800s? And why was no one, as far as we know, even considering the question of human-powered vehicles until the 1400s? Why weren’t there bicycle mechanics in the 1300s, when there were clockmakers, or at least by the 1500s, when we had watches? Or among the ancient Romans, who built water mills and harvesting machines? Or the Greeks, who built the Antikythera mechanism? Even if they didn’t have tires and chains, why weren’t these societies at least experimenting with draisines? Or even the failed carriage designs?

  168. ⁠, Venkatesh Rao (2012-03-08):

    [Coins “Hall’s law”: “the maximum complexity of artifacts that can be manufactured at scales limited only by resource availability doubles every 10 years.” Economic history discussion of industrialization: the replacement of esoteric artisanal knowledge, based on trial-and-error and epitomized by a classic Sheffield steel recipe which calls for adding 4 white onions to iron, by formalized, specialized, rationalized processes such as interchangeable parts in a rifle produced by a factory system, which can create standardized parts at larger scales than craft-based processes, on which other systems can be built (once a reliable controlled source of parts exists). Examples include British gun-making, John Hall, the Montgomery Ward catalogue.]

    I believe this law held between 1825 and 1960, at which point the law hit its natural limits. Here, I mean complexity in the loose sense I defined before: some function of mechanical complexity and operating tempo of the machine, analogous to the transistor count and clock-rate of chips. I don’t have empirical data to accurately estimate the doubling period, but 10 years is my initial guess, based on the anecdotal descriptions from Morris’ book and the descriptions of the increasing presence of technology in the world fairs. Along the complexity dimension, mass-produced goods rapidly got more complex, from guns with a few dozen parts to late-model steam engines with thousands. The progress on the consumer front was no less impressive, with the Montgomery Ward catalog offering mass-produced pianos within a few years of its introduction, for instance. By the turn of the century, you could buy entire houses in mail-order kit form. The cost of everything was collapsing. Along the tempo dimension, everything got relentlessly faster as well. Somewhere along the way, things got so fast thanks to trains and the telegraph that time zones had to be invented and people had to start paying attention to the second hand on clocks.

    …History is repeating itself. And the rerun episode we are living right now is not a pleasant one. The problem with history repeating itself, of course, is that sometimes it does not. The fact that 1819–1880 maps pretty well to 1959–2012 does not mean that 2012–2112 will map to 1880–1980. Many things are different this time around. But assuming history does repeat itself, what are we in for? If the Moore’s Law endgame is the same century-long economic-overdrive that was the Hall’s Law endgame, today’s kids will enter the adult world with prosperity and a fully-diffused Moore’s Law all around them. The children will do well. In the long term, things will look up. But in the long term, you and I will be dead.
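    Hall’s law as stated is a simple compound-growth claim, easy to make concrete (the 1825 base of “a few dozen parts” is an illustrative guess, not Rao’s data):

```python
def halls_law_complexity(year: float, base_year: float = 1825,
                         base_parts: float = 50, doubling_years: float = 10) -> float:
    """Maximum manufacturable artifact complexity under Hall's law:
    it doubles every `doubling_years` after `base_year`."""
    return base_parts * 2 ** ((year - base_year) / doubling_years)

# ~13.5 doublings between 1825 and 1960: from ~50 gun parts
# to artifacts on the order of half a million parts.
print(round(halls_law_complexity(1960)))  # roughly 5.8e5
```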

  169. ⁠, Jinyun Yan, Birjodh Tiwana, Souvik Ghosh, Haishan Liu, Shaunak Chatterjee (2019-01-29):

    Organic updates (from a member’s network) and sponsored updates (or ads, from advertisers) together form the newsfeed on LinkedIn. The newsfeed, the default homepage for members, attracts them to engage, brings them value and helps LinkedIn grow. Engagement and Revenue on feed are two critical, yet often conflicting objectives. Hence, it is important to design a good Revenue-Engagement Tradeoff (RENT) mechanism to blend ads in the feed. In this paper, we design experiments to understand how members’ behavior evolve over time given different ads experiences. These experiences vary on ads density, while the quality of ads (ensured by relevance models) is held constant. Our experiments have been conducted on randomized member buckets and we use two experimental designs to measure the short term and long term effects of the various treatments. Based on the first three months’ data, we observe that the long term impact is at a much smaller scale than the short term impact in our application. Furthermore, we observe different member cohorts (based on user activity level) adapt and react differently over time.

  170. ⁠, Kicks Condor (2019-12):

    Eh, this is doomed—Waxy or Imperica should take a crack at this. The AV Club did a list of ‘things’. I wanted to cover stuff that wasn’t on there. A lot happened outside of celebrities, Twitter and momentary memes. (We all obviously love @electrolemon, “double rainbow”, Key & Peele’s Gremlins 2 Brainstorm, 10 hr vids, etc.)

    There is a master list of lists as well.

    Hope for this list—get u mad & u destroy me & u blog in 2020.

  171. ⁠, Kicks Condor ():

    [Homepage of programmer Kicks Condor; hypertext-oriented link compilation and experimental design blog.]

  172. ⁠, Daniel W. VanArsdale (2006):

    Apocryphal letters claiming divine origin circulated for centuries in Europe. After 1900, shorter more secular letters appeared in the US that promised good luck if copies were distributed and bad luck if not. Billions of these “luck chain letters” circulated in the next 100 years. As they replicated through the decades, some accumulated copying errors, offhand comments, and calculated innovations that helped them prevail in the competition with other chain letters. For example, complementary testimonials developed, one exploiting perceived good luck, another exploiting perceived bad luck. Twelve successive types of paper luck chain letters are identified which predominated US circulation at some time in the twentieth century. These types, and their major variations, are described and analyzed for their replicative advantage.

    In the 1970’s a luck chain letter from South America that touted a lottery winner invaded the US and was combined on one page with an indigenous chain letter. This combination rapidly dominated circulation. In 1979 a postscript concluding with “It Works” was added to one of these combination letters, and within a few years the progeny of this single letter had replaced all the millions of similar letters in circulation without this postscript. These and other events in paper chain letter history are described, and hypotheses are offered to explain advances and declines in circulation, including the near extinction of luck chain letters in the new millennium.

    Perhaps the most dramatic event in chain letter history was the advent of money chain letters. This was spawned by the infamous “Send-a-Dime” chain letter which flooded the world in 1935. The insight and methods of its anonymous author, likely a woman motivated by charity, are examined in detail in a separate article titled “The Origin of Money Chain Letters⁠.” This constitutes Section 4.1 below, where its link is repeated. It can be read independently from this treatise.

    The online Paper Chain Letter Archive contains the text and documentation of over 900 chain letters. Most of these texts have been transcribed from collected physical letters, but many come from published sources including daily newspapers present in online searchable archives. Some unusual items in the archive are: an anonymous 1917 chain letter giving advice on obtaining conscientious objector status; a 1920 Sinn Féin revolutionary communication; rare unpublished scatological parody letters from 1935; a bizarre chain letter invitation to a suicide from 1937⁠; and a libelous Proctor and Gamble boycott alleging satanism from 1986⁠. An annotated index provides easy access to all chain letters in the archive. An Annotated Bibliography on Chain Letters and Pyramid Schemes contains over 425 entries. A Glossary gives precise definitions for terms used here, facilitating the independent reading of sections.

  173. ⁠, Neal Stephenson (1996-12):

    [Classic longform essay by SF author in which he travels the world tracing the (surprisingly few) transcontinental fiber optic cables which bind the world together and power the Internet; cables combine cutting-edge technology, deep sea challenges, high finance, and global geo-politics/​​​​espionage all in one tiny package.]

  174. ⁠, Nadia Eghbal (2016-06-08):

    [Post-⁠/​​​​ discussion of the economics of funding open source software: universally used & economically invaluable as a public good anyone can & does use, it is also essentially completely unfunded, leading to serious problems in long-term maintenance & improvement, exemplified by the Heartbleed bug—core cryptographic code run by almost every networked device on the planet could not fund more than a part-time developer.]

    Our modern society—everything from hospitals to stock markets to newspapers to social media—runs on software. But take a closer look, and you’ll find that the tools we use to build software are buckling under demand…Nearly all software today relies on free, public code (called “open source” code), written and maintained by communities of developers and other talent. Much like roads or bridges, which anyone can walk or drive on, open source code can be used by anyone—from companies to individuals—to build software. This type of code makes up the digital infrastructure of our society today. Just like physical infrastructure, digital infrastructure needs regular upkeep and maintenance. In the United States, over half of government spending on transportation and water infrastructure goes just to maintenance.1 But financial support for digital infrastructure is much harder to come by. Currently, any financial support usually comes through sponsorships, direct or indirect, from software companies. Maintaining open source code used to be more manageable. Following the personal computer revolution of the early 1980s, most commercial software was proprietary, not shared. Software tools were built and used internally by companies, and their products were licensed to customers. Many companies felt that open source code was too nascent and unreliable for commercial use. In their view, software was meant to be charged for, not given away for free. Today, everybody uses open source code, including Fortune 500 companies, government, major software companies and startups. Sharing, rather than building proprietary code, turned out to be cheaper, easier, and more efficient.

    This increased demand puts additional strain on those who maintain this infrastructure, yet because these communities are not highly visible, the rest of the world has been slow to notice. Most of us take opening a software application for granted, the way we take turning on the lights for granted. We don’t think about the human capital necessary to make that happen. In the face of unprecedented demand, the costs of not supporting our digital infrastructure are numerous. On the risk side, there are security breaches and interruptions in service, due to infrastructure maintainers not being able to provide adequate support. On the opportunity side, we need to maintain and improve these software tools in order to support today’s startup renaissance, which relies heavily on this infrastructure. Additionally, open source work builds developers’ portfolios and helps them get hired, but the talent pool is remarkably less diverse than in tech overall. Expanding the pool of contributors can positively affect who participates in the tech industry at large.

    No individual company or organization is incentivized to address the problem alone, because open source code is a public good. In order to support our digital infrastructure, we must find ways to work together. Current examples of efforts to support digital infrastructure include the Linux Foundation’s Core Infrastructure Initiative and Mozilla’s Open Source Support (MOSS) program, as well as numerous software companies in various capacities. Sustaining our digital infrastructure is a new topic for many, and the challenges are not well understood. In addition, infrastructure projects are distributed across many people and organizations, defying common governance models. Many infrastructure projects have no legal entity at all. Any support strategy needs to accept and work with the decentralized, community-centric qualities of open source code. Increasing awareness of the problem, making it easier for institutions to contribute time and money, expanding the pool of open source contributors, and developing best practices and policies across infrastructure projects will all go a long way in building a healthy and sustainable ecosystem.

  175. 2020-arora.pdf: ⁠, Ashish Arora, Sharon Belenzon, Andrea Patacconi, Jungkyu Suh (2020; economics):

    A defining feature of modern economic growth is the systematic application of science to advance technology. However, despite sustained progress in scientific knowledge, recent productivity growth in the United States has been disappointing. We review major changes in the American innovation ecosystem over the past century. The past three decades have been marked by a growing division of labor between universities focusing on research and large corporations focusing on development. Knowledge produced by universities is not often in a form that can be readily digested and turned into new goods and services. Small firms and university technology transfer offices cannot fully substitute for corporate research, which had previously integrated multiple disciplines at the scale required to solve substantial technical problems. Therefore, whereas the division of innovative labor may have raised the volume of science by universities, it has also slowed, at least for a period of time, the transformation of that knowledge into novel products and processes.

  176. ⁠, J. H. Saltzer, D. P. Reed, D. D. Clark (1984):

    This paper presents a design principle that helps guide placement of functions among the modules of a distributed computer system. The principle, called ‘the end-to-end argument’, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. Examples discussed in the paper include bit error recovery, security using encryption, duplicate message suppression, recovery from system crashes, and delivery acknowledgement. Low level mechanisms to support these functions are justified only as performance enhancements.
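    A minimal sketch of the argument (a hypothetical file-transfer example in the spirit of the paper, not its code): link-level checks cannot vouch for what happens inside each hop, so only a check performed by the communicating endpoints covers the whole path.

```python
import hashlib

def send(data: bytes) -> tuple[bytes, str]:
    # The sending application computes an end-to-end checksum.
    return data, hashlib.sha256(data).hexdigest()

def hop(data: bytes) -> bytes:
    # A relay may verify its own link, yet still corrupt the data in its
    # buffers *after* the link-level CRC has already passed.
    return data  # imagine a silent bit flip here

def receive(data: bytes, checksum: str) -> bytes:
    # Only this endpoint check covers every hop's memory, disk, and code;
    # the lower-level checks remain useful, but only as performance enhancements.
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError("end-to-end checksum mismatch; request retransmission")
    return data

payload, digest = send(b"hello, world")
assert receive(hop(payload), digest) == b"hello, world"
```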

  177. ⁠, James Hamilton (2012-02-29):

    Every couple of weeks I get questions along the lines of “should I checksum application files, given that the disk already has error correction?” or “given that TCP/IP has error correction on every communications packet, why do I need to have application level network error detection?” Another frequent question is “non-ECC mother boards are much cheaper—do we really need ECC on memory?” The answer is always yes. At scale, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted. Over the years, each time I have had an opportunity to see the impact of adding a new layer of error detection, the result has been the same. It fires fast and it fires frequently. In each of these cases, I predicted we would find issues at scale. But, even starting from that perspective, each time I was amazed at the frequency the error correction code fired.

    On one high scale, on-premise server product I worked upon, page checksums were temporarily added to detect issues during a limited beta release. The code fired constantly, and customers were complaining that the new beta version was “so buggy they couldn’t use it”. Upon deep investigation at some customer sites, we found the software was fine, but each customer had one, and sometimes several, latent data corruptions on disk. Perhaps it was introduced by hardware, perhaps firmware, or possibly software. It could have even been corruption introduced by one of our previous releases when those pages were last written. Some of these pages may not have been written for years. I was amazed at the amount of corruption we found and started reflecting on how often I had seen “index corruption” or other reported product problems that were probably corruption introduced in the software and hardware stacks below us. The disk has complex hardware and hundreds of thousands of lines of code, while the storage area network has complex data paths and over a million lines of code. The device driver has tens of thousands of lines of code. The operating system has millions of lines of code. And our application had millions of lines of code. Any of us can screw up, each has an opportunity to corrupt, and it’s highly likely that the entire aggregated millions of lines of code have never been tested in precisely the combination and on the hardware that any specific customer is actually currently running.

    …This incident reminds us of the importance of never trusting anything from any component in a multi-component system. Checksum every data block and have well-designed, and well-tested, failure modes for even unlikely events. Rather than have complex recovery logic for the near infinite number of faults possible, have simple, brute-force recovery paths that you can use broadly and test frequently. Remember that all hardware, all firmware, and all software have faults and introduce errors. Don’t trust anyone or anything. Have test systems that bit-flip and corrupt, and ensure the production system can operate through these faults—at scale, rare events are amazingly common.
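    The practice Hamilton advocates (checksum every block at the application layer and treat everything beneath as untrusted) reduces to a few lines; the checksum-prefixed page format here is an invented illustration, not his product’s:

```python
import hashlib

def write_page(payload: bytes) -> bytes:
    # Prefix each page with a 32-byte checksum of its contents before storing.
    return hashlib.sha256(payload).digest() + payload

def read_page(page: bytes) -> bytes:
    stored, payload = page[:32], page[32:]
    # Fires on corruption introduced anywhere below us: disk, SAN,
    # firmware, device driver, OS, or our own earlier releases.
    if hashlib.sha256(payload).digest() != stored:
        raise IOError("latent page corruption detected")
    return payload

page = write_page(b"some row data")
assert read_page(page) == b"some row data"
# Flip one payload bit, as a fault-injection test system would:
corrupted = page[:35] + bytes([page[35] ^ 0x01]) + page[36:]
# read_page(corrupted) now raises IOError instead of returning bad data
```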

  178. End-to-end

  179. ⁠, C. Marchetti (1994-09):

    Personal travel appears to be much more under the control of basic instincts than of economic drives. This may be the reason for the systematic mismatch between the results of cost benefit analysis and the actual behavior of travelers. In this paper we put together a list of the basic instincts that drive and contain travelers’ behavior, showing how they mesh with technological progress and economic constraints.

    …the empirical conclusion reached by Zahavi is that all over the world the mean exposure time for man is around one hour per day.

    …When introducing mechanical transportation with speeds higher than 5 km/​​​​hr, the physical size of the city can grow in proportion, as the historical analysis applied to the city of Berlin clearly shows (Figure 2). The commuting fields, based on cars, of a dozen American cities are reported in Figure 3. On the same chart and to the same scale, the Greek villages of Figure 1 are shown in schematic form. Cars make all the difference. As they have a speed of 6 or 7 times greater than a pedestrian, they expand daily connected space 6 or 7 times in linear terms, or about 50 times in area. Ancient cities typically had a maximum population of about 1 million people. Today the population may tend to reach 50 million people in conurbations like Mexico City (Figure 4), with a population density equal to that of Hadrian’s Rome. If the Japanese complete a Shinkansen Maglev (a magnetically levitated train) connecting Tokyo to Osaka in less than one hour with a large transportation capacity, then we may witness a city of 100 million people. If we expand the reasoning, we can muse about a city of 1 billion people, which would require an efficient transportation system with a mean speed of only 150 km/​​​​hr.

    …There is another fundamental observation made by Zahavi that links instincts and money. Because of its generality it could be dubbed as a money instinct. People spend about 13% of their disposable income on traveling. The percentage is the same in Germany or Canada, now or in 1930. Within this budget, time and money are allocated between the various modes of transport available to the traveller in such a way as to maximize mean speed. The very poor man walks and makes 5 km/​​​​day, the very rich man flies and makes 500 km/​​​​day. The rest sit in between. People owning a car use it for about one hour a day (Figure 12) and travel about 50 km/​​​​day (Figure 13). People who do not have a car spend less than 13% of their disposable income, however, presumably because public services are underrated and consequently there is no possibility of spending that share of income traveling one hour per day (Figure 14). Contrary to the risk of all this “exposure”, the number of people killed by road traffic seems to be invariant to the number of vehicles (Figure 15).

    Technology introduces faster and faster means of transportation, which also are more expensive in terms of time of use. These new technologies are introduced roughly every 55 years in tune with the Kondratiev cycle. Their complete adoption takes about 100 years (Figure 16). We are now in the second Kondratiev for cars and most mobility comes from them. It was about 10 km/​​​​day earlier, and is now about 40 km/​​​​day. Airplanes are making inroads into this situation and they promise to bring the next leap forward in mobility, presumably with the help of Maglev trains. Hypersonic airplanes promise to glue the world into a single territory: the famous global village.
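    Zahavi’s fixed one-hour travel budget turns speed directly into city size, which is all the arithmetic behind the “6 or 7 times in linear terms, or about 50 times in area” claim above (a back-of-the-envelope sketch; the circular-city model is a simplification, not Marchetti’s):

```python
import math

TRAVEL_BUDGET_H = 1.0  # Zahavi: ~1 hour of personal travel per day

def city_radius_km(speed_kmh: float) -> float:
    # Half the daily budget going out, half coming back.
    return speed_kmh * TRAVEL_BUDGET_H / 2

def connected_area_km2(speed_kmh: float) -> float:
    return math.pi * city_radius_km(speed_kmh) ** 2

walk, car = 5.0, 35.0  # km/h: pedestrian vs. a car ~7x faster
print(city_radius_km(walk))  # 2.5 km: the classic walking-village radius
print(connected_area_km2(car) / connected_area_km2(walk))  # ~49x the area
```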

  180. ⁠, Pseudoerasmus (2017-10-02):

    So I illustrate the relevance of labour relations to economic development through the contrasting fortunes of India’s and Japan’s cotton textile industries in the interwar period, with some glimpses of Lancashire, the USA, interwar Shanghai, etc.

    TL;DR version: At the beginning of the 20th century, the Indian and the Japanese textile industries had similar levels of wages and productivity, and both were exporting to global markets. But by the 1930s, Japan had surpassed the UK to become the world’s dominant exporter of textiles, while the Indian industry withdrew behind the tariff protection of the British Raj. Technology, human capital, and industrial policy were minor determinants of this divergence, or at least they mattered conditional on labour relations.

    Indian textile mills were obstructed by militant workers who defended employment levels, resisted productivity-enhancing measures, and demanded high wages relative to effort. But Japanese mills suppressed strikes and busted unions; extracted from workers much greater effort for a given increase in wages; and imposed technical & organisational changes at will. The bargaining position of workers was much weaker in Japan than in India, because Japan had a true “surplus labour” economy with a large number of workers ‘released’ from agriculture into industry. But late colonial India was rather ‘Gerschenkronian’, where employers’ options were more limited by a relatively inelastic supply of labour.

    The state also mattered. The British Raj did little to restrain on behalf of Indian capitalists the exercise of monopoly power by Indian workers. Britain had neither the incentive, nor the stomach, nor the legitimacy to do much about it. But a key element of the industrial policy of the pre-war Japanese state was repression of the labour movement.

    Note: By “labour repression” I do not mean coercing workers, or suppressing wage levels, but actions which restrain the effects of worker combinations.

    Nor am I saying unions are bad! I’ve written before that unions in Germany are great⁠.

    Also, I do not claim this post has any relevance for today’s developed countries. It’s mainly about labour-intensive manufacturing in historical industrialisation or in today’s developing countries.

  181. ⁠, Peter Thiel (2008-01-28):

    [An interesting thought experiment to assess what must happen for an “optimistic” version of the future to unfold, and the possibility of an impending apocalypse and how that might lead to financial bubbles. The article is eye-opening, depressing, and fascinating. Thiel argues that science in all of its forms (nuclear weapons, biological catastrophes, etc.) has vastly increased the probability of some form of apocalypse; betting on the apocalypse makes no sense, so rational investors don’t do it; globalization is the anti-apocalypse bet; financial bubbles are bets on globalization; and the recent slate of financial bubbles, which he calls unprecedented in history, are related to the growing sense of impending doom.]

    One would not have thought it possible for the internet bubble of the late 1990s, the greatest boom in the history of the world, to be replaced within five years by a real estate bubble of even greater magnitude and worse stupidity. Under more normal circumstances, one would not have thought that the same mistake could happen twice in the lifetimes of the people involved…

    The most straightforward explanation begins with the view that all of these bubbles are not truly separate, but instead represent different facets of a single Great Boom of unprecedented size and duration. As with the earlier bubbles of the modern age, the Great Boom has been based on a similar story of globalization, told and retold in different ways—and so we have seen a rotating series of local booms and bubbles as investors price a globally unified world through the prism of different markets.

    Nevertheless, this Great Boom is also very different from all previous bubbles. This time around, globalization either will succeed and humanity will achieve a degree of freedom and prosperity that can scarcely be imagined, or globalization will fail and capitalism or even humanity itself may come to an end. The real alternative to good globalization is world war. And because of the nature of today’s technology, such a war would be apocalyptic in the twenty-first century. Because there is not much time left, the Great Boom, taken as a whole, either is not a bubble at all, or it is the final and greatest bubble in history…there is no good scenario for the world in which China fails.

    …But because we do not know how our story of globalization will end, we do not yet know which it is. Let us return to our thought experiment. Let us assume that, in the event of successful globalization, a given business would be worth $100/​​​​share, but that there is only an intermediate chance (say 1:10) of successful globalization. The other case is too terrible to consider. Theoretically, the share should be worth $10, but in every world where investors survive, it will be worth $100. Would it make sense to pay more than $10, and indeed any price up to $100? Whether in hope or desperation, the perceived lack of alternatives may push valuations to much greater extremes than in non-apocalyptic times.
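    Thiel’s pricing paradox reduces to a two-line expected-value calculation. A minimal sketch (hypothetical function names, using his stated numbers) contrasting the textbook valuation with the survivor-conditioned one, where the failure branch is “too terrible to consider” and drops out:

```python
def expected_value(p_success: float, value_if_success: float,
                   value_if_failure: float = 0.0) -> float:
    """Textbook expected value over both outcomes."""
    return p_success * value_if_success + (1 - p_success) * value_if_failure

def survivor_conditioned_value(value_if_success: float) -> float:
    """Value conditional on surviving worlds only: in every world where
    investors are alive to collect, the share pays out in full."""
    return value_if_success

# Thiel's numbers: $100/share if globalization succeeds, a 1-in-10 chance.
print(expected_value(0.1, 100.0))         # 10.0: the 'theoretical' price
print(survivor_conditioned_value(100.0))  # 100.0: the price in every world where it matters
```

    Any price between $10 and $100 is then rationalizable, which is the wedge Thiel uses to explain why valuations in apocalyptic times can be pushed to extremes.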






  187. 2014-lewis.pdf: ⁠, Tasha L. Lewis, Brittany Haas (2014-03; economics):

    The Hermès brand is synonymous with a wealthy global elite clientele and its products have maintained an enduring heritage of craftsmanship that has distinguished it among competing luxury brands in the global market. Hermès has remained a family business for generations and has successfully avoided recent acquisition attempts by luxury group LVMH. Almost half of the luxury firm’s revenue ($1.5B in 2012) is derived from the sale of its leather goods and saddlery, which includes its handbags. A large contributor to sales is global demand for one of its leather accessories, the Birkin bag, ranging in price from $10,000 to $250,000 (2014 dollars). Increased demand for the bag in the United States since 2002 resulted in an extensive customer waitlist lasting from months to a few years. Hermès retired the famed waitlist (sometimes called the ‘dream list’) in the United States in 2010, and while the waitlist has been removed, demand for the Birkin bag has not diminished and making the bag available to luxury consumers requires extensive, careful distribution management. In addition to inventory constraints related to demand for the Birkin bag in the United States, Hermès must also manage a range of other factors in the US market. These factors include competition with ‘affordable’ luxury brands like Coach, monitoring of unsolicited brand endorsers as well as counterfeit goods and resellers. This article examines some of the allocation practices used to carefully manage the Hermès brand in the US market.


  189. ⁠, Brittanny Newsom (2016-12-19):

    History · Design · Craftsmanship & Quality · How To Buy A Birkin · Demand & Exclusivity · The Secondhand Market · Clientele · Why the Birkin Is A Safe Investment · Investment Factors · Investment Pricing Factors · Comparisons with Other Investments · Fake vs. Real · How the Birkin Remains Dominant · The Media · The Defaced Birkin · Conclusion

    Birkin bags are carefully handcrafted. The creation process for each bag can take over 18 hours. That number can double if working on a Birkin accessorized with diamonds. The artisans who craft these bags are carefully screened and require years of high-quality experience even before being considered for the job. “Hermès has a reputation of hiring mostly artisans who have graduated from the École Grégoire Ferrandi; a school that specializes in working with luxurious leathers.” It also typically takes about 2 years to train an Hermès craftsman, with each one supervised by an existing craftsman.

    Preparing the leather is the first step towards crafting the bag. The leather is examined for any defects an animal skin may have, such as mosquito bites or wounds, which must be repaired before the skin’s tanning. Leathers are obtained from different tanners in France, resulting in various smells and textures. The stitching of the bag is also very precise. The bag is held together using a wooden clamp, while the artisan applies each individual stitch on the bag. The linen that is used during the stitching process is waterproof and has a beeswax coating for rot prevention. Most Birkin bags are created with same-color threads, but some rare bags have white threads even if the bag is not white. “More than 90% of the bag is hand stitched because it allows more freedom to shape the bag and makes it more resilient.” Then the hardware process begins. Unlike other bags, the hardware is attached using the unique Hermès process called “pearling” rather than by using screws. Artisans put a “small nail through a corner hole on the back of the clasp, the leather and the front clasp, take an awl with a concave tip and tap the bit of nail with a hammer gently in a circle until it is round like a tiny pearl.” This process ensures that the pearls will hold the two pieces of metal together forever. The bag is then turned right side out and ironed into shape.

    …As secondhand market sales have grown, interest from first-time buyers has also increased. This shows the Birkin bag is an important sales channel for an expanding global luxury product market. Such growth has propelled the Birkin to near-legendary status in a very demanding market. According to Bag Hunter, “Birkin bags have climbed in value by 500% over the past 35 years, an increase expected to double over the next 10 years.”

    …Simply stated, it appears that the bag’s success hinges on this prestigious perception. A Birkin, terribly difficult to get, is therefore highly coveted. In our global economy, that’s all the brand needs to pack the infinite waiting list. It is fashion’s version of Darwinism. We always want what we can’t have, so we will do whatever we can to get it. For instance, Victoria Beckham, the posh clothing designer and wife of David Beckham, reportedly owns about 100 Birkins, collectively valued at $2 million, including a pink ostrich-leather Birkin worth $150,000. Despite the fact that she has introduced her own line of handbags, she’s been spotted by the paparazzi wearing a Birkin bag. Kris Jenner also has a massive Birkin collection that she flaunts via social media and the willing participation of paparazzi. Her collection includes an Electric Blue 35cm which is supposedly worth $19,000. Actress Katie Holmes has gained attention for a bold red Birkin, while Julianne Moore has been seen wearing a hunter green 40cm with gold hardware. Julia Roberts and Eva Longoria have even been seen with the bag. Even B-listed personalities such as reality star Nicole Richie, with a black Birkin workout bag, are famously noted as frequently asking the paparazzi, “Did you get my bag?”. The Birkin has looked extra special on the arms of models Alessandra Ambrosio and Kate Moss. Singers such as Jennifer Lopez and Courtney Love ironically show off their Birkins, and even world leaders such as Princess Mary of Denmark, with her black crocodile Birkin worth $44,500, are aware of its meaning and status.

  190. 2017-sichel.pdf: ⁠, Daniel E. Sichel (2017-04; economics):

    Many products—such as lighting and computing—have undergone revolutionary changes since the beginning of the industrial revolution. This paper considers the opposite end of the spectrum of product change, focusing on nails. Nails are a simple, everyday product whose form has changed relatively little over the last three centuries, and this paper constructs a continuous, constant-quality price index for nails since 1695. These data indicate that the price of nails fell substantially relative to an overall basket of consumption goods as reflected in the CPI, with the preferred index falling by a factor of about 15 times from the mid 1700s to the mid 1900s. While these declines were nowhere near as rapid as those for lighting and computing, they were still quite sizable and large enough to enable the development of other products and processes and contribute to downstream changes in patterns of economic activity. Moreover, with the relative price of nails having been so much higher in an earlier period, nails played a much more important role in economic activity in an earlier period than they do now. [A not yet completed section of the paper will use a growth accounting framework to assess the proximate sources of the change in the price of nails.]

  191. 2010-rost.pdf: ⁠, Katja Rost, Emil Inauen, Margit Osterloh, Bruno S. Frey (2010-01-12; economics):

    Purpose: This paper aims to analyse the governance structure of monasteries to gain new insights and apply them to solve agency problems of modern corporations. In an historic analysis of crises and closures it asks, if Benedictine monasteries were and are capable of solving agency problems. The analysis shows that monasteries established basic governance instruments very early and therefore were able to survive for centuries.

    Design/​​​​methodology/​​​​approach: The paper uses a dataset of all Benedictine abbeys that ever existed in Bavaria, Baden-Württemberg, and German-speaking Switzerland to determine their lifespan and the reasons for closures. The governance mechanisms are analyzed in detail. Finally, it draws conclusions relevant to the modern corporation. The theoretical foundations are based upon principal agency theory, psychological economics, as well as embeddedness theory.

    Findings: The monasteries that are examined show an average lifetime of almost 500 years and only a quarter of them dissolved as a result of agency problems. This paper argues that this success is due to an appropriate governance structure that relies strongly on internal control mechanisms.

    Research limitations/​​​​implications: Benedictine monasteries and stock corporations differ fundamentally regarding their goals. Additional limitations of the monastic approach are the tendency to promote groupthink, the danger of dictatorship and the life long commitment.

    Practical implications: The paper adds new insights into the corporate governance debate designed to solve current agency problems and facilitate better control.

    Originality/​​​​value: By analyzing monasteries, a new approach is offered to understand the efficiency of internal behavioral incentives and their combination with external control mechanisms in corporate governance.

  192. ⁠, Matthew Skala (2004-06-10):

    [Philosophy piece attempting to explain, via an amusing analogy to classic RPG game Paranoia, to programmers how the rest of the world sees information: as tainted, in a dualist immaterial sense, by their history. Two bits are not identical even if they are identical, because they may have different histories; these are recorded and enforced by consensual society-wide hallucinations, such as intellectual property law. This may be insane, like in Paranoia, but that is how the human world works, and why many clever copyright hacks will fail.]

  193. ⁠, Clayton Atreus (2008-02-24):

    [Paper/​​​​suicide note by a philosophy graduate who went on a motorcycle tour of Mexico and ran into a goat, instantly becoming a ⁠. Atreus discusses how paraplegia robs him of the ability to do almost everything he valued in life, from running to motorcycling to sex, while burdening him down with dead weight equivalent to hundreds of pounds, which make the simplest action, like getting out of a car, take minutes or hours, radically shortening his effective days. He is an ambulatory corpse, “two arms and a head”. Atreus discusses in detail the existential horror of his condition, from complete lack of bowel control requiring him to constantly dig his own feces out of his anus to being trapped in a wheelchair larger than a washing machine to the cruelty of well-intentioned encouragement to social alienation and his constant agonized awareness of everything he has lost. If the first question of philosophy is whether to commit suicide, Atreus finds that for him, the answer is “yes”. The paper/​​​​book concludes with his description of stabbing himself and slowly bleeding to death.]

    This book is born of pain. I wrote it out of compulsion during the most hellish time of my life. Writing it hurt me and was at times extremely unpleasant. Is the book my death-rattle or the sound of me screaming inside of my cage? Does its tone tell you I am angry or merely seeking a psychological expedient against the madness I see around me? The book is my creation but is also in many ways foreign to me for I am living in a foreign land. Most generally perhaps it is just the thoughts that passed through my head over the twenty months I spent moving toward death. I am certainly not a man who is at peace with his life, but on the contrary I despise it as I have never before despised anything. Who can sort it all out? Being imprisoned in the nightmarish cage of paraplegia has done all manner of violence to the deepest parts of me. Still, I have not gone mad. I am no literary genius and don’t expect everything I say to be understood, but if you would like to know what my experiences have been like, and what I am like, I will try my best to show you.

    What do I think of this book? I have no affection for it. I find it odious and unattractive and am very saddened that I wrote it. But it is what I had to say. It took on a life of its own and when I now step back and look at what I created I regard it with distaste. If I could, I would put all of these horrible thoughts in a box, seal it forever, then go out and live life. I would run in the sun, enjoy my freedom, and revel in myself. But that’s the point. I cannot go out and live life because this is not life. So instead I speak to you from the place I now occupy, between life and death.

    …Imagine a man cut off a few inches below the armpits. Neglect for a moment questions concerning how he eliminates waste and so forth, and just assume that the site of the “amputation” is, to borrow from Gogol, “as uniform as a newly fried pancake”. This man would be vastly, immensely better off than me. If you don’t know who Johnny Eck is, he had a role in the ⁠. He was the guy who was essentially a torso with arms. He walked on his hands. How fortunate he was compared to me may not register right away, because the illusion I mentioned above would probably make you find Johnny Eck’s condition far more shocking than mine. But the truth is that mine is much more horrible than his, barring whatever social “advantages” the illusion of being whole might confer on me. The other day I saw a picture of a woman missing both legs. They were cut off mid-thigh. I thought that if only I was like her perhaps my life would be bearable. She was, in my opinion, better off than the pancake man, who is beyond any doubt far better off than me. One man said to me, “At least you didn’t lose your legs.” No, I did lose my legs, and my penis, and my pelvis. Let’s get something very clear about the difference between paraplegics and double-leg amputees. If tomorrow every paraplegic woke up as a double-leg amputee, the Earth itself would quiver with ecstasy from the collective bursting forth of joyous emotion. Tears of the most exquisitely overwhelming relief and happiness would stream down the cheeks of former paraplegics the world over. My wording here is deliberate. It’s no exaggeration. Losing both legs is bad, but paraplegia is ghoulishly, nightmarishly worse.

    Part of what I wanted in desiring to die in the company of those I loved was to reassure them and perhaps give them courage to face death well. That was something I really wanted to give to them and I’m sorry I can only do it with these words. I was driven almost mad by all of the things many other people said about paraplegia, suicide, and what was still possible in my condition. I hope everyone understands how all of that affected the tone of what I wrote. I was so frustrated with all of it, I thought it was so insane. But I only wanted to break free of it all and say what I felt. I felt like it stifled me so horribly.

    I cut some more and the blood is flowing well again. I’m surprised how long it is taking me to even feel anything. I thought I was dizzy but I’m not sure I am now. It’s 8:51 pm. I thought I would get cold but I’m not cold either, I’m actually hot but that’s probably the two sweaters. Starting to feel a little badly. Sweating, a little light-headed.

    I’m going to go now, done writing. Goodbye everyone.


  195. 2004-wallace-considerthelobster.html: ⁠, David Foster Wallace (2004-08-01; philosophy):

    [Originally published in the August 2004 issue of Gourmet magazine, this review of the 2003 Maine Lobster Festival generated some controversy among the readers of the culinary magazine. The essay is concerned with the ethics of boiling a creature alive in order to enhance the consumer’s pleasure, including a discussion of lobster sensory neurons.]

    A detail so obvious that most recipes don’t even bother to mention it is that each lobster is supposed to be alive when you put it in the kettle…Another alternative is to put the lobster in cold salt water and then very slowly bring it up to a full boil. Cooks who advocate this method are going mostly on the analogy to a frog, which can supposedly be kept from jumping out of a boiling pot by heating the water incrementally. In order to save a lot of research-summarizing, I’ll simply assure you that the analogy between frogs and lobsters turns out not to hold.

    …So then here is a question that’s all but unavoidable at the World’s Largest Lobster Cooker, and may arise in kitchens across the U.S.: Is it all right to boil a sentient creature alive just for our gustatory pleasure? A related set of concerns: Is the previous question irksomely PC or sentimental? What does ‘all right’ even mean in this context? Is it all just a matter of individual choice?

    …As far as I can tell, my own main way of dealing with this conflict has been to avoid thinking about the whole unpleasant thing. I should add that it appears to me unlikely that many readers of Gourmet wish to think hard about it, either, or to be queried about the morality of their eating habits in the pages of a culinary monthly. Since, however, the assigned subject of this article is what it was like to attend the 2003 MLF, and thus to spend several days in the midst of a great mass of Americans all eating lobster, and thus to be more or less impelled to think hard about lobster and the experience of buying and eating lobster, it turns out that there is no honest way to avoid certain moral questions.




  199. ⁠, Brad Leithauser (2006-04-01):

    II. When the Smoke Clears

    The mind, that rambling bear, ransacks the sky
    In search of honey,
    Fish, berries, carrion. It minds no laws…
    As if the heavens were some canvas tent,
    It slashes through the firmament
    To prise up the sealed stores with its big paws.

    The mind, that sovereign camel, sees the sky
    For what it is:
    Each star a grain of sand along the vast
    Passage to that oasis where, below
    The pillared palms, the portico
    Of fronds, the soul may drink its fill at last.

    The mind, that gorgeous spider, webs the sky
    With lines so sheer
    They all but vanish, and yet star to star
    (Thread by considered thread) slowly entwines
    The universe in its designs—
    Un-earthing patterns where no patterns are.

    The mind, that termite, seems to shun the sky.
    It burrows down,
    Tunneling in upon that moment when,
    In Time—its element—will come a day
    The longest-shadowed tower sway,
    Unbroken sunlight fall to earth again.

    …DNA was unspooled in the year
    I was born, and the test-tube births
    Of cloned mammals emerged in a mere
    Half-century; it seems the earth’s
    Future’s now in the hands of a few
    Techies on an all-nighter who
    Sift the gene-alphabet like Scrabble tiles

    And our computer geeks are revealed, at last,
    As those quick-handed, sidelined little mammals
    In the dinosaurs’ long shadows—those least-
    Likely-to-succeed successors whose kingdom come
    Was the globe itself (an image best written down,
    Perhaps, beneath a streetlamp, late, in some
    Star-riddled Midwestern town).

    He wrote boys’ books and intuitively
    Recognized that the real
    Realist isn’t the one who details
    Lowdown heartland factories and farms
    As if they would last, but the one who affirms,
    From the other end of the galaxy,
    Ours is the age of perilous miracles.

  200. Killing-Rabbits

  201. 1974-lem-cyberiad-trurlselectronicbard.pdf: “The First Sally (A), or, Trurl's Electronic Bard”⁠, Stanisflaw Lem, Michael Kandel

  202. ⁠, Jordan Todorov (2019-07-03):

    So one can imagine the furor in 1963 when a German writer claimed to have uncovered the real story behind the fairy tale.

    According to Die Wahrheit über Hänsel und Gretel (The Truth About Hansel and Gretel), the two siblings were, in fact, adult brother and sister bakers, living in Germany during the mid-17th century. They murdered the witch, an ingenious confectioner in her own right, to steal her secret recipe for lebkuchen, a gingerbread-like traditional treat. The book published a facsimile of the recipe in question, as well as sensational photos of archaeological evidence.

    …The media picked up the story and turned it into national news. “Book of the week? No, it’s the book of the year, and maybe the century!” proclaimed the West German tabloid Abendzeitung in November 1963. The state-owned East German Berliner Zeitung came out with the headline “Hansel and Gretel—a duo of murderers?” and asked whether this could be “a criminal case from the early capitalist era.” The news spread like wildfire not only in Germany, but abroad too. Foreign publishers, smelling a profit, began negotiating for the translation rights. School groups, some from neighboring Denmark, traveled to the Spessart woods in the states of Bavaria and Hesse to see the newly discovered foundations of the witch’s house.

    As intriguing as The Truth About Hansel and Gretel might sound, however, none of it proved to be true. In fact, the book turned out to be a literary forgery concocted by Hans Traxler, a German children’s book writer and cartoonist, known for his sardonic sense of humor. “1963 marked the 100th anniversary of Jacob Grimm’s death”, says the now 90-year-old Traxler, who lives in Frankfurt, Germany. “So it was natural to dig into [the] Brothers Grimm treasure chest of fairy tales, and pick their most famous one, ‘Hansel and Gretel.’”

  203. ⁠, Peiran Tan (2019-10-17):

    [A look into the signature typefaces of Evangelion: Matisse EB, mechanical compression for distorted resizing, and ⁠. Covered typefaces: Matisse/​​​​Helvetica/​​​​Neue Helvetica/​​​​Times/​​​​Helvetica Condensed/​​​​Chicago/​​​​Cataneo/​​​​Futura/​​​​Eurostile/​​​​ITC Avant Garde Gothic/​​​​Gill Sans.]

    Evangelion was among the first anime to create a consistent typographic identity across its visual universe, from title cards to NERV’s user interfaces. Subcontractors usually painted anything type-related in an anime by hand, so it was a novel idea at the time for a director to use desktop typesetting to exert typographic control. Although sci-fi anime tended to use either sans serifs or hand lettering that mimicked sans serifs in 1995, Anno decided to buck that trend, choosing a display serif for stronger visual impact. After flipping through Fontworks’ specimen catalog, he personally selected the extra-bold (EB) weight of Matisse (マティス), a Mincho-style serif family…A combination of haste and inexperience gave Matisse a plain look and feel, which turned out to make sense for Evangelion. The conservative skeletal construction restrained the characters’ personality so it wouldn’t compete with the animation; the extreme stroke contrast delivered the desired visual punch. Despite the fact that Matisse was drawn on the computer, many of its stroke corners were rounded, giving it a hand-drawn, fin-de-siècle quality.

    …In addition to a thorough graphic identity, Evangelion also pioneered a deep integration of typography as a part of animated storytelling—a technique soon to be imitated by later anime. Prime examples are the show’s title cards and flashing type-only frames mixed in with the animation. The title cards contain nothing but crude, black-and-white Matisse EB, and are often mechanically compressed to fit into interlocking compositions. This brutal treatment started as a hidden homage to the title cards in old Toho movies from the sixties and seventies, but soon became visually synonymous with Evangelion after the show first aired. Innovating on the media of animated storytelling, Evangelion also integrates type-only flashes. Back then, these black-and-white, split-second frames were Anno’s attempt at imprinting subliminal messages onto the viewer, but have since become Easter eggs for die-hard Evangelion fans as well as motion signatures for the entire franchise.

    …Established in title cards, this combination of Matisse EB and all-caps Helvetica soon bled into various aspects of Evangelion, most notably the HUD user interfaces in NERV. Although it would be possible to attribute the mechanical compression to technical limitations or typographic ignorance, its ubiquitous occurrence did evoke haste and, at times, despair—an emotional motif perfectly suited to a post-apocalyptic story with existentialist themes.

  204. ⁠, Dave Addey (Typeset In The Future) (2014-12-01):

    [Discussion with screenshots of the classic Ridley Scott SF movie Alien, which makes extensive use of Helvetica, Futura, Eurostile Bold Extended, and other “modern” fonts to give a futuristic industrial feel to all of the (multilingual) spaceship/​​​​computer displays, controls, and credits; Alien also makes intriguing use of many logos, icons, and symbols for quick communication, tracing back to the Semiotic Standard.]

  205. ⁠, Dave Addey (2016-06-19; technology⁠, design⁠, fiction):

    [Discussion with screenshots of the classic Ridley Scott SF movie Blade Runner, which employs typography to disconcert the viewer, with unexpected choices, random capitalization and small caps, corporate branding/​​​​advertising, and the mashed-up creole multilingual landscape of noir cyberpunk LA (plus discussion of the buildings and sets, and details such as call costs being correctly inflation-adjusted).]

  206. ⁠, Tanner Greer (2019-04-19):

    The second point probably deserves more space than I was able to give in the LA Review of Books. Consider, for a moment, the typical schedule of a Beijing teenager:

    She will (depending on the length of her morning commute) wake up somewhere between 5:30 and 7:00 AM. She must be in her seat by 7:45, 15 minutes before classes start. With bathroom breaks and gym class excepted, she will not leave that room until the 12:00 lunch hour and will return to the same spot after lunch is ended for another four hours of instruction. Depending on whether she has after-school tests that day, she will be released from her classroom sometime between 4:10 and 4:40. She then has one hour to get a start on her homework, eat, and travel to the evening cram school her parents have enrolled her in. Math, English, Classical Chinese—there are cram schools for every topic on the gaokao. On most days of the week she will be there studying from 6:00 to 9:00 PM (if the family has the money, she will spend another six hours at these after-school schools on Saturday and Sunday mornings). Our teenager will probably arrive home somewhere around 10:00 PM, giving her just enough time to spend two or three hours on that day’s homework before she goes to bed. Rinse and repeat, day in and day out, for six years. The strain does not abate until she has defeated—or has been defeated by—the gaokao.

    This is well known, but I think the wrong aspects of this experience are emphasized. Most outsiders look at this and think: see how much pressure these Chinese kids are under. I look and think: how little privacy and independence these Chinese kids are given!

    To put this another way: Teenage demands for personal space are hardly unique to China. What makes China distinctive is the difficulty its teenagers have securing this goal. Chinese family life is hemmed in narrow bounds. The urban apartments that even well-off Chinese call their homes are tiny and crowded. Few have more than two bedrooms. Teenagers are often forced to share their bedroom with a grandparent. So small was the apartment of one 16-year-old I interviewed that she slept, without apparent complaint, in the same bed as her parents for her entire first year of high school. Where could a teenager like her go, what door could she slam, when she was angry with her family? Within the walls of her home there was no escape from the parental gaze.

    A Chinese teen has few better options outside her home. No middle-class Chinese teenager has a job. None have cars. The few that have boyfriends or girlfriends go about it as discreetly as possible. Apart from the odd music lesson here or there, what Americans call “extra-curricular activities” are unknown. A recent graduate of a prestigious international high school in Beijing once explained to me the confusion she felt when she was told she would need to excel at an after-school activity to be competitive in American university admissions:

    “In tenth grade our home room teacher told us that American universities cared a lot about the things we do outside of school, so from now on we would need to find time to ‘cultivate a hobby.’ I remember right after he left the girl sitting at my right turned to me and whispered, ‘I don’t know how to cultivate a hobby. Do you?’”


  208. Cultural-Revolution

  209. ⁠, Edward Tufte (1997):

    Visual Explanations: Images and Quantities, Evidence and Narrative [Tufte #3] is about pictures of verbs, the representation of mechanism and motion, process and dynamics, causes and effects, explanation and narrative. Practical applications and examples include statistical graphics, charts for making important decisions in engineering and medicine, technical manuals, diagrams, design of computer interfaces and websites and on-line manuals, animations and scientific visualizations, techniques for talks, and design strategies for enhancing the rate of information transfer in print, presentations, and computer screens. The use of visual evidence in deciding to launch the space shuttle Challenger is discussed in careful detail. Video snapshots show redesigns of a supercomputer animation of a thunderstorm. The book is designed and printed to the highest standards, with luscious color throughout and four built-in flaps for showing motion and before/after effects.

    158 pages; ISBN 1930824157

    Cover of Visual Explanations
  210. 12#tufte

  211. Movies#free-solo

  212. Movies#weiner

  213. Movies#carmen

  214. Movies#akhnaten

  215. Movies#stalker

  216. Movies#freaks

  217. Movies#manon

  218. Movies#die-walkure

  219. Movies#madama-butterfly

  220. Movies#invasion-of-the-body-snatchers

  221. Anime#rurouni-kenshin

  222. Anime#the-thief-and-the-cobbler

  223. Anime#redline

  224. Anime#made-in-abyss

  225. Anime#concurrency

  226. Anime#battle-angel-alita

  227. MLP

  228. Anime#mlp-fim