newsletter/2020/01 (Link Bibliography)

“newsletter/​2020/​01” links:

  1. 01

  2. https://gwern.substack.com

  3. 12

  4. 13

  5. newsletter

  6. Changelog

  7. https://www.patreon.com/gwern

  8. Danbooru2020#danbooru2019

  9. GPT-2-preference-learning

  10. “This Waifu Does Not Exist”, Gwern Branwen (2019-02-19):

    (TWDNE) is a static website which uses JS to display random anime faces generated by StyleGAN neural networks, along with GPT-3-generated anime plot summaries. It has since had several followups.

    A screenshot of “This Waifu Does Not Exist” (TWDNE) showing a random StyleGAN-generated anime face and a random GPT-3 text sample conditioned on anime keywords/phrases.
  11. TWDNE#twdnev3

  12. Faces#stylegan-2

  13. ⁠, disumbrationist (2020-01-12):

    When I originally trained the models in May 2019, I’d used the 345M version of GPT-2, which at the time was the largest one that OpenAI had publicly released. Last November, however, OpenAI released the full 1.5B-parameter model.

    The 1.5B model requires much more memory to fine-tune than the 345M, so I was initially having a lot of difficulty getting it to work on Colab. Thankfully, I was contacted by /u/gwern (here’s his Patreon) and Shawn Presser (/u/shawwwn), who very generously offered to do the fine-tuning themselves if I provided them with the dataset. This training took about 2 weeks, and apparently required around $70K worth of credits, so in hindsight this upgrade definitely wouldn’t have been possible for me to do myself, without their assistance.

    Based on my tests of the new model so far, I’m pretty happy with the quality, and it is noticeably more coherent than the 345M version.

    One thing that I should point out about the upgrade is that the original 345M models had been separately fine-tuned for each subreddit individually (ie. there were 108 separate models), whereas the upgraded one is just a single 1.5B model that has been fine-tuned using a combined dataset containing the comments/submissions from all the subreddits that I scraped. The main reason for this decision is simply that it would not have been feasible to train ~100 separate 1.5B models. Also, there may have been benefits from transfer learning across subreddits, which wouldn’t occur with separate models.

    …Here is the full list of new bots to be added: /r/capitalismvsocialism · /r/chess · /r/conlangs · /r/dota2 · /r/etymology · /r/fiftyfifty · /r/hobbydrama · /r/markmywords · /r/moviedetails · /r/neoliberal · /r/obscuremedia · /r/recipes · /r/riddles · /r/stonerphilosophy · /r/subsimulatorgpt2 · /r/subsimulatorgpt2meta · /r/tellmeafact · /r/twosentencehorror · /r/ukpolitics · /r/wordavalanches · /r/wouldyourather · /r/zen
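
    The combined-dataset approach described above amounts to pooling every subreddit’s scraped posts into one fine-tuning corpus while keeping a marker of which subreddit each example came from. A minimal sketch of that preprocessing step follows; the JSONL schema and the `[r/...]` tag format are assumptions for illustration, not disumbrationist’s actual encoding.

    ```python
    # Illustrative only: build a single combined GPT-2 fine-tuning corpus from posts scraped
    # across many subreddits, tagging each example with its source subreddit so that one
    # 1.5B model can learn (and later be prompted for) all of them.
    import json

    def format_example(subreddit: str, title: str, body: str) -> str:
        # The "[r/...]" tag and <|endoftext|> delimiter are assumed conventions, not the bot's real format.
        return f"[r/{subreddit}] {title}\n{body}\n<|endoftext|>\n"

    def build_corpus(scraped_jsonl: str, out_path: str) -> None:
        with open(scraped_jsonl, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
            for line in src:
                post = json.loads(line)  # hypothetical schema: {"subreddit": ..., "title": ..., "selftext": ...}
                dst.write(format_example(post["subreddit"], post["title"], post.get("selftext", "")))

    # build_corpus("scraped_posts.jsonl", "combined_corpus.txt")
    ```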

  14. Search#case-studies

  15. 2019-pgc.pdf: Cross-Disorder Group of the Psychiatric Genomics Consortium (PGC) (2019-12-12; genetics/correlation; doi: 10.1016/j.cell.2019.11.020):

    • Three groups of highly genetically-related disorders among 8 psychiatric disorders
    • Identified 109 pleiotropic loci affecting more than one disorder
    • Pleiotropic genes show heightened expression beginning in 2nd prenatal trimester
    • Pleiotropic genes play prominent roles in neurodevelopmental processes

    Genetic influences on psychiatric disorders transcend diagnostic boundaries, suggesting substantial pleiotropy of contributing loci. However, the nature and mechanisms of these pleiotropic effects remain unclear. We performed analyses of 232,964 cases and 494,162 controls from genome-wide studies of anorexia nervosa, attention-deficit/hyperactivity disorder, autism spectrum disorder, bipolar disorder, major depression, obsessive-compulsive disorder, schizophrenia, and Tourette syndrome. Genetic correlation analyses revealed a meaningful structure within the eight disorders, identifying three groups of inter-related disorders. Meta-analysis across these eight disorders detected 109 loci associated with at least two psychiatric disorders, including 23 loci with pleiotropic effects on four or more disorders and 11 loci with antagonistic effects on multiple disorders. The pleiotropic loci are located within genes that show heightened expression in the brain throughout the lifespan, beginning prenatally in the second trimester, and play prominent roles in neurodevelopmental processes. These findings have important implications for psychiatric nosology, drug development, and risk prediction.

  16. Everything

  17. ⁠, Perline A. Demange, Margherita Malanchini, Travis T. Mallard, Pietro Biroli, Simon R. Cox, Andrew D. Grotzinger, Elliot M. Tucker-Drob, Abdel Abdellaoui, Louise Arseneault, Avshalom Caspi, David Corcoran, Benjamin Domingue, Colter Mitchell, Elsje van Bergen, Dorret I. Boomsma, Kathleen M. Harris, Hill F. Ip, Terrie E. Moffitt, Richie Poulton, Joseph Prinz, Karen Sugden, Jasmin Wertz, Benjamin Williams, Eveline L. de Zeeuw, Daniel W. Belsky, K. Paige Harden, Michel G. Nivard (2020-01-15):

    Educational attainment (EA) is influenced by cognitive abilities and by other characteristics and traits. However, little is known about the genetic architecture of these “non-cognitive” contributions to EA. Here, we use Genomic Structural Equation Modelling and results of genome-wide association studies (GWASs) of EA (n = 1,131,881) and cognitive test performance (n = 257,841) to estimate associations with variation in EA that is independent of cognitive ability. We identified 157 genome-wide loci and a polygenic architecture accounting for 57% of genetic variance in EA. Phenotypic and biological annotation revealed that (1) both cognitive and non-cognitive contributions to EA were genetically correlated with socioeconomic success and longevity; (2) non-cognitive contributions to EA were related to personality, decision making, risk-behavior, and increased risk for psychiatric disorders; (3) non-cognitive and cognitive contributions to EA were enriched in the same tissues and cell types, but (4) showed different associations with gray-matter neuroimaging phenotypes.

  18. 2019-trumble.pdf: “The Exposome in Human Evolution: From Dust to Diesel”, Benjamin C. Trumble, Caleb E. Finch (2019-12; genetics/selection):

    Global exposures to air pollution and cigarette smoke are novel in human evolutionary history and are associated with at least 12 million premature deaths per year. We investigate the history of the human exposome for relationships between novel environmental toxins and genetic changes during human evolution in 6 phases:

    1. Phase I: With increased walking on savannas, early human ancestors inhaled crustal dust, fecal aerosols, and spores; carrion scavenging introduced new infectious pathogens.
    2. Phase II: Domestic fire exposed early Homo to novel toxins from smoke and cooking.
    3. Phases III and IV: Neolithic to preindustrial Homo sapiens incurred infectious pathogens from domestic animals and dense communities with limited sanitation.
    4. Phase V: Industrialization introduced novel toxins from fossil fuels, industrial chemicals, and tobacco at the same time infectious pathogens were diminishing. Thereby, pathogen-driven causes of mortality were replaced by chronic diseases driven by sterile inflammogens, exogenous and endogenous.
    5. Phase VI: Considers future health during global warming with increased air pollution and infections.

    We hypothesize that adaptation to some ancient toxins persists in genetic variations associated with inflammation and longevity.

    [Keywords: exposome, human evolution, genes, toxins, infections]

  19. ⁠, Carl Zimmer (2020-01-13):

    Scientists are still figuring out how air pollution causes these ailments. They are also puzzling over the apparent resilience that some people have to this modern onslaught. Some researchers now argue that the answers to these questions lie in our distant evolutionary past, millions of years before the first cigarette was lit and the first car hit the road.

    Our ancestors were bedeviled by airborne toxins even as bipedal apes walking the African savanna, argued Benjamin Trumble, a biologist at Arizona State University, and Caleb Finch of the University of Southern California, in the December issue of the Quarterly Review of Biology. Our forebears evolved defenses against these pollutants, the scientists propose. Today, those adaptations may provide protection, albeit limited, against tobacco smoke and other airborne threats. But our evolutionary legacy may also be a burden, Dr. Trumble and Dr. Finch speculated. Some genetic adaptations may have increased our vulnerability to diseases linked to air pollution. It is “a really creative, interesting contribution to evolutionary medicine”, said Molly Fox, an anthropologist at the University of California, Los Angeles, who was not involved in the new study.

    The story begins about seven million years ago. Africa at the time was gradually growing more arid. The Sahara emerged in northern Africa, while grasslands opened up in eastern and southern Africa. The ancestors of chimpanzees and gorillas remained in the retreating forests, but our ancient relatives adapted to the new environments. They evolved into a tall, slender frame well suited to walking and running long distances.

    Dr. Finch and Dr. Trumble believe that early humans faced another challenge that has gone largely overlooked: the air. Periodically, the savanna would have experienced heavy dust storms from the Sahara, and our distant ancestors may have risked harm to their lungs from breathing in the silica-rich particles. “When the dust is up, we’re going to see more pulmonary problems”, Dr. Finch said. Even today, Greek researchers have found that when Sahara winds reach their country, patients surge into hospitals with respiratory complaints. The dense foliage of tropical forests gave chimpanzees and gorillas a refuge from dust. But the earliest humans, wandering the open grasslands, had nowhere to hide. Dust was not the only hazard. The lungs of early humans also may have been irritated by the high levels of pollen and particles of fecal matter produced by the savanna’s vast herds of grazing animals.

    Dr. Finch and Dr. Trumble maintain that scientists should consider whether these new challenges altered our biology through natural selection. Is it possible, for instance, that people who are resilient to cigarette smoke have inherited genetic variants that protected their distant ancestors from cave fires?

    …“Most traditional people live in a highly smoky environment”, Dr. Finch said. “I think it has been a fact of human living for us even before our species.” Smoke created a new evolutionary pressure, he and Dr. Trumble believe. Humans evolved powerful liver enzymes, for example, to break down toxins passing into the bloodstream from the lungs.

    Gary Perdew, a molecular toxicologist at Penn State University, and his colleagues have found evidence of smoke-driven evolution in another gene, AHR. This gene makes a protein found on cells in the gut, lungs and skin. When toxins get snagged on the protein, cells release enzymes that break down the poisons. Other mammals use AHR to detoxify their food. But the protein is also effective against some of the compounds in wood smoke. Compared to other species, the human version produces a weaker response to toxins, perhaps because AHR protein is not the perfect protector—the fragments it leaves behind can cause tissue damage.

    Before fire, our ancestors did not need to use AHR very often; in theory, their bodies could tolerate the limited damage the protein caused. But when we began breathing smoke regularly and needing the AHR protein constantly, the gene might have become dangerous to our health. Dr. Perdew believes that humans evolved a weaker AHR response as a way to find “a sweet spot”, a compromise that minimized the damage of airborne pollutants without causing too many side effects. These adaptations were never perfect, as evidenced by the fact that millions of people still die today from indoor air pollution. But evolution doesn’t seek perfect health.

  20. “Divergent Ah Receptor Ligand Selectivity during Hominin Evolution”, Troy D. Hubbard, Iain A. Murray, William H. Bisson, Alexis P. Sullivan, Aswathy Sebastian, George H. Perry, Nina G. Jablonski, Gary H. Perdew (2016):

    We have identified a fixed nonsynonymous sequence difference between humans (Val381; derived variant) and Neandertals (Ala381; ancestral variant) in the ligand-binding domain of the aryl hydrocarbon receptor (AHR) gene. In a sequence analysis of four Neandertal and Denisovan individuals compared with nine modern humans, there are only 90 total nucleotide sites genome-wide for which archaic hominins are fixed for the ancestral nonsynonymous variant and the modern humans are fixed for the derived variant. Of those sites, only 27, including Val381 in the AHR, also have no reported variability in the human dbSNP database, further suggesting that this highly conserved functional variant is a rare event. Functional analysis of the amino acid variant Ala381 within the AHR carried by Neandertals and nonhuman primates indicate enhanced polycyclic aromatic hydrocarbon (PAH) binding, DNA binding capacity, and AHR mediated transcriptional activity compared with the human AHR. Also relative to human AHR, the Neandertal AHR exhibited 150–1000 times greater sensitivity to induction of Cyp1a1 and Cyp1b1 expression by PAHs (e.g., benzo(a)pyrene). The resulting CYP1A1/CYP1B1 enzymes are responsible for PAH first pass metabolism, which can result in the generation of toxic intermediates and perhaps AHR-associated toxicities. In contrast, the human AHR retains the ancestral sensitivity observed in primates to nontoxic endogenous AHR ligands (e.g., indole, indoxyl sulfate). Our findings reveal that a functionally significant change in the AHR occurred uniquely in humans, relative to other primates, that would attenuate the response to many environmental pollutants, including chemicals present in smoke from fire use during cooking.

  21. “Utility and First Clinical Application of Screening Embryos for Polygenic Disease Risk Reduction”, Nathan R. Treff, Jennifer Eccles, Lou Lello, Elan Bechor, Jeffrey Hsu, Kathryn Plunkett, Raymond Zimmerman, Bhavini Rana, Artem Samoilenko, Steven Hsu, Laurent C. A. M. Tellier (Genomic Prediction) (2019-12-04):

    For over 2 decades preimplantation genetic testing (PGT) has been in clinical use to reduce the risk of miscarriage and genetic disease in patients with advanced maternal age and risk of transmitting disease. Recently developed methods of genome-wide genotyping and machine learning algorithms now offer the ability to genotype embryos for polygenic disease risk with accuracy equivalent to adults. In addition, contemporary studies on adults indicate the ability to predict polygenic disorders with risk equivalent to monogenic disorders. Existing datasets provide opportunities to model the clinical utility of polygenic disease risk reduction among sibling adults. Here, we provide a mathematical model for the use of embryo screening to reduce the risk of type 1 diabetes. Results indicate a 45–72% reduced risk with blinded genetic selection of one sibling. The first clinical case of polygenic risk scoring in human preimplantation embryos from patients with a family history of complex disease is reported. In addition to these data, several common and accepted practices place PGT for polygenic disease risk in the applicable context of contemporary reproductive medicine. In addition, prediction of risk for PCOS, endometriosis, and aneuploidy are of particular interest and relevance to patients with infertility and represent an important focus of future research on polygenic risk scoring in embryos.
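
    As a rough intuition pump for the 45–72% figure, the selection effect can be reproduced with a toy liability-threshold simulation: draw several embryos per family, implant the one with the lowest polygenic score, and compare disease risk against implanting at random. The prevalence, PRS variance explained, and embryo count below are illustrative assumptions rather than the paper’s parameters, and the model is far cruder than the authors’; with these particular numbers the output happens to land in the same general range.

    ```python
    # Toy Monte Carlo sketch of embryo selection under a liability-threshold model (illustrative only).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    prevalence = 0.005   # assumed lifetime risk (~0.5%), not taken from the paper
    r2 = 0.10            # assumed fraction of liability variance captured by the PRS
    n_embryos, n_families = 5, 500_000
    thresh = norm.ppf(1 - prevalence)        # liability threshold for "affected"

    # Split PRS variance into a shared family component and an embryo-specific component
    # (siblings share roughly half their polygenic variation), plus unmeasured residual liability.
    fam = rng.normal(0, np.sqrt(r2 / 2), (n_families, 1))
    prs = fam + rng.normal(0, np.sqrt(r2 / 2), (n_families, n_embryos))
    liability = prs + rng.normal(0, np.sqrt(1 - r2), (n_families, n_embryos))

    risk_random = (liability[:, 0] > thresh).mean()                                         # implant at random
    risk_selected = (liability[np.arange(n_families), prs.argmin(axis=1)] > thresh).mean()  # lowest-PRS embryo
    print(f"relative risk reduction ≈ {1 - risk_selected / risk_random:.0%}")
    ```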

  22. Chelsea Katz (The Eagle) (2020-01-02):

    The first of her kind, CC the cloned cat is breaking more boundaries as she turns 18 years old. There are no big plans locally to mark the day, but CC—Carbon Copy or Copy Cat—will be the focus of a Dutch cartoon set for release today to celebrate her birthday, researcher and owner Duane Kraemer said.

    …CC is not only enjoying life as the Kraemers’ pet, but she has her own condo called the “kitty house” behind the Kraemers’ house where she lives with her three offspring, sired by a cat named Smokey. Those offspring, just by existing, helped CC make headlines in the scientific community. There had not been much research done in the reproduction success of clones—and none had been done with a cat. Tim, Zip and Tess were born Sept. 1, 2006, along with a fourth kitten that was stillborn. Not knowing what CC’s reaction to her kittens would be, Kraemer said, they found CC was “the perfect mother” and had the innate maternal instincts they were hoping she would exhibit. Besides proving clones can successfully reproduce, CC also proved not all clones die young. “Dolly the sheep, that was the first of the mammals to be cloned by nuclear transfer, had died at, I think, at 6 years of age”, Kraemer said. “So the fact that CC didn’t die young was news.” About 20% of cloned animals have developmental abnormalities of some kind, he said, with some being serious enough to result in the animal’s death at a young age or at birth. However, the other 80% born without those conditions “would probably live to a normal variation of ages.”

  23. “2019 AI Alignment Literature Review and Charity Comparison”, Larks (2019-12-18):

    As in 2016, 2017 and 2018, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2019 and compare it to their budget. This should give a sense of the organisations’ average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency.

    …Here are the un-scientifically-chosen hashtags: Agent Foundations · AI Theory · Amplification · Careers · CIRL · Decision Theory · Ethical Theory · Forecasting · Introduction · Misc · ML safety · Other Xrisk · Overview · Philosophy · Politics · RL · Security · Short-term · Strategy.

    • Research organisations reviewed: FHI (The Future of Humanity Institute) · CHAI (The Center for Human-Compatible AI) · MIRI (The Machine Intelligence Research Institute) · GCRI (The Global Catastrophic Risk Institute) · CSER (The Center for the Study of Existential Risk) · Ought · OpenAI · Google DeepMind · AI Safety camp · FLI (The Future of Life Institute) · AI Impacts · GPI (The Global Priorities Institute) · FRI (The Foundational Research Institute) · Median Group · CSET (The Center for Security and Emerging Technology) · Leverhulme Center for the Future of Intelligence · BERI (The Berkeley Existential Risk Initiative) · AI Pulse
    • Capital Allocators reviewed: LTFF (Long-term future fund) · OpenPhil (The Open Philanthropy Project)

    …The size of the field continues to grow, both in terms of funding and researchers. Both make it increasingly hard for individual donors. I’ve attempted to subjectively weigh the productivity of the different organisations against the resources they used to generate that output, and donate accordingly.

  24. “Scaling Laws for Neural Language Models”, Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, Dario Amodei (2020-01-23):

    We study empirical scaling laws for language model performance on the cross-entropy loss.

    The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget.

    Larger models are substantially more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping substantially before convergence.

    Figure 1: Language modeling performance improves smoothly as we increase the model size, dataset size, and amount of compute used for training. For optimal performance all three factors must be scaled up in tandem. Empirical performance has a power-law relationship with each individual factor when not bottlenecked by the other two.
    Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations for L(Cmin) and L(D) due to the slow growth of data needed for compute-efficient training. The intersection marks the point before which we expect our predictions to break down. The location of this point is highly sensitive to the precise exponents from our power-law fits.
    3.2.1: Comparing to LSTMs and Universal Transformers: In Figure 7 we compare LSTM and Transformer performance as a function of non-embedding parameter count n. The LSTMs were trained with the same dataset and context length. We see from these figures that the LSTMs perform as well as Transformers for tokens appearing early in the context, but cannot match the Transformer performance for later tokens. We present power-law relationships between performance and context position in Appendix D.5, where increasingly large powers for larger models suggest improved ability to quickly recognize patterns.
    Appendix A: Summary of Power Laws
    Table 1: Summary of scaling laws—In this table we summarize the model size and compute scaling fits to equation (1.1) along with N_opt(C), with the loss in nats/token, and compute measured in petaflop-days. In most cases the irreducible losses match quite well between model size and compute scaling laws. The math compute scaling law may be affected by the use of weight decay, which typically hurts performance early in training and improves performance late in training. The compute scaling results and data for language are from [BMR+20], while N_opt(C) comes from [KMH+20]. Unfortunately, even with data from the largest language models we cannot yet obtain a meaningful estimate for the entropy of natural language. [This is an updated scaling power law summary from Henighan et al 2020.]
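
    The fitted laws themselves are compact enough to write down directly. The sketch below evaluates the separate model-size and data-size fits and the combined L(N, D) form; the exponents and scale constants are approximate values as reported in the paper, quoted from memory, so treat them as illustrative rather than authoritative.

    ```python
    # Sketch of the Kaplan et al 2020 power-law fits (constants approximate; loss in nats/token).
    N_C, ALPHA_N = 8.8e13, 0.076   # non-embedding parameter scale & exponent (approximate)
    D_C, ALPHA_D = 5.4e13, 0.095   # dataset-size scale (tokens) & exponent (approximate)

    def loss_vs_params(n_params: float) -> float:
        """L(N): loss when only model size is limiting (data and compute effectively unlimited)."""
        return (N_C / n_params) ** ALPHA_N

    def loss_vs_data(n_tokens: float) -> float:
        """L(D): loss for a large model trained to convergence on a limited dataset."""
        return (D_C / n_tokens) ** ALPHA_D

    def loss_joint(n_params: float, n_tokens: float) -> float:
        """Combined L(N, D) fit interpolating between the two regimes (equation 1.5-style form)."""
        return ((N_C / n_params) ** (ALPHA_N / ALPHA_D) + D_C / n_tokens) ** ALPHA_D

    print(loss_joint(1.5e9, 40e9))   # example: 1.5B non-embedding parameters, 40B training tokens
    ```
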
  25. “Big Transfer (BiT): General Visual Representation Learning”, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby (2019-12-24):

    Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes—from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
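
    Stripped of BiT’s particular backbone and its “BiT-HyperRule” fine-tuning schedule (neither reproduced here), the recipe being evaluated is ordinary transfer learning: take a large pre-trained network, replace the classification head, and fine-tune on the small target dataset. A generic PyTorch sketch of that pattern, using a stock torchvision ResNet as a stand-in rather than an actual BiT checkpoint:

    ```python
    # Generic transfer-learning sketch (not the exact BiT recipe): swap the classifier head of a
    # pre-trained backbone and fine-tune end-to-end on the small target dataset.
    import torch
    import torch.nn as nn
    import torchvision

    backbone = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 10)   # new head, e.g. CIFAR-10's 10 classes

    optimizer = torch.optim.SGD(backbone.parameters(), lr=3e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    def fine_tune(loader, epochs: int = 5, device: str = "cuda" if torch.cuda.is_available() else "cpu"):
        """loader yields (images, labels) batches already resized/normalized for the backbone."""
        model = backbone.to(device).train()
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(images.to(device)), labels.to(device))
                loss.backward()
                optimizer.step()
        return model
    ```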

  26. https://ai.googleblog.com/2020/05/open-sourcing-bit-exploring-large-scale.html

  27. “DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames”, Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, Dhruv Batra (2019-11-01):

    We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever stale), making it conceptually simple and easy to implement. In our experiments on training virtual robots to navigate in Habitat-Sim, DD-PPO exhibits near-linear scaling—achieving a speedup of 107× on 128 GPUs over a serial implementation. We leverage this scaling to train an agent for 2.5 Billion steps of experience (the equivalent of 80 years of human experience)—over 6 months of GPU-time training in under 3 days of wall-clock time with 64 GPUs.

    This massive-scale training not only sets the state of the art on the Habitat Autonomous Navigation Challenge 2019, but essentially solves the task: near-perfect autonomous navigation in an unseen environment without access to a map, directly from an RGB-D camera and a GPS+Compass sensor. Fortuitously, error vs computation exhibits a power-law-like distribution; thus, 90% of peak performance is obtained relatively early (at 100 million steps) and relatively cheaply (under 1 day with 8 GPUs). Finally, we show that the scene understanding and navigation policies learned can be transferred to other navigation tasks—the analog of ImageNet pre-training + task-specific fine-tuning for embodied AI. Our model outperforms ImageNet pre-trained CNNs on these transfer tasks and can serve as a universal resource (all models and code are publicly available).

  28. Erik Wijmans, Abhishek Kadian (Facebook) (2020-01-21):

    The AI community has a long-term goal of building intelligent machines that interact effectively with the physical world, and a key challenge is teaching these systems to navigate through complex, unfamiliar real-world environments to reach a specified destination—without a preprovided map. We are announcing today that Facebook AI has created a new large-scale distributed reinforcement learning (RL) algorithm called DD-PPO, which has effectively solved the task of point-goal navigation using only an RGB-D camera, GPS, and compass data. Agents trained with DD-PPO (which stands for decentralized distributed proximal policy optimization) achieve nearly 100% success in a variety of virtual environments, such as houses and office buildings. We have also successfully tested our model with tasks in real-world physical settings using a LoCoBot and Facebook AI’s PyRobot platform. An unfortunate fact about maps is that they become outdated the moment they are created. Most real-world environments evolve—buildings and structures change, objects are moved around, and people and pets are in constant flux. By learning to navigate without a map, DD-PPO-trained agents will accelerate the creation of new AI applications for the physical world.

    Previous systems reached a 92% success rate on these tasks, but even failing 1 out of 100 times is not acceptable in the physical world, where a robot agent might damage itself or its surroundings by making an error. DD-PPO-trained agents reach their goal 99.9% of the time. Perhaps even more impressive, they do so with near-maximal efficiency, choosing a path that comes within 3% (on average) of matching the shortest possible route from the starting point to the goal. It is worth stressing how uncompromising this task is. There is no scope for mistakes of any kind—no wrong turn at a crossroads, no backtracking from a dead end, no exploration or deviation of any kind from the most direct path. We believe that the agent learns to exploit the statistical regularities in the floor plans of real indoor environments (apartments, houses, and offices) that are also present in our data sets. This improved performance is powered by a new, more effective system for distributed training (DD-PPO), along with the state-of-the-art speed and fidelity of Facebook AI’s open source AI Habitat platform.

    …We propose a simple, synchronous, distributed RL method that scales well. We call this method decentralized distributed proximal policy optimization, as it is decentralized (has no parameter server) and distributed (runs across many machines), and we use it to scale proximal policy optimization, a previously developed technique (Schulman et al 2017). In DD-PPO, each worker alternates between collecting experience in a resource-intensive, GPU-accelerated simulated environment and then optimizing the model. This distribution is synchronous—there is an explicit communication stage in which workers synchronize their updates to the model.

    The variability in experience collection runtime presents a challenge to using this method in RL. In supervised learning, all gradient computations take approximately the same time. In RL, some resource-intensive environments can take substantially longer to simulate. This introduces substantial synchronization overhead, as every worker must wait for the slowest to finish collecting experience. To address this, we introduced a preemption threshold where the rollout collection stage of these stragglers is forced to end early once some percentage p (we find 60% to work well) of the other workers are finished collecting their rollout, thereby dramatically improving scaling. Our system weighs all workers’ contributions to the loss equally and limits the minimum number of steps before preemption to one-fourth the maximum to ensure that all environments contribute to learning.
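
    The preemption rule described above is simple to express. The sketch below is a conceptual, single-process stand-in: the shared counter, the fake environment step, and the thresholds are illustrative, not Facebook’s actual Habitat/DD-PPO implementation, which does this across machines with distributed primitives.

    ```python
    # Illustrative straggler-preemption loop for synchronous distributed rollout collection.
    import random

    NUM_WORKERS = 64
    ROLLOUT_LEN = 128
    PREEMPT_FRACTION = 0.6          # end early once 60% of workers have finished their rollout
    MIN_STEPS = ROLLOUT_LEN // 4    # every worker still contributes at least 1/4 of a rollout

    class SharedCounter:
        """Stand-in for a counter shared across workers (a distributed primitive in real DD-PPO)."""
        def __init__(self): self.n = 0
        def increment(self): self.n += 1
        def value(self): return self.n

    def collect_rollout(step_env, finished: SharedCounter) -> list:
        """Collect up to ROLLOUT_LEN environment steps, cutting off early if enough peers are done."""
        steps = []
        for t in range(ROLLOUT_LEN):
            steps.append(step_env())                    # one transition from the (fake) environment
            if t + 1 >= MIN_STEPS and finished.value() >= PREEMPT_FRACTION * NUM_WORKERS:
                break                                   # straggler preemption
        finished.increment()                            # announce this worker has finished
        return steps

    if __name__ == "__main__":
        rollout = collect_rollout(lambda: random.random(), SharedCounter())
        print(len(rollout), "steps collected")
    ```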

  29. “Benchmarking Classic and Learned Navigation in Complex 3D Environments”, Dmytro Mishkin, Alexey Dosovitskiy, Vladlen Koltun (2019-01-30):

    Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Overall, both approaches are still far from human-level performance.

  30. “Habitat: A Platform for Embodied AI Research”, Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, Dhruv Batra (2019-04-02):

    We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast—when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms—defining tasks (e.g., navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents.

    These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or ‘merely’ impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works and find evidence for the opposite conclusion—that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} × {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.

  31. “Towards a Human-like Open-Domain Chatbot”, Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le (2020-01-27):

    We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated.

  32. ⁠, Daniel Adiwardana, Thang Luong (2020-01-28):

    Modern conversational agents (chatbots) tend to be highly specialized—they perform well as long as users don’t stray too far from their expected usage. To better handle a wide variety of conversational topics, open-domain dialog research explores a complementary approach attempting to develop a chatbot that is not specialized but can still chat about virtually anything a user wants. Besides being a fascinating research problem, such a conversational agent could lead to many interesting applications, such as further humanizing computer interactions, improving foreign language practice, and making relatable interactive movie and videogame characters.

    However, current open-domain chatbots have a critical flaw—they often don’t make sense. They sometimes say things that are inconsistent with what has been said so far, or lack common sense and basic knowledge about the world. Moreover, chatbots often give responses that are not specific to the current context. For example, “I don’t know”, is a sensible response to any question, but it’s not specific. Current chatbots do this much more often than people because it covers many possible user inputs.

    In “Towards a Human-like Open-Domain Chatbot”, we present Meena, a 2.6 billion parameter end-to-end trained neural conversational model. We show that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots. Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.

    …The Meena model has 2.6 billion parameters and is trained on 341 GB of text, filtered from public domain social media conversations. Compared to an existing state-of-the-art generative model, OpenAI GPT-2, Meena has 1.7× greater model capacity and was trained on 8.5× more data.

    …For each chatbot, we collect between 1600 and 2400 individual conversation turns through about 100 conversations. Each model response is labeled by crowdworkers to indicate if it is sensible and specific. The sensibleness of a chatbot is the fraction of responses labeled “sensible”, and specificity is the fraction of responses that are marked “specific”. The average of these two is the SSA score. The results below demonstrate that Meena does much better than existing state-of-the-art chatbots by large margins in terms of SSA scores, and is closing the gap with human performance.

    Automatic Metric: Perplexity

    Researchers have long sought for an automatic evaluation metric that correlates with more accurate, human evaluation. Doing so would enable faster development of dialogue models, but to date, finding such an automatic metric has been challenging. Surprisingly, in our work, we discover that perplexity, an automatic metric that is readily available to any neural seq2seq model, exhibits a strong correlation with human evaluation, such as the SSA value. Perplexity measures the uncertainty of a language model. The lower the perplexity, the more confident the model is in generating the next token (character, subword, or word). Conceptually, perplexity represents the number of choices the model is trying to choose from when producing the next token.

    During development, we benchmarked eight different model versions with varying hyperparameters and architectures, such as the number of layers, attention heads, total training steps, whether we use Evolved Transformer or regular Transformer, and whether we train with hard labels or with distillation. As illustrated in the figure below, the lower the perplexity, the better the SSA score for the model, with a strong correlation coefficient (R² = 0.93)…As advocated previously, we will continue our goal of lowering the perplexity of neural conversational models through improvements in algorithms, architectures, data, and compute.
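
    Perplexity itself is just the exponentiated average negative log-likelihood per token, so the perplexity-vs-SSA comparison reduces to computing that number for each model variant and regressing the human SSA scores against it. A sketch with placeholder numbers (not Meena’s actual measurements):

    ```python
    # Perplexity = exp(mean negative log-likelihood per token); the post reports that lower
    # perplexity tracks higher SSA across their 8 model variants. Data below are placeholders.
    import math
    import numpy as np

    def perplexity(token_logprobs):
        """token_logprobs: natural-log probabilities the model assigned to each observed token."""
        return math.exp(-sum(token_logprobs) / len(token_logprobs))

    # A model assigning ~1/10 probability to every token has perplexity ~10:
    print(perplexity([math.log(0.1)] * 1000))   # -> ~10.0

    # Correlating perplexity with human SSA scores across model variants (placeholder data):
    ppl = np.array([17.5, 15.0, 13.0, 12.0, 11.5, 10.9, 10.5, 10.2])
    ssa = np.array([0.55, 0.60, 0.65, 0.68, 0.70, 0.71, 0.72, 0.73])
    slope, intercept = np.polyfit(ppl, ssa, 1)
    r2 = np.corrcoef(ppl, ssa)[0, 1] ** 2
    print(f"fit: SSA ≈ {slope:.3f}*ppl + {intercept:.2f}, R² = {r2:.2f}")
    ```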

  33. “A Very Unlikely Chess Game”, Scott Alexander (2020-01-06):

    …Black is GPT-2. Its excuse [for this chess blunder] is that it’s a text prediction program with no concept of chess. As far as it knows, it’s trying to predict short alphanumeric strings like “e2e4” or “Nb7”. Nobody told it this represents a board game. It doesn’t even have a concept of 2D space that it could use to understand such a claim. But it still captured my rook! Embarrassing!…Last month, I asked him if he thought GPT-2 could play chess. I wondered if he could train it on a corpus of chess games written in standard notation (where, for example, e2e4 means “move the pawn at square e2 to square e4”). There are literally millions of games written up like this. GPT-2 would learn to predict the next string of text, which would correspond to the next move in the chess game. Then you would prompt it with a chessboard up to a certain point, and it would predict how the chess masters who had produced its training data would continue the game—ie make its next move using the same heuristics they would. Gwern handed the idea to his collaborator Shawn Presser, who had a working GPT-2 chess engine running within a week:…You can play against GPT-2 yourself by following the directions in the last tweet, though it won’t be much of a challenge for anyone better than I am.

    …What does this imply? I’m not sure (and maybe it will imply more if someone manages to make it actually good). It was already weird to see something with no auditory qualia learn passable poetic meter. It’s even weirder to see something with no concept of space learn to play chess. Is any of this meaningful? How impressed should we be that the same AI can write poems, compose music, and play chess, without having been designed for any of those tasks? I still don’t know.

    [Shawn comments on HN. See also the much later followups which do the exact same thing in applying GPT-2 to Go SGF/chess PGN games; Shawn Presser’s encoding of the data turns out to be equivalent to theirs.]
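
    For concreteness, here is a hedged sketch of the play loop Scott describes: feed the moves so far into a GPT-2 checkpoint fine-tuned on games written as text, and take whatever it prints next as the move. The checkpoint path is a placeholder, the sketch uses the Hugging Face `transformers` API rather than the TensorFlow GPT-2 code actually used at the time, and nothing here checks move legality.

    ```python
    # Illustrative only: sample a "next move" from a GPT-2 model fine-tuned on chess games written
    # as space-separated coordinate moves ("e2e4 e7e5 ..."). The checkpoint name is a placeholder.
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    MODEL_DIR = "path/to/gpt2-chess-checkpoint"        # hypothetical fine-tuned checkpoint
    tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_DIR)
    model = GPT2LMHeadModel.from_pretrained(MODEL_DIR)

    def next_move(moves_so_far: str, max_new: int = 6) -> str:
        """Feed the game so far as text and return whatever the model writes next as its 'move'."""
        inputs = tokenizer(moves_so_far, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=max_new, do_sample=True, top_k=40,
                             pad_token_id=tokenizer.eos_token_id)
        continuation = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:])
        return continuation.strip().split()[0]          # first whitespace-delimited token = next move

    print(next_move("e2e4 e7e5 g1f3 b8c6 f1b5"))        # note: nothing guarantees the move is legal!
    ```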

  34. ⁠, Ricson Cheng (2020-01-10):

    The Shannon entropy of natural English language is roughly one byte per word, depending on the dataset used. Shannon estimated the number of possible chess games to be 10^120. I’ve also seen an estimate of 3 reasonable moves per ply (so 10^40 possible 40-move games). This begs the question: just how much information is there in a chess move?…I treated this as a sequence modeling problem. An alternative (and possibly better) approach would be to explicitly make use of the board state. However as I was lazy, I did not do this. I was also motivated by the idea of recreating blindfold chess, which is challenging for humans, but unclear for computers (how would you blindfold a computer?—(also see Tom Murphy’s Elo World)). Also as the “Markovian” approach of simply predicting the move given the current board state has been done many, many times before, I decided this was more interesting.

    …The lichess.org game database contains at the time of writing roughly 1 billion games…I chose to use long algebraic notation, which specifies the start and end coordinate of every piece moved (for example, e2e4). “special” moves also include castling and promotion. There are slightly less than 2000 valid unique tokens in this notation

    …I used the transformer_big_single_gpu (henceforth known as T78) model from the tensor2tensor repository which has roughly 78 million parameters. I used the default hyperparameters and did not tune anything. I trained on a single 1080ti for almost 4 days (~2 million steps). This turns out to be roughly 50 million games, which is to say, the model only saw 25% of the dataset.

    While I wasn’t picky about getting only the best games, I did want some minimal quality control. Therefore I considered only games where both players had a rating of at least 1510 (I believe new accounts start at a rating of 1500), and where both players had at least 5 minutes to play the game, and where the game was at most 100 moves (200-ply). If both players had a rating of at least 2000, the time requirement was bypassed. Note that for time controls with increment, I converted it into a single number by assuming the game was 40 moves long. Roughly 21% of games passed this first filter. I further divided my dataset up by considering games where both players had a rating of at least 2000 and the time control was at least 5 minutes. Less than 1% of games met this filter, but I didn’t find this too worrying as that was still several million games…Instead of training 2 different models, or fine-tuning a trained model on the “better” games, I simply added 2 new tokens to the vocabulary, A, and B, and prefaced each game with one of the 2 tokens. A was used for the more stringent filter, and B for the rest. I did this primarily to save time. Note that it’s fairly trivial to “undo” this conditioning just by summing over the 2 possible first tokens. I was hoping this strategy would allow me to train with a massive dataset, but then to condition on A to generate higher quality games…Each sequence ends with an additional token to denote which side won, or a draw.

    Example data:

    A e2e4 d7d6 d2d4 e7e5 d4e5 d6e5 d1d8 e8d8 g1f3 f8d6 f1c4 c8e6 c4e6 f7e6 f3g5 d8e7 c1e3 h7h6 g5f3 g7g5 b1c3 a7a6 a1d1 g8f6 a2a3 f6g4 e3c1 d6c5 e1g1 b8c6 h2h3 g4f6 g2g4 a8g8 b2b4 c5a7 b4b5 a6b5 c3b5 a7b6 f1e1 g8d8 d1d8 h8d8 c1b2 f6d7 g1g2 d8f8 a3a4 f8f4 b2c1 f4f7 h3h4 g5h4 c1h6 h4h3 g2g3 d7c5 g4g5 f7f4 g5g6 c5e4 e1e4 f4e4 g6g7 b6f2 g3f2 e4g4 f3g5 g4g5 h6g5 e7f7 b5c7 f7g7 c7e6 g7g6 g5e3 g6f5… e6c5 b7b6 c5d7 c6b4 d7b6 f5e4 e3g5 b4c2 f2g3 c2b4 g3h3 b4a6 g5d2 e4d4 a4a5 d4c5 b6d7 c5b5 d7e5 a6c5 e5f3 c5e4 h3g4 e4d6 d2f4 d6e4 f3d4 b5a6 d4e6 2

    …Results:

    • A games: 2.15 bits per ply, 4.43 perplexity
    • B games: 2.26 bits per ply, 4.80 perplexity

    I “preregistered” a guess of 2.5 bits per ply before running any experiments. After seeing the results, I believe a better designed model could probably reach between 1.6 and 2.0 BPP. I also believe a larger model would perform better, as I was probably close to saturating the capacity of T78.
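
    As a quick check on the units: bits-per-ply and per-ply perplexity are the same measurement, since perplexity is just 2 raised to the bits-per-ply.

    ```python
    # perplexity = 2 ** bits_per_ply; the reported pairs above are mutually consistent.
    for label, bpp in [("A games", 2.15), ("B games", 2.26)]:
        print(f"{label}: 2**{bpp} ≈ {2 ** bpp:.2f}")   # ≈ 4.44 and ≈ 4.79, matching the reported 4.43 / 4.80
    ```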

    [Response to the GPT-2 chess experiments above; see Reddit. Note that Ricson Cheng’s encoding uses the ‘inline metadata trick’ in a sense, but does not include Elo player-skill metadata, and puts the reward at the end rather than the beginning, and so is not a full reward-conditioned model; Cheng got only halfway there by doing a quality split A/B and conditioning on ‘A’ tokens.]

  35. 2020-devito.pdf: “Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study”, Nicholas J. DeVito, Seb Bacon, Ben Goldacre (2020-01-17; statistics/bias):

    Background: Failure to report the results of a clinical trial can distort the evidence base for clinical practice, breaches researchers’ ethical obligations to participants, and represents an important source of research waste. The Food and Drug Administration Amendments Act (FDAAA) of 2007 now requires sponsors of applicable trials to report their results directly onto ClinicalTrials.gov within 1 year of completion. The first trials covered by the Final Rule of this act became due to report results in January, 2018. In this cohort study, we set out to assess compliance.

    Methods: We downloaded data for all registered trials on ClinicalTrials.gov each month from March, 2018, to September, 2019. All cross-sectional analyses in this manuscript were performed on data extracted from ClinicalTrials.gov on Sept 16, 2019; monthly trends analysis used archived data closest to the 15th day of each month from March, 2018, to September, 2019. Our study cohort included all applicable trials due to report results under FDAAA. We excluded all non-applicable trials, those not yet due to report, and those given a certificate allowing for delayed reporting. A trial was considered reported if results had been submitted and were either publicly available, or undergoing quality control review at ClinicalTrials.gov. A trial was considered compliant if these results were submitted within 1 year of the primary completion date, as required by the legislation. We described compliance with the FDAAA 2007 Final Rule, assessed trial characteristics associated with results reporting using logistic regression models, described sponsor-level reporting, examined trends in reporting, and described time-to-report using the Kaplan-Meier method.

    Findings: 4209 trials were due to report results; 1722 (40·9%; 95% CI 39·4–42·2) did so within the 1-year deadline. 2686 (63·8%; 62·4–65·3) trials had results submitted at any time. Compliance has not improved since July, 2018. Industry sponsors were statistically-significantly more likely to be compliant than non-industry, non-US Government sponsors (odds ratio [OR] 3·08 [95% CI 2·52–3·77]), and sponsors running large numbers of trials were statistically-significantly more likely to be compliant than smaller sponsors (OR 11·84 [9·36–14·99]). The median delay from primary completion date to submission date was 424 days (95% CI 412–435), 59 days higher than the legal reporting requirement of 1 year.
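
    The headline compliance figure is a plain binomial proportion, so it is easy to sanity-check; the sketch below uses a normal-approximation interval, which comes out a shade wider than the interval reported in the paper.

    ```python
    # Reproduce the headline proportion: 1722 of 4209 applicable trials reported within the
    # 1-year deadline. (Normal-approximation 95% CI; the paper's own method gives 39.4-42.2.)
    import math

    compliant, due = 1722, 4209
    p = compliant / due
    se = math.sqrt(p * (1 - p) / due)
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"{p:.1%} compliant (95% CI {lo:.1%}-{hi:.1%})")   # ≈ 40.9% (39.4%-42.4%)
    ```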

    Interpretation: Compliance with the FDAAA 2007 is poor, and not improving. To our knowledge, this is the first study to fully assess compliance with the Final Rule of the FDAAA 2007. Poor compliance is likely to reflect lack of enforcement by regulators. Effective enforcement and action from sponsors is needed; until then, open public audit of compliance for each individual sponsor may help. We will maintain updated compliance data for each individual sponsor and trial at fdaaa.trialstracker.net⁠.

    Funding: Laura and John Arnold Foundation.

  36. Charles Piller (Science) (2020-01-13; doi: 10.1126/science.aba8123):

    The rule took full effect 2 years ago, on 2018-01-18, giving trial sponsors ample time to comply. But a Science investigation shows that many still ignore the requirement, while federal officials do little or nothing to enforce the law.

    Science examined more than 4700 trials whose results should have been posted on the NIH website ClinicalTrials.gov under the 2017 rule. Reporting rates by most large pharmaceutical companies and some universities have improved sharply, but performance by many other trial sponsors—including, ironically, NIH itself—was lackluster. Those sponsors, typically either the institution conducting a trial or its funder, must deposit results and other data within 1 year of completing a trial. But of 184 sponsor organizations with at least five trials due as of 2019-09-25, 30 companies, universities, or medical centers never met a single deadline. As of that date, those habitual violators had failed to report any results for 67% of their trials and averaged 268 days late for those and all trials that missed their deadlines. They included such eminent institutions as the Harvard University-affiliated Boston Children’s Hospital, the University of Minnesota, and Baylor College of Medicine—all among the top 50 recipients of NIH grants in 2019. The violations cover trials in virtually all fields of medicine, and the missing or late results offer potentially vital information for the most desperate patients. For example, in one long-overdue trial, researchers compared the efficacy of different chemotherapy regimens in 200 patients with advanced lymphoma; another—nearly 2 years late—tests immunotherapy against conventional chemotherapy in about 600 people with late-stage lung cancer.

    …Contacted for comment, none of the institutions disputed the findings of this investigation. In all 4768 trials Science checked, sponsors violated the reporting law more than 55% of the time. And in hundreds of cases where the sponsors got credit for reporting trial results, they have yet to be publicly posted because of quality lapses flagged by ClinicalTrials.gov staff (see sidebar).

    Although the 2017 rule, and officials’ statements at the time, promised aggressive enforcement and stiff penalties, neither NIH nor FDA has cracked down. FDA now says it won’t brandish its big stick—penalties of up to $12,103 a day for failing to report a trial’s results—until after the agency issues further “guidance” on how it will exercise that power. It has not set a date. NIH said at a 2016 briefing on the final rule that it would cut off grants to those who ignore the trial reporting requirements, as authorized in the 2007 law, but so far has not done so…NIH and FDA officials do not seem inclined to apply that pressure. Lyric Jorgenson, NIH deputy director for science policy, says her agency has been “trying to change the culture of how clinical trial results are reported and disseminated; not so much on the ‘aha, we caught you’, as much as getting people to understand the value, and making it as easy as possible to share and disseminate results.” To that end, she says, ClinicalTrials.gov staff have educated researchers about the website and improved its usability. As for FDA, Patrick McNeilly, an official at the agency who handles trial enforcement matters, recently told an industry conference session on ClinicalTrials.gov that “FDA has limited resources, and we encourage voluntary compliance.” He said the agency also reviews reporting of information on ClinicalTrials.gov as part of inspections of trial sites, or when it receives complaints. McNeilly declined an interview request, but at the conference he discounted violations of ClinicalTrials.gov reporting requirements found by journalists and watchdog groups. “We’re not going to blanketly accept an entire list of trials that people say are noncompliant”, he said.

    …It also highlights that pharma’s record has been markedly better than that of academia and the federal government.

    …But such good performance shouldn’t be an exception, Harvard’s Zarin says. “Further public accountability of the trialists, but also our government organizations, has to happen. One possibility is that FDA and NIH will be shamed into enforcing the law. Another possibility is that sponsors will be shamed into doing a better job. A third possibility is that ClinicalTrials.gov will never fully achieve its vital aspirations.”

  37. “A national experiment reveals where a growth mindset improves achievement”, David S. Yeager, Paul Hanselman, Gregory M. Walton, Jared S. Murray, Robert Crosnoe, Chandra Muller, Elizabeth Tipton, Barbara Schneider, Chris S. Hulleman, Cintia P. Hinojosa, David Paunesku, Carissa Romero, Kate Flint, Alice Roberts, Jill Trott, Ronaldo Iachan, Jenny Buontempo, Sophia Man Yang, Carlos M. Carvalho, P. Richard Hahn, Maithreyi Gopalan, Pratik Mhatre, Ronald Ferguson, Angela L. Duckworth, Carol S. Dweck (2019-08-07):

    A global priority for the behavioural sciences is to develop cost-effective, scalable interventions that could improve the academic outcomes of adolescents at a population level, but no such interventions have so far been evaluated in a population-generalizable sample. Here we show that a short (less than one hour), online growth mindset intervention—which teaches that intellectual abilities can be developed—improved grades among lower-achieving students and increased overall enrolment to advanced mathematics courses in a nationally representative sample of students in secondary education in the United States. Notably, the study identified school contexts that sustained the effects of the growth mindset intervention: the intervention changed grades when peer norms aligned with the messages of the intervention. Confidence in the conclusions of this study comes from independent data collection and processing, pre-registration of analyses, and corroboration of results by a blinded Bayesian analysis.

  38. 1987-rossi

  39. Rita Rubin (JAMA) (2020-01-15; doi: 10.1001/jama.2019.21441):

    [Summary of vegetarian activist/researcher reaction to recent reviews & meta-analyses indicating that the correlation of meat-eating with bad health often does not appear in epidemiological datasets, the randomized experiments do not support the strong claims, and the overall evidence that eating meat = bad health is low quality & weak:

    After breaking the embargo, they began lobbying against it, spamming the journal editor, demanding the papers be retracted before publication, denouncing it in talks, and contacting the Federal Trade Commission & district attorneys demanding they investigate; they justify these activities by saying that since high-quality evidence can’t be easily obtained in nutrition, there is no need for it, and accusing the authors of financial conflicts of interest and comparing them to global warming deniers.

    However, the conflicts of interest represent very small percentages of funding, and the vegetarian activist/researchers themselves are heavily funded by anti-meat interests, such as olive research institutions, walnut industry bodies, the egg industry, snack companies, and alternative diet groups, with the list of funders of one member including but far from limited to “the Research Network, the Almond Board of California, the International Nut and Dried Fruit Council; Soy Foods Association of North America; the Peanut Institute; Kellogg’s Canada; and Quaker Oats Canada.”]

  40. ⁠, Aaron E. Carroll, Tiffany S. Doherty (2019-11-19):

    For some time, medical and science organizations have been beating the drum that red and processed meat are bad for you. For almost as long, they have lamented that their efforts to inform the public have not convinced enough people to change their consumption. This month’s issue offers us food for thought on why. The field of nutritional epidemiology is plagued by observational studies that have conducted inappropriate analyses, accompanied by likely erroneous conclusions (1). Many studies selectively report results, and many lack an a priori hypothesis. Many use notoriously unreliable self-reports of food consumption while failing to collect or appropriately control for data on numerous potential confounders.

    …Four more studies join the evidence base this month, and because they review all of the evidence that came before, they cannot be accused of cherry-picking. The first was a meta-analysis of cohort studies that focused on how dietary patterns, including differing amounts of red or processed meat, affected all-cause mortality, cardiometabolic outcomes, and cancer incidence and mortality (6). More than 100 studies including more than 6 million participants were analyzed. The overall conclusions were that dietary patterns, including differences in meat consumption, may result in only small differences in risk outcomes over long periods.

    The next study was a meta-analysis that homed in specifically on cohort studies examining how reductions in red and processed meat might affect cancer incidence and mortality (7). It included 118 studies with more than 6 million participants, and it, too, found that the possible impact of reduced meat intake was very small. The third study was a meta-analysis of cohort studies that looked specifically at meat consumption and its relationship to all-cause mortality and cardiometabolic outcomes (8), and—once again—it found that any link was very small.

    …In a fourth analysis in this issue (9), researchers examined randomized controlled trials that compared diets with differing amounts of red meat consumption for at least 6 months. They found 12 eligible studies, but one of them—the Women’s Health Initiative—was so large (almost 49,000 women) that it dominated the analysis. We can wish for more studies, and we could hope that they had more homogenous outcomes and better fidelity to assigned diets, but the overall conclusions from what they had were that “red meat may have little or no effect on major cardiometabolic outcomes and cancer mortality and incidence.”

    …it may be time to stop producing observational research in this area. These meta-analyses include millions of participants. Further research involving much smaller cohorts has limited value. High-quality randomized controlled trials are welcome, but only if they’re designed to tell us things we don’t already know.

  41. ⁠, Dena Zeraatkar, Mi Ah Han, Gordon H. Guyatt, Robin W. M. Vernooij, Regina El Dib, Kevin Cheung, Kirolos Milio, Max Zworth, Jessica J. Bartoszko, Claudia Valli, Montserrat Rabassa, Yung Lee, Joanna Zajac, Anna Prokop-Dorner, Calvin Lo, Malgorzata M. Bala, Pablo Alonso-Coello, Steven E. Hanna, Bradley C. Johnston (2019-11-19):

    Background: Dietary guidelines generally recommend limiting intake of red and processed meat. However, the quality of evidence implicating red and processed meat in adverse health outcomes remains unclear.

    Purpose: To evaluate the association between red and processed meat consumption and all-cause mortality, cardiometabolic outcomes, quality of life, and satisfaction with diet among adults.

    Data Sources: EMBASE (Elsevier), Cochrane Central Register of Controlled Trials (Wiley), Web of Science (Clarivate Analytics), CINAHL (EBSCO), and ProQuest from inception until July 2018 and MEDLINE from inception until April 2019, without language restrictions, as well as bibliographies of relevant articles.

    Study Selection: Cohort studies with at least 1000 participants that reported an association between unprocessed red or processed meat intake and outcomes of interest.

    Data Extraction: Teams of 2 reviewers independently extracted data and assessed risk of bias. One investigator assessed certainty of evidence, and the senior investigator confirmed the assessments.

    Data Synthesis: Of 61 articles reporting on 55 cohorts with more than 4 million participants, none addressed quality of life or satisfaction with diet. Low-certainty evidence was found that a reduction in unprocessed red meat intake of 3 servings per week is associated with a very small reduction in risk for cardiovascular mortality, stroke, myocardial infarction (MI), and type 2 diabetes. Likewise, low-certainty evidence was found that a reduction in processed meat intake of 3 servings per week is associated with a very small decrease in risk for all-cause mortality, cardiovascular mortality, stroke, MI, and type 2 diabetes.

    Limitation: Inadequate adjustment for known confounders, residual confounding due to observational design, and recall bias associated with dietary measurement.

    Conclusion: The magnitude of association between red and processed meat consumption and all-cause mortality and adverse cardiometabolic outcomes is very small, and the evidence is of low certainty.

  42. ⁠, Mi Ah Han, Dena Zeraatkar, Gordon H. Guyatt, Robin W. M. Vernooij, Regina El Dib, Ying Zhang, Abdullah Algarni, Gareth Leung, Dawid Storman, Claudia Valli, Montserrat Rabassa, Nadia Rehman, Michael K. Parvizian, Max Zworth, Luciane Cruz Lopes, Daegan Sit, Malgorzata M. Bala, Pablo Alonso-Coello, Bradley C. Johnston (2019-11-19):

    Background: Cancer incidence has continuously increased over the past few centuries and represents a major health burden worldwide.

    Purpose: To evaluate the possible causal relationship between intake of red and processed meat and cancer mortality and incidence.

    Data Sources: Embase, Cochrane Central Register of Controlled Trials, Web of Science, CINAHL, and ProQuest from inception until July 2018 and MEDLINE from inception until April 2019 without language restrictions.

    Study Selection: Cohort studies that included more than 1000 adults and reported the association between consumption of unprocessed red and processed meat and cancer mortality and incidence.

    Data Extraction: Teams of 2 reviewers independently extracted data and assessed risk of bias; 1 reviewer evaluated the certainty of evidence, which was confirmed or revised by the senior reviewer.

    Data Synthesis: Of 118 articles (56 cohorts) with more than 6 million participants, 73 articles were eligible for the dose-response meta-analyses, 30 addressed cancer mortality, and 80 reported cancer incidence. Low-certainty evidence suggested that an intake reduction of 3 servings of unprocessed meat per week was associated with a very small reduction in overall cancer mortality over a lifetime. Evidence of low to very low certainty suggested that each intake reduction of 3 servings of processed meat per week was associated with very small decreases in overall cancer mortality over a lifetime; prostate cancer mortality; and incidence of esophageal, colorectal, and breast cancer.

    Limitation: Limited causal inferences due to residual confounding in observational studies, risk of bias due to limitations in diet assessment and adjustment for confounders, recall bias in dietary assessment, and insufficient data for planned subgroup analyses.

    Conclusion: The possible absolute effects of red and processed meat consumption on cancer mortality and incidence are very small, and the certainty of evidence is low to very low.

  43. ⁠, Robin W. M. Vernooij, Dena Zeraatkar, Mi Ah Han, Regina El Dib, Max Zworth, Kirolos Milio, Daegan Sit, Yung Lee, Huda Gomaa, Claudia Valli, Mateusz J. Swierz, Yaping Chang, Steven E. Hanna, Paula M. Brauer, John Sievenpiper, Russell de Souza, Pablo Alonso-Coello, Malgorzata M. Bala, Gordon H. Guyatt, Bradley C. Johnston (2019-11-19):

    Background: Studying dietary patterns may provide insights into the potential effects of red and processed meat on health outcomes.

    Purpose: To evaluate the effect of dietary patterns, including different amounts of red or processed meat, on all-cause mortality, cardiometabolic outcomes, and cancer incidence and mortality.

    Data Sources: Systematic search of MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, CINAHL, Web of Science, and ProQuest Dissertations & Theses Global from inception to April 2019 with no restrictions on year or language.

    Study Selection: Teams of 2 reviewers independently screened search results and included prospective cohort studies with 1000 or more participants that reported on the association between dietary patterns and health outcomes.

    Data Extraction: Two reviewers independently extracted data, assessed risk of bias, and evaluated the certainty of evidence using GRADE (Grading of Recommendations Assessment, Development and Evaluation) criteria.

    Data Synthesis: Eligible studies that followed patients for 2 to 34 years revealed low-certainty to very-low-certainty evidence that dietary patterns lower in red and processed meat intake result in very small or possibly small decreases in all-cause mortality, cancer mortality and incidence, cardiovascular mortality, nonfatal coronary heart disease, fatal and nonfatal myocardial infarction, and type 2 diabetes. For all-cause, cancer, and cardiovascular mortality and incidence of some types of cancer, the total sample included more than 400 000 patients; for other outcomes, total samples included 4000 to more than 300 000 patients.

    Limitation: Observational studies are prone to residual confounding⁠, and these studies provide low-certainty or very-low-certainty evidence according to the GRADE criteria.

    Conclusion: Low-certainty or very-low-certainty evidence suggests that dietary patterns with less red and processed meat intake may result in very small reductions in adverse cardiometabolic and cancer outcomes.

  44. 2019-zeraatkar.pdf: ⁠, Dena Zeraatkar, Bradley C. Johnston, Jessica Bartoszko, Kevin Cheung, Malgorzata M. Bala, Claudia Valli, Montserrat Rabassa, Daegan Sit, Kirolos Milio, Behnam Sadeghirad, Arnav Agarwal, Adriana M. Zea, Yung Lee, Mi Ah Han, Robin W. M. Vernooij, Pablo Alonso-Coello, Gordon H. Guyatt, Regina El Dib (2019-10-01; longevity):

    Background: Few randomized trials have evaluated the effect of reducing red meat intake on clinically important outcomes.

    Purpose: To summarize the effect of lower versus higher red meat intake on the incidence of cardiometabolic and cancer outcomes in adults.

    Data Sources: EMBASE, CENTRAL, CINAHL, Web of Science, and ProQuest from inception to July 2018 and MEDLINE from inception to April 2019, without language restrictions.

    Study Selection: Randomized trials (published in any language) comparing diets lower in red meat with diets higher in red meat that differed by a gradient of at least 1 serving per week for 6 months or more.

    Data Extraction: Teams of 2 reviewers independently extracted data and assessed the risk of bias and the certainty of the evidence.

    Data Synthesis: Of 12 eligible trials, a single trial enrolling 48 835 women provided the most credible, though still low-certainty, evidence that diets lower in red meat may have little or no effect on all-cause mortality (hazard ratio [HR], 0.99 [95% CI, 0.95 to 1.03]), cardiovascular mortality (HR, 0.98 [CI, 0.91 to 1.06]), and cardiovascular disease (HR, 0.99 [CI, 0.94 to 1.05]). That trial also provided low-certainty to very-low-certainty evidence that diets lower in red meat may have little or no effect on total cancer mortality (HR, 0.95 [CI, 0.89 to 1.01]) and the incidence of cancer, including colorectal cancer (HR, 1.04 [CI, 0.90 to 1.20]) and breast cancer (HR, 0.97 [0.90 to 1.04]).

    Limitations: There were few trials, most addressing only surrogate outcomes, with heterogeneous comparators and small gradients in red meat consumption between lower versus higher intake groups.

    Conclusion: Low-certainty to very-low-certainty evidence suggests that diets restricted in red meat may have little or no effect on major cardiometabolic outcomes and cancer mortality and incidence.
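
    The hazard ratios quoted above can be read quantitatively: an HR of 0.99 with a 95% CI of 0.95 to 1.03 straddles 1.0, i.e. it is consistent with no effect at all. Below is a minimal sketch (standard meta-analytic arithmetic, not anything from the paper itself) of how such a ratio and interval are converted to a log-hazard-ratio and standard error, the scale on which trial results are normally pooled:

    ```python
    import math

    def hr_ci_to_log_scale(hr, lower, upper, z=1.96):
        """Convert a hazard ratio and its 95% CI to a log-HR and standard error."""
        log_hr = math.log(hr)
        se = (math.log(upper) - math.log(lower)) / (2 * z)
        return log_hr, se

    # All-cause mortality estimate quoted in the abstract: HR 0.99 (95% CI, 0.95 to 1.03).
    log_hr, se = hr_ci_to_log_scale(0.99, 0.95, 1.03)
    print(f"log-HR = {log_hr:.4f}, SE = {se:.4f}")  # the interval for the log-HR includes 0
    ```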

  45. ⁠, Claudia Valli, Montserrat Rabassa, Bradley C. Johnston, Ruben Kuijpers, Anna Prokop-Dorner, Joanna Zajac, Dawid Storman, Monika Storman, Malgorzata M. Bala, Ivan Solà, Dena Zeraatkar, Mi Ah Han, Robin W. M. Vernooij, Gordon H. Guyatt, Pablo Alonso-Coello (2019-11-19):

    Background: A person’s meat consumption is often determined by their values and preferences.

    Purpose: To identify and evaluate evidence addressing health-related values and preferences regarding meat consumption.

    Data Sources: MEDLINE, EMBASE, Web of Science, Centre for Agriculture and Biosciences Abstracts, International System for Agricultural Science and Technology, and Food Science and Technology Abstracts were searched from inception to July 2018 without language restrictions.

    Study Selection: Pairs of reviewers independently screened search results and included quantitative and qualitative studies reporting adults’ health-related values and preferences regarding meat consumption.

    Data Extraction: Pairs of reviewers independently extracted data and assessed risk of bias.

    Data Synthesis: Data were synthesized into narrative form, and summaries were tabulated and certainty of evidence was assessed using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Of 19 172 initial citations, 41 quantitative studies (38 addressed reasons for meat consumption and 5 addressed willingness to reduce meat consumption) and 13 qualitative studies (10 addressed reasons for meat consumption and 4 addressed willingness to reduce meat consumption) were eligible for inclusion. 13 studies reported that omnivores enjoy eating meat, 18 reported that these persons consider meat an essential component of a healthy diet, and 7 reported that they believe they lack the skills needed to prepare satisfactory meals without meat. Omnivores are generally unwilling to change their meat consumption. The certainty of evidence was low for both “reasons for meat consumption” and “willingness to reduce meat consumption in the face of undesirable health effects.”

    Limitation: Limited generalizability of findings to lower-income countries, low-certainty evidence for willingness to reduce meat consumption, and limited applicability to specific types of meat (red and processed meat).

    Conclusion: Low-certainty evidence suggests that omnivores are attached to meat and are unwilling to change this behavior when faced with potentially undesirable health effects.

  46. ⁠, Bradley C. Johnston, Dena Zeraatkar, Mi Ah Han, Robin W. M. Vernooij, Claudia Valli, Regina El Dib, Catherine Marshall, Patrick J. Stover, Susan Fairweather-Taitt, Grzegorz Wójcik, Faiz Bhatia, Russell de Souza, Carlos Brotons, Joerg J. Meerpohl, Chirag J. Patel, Benjamin Djulbegovic, Pablo Alonso-Coello, Malgorzata M. Bala, Gordon H. Guyatt (2019-11-19):

    Description: Dietary guideline recommendations require consideration of the certainty in the evidence, the magnitude of potential benefits and harms, and explicit consideration of people’s values and preferences. A set of recommendations on red meat and processed meat consumption was developed on the basis of 5 de novo systematic reviews that considered all of these issues.

    Methods: The recommendations were developed by using the Nutritional Recommendations (NutriRECS) guideline development process, which includes rigorous systematic review methodology, and GRADE methods to rate the certainty of evidence for each outcome and to move from evidence to recommendations. A panel of 14 members, including 3 community members, from 7 countries voted on the final recommendations. Strict criteria limited the conflicts of interest among panel members. Considerations of environmental impact or animal welfare did not bear on the recommendations. Four systematic reviews addressed the health effects associated with red meat and processed meat consumption, and 1 systematic review addressed people’s health-related values and preferences regarding meat consumption.

    Recommendations: The panel suggests that adults continue current unprocessed red meat consumption (weak recommendation, low-certainty evidence). Similarly, the panel suggests adults continue current processed meat consumption (weak recommendation, low-certainty evidence).

  47. ⁠, E. R. Farmer (2019-04-06):

    This is a follow-up to a post from earlier this year discussing the likelihood of encountering two identical packs of Skittles, that is, two packs having exactly the same number of candies of each flavor. Under some reasonable assumptions, it was estimated that we should expect to have to inspect “only about 400–500 packs” on average until encountering a first duplicate. This is interesting, because as described in that earlier post, there are millions of different possible packs—or even if we discount those that are much less likely to occur (like, say, a pack of nothing but red Skittles), then there are still hundreds of thousands of different “likely” packs that we might expect to encounter.

    So, on 12 January of this year, I started buying boxes of packs of Skittles. This past week, “only” 82 days, 13 boxes, 468 packs, and 27,740 individual Skittles later, I found two identical 2.17-ounce packs.

    …this seemed like a great opportunity to demonstrate the predictive power of mathematics. A few months ago, we did some calculations on a cocktail napkin, so to speak, predicting that we should be able to find a pair of identical packs of Skittles with a reasonably—and perhaps surprisingly—small amount of effort. Actually seeing that effort through to the finish line can be a vivid demonstration for students of this predictive power of what might otherwise be viewed as “merely abstract” and not concretely useful mathematics.
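
    The “400–500 packs” estimate is essentially a birthday-problem calculation: with hundreds of thousands of effectively likely flavor compositions, a first collision is expected after a number of packs on the order of the square root of that count. The sketch below is not the original author’s model; it assumes, purely for illustration, 5 equally likely flavors and pack sizes of roughly 59 ± 2 candies, and simulates how many packs must be opened before the first exact duplicate:

    ```python
    import random

    FLAVORS = 5
    MEAN_PACK_SIZE = 59  # assumed average candy count for a 2.17-ounce pack (hypothetical)

    def random_pack(rng):
        # Assume pack size varies slightly around the mean and flavors are equally likely.
        n = MEAN_PACK_SIZE + rng.choice([-2, -1, 0, 1, 2])
        counts = [0] * FLAVORS
        for _ in range(n):
            counts[rng.randrange(FLAVORS)] += 1
        return tuple(counts)

    def packs_until_duplicate(rng):
        seen = set()
        while True:
            pack = random_pack(rng)
            if pack in seen:
                return len(seen) + 1  # packs opened, including the duplicate
            seen.add(pack)

    rng = random.Random(0)
    trials = [packs_until_duplicate(rng) for _ in range(200)]
    print(sum(trials) / len(trials))  # typically a few hundred packs under these assumptions
    ```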

  48. {#linkBibliography-(guardian)-2011 .docMetadata}, Will Storr (The Guardian) (2011-07-17):

    Of all the secrets of war, there is one that is so well kept that it exists mostly as a rumour. It is usually denied by the perpetrator and his victim. Governments, aid agencies and human rights defenders at the UN barely acknowledge its possibility. Yet every now and then someone gathers the courage to tell of it…“That was hard for me to take”, Owiny tells me today. “There are certain things you just don’t believe can happen to a man, you get me? But I know now that sexual violence against men is a huge problem. Everybody has heard the women’s stories. But nobody has heard the men’s.”

    It’s not just in East Africa that these stories remain unheard. One of the few academics to have looked into the issue in any detail is Lara Stemple, of the University of California’s Health and Human Rights Law Project. Her study “Male Rape and Human Rights” notes incidents of male sexual violence as a weapon of wartime or political aggression in countries such as Chile, Greece, Croatia, Iran, Kuwait, the former Soviet Union and the former Yugoslavia. 21% of Sri Lankan males who were seen at a London torture treatment centre reported sexual abuse while in detention. In El Salvador, 76% of male political prisoners surveyed in the 1980s described at least one incidence of sexual torture. A study of 6,000 concentration-camp inmates in Sarajevo found that 80% of men reported having been raped…Dolan first heard of wartime sexual violence against men in the late 1990s while researching his PhD in northern Uganda, and he sensed that the problem might be dramatically underestimated. Keen to gain a fuller grasp of its depth and nature, he put up posters throughout Kampala in June 2009 announcing a “workshop” on the issue in a local school. On the day, 150 men arrived. In a burst of candour, one attendee admitted: “It’s happened to all of us here.”…a rare 2010 survey, published in the Journal of the American Medical Association, found that 22% of men and 30% of women in Eastern Congo reported conflict-related sexual violence.

    …Back at RLP I’m told about the other ways in which their clients have been made to suffer. Men aren’t simply raped, they are forced to penetrate holes in banana trees that run with acidic sap, to sit with their genitals over a fire, to drag rocks tied to their penis, to give oral sex to queues of soldiers, to be penetrated with screwdrivers and sticks. Atim has now seen so many male survivors that, frequently, she can spot them the moment they sit down. “They tend to lean forward and will often sit on one buttock”, she tells me. “When they cough, they grab their lower regions. At times, they will stand up and there’s blood on the chair. And they often have some kind of smell.”

  49. ⁠, Cécile Allegra (2017-11-03):

    Male rape is being used systematically in Libya as an instrument of war and political domination by rival factions, according to multiple testimonies gathered by investigators. Years of work by a Tunis-based group and witnessed by a journalist from Le Monde have produced harrowing reports from victims, and video footage showing men being sodomised by various objects, including rockets and broom handles. In several instances, witnesses say a victim was thrown into a room with other prisoners, who were ordered to rape him or be killed.

    The atrocity is being perpetrated to humiliate and neutralise opponents in the lawless, militia-dominated country. Male rape is such a taboo in Arab societies that the abused generally feel too damaged to rejoin political, military or civic life. One man, Ahmed, told investigators he was detained for four years in a prison in Tomina, on the outskirts of Misrata. “They separate you to subjugate you”, he said. “‘Subjugate the men’, that’s the expression that they use. So that you never hold your head up again. And they were filming everything with their phones.” “They take a broom and fix it on the wall. If you want to eat, you have to take off your pants, back on to the broom and not move off until the jailer sees blood flowing. Nobody can escape it.”

    …In one camp, south of Tripoli, a man called Ali recounted his experience. He was 39 but looked 65 and walked with a cane. “Some of us were locked in a room, naked, for a whole night with groups of migrants”, he said. “The guards did not release them until they had all raped each other. Fortunately, I didn’t go through that, I only got the stick and the wheel.” The “wheel” involved being put naked and folded double, through a tyre suspended from the ceiling, making it easier for torturers to penetrate him with weaponry. Ali said he now had physical problems, “leaks” as he called them.

    In another camp in southern Tripoli, Fathia said women were not immune. She said her entire family was violated by a militia from Misrata, with the men being deliberately targeted. “They dragged me in the street, in front of everyone, saying: ‘You raped our girls. We’ll do the same thing to you.’” “The worst thing they did to me”, she whispered, “is to rape me in front of my eldest son. Since then, he won’t speak to me.” Asked about other inmates who suffered a similar ordeal, Fathia said: “I only heard men’s voices. They were screaming, day and night.”

  50. {#linkBibliography-(believer)-2019 .docMetadata}, Ash Sanders (The Believer) (2019-12-02):

    Eating fallen fruit and sleeping outside, however, didn’t provide him relief from his feelings of guilt and foreboding. He began to feel a dread that was inescapable and all-consuming. A devastating depression that he had suffered a few years before that fall semester returned. Normally a math phenom, Chris started failing his tests. In his apartment, he would sit in the dark—he didn’t want to waste electricity—listen to records, and cry. “I felt like I was slowly dying”, he said. A few months later, Chris left Davis to pursue a PhD in philosophy at the University of Kansas. But his condition didn’t improve. After having subsisted on scavenged persimmons and radishes for the entire fall term, he’d lost a dangerous amount of weight. His mother paid a visit to campus and, horrified by his appearance, immediately drove him to the grocery store to buy food. At home, Chris’s family had a hard time understanding the intensity of the self-denial that governed his life. His father and sister blamed his breakdown on abuse that Chris had suffered as a child; they believed his desire to escape society was a projection, an act of taking responsibility for something that wasn’t his fault. But Chris had a different explanation. When he was fifteen, his father had taken him and his sister on a trip to Mount St. Helens. Halfway up the mountain, they had passed clear-cut land. As Chris recalls, one moment there was only evergreen forest and the next moment there was nothing—just bare ground and stumps as far as he could see. A word came to his mind: evil…“They made it sound like I had a psychosis or a mental breakdown and that this is just the form it took, when really, shouldn’t anyone who is ethical and compassionate also choose to opt out of this society?”

    …I was working fifty-hour weeks, mostly unpaid. My mother, concerned, suggested that I take a break. But I refused. There was no pause button on climate change, so why should I get a break? On some days, Salt Lake City, where I lived, had exceptionally bad air quality, a thick soup of pollution settling between the mountains and the valley. The corridor between Salt Lake and Provo, where I’d gone to college, had been completely converted from farmland to strip malls in just ten years. To the south lay one of the biggest open-pit copper mines in the world, to the north was an industrial warren of refineries, and to the west was nuclear waste buried in clay-sealed chambers, reeking of death. That was just the local stuff. Coral reefs were collapsing, ocean ecosystems were overfished, and people in island nations were trapped between salted well water and the swallowing sea. Meanwhile, everyone around me was fine…Sometimes I could do it. Other times I got combative, desperate, contrary. Meanwhile, Chris got married and had two children. When we hung out, he was happier. But he was different too. In his purist days, he’d let his lawn go to seed, refusing to use scarce water resources to keep it green. Now he was living in the suburbs, putting in Kentucky bluegrass. “Why don’t you just keep your lawn the way it was?” I said, too urgently. “Because I’ve been sad my whole life”, Chris said, “and sometimes I just want to sit on my green lawn with my wife and feel love.” I knew it was just a lawn, but it upset me anyway.

    …I quit climate activism for a time, but I’ve kept going to therapy, and I keep confusing my therapists by talking about the end of the world. As it turns out, I’m not alone. A report released in 2012 by the National Wildlife Federation warned that climate change is creating a mental health crisis. The climate scientists, psychologists, and policy experts who authored the study estimated that two hundred million Americans will suffer from mental illness as a result of natural disasters, droughts, heat waves, and economic downturn. Recent disasters bear this out. In the wake of Hurricane Maria, Puerto Rico’s worst natural disaster on record, there was a 7% spike in among the children who survived. In the year after Hurricane Katrina, the suicide rate in New Orleans tripled, and the number of instances of depression and PTSD grew to what health experts described as near-epidemic levels. Even people who aren’t directly impacted by climate disasters can be affected. According to a 2017 report by the American Psychological Association, merely acknowledging the reality of climate change and its consequences can trigger chronic fear, fatalism, anger, and exhaustion—a condition that psychologists are increasingly referring to as eco-anxiety. Eco-anxiety can manifest in other serious ways. In 2008, in the midst of a severe drought in Australia, a seventeen-year-old boy refused to drink water because he was afraid that doing so would lead to the deaths of millions of people. Doctors diagnosed him with “climate delusion” and prescribed antidepressants. When they asked him why he took such drastic action, he said he felt guilty…Greta Thunberg, a sixteen-year-old Swedish girl who inspired the growing student climate strike movement, says that learning about climate change—and seeing adults’ inaction—contributed to a severe depression during which she stopped eating and drinking…other activists are turning the violence of climate change on themselves—like David Buckel, a human rights lawyer who in 2018 lit himself on fire in Prospect Park, in Brooklyn, to call attention to the scale of the climate plight…Quante told me that one of her earliest memories was learning that so many things around her were alive—the trees, the grass, the frogs. It terrified her to realize the harm she was capable of. One day, after it had rained, her mother made her walk along a worm-strewn sidewalk, and she screamed as she was dragged along. “We’re killing them!” she said. “We’re killing them!”…Van Susteren started having trouble sleeping. After getting into bed and closing her eyes, she would be ambushed by intrusive images. She would see refugees surrounded by barbed wire, animals trapped in the path of a hurricane, people stranded in floodwaters. The worst image was of a child. It wasn’t any child she knew, but a sort of representative for all children. The child looked at Van Susteren and asked the same question again and again: “Why didn’t you do anything?” As a psychiatrist, Van Susteren recognized her symptoms. The stress, the insomnia, the intrusive thoughts—they read like PTSD. And yet the trauma she was imagining hadn’t happened yet, or at least it hadn’t happened to her…Van Susteren coined a new term for her condition: pre-traumatic stress disorder…In the back of the class, a student started crying. “If I didn’t have hope, how could I live?” she asked.

    …Robert Salo, the doctor who diagnosed the Australian boy with climate psychosis, was careful to note the boy’s other symptoms (long-term depression, suicidal thoughts, and hearing voices) and the disproportionate sense of importance he placed on his own actions (believing that his own small water usage would lead to widespread deaths). Other critics have pointed out that climate delusion usually afflicts people who already suffer from other mental health maladies, and that the triggers for psychotic episodes generally take the form of the dominant political or cultural issues of the time, from nuclear holocaust to Cold War-era fears about the spread of communism.

  51. ⁠, Joseph Homer Saleh (2019-12-23):

    Popular culture associates the lives of Roman emperors with luxury, cruelty, and debauchery, sometimes rightfully so. One missing attribute in this list is, surprisingly, that this mighty office was most dangerous for its holder. Of the 69 rulers of the unified Roman Empire, from Augustus (d. 14 CE) to Theodosius (d. 395 CE), 62% suffered violent death. This has been known for a while, if not quantitatively at least qualitatively. What is not known, however, and has never been examined is the time-to-violent-death of Roman emperors. This work adopts the statistical tools of survival data analysis to an unlikely population, Roman emperors, and it examines a particular event in their rule, not unlike the focus of reliability engineering, but instead of their time-to-failure, their time-to-violent-death. We investigate the temporal signature of this seemingly haphazard stochastic process that is the violent death of a Roman emperor, and we examine whether there is some structure underlying the randomness in this process or not. Nonparametric and parametric results show that: (1) emperors faced a statistically-significantly high risk of violent death in the first year of their rule, which is reminiscent of infant mortality in reliability engineering; (2) their risk of violent death further increased after 12 years, which is reminiscent of wear-out period in reliability engineering; (3) their failure rate displayed a bathtub-like curve, similar to that of a host of mechanical engineering items and electronic components. Results also showed that the stochastic process underlying the violent deaths of emperors is remarkably well captured by a (mixture) Weibull distribution. We discuss the interpretation and possible reasons for this uncanny result, and we propose a number of fruitful venues for future work to help better understand the deeper etiology of the spectacle of regicide of Roman emperors.
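
    For readers unfamiliar with the survival-analysis toolkit being borrowed here, the sketch below illustrates the nonparametric half of such an analysis: a Kaplan–Meier estimate of the probability that an emperor “survives” (avoids violent death) beyond a given year of rule, treating deaths from other causes as censored. The reign lengths are made up for illustration; the paper’s actual data and Weibull-mixture fit are not reproduced:

    ```python
    import numpy as np

    # Hypothetical reign lengths in years (NOT the paper's data) and event flags:
    # 1 = violent death observed, 0 = censored (natural death, abdication, etc.).
    time  = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 14.0, 16.0, 20.0])
    event = np.array([1,   1,   0,   1,   0,   1,   1,    1,    0,    1  ])

    order = np.argsort(time)
    n_at_risk = len(time)
    surv = 1.0
    for t, e in zip(time[order], event[order]):
        if e == 1:  # product-limit update only at observed violent deaths
            surv *= (n_at_risk - 1) / n_at_risk
            print(f"t = {t:5.1f} yr   S(t) = {surv:.3f}")
        n_at_risk -= 1  # censored emperors simply leave the risk set
    ```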

  52. 1999-lee.pdf: ⁠, C. T. Lee, P. Williams, W. A. Hadden (1999-05; sociology):

    All parachute injuries from two local parachute centres over a 5-year period were analysed. Of 174 patients with injuries of varying severity, 94% were first-time charity-parachutists. The injury rate in charity-parachutists was 11% at an average cost of £3751 per casualty. 63% of casualties who were charity-parachutists required hospital admission, representing a serious injury rate of 7%, at an average cost of £5781 per patient. The amount raised per person for charity was £30. Each pound raised for charity cost the NHS £13.75 in return. Parachuting for charity costs more money than it raises, carries a high risk of serious personal injury and places a substantial burden on health resources.
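
    The headline “£13.75 per pound raised” follows directly from the figures quoted in the abstract, assuming the 11% injury rate and the £3,751 average cost apply per charity jumper:

    ```python
    injury_rate = 0.11        # injury rate among charity-parachutists
    cost_per_casualty = 3751  # average NHS cost per casualty (GBP)
    raised_per_jumper = 30    # average amount raised for charity per person (GBP)

    expected_nhs_cost = injury_rate * cost_per_casualty  # ~GBP 413 expected cost per jumper
    print(expected_nhs_cost / raised_per_jumper)         # ~13.75 GBP of NHS cost per GBP 1 raised
    ```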

  53. ⁠, Katherine Haenschen, Daniel J. Tamul (2019-12-20):

    Although extensive political communication research considers the content of candidate messages, scholars have largely ignored how those words are rendered—specifically, the typefaces in which they are set. If typefaces are found to have political attributes, that may impact how voters receive campaign messages. Our paper reports the results of two survey experiments demonstrating that individuals perceive typefaces, type families, and type styles to have ideological qualities. Furthermore, partisanship moderates subjects’ perceptions of typefaces: Republicans generally view typefaces as more conservative than Independents and Democrats. We also find evidence of affective polarization, in that individuals rate typefaces more favorably when perceived as sharing their ideological orientation. Results broaden our understanding of how meaning is conveyed in political communication, laying the groundwork for future research into the functions of typography and graphic design in contemporary political campaigns. Implications for political practitioners are also discussed. Keywords: Political communication, ideology, partisanship, typeface, graphic design. [Ranking: Blackletter, Times New Roman, Jubilat, Gill Sans, Birds of Paradise, Century Gothic, Sunrise.]

  54. https://www.dafont.com/sunrise-2.font

  55. 2019-forscher.pdf: ⁠, Patrick Forscher, Calvin Lai, Jordan Axt, Charles Ebersole, Michelle Herman, Patricia Devine, Brian Nosek (2019-08-19; psychology):

    Using a novel technique known as network meta-analysis, we synthesized evidence from 492 studies (87,418 participants) to investigate the effectiveness of procedures in changing implicit measures, which we define as response biases on implicit tasks. We also evaluated these procedures’ effects on explicit and behavioral measures. We found that implicit measures can be changed, but effects are often relatively weak (|ds| < .30). Most studies focused on producing short-term changes with brief, single-session manipulations. Procedures that associate sets of concepts, invoke goals or motivations, or tax mental resources changed implicit measures the most, whereas procedures that induced threat, affirmation, or specific moods/​​​​emotions changed implicit measures the least. Bias tests suggested that implicit effects could be inflated relative to their true population values. Procedures changed explicit measures less consistently and to a smaller degree than implicit measures and generally produced trivial changes in behavior. Finally, changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior.

  56. 2008-macknik.pdf: ⁠, Stephen L. Macknik, Mac King, James Randi, Apollo Robbins, Teller, John Thompson, Susana Martinez-Conde (2008-07-30; psychology  /​ ​​ ​illusion-of-depth):

    Just as vision scientists study visual art and illusions to elucidate the workings of the visual system, so too can cognitive scientists study cognitive illusions to elucidate the underpinnings of cognition. Magic shows are a manifestation of accomplished magic performers’ deep intuition for and understanding of human attention and awareness. By studying magicians and their techniques, neuroscientists can learn powerful methods to manipulate attention and awareness in the laboratory. Such methods could be exploited to directly study the behavioural and neural basis of consciousness itself, for instance through the use of brain imaging and other neural recording techniques.

    Table 1: Types of conjuring effects (columns: magic effect · examples · methodological strategies). [We adopt Lamont and Wiseman’s classification of conjuring or magic effects into 9 main categories.]
    Appearance: an object appears ‘as if by magic’. Examples: Pulling a rabbit out of a hat; the Miser’s Dream (in which hundreds of coins seem to appear where previously there were none; BOX 2, Supplementary information S2 (movie)); Mac King’s giant rock in a shoe trick (Supplementary information S3 (movie))
    • The object was already there but was concealed (for example, the magician might conceal a coin in his or her hand prior to its production)
    • The object was secretly put into position (for example, in the Cups and Balls routine, various objects are secretly loaded under the cups during the routine)
    • The object is not there but seems to be (for example, a ‘medium’ can simulate the presence of a spirit at a seance by secretly touching a spectator)
    Vanish: an object disappears ‘as if by magic’. Examples: Vanishing of a coin; Penn and Teller’s underwater vanishing of a naval submarine; David Copperfield’s vanishing of the Statue of Liberty.
    • The object was not really where it appeared to be to begin with (for example, the magician fakes a transfer of a coin from the left hand to the right hand, then shows that the coin ‘disappeared’ from the right)
    • The object has been secretly removed (for example, the magician uses a secret device, called a gimmick, to pull an object into his sleeve)
    • The object is still there but is concealed (a coin can seem to vanish from the magician’s hand although in reality it is merely concealed)
    Transposition: an object changes position in space from position A to position B. Examples: Houdini’s Metamorphosis (in which 2 people change places between locked boxes); Penn and Teller’s Hanging Man trick (in which Penn is apparently hanged to death, only to be found safe and sound in the audience)
    • The object seemed to be at A, but actually was already at B (for example, the magician fakes the transfer of a coin from the right to the left hand, then pretends to transfer the coin magically from left to right)
    • The object is still at A but seems to be at B (for example, the magician fakes a coin transfer from the left hand to the right and then, when revealing the coin by dropping it, uses sleight of hand to give the impression that it was dropped from the right hand)
    • The object was secretly moved from A to B (for example, a coin in the left hand is secretly transferred to the right hand and then is revealed there)
    • A duplicate object is used (for example, both hands hold identical coins that are revealed at different times to simulate a transfer)
    Restoration: an object is damaged and then restored to its original condition. Examples: Cutting and restoring a rope; sawing an assistant in half; tearing and restoring a newspaper; breaking and restoring rubber bands
    • The object was not really damaged
    • The object was not really restored
    • A duplicate is used
    Penetration: matter seems to magically move through matter. Examples: Chinese Linking Rings (metal rings that link and unlink magically); Houdini’s Walking Through A Wall trick; Coins Through The Table [or the Vanishing Bird Cage trick, fictionalized in an extreme way by The Prestige]
    • Penetrations combine the techniques used in the transposition and restoration categories
    Transformation: an object changes form (size, color, shape, weight, etc.). Examples: Colour-Changing Card Trick; Spellbound (in which a coin turns into a different coin); the Professor’s Nightmare (in which 3 ropes of different length are made equal in length)

    Transformations can be seen as the vanishing of object A combined with the appearance of object B:

    • Object A was secretly switched with object B
    • Object B was always present but was initially disguised as object A
    • Object A is disguised as object B at the point of ‘transformation’
    Extraordinary feats (including mental and physical feats). Examples: Extraordinary memory (remembering the names of all the audience members); extraordinary calculation (reporting the result of multiplying randomly selected 4-digit numbers); extraordinary strength; invulnerability (specific examples: walking on hot coals; Penn and Teller’s bullet-catching trick)
    • Might rely on relatively obscure scientific knowledge (such as mathematical or physiological knowledge). For example, walking on hot coals is harmless when performed correctly
    Telekinesis: ‘magical’ levitation or animation of an object. Examples: Levitation; spoon bending
    • The action is caused by an external force (for example, an invisible thread)
    • The action is caused by an internal force (elasticity, chemical reaction, magnetism, etc.)
    • The action did not actually occur (for example, a spoon bender can convince a spectator that a stationary spoon is still bending)
    Extrasensory perception (ESP; including clairvoyance, telepathy, precognition, mental control, etc.). Examples: Clairvoyance (acquiring information that is not known to others through ESP); telepathy (acquiring information that is known to others through ESP); precognition (acquiring information from the future); mental control (the performer influences the selection process of another person)
    • Controlling a spectator’s choices to give the illusion of free will
    • Discovering hidden information (for example, reading information that has been sealed in an envelope, fishing for or pumping information from a spectator, cold reading, etc.)
    • Revealing apparent proof that information announced by the spectator was previously known by the magician (for example, by writing the announcement on paper and using sleight of hand to make the paper seem to come out of an envelope that was sealed before the announcement)
  57. ⁠, Ariel Levy (2020-01-06):

    Cameron is entirely insensitive to physical pain. As a child, she fell and hurt her arm while roller-skating, but had no idea she’d broken it until her mother noticed that it was hanging strangely. Giving birth was no worse…Cameron was having a trapeziectomy, an operation to remove a small bone at the base of the thumb joint. Though her hands never hurt, they’d become so deformed by arthritis that she couldn’t hold a pen properly. She’d had a similar experience with her hip, which had recently been replaced; it didn’t hurt, but her family noticed that she wasn’t walking normally. She saw her local doctor about it several times, but the first question was always “How much pain are you in?” And the answer was always “None.” (“The third time I was there I think they figured, ‘We’ll just take an X-ray to shut this woman up’”, Cameron told me. “Then the X-ray came in and it was really bad. Everything was all distorted and mangled and crumbling. He said, ‘Wow. This has got to be done.’”)…Cameron is beguiled by the idea that she can help alleviate others’ suffering—she remembers the terrible migraines that tormented her mother. Her father, however, was pain-free. “I never saw him take an aspirin”, Cameron said. “I’m convinced he was the same as me, because I never heard my father complaining about any pain, ever. He died suddenly, of a brain hemorrhage—I think other people would have had a warning.” ·…People with severe congenital neuropathy tend to die young, because they injure themselves so frequently and severely. (Without pain, children are in constant danger. They swallow something burning hot, the esophagus ruptures, bacteria spill into the internal organs, and terminal sepsis sets in. They break their necks roughhousing. To protect some patients, doctors have removed all their teeth to prevent them from chewing off their tongues and bleeding to death.) ·…Cameron does not have neuropathy: she can feel all the sensations the rest of us do, except pain. The most striking difference between her and everyone else is the way she processes endocannabinoids—chemicals that exist naturally in every human brain. Endocannabinoids mitigate our stress response, and they bind to the same receptors as the THC in the kind of cannabis you smoke. Normally, they are broken down by an enzyme called fatty acid amide hydrolase, or FAAH. But Cameron has a mutation on her FAAH gene that makes the enzyme less effective—so her endocannabinoids build up. She has extraordinarily high levels of one in particular: anandamide, whose name is derived from the Sanskrit word for “bliss.” · About a third of the population has a mutation in the FAAH gene, which provides increased levels of anandamide. “That phenotype—low levels of anxiety, forgetfulness, a happy-go-lucky demeanor—isn’t representative of how everyone responds to cannabis, but you see a lot of the prototypical changes in them that occur when people consume cannabis”, said Matthew Hill, a biologist at the University of Calgary’s Hotchkiss Brain Institute, who was a co-author of the Cameron paper. The FAAH gene, like every gene, comes in a pair. People who have the mutation in one allele of the gene seem a little high; people who have it in both even more so. Jo Cameron is fully baked. “When I met Jo for the first time, I was just struck by her”, Cox, an affable forty-year-old with a scruffy beard, told me, one afternoon in his lab at U.C.L. “She was very chatty. Did you notice that?” (It’s hard to miss.) 
“I said to her, ‘Are you worried about what’s going to happen today?’ Because she was meeting our clinicians to have a skin biopsy and do quantitative sensory testing—pain-threshold tests. She said, ‘No. In fact, I’m never worried about anything.’” Cox told me that it was difficult to get through everything in the time they’d allotted, because Cameron was so friendly and loquacious with the scientists, even as they burned her, stuck her with pins, and pinched her with tweezers until she bled. This imperviousness to pain is what makes her distinct from everyone else with a FAAH mutation. They, like even the most committed stoners, can still get hurt. ·…I asked Matthew Hill—a renowned expert on cannabinoids and stress—if there was any downside to Cameron’s biology, and he laughed out loud. “Yes! From an evolutionary perspective, it would be tremendously destructive for a species to have that”, he said. Without fear, you drown in waves that you shouldn’t be swimming in; you take late-night strolls in cities that you don’t know; you go to work at a construction site and neglect to put on a hard hat. “Her phenotype is only beneficial in an environment where there is no danger”, Hill asserted. “If you can’t be concerned about a situation where you’d be at risk of something adverse happening to you, you are more likely to put yourself in one. Anxiety is a highly adaptive process: that’s why every mammalian species exhibits some form of it.” · Unlike other pain-insensitive people, Cameron has made it into her seventies without getting badly hurt. Sometimes she realizes that she’s burning her hand on the stove because she smells singeing; sometimes she cuts herself in the garden and sees that she’s bleeding. But none of that has been severe, and Cameron did raise two children safely into adulthood. “The human brain is very capable of learning, ‘This is what’s appropriate to do in this situation’”, Hill said. Cameron’s relative cautiousness may have developed imitatively. “And there may not have been that much threat presented to her—she’s lived in a rural community in Scotland”, he concluded. “Maybe she hasn’t had to deal with that much that would physically or emotionally harm her.” ·…One complicating question is how much of Cameron’s Cameronness is really a consequence of her FAAH mutation and FAAH OUT deletion. She has plenty of other genes, after all, and her upbringing and her early environment also played a role in making her who she is. Since the paper was published, Matthew Hill has heard from half a dozen people with pain insensitivity, and he told me that many of them seemed nuts. “If you had this phenotype and weren’t a generally pleasant person like Jo—maybe you’re, like, a douche-y frat boy—the way that you would process this might be entirely different. Our whole perception of this phenotype is explicitly based on the fact that it was Jo who presented it.”

  58. Backstop

  59. ⁠, Scott Alexander (2020-01-08):

    [Scott Alexander looks back on how his ideas/beliefs evolved over the past decade of blogging at Jackdaws/LessWrong/SlateStarCodex. Primary topics:

    1. Bayesian predictive coding as a unified theory of brain perception, control, behavior, and psychiatric disorders as bad priors/​​​​​​updates

      • Psychedelic use as modifying brain priors, explaining how psychedelics affect and sometimes benefit their users
      • trauma/​​​​​​​attachment disorder
    2. Philosophy of mental disease

    3. efficacy of SSRIs

    4. Genetics of psychiatric disorders, especially autism/​​​​​​transsexuals: ???

    5. Willpower: also predictive coding???

    6. Diet/​​​​​​weight loss: setpoints, somehow

    7. Existential risk: dissolving the Great Filter, raising AI risk awareness

    8. Secular stagnation: progress is slowing, perhaps because human populations aren’t growing exponentially

      • Baumol’s cost disease as core cause of economic stagnation and political backlash
    9. The Replication Crisis: even worse than he thought

    10. Psychological effects:

      • Placebo effect: much more powerless than he thought
      • Birth order effects: much more powerful than he thought
    11. Utilitarianism: still confused, but more towards rule-utilitarianism

    12. Politics: social media turbocharging tribalism/​​​​​​outgroup-bias

    13. Ideology of liberalism and SJWism

    14. Coordination problems as core problem of politics

    15. Enlightenment: not actually that great, possibly wireheading]

  60. ⁠, Peixun Zhang, Tianbing Wang, Jian Xiong, Feng Xue, Hailin Xu, Jianhai Chen, Dianying Zhang, Zhongguo Fu, Baoguo Jiang (2014):

    The panda is regarded as a Chinese national treasure. Most people think of pandas as cute animals that just eat bamboo, and never imagine that a panda could be vicious. Giant panda attacks on humans are rare. Here, we present three cases of giant panda attacks on humans at the Panda House at Beijing Zoo from September 2006 to June 2009 to warn people of the giant panda’s potentially dangerous behavior.

  61. {#linkBibliography-(wired)-2020 .docMetadata}, Sage Lazzaro (Wired) (2020-01-22):

    Whenever Courtney Cirone grabs her iPad, her cat Cooper runs over as though a bag of treats had just been shaken. He wants to watch YouTube⁠, specifically videos of squirrels and tiny birds scurrying about. “His eyes get super big, and he moves his head back and forth following the animals”, Cirone says. “He ducks his head down low like he’s hiding. One time he looked at me, meowing, like, ‘HELP ME CATCH THIS BASTARD.’” Cooper paws relentlessly at the screen, sometimes lunging at it head-first in an attempt to catch his digital prey. He loves these videos (along with clips of Dr. Phil). He’s so obsessed that Cirone limits his viewing to three times per week, because he sits very close and she’s cautious about protecting his eyes. When she turns her iPad off, he even sulks. If this sounds strange, it is and it’s not: Cats, famously the subjects of online videos, now sit on the other side, watching…Now she puts cat-targeted YouTube videos on for Jasper a few times weekly. He loves them so much that he’ll sit in front of the TV or in between Gall and her laptop to signal that he wants to watch.

    Beyond all the content for humans, there’s a growing world on YouTube specifically for our feline friends. Loved by certain cat owners and occasionally championed by veterinarians and animal scientists, these videos tap into cats’ instincts to stalk, chase, and hunt. Cat-targeted footage of small animals is particularly popular on the platform, posted by channels like Little Kitty & Family, Handsome Nature, and Videos for Your Cat. One of the most prolific creators, Paul Dinning, has posted hundreds of videos for cats, including an eight-hour “Bird Bonanza” that’s amassed almost 7 million views. According to YouTube’s Trends and Insights team, Dinning created eight of the 10 most-viewed videos for cats in 2019…In 2019, videos containing the phrase “videos for cats” were viewed over 55 million times on the platform, up 41% from 2018. “We now have this world where cats are an emerging audience”, Pettie says, “and movies for cats are an emerging trend.”…According to YouTube, videos targeted at dogs garnered only 6 million views last year.

    …Cat Games creator Max Gomboev, a motion designer from Russia, first started making these videos as a tribute to his late cat. After seeing how much other cat owners liked them, and how much more they offered than cat-targeted mobile apps like Cat Fishing 2, which have much less variety, he started making videos more regularly. “It’s easier than installing an app, and you can show my videos on a TV”, Gomboev says. “Usually, I create a new video every 10 days. Cats like to watch something new.”

  62. https://www.youtube.com/watch?v=xbs7FT7dXYc

  63. 2020-richmondrakerd.pdf: ⁠, Leah S. Richmond-Rakerd, Stephanie D’Souza, Signe Hald Andersen, Sean Hogan, Renate M. Houts, Richie Poulton, Sandhya Ramrakha, Avshalom Caspi, Barry J. Milne, Terrie E. Moffitt (2020-01-20; sociology):

    Health and social scientists have documented the hospital revolving-door problem, the concentration of crime, and long-term welfare dependence. Have these distinct fields identified the same citizens? Using administrative databases linked to 1.7 million New Zealanders, we quantified and monetized inequality in distributions of health and social problems and tested whether they aggregate within individuals. Marked inequality was observed: Gini coefficients equalled 0.96 for criminal convictions, 0.91 for public-hospital nights, 0.86 for welfare benefits, 0.74 for prescription-drug fills and 0.54 for injury-insurance claims. Marked aggregation was uncovered: a small population segment accounted for a disproportionate share of use-events and costs across multiple sectors. These findings were replicated in 2.3 million Danes. We then integrated the New Zealand databases with the four-decade-long Dunedin Study. The high-need/​​​​high-cost population segment experienced early-life factors that reduce workforce readiness, including low education and poor mental health. In midlife they reported low life satisfaction. Investing in young people’s education and training potential could reduce health and social inequalities and enhance population wellbeing.
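
    For intuition about what a Gini coefficient of 0.96 means for something like criminal convictions, here is a minimal sketch using the standard Gini formula on toy data (not the paper’s): a distribution in which 90% of people have zero convictions and one person has most of them lands near that value.

    ```python
    import numpy as np

    def gini(x):
        """Gini coefficient of a non-negative array: 0 = perfect equality,
        values near 1 = a few individuals account for nearly everything."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        ranks = np.arange(1, n + 1)
        return 2 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n

    # Toy convictions data: 90 people with none, 5 with one, 4 with ten, 1 with fifty.
    toy = [0] * 90 + [1] * 5 + [10] * 4 + [50]
    print(round(gini(toy), 2))  # ~0.96, comparable to the figure reported for convictions
    ```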

  64. 2020-richmondrakerd-figure4-correlations.png

  65. 2016-belsky.pdf: ⁠, Daniel W. Belsky, Terrie E. Moffitt, David L. Corcoran, Benjamin Domingue, HonaLee Harrington, Sean Hogan, Renate Houts, Sandhya Ramrakha, Karen Sugden, Benjamin S. Williams, Richie Poulton, Avshalom Caspi (2016-06-01; genetics  /​ ​​ ​correlation):

    A previous genome-wide association study (GWAS) of more than 100,000 individuals identified molecular-genetic predictors of educational attainment.

    We undertook in-depth life-course investigation of the polygenic score derived from this GWAS using the 4-decade Dunedin Study (N = 918). There were 5 main findings.

    1. polygenic scores predicted adult economic outcomes even after accounting for educational attainments.
    2. genes and environments were correlated: Children with higher polygenic scores were born into better-off homes.
    3. children’s polygenic scores predicted their adult outcomes even when analyses accounted for their social-class origins; social-mobility analysis showed that children with higher polygenic scores were more upwardly mobile than children with lower scores.
    4. polygenic scores predicted behavior across the life course, from early acquisition of speech and reading skills through geographic mobility and mate choice and on to financial planning for retirement.
    5. polygenic-score associations were mediated by psychological characteristics, including intelligence, self-control, and interpersonal skill. Effect sizes were small.

    Factors connecting GWAS sequence with life outcomes may provide targets for interventions to promote population-wide positive development.

    [Keywords: genetics, behavior genetics, intelligence, personality, adult development]

  66. ⁠, Avshalom Caspi, Renate M. Houts, Daniel W. Belsky, Honalee Harrington, Sean Hogan, Sandhya Ramrakha, Richie Poulton, Terrie E. Moffitt (2016):

    Policy-makers are interested in early-years interventions to ameliorate childhood risks. They hope for improved adult outcomes in the long run, bringing return on investment. How much return can be expected depends, partly, on how strongly childhood risks forecast adult outcomes. But there is disagreement about whether childhood determines adulthood. We integrated multiple nationwide administrative databases and electronic medical records with the four-decade Dunedin birth-cohort study to test child-to-adult prediction in a different way, by using a population-segmentation approach. A segment comprising one-fifth of the cohort accounted for 36% of the cohort’s injury insurance-claims; 40% of excess obese-kilograms; 54% of cigarettes smoked; 57% of hospital nights; 66% of welfare benefits; 77% of fatherless childrearing; 78% of prescription fills; and 81% of criminal convictions. Childhood risks, including poor age-three brain health, predicted this segment with large effect sizes. Early-years interventions effective with this population segment could yield very large returns on investment.

  67. 2017-akam-theexquisitelyenglishandamazinglylucrativeworldoflondonclerks.html: {#linkBibliography-(news)-2017 .docMetadata}, Simon Akam (Bloomberg News) (2017-05-23; economics):

    Alex/​​​​John/​​​​Mark Taylor belongs to one of the last surviving professions of Dickensian London. Clerks have co-existed with chimney sweeps and gene splicers. It’s a trade that one can enter as a teenager, with no formal qualifications, and that’s astonishingly well-paid. A senior clerk can earn a half-million pounds per year, or more than $650,000, and some who are especially entrenched make far more.

    Clerks—pronounced “clarks”—have no equivalent in the U.S. legal system, and have nothing in common with the Ivy League-trained Supreme Court aides of the same spelling. They exist because in England and Wales, to simplify a bit, the role of lawyer is divided in two: There are solicitors, who provide legal advice from their offices, and there are barristers, who argue in court. Barristers get the majority of their business via solicitors, and clerks act as the crucial middlemen between the tribes—they work for and sell the services of their barristers, steering inquiring solicitors to the right man or woman. Clerks are by their own cheerful admission “wheeler-dealers”, what Americans might call hustlers. They take a certain pride in managing the careers of their bosses, the barristers—a breed that often combines academic brilliance with emotional fragility. Many barristers regard clerks as their pimps. Some, particularly at the junior end of the profession, live in terror of clerks. The power dynamic is baroque and deeply English, with a naked class divide seen in few other places on the planet. Barristers employ clerks, but a bad relationship can strangle their supply of cases. In his 1861 novel Orley Farm, Anthony Trollope described a barrister’s clerk as a man who “looked down from a considerable altitude on some men who from their professional rank might have been considered as his superiors.”…One of the most peculiar aspects of the clerk-barrister relationship is that clerks handle money negotiations with clients. Barristers argue that avoiding fee discussions keeps their own interactions with clients clean and uncomplicated, but as a consequence, they’re sometimes unaware of how much they actually charge. The practice also insulates and coddles them. Clerks become enablers of all sorts of curious, and in some cases self-destructive, behavior.

    …John Flood, a legal sociologist who in 1983 published the only book-length study of barristers’ clerks, subtitled The Law’s Middlemen, uses an anthropological lens to explain the relationship. He suggests that barristers, as the de facto priests of English law—with special clothes and beautiful workplaces—require a separate tribe to keep the temple flames alight and press money from their congregation. Clerks keep barristers’ hands clean; in so doing they accrue power, and they’re paid accordingly. I asked more than a dozen clerks and barristers, as well as a professional recruiter, what the field pays. Junior clerks, traditionally recruited straight after leaving school at 16 and potentially with no formal academic qualifications, start at £15,000 to £22,000 ($19,500 to $28,600); after 10 years they can make £85,000. Pay for senior clerks ranges from £120,000 to £500,000, and a distinct subset can earn £750,000. The Institute of Barristers’ Clerks disputed these figures, saying the lows were too low and the highs too high. But there’s no doubt that the best clerks are well-rewarded. David Grief, 63, a senior clerk at the esteemed Essex Court Chambers, spoke to me enthusiastically about his personal light airplane, a TB20 Trinidad.

    …Before the U.K. decimalized its currency in 1971, clerks received “shillings on the guinea” for each case fee. Under the new money system, the senior clerks’ take was standardized at 10% of their chambers’ gross revenue. Sometimes, but not always, they paid their junior staff and expenses out of this tithe. Chambers at the time were typically small, four to six barristers strong, but in the 1980s, they grew. As they added barristers and collected more money, each chambers maintained just one chief clerk, whose income soared. The system was opaque: The self-employed barristers didn’t know what their peers within their own chambers were paid, and in a precomputer age, with all transactions recorded in a byzantine paper system, barristers sometimes didn’t know what their clerks earned, either. Jason Housden, a longtime clerk who now works at Matrix Chambers, told me that, when he started out in the 1980s at another office, his senior clerk routinely earned as much as the top barristers and on occasion was the best-paid man in the building. · One anecdote from around the same time, possibly apocryphal, is widely shared. At a chambers that had expanded and was bringing in more money, three silks decided their chief clerk’s compensation, at 10%, had gotten out of hand. They summoned him for a meeting and told him so. In a tactical response that highlights all the class baggage of the clerk-barrister relationship, as well as the acute British phobia of discussing money, the clerk surprised the barristers by agreeing with them. “I’m not going to take a penny more from you”, he concluded. The barristers, gobsmacked and paralyzed by manners, never raised the pay issue again, and the clerk remained on at 10% until retirement. · Since the 1980s, fee structures have often been renegotiated when a senior clerk retires. Purely commission-based arrangements are now rare—combinations of salary and incentive are the rule, though some holdouts remain. Goddard told me last summer that he receives 3% of the entire take of the barristers at 4 Stone; later he said this was inaccurate, and that his pay was determined by a “complicated formula.” (Pupil barristers, as trainees are known, start there at £65,000 per year, and the top silks each make several million pounds.) · The huge sums that clerks earn, at least relative to their formal qualifications, both sit at odds with the feudal nature of their employment and underpin it. In some chambers, clerks still refer to even junior barristers as “sir” or “miss.” Housden remembers discussing this issue early in his career with a senior clerk. He asked the man whether he found calling people half his age “sir” demeaning. The reply was straightforward: “For three-quarters of a million pounds per year, I’ll call anyone sir.”

  68. Subscripts

  69. 2005-kanda.pdf: ⁠, Fusae Kanda (2005; japanese):

    The kusōzu, “painting of the nine stages of a decaying corpse”, portrays the sequential decay of a female cadaver in graphic detail. The shocking subject, rooted in Buddhist devotional practices, was regularly painted and reinterpreted during half a millennium of Japanese art. The images of a decaying corpse were charged with contextualized functionalities that have gone unrecognized in current scholarship. Through an examination of four major exemplars of the genre, this study shows how new meanings of the image were catalyzed by religious and social transformations.

    The kusozu, “painting of the nine stages of a decaying corpse” (hereafter, painting of the nine stages), was executed in Japan from approximately the thirteenth through the nineteenth centuries in various formats, including handscrolls, hanging scrolls, and printed books. The subject itself is derived from a traditional Buddhist doctrine that urges contemplation on the nine stages of a decaying corpse (kusokan, hereafter, contemplation on the nine stages). The teaching dates to the early fifth century and promotes a systematic meditation on the impurity of a decaying corpse as an aid to ardent devotees who wish to liberate themselves from sensual desires and affections.

    This paper explores unrecognized features of the paintings of the nine stages as they appear through almost half a millennium of Japanese art. We will see that these narrative paintings functioned as distinct visual agents for audiences in different eras. The functionality of the image shifted from a meditative focus for pietistic catharsis, to a didactic incentive for the pursuit of paradise, to an intercessory offering for the dead at merit transferal rites, to a popularized platform for politically manipulated precepts on feminine morality. After giving the textual and theological background for the nine stages of a decaying corpse, I will examine four images of the nine stages from different centuries, which I term the Nakamura, Raigoji, Dainenbutsuji, and Akagi versions. Finally, some remarks are offered on the enduring vitality of this sensational subject.

  70. ⁠, Strange Remains (2014-06-24):

    “Body of a Courtesan in Nine Stages” was painted on a handscroll by Japanese artist Kobayashi Eitaku in the 1870s. It’s not unusual for artists to study corpses and body parts because of their need to learn about the human form, and because of the historical connection between the science of anatomy and artistic illustration. What makes this style unique is that it’s part of a Japanese artistic tradition devoted specifically to the study of human postmortem changes that stretches back hundreds of years.

    “Body of a Courtesan in Nine Stages” is an example of kusozu, the illustration of a decomposing corpse, that was popular in Japanese art from about the 13th to 19th centuries…Though the painting may be religious and/or scientific in nature, according to the British Museum it also has erotic themes. Because the subject matter is a courtesan, the curator notes for this piece at the British Museum say that this handscroll also falls into the genre of erotic art, or shunga. The word shunga means ‘picture of spring’ in Japanese. The word “spring” is a common synonym for sex. Below are all 9 panels. All images come from The British Museum.

  71. ⁠, Strange Remains (2015-03-06):

    I think I might be obsessed with kusozu, Japanese watercolor paintings that graphically depict human decomposition, which were popular between the 13th and 19th centuries; “Body of a Courtesan in Nine Stages” is another series in this genre featured previously on this site. Kusozu works of art were inspired by Buddhist beliefs and these paintings were meant to encourage people to ponder the temporary nature of the physical world. Kusozu watercolors also happen to be fantastic early studies of human decay and taphonomy, which is why one series, titled Kusozu: the death of a noble lady and the decay of her body, is currently on display as part of the “Forensics: The Anatomy of Crime” exhibit in London.

    According to the Wellcome Collection, Kusozu: the death of a noble lady and the decay of her body was painted some time in the 18th century. The below scenes include: (1) the woman’s impending death and her preparation for it; (2) the noble woman has just passed away and her loved ones are seated around her; (3) slight skin discoloration (maybe some livor mortis) and a bit of bloating during early decomposition; (4) the onset of putrefaction with bloating and marbling; (5) advanced decomposition as seen by pervasive marbling, leakage of purge fluid from the mouth, and the bursting open of the abdominal cavity; (6) caving of the abdominal cavity and scavenging animals; (7) start of skeletonization and the disappearance of soft tissue; (8) complete skeletonization and scattering of remains; (9) finally, human remains have been completely scattered or consumed by unseen animals, so all that remains is a memorial for the deceased woman.

  72. ⁠, Alvaro de Menard (2020-01-17):

    [Summary of the that gripped Western classical literary scholarship for centuries: who wrote the Iliad/​​​​Odyssey, when, and how? They appear in Greek history out of nowhere: 2 enormously lengthy, sophisticated, beautiful, canonical, unified works that would dominate Western literature for millennia, and yet, appeared to draw on no earlier tradition nor did Homer have any earlier (non-spurious) works. How was this possible?

    The iconoclastic Analysts proposed it was a fraud, and the works were pieced together later out of scraps from many earlier poets. The Unitarians pointed to the overall quality; the complex (apparently planned) structure; the disagreements of Analysts on what parts were what pieces; and the Analysts’ inability to explain many anomalies in Homer: there are passages splicing together Greek dialects, passages which were metrical only given long-obsolete Greek letters/​​​​pronunciations, and even individual words which mixed up Greek dialects! (Not that these anomalies were all that much easier to explain by the Unitarian hypothesis of a single author).

    The eventual resolution relied on an old hypothesis: that Homer was in fact the product of a lost oral tradition. There was, unfortunately, no particular evidence for it, and so it never made any headway against the Analysts or Unitarians—until Milman Parry found a living oral tradition of epic poetry in the Balkans, and discovered in it all the signs of the Homeric poems, from repetitive epithets to a patchwork of dialects, and thus empirical examples of how long oral traditions could produce a work like Homer if one of them happened to get written down at some point.]

  73. 1933-parry.pdf: ⁠, Milman Parry (1933; history):

    In this essay on the method to be used in the comparative study of early poetries the view is set forth that the essential feature of such poetry is its oral form, and not such cultural likenesses as have been called “popular”, “primitive”, “natural”, or “heroic.” As an example of method those numerous cases are considered where we find both in Homer and in Southslavic heroic song a verse which expresses the same idea. The explanation is as follows. Oral poetry is largely composed out of fixed verses. Especially will ideas which recur with any frequency be expressed by a fixed verse. Thus where the two poetries express the same frequent idea they both tend to do it in just the length of a verse. Knowing this common feature in the oral form of the two poetries we can conclude that the extraordinary hold which heroic poetry has on the thought and conduct of the Southern Slavs provides us with an example of what heroic poetry must have been for the early Greeks.

  74. 1932-borges-thehomericversions.pdf: ⁠, Jorge Luis Borges (1932; borges):

    [6pg essay on the literary merits of different translations of Homer and the problems of translation: the Newman-Arnold debate encapsulates the basic problem of literal vs. literary translation. Borges gives translations of one passage by Buckley, Butcher & Lang, Cowper, Pope, Chapman, and Butler. Which is best? See also Borges 1936, “The Translators of The Thousand and One Nights”, a much more extended discussion of different translations of a work.]

  75. 1936-borges-thetranslatorsofthethousandandonenights.pdf: ⁠, Jorge Luis Borges (1936; borges):

    [18pg Borges essay on translations of the collection of Arab fairytales, The Thousand and One Nights: each translator—Galland, Lane, Burton, Mardrus, Littmann—criticized the previous translator by creation.]

    At Trieste, in 1872, in a palace with damp statues and deficient hygienic facilities, a gentleman on whose face an African scar told its tale-Captain Richard Francis Burton, the English consul-embarked on a famous translation of the Quitab alif laila ua laila, which the roumis know by the title The Thousand and One Nights. One of the secret aims of his work was the annihilation of another gentleman (also weather-beaten, and with a dark and Moorish beard) who was compiling a vast dictionary in England and who died long before he was annihilated by Burton. That gentleman was Edward Lane, the Orientalist, author of a highly scrupulous version of The Thousand and One Nights that had supplanted a version by Galland. Lane translated against Galland, Burton against Lane; to understand Burton we must understand this hostile dynasty.

  76. ⁠, Christian Swinehart (2009):

    [Visualizing CYOA by generating graphs and coloring events by desirability; Swinehart observes distinct patterns in network types, harshness, linearity, and highlights various curious anomalies and tricks CYOA books could play on the reader.]

    …To get a sense for the distribution of pages within the actual CYOA books, I’ve prepared a dataset of 12 books. The earliest dates from 1979 and at the later edge are a handful from 1986. They are laid out chronologically (or according to series order for books released in the same year) with the oldest at the top left and more recent books below. Each book has been arranged into rows of ten pages apiece. In scanning over the distribution of colors in this plot, one clear pattern is a gradual decline in the number of endings. The earliest books (in the top row) are awash in reds and oranges, with a healthy number of ‘winning’ endings mixed in. Later CYOA books tended to favor a single ‘best’ ending (see CYOA 44 & 53). The most extreme case of this was actually not a Choose Your Own Adventure book at all but a gamebook offshoot of the Zork text adventure series. The Cavern of Doom (labeled WDIDN 3 above) has a virtually linear progression where endings later in the book are increasingly better than those on earlier pages. This is reflected in the nearly unbroken spectrum from red to blue when scanning down the rows. The one outlier is the catastrophic ending seen in the third row from the bottom. This was a punishment page that could only be reached by cheating. Unlike most other endings in the book it does not offer to let you continue the story from a few pages back but instead calls you a cheater and leaves you with no choice but to start over from the beginning. Another surprising change over time is the decline in the number of choices in the books. The mess of light grey boxes in the top row gives way to books like A New Hope (CYOASW 1) which have more pages devoted to linear narrative than to decisions and endings combined. But to address this apparent pattern with more rigor it would be best to look at the numbers of pages in each category independent of their placement in the book…I’d be very curious to know the reason for this progression toward linearity. Presumably the invisible hand was guiding this development, but whether the hunger was for less difficulty in the books or simply for something with more in the way of traditional storytelling is harder to unravel. I could also imagine that this balance between interaction and exposition was peculiar to the individual writers, so this could merely reflect a changing set of practitioners.
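
    [A minimal sketch, on a hypothetical miniature book rather than Swinehart’s actual code or dataset, of the page-graph bookkeeping behind such plots: each page maps to the pages its choices lead to, and a page is classified as an ending, linear narrative, or a decision point by its number of outgoing links:]

    ```python
    from collections import Counter

    # Hypothetical 6-page book: page number -> pages you can turn to next.
    book = {
        1: [2, 3],   # a decision
        2: [4],      # linear narrative
        3: [5, 6],   # a decision
        4: [],       # an ending
        5: [],       # an ending
        6: [1],      # linear; loops back to the start
    }

    def classify_pages(graph):
        kinds = {}
        for page, nexts in graph.items():
            if not nexts:
                kinds[page] = "ending"
            elif len(nexts) == 1:
                kinds[page] = "linear"
            else:
                kinds[page] = "choice"
        return Counter(kinds.values())

    print(classify_pages(book))   # counts of 'choice', 'linear', and 'ending' pages
    ```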

  77. ⁠, Christian Swinehart ():

    Christian Swinehart is a graphic designer, software developer, and data artist. His practice focuses on interaction and user interface design with a specialty in data visualization. He is the founder and principal of Samizdat Drafting Co. and is an active participant in the open-source world as the author of the PlotDevice and Arbor.js visualization tools.

    Christian’s work is informed by a background in biology and computational modeling. His projects frequently employ simulation and numerical analysis as a means to communicate the structure within complex systems. Recent clients include The New York Times, Bloomberg, Gallup, Pentagram, Diller Scofidio + Renfro, and Allied Works Architects.

    Degrees Held:

    • MFA | Graphic Design (RISD, 2008)
    • Ph.D. | Computational Neuroscience (Brandeis University, 2005)
    • BS | Cognitive Science (Dickinson College, 1998)

  78. {#linkBibliography-obscura)-2017 .docMetadata}, Sarah Laskow (Atlas Obscura) (2017-06-13):

    The last installment of the original series came out in 1998, but since 2004, Chooseco, founded by one of the series’ original authors, R. A. Montgomery, has been republishing classic volumes, as well as new riffs on the form of interactive fiction that seemed ubiquitous in the 1980s and ’90s. The new editions also carry an additional feature—maps of the hidden structure of each book.

    Tattoo of Death, Choose Your Own Adventure #22 (All maps courtesy of ChooseCo)

    For years, fans have been creating visualizations of the forking structures of “Choose Your Own Adventure” books. Often, they’re interested in the types of outcomes at the end of each path. One map labels each ending as “new life, return home, or death”, and another separates them into “cliffhanger, solution, or death.” Christian Swinehart’s extensive graphical analysis of the books labels the endings as “great, favorable, mediocre, disappointing, or catastrophic.”

    …Mapping the bones of the books can have other purposes, too. Nick Montfort, a poet and professor at the Massachusetts Institute of Technology who studies interactive fiction, has a habit of asking people what they know about “Choose Your Own Adventure” books. “They often say, ‘You have two choices after every page’”, he says. “That’s not true. Sometimes you have one choice. Sometimes you have more than two. When you show the maps, you can see that these books don’t look exactly the same.” The older volumes, for instance, tend to have more endings than the later ones, and three of the oldest—Journey Under the Sea, Space and Beyond, and By Balloon to the Sahara—have 42 endings each, more than any other books in the series…In just about every case, it can be surprising how a simple choice leads you down a complex path. In By Balloon to the Sahara, you’re in a balloon and are presented with a choice on the very first page. Storm clouds are on the horizon. Choice 1: “If you act now, you can release gas from the balloon and land before the storm overtakes you.” Choice 2: “Perhaps the storm will pass quickly. Maybe you can ride it out.” That’s just the beginning, since this book has the most decision points—48—of the series.

    …There is yet another possibility in these nonlinear books: hidden endings. Inside UFO 54-40 has a hidden ending that’s only available to a reader who ignores the decisions and flips to it without prompting. But it’s there. “It’s a two-page, big illustration of this city”, says Montfort, the MIT professor. “The land of Ultima. As you flip through the book, even if you’re being very obedient, you can’t help but wonder what this text is.”

    …Maps like the ones Chooseco created can reveal the structure of a book that gives readers choices, but though the multiple story lines are part of what makes the series so fun, they’re not the only thing that defines it. The meat of “Choose Your Own Adventure” stories is gender-neutral romps in worlds where there are no obviously right or wrong moral choices. There’s danger around every bend, usually in the form of something like space monkeys, malicious ghosts, or conniving grown-ups. Even with a map, there’s no way to find out what really comes next without making a choice and flipping to another page.

  79. {#linkBibliography-antiquarian)-2020 .docMetadata}, Jimmy Maher (The Digital Antiquarian) (2020-01-24):

    A typical game of Master of Orion plays out over three broad stages. The first stage is the land grab, the wide-open exploration and colonization phase that happens before you meet your rival aliens. Here your challenge is to balance the economic development of your existing planets against your need to settle as many new ones as possible to put yourself in a good position for the mid-game. (When exactly do I stop spending my home planet’s resources on improving its own infrastructure and start using them to build more colony ships?) The mid-game begins when you start to bump into your rivals, and comes to entail much jockeying for influence, as the various races begin to sort themselves into rival factions. (The Alkaris, bird-like creatures, loathe the Mrrshans, the aforementioned race of frenzied pussycats, and their loathing is returned in kind. I don’t have strong feelings about either one—but whose side would it most behoove me to choose from a purely strategic perspective?) The end-game is nigh when there is no more room for anyone to expand, apart from taking planets from a rival by force, and the once-expansive galaxy suddenly seems claustrophobic. It often, although by no means always, is marked by a massive war that finally secures somebody that elusive two-thirds majority in the Galactic Council.

    …Yet the core genius of Master of Orion actually lies in how resistant it is to generalization. It’s no exaggeration to say that there really is no “typical” game; I’ve enjoyed plenty which played out in nothing like the pattern I’ve just described for you. I’ve played games in which I never fired a single shot in anger, even ones where I’ve never built a single armed ship of war, just as I’ve played others where I was in a constant war for survival from beginning to end…Master of Orion can easily be read as the work of a designer who looked at Civilization and was unimpressed with its touchy-feely side, then set out to make a game that fixed all the other failings which that side obscured.

    Master of Orion, on the other hand, works hard at every turn to make such one-size-fits-all strategies impossible—and nowhere more so than in its tech tree. When a new game begins, each race is given a randomized selection of technologies that are possible for it to research, constituting only about half of the total number of technologies in the game. Thus, while a technology roughly equivalent to Civilization’s Railroads does exist in Master of Orion—Star Gates—you don’t know if this or any other technology is actually available to you until you advance far enough up the tree to reach the spot where it ought to be. You can’t base your entire strategy around a predictable technology progression. While you can acquire technologies that didn’t make it into your tree by trading with other empires, bullying them into giving them to you, or attacking their planets and taking them, that’s a much more fraught, uncertain path to go down than doing the research yourself, one that requires a fair amount of seat-of-your-pants strategy in its own right. Any way you slice it, in other words, you have to improvise. This one clever design choice has repercussions for every other aspect of the game. Take, for instance, the endlessly fascinating game-within-a-game of designing your fleet of starships. If the tech tree was static, players would inevitably settle upon a small set of go-to designs that worked for their style of play. As it is, though, every new ship is a fresh balancing act, its equipment calibrated to maximize your side’s technological strengths and mitigate its weaknesses, while also taking into careful account the strengths and weaknesses of the foe you expect to use it against, about which you’ve hopefully been compiling information through your espionage network. Do you build a huge number of tiny, fast, maneuverable fighters, or do you build just a few lumbering galactic dreadnoughts? Or do you build something in between? There are no universally correct answers, just sets of changing circumstances.
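
    [A toy illustration in Python of the design idea just described, not the game’s actual code: every race is dealt a random ~half of the technology list, so no fixed research plan survives from game to game. The field and technology names below, apart from “Star Gates”, are placeholders:]

    ```python
    import random

    # Placeholder technology list; only "Star Gates" is taken from the text above.
    ALL_TECHS = {
        "Propulsion":  ["Nuclear Engines", "Sub-Light Drive", "Warp Drive", "Star Gates"],
        "Weapons":     ["Lasers", "Ion Cannon", "Particle Beam", "Doom Ray"],
        "Planetology": ["Soil Enrichment", "Terraforming", "Cloning", "Bio Shield"],
    }

    def roll_tech_tree(rng, fraction=0.5):
        """Deal a race a random subset (about `fraction`) of each field's technologies."""
        return {
            field: sorted(rng.sample(techs, max(1, round(len(techs) * fraction))))
            for field, techs in ALL_TECHS.items()
        }

    print(roll_tech_tree(random.Random()))   # a different half of the tree every game
    ```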

    …in Master of Orion, each race’s unique affordances force you to play it differently. Likewise, each opposing race’s affordances in combination with those of your own force you to respond differently to that race when you encounter it, whether on the other side of a diplomats’ table or on a battlefield in space. Further, most races have one technology they’re unusually good at researching and one they’re unusually bad at. Throw in varying degrees of affinity and prejudice toward the other races, and, again, you’ve got an enormous amount of variation which defies cookie-cutter strategizing.

    …Sometimes a status such as that enjoyed by Master of Orion arrives thanks to an historical accident or a mere flashy technical innovation, but that is definitively not the case here. Master of Orion remains as rewarding as ever in all its near-infinite variation. Personally, I like to embrace its dynamic spirit for everything it’s worth by throwing a (virtual) die to set up a new game, letting the Universe decide what size galaxy I play in, how many rivals I play with, and which race I play myself. The end result never fails to be enjoyable, whether it winds up a desperate free-for-all between 6 alien civilizations compressed into a tiny galaxy with just 24 stars, or a wide-open, stately game of peaceful exploration in a galaxy with over 100 of them. In short, Master of Orion is the most inexhaustible well of entertainment I’ve ever found in the form of a single computer game—a timeless classic that never fails to punish you for playing lazy, but never fails to reward you for playing well. I’ve been pulling it out to try to conquer another random galaxy at least once every year or two for half my life already. I suspect I’ll still be doing so until the day I die.

  80. ⁠, Bruce Gardner (2019-09-26):

    [Photo essay on making shiny balls of mud.]

    Hi there, this is Bruce Gardner. I am out of Albuquerque, New Mexico and my strange superpower is: I am very good at making mud balls, aka hikaru dorodango. I’m taking over the Laurence King blog today to introduce my new book, Dorodango: The Japanese Art of Making Mud Balls…Coming from the words doro, meaning “mud” and dango, a type of Japanese flour cake, hikaru dorodango consists of forming a mud ball by hand. Layers of increasingly fine dirt are added to the surface over the space of days to a point at which the dorodango can be polished to a high sheen (hikaru means “shining”)…I was introduced to hikaru dorodango by a William Gibson essay in Tate Magazine, way back in 2002. I was immediately bowled over by the idea of creating art from such a humble material; I have been creating mud balls ever since.

    …Here is an image of a few of my pieces that illustrate the scope of colour and texture that is possible with soil gathered from different locations (various parts of New Mexico, in this case).

    [5 of Gardner’s dorodangos on a window sill]

    …The process of creating hikaru dorodango is very conducive to flow: There is a repetitive quality to the work but it is still challenging as the dorodango changes, one minute to the next. Your mind remains engaged but you’re disconnected from everything else. Hours can easily slip by this way…How sturdy are they? That varies by soil. Some would shatter like glass if you dropped them. This one would dent your hardwood floor and roll away.

  81. 2002-gibson

  82. ⁠, Cathleen Schine (2020-01-16):

    [The Browser summary: "The amazing life of Alma Mahler. She married and/or romanced ⁠, ⁠, and ⁠. She was “anti-Semitic, narcissistic, boastful, and untruthful”. Was she also an “ambitious young woman who longed to be a great composer but became instead a great muse to great men”? Was she an “artist stunted by society’s restrictions on women”? Was she a “grandiose groupie, expropriating the fame of her husbands and lovers”? Perhaps uniquely, she was all three." Alma Mahler’s life dramatized the Viennese milieu, with absurd melodrama.]

    The Alma Schindler of her early diaries, which she began in 1898, is, indeed, appealing. They reveal an ebullient teenager full of serious opinions and enthusiasms, a flirtatious young woman giddy with the attentions of the cultural elite in culturally elite fin-de-siècle Vienna. Alma writes about crushes and kisses and assignations on the Ringstrasse, about vigorously practicing the piano and earnestly studying composition, about attending the opera, about buying dresses and fighting with her mama. She is a girl—a splendid girl in a splendid city at a splendid time. She is vain and unsure of herself, self-aggrandizing as only a serious, determined, sensitive young person can be. The early diaries, published in English in 1998, end in 1902, just before she married Gustav Mahler. Alma lived for another sixty-two years, years of vainglorious strutting, scheming, and disloyalty, years chronicled by her own memoirs and by her later diaries (which have not been translated into English). Mahler scholars have a name for the challenge that arises from her unreliable tendencies: the Alma Problem. “She is routinely accused of massaging the facts to serve her own legacy”, Haste writes, “of suppressing or editing her husband Gustav Mahler’s published letters to remove critical references to her, for instance—acts seen, particularly by Mahler scholars (for whom she was for some time their principal source), as tampering with the archive.”…Touched by her husband’s new devotion and convinced that he would die if she left him, Alma sent Gropius away. Gustav wrote her daily love poems, smothered her slippers in kisses, and listened again to her music, pronouncing it good and begging her to resume composing. Alma was undeniably talented, and her songs are admired today, but this episode points as much to her extraordinary power as a muse as to her gifts as an artist. Her daughter Anna said that when Alma

    just stopped in the doorway, you could immediately feel an electric charge… She was an incredibly passionate woman… And she really paid attention to everyone she spoke to. And encouraged them… She was able to enchant people in a matter of seconds.

    Albrecht Joseph, eventually Anna’s fifth husband, who was shocked by Alma’s dowdiness when he first met the legendary seductress in 1931, nevertheless noted that her “unique gift” was “a profound, uncanny understanding of what it was that [creative] men tried to achieve, an enthusiastic, orgiastic persuasion that they could do what they aimed at, and that she, Alma, fully understood what it was.” The intensity of her belief in art and genius had the effect of creating an almost violent sympathy. Gustav, like the other men she loved, did not think he could survive artistically without her. ·…And then there was Kokoschka. Alma later described her three-year affair with Oskar Kokoschka as “one violent struggle of love. Never before had I experienced so much strain, so much hell and so much paradise.” Jealous and controlling, the artist stalked her, patrolling her street after he left her house to make sure no other man visited. She refused to marry him, so while she was in Paris he stole her documents and posted the banns in the Döbling parish hall. “Oskar Kokoschka could only make love to me with the most peculiar game playing”, she later wrote. “As I refused to hit him during our hours of love, he began conjuring up the most appalling images of murder” of his supposed rivals “while whispering murkily to himself.” One night when she sang Parsifal at the piano, he whispered “a new, eerie text” into her ear, which caused her to scream and cry, then to swallow a toxic dose of bromine. (Kokoschka called the doctor.) · And through it all, he painted her. When she had an abortion (she wrote that she was afraid of “what might grow in me”), Kokoschka took a blood-stained cotton pad from her and kept it with him, saying, “That is, and will always be, my only child.” He painted bloody, murdered children. He drew “Alma Mahler Spinning with Kokoschka’s Intestine.” He insisted that she cover her arms with long sleeves. Kokoschka painted Alma entwined with him in a boat on a stormy sea, he painted Alma rising to the heavens while he stood in hell surrounded by snakes. Anna watched him work and asked, “Can’t you paint anything else except Mommy?” · When war came, Alma’s reaction was, as even the temperate Haste must admit, “an astonishing flourish of self-aggrandizement.” “I sometimes imagine”, Alma wrote, “that I was the one who ignited this whole world conflagration in order to experience some kind of development or enrichment—even if it be only death.” By now, she wanted to purify herself of the “evil fascination” of Kokoschka. She taunted him until he joined the cavalry, then broke off their relationship in unkind letters. In despair, Kokoschka insisted on being sent to the front, where he was wounded so badly he was reported dead in the Viennese papers. Though she later defiantly published a facsimile of Mahler’s manuscript of his Tenth Symphony, revealing (for a good price) his intimate, despairing notes, she was less keen on allowing her own letters to reach the public. After rushing to Kokoschka’s studio with her set of keys, she removed and burned her notes to him. · Though Kokoschka had not in fact died, her interest in him had. She was back to writing letters to Gropius. When she saw him while he was on leave, Haste writes, “their passion was rekindled”, and they got married. 
    Kokoschka dealt with this rejection by commissioning a life-sized Alma doll, with instructions to “please make it possible that my sense of touch will be able to take pleasure in those parts where the layers of fat and muscle suddenly give way to a sinuous covering of skin.” The doll, covered in fluffy swan skin, suffered an ignominious end, beheaded and bedraggled in a courtyard the morning after Kokoschka threw a raucous farewell party for it.

  83. Books#an-introduction-to-japanese-court-poetry-miner-1968

  84. Movies#they-live

  85. Movies#wozzeck