newsletter/2019/12 (Link Bibliography)

“newsletter/​2019/​12” links:

  1. 12

  2. https://gwern.substack.com

  3. 11

  4. newsletter

  5. Changelog

  6. https://www.patreon.com/gwern

  7. GPT-2-music

  8. Causality#overview-the-current-situation

  9. Replication#further-reading

  10. Hydrocephalus

  11. GoodReads

  12. ⁠, Dave Liepmann (Tufte-CSS) ():

    One of the most distinctive features of Tufte’s style is his extensive use of sidenotes. Sidenotes are like footnotes, except they don’t force the reader to jump their eye to the bottom of the page, but instead display off to the side in the margin. Perhaps you have noticed their use in this document already. You are very astute.

    Sidenotes are a great example of the web not being like print. On sufficiently large viewports, Tufte CSS uses the margin for sidenotes, margin notes, and small figures. On smaller viewports, elements that would go in the margin are hidden until the user toggles them into view. The goal is to present related but not necessary information such as asides or citations as close as possible to the text that references them. At the same time, this secondary information should stay out of the way of the eye, not interfering with the progression of ideas in the main text.

    …If you want a sidenote without footnote-style numberings, then you want a margin note. Notice there isn’t a number preceding the note. On large screens, a margin note is just a sidenote that omits the reference number. This lessens the distracting effect taking away from the flow of the main text, but can increase the cognitive load of matching a margin note to its referent text.

  13. Search

  14. ⁠, W. David Hill, Neil M. Davies, Stuart J. Ritchie, Nathan G. Skene, Julien Bryois, Steven Bell, Emanuele Di Angelantonio, David J. Roberts, Shen Xueyi, Gail Davies, David C. M. Liewald, David J. Porteous, Caroline Hayward, Adam S. Butterworth, Andrew M. McIntosh, Catharine R. Gale, Ian J. Deary (2019-12-16):

    Socioeconomic position (SEP) is a multi-dimensional construct reflecting (and influencing) multiple socio-cultural, physical, and environmental factors. In a sample of 286,301 participants from UK Biobank, we identify 30 (29 previously unreported) independent loci associated with income. Using a method to meta-analyze data from genetically-correlated traits, we identify an additional 120 income-associated loci. These loci show clear evidence of functionality, with transcriptional differences identified across multiple cortical tissues, and links to GABA-ergic and serotonergic neurotransmission. By combining our genome-wide association study on income with data from eQTL studies and chromatin interactions, 24 genes are prioritized for follow-up, 18 of which were previously associated with intelligence. We identify intelligence as one of the likely causal, partly-heritable phenotypes that might bridge the gap between molecular genetic inheritance and phenotypic consequence in terms of income differences. These results indicate that, in modern-era Great Britain, genetic effects contribute towards some of the observed socioeconomic inequalities.

  15. ⁠, Sophie von Stumm, Emily Smith-Woolley, Ziada Ayorech, Andrew McMillan, Kaili Rimfeld, Philip S. Dale, Robert Plomin (2019-11-23):

    The two best predictors of children’s educational achievement available from birth are parents’ socioeconomic status (SES) and, recently, children’s inherited DNA differences that can be aggregated in genome-wide polygenic scores (GPS). Here, we chart for the first time the developmental interplay between these two predictors of educational achievement at ages 7, 11, 14 and 16 in a sample of almost 5,000 UK school children. We show that the prediction of educational achievement from both GPS and SES increases steadily throughout the school years. Using latent growth curve models, we find that GPS and SES not only predict educational achievement in the first grade but they also account for systematic changes in achievement across the school years. At the end of compulsory education at age 16, GPS and SES, respectively, predict 14% and 23% of the variance of educational achievement. Analyses of the extremes of GPS and SES highlight their influence and interplay: In children who have high GPS and come from high SES families, 77% go to university, whereas 21% of children with low GPS from low SES backgrounds attend university. We find that the associations of GPS and SES with educational achievement are primarily additive, suggesting that their joint influence is particularly dramatic for children at the extreme ends of the distribution.

    Research Highlights

    • Genome-wide polygenic scores (GPS) and socioeconomic status (SES) account together for 27% of the variance in educational achievement from age 7 through 16 years
    • The predictive validity of GPS and SES increases over the course of compulsory schooling
    • The association of GPS and SES is primarily additive: their joint long-term influence is particularly pronounced in children at the extreme ends of the distribution
    • 77% of children with high GPS from high SES families go to university compared to 21% of children with low GPS from low SES

  16. ⁠, A. G. Allegrini, V. Karhunen, J. R. I. Coleman, S. Selzam, K. Rimfeld, S. von Stumm, J.-B. Pingault, R. Plomin (2019-12-06):

    Polygenic scores are increasingly powerful predictors of educational achievement. It is unclear, however, how sets of polygenic scores, which partly capture environmental effects, perform jointly with sets of environmental measures, which are themselves heritable, in prediction models of educational achievement.

    Here, for the first time, we systematically investigate gene-environment correlation (rGE) and interaction (G×E) in the joint analysis of multiple genome-wide polygenic scores (GPS) and multiple environmental measures as they predict tested educational achievement (EA). We predict EA in a representative sample of 7,026 16-year-olds, with 20 GPS for psychiatric, cognitive and anthropometric traits, and 13 environments (including life events, home environment, and SES) measured earlier in life. Environmental and GPS predictors were modelled, separately and jointly, in penalized regression models with out-of-sample comparisons of prediction accuracy, considering the implications that their interplay had on model performance.

    Jointly modelling multiple GPS and environmental factors statistically-significantly improved prediction of EA, with cognitive-related GPS adding unique independent information beyond SES, home environment and life events. We found evidence for rGE underlying variation in EA (rGE = 0.36; 95% CIs = 0.29, 0.43). We estimated that 38% (95% CIs = 29%, 49%) of the GPS effects on EA were mediated by environmental effects, and in turn that 18% (95% CIs = 12%, 25%) of environmental effects were accounted for by the GPS model. Lastly, we did not find evidence that G×E effects collectively contributed to multivariable prediction.

    Our multivariable polygenic and environmental prediction model suggests widespread rGE and unsystematic G×E contributions to EA in adolescence.
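
    [Editorial illustration: a minimal sketch, on synthetic data, of the penalized-regression comparison the abstract describes—fit GPS-only, environment-only, and joint models, then compare out-of-sample accuracy. Only the sample and predictor counts come from the abstract; the effect sizes, the toy rGE correlation, and the choice of elastic net as the penalized regression are assumptions, not the authors’ pipeline.]

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 7026                                              # sample size from the abstract
    gps = rng.normal(size=(n, 20))                        # 20 polygenic scores
    env = 0.3 * gps[:, :13] + rng.normal(size=(n, 13))    # 13 environments, with toy rGE
    ea = (gps @ rng.normal(0, 0.05, 20)                   # synthetic achievement score
          + env @ rng.normal(0, 0.10, 13)
          + rng.normal(size=n))

    def oos_r2(X, y):
        """Out-of-sample R^2 of a cross-validated elastic-net model."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = ElasticNetCV(cv=5).fit(X_tr, y_tr)
        return r2_score(y_te, model.predict(X_te))

    for name, X in [("GPS only", gps), ("environment only", env),
                    ("joint", np.hstack([gps, env]))]:
        print(f"{name}: out-of-sample R^2 = {oos_r2(X, ea):.3f}")
    ```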

  17. 2016-plomin.pdf#page=10: ⁠, Robert Plomin, John C. DeFries, Valerie S. Knopik, Jenae M. Neiderhiser (2016; genetics):

    Finding 7. Most measures of the “environment” show substantial genetic influence

    Although it might seem a peculiar thing to do, measures of the environment widely used in psychological science—such as parenting, social support, and life events—can be treated as dependent measures in genetic analyses. If they are truly measures of the environment, they should not show genetic influence. To the contrary, in 1991, Plomin and Bergeman conducted a review of the first 18 studies in which environmental measures were used as dependent measures in genetically sensitive designs and found evidence for genetic influence for these measures of the environment. Substantial genetic influence was found for objective measures such as videotaped observations of parenting as well as self-report measures of parenting, social support, and life events. How can measures of the environment show genetic influence? The reason appears to be that such measures do not assess the environment independent of the person. As noted earlier, humans select, modify, and create environments correlated with their genetic behavioral propensities such as personality and psychopathology (McAdams, Gregory, & Eley, 2013). For example, in studies of twin children, parenting has been found to reflect genetic differences in children’s characteristics such as personality and psychopathology (Avinun & Knafo, 2014; Klahr & Burt, 2014; Plomin, 1994).

    Since 1991, more than 150 articles have been published in which environmental measures were used in genetically sensitive designs; they have shown consistently that there is substantial genetic influence on environmental measures, extending the findings from family environments to neighborhood, school, and work environments. Kendler and Baker (2007) conducted a review of 55 independent genetic studies and found an average heritability of 0.27 across 35 diverse environmental measures (confidence intervals not available). Meta-analyses of parenting, the most frequently studied domain, have shown genetic influence that is driven by child characteristics (Avinun & Knafo, 2014) as well as by parent characteristics (Klahr & Burt, 2014). Some exceptions have emerged. Not surprisingly, when life events are separated into uncontrollable events (e.g., death of a spouse) and controllable life events (e.g., financial problems), the former show nonsignificant genetic influence. In an example of how all behavioral genetic results can differ in different cultures, Shikishima, Hiraishi, Yamagata, Neiderhiser, and Ando (2012) compared parenting in Japan and Sweden and found that parenting in Japan showed more genetic influence than in Sweden, consistent with the view that parenting is more child centered in Japan than in the West.

    Researchers have begun to use GCTA to replicate these findings from twin studies. For example, GCTA has been used to show substantial genetic influence on stressful life events (Power et al 2013) and on variables often used as environmental measures in epidemiological studies such as years of schooling (C. A. Rietveld, Medland, et al 2013). Use of GCTA can also circumvent a limitation of twin studies of children. Such twin studies are limited to investigating within-family (twin-specific) experiences, whereas many important environmental factors such as socioeconomic status (SES) are the same for two children in a family. However, researchers can use GCTA to assess genetic influence on family environments such as SES that differ between families, not within families. GCTA has been used to show genetic influence on family SES (Trzaskowski et al 2014) and an index of social deprivation (Marioni et al 2014).

  18. ⁠, Guillaume Laval, Etienne Patin, Pierre Boutillier, Lluis Quintana-Murci (2019-12-23):

    Over the last 100,000 years, humans have spread across the globe and encountered a highly diverse set of environments to which they have had to adapt. Genome-wide scans of selection are a powerful means of detecting selective sweeps. However, because of unknown fractions of undetected sweeps and false discoveries, the numbers of detected sweeps often poorly reflect actual numbers of selective sweeps in populations. The thousands of soft sweeps on standing variation recently evidenced in humans have also been interpreted as a majority of mis-classified neutral regions. In such a context, the extent of human adaptation remains little understood. We present a new rationale to estimate these actual numbers of sweeps expected over the last 100,000 years (denoted by X) from genome-wide population data, both considering hard sweeps and selective sweeps on standing variation. We implemented an approximate Bayesian computation framework and showed, based on computer simulations, that such a method can properly estimate X. We then jointly estimated the number of selective sweeps, their mean intensity and age in several 1000G African, European and Asian populations. Our estimates of X, which proved weakly sensitive to demographic misspecification, revealed very limited numbers of sweeps regardless of the frequency of the selected alleles at the onset of selection and the completion of sweeps. We estimated ~80 sweeps on average across fifteen 1000G populations when assuming incomplete sweeps only, and ~140 selective sweeps in non-African populations when incorporating complete sweeps in our simulations. The proposed method may help to address controversies over the number of selective sweeps in populations, guiding further genome-wide investigations of recent positive selection.
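
    [Editorial illustration: a toy rejection-sampling version of the approximate Bayesian computation (ABC) logic described above—draw a candidate number of sweeps X from a prior, simulate a summary statistic under X, and keep candidates whose simulated statistic falls close to the observed one. The one-line “simulator” and every number here are made-up stand-ins for the paper’s population-genetic simulations.]

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_summary(x_sweeps):
        """Stand-in simulator: a genome-wide selection statistic that grows
        noisily with the number of sweeps (the real model is population-genetic)."""
        return 0.5 * x_sweeps + rng.normal(scale=5.0)

    observed = 40.0          # "observed" genome-wide summary statistic (made up)
    tolerance = 2.0
    accepted = []
    for _ in range(100_000):
        x = int(rng.integers(0, 500))    # uniform prior on the number of sweeps X
        if abs(simulate_summary(x) - observed) < tolerance:
            accepted.append(x)

    print(f"posterior mean of X ≈ {np.mean(accepted):.0f} sweeps "
          f"({len(accepted)} draws accepted)")
    ```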

  19. ⁠, Yanan Yue, Yinan Kan, Weihong Xu, Hong-Ye Zhao, Yixuan Zhou, Xiaobin Song, Jiajia Wu, Juan Xiong, Dharmendra Goswami, Meng Yang, Lydia Lamriben, Mengyuan Xu, Qi Zhang, Yu Luo, Jianxiong Guo, Shenyi Mao, Deling Jiao, Tien Dat Nguyen, Zhuo Li, Jacob V. Layer, Malin Li, Violette Paragas, Michele E. Youd, Zhongquan Sun, Yuan Ding, Weilin Wang, Hongwei Dou, Lingling Song, Xueqiong Wang, Lei Le, Xin Fang, Haydy George, Ranjith Anand, Shi Yun Wang, William F. Westlin, Marc Guell, James Markmann, Wenning Qin, Yangbin Gao, Hongjiang Wei, George M. Church, Luhan Yang (2019-12-19):

    Xenotransplantation, specifically the use of porcine organs for human transplantation, has long been sought after as an alternative for patients suffering from organ failure. However, clinical application of this approach has been impeded by two main hurdles: 1) risk of transmission of porcine endogenous retroviruses (PERVs) and 2) molecular incompatibilities between donor pigs and humans which culminate in rejection of the graft. We previously demonstrated that all 25 copies of the PERV elements in the pig genome could be inactivated and live pigs successfully generated. In this study, we improved the scale of porcine germline editing from targeting a single repetitive locus with CRISPR to engineering 18 different loci using multiple genome engineering methods: we engineered the pig genome at 42 alleles using CRISPR-Cas9 and transposon and produced PERVKO·3KO·9TG pigs which carry PERV inactivation, xeno-antigen KO and 9 effective human transgenes. The engineered pigs exhibit normal physiology, fertility, and germline transmission of the edited alleles. In vitro assays demonstrated that these pigs gain significant resistance to human humoral and cell-mediated damage, and coagulation dysregulation, similar to that of allotransplantation. Successful creation of PERVKO·3KO·9TG pigs represents a significant step forward towards safe and effective porcine xenotransplantation, which also represents a synthetic biology accomplishment of engineering novel functions in a living organism.

    One Sentence Summary

    Extensive genome engineering is applied to modify pigs to provide safe and immune-compatible organs for human transplantation.

  20. {#linkBibliography-(science)-2019 .docMetadata doi=“10.1126/​​science.aba6487”}, Kelly Servick (Science) (2019-12-19):

    If any swine is fit to be an organ donor for people, then the dozens of pigs snuffling around Qihan Bio’s facility in Hangzhou, China, may be the best candidates so far. The Chinese company and its U.S. collaborators reported today that they have used the genome editor CRISPR to create the most extensively genetically engineered pigs to date—animals whose tissues, the researchers say, finally combine all the features necessary for a safe and successful transplant into humans. “This is the first prototype”, says Luhan Yang, a geneticist at Qihan Bio. In a preprint published today on bioRxiv, Qihan researchers and collaborators, including Cambridge, Massachusetts-based eGenesis—which Yang co-founded with Harvard University geneticist George Church—described the new generation of animals and various tests on their cells; the researchers have already begun to transplant the pigs’ organs into nonhuman primates, a key step toward human trials.

    …In the new study, the team for the first time combined these PERV “knockouts” with a suite of other changes to prevent immune rejection, for a record-setting 13 modified genes. In pig ear cells, they removed three genes coding for enzymes that help produce molecules on pig cells that provoke an immune response. They also inserted six genes that inhibit various aspects of the human immune response and three more that help regulate blood coagulation. The researchers then put the DNA-containing nuclei of these edited cells into eggs from pig ovaries collected at a slaughterhouse. These eggs developed into embryos that were implanted into surrogate mothers. Cells from the resulting piglets got another round of edits to remove the PERV sequences, after which their DNA went into another set of egg cells to create a new generation of pigs with all the desired edits. (In future, Yang says, the team will try to make all the modifications in a single generation.)

    The resulting pigs appeared healthy and fertile with functioning organs, the team reports today. And initial tests of their cells in lab dishes suggest their organs will be much less prone to immune rejection than those of unmodified pigs: The tendency of the pig cells to bind to certain human antibodies was reduced by 90%, and the modified cells better survived interactions with human immune cells. But a key test is still to come: Yang says her team has begun to transplant organs from the highly edited pigs into monkeys to gauge their safety and longevity.

    The combination of edits described in the new paper is “a technical feat”, says Marilia Cascalho, a transplant immunologist at the University of Michigan in Ann Arbor. “Whether it offers an advantage [over other engineered pig organs]… the jury is out on that”, she says…Yang says that Qihan plans to remain “laser-focused” on preclinical studies in 2020, but expects to be testing pig organs in humans within 5 years. Many in the field now feel an inevitable momentum around xenotransplantation: “There is so much need for organs”, Cascalho says. “I think it’s going to be a reality.”

  21. 2017-niu.pdf: ⁠, Dong Niu, Hong-Jiang Wei, Lin Lin, Haydy George, Tao Wang, I-Hsiu Lee, Hong-Ye Zhao, Yong Wang, Yinan Kan, Ellen Shrock, Emal Lesha, Gang Wang, Yonglun Luo, Yubo Qing, Deling Jiao, Heng Zhao, Xiaoyang Zhou, Shouqi Wang, Hong Wei, Marc Güell, George M. Church, Luhan Yang (2017-08-10; genetics  /​ ​​ ​editing):

    Xenotransplantation is a promising strategy to alleviate the shortage of organs for human transplantation. In addition to the concern on pig-to-human immunological compatibility, the risk of cross-species transmission of porcine endogenous retroviruses (PERVs) has impeded the clinical application of this approach. Earlier, we demonstrated the feasibility of inactivating PERV activity in an immortalized pig cell line. Here, we confirmed that PERVs infect human cells, and observed the horizontal transfer of PERVs among human cells. Using CRISPR-Cas9, we inactivated all the PERVs in a porcine primary cell line and generated PERV-inactivated pigs via somatic cell nuclear transfer. Our study highlighted the value of PERV inactivation to prevent cross-species viral transmission and demonstrated the successful production of PERV-inactivated animals to address the safety concern in clinical xenotransplantation.

  22. ⁠, Chenglei Tian, Linlin Liu, Xiaoying Ye, Haifeng Fu, Xiaoyan Sheng, Lingling Wang, Huasong Wang, Dai Heng, Lin Liu (2019-12-24):

    • Granulosa cells can be reprogrammed to form oocytes by chemical reprogramming
    • Rock inhibition and crotonic acid facilitate the chemical induction of gPSCs from GCs
    • PGCLCs derived from gPSCs exhibit longer telomeres and high genomic stability

    The generation of genomically stable and functional oocytes has great potential for preserving fertility and restoring ovarian function. It remains elusive whether functional oocytes can be generated from adult female somatic cells through reprogramming to germline-competent pluripotent stem cells (gPSCs) by chemical treatment alone. Here, we show that somatic granulosa cells isolated from adult mouse ovaries can be robustly induced to generate gPSCs by a purely chemical approach, with additional Rock inhibition and critical reprogramming facilitated by sodium crotonate or crotonic acid. These gPSCs acquired high germline competency and could consistently be directed to differentiate into primordial-germ-cell-like cells and form functional oocytes that produce fertile mice. Moreover, gPSCs promoted by crotonylation and the derived germ cells exhibited longer telomeres and high genomic stability like PGCs in vivo, providing additional evidence supporting the safety and effectiveness of chemical induction, which is particularly important for germ cells in genetic inheritance.

    [Keywords: chemical reprogramming, pluripotent stem cell, oocyte, granulosa cell]

  23. Embryo-selection#limiting-step-eggs-or-scores

  24. Embryo-selection#iterated-embryo-selection

  25. ⁠, Cell Press (2019-12-24):

    Ovarian follicles are the basic functional unit of the ovary and consist of an oocyte, the immature egg, which is surrounded by granulosa cells. Besides being crucial to the development of follicles, granulosa cells have been shown to possess plasticity with stem cell-like properties.

    “The thing about in vitro fertilization is that they only use the oocyte for the procedure”, says senior author Lin Liu, of the College of Life Sciences at Nankai University. “After the egg retrieval, the granulosa cells in the follicle are discarded. It got us thinking, what if we can utilize these granulosa cells? Since every egg has thousands of granulosa cells surrounding it, if we can induce them into pluripotent cells and turn those cells into oocytes, aren’t we killing two birds with one stone?”

    Granulosa cells tend to undergo cell death and differentiation once removed from the follicles. Liu and his team including Ph.D. students Chenglei Tian and Haifeng Fu developed a chemical “cocktail” with Rock inhibitor and crotonic acid for creating chemically induced pluripotent stem cells (CiPSCs) from granulosa cells. The research team introduced Rock inhibitor to prevent cell death and promote proliferation. In combination with other important small chemicals, crotonic acid facilitates the induction of granulosa cells into germline-competent pluripotent stem cells that exhibit pluripotency similar to embryonic stem cells.

  26. {#linkBibliography-(nyt)-2019 .docMetadata}, Sui-Lee Wee (NYT) (2019-12-30):

    A court in China on Monday sentenced He Jiankui, the researcher who shocked the global scientific community when he claimed that he had created the world’s first genetically edited babies, to three years in prison for carrying out “illegal medical practices.” In a surprise announcement from a trial that was closed to the public, the court in the southern city of Shenzhen found Dr. He guilty of forging approval documents from ethics review boards to recruit couples in which the man had H.I.V. and the woman did not, Xinhua, China’s official news agency, reported. Dr. He had said he was trying to prevent H.I.V. infections in newborns, but the state media on Monday said he deceived the subjects and the medical authorities alike.

    Dr. He, 35, sent the scientific world into an uproar last year when he announced at a conference in Hong Kong that he had created the world’s first genetically edited babies—twin girls. On Monday, China’s state media said his work had resulted in a third genetically edited baby, who had been previously undisclosed.

    Dr. He pleaded guilty and was also fined $430,000, according to Xinhua. In a brief trial, the court also handed down prison sentences to two other scientists who it said had “conspired” with him: Zhang Renli, who was sentenced to two years in prison, and Qin Jinzhou, who got a suspended sentence of one and a half years…The court said the trial had to be closed to the public to guard the privacy of the people involved.

  27. ⁠, Reddit ():

    [Subreddit for sharing AI Dungeon 2 game transcripts and discussing bugs/issues/upgrades; FAQ⁠.]

  28. ⁠, Nick Walton (2019-12-14):

    AI Dungeon 2 is a completely AI-generated text adventure built with OpenAI’s largest GPT-2 model. It’s a first-of-its-kind game that allows you to enter, and will react to, any action you can imagine.

    What is this?

    Google Colab is a way to experience machine learning for free. Google provides GPUs that you can run code on. Because this game exploded in popularity, however, Google likely won’t be able to allow free usage of it for AI Dungeon for very long. We are almost done making an app version of the game where you will be able to play AI Dungeon 2. Until that’s released you can still play the game here.

    Main mirrors of AI Dungeon 2 are currently down due to high download costs.

    We are using BitTorrent as a temporary solution to host game files and keep this game alive. It’s not fast, but it’s the best we’ve got right now.

    If you want to help, best thing you can do is to download this torrent file with game files and seed it indefinitely to the best of your ability. This will help new players download this game faster, and discover the vast worlds of AI Dungeon 2!

    • Follow [@nickwalton00](https:/​​​​​​/​​​​​​nitter.hu/​​​​​​nickwalton00) on Twitter for updates on when it will be available again.
    • Support AI Dungeon 2 on Patreon to help me to continue improving the game with all the awesome ideas I have for its future!

    How to play

    1. Click “Tools”-> “Settings…” -> “Theme” -> “Dark” (optional but recommended)
    2. Go to Main Game section below
    3. Run Install block
    4. Run Download Model block
    5. It will then take a couple of minutes to boot up as the model is downloaded and loaded onto the GPU.
    6. Run the game block
    7. If you have questions about getting it to work then please go to the GitHub repo to get help.

  29. ⁠, Nick Walton (2019-11-26):

    [Demonstration dialogue of interacting with a GPT-2-1.5b model trained on text adventures/RPGs. The player chooses to join a band of orcs as a musician and tries to steer the game towards orc rights, with moderate success, reaching the Emperor himself.]

    In the first AI Dungeon, we created and deployed a deep-learning-generated text adventure using OpenAI’s 124M parameter GPT-2 model. To limit computational cost, possible actions and their results were generated and given to the player to choose from.

    In AI Dungeon 2 we do away with pregenerated actions and allow the user to enter any action. The model then continues generating the story resulting from that action. We also upgrade the size of our model to OpenAI’s largest 1.5B parameter model and fine-tune it on a collection of text adventures obtained from chooseyourstory.com. Following the example of the CTRL model, we also modified the sampling of our model to add a penalty to already-generated words, to reduce the repetition issues that GPT-2 has and enable a lower temperature setting.

    When playing AI Dungeon 2 the player can choose from several possible settings and characters. These choices generate a starting prompt and context sentence. The context sentence is always included as input to the model to help keep it grounded to the chosen setting while the starting prompt is the beginning of the story.
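
    [Editorial illustration: a minimal sketch of the modified sampling described above—penalize the logits of already-generated tokens so that a lower temperature can be used without the model looping. This assumes a CTRL-style penalty (shrink positive logits, amplify negative ones); the function and parameter names are illustrative, not AI Dungeon 2’s actual code.]

    ```python
    import numpy as np

    def sample_token(logits, generated_ids, penalty=1.2, temperature=0.7,
                     rng=np.random.default_rng()):
        """Sample the next token id, penalizing tokens already generated."""
        logits = np.array(logits, dtype=float)          # work on a copy
        for t in set(generated_ids):
            # CTRL-style penalty: shrink positive logits, amplify negative ones
            logits[t] = logits[t] / penalty if logits[t] > 0 else logits[t] * penalty
        z = (logits - logits.max()) / temperature       # numerically stable softmax
        probs = np.exp(z)
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))
    ```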

    [Dialogue sample:]

    …After the Gnoll War ends, the orcs decide to hold a big celebration. You immediately organize concert to entertain everyone. It goes well and you even win a few awards for your work. However, you still aren’t satisfied and you decide to do something else.

    > write plays for the orcs to perform in

    You write a series of plays based on the stories of the orcs. These works are performed throughout the Empire and they are very popular. You also take an interest in the orcs’ future and begin to study their language and culture. In time you even develop a liking for the orcs themselves.

    > found a human-orc friendship organization

    While you were studying the orcs, you came across a group called the Human-Orc Friendship Association (HOOFA). While they weren’t exactly like the Orcs, they had similar goals. You decided to form a separate chapter just for them.

    > write some pamphlets support orc-human friendship

    You wrote a pamphlet explaining what the HOOFA was and how it worked. You also explained that the orcs were the only race that could truly understand each other and that humans were inferior to the orcs.

    > promote inter-racial marriage between humans and orcs

    In order to promote inter-racial marriage between humans and orcs, you started a branch of the HOOFA called the Inter-Race Marriage Bureau (IRMB) which was located in the city of Kishak…

  30. ⁠, OpenAI: Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique P. d. O. Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang (2019-12-13):

    On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.

  31. ⁠, OpenAI (2019-12-13):

    At OpenAI, we’ve used the multiplayer video game Dota 2 as a research platform for general-purpose AI systems. Our Dota 2 AI, called OpenAI Five, learned by playing over 10,000 years of games against itself. It demonstrated the ability to achieve expert-level performance, learn human-AI cooperation, and operate at internet scale.

    [OpenAI final report on OA5: timeline, training curve, index of blog posts.]

  32. ⁠, Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, Qiaobo Chen, Yinyuting Yin, Hao Zhang, Tengfei Shi, Liang Wang, Qiang Fu, Wei Yang, Lanxiao Huang (2019-12-20):

    We study the reinforcement learning problem of complex action control in the Multi-player Online Battle Arena (MOBA) 1v1 games. This problem involves far more complicated state and action spaces than those of traditional 1v1 games, such as Go and the Atari series, which makes it very difficult to search for policies with human-level performance. In this paper, we present a deep reinforcement learning framework to tackle this problem from the perspectives of both system and algorithm. Our system is of low coupling and high scalability, which enables efficient explorations at large scale. Our algorithm includes several novel strategies, including control dependency decoupling, action mask, target attention, and dual-clip PPO, with which our proposed actor-critic network can be effectively trained in our system. Tested on the MOBA game Honor of Kings, our AI agent, called Tencent Solo, can defeat top professional human players in full 1v1 games.
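
    [Editorial illustration: the dual-clip PPO objective the abstract names, sketched in NumPy. Vanilla PPO clips the probability ratio; the dual clip additionally bounds the objective from below by c·A when the advantage A is negative, so strongly off-policy samples cannot dominate an update. The hyperparameter names and defaults (eps, c) are assumptions.]

    ```python
    import numpy as np

    def dual_clip_ppo_objective(ratio, advantage, eps=0.2, c=3.0):
        """Per-sample dual-clip PPO objective (to be maximized)."""
        clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
        standard = np.minimum(ratio * advantage, clipped * advantage)  # vanilla PPO
        # Dual clip: only active for negative advantages, where it bounds the
        # objective from below by c * advantage.
        return np.where(advantage < 0,
                        np.maximum(standard, c * advantage),
                        standard)
    ```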

  33. ⁠, Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya (2020-01-13):

    Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from 𝑂(L²) to 𝑂(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.

    [blog: 1⁠, 2⁠; see also rotary embeddings]
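
    [Editorial illustration: a simplified NumPy sketch of the LSH-attention idea—hash shared query/key vectors into buckets with random projections, then compute softmax attention only within each bucket rather than across all L positions. A didactic stand-in, not the Reformer reference implementation, which additionally uses multiple hash rounds, sorting/chunking, and reversible layers.]

    ```python
    import numpy as np

    def lsh_buckets(x, n_hashes=8, rng=np.random.default_rng(0)):
        """Assign each position a bucket id via one round of random projections."""
        proj = rng.normal(size=(x.shape[1], n_hashes))
        h = x @ proj                                   # (L, n_hashes)
        return np.argmax(np.concatenate([h, -h], axis=-1), axis=-1)

    def lsh_attention(x, v):
        """Softmax attention restricted to LSH buckets.
        x: (L, d) shared query/key vectors; v: (L, d_v) values."""
        buckets = lsh_buckets(x)
        out = np.zeros_like(v, dtype=float)
        for b in np.unique(buckets):
            idx = np.where(buckets == b)[0]            # positions sharing bucket b
            scores = x[idx] @ x[idx].T / np.sqrt(x.shape[1])
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)
            out[idx] = w @ v[idx]
        return out
    ```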

  34. https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html

  35. Attention

  36. ⁠, Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila (2019-12-03):

    The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.

  37. TWDNE#twdnev3

  38. 2019-kvarven.pdf: ⁠, Amanda Kvarven, Eirik Strømland, Magnus Johannesson (2019-12-23; statistics  /​ ​​ ​bias):

    Many researchers rely on meta-analyses to summarize research evidence. However, there is a concern that publication bias and selective reporting may lead to biased meta-analytic effect sizes. We compare the results of meta-analyses to large-scale preregistered replications in psychology carried out at multiple laboratories. The multiple-laboratory replications provide precisely estimated effect sizes that do not suffer from publication bias or selective reporting. We searched the literature and identified 15 meta-analyses on the same topics as multiple-laboratory replications. We find that meta-analytic effect sizes are statistically-significantly different from replication effect sizes for 12 out of the 15 meta-replication pairs. These differences are systematic and, on average, meta-analytic effect sizes are almost 3 times as large as replication effect sizes. We also implement 3 methods of correcting meta-analysis for bias, but these methods do not substantively improve the meta-analytic results.
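
    [Editorial illustration: a toy simulation (not the paper’s data) of the mechanism at issue—if small studies are “published” only when statistically significant, the naive meta-analytic average of the published studies overshoots the true effect, while a single large preregistered replication does not. All numbers are made up.]

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_d, n_per_group, k = 0.15, 40, 200
    se = np.sqrt(2 / n_per_group)              # approximate SE of Cohen's d
    d_hat = rng.normal(true_d, se, size=k)     # k small studies of the same effect
    published = d_hat[d_hat / se > 1.96]       # only "significant" studies published
    meta = published.mean()                    # naive meta-analytic average
    replication = rng.normal(true_d, se / 10)  # one large preregistered replication
    print(f"meta-analysis: d = {meta:.2f}; replication: d = {replication:.2f}; "
          f"true effect: d = {true_d}")
    ```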

  39. {#linkBibliography-(stat)-2019 .docMetadata}, Sharon Begley (STAT) (2019-06-25; sociology  /​ ​​ ​preference-falsification):

    In the 30 years that biomedical researchers have worked determinedly to find a cure for Alzheimer’s disease, their counterparts have developed drugs that helped cut deaths from cardiovascular disease by more than half, and cancer drugs able to eliminate tumors that had been incurable. But for Alzheimer’s, not only is there no cure, there is not even a disease-slowing treatment.

    …In more than two dozen interviews, scientists whose ideas fell outside the dogma recounted how, for decades, believers in the dominant hypothesis suppressed research on alternative ideas: They influenced what studies got published in top journals, which scientists got funded, who got tenure, and who got speaking slots at reputation-buffing scientific conferences. The scientists described the frustrating, even career-ending, obstacles that they confronted in pursuing their research. A top journal told one that it would not publish her paper because others hadn’t. Another got whispered advice to at least pretend that the research for which she was seeking funding was related to the leading idea—that a protein fragment called beta-amyloid accumulates in the brain, creating neuron-killing clumps that are both the cause of Alzheimer’s and the key to treating it. Others could not get speaking slots at important meetings, a key showcase for research results. Several who tried to start companies to develop Alzheimer’s cures were told again and again by venture capital firms and major biopharma companies that they would back only an amyloid approach.

    …For all her regrets about the amyloid hegemony, Neve is an unlikely critic: She co-led the 1987 discovery of mutations in a gene called APP that increases amyloid levels and causes Alzheimer’s in middle age, supporting the then-emerging orthodoxy. Yet she believes that one reason Alzheimer’s remains incurable and untreatable is that the amyloid camp “dominated the field”, she said. Its followers were influential “to the extent that they persuaded the National Institute of Neurological Disorders and Stroke [part of the National Institutes of Health] that it was a waste of money to fund any Alzheimer’s-related grants that didn’t center around amyloid.” To be sure, NIH did fund some Alzheimer’s research that did not focus on amyloid. In a sea of amyloid-focused grants, there are tiny islands of research on oxidative stress, neuroinflammation, and, especially, a protein called tau. But Neve’s NINDS program officer, she said, “told me that I should at least collaborate with the amyloid people or I wouldn’t get any more NINDS grants.” (She hoped to study how neurons die.) A decade after her APP discovery, a disillusioned Neve left Alzheimer’s research, building a distinguished career in gene editing. Today, she said, she is “sick about the millions of people who have needlessly died from” the disease.

    Dr. Daniel Alkon, a longtime NIH neuroscientist who started a company to develop an Alzheimer’s treatment, is even more emphatic: “If it weren’t for the near-total dominance of the idea that amyloid is the only appropriate drug target”, he said, “we would be 10 or 15 years ahead of where we are now.”

    Making it worse is that the empirical support for the amyloid hypothesis has always been shaky. There were numerous red flags over the decades that targeting amyloid alone might not slow or reverse Alzheimer’s. “Even at the time the amyloid hypothesis emerged, 30 years ago, there was concern about putting all our eggs into one basket, especially the idea that ridding the brain of amyloid would lead to a successful treatment”, said neurobiologist Susan Fitzpatrick, president of the James S. McDonnell Foundation. But research pointing out shortcomings of the hypothesis was relegated to second-tier journals, at best, a signal to other scientists and drug companies that the criticisms needn’t be taken too seriously. Zaven Khachaturian spent years at NIH overseeing its early Alzheimer’s funding. Amyloid partisans, he said, “came to permeate drug companies, journals, and NIH study sections”, the groups of mostly outside academics who decide what research NIH should fund. “Things shifted from a scientific inquiry into an almost religious belief system, where people stopped being skeptical or even questioning.”

    …“You had a whole industry going after amyloid, hundreds of clinical trials targeting it in different ways”, Alkon said. Despite success in millions of mice, “none of it worked in patients.”

    Scientists who raised doubts about the amyloid model suspected why. Amyloid deposits, they thought, are a response to the true cause of Alzheimer’s and therefore a marker of the disease—again, the gravestones of neurons and synapses, not the killers. The evidence? For one thing, although the brains of elderly Alzheimer’s patients had amyloid plaques, so did the brains of people the same age who died with no signs of dementia, a pathologist discovered in 1991. Why didn’t amyloid rob them of their memories? For another, mice engineered with human genes for early Alzheimer’s developed both amyloid plaques and dementia, but there was no proof that the much more common, late-onset form of Alzheimer’s worked the same way. And yes, amyloid plaques destroy synapses (the basis of memory and every other brain function) in mouse brains, but there is no correlation between the degree of cognitive impairment in humans and the amyloid burden in the memory-forming hippocampus or the higher-thought frontal cortex. “There were so many clues”, said neuroscientist Nikolaos Robakis of the Icahn School of Medicine at Mount Sinai, who also discovered a mutation for early-onset Alzheimer’s. “Somehow the field believed all the studies supporting it, but not those raising doubts, which were very strong. The many weaknesses in the theory were ignored.”

  40. 2019-thomas.pdf: ⁠, Kelsey R. Thomas, Katherine J. Bangen, Alexandra J. Weigand, Emily C. Edmonds, Christina G. Wong, Shanna Cooper, Lisa Delano-Wood, Mark W. Bondi, for the Alzheimer's Disease Neuroimaging Initiative (2019-12-30; biology):

    Objective: To determine the temporal sequence of objectively defined subtle cognitive difficulties (Obj-SCD) in relation to amyloidosis and neurodegeneration, the current study examined the trajectories of amyloid PET and medial temporal neurodegeneration in participants with Obj-SCD relative to cognitively normal (CN) and mild cognitive impairment (MCI) groups.

    Method: A total of 747 Alzheimer’s Disease Neuroimaging Initiative participants (305 CN, 153 Obj-SCD, 289 MCI) underwent neuropsychological testing and serial amyloid PET and structural MRI examinations. Linear mixed-effects models examined 4-year rate of change in cortical 18F-florbetapir PET, entorhinal cortex thickness, and hippocampal volume in those classified as Obj-SCD and MCI relative to CN.

    Result: Amyloid accumulation was faster in the Obj-SCD group than in the CN group; the MCI and CN groups did not statistically-significantly differ from each other. The Obj-SCD and MCI groups both demonstrated faster entorhinal cortical thinning relative to the CN group; only the MCI group exhibited faster hippocampal atrophy than CN participants.

    Conclusion: Relative to CN participants, Obj-SCD was associated with faster amyloid accumulation and selective vulnerability of entorhinal cortical thinning, whereas MCI was associated with faster entorhinal and hippocampal atrophy. Findings suggest that Obj-SCD, operationally defined using sensitive neuropsychological measures, can be identified prior to or during the preclinical stage of amyloid deposition. Further, consistent with the Braak neurofibrillary staging scheme, Obj-SCD status may track with early entorhinal pathologic changes, whereas MCI may track with more widespread medial temporal change. Thus, Obj-SCD may be a sensitive and noninvasive predictor of encroaching amyloidosis and neurodegeneration, prior to frank cognitive impairment associated with MCI.

  41. ⁠, Kenneth G. Libbrecht (2019-10-14):

    This monograph reviews our current understanding of the physical dynamics of ice crystal growth, focusing on the spontaneous formation of complex structures from water vapor (called snow crystals) as a function of temperature, supersaturation, background gas pressure, and other extrinsic parameters. Snow crystal growth is a remarkably rich and rather poorly understood phenomenon, requiring a synthesis of concepts from materials science, crystal-growth theory, statistical mechanics, diffusion-limited solidification, finite-element modeling, and molecular surface processes. Building upon recent advances in precision measurement techniques, computational modeling methods, and molecular dynamics simulations of crystalline surfaces, I believe we are moving rapidly toward the long-sought goal of developing a full physical model of snow crystal formation, using ab initio molecular dynamics simulations to create a semi-empirical characterization of the nanoscale surface attachment kinetics, and then incorporating that into a full computational model that reproduces the growth of macroscopic crystalline structures. Section 1 of this monograph deals mainly with the material properties of ice Ih in equilibrium, including thermodynamic quantities, facet surface structures, terrace step energies, and crystal twinning behaviors.

  42. ⁠, Kenneth G. Libbrecht (1999-02-01):

    Welcome to SnowCrystals.com! Your online guide to snowflakes, snow crystals, and other ice phenomena. SnowCrystals.com has been bringing you snowflake photos and facts since February 1, 1999. Over 26 million visitors so far! [Photos / books / science; designer snowflakes, how to grow snowflakes, “identical-twin” snowflakes, etc.]

  43. ⁠, Scott Aaronson (2005-02-12):

    Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and “anthropic computing.” The section on soap bubbles even includes some “experimental” results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics.

  44. ⁠, Michael B. Weissman (2019-02-25):

    A recent paper in Sci. Adv. by Miller et al. concludes that GREs do not help predict whether physics grad students will get Ph.D.s. The paper makes numerous elementary statistics errors, including introduction of unnecessary collider-like stratification bias, variance inflation by collinearity and range restriction, omission of needed data (some subsequently provided), a peculiar choice of null hypothesis on subgroups, blurring the distinction between failure to reject a null and accepting a null, and an extraordinary procedure for radically inflating confidence intervals in a figure. Release of results of simpler models, e.g. without unnecessary stratification, would fix some key problems. The paper exhibits exactly the sort of research techniques which we should be teaching students to avoid.

  45. ⁠, Alexey Guzey (2019-11-15):

    …In the process of reading the book and encountering some extraordinary claims about sleep, I decided to compare the facts it presented with the scientific literature. I found that the book consistently overstates the problem of lack of sleep, sometimes egregiously so. It misrepresents basic sleep research and contradicts its own sources.

    In one instance, Walker claims that sleeping less than six or seven hours a night doubles one’s risk of cancer—this is not supported by the scientific evidence (Section 1.1). In another instance, Walker seems to have invented a “fact” that the WHO has declared a sleep loss epidemic (Section 4). In yet another instance, he falsely claims that the National Sleep Foundation recommends 8 hours of sleep per night, and then uses this “fact” to falsely claim that two-thirds of people in developed nations sleep less than the “the recommended eight hours of nightly sleep” (Section 5).

    Walker’s book has likely wasted thousands of hours of life and worsened the health of people who read it and took its recommendations at face value (Section 7).

  46. ⁠, Alexey Guzey ():

    I’m an independent researcher with a background in Economics, Mathematics, and Cognitive Science. My biggest intellectual influences are Scott Alexander⁠, Dan Carlin⁠, Scott Adams, and Gwern⁠.

    Right now, I think about meta-science, biology and philanthropy⁠. My long-term goal is to make the future humane, aesthetic, and to make it happen faster.

    You can contact me at or via Twitter⁠, Telegram⁠, Facebook or VK

  47. ⁠, Andrew Gelman (2019-11-18):

    Asher Meir points to this hilarious post by Alexey Guzey entitled “Matthew Walker’s ‘Why We Sleep’ Is Riddled with Scientific and Factual Errors”.

    Just to start with, the post has a wonderful descriptive title. And the laffs start right away:

    [Table of contents for Guzey’s criticisms of Walker’s claims]

    Positively Nabokovian, I’d say. I mean it. The above table of contents makes me want to read more.

    I’ve not read Walker’s book and I don’t know anything about sleep research, so I won’t try to judge Guzey’s claims. I read through it and I found Guzey’s arguments to be persuasive, but, hey, I’m easily persuaded.

    I’d be happy to read a followup article by Matthew Walker, “Alexey Guzey’s ‘Matthew Walker’s “Why We Sleep” Is Riddled with Scientific and Factual Errors’ Is Riddled with Scientific and Factual Errors.” That (hypothetical) post could completely turn me around! Then, of course, I’d be waiting for Guzey’s reply, “Matthew Walker’s ‘Alexey Guzey’s “Matthew Walker’s ‘Why We Sleep’ Is Riddled with Scientific and Factual Errors” Is Riddled with Scientific and Factual Errors’ Is Riddled with Scientific and Factual Errors.” At that point, I’d probably have heard enough to have formed a firm opinion. Right now, the ball is totally in Walker’s court.

    …Let me tell you a story. I went to graduate school at Harvard. Finest university in the world. My first day in a Harvard class, I was sitting with rapt attention, learning all sorts of interesting and important things (for reals; it was an amazing class that motivated me to become a statistician), sitting at one of those chairs with a desk attached to it, you know, the kind of chair where the desk part flips up so it’s in front of you, and, on the bottom of that desk was a wad of gum.

    Back when I was in junior high, gum was almost a form of currency. I’d buy a pack of grape Bubble Yum for a quarter at the corner store on the way to school, then chew it in the morning during the endless hours between first period and lunch. I’d put one piece of gum in my mouth, chew it until it lost all its flavor, then add the second piece, chew it etc., and continue until I had a massive wad, all five pieces, ultimately flavorless, and I’d chew and chew and blow huge bubbles when the teacher wasn’t looking.

    I’m not trying to make myself out into some big rebel here; the point is, we all did that. So of course there was yucky gum under all the desks. You knew to never run your hands under a desk, cos you never knew what might turn up. That was junior high.

    Then in high school, everyone was much more mature, a lot less gum chewing . . . but still, gum under the desks. I took classes at the University of Maryland, a fine university with an OK basketball team . . . still, they had gum. Then I went to MIT, one of the finest engineering schools in the world . . . yup, gum. But Harvard? I’d hoped Harvard was better than that. But it wasn’t.

    Anyway, that’s how I felt, learning that this purveyor of (possibly) horribly false claims is not just a professor of neuroscience at a top university—we know that top universities have lots of frauds—but was hired by Google. Google! Here I am, almost sixty years old (I don’t feel close to 60, but that’s my problem, not yours), and still there’s room for disillusionment.

  48. ⁠, Andrew Gelman (2019-11-24):

    So. It’s been a week since Alexey Guzey posted his wonderfully-titled article, “Matthew Walker’s ‘Why We Sleep’ Is Riddled with Scientific and Factual Errors.”…As of this writing, the ball remains in Walker’s court.

    I googled “matthew walker” “alexey guzey” and “matthew walker” sleep and a few other things, but nowhere did I find any response from Walker to Guzey’s criticisms.

    It’s hard for me to imagine that Walker hasn’t heard about Guzey’s article by now, but I guess it’s possible that he (Walker) is on vacation or that he’s preparing a response but has not finished yet…While we’re waiting for Walker to respond, I had a few more thoughts:

    1. A few years ago, if someone were to claim that a celebrated professor of neuroscience and psychology at a major university had published a book on his own field of expertise, and the book was full of scientific and factual errors, that would’ve been a major scandal, no? But now, we’re like, yeah, sure, that’s just more same old same old. As the saying goes, the big scandal is how little a scandal this has been.
    2. What would be really cool would be if NPR and Joe Rogan ran interviews with Alexey Guzey about this story. NPR probably won’t bite. But Joe Rogan . . . he might go for this, right? I bet Joe Rogan, or someone on his team, reads social media. And Rogan likes combat. He’s had Walker on his show, now time to have Guzey come in with the critique. That said, I don’t know that a podcast is the best format for such a debate. I think blogging is a better way to go, as then there’s enough space to lay out all the evidence.
    3. Assuming Guzey’s criticisms hold up, I’m still trying to figure out what happened with that book. How could Walker introduce so many errors on his own area of expertise (or, I guess I should say, supposed expertise)? Was he just really really confused? Did he delegate the research and writing to lazy research assistants? Did he feel that his underlying story was important so the details didn’t matter? Did he conduct his research by putting all his notes onto index cards, then mistype material off the cards? I just don’t have a good way of thinking about these things.
    4. Guzey’s article is careful and in many ways bulletproof: he backs up each of his statements, he doesn’t exaggerate (even for humorous purposes), and nobody seems to have found any mistakes in what he wrote. In addition, Guzey has gone on the web and responded to comments: where people claim he got things wrong, he has responded in detail.

    This is excellent behavior on Guzey’s part but I just want to say that it should not be required. Suppose, just for the sake of argument, that Guzey was gratuitously rude, that he made some claims without making the evidence clear, even that he made some mistakes. Suppose that he spent 13 hours or even 1.3 hours rather than 130 hours writing this post, so that he only got to the highlights and didn’t carefully check everything he wrote? That would be unfortunate, but it wouldn’t make his critique less valid.

    What I’m saying is: by preparing a critique that’s clean, clear, well sourced, well written—actually enjoyable to read—a critique that doesn’t make any questionable claims, by being so careful, Guzey has done us a favor. He’s made it easier to follow what he’s written, and he’s making it more difficult for someone to dismiss his arguments on superficial grounds. He’s raising the game, and that’s wonderful.

    But if Guzey hadn’t gone to that trouble, he could still be making a useful contribution. It would just be the duty of Walker to extract that contribution.

  49. ⁠, Andrew Gelman (2019-12-26):

    Last month I posted on the book Why We Sleep, which had been dismantled in a post by Alexey Guzey. A week later I looked again, and Walker had not responded to Guzey in any way. In the meantime, Why We Sleep has also been endorsed by O.G. software entrepreneur Bill Gates. Programmers typically have lots of personal experience of sleep deprivation, so this is a topic close to their hearts.

    As of this writing, it seems that Walker still has not responded to most of the points Guzey made about errors in his book. The closest thing I can find is this post dated 2019-12-19, titled “Why We Sleep: Responses to questions from readers.” The post is on a site called On Sleep that appears to have been recently set up—I say this because I see no internet record of it, and it currently has just this one post. I’m not trying to be some sort of sleuth here, I’m just trying to figure out what’s going on. For now, I’ll assume that this post is written by Walker.

    The post begins:

    The aim of the book, Why We Sleep, is to provide the general public access to a broad collection of sleep research. Below, I address thoughtful questions that have been raised regarding the book and its content in reviews⁠, online forums and direct emails that I have received. Related, I very much appreciate being made aware of any errors in the book requiring revision. I see this as a key part of good scholarship. Necessary corrections will be made in future editions.

    The first link above goes to a page of newspaper and magazine reviews, and the second link goes to Guzey’s post. I didn’t really see any questions raised regarding the book in those newspaper and magazine reviews, so I’m guessing that the “thoughtful questions” that Walker is referring to are coming entirely, or nearly entirely, from Guzey. It seems odd for Walker to cite “online forums” and only link to one of them. Also, although Walker links to Guzey, he does not address the specific criticisms Guzey made of his book.

    …Based on his book and his TED talk, it seems that Walker has a message to send, and he doesn’t care much about the details. He’s sloppy with sourcing, gets a lot wrong, and has not responded well to criticism.

    But this does not mean we should necessarily dismiss his message. Ultimately his claims need to be addressed on their merits.

  50. ⁠, Andrew Gelman (2019-12-27):

    In his post “Matthew Walker’s ‘Why We Sleep’ Is Riddled with Scientific and Factual Errors” (see our discussions ⁠, ⁠, and ), Alexey Guzey added the following stunner:

    [Screenshot of Guzey’s criticism: Walker, to bolster his more-sleep-is-always-better paradigm, has edited a research graph to remove a reduction in risk among those sleeping least.]

    We’ve left “super-important researcher too busy to respond to picky comments” territory and left “well-intentioned but sloppy researcher can’t keep track of citations” territory and entered “research misconduct” territory.

    …This seems like a good time to revisit that Dan Davies line:

    Good ideas do not need lots of lies told about them in order to gain public acceptance.

  51. ⁠, Kinkajoe (2019-11-16):

    I study sleep. While some of Walker’s claims may be hyperbolic, I think they are within reason and justified by the important message he is trying to convey. Too many people have begun to forgo sleep in their health choices, and he has helped raise awareness of sleep’s role in our health.

    Many of these criticisms are quite unfair or misunderstanding the science…

  52. #gwern-rosenthal-review

  53. ⁠, Jenna McLaughlin, Zach Dorfman (Yahoo News) (2019-12-30):

    [Wide-ranging review of how social media, government database hacks, personal genomics, open-source intelligence, and pervasive surveillance are destroying traditional espionage, as undercover agents are unable to enter countries or recruit sources without being instantly exposed, forcing ever greater reliance on signals intelligence/​​​​hacking. Failures in OPSEC have resulted in entire countries going dark and the exposure of multiple US espionage networks and execution of sources, as well as embarrassing many countries when organizations like Bellingcat are able to expose agents and operations. While agencies like the FBI and CIA have begun adapting to the new reality, they have a long way to go, and countries like Russia or China or North Korea will only become harder to penetrate and obtain intelligence on.]

  54. ⁠, Reason (Fight Aging) (2019-12-31):

    [Aging research over the past year, 2019. Categories include: The State of Funding, Conferences and Community, Clinical Development, Cellular Mitochondria in Aging, Nuclear DNA Damage, Cross-Links, Neurodegeneration, Upregulation of Cell Maintenance, In Vivo Cell Reprogramming, Parabiosis, The Gut in Aging, Biomarkers of Aging, Cancer, The Genetics of Longevity, Regenerative Medicine, Odds and Ends, Short Articles, and In Conclusion.]

    As has been the case for a few years now, progress towards the implementation of rejuvenation therapies is accelerating dramatically, ever faster with each passing year. While far from everyone is convinced that near term progress in addressing human aging is plausible, it is undeniable that we are far further ahead than even a few years ago. Even the public at large is beginning to catch on. While more foresightful individuals of past generations could do little more than predict a future of rejuvenation and extended healthy lives, we are in a position to make it happen.

  55. ⁠, David Shariatmadari (2018-04-16):

    In 50s Middle Grove, things didn’t go according to plan either, though the surprise was of a different nature. Despite his pretence of leaving the 11-year-olds to their own devices, Sherif and his research staff, posing as camp counsellors and caretakers, interfered to engineer the result they wanted. He believed he could make the two groups, called the Pythons and the Panthers, sworn enemies via a series of well-timed “frustration exercises”. These included his assistants stealing items of clothing from the boys’ tents and cutting the rope that held up the Panthers’ homemade flag, in the hope they would blame the Pythons. One of the researchers crushed the Panthers’ tent, flung their suitcases into the bushes and broke a boy’s beloved ukulele. To Sherif’s dismay, however, the children just couldn’t be persuaded to hate each other…The robustness of the boys’ “civilised” values came as a blow to Sherif, making him angry enough to want to punch one of his young academic helpers. It turned out that the strong bonds forged at the beginning of the camp weren’t easily broken. Thankfully, he never did start the forest fire—he aborted the experiment when he realised it wasn’t going to support his hypothesis.

    But the Rockefeller Foundation had given Sherif $38,000 in 1953 (about $306,948 in 2019 dollars). In his mind, perhaps, if he came back empty-handed, he would face not just their anger but the ruin of his reputation. So, within a year, he had recruited boys for a second camp, this time in Robbers Cave state park in Oklahoma. He was determined not to repeat the mistakes of Middle Grove.

    …At Robbers Cave, things went more to plan. After a tug-of-war in which they were defeated, the Eagles burned the Rattlers’ flag. Then all hell broke loose, with raids on cabins, vandalism and food fights. Each moment of confrontation, however, was subtly manipulated by the research team. They egged the boys on, providing them with the means to provoke one another—who else, asks Perry in her book, could have supplied the matches for the flag-burning?

    …Sherif was elated. And, with the publication of his findings that same year, his status as world-class scholar was confirmed. The “Robbers Cave experiment” is considered seminal by social psychologists, still one of the best-known examples of “realistic conflict theory”. It is often cited in modern research. But was it scientifically rigorous? And why were the results of the Middle Grove experiment—where the researchers couldn’t get the boys to fight—suppressed? “Sherif was clearly driven by a kind of a passion”, Perry says. “That shaped his view and it also shaped the methods he used. He really did come from that tradition in the 30s of using experiments as demonstrations—as a confirmation, not to try to find something new.” In other words, think of the theory first and then find a way to get the results that match it. If the results say something else? Bury them…“I think people are aware now that there are real ethical problems with Sherif’s research”, she tells me, “but probably much less aware of the backstage [manipulation] that I’ve found. And that’s understandable because the way a scientist writes about their research is accepted at face value.” The published report of Robbers Cave uses studiedly neutral language. “It’s not until you are able to compare the published version with the archival material that you can see how that story is shaped and edited and made more respectable in the process.” That polishing up still happens today, she explains. “I wouldn’t describe him as a charlatan…every journal article, every textbook is written to convince, persuade and to provide evidence for a point of view. So I don’t think Sherif is unusual in that way.”

  56. ⁠, Sarah Zhang (Discover) (2012-05-16):

    [Fungi are some of the most common organisms around: prolific and hardy, they are major causes of infection-related mortality in plants and reptiles and can infect and kill almost anything, yet mammals usually die of bacteria/viruses/parasites. Dying of a fungus is rare, and we hardly even get fungal infections except in odd places like our extremities (eg toes), or odd times of life like when bats hibernate. Why? Perhaps because we are warm-blooded, so our body heat is fatal to fungi. This explains why extremities or hibernating bats are vulnerable (colder). And perhaps this even played a role in the extinction of the dinosaurs and triumph of mammals?]

  57. ⁠, Randi D. Rotjan, Jeffrey R. Chabot, Sara M. Lewis (2010-04-01):

    Vacancy chains involve unique patterns of resource acquisition behaviors that determine how reusable resources are distributed through animal populations. Shell vacancy chains have been described for several hermit crab species, both terrestrial and marine, but little is known about the ecological and behavioral dynamics of shell choice in social versus solitary contexts. Here, we present a novel conceptual framework that differentiates 2 types of shell vacancy chain in hermit crabs and discuss fundamentally distinct predictions concerning the behavioral and ecological costs and benefits associated with synchronous versus asynchronous vacancy chains. In laboratory studies of the terrestrial hermit crab Coenobita clypeatus, we found support for the prediction that social context alters shell acquisition behaviors. Field observations demonstrated that both synchronous and asynchronous vacancy chains are common and revealed previously undescribed waiting and piggybacking behaviors that appear to facilitate synchronous vacancy chains. Additionally, simulation results from an agent-based model showed that population density and waiting behaviors can both influence the likelihood of synchronous vacancy chains. Together, these results indicate that better understanding of hermit crab resource acquisition requires studying social behaviors, including vacancy chain formation.

  58. ⁠, Ferris Jabr (2012-06-05):

    …As they grow, hermit crabs must move into larger shells, so they are always on the lookout for a more spacious dwelling. And an undamaged shell is preferable to a broken one, even if the shells are the same size. Knowing this, the researchers decided to dramatically change the available hermit crab real estate on Carrie Bow Cay. They placed 20 beautifully intact shells that were a little too big for most hermit crabs at various spots around the island and watched what happened.

    When a lone crab encountered one of the beautiful new shells, it immediately inspected the shelter with its legs and antennae and scooted out of its current home to try on the new shelter for size. If the new shell was a good fit, the crab claimed it. Classic hermit crab behavior. But if the new shell was too big, the crab did not scuttle away disappointed—instead, it stood by its discovery for anywhere between 15 minutes and 8 hours, waiting. This was unusual. Eventually other crabs showed up, each one trying on the shell. If the shell was also too big for the newcomers, they hung around too, sometimes forming groups as large as 20. The crabs did not gather in a random arrangement, however. Rather, they clamped onto one another in a conga line stretching from the largest to smallest animal—a behavior the biologists dubbed “piggybacking.”

    Only one thing could break up the chain of crabs: a Goldilocks hermit crab for whom the shell introduced by Lewis and Rotjan was just right. As soon as such a crab claimed its new home, all the crabs in queue swiftly exchanged shells in sequence. The largest crab at the front of the line seized the Goldilocks crab’s abandoned shell. The second largest crab stole into the first’s old shell. And so on.

    No one had ever documented such well-orchestrated shell swapping before, but similar behavior was not unknown. In 1986, Ivan Chase of Stony Brook University made the first observations of hermit crabs exchanging shells in a “vacancy chain”—a term originally coined by social scientists to describe the ways that people trade coveted resources like apartments and jobs. When one person leaves, another moves in. Since then, several researchers—including Lewis and Rotjan—have studied the behavior in different hermit crab species. Some preliminary evidence suggests that other animals use vacancy chains too, including clown fish, lobsters, octopuses and some birds. As Chase explains in the June issue of Scientific American, vacancy chains are an excellent way to distribute resources: Unlike more typical competition, a single vacancy chain benefits everyone involved—each individual gets an upgrade. So it makes sense that hermit crabs and other animals have evolved sophisticated social behaviors to make the most of vacancy chains.
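
    The cascade itself is essentially a one-pass upgrade along a size-sorted queue. A toy sketch of a synchronous vacancy chain (sizes invented, and the fit rule simplified to “each crab wears a shell equal to its size”):

    ```python
    def cascade(queue_sizes, vacant_shell):
        """Largest-to-smallest queue of crabs, each in a shell equal to its
        size; once the vacant shell fits the front crab, every crab upgrades
        to the shell freed by the crab ahead (a synchronous vacancy chain)."""
        assignments = {}
        for crab in queue_sizes:            # largest first, as in the conga line
            assignments[crab] = vacant_shell
            vacant_shell = crab             # the shell this crab just vacated
        return assignments, vacant_shell    # leftover: the smallest, empty shell

    queue = [9.0, 7.5, 6.0, 4.8]            # piggybacking crabs, by size
    new_homes, leftover = cascade(queue, vacant_shell=10.0)
    print(new_homes)   # every crab gets a slightly roomier shell
    print(leftover)    # 4.8: the now-empty smallest shell
    ```

    The point of the structure is visible in the output: unlike ordinary competition, a single vacancy benefits every participant in the chain.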

  59. ⁠, Michael Tetley (2000-12-23):

    If you are a medical professional and have been trained in a “civilised” country you probably know next to nothing about the primate Homo sapiens and how they survive in the wild. You probably do not know that nature has provided an automatic manipulator to correct most spinal and peripheral joint lesions in primates. In common with millions of other so called civilised people you suffer unnecessarily from musculoskeletal problems and are discouraged about how to treat the exponential rise in low back pain throughout the developed world. Humans are one of 200 species of primates.1 All primates suffer from musculoskeletal problems; nature, recognising this fact, has given primates a way to correct them.

    The study of animals in the wild has been a lifelong pursuit. I grew up with tribal people and in 1953–4 commanded a platoon of African soldiers from 9 tribes, who taught me to sleep on my side without a pillow so that I could listen out for danger with both ears. I have organised over 14 expeditions all over the world to meet native peoples and study their sleeping and resting postures. They all adopted similar postures and exhibited few musculoskeletal problems. I must emphasise that this is not a comparison of genes or races but of lifestyles. I tried to carry out surveys to collect evidence but they were meaningless, as tribespeople give you the answer they think you want. They often object to having their photographs taken, so I have demonstrated the postures.

    Summary points:

    • Forest dwellers and nomads suffer fewer musculoskeletal lesions than “civilised” people
    • Nature’s automatic manipulator during sleep is the kickback against the vertebrae by the ribs when the chest is prevented from movement by the forest floor
    • Various resting postures correct different joints
    • Pillows are not necessary
  60. ⁠, Alexandra Barham, Michael Tetley (2009-02-13):

    Mike, who was born and raised in Kenya speaking its native language Swahili, was conscripted to command indigenous troops in the King’s African Rifles as unrest began to spread throughout his homeland. It was after Mau Mau militants ambushed a police truck that a battle erupted between the rivals. A clash Mike so vividly recalls as it marked the last time he could appreciate the gift of sight before it was lost. Remembering the battle, Mike said: “One of the Mau Mau threw a grenade at me and it landed by my foot. I jumped away from it and threw myself on the ground hoping that when it went off I wouldn’t get hit. The next thing I remember I was running flat out and I got a bullet in my right ear which came out of my right eye. My dad always said I didn’t have anything between my ears and now he’s got definite proof. The next thing I remember I fell over and as I picked myself up everything went black. I sat down and I can’t remember much more than that—not in a logical sense anyway.” Dissatisfied with blasting their victim with a rifle—nearly killing him—the Mau Mau rebels returned armed with machetes to cut up Mike, who lay helpless on the ground nursing his wound. Powerless to defend himself, Mike has always owed his survival to an ally soldier, Reguton—with whom he still has regular contact—who shot dead the 7 rebels.

    …Mike was transferred to a military hospital in England after the attack where he received the devastating news that he would never see again. Just a week before the shooting Mike had asked for his girlfriend’s hand in marriage, but following doctors’ gloomy prognosis he broke off the engagement. “After I was blinded I never thought I could look after a wife”, he said. “I didn’t think I would be able to look after myself let alone anyone else—it’s one of my biggest regrets.” But anxious not to allow his disability to blight the years ahead of him, Mike began learning the art of braille at St Dunstan’s, a national charity for the blind. Soon after Mike enrolled on a physiotherapy course with the Royal National Institute for the Blind (RNIB) suggested by his dad who felt the career suited his structural interests. It was during his training that he met his late wife Selma, and the couple eventually married in 1957. For the past 45 years Mike has been running a thriving physiotherapy clinic at his St Albans home and he remains committed to his work.

  61. ⁠, Ian Watson (1999):

    [A Scottish writer’s memoir of years working on A.I. with Stanley Kubrick & coping with his eccentricities.

    Summoned to Kubrick’s secluded mansion and offered an enormous sum of money, Watson began collaborating on a film idea with Kubrick, a perfectionist who demanded endless marathon revisions of possible stories and ideas, only to throw them out and hare off on an entirely different avenue; he would spend extravagantly on travel or books on a topic, or demand photos of a particular place or a specific item like a bag on sale, only to discard them without a second look, perennially challenging his assistants’ patience. (This attitude extended to his films, where he thought nothing of ordering in an entire plastic replica garden, only to decide it was inadequate, discard it, and order real palm trees flown in.) He was a lover of animals like cats, dogs, and birds, requiring a servant to mow grass & deliver it daily to a cat kept upstairs, although his affection was often quite as harmful as helpful (his generosity in ordering feeding of the birds made them obese). Careless of rough drafts, he’d lose printouts or erase disks, yet, ever paranoid, he would be infuriated when the local hacker who assisted them with computer problems restored files from backups the hacker had prudently kept. This paranoia extended to global geopolitics, leaving him terrified that some crisis would trigger nuclear war in the Middle East.

    For all the surreal comedy, when Kubrick dies—A.I. still being nowhere near filming, of course—and Watson writes up his memoirs, he finds that he misses Kubrick and “I remain sad that he’s gone.”]

  62. ⁠, Mark Lowrie, Claire Bessant, Robert J. Harvey, Andrew Sparkes, Laurent Garosi (2015-04-27):

    Objectives: This study aimed to characterise feline audiogenic reflex seizures (FARS).

    Methods: An online questionnaire was developed to capture information from owners with cats suffering from FARS. This was collated with the medical records from the primary veterinarian. 96 cats were included.

    Results: Myoclonic seizures were one of the cardinal signs of this syndrome (90⁄96), frequently occurring prior to generalised tonic-clonic seizures (GTCSs) in this population. Other features include a late onset (median 15 years) and absence seizures (6⁄96), with most seizures triggered by high-frequency sounds amid occasional spontaneous seizures (up to 20%). Half the population (48⁄96) had hearing impairment or were deaf. One-third of cats (35⁄96) had concurrent diseases, most likely reflecting the age distribution. Birmans were strongly represented (30⁄96). Levetiracetam gave good seizure control. The course of the epilepsy was non-progressive in the majority (68⁄96), with an improvement over time in some (23⁄96). Only 33⁄96 and 11⁄90 owners, respectively, felt the GTCSs and myoclonic seizures affected their cat’s quality of life (QoL). Despite this, many owners (50⁄96) reported a slow decline in their cat’s health, becoming less responsive (43⁄50), not jumping (41⁄50), becoming uncoordinated or weak in the pelvic limbs (24⁄50) and exhibiting dramatic weight loss (39⁄50). These signs were exclusively reported in cats experiencing seizures for >2 years, with 42⁄50 owners stating these signs affected their cat’s QoL.

    Conclusions and relevance: In gathering data on audiogenic seizures in cats, we have identified a new syndrome named FARS with a geriatric onset. Further studies are warranted to investigate potential genetic predispositions to this condition.

  63. ⁠, Kicks Condor (2019-12):

    Eh, this is doomed—Waxy or Imperica should take a crack at this. The AV Club did a list of ‘things’. I wanted to cover stuff that wasn’t on there. A lot happened outside of celebrities, Twitter and momentary memes. (We all obviously love @electrolemon, “double rainbow”, Key & Peele’s Gremlins 2 Brainstorm, 10 hr vids, etc.)

    There is a master list of lists as well.

    Hope for this list—get u mad & u destroy me & u blog in 2020.

  64. ⁠, Kicks Condor ():

    [Homepage of programmer Kicks Condor; hypertext-oriented link compilation and experimental design blog.]

  65. https://www.youtube.com/watch?v=QH2-TGUlwu4

  66. ⁠, Nick Paumgarten (2008-04-21):

    [Profile of elevator safety, technology, and economics: the history and present day of elevators, interwoven with a story of a man trapped in an elevator for 41 hours. Elevators are remarkably safe and over-engineered, and make skyscrapers, and hence dense cities, economically possible. Balancing elevator space with tenant space is a critical part of elevator design, as is routing between floors and figuring out the exact socially-acceptable density of passengers. Elevator technology continues advancing, driven by ultra-tall skyscrapers like the Burj Khalifa. Nevertheless, the standard elevator design is so simple, energetically-efficient, and safe that it’s hard to improve on.]

  67. ⁠, Stuart Cheshire (2001):

    [Seminal essay explaining why the rollout of “broadband” home connections to replace 56k dialups had not improved regular WWW browsing as much as people expected: while broadband had greater throughput, it had similar (or worse) latency.

    Because much of the wallclock time of any Internet connection is spent setting up and negotiating with the other end, and not that much is spent on the raw transfer of large numbers of bytes, the speedup is far smaller than one would expect by dividing the respective bandwidths.

    Further, while bandwidth/throughput is easy to improve by adding more or higher-quality connections and can be patched elsewhere in the system by adding parallelism or upgrading parts or investing in data compression, the latency-afflicted steps are stubbornly serial, any time lost is physically impossible to retrieve, and many steps are inherently limited by the speed of light—more capacious connections quickly run into Amdahl’s law, where the difficult-to-improve serial latency-bound steps dominate the overall task. As Cheshire summarizes it:]

    1. Fact One: Making more bandwidth is easy.
    2. Fact Two: Once you have bad latency you’re stuck with it.
    3. Fact Three: Current consumer devices have appallingly bad latency.
    4. Fact Four: Making limited bandwidth go further is easy.

    …That’s the problem with communications devices today. Manufacturers say “speed” when they mean “capacity”. The other problem is that as far as the end-user is concerned, the thing they want to do is transfer large files quicker. It may seem to make sense that a high-capacity slow link might be the best thing for the job. What the end-user doesn’t see is that in order to manage that file transfer, their computer is sending dozens of little control messages back and forth. The thing that makes computer communication different from television is interactivity, and interactivity depends on all those little back-and-forth messages.

    The phrase “high-capacity slow link” that I used above probably looked very odd to you. Even to me it looks odd. We’ve been used to wrong thinking for so long that correct thinking looks odd now. How can a high-capacity link be a slow link? High-capacity means fast, right? It’s odd how that’s not true in other areas. If someone talks about a “high-capacity” oil tanker, do you immediately assume it’s a very fast ship? I doubt it. If someone talks about a “large-capacity” truck, do you immediately assume it’s faster than a small sports car?

    We have to start making that distinction again in communications. When someone tells us that a modem has a speed of 28.8 kbit/​​​​sec we have to remember that 28.8 kbit/​​​​sec is its capacity, not its speed. Speed is a measure of distance divided by time, and ‘bits’ is not a measure of distance.

    I don’t know how communications came to be this way. Everyone knows that when you buy a hard disk you should check what its seek time is. The maximum transfer rate is something you might also be concerned with, but the seek time is definitely more important. Why does no one think to ask what a modem’s ‘seek time’ is? The latency is exactly the same thing. It’s the minimum time between asking for a piece of data and getting it, just like the seek time of a disk, and it’s just as important.
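
    To make Cheshire’s arithmetic concrete, here is a minimal sketch (in Python; the page size, RTTs, and round-trip count are illustrative assumptions, not numbers from the essay) modeling a small web fetch as a few serial round-trips plus raw transfer time:

    ```python
    def fetch_time(size_bytes, bandwidth_bps, rtt_s, round_trips=4):
        """Crude model: DNS lookup + TCP handshake + HTTP request cost a few
        serial round-trips, then the payload streams at full bandwidth."""
        return round_trips * rtt_s + size_bytes * 8 / bandwidth_bps

    page = 40_000  # a ~40KB page, circa 2001 (assumed size)
    dialup    = fetch_time(page, 56_000,    rtt_s=0.15)  # 56k modem
    broadband = fetch_time(page, 1_500_000, rtt_s=0.10)  # 1.5 Mbps DSL
    print(f"dialup: {dialup:.2f}s  broadband: {broadband:.2f}s")  # ~6.31s vs ~0.61s
    # ~27x the bandwidth yields only a ~10x speedup, and no amount of extra
    # bandwidth can push the fetch below the 0.4s of serial round-trips.
    ```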

  68. ⁠, Dan Luu (2017-10-16; technology⁠, cs⁠, design):

    [Dan Luu continues his investigation of why computers feel so laggy and have such high latency compared to old computers (⁠, ⁠, ⁠, cf text editor analysis).

    He measures 21 keyboard latencies using a logic analyzer, finding a range of 15–60ms (!), representing a waste of a large fraction of the available ~100–200ms latency budget before a user notices and is irritated (“the median keyboard today adds as much latency as the entire end-to-end pipeline of a fast machine from the 70s.”). The latency estimates are surprising, and do not correlate with advertised traits. They simply have to be measured empirically.]

    We can see that, even with the limited set of keyboards tested, there can be as much as a 45ms difference in latency between keyboards. Moreover, a modern computer with one of the slower keyboards attached can’t possibly be as responsive as a quick machine from the 70s or 80s because the keyboard alone is slower than the entire response pipeline of some older computers. That establishes the fact that modern keyboards contribute to the latency bloat we’ve seen over the past forty years…Most keyboards add enough latency to make the user experience noticeably worse, and keyboards that advertise speed aren’t necessarily faster. The two gaming keyboards we measured weren’t faster than non-gaming keyboards, and the fastest keyboard measured was a minimalist keyboard from Apple that’s marketed more on design than speed.

  69. ⁠, Dan Luu (2017-07-18):

    These graphs show the distribution of latencies for each terminal. The y-axis has the latency in milliseconds. The x-axis is the percentile (eg., 50 represents the 50%-ile keypress, ie., the median keypress). Measurements are with macOS unless otherwise stated. The graph on the left is when the machine is idle, and the graph on the right is under load. If we just look at median latencies, some setups don’t look too bad—terminal.app and emacs-eshell are at roughly 5ms unloaded, small enough that many people wouldn’t notice. But most terminals (st, alacritty, hyper, and iterm2) are in the range where you might expect people to notice the additional latency even when the machine is idle. If we look at the tail when the machine is idle, say the 99.9%-ile latency, every terminal gets into the range where the additional latency ought to be perceptible, according to studies on user interaction. For reference, the internally generated keypress to GPU memory trip for some terminals is slower than the time it takes to send a packet from Boston to Seattle and back, about 70ms.

    …Most terminals have enough latency that the user experience could be improved if the terminals concentrated more on latency and less on other features or other aspects of performance. However, when I search for terminal benchmarks, I find that terminal authors, if they benchmark anything, benchmark the speed of sinking stdout or memory usage at startup. This is unfortunate because most “low performance” terminals can already sink stdout many orders of magnitude faster than humans can keep up with, so further optimizing stdout throughput has a relatively small impact on actual user experience for most users. Likewise for reducing memory usage when an idle terminal uses 0.01% of the memory on my old and now quite low-end laptop. If you work on a terminal, perhaps consider relatively more latency and interactivity (eg., responsiveness to ^C) optimization and relatively less throughput and idle memory usage optimization.
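
    A quick sketch of why these reports lead with percentiles rather than means (the sample distribution below is invented for illustration, not Luu’s data): a latency distribution can have a comfortable median while its tail sits squarely in the perceptible range.

    ```python
    import random

    random.seed(0)
    # Mostly ~5ms responses, but 1% of keypresses stall for 30-70ms:
    samples = [random.gauss(5, 1) + (random.uniform(30, 70) if random.random() < 0.01 else 0)
               for _ in range(100_000)]

    def percentile(xs, p):
        xs = sorted(xs)
        return xs[min(len(xs) - 1, int(p / 100 * len(xs)))]

    for p in (50, 99, 99.9):
        print(f"{p}%-ile: {percentile(samples, p):5.1f} ms")
    # The median looks fine (~5ms), but the 99.9%-ile lands in the clearly
    # perceptible range -- and a long interactive session samples the tail often.
    ```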

  70. ⁠, Dan Luu (2017-02-08):

    A couple years ago, I took a road trip from Wisconsin to Washington and mostly stayed in rural hotels on the way. I expected the internet in rural areas too sparse to have cable internet to be slow, but I was still surprised that a large fraction of the web was inaccessible. Some blogs with lightweight styling were readable, as were pages by academics who hadn’t updated the styling on their website since 1995. But very few commercial websites were usable (other than Google). When I measured my connection, I found that the bandwidth was roughly comparable to what I got with a 56k modem in the 90s. The latency and packet loss were substantially worse than the average day on dialup: latency varied between 500ms and 1000ms and packet loss varied between 1% and 10%. Those numbers are comparable to what I’d see on dialup on a bad day.

    Despite my connection being only a bit worse than it was in the 90s, the vast majority of the web wouldn’t load…When Microsoft looked at actual measured connection speeds, they found that half of Americans don’t have broadband speed. Heck, AOL had 2 million dial-up subscribers in 2015, just AOL alone. Outside of the U.S., there are even more people with slow connections. I recently chatted with Ben Kuhn, who spends a fair amount of time in Africa, about his internet connection:

    I’ve seen ping latencies as bad as ~45 sec and packet loss as bad as 50% on a mobile hotspot in the evenings from Jijiga, Ethiopia. (I’m here now and currently I have 150ms ping with no packet loss but it’s 10am). There are some periods of the day where it ~never gets better than 10 sec and ~10% loss. The internet has gotten a lot better in the past ~year; it used to be that bad all the time except in the early mornings.

    …Let’s load some websites that programmers might frequent with a variety of simulated connections to get data on page load times…The timeout for tests was 6 minutes; anything slower than that is listed as FAIL. Pages that failed to load are also listed as FAIL. A few things that jump out from the table are:

    1. A large fraction of the web is unusable on a bad connection. Even on a good (0% packet loss, no ping spike) dialup connection, some sites won’t load…If you were to look at the 90%-ile results, you’d see that most pages fail to load on dialup and the “Bad” and “😱” connections are hopeless for almost all sites.
    2. Some sites will use a lot of data!

    …The flaw in the “page weight doesn’t matter because average speed is fast” [claim] is that if you average the connection of someone in my apartment building (which is wired for 1Gbps internet) and someone on 56k dialup, you get an average speed of 500 Mbps. That doesn’t mean the person on dialup is actually going to be able to load a 5MB website. The average speed of 3.9 Mbps comes from a 2014 Akamai report, but it’s just an average. If you look at Akamai’s 2016 report, you can find entire countries where more than 90% of IP addresses are slower than that!..“Use bcrypt” has become the mantra for a reasonable default if you’re not sure what to do when storing passwords. The web would be a nicer place if “use webpagetest” caught on in the same way. It’s not always the best tool for the job, but it sure beats the current defaults.
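
    The averaging fallacy is easy to check with arithmetic (the 1Gbps/56k speeds are Luu’s example; the 5MB page-load figures are my own back-of-the-envelope):

    ```python
    fast, slow = 1_000_000_000, 56_000      # 1 Gbps and 56 kbps, in bits/s
    print(f"mean 'speed': {(fast + slow) / 2 / 1e6:.0f} Mbps")  # ~500 Mbps

    page_bits = 5 * 1_000_000 * 8           # a 5MB page
    print(f"1 Gbps load: {page_bits / fast:.2f}s")         # 0.04s
    print(f"56k load:    {page_bits / slow / 60:.1f} min")  # ~11.9 min
    # The 500 Mbps 'average' describes neither user: the dialup user cannot
    # realistically load the page at all before timeouts and retries kick in.
    ```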

  71. ⁠, Chris Zacharias (2012-12-21):

    [Google engineer recounts the results of heavily optimizing YouTube to make it usable in slow Third World countries: in an example of selection bias, he found that despite making YouTube better for all users, average page load time got worse—because now Africans were actually able to use it.]

    When we plotted the data geographically and compared it to our total numbers broken out by region, there was a disproportionate increase in traffic from places like Southeast Asia, South America, Africa, and even remote regions of Siberia. Further investigation revealed that, in these places, the average page load time under [the optimized version] Feather was over 2 minutes! This meant that a regular video page, at over a megabyte, was taking more than 20 minutes to load! This was the penalty incurred before the video stream even had a chance to show the first frame. Correspondingly, entire populations of people simply could not use YouTube because it took too long to see anything. Under Feather, despite it taking over 2 minutes to get to the first frame of video, watching a video actually became a real possibility. Over the week, word of Feather had spread in these areas and our numbers were completely skewed as a result. Large numbers of people who were previously unable to use YouTube before were suddenly able to.

    Through Feather, I learned a valuable lesson about the state of the Internet throughout the rest of the world. Many of us are fortunate to live in high bandwidth regions, but there are still large portions of the world that do not. By keeping your client side code small and lightweight, you can literally open your product up to new markets.
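
    The statistical trap here generalizes: an improvement that admits previously excluded slow users can make the measured average worse even though every individual is better off. A toy model (all numbers invented for illustration):

    ```python
    # Before Feather: only users on fast connections can use the site at all.
    before = [2.0] * 90                  # page load times in seconds
    # After: everyone loads twice as fast, AND 30 formerly excluded slow
    # users can now (barely) use the site.
    after = [1.0] * 90 + [120.0] * 30

    avg = lambda xs: sum(xs) / len(xs)
    print(f"before: {avg(before):.1f}s")  # 2.0s
    print(f"after:  {avg(after):.1f}s")   # ~30.8s: the average 'regressed'
    # even though every user, old and new, is strictly better off.
    ```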

  72. ⁠, Dan Luu (2017-12):

    I’ve had this nagging feeling that the computers I use today feel slower than the computers I used as a kid. As a rule, I don’t trust this kind of feeling because human perception has been shown to be unreliable in empirical studies, so I carried around a high-speed camera and measured the response latency of devices I’ve run into in the past few months. These are tests of the latency between a keypress and the display of a character in a terminal (see appendix for more details)…If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

    …Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the iPad Pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

    If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

    Unfortunately, it’s a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the Apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

  73. ⁠, Mark McGranaghan (Ink & Switch) (2018-11):

    You spend lots of time waiting on your computer. You pause while apps start and web pages load. Spinner icons are everywhere. Hardware gets faster, but software still feels slow. What gives? If you use your computer to do important work, you deserve fast software. Too much of today’s software falls short. At the Ink & Switch research lab we’ve researched why that is, so that we can do better. This article shares what we’ve learned…Let’s look at an example of how latency can add up:

    Latency waterfall example: A hypothetical example of end-to-end latency from input to display. Dashed vertical lines indicate cycles the pipeline needs to wait for.

    …There is a deep stack of technology that makes a modern computer interface respond to a user’s requests. Even something as simple as pressing a key on a keyboard and having the corresponding character appear in a text input box traverses a lengthy, complex gauntlet of steps, from the scan rate of the keyboard, through the OS and framework processing layers, through the graphics card rendering and display refresh rate. There is reason for this complexity, and yet we feel sad that computer users trying to be productive with these devices are so often left waiting, watching spinners, or even just with the slight but still perceptible sense that their devices simply can’t keep up with them.
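
    As a sketch of how such a waterfall stacks up, here are hypothetical per-stage budgets for a single keypress (round numbers invented for illustration, not the article’s measurements):

    ```python
    # Hypothetical stage latencies (ms) for one keypress-to-pixel trip.
    stages = {
        "keyboard scan":    8,   # ~half of a 16ms scan cycle, on average
        "USB polling":      4,
        "OS + framework":  10,
        "application":      5,
        "compositor/GPU":  12,
        "display refresh":  8,   # ~half of a 60Hz frame, on average
        "pixel response":   5,
    }
    for name, ms in stages.items():
        print(f"{name:>16}: {ms:3d} ms")
    print(f"{'total':>16}: {sum(stages.values()):3d} ms")
    # Each stage looks individually harmless; together they consume ~50ms of
    # a ~100ms perceptibility budget before the app does any real work.
    ```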

    • What feels slow

      • Latency not throughput
      • Touch interfaces
      • Typing
      • Mousing
      • Applications
      • Real-world apps
    • Where slowness comes from

      • Input devices
      • Sample rates
      • Displays and GPUs
      • Cycle stacking
      • Runtime overhead
      • Latency by design
      • User-hostile work
      • Application code
      • Putting it together
    • Toward fast software

    • References

  74. ⁠, Pavel Fatin (2015-12-20):

    In this article I examine human and machine aspects of typing latency (“typing lag”) and present experimental data on latency of popular text / code editors. The article is inspired by my work on implementing “zero-latency typing” in IntelliJ IDEA.

    1. Human side

       1.1. Feedback
       1.2. Motor skill
       1.3. Internal model
       1.4. Multisensory integration
       1.5. Effects

    2. Machine side

       2.1. Input latency
       2.2. Processing latency
       2.3. Output latency
       2.4. Total latency

    3. Editor benchmarks

       3.1. Configuration
       3.2. Methodology
       3.3. Windows
       3.4. Linux
       3.5. VirtualBox

    4. Summary

    5. Links

    …To measure processing delays experimentally I created Typometer—a tool to determine and analyze visual latency of text editors (sources). Typometer works by generating OS input events and using screen capture to measure the delay between a keystroke and a corresponding screen update. Hence, the measurement encompasses all the constituents of processing latency (i.e. OS queue, VM, editor, GPU pipeline, buffering, window manager and possible V-Sync). That is the right thing to do, because all those components are inherently intertwined with the editor, and in principle, the editor application has influence on all the parts…[He tested 9 editors]: Atom 1.1 / Eclipse 4.5.1 / Emacs 24.5.1 / Gedit 3.10.4 / GVim 7.4.712 / IntelliJ Idea CE 15.0 / Netbeans 8.1 / Notepad++ 6.8.4 / Sublime Text 3083.

    “Editor latency in MS Windows (text file) in milliseconds”

    Apparently, editors are not created equal (at least, from the standpoint of latency).
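
    Conceptually, Typometer’s measurement loop looks something like the following sketch (my reconstruction in Python, not Fatin’s actual code; the pyautogui library injects the OS input event and mss polls screen captures, so precision is bounded by the capture rate):

    ```python
    import time
    import mss          # fast cross-platform screen capture
    import pyautogui    # synthetic OS input events

    def key_to_screen_ms(region):
        """Inject a keystroke, then poll a screen region until its pixels
        change; the elapsed time approximates processing latency (treat
        results as rough, since capture speed caps the resolution)."""
        with mss.mss() as sct:
            before = sct.grab(region).rgb
            t0 = time.perf_counter()
            pyautogui.press("a")
            while sct.grab(region).rgb == before:
                pass  # busy-poll for a visual change
            return (time.perf_counter() - t0) * 1000

    # Point the region at the editor's caret (placeholder coordinates):
    region = {"left": 100, "top": 100, "width": 200, "height": 40}
    print(f"{key_to_screen_ms(region):.1f} ms")
    ```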

  75. ⁠, Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, Mark McGranaghan (Ink & Switch) (2019-04):

    [PDF version]

    Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.

    In this article we propose “local-first software”: a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.

    We survey existing approaches to data storage and sharing, ranging from email attachments to web apps to Firebase-backed mobile apps, and we examine the trade-offs of each. We look at Conflict-free Replicated Data Types (CRDTs): data structures that are multi-user from the ground up while also being fundamentally local and private. CRDTs have the potential to be a foundational technology for realizing local-first software.
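
    To give a flavor of what “multi-user from the ground up” means, here is a minimal sketch of one of the simplest CRDTs, a grow-only counter (a textbook construction, not code from the article): because merge is commutative, associative, and idempotent, replicas converge regardless of when or in what order they sync.

    ```python
    class GCounter:
        """Grow-only counter CRDT: each replica increments only its own slot;
        merging takes the per-replica maximum."""
        def __init__(self, replica_id):
            self.id = replica_id
            self.counts = {}                      # replica_id -> count

        def increment(self):
            self.counts[self.id] = self.counts.get(self.id, 0) + 1

        def value(self):
            return sum(self.counts.values())

        def merge(self, other):
            for rid, n in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), n)

    # Two devices edit offline, then sync in either order:
    a, b = GCounter("laptop"), GCounter("phone")
    a.increment(); a.increment(); b.increment()
    a.merge(b); b.merge(a)
    assert a.value() == b.value() == 3
    ```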

    We share some of our findings from developing local-first software prototypes at Ink & Switch over the course of several years. These experiments test the viability of CRDTs in practice, and explore the user interface challenges for this new data model. Lastly, we suggest some next steps for moving towards local-first software: for researchers, for app developers, and a startup opportunity for entrepreneurs.

    …in the cloud, ownership of data is vested in the servers, not the users, and so we became borrowers of our own data. The documents created in cloud apps are destined to disappear when the creators of those services cease to maintain them. Cloud services defy long-term preservation. No Wayback Machine can restore a sunsetted web application. The Internet Archive cannot preserve your Google Docs.

    In this article we explored a new way forward for software of the future. We have shown that it is possible for users to retain ownership and control of their data, while also benefiting from the features we associate with the cloud: seamless collaboration and access from anywhere. It is possible to get the best of both worlds.

    But more work is needed to realize the local-first approach in practice. Application developers can take incremental steps, such as improving offline support and making better use of on-device storage. Researchers can continue improving the algorithms, programming models, and user interfaces for local-first software. Entrepreneurs can develop foundational technologies such as CRDTs and peer-to-peer networking into mature products able to power the next generation of applications.

    • Motivation: collaboration and ownership

    • Seven ideals for local-first software

      • No spinners: your work at your fingertips
      • Your work is not trapped on one device
      • The network is optional
      • Seamless collaboration with your colleagues
      • The Long Now
      • Security and privacy by default
      • You retain ultimate ownership and control
    • Existing data storage and sharing models

      • How application architecture affects user experience
        • Files and email attachments
        • Web apps: Google Docs, Trello, Figma
        • Dropbox, Google Drive, Box, OneDrive, etc.
        • Git and GitHub
      • Developer infrastructure for building apps
        • Web app (thin client)
        • Mobile app with local storage (thick client)
        • Backend-as-a-Service: Firebase, CloudKit, Realm
        • CouchDB
    • Towards a better future

      • CRDTs as a foundational technology
      • Ink & Switch prototypes
        • Trello clone
        • Collaborative drawing
        • Media canvas
        • Findings
      • How you can help
        • For distributed systems and programming languages researchers
        • For Human-Computer Interaction (HCI) researchers
        • For practitioners
        • Call for startups
    • Conclusions

  76. 2019-kleppmann.pdf: ⁠, Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, Mark McGranaghan (Ink & Switch) (2019-10-23; cs):

    Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.

    In this article we propose local-first software, a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.

    We survey existing approaches to data storage and sharing, ranging from email attachments to web apps to Firebase-backed mobile apps, and we examine the trade-offs of each. We look at Conflict-free Replicated Data Types (CRDTs): data structures that are multi-user from the ground up while also being fundamentally local and private. CRDTs have the potential to be a foundational technology for realizing local-first software.

    We share some of our findings from developing local-first software prototypes at the Ink & Switch research lab over the course of several years. These experiments test the viability of CRDTs in practice, and explore the user interface challenges for this new data model. Lastly, we suggest some next steps for moving towards local-first software: for researchers, for app developers, and a startup opportunity for entrepreneurs.

    [Keywords: collaboration software, mobile computing, data ownership, CRDTs, peer-to-peer communication]

  77. ⁠, Norman Hardy (2002):

    [2002?] Short technology essay based on a 1968 paper (!) discussing a perennial pattern in computing history dubbed the ‘Wheel of Reincarnation’ for how old approaches inevitably reincarnate as the exciting new thing: shifts between ‘local’ and ‘remote’ computing resources, which are exemplified by repeated cycles in graphical display technologies from dumb ‘terminals’ which display only raw pixels to smart devices which interpret more complicated inputs like text or vectors or programming languages. These cycles are driven by cost, latency, architectural simplicity, and available computing power.

    The Wheel of Reincarnation paradigm has played out for computers as well, in shifts from local terminals attached to mainframes to PCs to smartphones to ‘cloud computing’.

  78. ⁠, T. H. Meyer, I. E. Sutherland (1968-06):

    The flexibility and power needed in the channel for a computer display are considered. To work efficiently, such a channel must have a sufficient number of instructions that it is best understood as a small processor rather than a powerful channel. It was found that successive improvements to the display processor design lie on a circular path; by making improvements, one can return to the original simple design, plus one new general-purpose computer for each trip around. The degree of physical separation between display and parent computer is a key factor in display processor design.

    [Keywords: display processor design, display system, computer graphics, graphic terminal, displays, graphics, display generator, display channel, display programming, graphical interaction, remote displays, Wheel of Reincarnation]

  79. Computers

  80. 1573-christoffelplantijn-bible-plantinpolyglot-genesis.jpg

  81. ⁠, Buttersafe (2012-04-24):

    The Floppy Toast

    The morning started off the way that every morning did
    A small meal and cup of joe to perk up drooping lids.

    But when it came to making toast,
    Today was not a copy.
    Even after it emerged,
    The toast was hella floppy.

    He tried and tried to make the toast
    Become what it should be,
    Yet every time the timer stopped
    It popped up floppily.

    He simply couldn’t understand this strange phenomenon,
    And so he tried to get some facts by calling up his mom.

    “That’s stupid weird”, his mother offered, somewhat groggily.
    “But you’re a grown up dude. Why don’t you solve this without me?”

    So he decided he would take
    His toast into the doctor,
    Who hopefully would find a way
    To make it much less softer.

    “Though floppy human body parts
    Can be firmed up with pills,
    Alas your loaf’s not stricken with
    Those special types of ills.”

    “It seems your toast must have been cursed,
    By magics dark and bleak,
    And so a magic answer to your problems
    You must seek.”

    “Mt. Crazy-Hot is home to dangers
    Quite antagonistic,
    But at the summit lives a very helpful
    Wise old mystic.”

    What choice had he except to scale this ancient no man’s land?
    His breakfast problems had already gotten out of hand.

    He tucked away his floppy toast,
    And started on his way.
    His muscles burned first from the climb,
    And then burned from the flames.

    With every step the fires licked
    His body up and down
    Which frankly did not nearly feel
    As sexy as it sounds.

    And then from the inferno rose a creature of the deep
    Who did not seem too happy to be woken from its sleep.

    Long time the manxome foe he pondered, brewing strategy
    Until the monster chose to force his hand more rapidly.

    An incendiary breath is
    Quite the motivator,
    To run like hell and save your clever
    Plans for some point later.

    And so he ran and ran and ran
    And ran and ran and ran
    And ran and ran and ran and ran
    Then ran into a man.

    “You must be the mystic with the culinary talents!
    You’ve got to help me sir! My morning toast hangs in the balance!”

    “No matter how I tried,
    Its floppiness was unimpeded.
    Were my methods lacking something
    That was absolutely needed?”

    “Though I am wise, perhaps the one with answers here is you.
    Look upon your meal again, you may see something new.”

    And lo, he did produce it
    For his enigmatic host
    But no longer was it floppy!
    ’Twas the perfect piece of toast!

    The oils he secreted from his skin
    Amidst the climb
    Had soaked into the slice
    As if a butter most divine!

    And after in that butter
    It had practically been drowned
    The scorching flames transformed it
    To a crispy, golden brown!

    “Perfection comes through hardship,
    A truth hidden from my eyes.
    This cursed toast revealed it.
    ’Twas a blessing in disguise!

    Despite the difficulties,
    Through my journey I stayed strong.
    It seems that the perfect toast
    Was inside me all along!”

    “I’m glad this lesson has awakened
    Wisdom from within.
    And next time you can simply think
    To plug the toaster in.”

  82. ⁠, Sarah Jeong (2018-07-16):

    On June 4th, a group of lawyers shuffled into a federal court in Manhattan to argue over two trademark registrations. The day’s hearing was the culmination of months of internet drama—furious blog posts, Twitter hashtags, YouTube videos, claims of doxxing, and death threats…They were gathered there that day because one self-published romance author was suing another for using the word “cocky” in her titles. And as absurd as this courtroom scene was—with a federal judge soberly examining the shirtless doctors on the cover of an “MFM Menage Romance”—it didn’t even begin to scratch the surface.

    The fight over #Cockygate, as it was branded online, emerged from the strange universe of Amazon Kindle Unlimited, where authors collaborate and compete to game Amazon’s algorithm. Trademark trolling is just the beginning: There are private chat groups, ebook exploits, conspiracies to seed hyper-specific trends like “Navy SEALs” and “mountain men”, and even a controversial sweepstakes in which a popular self-published author offered his readers a chance to win diamonds from Tiffany’s if they reviewed his new book…A genre that mostly features shiny, shirtless men on its covers and sells ebooks for 99¢ a pop might seem unserious. But at stake are revenues sometimes amounting to a million dollars a year, with some authors easily netting six figures a month. The top authors can drop $50,000 on a single ad campaign that will keep them in the charts—and see a worthwhile return on that investment.

    …According to Willink, over the course of RWA, Valderrama told her about certain marketing and sales strategies, which she claimed to handle for other authors. Valderrama allegedly said that she organized newsletter swaps, in which authors would promote each other’s books to their respective mailing lists. She also claimed to manage review teams—groups of assigned readers who were expected to leave reviews for books online. According to Willink, Valderrama’s authors often bought each other’s books to improve their ranking on the charts—something that she arranged, coordinating payments through her own account. Valderrama also told her that she used multiple email addresses to buy authors’ books on iBooks when they were trying to hit the USA Today list. When Valderrama invited Willink to a private chat group of romance authors, Willink learned that practices like chart gaming and newsletter placement selling—and much more—were surprisingly common.

    …In yet more screencaps, members discuss the mechanics of “book stuffing.” Book stuffing is a term that encompasses a wide range of methods for taking advantage of the Kindle Unlimited revenue structure. In Kindle Unlimited, readers pay $9.99 a month to read as many books as they want that are available through the KU program. This includes both popular mainstream titles like the Harry Potter series and self-published romances put out by authors like Crescent and Hopkins. Authors are paid according to pages read, creating incentives to produce massively inflated and strangely structured books. The more pages Amazon thinks have been read, the more money an author receives.

    …Book stuffing is particularly controversial because Amazon pays authors from a single communal pot. In other words, Kindle Unlimited is a zero-sum game. The more one author gets from Kindle Unlimited, the less the other authors get. The romance authors Willink was discovering didn’t go in for clumsy stuffings of automatic translations or HTML cruft; rather, they stuffed their books with ghostwritten content or repackaged, previously published material. In the latter case, the author will bait readers with promises of fresh content, like a new novella, at the end of the book. Every time a reader reads to the end of a 3,000-page book, the author earns almost 14 dollars. For titles that break into the top of the Kindle Unlimited charts, this trick can generate a fortune.
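
    The “almost 14 dollars” follows from the per-page payout structure; a back-of-the-envelope check (the ~$0.0045/page KENP rate is an assumption based on commonly reported figures of the period, not a number from the article):

    ```python
    # Kindle Unlimited pays per page read, out of a fixed communal pot,
    # so every stuffed page read is revenue taken from other authors.
    rate_per_page = 0.0045                     # assumed KENP rate, $/page
    pages = 3_000                              # a maximally stuffed book
    print(f"${rate_per_page * pages:.2f}")     # ~$13.50, 'almost 14 dollars'
    ```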

  83. ⁠, Clayton Atreus (2008-02-24):

    [Paper/suicide note by a philosophy graduate who went on a motorcycle tour of Mexico and ran into a goat, instantly becoming a paraplegic. Atreus discusses how paraplegia robs him of the ability to do almost everything he valued in life, from running to motorcycling to sex, while burdening him down with dead weight equivalent to hundreds of pounds, which makes the simplest action, like getting out of a car, take minutes or hours, radically shortening his effective days. He is an ambulatory corpse, “two arms and a head”. Atreus discusses in detail the existential horror of his condition, from complete lack of bowel control requiring him to constantly dig his own feces out of his anus, to being trapped in a wheelchair larger than a washing machine, to the cruelty of well-intentioned encouragement, to social alienation and his constant agonized awareness of everything he has lost. If the first question of philosophy is whether to commit suicide, Atreus finds that for him, the answer is “yes”. The paper/book concludes with his description of stabbing himself and slowly bleeding to death.]

    This book is born of pain. I wrote it out of compulsion during the most hellish time of my life. Writing it hurt me and was at times extremely unpleasant. Is the book my death-rattle or the sound of me screaming inside of my cage? Does its tone tell you I am angry or merely seeking a psychological expedient against the madness I see around me? The book is my creation but is also in many ways foreign to me for I am living in a foreign land. Most generally perhaps it is just the thoughts that passed through my head over the twenty months I spent moving toward death. I am certainly not a man who is at peace with his life, but on the contrary I despise it as I have never before despised anything. Who can sort it all out? Being imprisoned in the nightmarish cage of paraplegia has done all manner of violence to the deepest parts of me. Still, I have not gone mad. I am no literary genius and don’t expect everything I say to be understood, but if you would like to know what my experiences have been like, and what I am like, I will try my best to show you.

    What do I think of this book? I have no affection for it. I find it odious and unattractive and am very saddened that I wrote it. But it is what I had to say. It took on a life of its own and when I now step back and look at what I created I regard it with distaste. If I could, I would put all of these horrible thoughts in a box, seal it forever, then go out and live life. I would run in the sun, enjoy my freedom, and revel in myself. But that’s the point. I cannot go out and live life because this is not life. So instead I speak to you from the place I now occupy, between life and death.

    …Imagine a man cut off a few inches below the armpits. Neglect for a moment questions concerning how he eliminates waste and so forth, and just assume that the site of the “amputation” is, to borrow from Gogol, “as uniform as a newly fried pancake”. This man would be vastly, immensely better off than me. If you don’t know who Johnny Eck is, he had a role in the 1932 film Freaks. He was the guy who was essentially a torso with arms. He walked on his hands. How fortunate he was compared to me may not register right away, because the illusion I mentioned above would probably make you find Johnny Eck’s condition far more shocking than mine. But the truth is that mine is much more horrible than his, barring whatever social “advantages” the illusion of being whole might confer on me. The other day I saw a picture of a woman missing both legs. They were cut off mid-thigh. I thought that if only I was like her perhaps my life would be bearable. She was, in my opinion, better off than the pancake man, who is beyond any doubt far better off than me. One man said to me, “At least you didn’t lose your legs.” No, I did lose my legs, and my penis, and my pelvis. Let’s get something very clear about the difference between paraplegics and double-leg amputees. If tomorrow every paraplegic woke up as a double-leg amputee, the Earth itself would quiver with ecstasy from the collective bursting forth of joyous emotion. Tears of the most exquisitely overwhelming relief and happiness would stream down the cheeks of former paraplegics the world over. My wording here is deliberate. It’s no exaggeration. Losing both legs is bad, but paraplegia is ghoulishly, nightmarishly worse.

    Part of what I wanted in desiring to die in the company of those I loved was to reassure them and perhaps give them courage to face death well. That was something I really wanted to give to them and I’m sorry I can only do it with these words. I was driven almost mad by all of the things many other people said about paraplegia, suicide, and what was still possible in my condition. I hope everyone understands how all of that affected the tone of what I wrote. I was so frustrated with all of it, I thought it was so insane. But I only wanted to break free of it all and say what I felt. I felt like it stifled me so horribly.

    I cut some more and the blood is flowing well again. I’m surprised how long it is taking me to even feel anything. I thought I was dizzy but I’m not sure I am now. It’s 8:51 pm. I thought I would get cold but I’m not cold either, I’m actually hot but that’s probably the two sweaters. Starting to feel a little badly. Sweating, a little light-headed.

    I’m going to go now, done writing. Goodbye everyone.

  84. https://advrider.com/f/threads/seattle-to-argentina-on-a-klr650.136505/

  85. https://old.reddit.com/r/TheMotte/comments/e79pzs/2_arms_1_head_not_nsfw/

  86. ⁠, Michael Erard (Atlantic) (2019-01-16):

    …“There’s so much so in sorrow”, he said at one point. “Let me down from here”, he said at another. “I’ve lost my modality.” To the surprise of his family members, the lifelong atheist also began hallucinating angels and complaining about the crowded room—even though no one was there.

    Felix’s 53-year-old daughter, Lisa Smartt, kept track of his utterances, writing them down as she sat at his bedside in those final days. Smartt majored in linguistics at UC Berkeley in the 1980s and built a career teaching adults to read and write. Transcribing Felix’s ramblings was a sort of coping mechanism for her, she says…eventually she wrote a book, Words on the Threshold⁠, published in early 2017, about the linguistic patterns in 2,000 utterances from 181 dying people, including her father. Despite the limitations of this book, it’s unique—it’s the only published work I could find when I tried to satisfy my curiosity about how people really talk when they die.

    …Many people die in such silence, particularly if they have advanced dementia or Alzheimer’s that robbed them of language years earlier. For those who do speak, it seems their vernacular is often banal. From a doctor I heard that people often say, “Oh fuck, oh fuck.” Often it’s the names of wives, husbands, children. “A nurse from the hospice told me that the last words of dying men often resembled each other”, wrote Hajo Schumacher in a September essay in Der Spiegel⁠. “Almost everyone is calling for ‘Mommy’ or ‘Mama’ with the last breath.”…Delirium is so frequent then, wrote the New Zealand psychiatrist Sandy McLeod, that “it may even be regarded as exceptional for patients to remain mentally clear throughout the final stages of malignant illness.” About half of people who recover from postoperative delirium recall the disorienting, fearful experience.

    …He also repeated words and phrases, often ones that made no sense. “The green dimension! The green dimension!” (Repetition is common in the speech of people with dementia and also those who are delirious.) Smartt found that repetitions often expressed themes such as gratitude and resistance to death. But there were also unexpected motifs, such as circles, numbers, and motion. “I’ve got to get off, get off! Off of this life”, Felix had said…In Final Gifts, the hospice nurses Callanan and Kelley note that “the dying often use the metaphor of travel to alert those around them that it is time for them to die.” They quote a 17-year-old, dying of cancer, distraught because she can’t find the map. “If I could find the map, I could go home! Where’s the map? I want to go home!”

  87. ⁠, Scott Alexander (2013-07-17):

    [Essay by a psychiatrist on the care of the dying in American healthcare: people die agonizing, slow, expensive deaths, prolonged by modern medicine, deprived of all dignity and joy by disease and decay. There is little noble about it.]

    You will become bedridden, unable to walk or even to turn yourself over. You will become completely dependent on nurse assistants to intermittently shift your position to avoid pressure ulcers. When they inevitably slip up, your skin develops huge incurable sores that can sometimes erode all the way to the bone, and which are perpetually infected with foul-smelling bacteria. Your limbs will become practically vestigial organs, like the appendix, and when your vascular disease gets too bad, one or more will be amputated, sacrifices to save the host. Urinary and fecal continence disappear somewhere in the process, so you’re either connected to catheters or else spend a while every day lying in a puddle of your own wastes until the nurses can help you out…

    Somewhere in the process your mind very quietly and without fanfare gives up the ghost. It starts with forgetting a couple of little things, and progresses…They don’t remember their own names, they don’t know where they are or what they’re doing there, and they think it’s the 1930s or the 1950s or don’t even have a concept of years at all. When you’re alert and oriented “x0”, the world becomes this terrifying place where you are stuck in some kind of bed and can’t move and people are sticking you with very large needles and forcing tubes down your throat and you have no idea why or what’s going on.

    So of course you start screaming and trying to attack people and trying to pull the tubes and IV lines out. Every morning when I come in to work I have to check the nurses’ notes for what happened the previous night, and every morning a couple of my patients have tried to pull all of their tubes and lines out. If it’s especially bad they try to attack the staff, and although the extremely elderly are really bad at attacking people this is nevertheless Unacceptable Behavior and they have to be restrained ie tied down to the bed. A presumably more humane alternative sometimes used instead or in addition is to just drug you up on all of those old-timey psychiatric medications that actual psychiatrists don’t use anymore because of their bad reputation…Nevertheless, this is the way many of my patients die. Old, limbless, bedridden, ulcerated, in a puddle of waste, gasping for breath, loopy on morphine, hopelessly demented, in a sterile hospital room with someone from a volunteer program who just met them sitting by their bed.

    …I work in a Catholic hospital. People here say the phrase “culture of life” a lot, as in “we need to cultivate a culture of life.” They say it almost as often as they say “patient-centered”. At my hospital orientation, a whole bunch of nuns and executives and people like that got up and told us how we had to do our part to “cultivate a culture of life.”

    And now every time I hear that phrase I want to scream. 21st century American hospitals do not need to “cultivate a culture of life”. We have enough life. We have life up the wazoo. We have more life than we know what to do with. We have life far beyond the point where it becomes a sick caricature of itself. We prolong life until it becomes a sickness, an abomination, a miserable and pathetic flight from death that saps out and mocks everything that made life desirable in the first place. 21st century American hospitals need to cultivate a culture of life the same way that Newcastle needs to cultivate a culture of coal, the same way a man who is burning to death needs to cultivate a culture of fire.

    And so every time I hear that phrase I want to scream, or if I cannot scream, to find some book of hospital poetry that really is a book of hospital poetry and shove it at them, make them read it until they understand. There is no such book, so I hope it will be acceptable if I just rip off of Wilfred Owen directly:

    If in some smothering dreams you too could pace
    Behind the gurney that we flung him in,
    And watch the white eyes writhing in his face,
    His hanging face, like a devil’s sack of sin;
    If you could hear, at every jolt, the blood
    Come gargling from the froth-corrupted lungs,
    Obscene with cancer, bitter with the cud
    Of vile, incurable sores on innocent tongues
    My friend, you would not so pontificate
    To reasoners beset by moral strife
    The old lie: we must try to cultivate
    A culture of life.

  88. ⁠, Wisława Szymborska (1976):

    The buzzard has nothing to fault himself with.
    Scruples are alien to the black panther.
    Piranhas do not doubt the rightness of their actions.
    The rattlesnake approves of himself without reservations.

    The self-critical jackal does not exist.
    The locust, alligator, trichina, horsefly
    live as they live and are glad of it.

    The killer whale’s heart weighs one hundred kilos
    but in other respects it is light.

    There is nothing more animal-like
    than a clear conscience
    on the third planet of the Sun.

  89. ⁠, Mary Oliver (1990):

    The kingfisher rises out of the black wave
    like a blue flower, in his beak
    he carries a silver leaf. I think this is
    the prettiest world—so long as you don’t mind
    a little dying, how could there be a day in your whole life
    that doesn’t have its splash of happiness?
    There are more fish than there are leaves
    on a thousand trees, and anyway the kingfisher
    wasn’t born to think about it, or anything else.
    When the wave snaps shut over his blue head, the water
    remains water—hunger is the only story
    he has ever heard in his life that he could believe.
    I don’t say he’s right. Neither
    do I say he’s wrong. Religiously he swallows the silver leaf
    with its broken red river, and with a rough and easy cry
    I couldn’t rouse out of my thoughtful body
    if my life depended on it, he swings back
    over the bright sea to do the same thing, to do it
    (as I long to do something, anything) perfectly.

  90. ⁠, Czesław Miłosz (1984):

    Let us not talk philosophy, drop it, Jeanne.
    So many words, so much paper, who can stand it.
    I told you the truth about my distancing myself.
    I’ve stopped worrying about my misshapen life.
    It was no better and no worse than the usual human tragedies.

    For over thirty years we have been waging our dispute
    As we do now, on the island under the skies of the tropics.
    We flee a downpour, in an instant the bright sun again,
    And I grow dumb, dazzled by the emerald essence of the leaves.

    We submerge in foam at the line of the surf,
    We swim far, to where the horizon is a tangle of banana bush,
    With little windmills of palms. And I am under accusation:
    That I am not up to my oeuvre,
    That I do not demand enough from myself,
    As I could have learned from Karl Jaspers,
    That my scorn for the opinions of this age grows slack.

    I roll on a wave and look at white clouds.

    You are right, Jeanne, I don’t know how to care about the salvation of my soul.
    Some are called, others manage as well as they can.
    I accept it, what has befallen me is just.
    I don’t pretend to the dignity of a wise old age.
    Untranslatable into words, I chose my home in what is now,
    In things of this world, which exist and, for that reason, delight us:
    Nakedness of women on the beach, coppery cones of their breasts,
    Hibiscus, alamanda, a red lily, devouring
    With my eyes, lips, tongue, the guava juice, the juice of la prune de Cythère,
    Rum with ice and syrup, lianas-orchids
    In a rain forest, where trees stand on the stilts of their roots.

    Death, you say, mine and yours, closer and closer,
    We suffered and this poor earth was not enough.
    The purple-black earth of vegetable gardens
    Will be here, either looked at or not.
    The sea, as today, will breathe from its depths.
    Growing small, I disappear in the immense, more and more free.

    Guadeloupe

  91. ⁠, Moxie Marlinspike ():

    “I made a series of mistakes that culminated in the worst sailing accident of my life, and almost took me to the bottom of the ocean.”

    [One fall evening after work, Marlinspike and a friend made a simple plan to sail a 15-foot catamaran out 600 feet into the San Francisco Bay, where they’d drop anchor and row back in a smaller boat, leaving the sailboat to wait for their next adventure. (Anarchist sailors don’t like to pay dockage fees.) Marlinspike headed out into the bay on the catamaran with his friend following in a rowboat. Only after Marlinspike had passed the pier did he realize the wind was blowing at a treacherous 30 miles an hour. He decided to turn back but discovered that he’d misrigged the craft and had to fix his mistake. As the sun sank toward the horizon, he shouted to his friend that they should give up and return to shore, and the friend rowed back to safety.

    Then, without warning, the wind gusted. The catamaran flipped, throwing Marlinspike into the ice-cold water. “The suddenness of it was unbelievable, as if I was on a tiny model made of paper which someone had simply flicked with their finger”, he would later write in a blog post about the experience. Soon the boat was fully upside down, pinned in place by the wind. Marlinspike tried to swim for shore. But the pier was too far away, the waves too strong, and he could feel his body succumbing to hypothermia, blackness creeping into the edges of his vision. He headed back to the overturned boat. Alone now in the dark, he clung to the hull, took stock of the last hour’s events, and realized, with slow and lonely certainty, that he was very likely going to die.

    When a tugboat finally chanced upon his soaked and frozen form, he was nearly unconscious and had to be towed up with a rope. When he arrived at the hospital, Marlinspike says, the nurses told him his temperature was so low their digital thermometers couldn’t register it. As he recovered over the next days, he had the sort of realization that sometimes results from a near-death experience. “It definitely sharpened my focus”, he says of the incident. “It made me question what I was doing with my life.”

    Marlinspike’s time at Twitter had given him an ambitious sense of scale: He was determined to encrypt core chunks of the Internet. A normal person might have quit sailing. Instead, Marlinspike quit Twitter. A year and a day after he had started, he walked away from over $1 million in company stock.]

  92. ⁠, Edward Tufte (1997):

    Visual Explanations: Images and Quantities, Evidence and Narrative [Tufte #3] is about pictures of verbs, the representation of mechanism and motion, process and dynamics, causes and effects, explanation and narrative. Practical applications and examples include statistical graphics, charts for making important decisions in engineering and medicine, technical manuals, diagrams, design of computer interfaces and websites and on-line manuals, animations and scientific visualizations, techniques for talks, and design strategies for enhancing the rate of information transfer in print, presentations, and computer screens. The use of visual evidence in deciding to launch the space shuttle Challenger is discussed in careful detail. Video snapshots show redesigns of a supercomputer animation of a thunderstorm. The book is designed and printed to the highest standards, with luscious color throughout and four built-in flaps for showing motion and before/​​​​after effects.

    158 pages; ISBN 1930824157

  93. Red

  94. Sidenotes

  95. 1976-rosenthal-experimenterexpectancyeffects.pdf: ⁠, Robert Rosenthal (1976; statistics/bias):

    Within the context of a general discussion of the unintended effects of scientists on the results of their research, this work reported on the growing evidence that the hypothesis of the behavioral scientist could come to serve as self-fulfilling prophecy, by means of subtle processes of communication between the experimenter and the human or animal research subject. [The Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI) indicate that the book has been cited over 740 times since 1966 [as of 1979].] —“Citation Classic”

    [Enlarged Edition, expanded with discussion of the Pygmalion effect etc.: ISBN 0-470-01391-5]

  96. Books#experimenter-effects-in-behavioral-research-rosenthal-1976

  97. http://garfield.library.upenn.edu/classics1979/A1979HZ32400001.pdf

  98. https://www.amazon.com/String-Beads-Complete-Princess-Translations/dp/B005ZOI7Q0

  99. Books#string-of-beads-complete-poems-of-princess-shikishi-shikishi-1993

  100. Anime#rurouni-kenshin-2014

  101. Movies#the-magic-flute

  102. MLP

  103. Anime#mlp-fim

  104. Anime#owarimonogatari

  105. https://www.youtube.com/watch?v=ko0fT6CEWag

  106. https://soundcloud.com/mihonymusic/mihony-cute-swing