“popups.js”, (2019-08-21; ):
popups.js parses an HTML document and looks for <a> links which have the docMetadata class and the data-popup-abstract attributes. (These attributes are expected to be populated already by the HTML document’s compiler; however, they can also be filled in dynamically. See wikipedia-popups.js for an example of a library which handles Wikipedia links only, dynamically, on page load.)
“wikipedia-popups.js”, (2019-07-29; ):
Summaries for Wikipedia links are fetched from the Wikipedia REST API (https://en.wikipedia.org/api/rest_v1/). All summaries are loaded on page load so as to have minimal latency (on-mouseover summary loading is noticeably slow). If a page has many Wikipedia links on it, this can result in quite a few requests; the summaries can instead be provided statically, encoded into data attributes. (This also allows encoding summaries/previews of arbitrary websites by whatever is compiling the HTML.) See /static/js/popups.js for a JS library which takes that approach instead.
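The link-selection step described above amounts to a simple attribute query. A minimal sketch, in Python’s html.parser as a stand-in for the JS DOM query (the class and attribute names are taken from the description above):

```python
from html.parser import HTMLParser

class PopupLinkFinder(HTMLParser):
    """Collect <a> links carrying the docMetadata class and a
    data-popup-abstract attribute, as popups.js is described to do."""
    def __init__(self):
        super().__init__()
        self.popup_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        classes = (a.get("class") or "").split()
        if "docMetadata" in classes and "data-popup-abstract" in a:
            self.popup_links.append((a.get("href"), a["data-popup-abstract"]))

finder = PopupLinkFinder()
finder.feed('<a class="docMetadata" href="/x" data-popup-abstract="A summary.">X</a>'
            '<a href="/y">Y</a>')
print(finder.popup_links)  # [('/x', 'A summary.')]
```

A real implementation would run one querySelectorAll and attach mouseover handlers, but the filtering condition is the same.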
Twin studies and other analyses of inheritance of sexual orientation in humans have indicated that same-sex sexual behavior has a genetic component. Previous searches for the specific genes involved have been underpowered and thus unable to detect genetic signals. Ganna et al. perform a genome-wide association study on 493,001 participants from the United States, the United Kingdom, and Sweden to study genes associated with sexual orientation (see the Perspective by Mills). They find multiple loci implicated in same-sex sexual behavior, indicating that, like other behavioral traits, nonheterosexual behavior is polygenic.
Introduction: Across human societies and in both sexes, some 2 to 10% of individuals report engaging in sex with same-sex partners, either exclusively or in addition to sex with opposite-sex partners. Twin and family studies have shown that same-sex sexual behavior is partly genetically influenced, but previous searches for the specific genes involved have been underpowered to detect effect sizes realistic for complex traits.
Rationale: For the first time, new large-scale datasets afford sufficient statistical power to identify genetic variants associated with same-sex sexual behavior (ever versus never had a same-sex partner), estimate the proportion of variation in the trait accounted for by all variants in aggregate, estimate the genetic correlation of same-sex sexual behavior with other traits, and probe the biology and complexity of the trait. To these ends, we performed genome-wide association discovery analyses on 477,522 individuals from the United Kingdom and United States, replication analyses in 15,142 individuals from the United States and Sweden, and follow-up analyses using different aspects of sexual preference.
Results: In the discovery samples (UK Biobank and 23andMe), 5 autosomal loci were statistically-significantly associated with same-sex sexual behavior. Follow-up of these loci suggested links to biological pathways that involve sex hormone regulation and olfaction. 3 of the loci were statistically-significant in a meta-analysis of smaller, independent replication samples. Although only a few loci passed the stringent statistical corrections for genome-wide multiple testing and were replicated in other samples, our analyses show that many loci underlie same-sex sexual behavior in both sexes. In aggregate, all tested genetic variants accounted for 8 to 25% of variation in male and female same-sex sexual behavior, and the genetic influences were positively but imperfectly correlated between the sexes [genetic correlation coefficient (rg) = 0.63; 95% confidence interval, 0.48 to 0.78]. These aggregate genetic influences partly overlapped with those on a variety of other traits, including externalizing behaviors such as smoking, cannabis use, risk-taking, and the personality trait “openness to experience.” Additional analyses suggested that sexual behavior, attraction, identity, and fantasies are influenced by a similar set of genetic variants (rg > 0.83); however, the genetic effects that differentiate heterosexual from same-sex sexual behavior are not the same as those that differ among nonheterosexuals with lower versus higher proportions of same-sex partners, which suggests that there is no single continuum from opposite-sex to same-sex preference.
Conclusion: Same-sex sexual behavior is influenced by not one or a few genes but many. Overlap with genetic influences on other traits provides insights into the underlying biology of same-sex sexual behavior, and analysis of different aspects of sexual preference underscores its complexity and calls into question the validity of bipolar continuum measures such as the Kinsey scale. Nevertheless, many uncertainties remain to be explored, including how sociocultural influences on sexual preference might interact with genetic influences. To help communicate our study to the broader public, we organized workshops in which representatives of the public, activists, and researchers discussed the rationale, results, and implications of our study.
2019-lopez.pdf: “Genomic Evidence for Local Adaptation of Hunter-Gatherers to the African Rainforest”, (2019-08-08; ):
- A strong selective sweep at TRPS1 occurred in African rainforest hunter-gatherers
- Pleiotropic height genes lead to polygenic selection signals for reproductive age
- Pathogen-driven selection, mostly viral, has been pervasive among hunter-gatherers
- Post-admixture selection has maintained adaptive variation in hunter-gatherers
Summary: African rainforests support exceptionally high biodiversity and host the world’s largest number of active hunter-gatherers [1, 2, 3]. The genetic history of African rainforest hunter-gatherers and neighboring farmers is characterized by an ancient divergence more than 100,000 years ago, together with recent population collapses and expansions, respectively [4, 5, 6, 7, 8, 9, 10, 11, 12]. While the demographic past of rainforest hunter-gatherers has been deeply characterized, important aspects of their history of genetic adaptation remain unclear. Here, we investigated how these groups have adapted—through classic selective sweeps, polygenic adaptation, and selection since admixture—to the challenging rainforest environments. To do so, we analyzed a combined dataset of 566 high-coverage exomes, including 266 newly generated exomes, from 14 populations of rainforest hunter-gatherers and farmers, together with 40 newly generated, low-coverage genomes. We find evidence for a strong, shared selective sweep among all hunter-gatherer groups in the regulatory region of TRPS1—primarily involved in morphological traits. We detect strong signals of polygenic adaptation for height and life history traits such as reproductive age; however, the latter appear to result from pervasive pleiotropy of height-associated genes. Furthermore, polygenic adaptation signals for functions related to responses of mast cells to allergens and microbes, the IL-2 signaling pathway, and host interactions with viruses support a history of pathogen-driven selection in the rainforest. Finally, we find that genes involved in heart and bone development and immune responses are enriched in both selection signals and local hunter-gatherer ancestry in admixed populations, suggesting that selection has maintained adaptive variation in the face of recent gene flow from farmers.
[Keywords: natural selection, genetic adaptation, rainforest, height, immunity, hunter-gatherers, admixture, Africa, positive selection, polygenic adaptation]
Five Russian couples who are deaf want to try the CRISPR gene-editing technique so they can have a biological child who can hear, biologist Denis Rebrikov has told New Scientist. He plans to apply to the relevant Russian authorities for permission in “a couple of weeks”…Both would-be parents in each couple have a recessive form of deafness, meaning that all their children would normally inherit the same condition. While the vast majority of genetic diseases can be prevented by screening IVF embryos before implantation, with no need for gene-editing, this is not an option for these couples. Several reports have suggested that—if it can be done safely—editing the genes of babies might be justified in this kind of situation…Now Rebrikov has told New Scientist that he also wants to prevent children inheriting a form of deafness caused by mutations in the GJB2 gene. In western Siberia, many people have a missing DNA letter in position 35 of the GJB2 gene. Having one copy has no effect, but those who inherit this mutation from both parents never develop the ability to hear. Rebrikov has found five couples in which both would-be parents are deaf because of this mutation and don’t want their children to be deaf too. So he plans to use CRISPR to correct this mutation in IVF embryos from these couples. All these embryos will have the mutation in both copies of the GJB2 gene—correcting one copy using a method known as homology-directed repair will prevent deafness. “Technically, it is achievable”, says Burgio.
Can humans get arbitrarily capable reinforcement learning (RL) agents to do their bidding? Or will sufficiently capable RL agents always find ways to bypass their intended objectives by shortcutting their reward signal? This question impacts how far RL can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we study when an RL agent has an instrumental goal to tamper with its reward process, and describe design principles that prevent instrumental goals for two different types of reward tampering (reward function tampering and RF-input tampering). Combined, the design principles can prevent both types of reward tampering from being instrumental goals. The analysis benefits from causal influence diagrams to provide intuitive yet precise formalizations.
“Designing agent incentives to avoid reward tampering”, (2019-08-14):
From an AI safety perspective, having a clear design principle and a crisp characterization of what problem it solves means that we don’t have to guess which agents are safe. In this post and paper we describe how a design principle called ‘current-RF optimization’ avoids the reward function tampering problem.
…One way to prevent the agent from tampering with the reward function is to isolate or encrypt the reward function. However, we do not expect such solutions to scale indefinitely with our agent’s capabilities, as a sufficiently capable agent may find ways around most defenses. In our new paper, we describe a more principled way to fix the reward tampering problem. Rather than trying to protect the reward function, we change the agent’s incentives for tampering with it.
The fix relies on a slight change to the RL framework that gives the agent query access to the reward function. In the rocks and diamonds environment, this can be done by specifying to the agent how the purple nodes describe the reward function.
Using query access to the reward function, we can design a model-based agent that uses the current reward function to evaluate rollouts of potential policies (a current-RF agent, for short). For example, in the rocks and diamonds environment, a current-RF agent will look at the current reward description, and at time 1 see that it should collect diamonds. This is the criterion by which it will choose its first action, which will be going upwards towards the diamond. Note that the reward description is still changeable, just as before. Still, the current-RF agent will not use the reward-tampering possibility, because it is focused on satisfying the current reward description.
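The distinction can be sketched in a toy two-action setting (a hypothetical environment, not the paper’s rocks-and-diamonds implementation): a naive agent scores the tampering rollout under the reward function as it would be after tampering, while the current-RF agent scores every rollout with the reward function as it is now.

```python
def rollout_return(action, reward_spec):
    """Return of a short rollout, evaluated under one fixed reward function."""
    if action == "tamper":
        return reward_spec["tampered"]   # tampering only pays if the spec says so
    return 2 * reward_spec["collect"]    # collect a diamond on both steps

current_spec = {"collect": 1, "tampered": 0}   # reward function as it is now
future_spec  = {"collect": 1, "tampered": 10}  # as it would be after tampering

# Naive agent: evaluates the tampering rollout with the post-tampering reward
# function, so tampering looks instrumentally valuable.
naive = max(["collect", "tamper"],
            key=lambda a: rollout_return(a, future_spec if a == "tamper" else current_spec))

# Current-RF agent: evaluates every rollout with the *current* reward function,
# so tampering has no instrumental value.
current_rf = max(["collect", "tamper"],
                 key=lambda a: rollout_return(a, current_spec))

print(naive, current_rf)  # tamper collect
```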
“Has dynamic programming improved decision making?”, (2018-08-22):
Dynamic programming (DP) is an extremely powerful tool for solving a wide class of sequential decision making problems under uncertainty. In principle, it enables us to compute optimal decision rules that specify the best possible decision to take in any given situation. This article reviews developments in DP and contrasts its revolutionary impact on economics, operations research, engineering, and artificial intelligence, with the comparative paucity of real world applications where DP is actually used to improve decision making. I discuss the literature on numerical solution of DPs and its connection to the literature on reinforcement learning (RL) and artificial intelligence (AI).
Despite amazing, highly publicized successes of these algorithms that result in superhuman levels of performance in board games such as chess or Go, I am not aware of comparably successful applications of DP for helping individuals and firms to solve real-world problems. I point to the fuzziness of many real world decision problems and the difficulty in mathematically formulating and modeling them as key obstacles to wider application of DP to improve decision making. Nevertheless, I provide several success stories where DP has demonstrably improved decision making and discuss a number of other examples where it seems likely that the application of DP could have substantial value.
I conclude that ‘applied DP’ offers substantial promise for economic policy making if economists can let go of the empirically untenable assumption of unbounded rationality and try to tackle the challenging decision problems faced every day by individuals and firms.
[Keywords: actor-critic algorithms, Alpha Zero, approximate dynamic programming, artificial intelligence, behavioral economics, Bellman equation, bounded rationality, curse of dimensionality, computational complexity, decision rules, dynamic pricing, dynamic programming, employee compensation, Herbert Simon, fleet sizing, identification problem, individual and firm behavior life-cycle problem, locomotive allocation, machine learning, Markov decision processes, mental models, model-free learning, neural networks, neurodynamic programming, offline versus online training, optimal inventory management, optimal replacement, optimal search, principle of decomposition, Q-learning, revenue management, real-time dynamic programming, reinforcement learning, Richard Bellman, structural econometrics, supervised versus unsupervised learning]
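The Bellman equation at the heart of DP can be illustrated with a minimal value-iteration sketch on a toy optimal-replacement problem (one of the applications in Rust’s keyword list; the payoffs and costs below are made up for illustration):

```python
# Value iteration for a machine-replacement MDP: a machine of age a yields
# profit 10 - a per period; replacing resets the age to 0 at cost 6.
# The Bellman equation is V(a) = max{ keep, replace }.
AGES, BETA, COST = range(8), 0.9, 6.0

def profit(age):
    return 10.0 - age

V = {a: 0.0 for a in AGES}
for _ in range(500):  # iterate the Bellman operator to (near) convergence
    V = {a: max(profit(a) + BETA * V[min(a + 1, 7)],     # keep the machine
                profit(0) - COST + BETA * V[1])          # replace it
         for a in AGES}

def best(a):  # the optimal decision rule implied by V
    keep = profit(a) + BETA * V[min(a + 1, 7)]
    replace = profit(0) - COST + BETA * V[1]
    return "replace" if replace > keep else "keep"

policy = {a: best(a) for a in AGES}
print(policy[0], policy[7])  # keep replace
```

The computed decision rule is a threshold: keep young machines, replace old ones — exactly the kind of “best possible decision in any given situation” the review describes.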
Larger language models are dramatically more useful for NLP tasks such as article completion, question answering, and dialog systems. Training the largest neural language model has recently been the best way to advance the state of the art in NLP applications. Two recent papers, BERT and GPT-2, demonstrate the benefits of large scale language modeling. Both papers leverage advances in compute and available text corpora to substantially surpass state of the art performance in natural language understanding, modeling, and generation. Training these models requires hundreds of exaflops of compute and clever memory management to trade recomputation for a reduced memory footprint. However, for very large models beyond a billion parameters, the memory on a single GPU is not enough to fit the model along with the parameters needed for training, requiring model parallelism to split the parameters across multiple GPUs. Several approaches to model parallelism exist, but they are difficult to use, either because they rely on custom compilers, or because they scale poorly or require changes to the optimizer.
In this work, we implement a simple and efficient model parallel approach by making only a few targeted modifications to existing PyTorch transformer implementations. Our code is written in native Python, leverages mixed precision training, and utilizes the NCCL library for communication between GPUs. We showcase this approach by training an 8.3 billion parameter transformer language model with 8-way model parallelism and 64-way data parallelism on 512 GPUs, making it the largest transformer-based language model ever trained, at 24× the size of BERT and 5.6× the size of GPT-2. We have published the code that implements this approach at our GitHub repository.
Our experiments are conducted on NVIDIA’s DGX SuperPOD. Without model parallelism, we can fit a baseline model of 1.2B parameters on a single V100 32GB GPU, and sustain 39 TeraFLOPS during the overall training process, which is 30% of the theoretical peak FLOPS for a single GPU in a DGX2-H server. Scaling the model to 8.3 billion parameters on 512 GPUs with 8-way model parallelism, we achieved up to 15.1 PetaFLOPS sustained performance over the entire application and reached 76% scaling efficiency compared to the single-GPU case.
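The core of the 8-way model parallelism is splitting each weight matrix across GPUs so partial results can be computed independently and then gathered. A single-machine sketch of that column-parallel multiply (NumPy slices standing in for GPUs; the real implementation exchanges results over NCCL):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))         # a batch of activations
W = rng.standard_normal((16, 32))        # the full weight matrix

n_workers = 8                            # 8-way model parallelism, as in the paper
shards = np.split(W, n_workers, axis=1)  # each "GPU" stores 32/8 = 4 columns
partials = [x @ w for w in shards]       # computed independently per worker
y_parallel = np.concatenate(partials, axis=1)  # the all-gather step

assert np.allclose(y_parallel, x @ W)    # identical to the unsharded computation
```

Note also that the parallelism dimensions multiply out: 8-way model parallelism × 64-way data parallelism = 512 GPUs.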
“GPT-2: 6-Month Follow-Up”, (2019-08-20):
We’re releasing the 774 million parameter GPT-2 language model after the release of our small 124M model in February, staged release of our medium 355M model in May, and subsequent research with partners and the AI community into the model’s potential for misuse and societal benefit. We’re also releasing an open-source legal agreement to make it easier for organizations to initiate model-sharing partnerships with each other, and are publishing a technical report about our experience in coordinating with the wider AI research community on publication norms.
…Research from these partners will factor into our future release decisions, as will observing how the 774M model is used, and discussing language models with researchers and policymakers to understand the considerations around larger models. As part of our staged release strategy, our current plan is to release the 1558M parameter model in a few months, but it’s plausible that findings from a partner, or malicious usage of our 774M model, could change this.
“OpenGPT-2: We Replicated GPT-2-1.5b Because You Can Too”, (2019-08-22):
Recently, large language models like BERT, XLNet, GPT-2, and GROVER have demonstrated impressive results in generating new content and on multiple tasks. Since OpenAI has not released their largest model [GPT-2-1.5b] at this time, we seek to replicate the model to allow others to build on our pretrained model and further improve it. You can access the model and generate text using our Google Colab.
…We demonstrate that many of the results of the paper can be replicated by two master’s students…Because our replication efforts are not unique, and large language models are the current most effective means of countering generated text, we believe releasing our model is a reasonable first step towards countering the potential future abuse of these kinds of models.
We base our implementation off of the GROVER model and modify their codebase to match the language modeling training objective of GPT-2. Since their model was trained on a similarly large corpus, much of the code and hyperparameters proved readily reusable. We did not substantially change the hyperparameters from GROVER.
From start to finish, we estimate that we use under $500,000 in cloud compute for all of our experiments including searching for hyper-parameters and testing various cleaning methods on our datasets. The cost of training the model from scratch using our code is about $50,000.
…Despite the differences in our training distribution, we do report similar perplexities over most datasets.
Progress itself is understudied. By ‘progress,’ we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries. For a number of reasons, there is no broad-based intellectual movement focused on understanding the dynamics of progress, or targeting the deeper goal of speeding it up. We believe that it deserves a dedicated field of study. We suggest inaugurating the discipline of ‘Progress Studies.’
Before digging into what Progress Studies would entail, it’s worth noting that we still need a lot of progress. We haven’t yet cured all diseases; we don’t yet know how to solve climate change; we’re still a very long way from enabling most of the world’s population to live as comfortably as the wealthiest people do today; we don’t yet understand how best to predict or mitigate all kinds of natural disasters; we aren’t yet able to travel as cheaply and quickly as we’d like; we could be far better than we are at educating young people. The list of opportunities for improvement is still extremely long.
…Plenty of existing scholarship touches on these topics, but it takes place in a highly fragmented fashion and fails to directly confront some of the most important practical questions.
Imagine you want to know how to most effectively select and train the most talented students. While this is an important challenge facing educators, policy makers, and philanthropists, knowledge about how best to do so is dispersed across a very long list of different fields. Psychometrics literature investigates which tests predict success. Sociologists consider how networks are used to find talent. Anthropologists investigate how talent depends on circumstances, and a historiometric literature studies clusters of artistic creativity. There’s a lively debate about when and whether ‘10,000 hours of practice’ are required for truly excellent performance. The education literature studies talent-search programs such as the Center for Talented Youth. Personality psychologists investigate the extent to which openness or conscientiousness affect earnings. More recently, there’s work in sportometrics, looking at which numerical variables predict athletic success. In economics, Raj Chetty and his co-authors have examined the backgrounds and communities liable to best encourage innovators. Thinkers in these disciplines don’t necessarily attend the same conferences, publish in the same journals, or work together to solve shared problems.
When we consider other major determinants of progress, we see insufficient engagement with the central questions. For example, there’s a growing body of evidence suggesting that management practices determine a great deal of the difference in performance between organizations. One recent study found that a particular intervention—teaching better management practices to firms in Italy—improved productivity by 49 percent over 15 years when compared with peer firms that didn’t receive the training. How widely does this apply, and can it be repeated? Economists have been learning that firm productivity commonly varies within a given sector by a factor of two or three, which implies that a priority in management science and organizational psychology should be understanding the drivers of these differences. In a related vein, we’re coming to appreciate more and more that organizations with higher levels of trust can delegate authority more effectively, thereby boosting their responsiveness and ability to handle problems. Organizations as varied as Y Combinator, MIT’s Radiation Lab, and ARPA have astonishing track records in catalyzing progress far beyond their confines. While research exists on all of these fronts, we’re underinvesting considerably. These examples collectively indicate that one of our highest priorities should be figuring out interventions that increase the efficacy, productivity, and innovative capacity of human organizations…
2019-isakov.pdf: “Is the FDA too conservative or too aggressive?: A Bayesian decision analysis of clinical trial design”, (2019-01-04; ):
Implicit in the drug-approval process is a host of decisions—target patient population, control group, primary endpoint, sample size, follow-up period, etc.—all of which determine the trade-off between Type I and Type II error. We explore the application of Bayesian decision analysis (BDA) to minimize the expected cost of drug approval, where the relative costs of the two types of errors are calibrated using U.S. Burden of Disease Study 2010 data. The results for conventional fixed-sample randomized clinical-trial designs suggest that for terminal illnesses with no existing therapies such as pancreatic cancer, the standard threshold of 2.5% is substantially more conservative than the BDA-optimal threshold of 23.9% to 27.8%. For relatively less deadly conditions such as prostate cancer, 2.5% is more risk-tolerant or aggressive than the BDA-optimal threshold of 1.2% to 1.5%. We compute BDA-optimal sizes for 25 of the most lethal diseases and show how a BDA-informed approval process can incorporate all stakeholders’ views in a systematic, transparent, internally consistent, and repeatable manner.
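The paper’s trade-off logic can be sketched numerically: choose the significance threshold α minimizing expected cost, where the Type II error of a one-sided z-test falls as α rises, so costlier Type II errors (deadlier diseases) push the optimal α upward. The costs and effect size below are illustrative stand-ins, not the paper’s burden-of-disease calibration:

```python
from math import erf, sqrt

def Phi(z):                         # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

def Phi_inv(p, lo=-10.0, hi=10.0):  # inverse CDF by bisection
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return (lo + hi) / 2

def optimal_alpha(c_type1, c_type2, effect_z=2.0, p_effective=0.5):
    """Grid-search the threshold minimizing expected cost:
    (1-p)*c1*alpha + p*c2*beta(alpha), beta = Type II error rate."""
    def cost(a):
        beta = Phi(Phi_inv(1 - a) - effect_z)  # miss probability at threshold a
        return (1 - p_effective) * c_type1 * a + p_effective * c_type2 * beta
    return min((i / 1000 for i in range(1, 1000)), key=cost)

# Costlier Type II errors push the optimal threshold well above the
# conventional 2.5%:
assert optimal_alpha(c_type1=1, c_type2=5) > optimal_alpha(c_type1=1, c_type2=2) > 0.025
```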
“Scott and Scurvy: How the Cure for Scurvy Was Lost”, (2010-06-03):
[Scott’s Antarctic expedition in 1911 was plagued by the disease scurvy, despite its having been “conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease.” How it all went wrong would make a case study for a philosophy of science class.
The British Admiralty switched their scurvy cure from lemon juice to lime juice in 1860. The new cure was much less effective, but by that time advances in technology meant that most sea voyages were so short that there was little or no danger of scurvy anyway. So poor Scott’s expedition, as well as applying ‘state-of-the-art’ (ie. wrong) cures, were falling back on a ‘tried-and-true’ remedy that in fact had been largely ineffective already for 50 years… without anyone noticing.]
An unfortunate series of accidents conspired with advances in technology to discredit the cure for scurvy. What had been a simple dietary deficiency became a subtle and unpredictable disease that could strike without warning. Over the course of fifty years, scurvy would return to torment not just Polar explorers, but thousands of infants born into wealthy European and American homes. And it would only be through blind luck that the actual cause of scurvy would be rediscovered, and vitamin C finally isolated, in 1932.
…So when the Admiralty began to replace lemon juice with an ineffective substitute in 1860, it took a long time for anyone to notice. In that year, naval authorities switched procurement from Mediterranean lemons to West Indian limes. The motives for this were mainly colonial—it was better to buy from British plantations than to continue importing lemons from Europe. Confusion in naming didn’t help matters. Both “lemon” and “lime” were in use as a collective term for citrus, and though European lemons and sour limes are quite different fruits, their Latin names (citrus medica, var. limonica and citrus medica, var. acida) suggested that they were as closely related as green and red apples. Moreover, as there was a widespread belief that the antiscorbutic properties of lemons were due to their acidity, it made sense that the more acidic Caribbean limes would be even better at fighting the disease.
In this, the Navy was deceived. Tests on animals would later show that fresh lime juice has a quarter of the scurvy-fighting power of fresh lemon juice. And the lime juice being served to sailors was not fresh, but had spent long periods of time in settling tanks open to the air, and had been pumped through copper tubing. A 1918 animal experiment using representative samples of lime juice from the navy and merchant marine showed that the ‘preventative’ often lacked any antiscorbutic power at all.
By the 1870s, therefore, most British ships were sailing without protection against scurvy. Only speed and improved nutrition on land were preventing sailors from getting sick.
…In the course of writing this essay, I was tempted many times to pick a villain. Maybe the perfectly named Almroth Wright, who threw his considerable medical reputation behind the ptomaine theory and so delayed the proper re-understanding of scurvy for many years. Or the nameless Admiralty flunky who helped his career by championing the switch to West Indian limes. Or even poor Scott himself, sermonizing about the virtues of scientific progress while never conducting a proper experiment, taking dreadful risks, and showing a most unscientific reliance on pure grit to get his men out of any difficulty.
But the villain here is just good old human ignorance, that master of disguise. We tend to think that knowledge, once acquired, is something permanent. Instead, even holding on to it requires constant, careful effort.
2017-mercier.pdf: “How Gullible are We? A Review of the Evidence from Psychology and Social Science”, (2017-05-18; ):
A long tradition of scholarship, from ancient Greece to Marxism or some contemporary social psychology, portrays humans as strongly gullible—wont to accept harmful messages by being unduly deferent. However, if humans are reasonably well adapted, they should not be strongly gullible: they should be vigilant toward communicated information. Evidence from experimental psychology reveals that humans are equipped with well-functioning mechanisms of epistemic vigilance. They check the plausibility of messages against their background beliefs, calibrate their trust as a function of the source’s competence and benevolence, and critically evaluate arguments offered to them. Even if humans are equipped with well-functioning mechanisms of epistemic vigilance, an adaptive lag might render them gullible in the face of new challenges, from clever marketing to omnipresent propaganda. I review evidence from different cultural domains often taken as proof of strong gullibility: religion, demagoguery, propaganda, political campaigns, advertising, erroneous medical beliefs, and rumors. Converging evidence reveals that communication is much less influential than often believed—that religious proselytizing, propaganda, advertising, and so forth are generally not very effective at changing people’s minds. Beliefs that lead to costly behavior are even less likely to be accepted. Finally, it is also argued that most cases of acceptance of misguided communicated information do not stem from undue deference, but from a fit between the communicated information and the audience’s preexisting beliefs.
[Keywords: epistemic vigilance, gullibility, trust]
“Mass Shootings: Definitions and Trends”, (2018-03-02):
There is no standard definition of what constitutes a mass shooting. Media outlets, academic researchers, and law enforcement agencies frequently use different definitions when discussing mass shootings, leading to different assessments of the frequency with which mass shootings occur and about whether mass shootings are more common now than they were a decade or two ago.
…These definitions matter. Depending on which data source is referenced, there were seven, 65, 332, or 371 mass shootings in the United States in 2015 (see table below), and those are just some examples. More-restrictive definitions (eg. Mother Jones) focus on the prevalence of higher-profile events motivated by mass murder, but they omit more-common incidents occurring in connection with domestic violence or criminal activity, which make up about 80 percent of mass shooting incidents with four or more fatally injured victims (Krouse & Richardson, 2015).
…In 2014, the FBI released a study showing that “active shooting incidents” had increased at an average annual rate of 16 percent between 2000 and 2013 (Blair and Schweit, 2014). In contrast to the varied definitions for mass shootings, there is an agreed-upon definition among government agencies for active shooter: “an individual actively engaged in killing or attempting to kill people in a confined and populated area; in most cases, active shooters use firearm(s) and there is no pattern or method to their selection of victims” (U.S. Department of Homeland Security, 2008, p. 2). Using a modified version of this definition to include incidents that had multiple offenders or occurred in confined spaces, Blair and Schweit (2014) found that active shootings had increased from only one incident in 2000 to 17 in 2013.
…In their analysis of mass shooting trends from 1999 to 2013, Krouse and Richardson (2015) distinguished between mass shootings occurring in public locations that are indiscriminate in nature (“mass public shootings”), mass shootings in which the majority of victims are members of the offender’s family and that are not attributable to other criminal activity (“familicide mass shootings”), and mass shootings that occur in connection to some other criminal activity (“other felony mass shootings”). The two figures below show trends in these types of mass shooting incidents and fatalities, respectively, using the data provided in Krouse and Richardson (2015). Extending the data back to the 1970s, two studies found evidence of a slight increase in the frequency of mass public shootings over the past three decades (Cohen, Azrael, and Miller, 2014; Krouse & Richardson, 2015). However, using an expanded definition that includes domestic-related or felony-related killings, there is little evidence to suggest that mass shooting incidents or fatalities have increased (Cohen, Azrael, and Miller, 2014; Krouse & Richardson, 2015; Fox & Fridel, 2016). Thus, different choices about how to define a mass shooting result in different findings for both the prevalence of these events at a given time and whether their frequency has changed over time.
…Definitional issues aside, the relative rarity of mass shooting events makes analysis of trends particularly difficult. Chance variability in the annual number of mass shooting incidents makes it challenging to discern a clear trend, and trend estimates will be sensitive to outliers and to the time frame chosen for analysis. For example, while Krouse and Richardson (2015) found evidence of an upward trend in mass public shootings from 1999 to 2013, they noted that the increase was driven largely by 2012, which had an unusually high number of mass public shooting incidents. Additionally, Lott (2015) showed that Blair and Schweit’s (2014) estimate of a dramatic increase in active-shooter incidents was largely driven by the choice of 2000 as the starting date, because that year had an unusually low number of shooting incidents; extending the analysis to cover 1977 onward and adjusting the data to exclude events with fewer than two fatalities, Lott (2015) found a much smaller and statistically insignificant increase (less than 1 percent annually) in mass shooting fatalities over time.
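The sensitivity to start year and outliers described above is easy to demonstrate numerically. The sketch below uses invented incident counts (not the actual FBI or Krouse and Richardson data): a series that is essentially flat, except for an unusually quiet first year and one spike, yields a much steeper least-squares trend when the quiet year is chosen as the starting point.

```python
# Illustration with synthetic data: how the starting year changes a fitted
# linear trend in rare-event counts. The numbers are invented for illustration.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical annual incident counts: roughly flat around 10/year, except an
# unusually quiet year 2000 (1 incident) and a one-year spike in 2012.
years = list(range(1995, 2014))
counts = [10, 9, 11, 10, 9, 1, 10, 11, 9, 10, 10, 11, 9, 10, 11, 9, 10, 17, 11]

# Starting the series at the outlier low year 2000 yields a steep upward trend...
slope_2000 = ols_slope(years[5:], counts[5:])
# ...while the longer series shows a much weaker one.
slope_1995 = ols_slope(years, counts)

print(f"slope from 2000: {slope_2000:+.2f} incidents/year")
print(f"slope from 1995: {slope_1995:+.2f} incidents/year")
```

The same flat-ish data supports a trend more than twice as steep when the analysis begins at the anomalously quiet year, mirroring Lott's critique.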
“Dying the Christian Science way: the horror of my father's last days; The anti-medical dogma of Christian Science led my father to an agonising death. Now the church itself is in decline—and it can't happen fast enough”, (2019-08-06):
[‘Caroline Fraser, herself raised in a Scientist household, traces the growth of the Church from a small, eccentric sect into a politically powerful and socially respectable religion. She takes us into the closed world of Eddy’s followers, who reject modern medicine even at the cost of their children’s lives. And she reveals just how Christian Science managed to gain extraordinary legal and congressional approval for its dubious practices.’
Memoir of a former Christian Scientist, a Christian cult which believes all illness is spiritual and that medicine is useless/sinful and so whose adherents refuse medical treatment, describing her father’s slow decay from injuries and eventual death from a spreading gangrene that could have been treated. Author describes how (akin to Scientology) Christian Science is in decay itself, with rapidly declining numbers despite healthy financials and real estate assets from better days. While Christian Science may soon shrivel away, it leaves a toxic and literally infectious legacy: to profit off offering ‘treatment’ and enable its members to avoid real medical treatment for their children and themselves, Christian Science spearheaded the legislation of ‘religious exemptions’ to vaccines, empowering the current anti-vax movement, which may kill more children than Christian Science ever did.]
“Why Don't Colleges Get Rid of Their Bad Fraternities? A yearlong investigation of Greek houses reveals their endemic, lurid, and sometimes tragic problems—and a sophisticated system for shifting the blame”, (2014-03-01):
[History and investigation of legal records/settlements involving US college fraternities. Author finds that fraternities are involved in a remarkable number of serious, often fatal, injuries in part because of deliberate decisions to preserve traditions such as bunk beds for drunken partiers deliberately placed next to permanently-wide-open windows on the 2nd or 3rd story of frat buildings. Fraternities are able to survive because of their long history, including highly valuable real estate next to universities acquired in their earliest days (many frats being older than many American universities), and because of carefully-tailored insurance and regulations which enable them to push legal liability onto the students or members for the slightest infraction, such as bringing an additional bottle of beer, and thus responsibility for anything that might happen (like falling out of an open window); frat members are debriefed by the frat’s lawyers immediately after incidents with an eye to finding one who can be blamed, before the frat members can realize that the lawyers are not there to help them. While the frat members in question may have no assets to be sued over, their (frequently middle or upper-class) parents do, and may lose their houses in the subsequent lawsuits.]
“What people can’t understand”, Hiers said, gently picking up each tiny rabbit and placing it in the nest, “is how much fun Vietnam was. I loved it. I loved it, and I can’t tell anybody.” Hiers loved war. And as I drove back from Vermont in a blizzard, my children asleep in the back of the car, I had to admit that for all these years I also had loved it, and more than I knew. I hated war, too. Ask me, ask any man who has been to war about his experience, and chances are we’ll say we don’t want to talk about it—implying that we hated it so much, it was so terrible, that we would rather leave it buried. And it is no mystery why men hate war. War is ugly, horrible, evil, and it is reasonable for men to hate all that. But I believe that most men who have been to war would have to admit, if they are honest, that somewhere inside themselves they loved it too, loved it as much as anything that has happened to them before or since. And how do you explain that to your wife, your children, your parents, or your friends?
…I spent most of my combat tour in Vietnam trudging through its jungles and rice paddies without incident, but I have seen enough of war to know that I never want to fight again, and that I would do everything in my power to keep my son from fighting. Then why, at the oddest times—when I am in a meeting or running errands, or on beautiful summer evenings, with the light fading and children playing around me—do my thoughts turn back fifteen years to a war I didn’t believe in and never wanted to fight? Why do I miss it?
I miss it because I loved it, loved it in strange and troubling ways. When I talk about loving war I don’t mean the romantic notion of war that once mesmerized generations raised on Walter Scott. What little was left of that was ground into the mud at Verdun and Passchendaele: honor and glory do not survive the machine gun. And it’s not the mindless bliss of martyrdom that sends Iranian teenagers armed with sticks against Iraqi tanks. Nor do I mean the sort of hysteria that can grip a whole country, the way during the Falklands war the English press inflamed the lust that lurks beneath the cool exterior of Britain. That is vicarious war, the thrill of participation without risk, the lust of the audience for blood. It is easily fanned, that lust; even the invasion of a tiny island like Grenada can do it. Like all lust, for as long as it lasts it dominates everything else; a nation’s other problems are seared away, a phenomenon exploited by kings, dictators, and presidents since civilization began.
“From Third World to First: The Singapore Story–1965–2000”, (2000-10-03):
Few gave tiny Singapore much chance of survival when it was granted independence in 1965. How is it, then, that today the former British colonial trading post is a thriving Asian metropolis with not only the world’s number one airline, best airport, and busiest port of trade, but also the world’s fourth-highest per capita real income?
The story of that transformation is told here by Singapore’s charismatic, controversial founding father, Lee Kuan Yew. Rising from a legacy of divisive colonialism, the devastation of the Second World War, and general poverty and disorder following the withdrawal of foreign forces, Singapore now is hailed as a city of the future. This miraculous history is dramatically recounted by the man who not only lived through it all but who fearlessly forged ahead and brought about most of these changes.
Delving deep into his own meticulous notes, as well as previously unpublished government papers and official records, Lee details the extraordinary efforts it took for an island city-state in Southeast Asia to survive at that time.
Lee explains how he and his cabinet colleagues finished off the communist threat to the fledgling state’s security and began the arduous process of nation building: forging basic infrastructural roads through a land that still consisted primarily of swamps, creating an army from a hitherto racially and ideologically divided population, stamping out the last vestiges of colonial-era corruption, providing mass public housing, and establishing a national airline and airport.
In this illuminating account, Lee writes frankly about his trenchant approach to political opponents and his often unorthodox views on human rights, democracy, and inherited intelligence, aiming always “to be correct, not politically correct.” Nothing in Singapore escaped his watchful eye: whether choosing shrubs for the greening of the country, restoring the romance of the historic Raffles Hotel, or openly, unabashedly persuading young men to marry women as well educated as themselves. Today’s safe, tidy Singapore bears Lee’s unmistakable stamp, for which he is unapologetic: “If this is a nanny state, I am proud to have fostered one.”
Though Lee’s domestic canvas in Singapore was small, his vigor and talent assured him a larger place in world affairs. With inimitable style, he brings history to life with cogent analyses of some of the greatest strategic issues of recent times and reveals how, over the years, he navigated the shifting tides of relations among America, China, and Taiwan, acting as confidant, sounding board, and messenger for them. He also includes candid, sometimes acerbic pen portraits of his political peers, including the indomitable Margaret Thatcher and Ronald Reagan, the poetry-spouting Jiang Zemin, and ideologues George Bush and Deng Xiaoping.
Lee also lifts the veil on his family life and writes tenderly of his wife and stalwart partner, Kwa Geok Choo, and of their pride in their three children—particularly the eldest son, Hsien Loong, who is now Singapore’s deputy prime minister.
For more than three decades, Lee Kuan Yew has been praised and vilified in equal measure, and he has established himself as a force impossible to ignore in Asian and international politics. From Third World to First offers readers a compelling glimpse into this visionary’s heart, soul, and mind.
“Peter Thiel's Religion”, (2019-08-04):
We’ll study religion through the lens of Peter Thiel. He’s an investor who found wealth in PayPal, a student who found wisdom in Libertarian ideals, and a philosopher who found faith in the resurrection of Jesus Christ. Thiel was raised as an Evangelical and inherited the Christianity of his parents. But his beliefs are “somewhat heterodox.” In a profile in the New Yorker, he said: “I believe Christianity to be true. I don’t feel a compelling need to convince other people of that.”
Three simple statements will lead us towards our ultimate answer about the importance of religion:
- Don’t copy your neighbors
- Time moves forward
- The future will be different from the present
Rather than focusing on Thiel’s actions, I’ve chosen to focus on his ideas. First, we’ll explore the principles of Peter Thiel’s worldview. We’ll begin by explaining Thiel’s connection to a French philosopher named Rene Girard. We’ll return to old books like The Bible, old ideas like sacrifice, and old writers like Shakespeare, and see why this ancient wisdom holds clues for modern life. Then, we’ll return to the tenets of the Christian story. We’ll cover the shift from cyclical time to linear time, which was spurred by technological development and human progress. We’ll see why the last book in The Bible, The Book of Revelation, is a core pillar of Thiel’s philosophy. Then, we’ll close with Thiel’s advice and wisdom almost as old as Cain and Abel: the Ten Commandments.
…Mimetic conflict emerges when two people desire the same, scarce resource. Like lions in a cage, we mirror our enemies, fight because of our sameness, and ascend status hierarchies instead of providing value for society. Only by observing others do we learn how and what to desire. Our Mimetic nature is simultaneously our biggest strength and biggest weakness. When it goes right, imitation is a shortcut to learning. But when it spirals out of control, Mimetic imitation leads to envy and bitter, ever-escalating violence…Girard observed that even when you put a group of kids together in a room full of toys, they’ll inevitably desire the same toy instead of finding their own toy to play with. A rivalry will emerge. Human see, human want.
…Here’s what I do know: Thiel is trying to save the world from apocalypse. The Book of Revelation paints two outcomes for the future of humanity: catastrophic apocalypse or a new heaven and a new earth…The probability of a civilization-ending apocalypse is increasing. Just because we no longer believe that Zeus can strike humans with sky-lighting thunderbolts doesn’t mean that existential risk isn’t possible. Like Girard, he worries that the world is becoming more Mimetic. Worse, globalization is raising the threat of runaway mimesis and an apocalyptic world with cold corpses, dead horses, and splintered guns.
…Christianity promises a Living Hope that enables believers to endure unimaginable suffering. A hope so resilient that, like Captain America’s shield, it can survive any evil, any sickness, or any torture. No matter the obstacles, certainty about the future gives you the confidence to act in the present. Thiel’s idea of Definite Optimism is Christian theology cloaked in secular language. A positive vision for the future unites society and raises our spirits. And that’s what the Western world needs right now. Technological growth is the best way to reduce suffering in the world. Technological progress has stagnated since the 1970s, which contributes to the vile political atmosphere and the pessimism of modern Westerners. Thiel says we should acknowledge our lack of progress, dream up a vision of Definite Optimism, and, guided by Christian theology, work to make it a reality.
2019-letexier.pdf: “Debunking the Stanford Prison Experiment”, (2019-08-05; ):
The Stanford Prison Experiment (SPE) is one of psychology’s most famous studies. It has been criticized on many grounds, and yet a majority of textbook authors have ignored these criticisms in their discussions of the SPE, thereby misleading both students and the general public about the study’s questionable scientific validity.
Data collected from a thorough investigation of the SPE archives and interviews with 15 of the participants in the experiment further question the study’s scientific merit. These data are not only supportive of previous criticisms of the SPE, such as the presence of demand characteristics, but provide new criticisms of the SPE based on heretofore unknown information. These new criticisms include the biased and incomplete collection of data, the extent to which the SPE drew on a prison experiment devised and conducted by students in one of Zimbardo’s classes 3 months earlier, the fact that the guards received precise instructions regarding the treatment of the prisoners, the fact that the guards were not told they were subjects, and the fact that participants were almost never completely immersed by the situation.
Possible explanations of the inaccurate textbook portrayal and general misperception of the SPE’s scientific validity over the past 5 decades, in spite of its flaws and shortcomings, are discussed.
[Keywords: Stanford Prison Experiment, Zimbardo, epistemology]
1992-rymer.pdf: “A Silent Childhood”, (1992-04-13; ):
Annals Of Science about a case of child abuse in which a child named Genie was kept isolated from the world, locked in a restraining harness in a silent bedroom in her parents’ house in Temple City, California. Harnessed to an infant’s potty chair, unable to move anything except her fingers and hands, feet and toes, she was left to sit, tied up, hour after hour, often into the night, day after day, month after month, year after year. At night, when Genie was not forgotten, she was placed into another restraining garment—a sleeping bag which her father had fashioned to hold Genie’s arms stationary. In effect, it was a straitjacket. Describes her environment, and the “toys” she was given to “play” with. Because of two plastic raincoats that were sometimes hung in the room, she had an inordinate fondness for anything plastic. She was incarcerated by her father for 11 1⁄2 of the first 13 years of her life in a silent room. She could not speak when she was rescued, and only learned to talk when she reached the hospital. Tells about the fallout, both in human terms and legally, surrounding the research into her linguistic abilities. Investigations of Genie’s brain unveiled the utter dominance of her “spatial” right hemisphere over her “linguistic” left…This may have been why she was unable to grasp grammar—because she was using the wrong equipment…From the misfortunes of brain-damaged people, it is clear that language tasks are dispersed within their left-hemisphere home. Someone whose brain is injured above the left ear will still be able to speak, but there will be no idea behind the word strings…Tells about a suit her mother, Irene, brought against the hospital when her therapy sessions with hospital staff were included in research results by Susan Curtiss, a graduate student studying Genie. The results of Curtiss’s doctorate study seemed to both confirm and deny linguist Noam Chomsky’s theory about language acquisition.
Genie was shuttled from foster home to foster home after the scientists at the hospital (including the head of research, David Rigler, who adopted her for four years) ran out of grant money. She is currently institutionalized in an adult home for the mentally retarded, and, in the words of one scientist, Jay Shurley, is filled with a soul-sickness and sinking into an apparent replica of an organic dementia.
“The Structure of Individual Differences in the Cognitive Abilities of Children and Chimpanzees: Table 1. Primate Cognition Test Battery: Description of Tasks and Mean Proportion (With Standard Deviation) of Correct Responses by Chimpanzees and Human Children”, (2010-01-01):
[Table comparing human children and chimpanzee performance on the PCTB. The means are similar.]
Psychologist Russell Hurlburt at the University of Nevada, Las Vegas, has spent the last few decades training people to see inside their own minds more clearly in an attempt to learn something about our inner experiences at large. Though many individual studies on inner speech include only a small number of participants, making it hard to know whether their results apply more widely, Hurlburt estimates he’s been able to peek inside the minds of hundreds of people since he began his research. What he’s found suggests that the thoughts running through our heads are a lot more varied than we might suppose.
For one, words don’t seem to feature as heavily in our day-to-day thoughts as many of us think they do. “Most people think that they think in words, but many people are mistaken about that”, he says. In one small study, for example, 16 college students were given short stories before being randomly sampled to find out what they were thinking during the course of reading. Only a quarter of their sampled thoughts featured words at all, and just 3% involved internal narration.
…If people aren’t constantly talking to themselves, what are they doing?
In his years of studying the inner workings of people’s minds, Hurlburt has come up with five categories of inner experiences: inner speaking, which comes in a variety of forms; inner seeing, which could feature images of things you’ve seen in real life or imaginary visuals; feelings, such as anger or happiness; sensory awareness, like being aware of the scratchiness of the carpet under your feet; and unsymbolised thinking, a trickier concept to get your head around, but essentially a thought that doesn’t manifest as words or images, but is undoubtedly present in your mind. But those categories leave room for variation, too. Take inner speaking, which can come in the form of a single word, a sentence, some kind of monologue, or even a conversation. The idea of an internal dialogue—rather than a monologue—will be familiar to anyone who’s ever rehearsed an important conversation, or rehashed an argument, in their mind. But the person we talk to inside our head is not always a stand in for someone else—often, that other voice is another aspect of ourselves.
…Famira Racy, co-ordinator of the Inner Speech Lab at Mount Royal University, Canada, and her colleagues recently used a method called thought listing—which, unsurprisingly, involves getting participants to list their thoughts at certain times—to take a broader look at why and when people use inner speech, as well as what they say to themselves.
They found that the students in the study were talking to themselves about everything from school to their emotions, other people, and themselves, while they were doing everyday tasks like walking and getting in and out of bed. Though it has the same limitations as much research on inner speech—namely, you can’t always trust people to know what or how they were really thinking—the results appear consistent with previous work.
“I can’t say for sure if it’s any more important [than other kinds of inner experience], but there’s been enough research done to show that inner speech plays an important role in self-regulation behaviour, problem solving, critical thinking and reasoning and future thinking”, Racy says…“It gives you a way to communicate with yourself using a meaningful structure”, says Racy. Or as one of her colleagues sometimes puts it: “Inner speech is your flashlight in the dark room that is your mind.”
2013-hurlburt.pdf: “Toward a phenomenology of inner speaking”, (2013-12-01; ):
- Inner speaking is a common but not ubiquitous phenomenon of inner experience.
- There are large individual differences in the frequency of inner speaking (from near 0% to near 100%).
- There is substantial variability in the phenomenology of naturally occurring moments of inner speaking.
- Use of an appropriate method is critical to the study of inner experience.
- Descriptive Experience Sampling is designed to apprehend high fidelity descriptions of inner experience.
Abstract: Inner speaking is a common and widely discussed phenomenon of inner experience. Based on our studies of inner experience using Descriptive Experience Sampling (a qualitative method designed to produce high fidelity descriptions of randomly selected pristine inner experience), we advance an initial phenomenology of inner speaking. Inner speaking does occur in many, though certainly not all, moments of pristine inner experience. Most commonly it is experienced by the person as speaking in his or her own naturally inflected voice but with no sound being produced. In addition to prototypical instances of inner speaking, there are wide-ranging variations that fit the broad category of inner speaking and large individual differences in the frequency with which individuals experience inner speaking. Our observations are discrepant from what many have said about inner speaking, which we attribute to the characteristics of the methods different researchers have used to examine inner speaking.
2011-mccarthyjones.pdf: “The varieties of inner speech: Links between quality of inner speech and psychopathological variables in a sample of young adults”, (2011-12; ):
- We develop a questionnaire to assess a number of qualities of inner speech.
- We examine its correlations with psychopathology in young adults.
- The inner speech questionnaire was found to have satisfactory psychometrics.
- Anxiety, but not depression, correlated with specific varieties of inner speech.
- Proneness to auditory hallucinations correlated with levels of dialogic inner speech.
A resurgence of interest in inner speech as a core feature of human experience has not yet coincided with methodological progress in the empirical study of the phenomenon. The present article reports the development and psychometric validation of a novel instrument, the Varieties of Inner Speech Questionnaire (VISQ), designed to assess the phenomenological properties of inner speech along dimensions of dialogicality, condensed/expanded quality, evaluative/motivational nature, and the extent to which inner speech incorporates other people’s voices. In response to findings that some forms of psychopathology may relate to inner speech, anxiety, depression, and proneness to auditory and visual hallucinations were also assessed. Anxiety, but not depression, was found to be uniquely positively related to both evaluative/motivational inner speech and the presence of other voices in inner speech. Only dialogic inner speech predicted auditory hallucination-proneness, with no inner speech variables predicting levels of visual hallucinations/disturbances. Directions for future research are discussed.
[Keywords: Anxiety, Auditory hallucination, Cognitive behavioral therapy, Depression, Dialogic, Inner speech, Rumination, Vygotsky]
“Structured Procrastination”, (1996-02-23):
All procrastinators put off things they have to do. Structured procrastination is the art of making this bad trait work for you. The key idea is that procrastinating does not mean doing absolutely nothing. Procrastinators seldom do absolutely nothing; they do marginally useful things, like gardening or sharpening pencils or making a diagram of how they will reorganize their files when they get around to it. Why does the procrastinator do these things? Because they are a way of not doing something more important. If all the procrastinator had left to do was to sharpen some pencils, no force on earth could get him to do it. However, the procrastinator can be motivated to do difficult, timely and important tasks, as long as these tasks are a way of not doing something more important.
Structured procrastination means shaping the structure of the tasks one has to do in a way that exploits this fact. The list of tasks one has in mind will be ordered by importance. Tasks that seem most urgent and important are on top. But there are also worthwhile tasks to perform lower down on the list. Doing these tasks becomes a way of not doing the things higher up on the list. With this sort of appropriate task structure, the procrastinator becomes a useful citizen. Indeed, the procrastinator can even acquire, as I have, a reputation for getting a lot done.
“The Generative Adversarial Brain”, (2019-07-21; ):
The idea that the brain learns generative models of the world has been widely promulgated. Most approaches have assumed that the brain learns an explicit density model that assigns a probability to each possible state of the world. However, explicit density models are difficult to learn, requiring approximate inference techniques that may find poor solutions. An alternative approach is to learn an implicit density model that can sample from the generative model without evaluating the probabilities of those samples. The implicit model can be trained to fool a discriminator into believing that the samples are real. This is the idea behind generative adversarial algorithms, which have proven adept at learning realistic generative models. This paper develops an adversarial framework for probabilistic computation in the brain. It first considers how generative adversarial algorithms overcome some of the problems that vex prior theories based on explicit density models. It then discusses the psychological and neural evidence for this framework, as well as how the breakdown of the generator and discriminator could lead to delusions observed in some mental disorders.
…Our sensory inputs are impoverished, and yet our experience of the world feels richly detailed. For example, our fovea permits us access to a high fidelity region of the visual field only twice the size of our thumbnail held at arm’s length. But we don’t experience the world as though looking through a tiny aperture. Instead, our brains feed us a “grand illusion” of panoptic vision (Chater, 2018; Noe et al 2000; Odegaard et al 2018). Similarly, we receive no visual input in the region of the retina that connects to the optic nerve, yet under normal circumstances we are unaware of this blind spot. Moreover, even when we receive high fidelity visual input, we may still fail to witness dramatic changes in scenes (Simons, 2000), as though our brains have contrived imaginary scenes that displace the true scenes.
…First, how can we explain the phenomenology of illusion: why do some illusions feel real, as though one is actually seeing them, whereas other inferences carry information content without the same perceptual experience? For example, Ramachandran and Hirstein (1997) use the example of gazing at wallpaper in a bathroom, where the wallpaper in your visual periphery is ‘filled in’ (you subjectively experience it as high fidelity even though objectively you perceive it with low fidelity), but the wallpaper behind your head is not filled in. In other words, you infer that the wallpaper continues behind your head, and you may even know this with high confidence, but you do not have the experience of seeing the wallpaper behind your head. Thus, the vividness or “realness” of perceptual experience is not a simple function of belief strength. So what is it a function of? Second, how can we explain the peculiar ways that the inferential apparatus breaks down? In particular, how can we understand the origins of delusions, hallucinations, and confabulations that arise in certain mental disorders? While Bayesian models have been developed to explain these phenomena, they fall short in certain ways that we discuss later on.
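The adversarial scheme the paper describes can be sketched at toy scale. The following is an illustrative reduction, not the paper's model: a one-parameter "generator" that shifts Gaussian noise, and a logistic "discriminator", trained against each other. The generator never evaluates a density (it is an implicit model); it only learns to produce samples that fool the discriminator, and in doing so matches the data distribution's mean.

```python
# Minimal sketch of adversarial training on a 1-D Gaussian (illustration only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

real_mean = 4.0            # the "data distribution" the generator must imitate
mu = 0.0                   # generator parameter: g(z) = mu + z
a, b = 0.0, 0.0            # discriminator parameters: D(x) = sigmoid(a*x + b)
lr, batch, n_critic = 0.1, 64, 3

mus = []
for _ in range(1000):
    # Discriminator: a few ascent steps on log D(real) + log(1 - D(fake))
    for _ in range(n_critic):
        x_real = real_mean + rng.standard_normal(batch)
        x_fake = mu + rng.standard_normal(batch)
        d_real, d_fake = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
        a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
        b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: one ascent step on log D(fake) (the non-saturating loss);
    # no density is ever computed, only "did I fool the discriminator?"
    x_fake = mu + rng.standard_normal(batch)
    d_fake = sigmoid(a * x_fake + b)
    mu += lr * a * np.mean(1 - d_fake)
    mus.append(mu)

mu_avg = float(np.mean(mus[-300:]))
print(f"generator mean (late-training average): {mu_avg:.2f}; target {real_mean}")
```

At equilibrium the discriminator cannot tell real from fake (D near 0.5), at which point the generator's samples are distributed like the data, the sense in which a "generator" could produce rich percepts without an explicit density model. The characteristic oscillation of adversarial training is also visible here: `mu` circles the target rather than settling exactly on it.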
Based on: the characteristic distribution of neural activity, personal accounts of intense pleasure and pain, the way various pain scales have been described by their creators, and the results of a pilot study we conducted which ranks, rates, and compares the hedonic quality of extreme experiences, we suggest that the best way to interpret pleasure and pain scales is by thinking of them as logarithmic compressions of what is truly a long-tailed distribution. The most intense pains are orders of magnitude more awful than mild pains (and symmetrically for pleasure).
This should inform the way we prioritize altruistic interventions and plan for a better future. Since the bulk of suffering is concentrated in a small percentage of experiences, focusing our efforts on preventing cases of intense suffering likely dominates most utilitarian calculations.
An important pragmatic takeaway from this article is that if one is trying to select an effective career path, as a heuristic it would be good to take into account how one’s efforts would cash out in the prevention of extreme suffering (see: ‘Hell-Index’), rather than just QALYs and wellness indices that ignore the long-tail. Of particular note as promising Effective Altruist careers, we would highlight working directly to develop remedies for specific, extremely painful experiences. Finding scalable treatments for migraines, kidney stones, childbirth, cluster headaches, CRPS, and fibromyalgia may be extremely high-impact (cf. ‘Treating Cluster Headaches and Migraines Using N,N-DMT and Other Tryptamines’, ‘Using Ibogaine to Create Friendlier Opioids’, and ‘Frequency Specific Microcurrent for Kidney-Stone Pain’). More research efforts into identifying and quantifying intense suffering currently unaddressed would also be extremely helpful. Finally, if the positive valence scale also has a long-tail, focusing one’s career in developing bliss technologies may pay off in surprisingly good ways (whereby you may stumble on methods to generate high-valence healing experiences which are orders of magnitude better than you thought were possible).
[Keywords: consciousness research, Effective Altruism, ethics, Hedonic Tone, meaning, psychedelic, sex, spirituality, valence]
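The logarithmic-compression claim can be made concrete with a toy calculation. The mapping below, intensity = 10^rating for a 0-10 pain rating, is an assumption chosen purely for illustration (the article argues for a long tail, not for this exact base), as are the hypothetical experience counts:

```python
# Toy illustration of a long-tail under a logarithmic pain scale: if a 0-10
# rating is the base-10 log of underlying intensity, a single worst experience
# can dominate the total. Ratings below are hypothetical, not survey data.
ratings = [2] * 90 + [5] * 9 + [9]   # 100 experiences: mostly mild, one extreme
intensities = [10 ** r for r in ratings]

total = sum(intensities)
worst_share = max(intensities) / total
print(f"the single worst experience holds {worst_share:.1%} of total intensity")
```

Under this assumed mapping, the one 9/10 experience accounts for over 99% of the summed intensity, which is the article's argument for prioritizing the prevention of extreme suffering over broad improvements to mild experiences.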
For Paul, it started with a fishing trip. For Lenny, it was an addict whose knuckles were covered in sores. Dawn found pimples clustered around her swimming goggles. Kendra noticed ingrown hairs. Patricia was attacked by sand flies on a Gulf Coast beach. Sometimes the sickness starts as blisters, or lesions, or itching, or simply a terrible fog settling over the mind, over the world.
For me, Morgellons disease started as a novelty: people said they had a strange ailment, and no one—or hardly anyone—believed them. But there were a lot of them, reportedly 12,000, and their numbers were growing. Their illness manifested in many ways, including fatigue, pain, and formication (a sensation of insects crawling over the skin). But the defining symptom was always the same: fibers emerging from their bodies. Not just fibers but fuzz, specks, and crystals. They didn’t know what this stuff was, or where it came from, or why it was there, but they knew—and this was what mattered, the important word—that it was real.
…Browne’s “harsh hairs” were the early ancestors of today’s fibers. Photos online show them in red, white, and blue—like the flag—and also black and translucent. These fibers are the kind of thing you describe in relation to other kinds of things: jellyfish or wires, animal fur or taffy candy or a fuzzball off your grandma’s sweater. Some are called goldenheads, because they have a golden-colored bulb. Others simply look sinister, technological, tangled.
Patients started bringing these threads and flecks and fuzz to their doctors, storing them in Tupperware or matchboxes, and dermatologists actually developed a term for this phenomenon. They called it “the matchbox sign”, an indication that patients had become so determined to prove their disease that they might be willing to produce fake evidence.
…This isn’t an essay about whether Morgellons disease is real. That’s probably obvious by now. It’s an essay about what kinds of reality are considered prerequisites for compassion. It’s about this strange sympathetic limbo: Is it wrong to speak of empathy when you trust the fact of suffering but not the source?
This is the rallying cry of the Lyme Warrior. Spend a while browsing
#lymewarrior on Instagram and what you find looks like wellness content at first. There are selfies, shots of food, talk of toxins, exhortations toward self-care. There are more extensive arrays of supplements than you might expect. Then the IVs snake into view. There are hospital gowns and seats at outpatient-treatment centers and surgically implanted ports displayed with pride. This is wellness predicated on the constant certainty that all is not well. Like Hadid, the Lyme Warriors struggle against those who would doubt their condition, and, like Hadid, they are firm in their resolve. They have a name, and they have each other.
Where Murray sought to answer a question, the warrior who now takes up the cause of chronic Lyme is seeking to affirm an answer. For this community of patients, Lyme has come to function as something more expansive than a diagnosis. While Lyme disease is a specific medical condition—one that may manifest more severely or less, be treated more easily or less—chronic Lyme is something else altogether. (The medical establishment generally avoids using the term chronic Lyme, and because of this establishment wariness, advocates who believe Lyme is a chronic infection now sometimes advise patients to avoid it too.) This version of Lyme has no consistent symptoms, no fixed criteria, and no accurate test. This Lyme is a kind of identity. Lyme is a label for a state of being, a word that conveys your understanding of your lived experience. Lyme provides the language to articulate that experience and join with others who share it. In the world of chronic Lyme, doctors are trustworthy (or not) based on their willingness to treat Lyme. Tests are trustworthy (or not) based on their ability to confirm Lyme. Lyme is the fundamental fact, and you work backward from there. Lyme is a community with a cause: the recognition of its sufferers’ suffering—and, with it, the recognition of Lyme.
1987-simonton.pdf: “Developmental antecedents of achieved eminence”, (1987; ):
[Literature review of Simonton and others’ research into life-history predictors of great accomplishment in the arts/sciences/politics/etc, particularly childhood: what variables seem to correlate with later eminence? Simonton discusses as predictors: 1. intelligence; 2. birth order (first-born); 3. extreme motivation/drive; 4. parental loss/orphanhood (!); 5. a previous generation of role models to imitate; 6. formal education (or lack thereof); 7. global circumstances/‘zeitgeist’.
On nature-nurture, Simonton deprecates the role of genetics, arguing that genius counts fluctuate too much and are too sporadic over time to reflect primarily genetics, but see Lykken et al on ‘emergenesis’, dysgenics, and tail effects in order statistics (especially the Lotka curve/log-normal distribution ‘leaky pipeline’ Simonton is so familiar with) for why this argument is weak.]
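The tail-effects point can be made concrete with a toy calculation (illustrative numbers only): for a normally distributed trait, even a modest shift in the population mean multiplies the count of individuals past an extreme cutoff, so eminence counts can fluctuate sharply across generations without the underlying heritability changing at all.

```python
from statistics import NormalDist

# Tail effects in a normal trait: a small mean shift multiplies
# the fraction of the population beyond an extreme threshold.
base = NormalDist(mu=100, sigma=15)      # an IQ-like trait
shifted = NormalDist(mu=105, sigma=15)   # mean raised by a third of an SD

cutoff = 160  # an "eminence" threshold, 4 SD above the original mean
p_base = 1 - base.cdf(cutoff)
p_shifted = 1 - shifted.cdf(cutoff)
print(p_shifted / p_base)  # roughly a 4-fold increase past the cutoff
```

The same logic runs in reverse: small environmental or demographic shifts can make genius counts look sporadic even when the trait is strongly heritable.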
2019-kredlow.pdf: “The Efficacy of Modafinil as a Cognitive Enhancer: A Systematic Review and Meta-Analysis”, (2019-08-19; ):
Background: Animal models and human studies have identified the potential of modafinil as a cognitive enhancing agent, independent of its effects on promoting wakefulness in sleep-deprived samples. Given that single-dose applications of other putative memory enhancers (eg, D-cycloserine, yohimbine, and methylene blue) have shown success in enhancing clinical outcomes for anxiety-related disorders, we conducted a meta-analytic review examining the potential for single-dose effects of modafinil on cognitive functioning in non-sleep-deprived adults.
Methods: A total of 19 placebo-controlled trials that examined the effects of single-dose modafinil versus placebo on the cognitive domains of attention, executive functioning, memory, or processing speed were identified, allowing for the calculation of 67 cognitive domain-specific effect sizes.
Results: The overall positive effect of modafinil over placebo across all cognitive domains was small and statistically-significant (g = 0.10; 95% CI, 0.05–0.15; p < 0.001). No statistically-significant differences between cognitive domains were found. Likewise, no statistically-significant moderation was found for dose (100 mg vs 200 mg) or for the populations studied (psychiatric vs nonpsychiatric).
Conclusions: The available evidence indicates only limited potential for modafinil to act as a cognitive enhancer outside sleep-deprived populations.
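As a sketch of how such a pooled estimate is typically produced, fixed-effect inverse-variance weighting combines per-study effect sizes by the precision of each study (the g values and standard errors below are invented for illustration, not taken from the 19 trials):

```python
import math

# Fixed-effect inverse-variance pooling over (g, SE) pairs.
# These study-level values are illustrative, NOT the paper's data.
studies = [(0.05, 0.08), (0.15, 0.10), (0.12, 0.06), (0.02, 0.09)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

A confidence interval excluding zero, as here and in the paper’s g = 0.10 (0.05–0.15), is what licenses calling a small pooled effect statistically-significant.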
A 27-year-old male patient fasted under supervision for 382 days and has subsequently maintained his normal weight. Blood glucose concentrations around 30 mg/100 ml were recorded consistently during the last 8 months, although the patient was ambulant and attending as an out-patient. Responses to glucose and tolbutamide tolerance tests remained normal. The hyperglycaemic response to glucagon was reduced and latterly absent, but promptly returned to normal during carbohydrate refeeding. After an initial decrease was corrected, plasma potassium levels remained normal without supplementation. A temporary period of hypercalcaemia occurred towards the end of the fast. Decreased plasma magnesium concentrations were a consistent feature from the first month onwards. After 100 days of fasting there was a marked and persistent increase in the excretion of urinary cations and inorganic phosphate, which until then had been minimal. These increases may be due to dissolution of excessive soft tissue and skeletal mass. Prolonged fasting in this patient had no ill-effects.
…During the 382 days of the fast, the patient’s weight decreased from 456 to 180lb. Five years after undertaking the fast, Mr A.B.’s weight remains around 196lb…The amount of weight lost and the rate of loss were not strikingly different from that of an earlier patient (Stewart, Fleming & Robertson, 1966) who reduced his weight from 432 to 235lb during 350 days of intermittent starvation.
…We wish to express our gratitude to Mr A. B. for his cheerful cooperation and steadfast application to the task of achieving a normal physique.
[A study of the Mississippi River, its history, and efforts by the US Army Corps of Engineers to hold it in place.] It was published in February, 1987, and it’s about the Herculean effort of the US Army Corps of Engineers to control the flow of the Mississippi River, the fourth-longest river in the world. “Atchafalaya” is the name of the “distributary waterscape” that threatens to capture and redirect the flow of the Mississippi. If that happens, the cities and industrial centers of Southern Louisiana could find themselves sitting, uselessly, next to a “tidal creek”, and economic ruin would be the inevitable result. To prevent that, the Corps of Engineers embarks on a vast project to artificially freeze the naturally shifting landscape. McPhee meets the engineers and explores the structures they’ve built to “preserve 1950 … in perpetuity.”…Like the Mississippi, “Atchafalaya” is long—around twenty-seven thousand words. But it’s all available online, and it gives you a real sense of what it’s like not just to live and work beside one of the world’s great rivers but actually to struggle with it.
“Why did we wait so long for the bicycle?”, (2019-07-13):
The bicycle, as we know it today, was not invented until the late 1800s. Yet it was a simple mechanical invention. It would seem to require no brilliant inventive insight, and certainly no scientific background.
…Technology factors are more convincing to me. They may have been necessary for bicycles to become practical and cheap enough to take off. But they weren’t needed for early experimentation. Frames can be built of wood. Wheels can be rimmed with metal. Gears can be omitted. Chains can be replaced with belts; some early designs even used treadles instead of pedals, and at least one design drove the wheels with levers, as on a steam locomotive. So what’s the real explanation?
First, the correct design was not obvious. For centuries, progress was stalled because inventors were all trying to create multi-person four-wheeled carriages, rather than single-person two-wheeled vehicles. It’s unclear why this was; certainly inventors were copying an existing mode of transportation, but why would they draw inspiration only from the horse-and-carriage, and not from the horse-and-rider? (Some commenters have suggested that it was not obvious that a two-wheeled vehicle would balance, but I find this unconvincing given how many other things people have learned to balance on, from dugout canoes to horses themselves.) It’s possible (I’m purely speculating here) that early mechanical inventors had a harder time realizing the fundamental impracticability of the carriage design because they didn’t have much in the way of mathematical engineering principles to go on, but then again it’s unclear what led to Drais’s breakthrough. And even after Drais hit on the two-wheeled design, it took multiple iterations, which happened over decades, to get to a design that was efficient, comfortable, and safe.
…But we can go deeper, and ask the questions that inspired my intense interest in this question in the first place. Why was no one even experimenting with two-wheeled vehicles until the 1800s? And why was no one, as far as we know, even considering the question of human-powered vehicles until the 1400s? Why weren’t there bicycle mechanics in the 1300s, when there were clockmakers, or at least by the 1500s, when we had watches? Or among the ancient Romans, who built water mills and harvesting machines? Or the Greeks, who built the Antikythera mechanism? Even if they didn’t have tires and chains, why weren’t these societies at least experimenting with draisines? Or even the failed carriage designs?
“Hall's Law: The Nineteenth Century Prequel to Moore's Law”, (2012-03-08):
[Coins “Hall’s law”: “the maximum complexity of artifacts that can be manufactured at scales limited only by resource availability doubles every 10 years.” Economic history discussion of industrialization: the replacement of esoteric artisanal knowledge, based on trial-and-error and epitomized by a classic Sheffield steel recipe which calls for adding 4 white onions to iron, by formalized, specialized, rationalized processes such as interchangeable parts in a rifle produced by a factory system, which can create standardized parts at larger scales than craft-based processes, on which other systems can be built (once a reliable controlled source of parts exists). Examples include British gun-making, John Hall, the Montgomery Ward catalogue.]
I believe this law held between 1825 and 1960, at which point the law hit its natural limits. Here, I mean complexity in the loose sense I defined before: some function of mechanical complexity and operating tempo of the machine, analogous to the transistor count and clock-rate of chips. I don’t have empirical data to accurately estimate the doubling period, but 10 years is my initial guess, based on the anecdotal descriptions from Morris’ book and the descriptions of the increasing presence of technology in the world fairs. Along the complexity dimension, mass-produced goods rapidly got more complex, from guns with a few dozen parts to late-model steam engines with thousands. The progress on the consumer front was no less impressive, with the Montgomery Ward catalog offering mass-produced pianos within a few years of its introduction, for instance. By the turn of the century, you could buy entire houses in mail-order kit form. The cost of everything was collapsing. Along the tempo dimension, everything got relentlessly faster as well. Somewhere along the way, things got so fast thanks to trains and the telegraph, that time zones had to be invented and people had to start paying attention to the second hand on clocks.
…History is repeating itself. And the rerun episode we are living right now is not a pleasant one. The problem with history repeating itself, of course, is that sometimes it does not. The fact that 1819–1880 maps pretty well to 1959–2012 does not mean that 2012–2112 will map to 1880–1980. Many things are different this time around. But assuming history does repeat itself, what are we in for? If the Moore’s Law endgame is the same century-long economic-overdrive that was the Hall’s Law endgame, today’s kids will enter the adult world with prosperity and a fully-diffused Moore’s Law all around them. The children will do well. In the long term, things will look up. But in the long term, you and I will be dead.
2019-kwong.pdf: “Hard Drive of Hearing: Disks that Eavesdrop with a Synthesized Microphone”, (2019-05-01; ):
Security conscious individuals may take considerable measures to disable sensors in order to protect their privacy. However, they often overlook the cyberphysical attack surface exposed by devices that were never designed to be sensors in the first place. Our research demonstrates that the mechanical components in magnetic hard disk drives behave as microphones with sufficient precision to extract and parse human speech. These unintentional microphones sense speech with high enough fidelity for the Shazam service to recognize a song recorded through the hard drive. This proof of concept attack sheds light on the possibility of invasion of privacy even in absence of traditional sensors. We also present defense mechanisms, such as the use of ultrasonic aliasing, that can mitigate acoustic eavesdropping by synthesized microphones in hard disk drives.
“The Road to Clarity”, (2007-08-12):
Looking at a sign in Clearview after reading one in Highway Gothic is like putting on a new pair of reading glasses: there’s a sudden lightness, a noticeable crispness to the letters. The Federal Highway Administration granted Clearview interim approval in 2004, meaning that individual states are free to begin using it in all their road signs. More than 20 states have already adopted the typeface, replacing existing signs one by one as old ones wear out. Some places have been quicker to make the switch—much of Route I-80 in western Pennsylvania is marked by signs in Clearview, as are the roads around Dallas-Fort Worth International Airport—but it will very likely take decades for the rest of the country to finish the roadside makeover. It is a slow, almost imperceptible process. But eventually the entire country could be looking at Clearview.
…Meeker initially assumed that the solution to the nation’s highway sign problem lay in the clean utilitarian typefaces of Europe. One afternoon in the late fall of 1992, Meeker was sitting in his Larchmont office with a small team of designers and engineers. He suggested that the group get away from the computer screens and out of the office to see what actually worked in the open air at long distances. They grabbed all the roadsigns Meeker had printed—nearly 40 metal panels set in a dozen different fonts of varying weights—and headed across the street to the Larchmont train station, where they rested the signs along a railing. They then hiked to the top of a nearby hill. When they stopped and turned, they were standing a couple hundred feet from the lineup below. There was the original Highway Gothic; British Transport, the road typeface used in the United Kingdom; Univers, found in the Paris Metro and on Apple computer keyboards; DIN 1451, used on road and train signage in Germany; and also Helvetica, the classic sans-serif seen in modified versions on roadways in a number of European countries. “There was something wrong with each one”, Meeker remembers. “Nothing gave us the legibility we were looking for.” The team immediately realized that it would have to draw something from scratch.
In those early days, the company, just like almost everybody else in Washington, primarily produced Red Delicious apples, plus a few Goldens and Grannies—familiar workhorse varieties that anybody was allowed to grow. Back then, the state apple commission advertised its wares with a poster of a stoplight: one apple each in red, green, and yellow. Today, across more than 4,000 acres of McDougall apple trees, you won’t find a single Red; every year, you’ll also find fewer acres of the apples that McDougall calls “core varieties”, the more modern open-access standards such as Gala and Fuji. Instead, McDougall is betting on what he calls “value-added apples”: Ambrosias, whose rights he licensed from a Canadian company; Envy, Jazz, and Pacific Rose, whose intellectual properties are owned by the New Zealand giant Enzafruit; and a brand-new variety, commercially available for the first time this year and available only to Washington-state growers: the Cosmic Crisp.
…The Cosmic Crisp is debuting in grocery stores after this fall’s harvest, and in the nervous lead-up to the launch, everyone from nursery operators to marketers wanted me to understand the crazy scope of the thing: the scale of the plantings, the speed with which mountains of commercially untested fruit would be arriving on the market, the size of the capital risk. People kept saying things like “unprecedented”, “on steroids”, “off the friggin’ charts”, and “the largest launch of a single produce item in American history.”
McDougall took me to the highest part of his orchard, where we could look down at all its hundreds of very expensively trellised and irrigated acres (he estimated the costs to plant each individual acre at $60,000 to $65,000, plus another $12,000 in operating costs each year), their neat, thin lines of trees like the stitching over so many quilt squares. “If you’re a farmer, you’re a riverboat gambler anyway”, McDougall said. “But Cosmic Crisp—woo!” I thought of the warning of one former fruit-industry journalist that, with so much on the line, the enormous launch would have to go flawlessly: “It’s gotta be like the new iPhone.”
…Though Washington State University owns the WA 38 patent, the breeding program has received funding from the apple industry, so it was agreed, over some objections by people who worried that quality would be diluted, that the variety should be universally and exclusively available to Washington growers. (Growers of Cosmic Crisp pay royalties both on every tree they buy and on every box they sell, money that will fund future breeding projects as well as the shared marketing campaign.) The apple tested so well that WSU, in collaboration with commercial nurseries, began producing apple saplings as fast as possible; the plan was to start with 300,000 trees, but growers requested 4 million, leading to a lottery for divvying up the first available trees. Within three years, the industry had sunk 13 million of them, plus more than half a billion dollars, into the ground. Proprietary Variety Management expects that the number of Cosmic Crisp apples on the market will grow by millions of boxes every year, outpacing Pink Lady and Honeycrisp within about five years of its launch.
2019-kamenica.pdf: “Bayesian Persuasion and Information Design”, (2019-08-01; ):
A school may improve its students’ job outcomes if it issues only coarse grades. Google can reduce congestion on roads by giving drivers noisy information about the state of traffic. A social planner might raise everyone’s welfare by providing only partial information about solvency of banks. All of this can happen even when everyone is fully rational and understands the data-generating process. Each of these examples raises questions of what is the (socially or privately) optimal information that should be revealed. In this article, I review the literature that answers such questions.
A standard Lloyd’s contract defined disgrace in vague terms—as “any criminal act, or any offence against public taste or decency…which degrades or brings that person into disrepute or provokes insult or shock to the community.” Most effective policies rely on precise terms and evidence that both sides can agree on—the Richter scale, a hospital bill. Subjective wording leads to disputes. Insurance “has to involve no litigation”, says Bill Hubbard, CEO of the entertainment insurer HCC Specialty Group. “You know the Supreme Court justice who said, ‘I know pornography when I see it’? You can’t settle claims that way.”
The contracts were much clearer on the definition of what didn’t merit a payout: Many of them exempted non-felonious offenses and acts committed prior to the policy’s start date. Even if the All the Money producers had bought a policy, Spacey’s past transgressions might have been excluded, treated as preexisting conditions.
While these limitations kept the industry small, the foibles of the rich and famous only increased demand for a better product. Tiger Woods’s 2009 car crash, followed by revelations of his infidelities, cost him $22 million (about $29 million in today’s dollars) in contracts with brands like AT&T and Gatorade—which was nothing compared to what they cost the companies. A UC Davis study put the brands’ shareholder losses somewhere between $5 billion and $12 billion in 2009 dollars (roughly $6 billion to $16 billion today).
But it wasn’t Woods who made disgrace insurance look viable; it was reality television. A few months before the golfer’s car crash came what one underwriter refers to only as “that Viacom loss.” Ryan Jenkins, then a contestant on the VH1 reality show Megan Wants a Millionaire and the star of an upcoming season of I Love Money, became the lead suspect in his wife’s murder and killed himself a few days later. Megan was canceled after three episodes and the Money season shelved entirely, costing Viacom seven figures in losses. That’s when the company started buying disgrace insurance.
Thousands of reality shows have been insured in the ensuing decade, many of them via two insurance brokers, Gallagher Entertainment and HUB International. HUB’s managing director, Bob Jellen, can recall about half a dozen claims paying out since the Jenkins murder. He wouldn’t offer specifics, but others have given two examples: P.I. Moms, which was canceled in 2011 following fraud and drug charges, and Spike TV’s Bar Rescue, after an owner killed a country singer in his own rescued bar. “It’s something we don’t advertise”, says Jellen of disgrace insurance. “You don’t have to sell people on disgrace.”
2004-wallace-considerthelobster.html: “Consider the Lobster: For 56 years, the Maine Lobster Festival has been drawing crowds with the promise of sun, fun, and fine food. One visitor would argue that the celebration involves a whole lot more”, (2004-08-01; ):
[Originally published in the August 2004 issue of Gourmet magazine, this review of the 2003 Maine Lobster Festival generated some controversy among the readers of the culinary magazine. The essay is concerned with the ethics of boiling a creature alive in order to enhance the consumer’s pleasure, including a discussion of lobster sensory neurons.]
A detail so obvious that most recipes don’t even bother to mention it is that each lobster is supposed to be alive when you put it in the kettle…Another alternative is to put the lobster in cold salt water and then very slowly bring it up to a full boil. Cooks who advocate this method are going mostly on the analogy to a frog, which can supposedly be kept from jumping out of a boiling pot by heating the water incrementally. In order to save a lot of research-summarizing, I’ll simply assure you that the analogy between frogs and lobsters turns out not to hold.
…So then here is a question that’s all but unavoidable at the World’s Largest Lobster Cooker, and may arise in kitchens across the U.S.: Is it all right to boil a sentient creature alive just for our gustatory pleasure? A related set of concerns: Is the previous question irksomely PC or sentimental? What does ‘all right’ even mean in this context? Is it all just a matter of individual choice?
…As far as I can tell, my own main way of dealing with this conflict has been to avoid thinking about the whole unpleasant thing. I should add that it appears to me unlikely that many readers of Gourmet wish to think hard about it, either, or to be queried about the morality of their eating habits in the pages of a culinary monthly. Since, however, the assigned subject of this article is what it was like to attend the 2003 MLF, and thus to spend several days in the midst of a great mass of Americans all eating lobster, and thus to be more or less impelled to think hard about lobster and the experience of buying and eating lobster, it turns out that there is no honest way to avoid certain moral questions.
“A science fiction writer of the Fifties”, (2006-04-01):
II. When the Smoke Clears
The mind, that rambling bear, ransacks the sky
In search of honey,
Fish, berries, carrion. It minds no laws…
As if the heavens were some canvas tent,
It slashes through the firmament
To prise up the sealed stores with its big paws.
The mind, that sovereign camel, sees the sky
For what it is:
Each star a grain of sand along the vast
Passage to that oasis where, below
The pillared palms, the portico
Of fronds, the soul may drink its fill at last.
The mind, that gorgeous spider, webs the sky
With lines so sheer
They all but vanish, and yet star to star
(Thread by considered thread) slowly entwines
The universe in its designs—
Un-earthing patterns where no patterns are.
The mind, that termite, seems to shun the sky.
It burrows down,
Tunneling in upon that moment when,
In Time—its element—will come a day
The longest-shadowed tower sway,
Unbroken sunlight fall to earth again.
…DNA was unspooled in the year
I was born, and the test-tube births
Of cloned mammals emerged in a mere
Half-century; it seems the earth’s
Future’s now in the hands of a few
Techies on a caffeinated all-nighter who
Sift the gene-alphabet like Scrabble tiles
And our computer geeks are revealed, at last,
As those quick-handed, sidelined little mammals
In the dinosaurs’ long shadows—those least-
Likely-to-succeed successors whose kingdom come
Was the globe itself (an image best written down,
Perhaps, beneath a streetlamp, late, in some
Star-riddled Midwestern town).
He wrote boys’ books and intuitively
Recognized that the real
Realist isn’t the one who details
Lowdown heartland factories and farms
As if they would last, but the one who affirms,
From the other end of the galaxy,
Ours is the age of perilous miracles.
So one can imagine the furor in 1963 when a German writer claimed to have uncovered the real story behind the fairy tale.
According to Die Wahrheit über Hänsel und Gretel (The Truth About Hansel and Gretel), the two siblings were, in fact, adult brother and sister bakers, living in Germany during the mid-17th century. They murdered the witch, an ingenious confectioner in her own right, to steal her secret recipe for lebkuchen, a gingerbread-like traditional treat. The book published a facsimile of the recipe in question, as well as sensational photos of archaeological evidence.
…The media picked up the story and turned it into national news. “Book of the week? No, it’s the book of the year, and maybe the century!” proclaimed the West German tabloid Abendzeitung in November 1963. The state-owned East German Berliner Zeitung came out with the headline “Hansel and Gretel—a duo of murderers?” and asked whether this could be “a criminal case from the early capitalist era.” The news spread like wildfire not only in Germany, but abroad too. Foreign publishers, smelling a profit, began negotiating for the translation rights. School groups, some from neighboring Denmark, traveled to the Spessart woods in the states of Bavaria and Hesse to see the newly discovered foundations of the witch’s house.
As intriguing as The Truth About Hansel and Gretel might sound, however, none of it proved to be true. In fact, the book turned out to be a literary forgery concocted by Hans Traxler, a German children’s book writer and cartoonist, known for his sardonic sense of humor. “1963 marked the 100th anniversary of Jacob Grimm’s death”, says the now 90-year-old Traxler, who lives in Frankfurt, Germany. “So it was natural to dig into [the] Brothers Grimm treasure chest of fairy tales, and pick their most famous one, ‘Hansel and Gretel.’”
“An Inside Look at the Surprisingly Violent Quidditch World Cup”, (2012-05-04):
The Quidditch World Cup sounds dorky, and make no mistake: it is. But these sorcery-loving Harry Potter fans play pretty rough, as Eric Hansen found out when he captained a bad-news team of ex-athletes, ultimate Frisbee studs, slobs, drunks, and some people he knows from Iceland. Brooms up, and may the best Muggles win.
…But there were portents of violence, like when I spoke to a longtime player who gave me strange-sounding advice that I relayed to the team. “‘Hide your girls?’” Josh kept asking. “What does that even mean?”
…“Drepa, drepa, drekka blód!” we shouted, thinking then that “Kill, kill, drink blood” was the height of irony.
…A goalie—the keeper—guards his team’s hula-hoops, usually by swatting the quaffle out of the air with his hand. Or so we thought…I try, but he barges past with the flailing arms and unblinking eyes of a proper Potter psycho. For reasons unknown, just shy of our goal the bastard chooses to ignore the hoops and instead clobbers my wife, Hrund, who isn’t even in the game.
I see the whole episode from just inches away, a dirty lock of his hair waving in my face as I sprint behind him. One moment she’s relaxing on the sideline, looking away, not even holding a broom. The next, this freak lowers his non-broom-carrying shoulder and blasts her in the sternum. The impact sends her flying through the dusky air, nearly completing a full back layout before landing on her head.
…I didn’t catch a whiff of the terrifying stench of Quid Kid hostility until I ambled out into the parking lot at the south gate and ended up chatting with a tired ambulance driver who was having a smoke. He was one of 30 EMTs posted at the event. “Easy duty”, I said. “This is just the quiet before another storm”, he corrected. “I’ve had eight concussions, two people taken to the hospital, bloody noses, scrapes, twisted ankles. I stopped counting injuries after 10.” My teammates weren’t as surprised by these stats as I expected. One recalled stopping a young female chaser just short of the goal, only to have the girl yell an extremely unprintable comment. Another teammate recalled watching a man in Division 1 lift a girl, spin her like the blades of a helicopter, and throw her to the dirt. The violence was not only pervasive but gender neutral. Hide your girls, indeed.