September 2019 gwern.net newsletter with 2 behavioral genetics analyses and links on AI text generation, Registered Reports, political polarization, and history of technology; 4 movie reviews.
2019-08-03–2021-01-04
finished
certainty: log
importance: 0
This is the September 2019 edition of the Gwern.net newsletter; previous, August 2019 (archives). This is a summary of the revision-history RSS feed, overlapping with my Changelog & /r/gwern.
Writings
- Sperm selection as minor enhancement to embryo selection
- On Selective Emigration and Personality Trait Change in Scandinavia
- Gwern.net: added Bitcoin support to Inflation.hs; CSS optimizations to cut mobile load time by half; full-width image support
Media
Links
Genetics:
Everything Is Heritable:
- “Genetic ‘General Intelligence’, Objectively Determined and Measured”, de la Fuente et al 2019 (the UKBB cognitive measures properly analyzed using genetic correlations, allowing better prediction of the genetic g ‘generalist genes’ and also test-specific cognitive performances)
- “Social and non-social autism symptoms and trait domains are genetically dissociable”, Warrier et al 2019 (more evidence that embryo selection for intelligence would not also lead to greater social dysfunctionality/autism)
- “The Genetics of Human Skin and Hair Pigmentation”, Pavan & Sturm 2019
- “Extreme inbreeding in a European ancestry sample from the contemporary UK population”, Yengo et al 2019 (>10% ROH → -0.3–0.7SD on 7 traits like height/intelligence)
- UKBB announces WGS of all n = 500,000 participants
- “A Prospective Analysis of Genetic Variants Associated with Human Lifespan”, Wright et al 2019
Engineering:
- “Controlled modelling of human epiblast and amnion development using stem cells”, Zheng et al 2019 (creation of quasi-embryos from stem cells; towards IES; media: TR, Nature)
- “The Perfect Milk Machine: How Big Data Transformed the Dairy Industry” (for estimates of limits to selection/how valuable the perfect cow would be, see Cole & VanRaden 2011)
- “Genetics-based methods for agricultural insect pest management”, Alphey & Bonsall 2019
AI:
- “Fine-Tuning Language Models from Human Preferences”, Ziegler et al 2019 (blog; source; training better text generation using human ratings of quality)
Matters Of Scale:
- “Megatron-LM: Training Multi-Billion Parameter Language Models Using GPU Model Parallelism”, Shoeybi et al 2019 (text samples; earlier blog)
- “Exascale Deep Learning for Scientific Inverse Problems”, Laanait et al 2019
- “Large-scale Pretraining for Neural Machine Translation with Tens of Billions of Sentence Pairs”, Meng et al 2019
- “Learning to Seek: Autonomous Source Seeking with Deep Reinforcement Learning Onboard a Nano Drone Microcontroller”, Duisterhof et al 2019 (reminds me of Bonsai)
Statistics/Meta-Science:
- “Registered reports: an early example and analysis”, Wiseman et al 2019 (parapsychology was the first field to use Registered Reports in 1976, which cut statistical-significance rates by 2/3rds: Johnson 1975a, Johnson 1975b, Johnson 1976)
- “Methods for Studying Coincidences”, Diaconis & Mosteller 1989
- “The Power of Two Random Choices: A Survey of Techniques and Results”, Mitzenmacher et al 2001
- “Sparkline theory and practice”, Edward Tufte (examples & questions)
Politics/Religion:
- “Constitutional Hardball”, Tushnet 2004
- “Anthropology’s Science Wars: Insights from a New Survey”, Horowitz et al 2019 (visualizations)
- “Leninthink: On the practice behind the theory of Marxism-Leninism”
- “Dear Young Eccentric”
Psychology/Biology:
- “Effect of Lower Versus Higher Red Meat Intake on Cardiometabolic and Cancer Outcomes: A Systematic Review of Randomized Trials”, Zeraatkar et al 2019 (“diets lower in red meat may have little or no effect on all-cause mortality (HR 0.99 [95% CI, 0.95 to 1.03])”; on the accusations of conflicts of interest)
- “Reading Lies: Nonverbal Communication and Deception”, Vrij et al 2019
- “Is Cognitive Functioning Impaired in Methamphetamine Users? A Critical Review”, Hart et al 2012
- “Dumb or smart asses? Donkey’s (Equus asinus) cognitive capabilities share the heritability and variation patterns of human’s (Homo sapiens) cognitive capabilities”, González et al 2019
- “Nature’s Spoils: The underground food movement ferments revolution”
Technology:
- “STEPS Toward Expressive Programming Systems: ‘A Science Experiment’”, Ohshima et al 2012 (writing a GUI OS in 20k LoC; tricks include ASCII art networking DSLs & generic optimization for text layout)
- “Secrets by the thousands”, Walker 1946 (more on the post-WWII brain drain from Germany/Europe to the USA)
- “Secrets of the Little Blue Box”, Rosenbaum 1971; “Terminal Delinquents: Once, They Stole Hubcaps And Shot Out Street-Lights. Now They’re Stealing Your Social Security Number And Shooting Out Your Credit Rating. A Layman’s Guide To Computer High Jinks”, Hitt & Tough 1990 (I feel this article must have been a reference for Hackers)
- “The Radioactive Boy Scout: When a teenager attempts to build a breeder reactor”, Silverstein 1998
- “Fancy Euclid’s Elements in TeX”, Slyusarev Sergey; “Making of Byrne’s Euclid”, Nicholas Rougeux (how to create a beautiful interactive HTML version of Byrne’s unique color-diagram book interpretation of Euclid’s Elements (Tufte discussion); you shouldn’t do this sort of thing for everything, but maybe you should try to do it for something)
- “SpeechJammer: A System Utilizing Artificial Speech Disturbance with Delayed Auditory Feedback”, Kurihara & Tsukada 2012
- “Mark of Integrity”, Jonathan Allen (on juiced cards, luminous readers, sunning the deck, and other card sharpers’ tricks for card marking)
Economics:
- “Managing an iconic old luxury brand in a new luxury economy: Hermès handbags in the US market”, Lewis & Haas 2014 (Summary; see also “Birkin Demand: A Sage & Stylish Investment”, Newsom 2016)
Fiction:
- Bakker’s Second Apocalypse & Frank Herbert’s Dune: time loops & finding freedom in an unfree universe (compare Ted Chiang’s “Story of Your Life”)
Misc:
Film/TV
Live-action:
- Falling Down (1993)
Animated:
Music
- “ann~庵~” (茶太 feat. Chata; 『spill over』 {C95}) [rock]
- “cardiac sound” (Kirin; Cleave Spark the Instrumental {C95}) [trance]
MLP:
- “Getting Stronger” (Michelle Creber, Black Gryph0n, Baasik; Getting Stronger {2016}) [rock]
- “Landscapes” (Mane in Green {2016}) [electronic]
- “Moonlight” (Black Gryph0n & Baasik; IMmortal {2014}) [trance]
- “Faster Than You Know” (Black Gryph0n & Baasik; IMmortal {2014}) [pop]
Doujin:
- “Natsukage -endless summer lights-” (Falmaki; Natsukage -endless summer lights- {C94}) [post-rock]
Link Bibliography
Bibliography of page links in reading order (with annotations when available):
“August 2019 News”, (2019-07-21):
August 2019 gwern.net newsletter with 2 short essays, a major new site feature, and links on AI, progress, and technology; 1 short book review, and 2 long movie reviews on ‘Gone with the Wind’ and ‘Shin Godzilla’.
“Gwern.net newsletter archives”, (2013-12-01):
Newsletter tag: archive of all issues back to 2013 for the gwern.net newsletter (monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews.)
“Changelog”, (2013-09-15):
This page is a changelog for Gwern.net: a monthly reverse chronological list of recent major writings/changes/additions. Following my writing can be a little difficult because it is often so incremental. So every month, in addition to my regular /r/gwern subreddit submissions, I write up reasonably-interesting changes and send it out to the mailing list in addition to a compilation of links & reviews (archives).
“/r/gwern subreddit”, (2018-10-01):
A subreddit for posting links of interest and also for announcing updates to gwern.net (which can be used as an RSS feed). Submissions are categorized similar to the monthly newsletter and typically will be collated there.
“Sperm Phenotype Selection”, (2019-08-17):
Sperm can be selected on traits such as motility, which are measures of quality. These may be correlated with genetics for adult traits, and one can select from billions of sperm. Estimating the gain, it is probably worthwhile but small.
A possible adjunct to embryo selection is sperm selection. Non-destructive sequencing is not yet possible, but measuring phenotypic correlates of genetic quality (such as sperm speed/motility) is. These correlations of sperm quality/genetic quality are, however, small and confounded in current studies by between-individual variation. Optimistically, the gain from such sperm selection is probably small, <0.1SD, and there do not appear to be any easy ways to boost this effect. Sperm selection is probably cost-effective and a good enhancement of existing IVF practices, but not particularly notable.
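To make the order of magnitude concrete, here is a minimal simulation sketch of selection on a noisy correlate, not the essay’s actual model: the correlation r between the measured sperm phenotype and the sperm’s genetic score, and the number of sperm measured n, are illustrative assumptions.

```python
# Minimal sketch of sperm selection as truncation on a noisy correlate.
# Assumptions (illustrative, not the essay's fitted values): r = correlation between
# the measured sperm phenotype and the sperm's genetic score; n = sperm actually measured.
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(r, n=100, trials=20_000):
    """Mean genetic score (in within-father SD units) of the sperm with the best
    measured phenotype, averaged over simulated selection rounds."""
    gains = np.empty(trials)
    for t in range(trials):
        g = rng.standard_normal(n)                                # latent genetic quality per sperm
        p = r * g + np.sqrt(1 - r**2) * rng.standard_normal(n)    # noisy measured phenotype
        gains[t] = g[np.argmax(p)]                                # select the best-looking sperm
    return gains.mean()

for r in (0.05, 0.1, 0.2):
    print(f"r = {r}: expected gain ≈ {expected_gain(r):.2f} within-father genetic SDs")
```

Even with hundreds of sperm measured, a weak correlate yields only a fraction of a within-father SD, and converting that into offspring-trait SDs (the scaling done in the essay) shrinks it further.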
“Selective Emigration and Personality Trait Change”, (2019-09-03):
Knudsen 2019 finds that the emigration of 25% of the Scandinavian population to the USA 1850–1920 was driven in part by more ‘individualistic’ personality factors among emigrants, leading to permanent decreases in mean ‘individualism’ in the home countries. This is attributed to cultural factors, rather than genetics. I model the overall migration as a simple truncation selection scenario, and find that in a simple model under reasonable assumptions, the entire effect could be genetic.
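For intuition, a minimal sketch of the truncation-selection arithmetic: the 25% emigrating fraction is taken from the summary above, but treating emigration as clean truncation on the trait and the heritability value used are illustrative assumptions, not the essay’s exact model.

```python
# Truncation-selection sketch: the top fraction p of the population (on an
# 'individualism' trait standardized to mean 0, SD 1) emigrates; what happens
# to the mean of those who stay? p matches the summary above; h2 is assumed.
from scipy.stats import norm

p  = 0.25   # fraction emigrating (top of the trait distribution)
h2 = 0.4    # assumed narrow-sense heritability of the trait

z = norm.ppf(1 - p)                              # truncation threshold in SD units
mean_emigrants = norm.pdf(z) / p                 # mean trait of emigrants (selection intensity)
mean_stayers   = -p * mean_emigrants / (1 - p)   # population mean is 0, so stayers shift down

print(f"emigrant mean: +{mean_emigrants:.2f} SD")
print(f"stayer phenotypic mean: {mean_stayers:.2f} SD")
print(f"stayer genetic shift (breeder's equation, h2={h2}): {h2 * mean_stayers:.2f} SD")
```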
“Genetic “General Intelligence,” Objectively Determined and Measured”, (2019-09-12):
It has been known for 115 years that, in humans, diverse cognitive traits are positively intercorrelated; this forms the basis for the general factor of intelligence (g). We directly test for a genetic basis for g using data from seven different cognitive tests (n = 11,263 to N = 331,679) and genome-wide autosomal single nucleotide polymorphisms. A genetic g factor accounts for 58.4% (SE = 4.8%) of the genetic variance in the cognitive traits, with trait-specific genetic factors accounting for the remaining 41.6%. We distill genetic loci broadly relevant for many cognitive traits (g) from loci associated with only individual cognitive traits. These results elucidate the etiological basis for a long-known yet poorly-understood phenomenon, revealing a fundamental dimension of genetic sharing across diverse cognitive traits.
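As a toy illustration of the kind of variance decomposition being reported, rather than Genomic SEM run on real GWAS summary statistics, one can simulate trait-level genetic values from a single common factor plus trait-specific factors; the loadings below are arbitrary.

```python
# Toy one-common-factor model of genetic values for 7 cognitive traits:
# g_i = loading_i * G + sqrt(1 - loading_i^2) * S_i, all variances standardized to 1,
# so the true share of genetic variance due to the common factor is mean(loading^2).
# The first-eigenvalue share of the resulting correlation matrix is printed as a rough
# (upward-biased) empirical proxy for that share.
import numpy as np

rng = np.random.default_rng(1)
loadings = np.array([0.9, 0.8, 0.8, 0.75, 0.7, 0.7, 0.6])   # arbitrary illustrative loadings
n = 200_000

G = rng.standard_normal(n)                                   # common genetic factor
S = rng.standard_normal((n, len(loadings)))                  # trait-specific genetic factors
genetic_values = G[:, None] * loadings + S * np.sqrt(1 - loadings**2)

R = np.corrcoef(genetic_values, rowvar=False)                # simulated 'genetic correlation' matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(f"true share of genetic variance from g: {np.mean(loadings**2):.3f}")
print(f"first-eigenvalue share of R:           {eigvals[0] / eigvals.sum():.3f}")
```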
“Social and non-social autism symptoms and trait domains are genetically dissociable”, (2019-09-03):
The core diagnostic criteria for autism comprise two symptom domains – social and communication difficulties, and unusually repetitive and restricted behaviour, interests and activities. There is some evidence to suggest that these two domains are dissociable, though this hypothesis has not yet been tested using molecular genetics. We test this using a genome-wide association study (N=51,564) of a non-social trait related to autism, systemising, defined as the drive to analyse and build systems. We demonstrate that systemising is heritable and genetically correlated with autism. In contrast, we do not identify significant genetic correlations between social autistic traits and systemising. Supporting this, polygenic scores for systemising are significantly and positively associated with restricted and repetitive behaviour but not with social difficulties in autistic individuals. These findings strongly suggest that the two core domains of autism are genetically dissociable, and point at how to fractionate the genetics of autism.
“The Genetics of Human Skin and Hair Pigmentation”, (2019):
Human skin and hair color are visible traits that can vary dramatically within and across ethnic populations. The genetic makeup of these traits—including polymorphisms in the enzymes and signaling proteins involved in melanogenesis, and the vital role of ion transport mechanisms operating during the maturation and distribution of the melanosome—has provided new insights into the regulation of pigmentation. A large number of novel loci involved in the process have been recently discovered through four large-scale genome-wide association studies in Europeans, two large genetic studies of skin color in Africans, one study in Latin Americans, and functional testing in animal models. The responsible polymorphisms within these pigmentation genes appear at different population frequencies, can be used as ancestry-informative markers, and provide insight into the evolutionary selective forces that have acted to create this human diversity.
“Extreme inbreeding in a European ancestry sample from the contemporary UK population”, (2019-09-03):
In most human societies, there are taboos and laws banning mating between first-degree and second-degree relatives, but actual prevalence and effects on health and fitness are poorly quantified. Here, we leverage a large observational study of ~450,000 participants of European ancestry from the UK Biobank (UKB) to quantify extreme inbreeding (EI) and its consequences. We use genotyped SNPs to detect large runs of homozygosity (ROH) and call EI when >10% of an individual’s genome comprise ROHs. We estimate a prevalence of EI of ~0.03%. EI cases have phenotypic means between 0.3 and 0.7 standard deviation below the population mean for 7 traits, including stature and cognitive ability, consistent with inbreeding depression estimated from individuals with low levels of inbreeding. Our study provides DNA-based quantification of the prevalence of EI in a European ancestry sample from the UK and measures its effects on health and fitness traits.
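A minimal sketch of the ROH-calling idea in the abstract; real pipelines (e.g. PLINK-style scans) work on physical length in megabases and tolerate occasional heterozygous genotyping errors, and the run-length threshold below is an arbitrary assumption.

```python
# Sketch of run-of-homozygosity (ROH) detection on a 0/1/2-coded genotype vector:
# a SNP is homozygous if the genotype is 0 or 2; find runs of consecutive homozygous
# SNPs longer than `min_run` and compute the fraction of SNPs covered by such runs.
# Calling "extreme inbreeding" at >10% follows the abstract; min_run is an assumption.
import numpy as np

def roh_fraction(genotypes, min_run=100):
    homozygous = (genotypes == 0) | (genotypes == 2)
    covered = 0
    run = 0
    for h in homozygous:
        if h:
            run += 1
        else:
            if run >= min_run:
                covered += run
            run = 0
    if run >= min_run:
        covered += run
    return covered / len(genotypes)

rng = np.random.default_rng(2)
genotypes = rng.choice([0, 1, 2], size=50_000, p=[0.25, 0.5, 0.25])  # toy outbred genome
genotypes[10_000:18_000] = rng.choice([0, 2], size=8_000)            # inject a long ROH
frac = roh_fraction(genotypes)
print(f"ROH fraction: {frac:.1%}; extreme inbreeding: {frac > 0.10}")
```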
“UK Biobank leads the way in genetics research to tackle chronic diseases”, (2019-09-11):
A £200 million investment from government, industry and charity cements UK Biobank’s reputation as a world-leading health resource to tackle the widest range of common and chronic diseases—including dementia, mental illness, cancer and heart disease. The investment provides for the whole genome sequencing of 450,000 UK Biobank participants. A Vanguard study, funded by the Medical Research Council to sequence the first 50,000 individuals, is already underway.
…The ambitious project is funded with:
- £50 million by the UK Government’s research and innovation agency, UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund;
- £50 million from The Wellcome Trust charity;
- £100 million in total from pharmaceutical companies Amgen, AstraZeneca, GlaxoSmithKline (GSK) and Johnson & Johnson (J&J).
…At the end of May 2020, the consortium of pharmaceutical companies will be provided independently with access for analysis to the first tranche of sequence data (anticipated to be for about 125,000 participants) linked to all of the other data in the UK Biobank resource. After an exclusive access period of 9 months, the whole genome sequence data will be made available to all other approved researchers around the world. A similar exclusive access period will also apply on the completion of the sequencing. The period of exclusive access mirrors the arrangements that UK Biobank had with the exome sequencing project which is being undertaken by Regeneron in the US and other industry partners. The first tranche of exome data on 50,000 participants is now being used in more than 100 research projects worldwide.
“A Prospective Analysis of Genetic Variants Associated with Human Lifespan”, (2019-09):
We present a massive investigation into the genetic basis of human lifespan. Beginning with a genome-wide association (GWA) study using a de-identified snapshot of the unique AncestryDNA database—more than 300,000 genotyped individuals linked to pedigrees of over 400,000,000 people—we mapped six genome-wide significant loci associated with parental lifespan. We compared these results to a GWA analysis of the traditional lifespan proxy trait, age, and found only one locus, APOE, to be associated with both age and lifespan. By combining the AncestryDNA results with those of an independent UK Biobank dataset, we conducted a meta-analysis of more than 650,000 individuals and identified fifteen parental lifespan-associated loci. Beyond just those significant loci, our genome-wide set of polymorphisms accounts for up to 8% of the variance in human lifespan; this value represents a large fraction of the heritability estimated from phenotypic correlations between relatives.
“Controlled modelling of human epiblast and amnion development using stem cells”, (2019-09-11):
Early human embryonic development involves extensive lineage diversification, cell-fate specification and tissue patterning. Despite its basic and clinical importance, early human embryonic development remains relatively unexplained owing to interspecies divergence and limited accessibility to human embryo samples. Here we report that human pluripotent stem cells (hPSCs) in a microfluidic device recapitulate, in a highly controllable and scalable fashion, landmarks of the development of the epiblast and amniotic ectoderm parts of the conceptus, including lumenogenesis of the epiblast and the resultant pro-amniotic cavity, formation of a bipolar embryonic sac, and specification of primordial germ cells and primitive streak cells. We further show that amniotic ectoderm-like cells function as a signalling centre to trigger the onset of gastrulation-like events in hPSCs. Given its controllability and scalability, the microfluidic model provides a powerful experimental system to advance knowledge of human embryology and reproduction. This model could assist in the rational design of differentiation protocols of hPSCs for disease modelling and cell therapy, and in high-throughput drug and toxicity screens to prevent pregnancy failure and birth defects.
“The Perfect Milk Machine: How Big Data Transformed the Dairy Industry”, (2012):
…Already, Badger-Bluff Fanny Freddie has 346 daughters who are on the books and thousands more that will be added to his progeny count when they start producing milk. This is quite a career for a young animal: He was only born in 2004.
There is a reason, of course, that the semen that Badger-Bluff Fanny Freddie produces has become such a hot commodity in what one artificial-insemination company calls “today’s fast paced cattle semen market.” In January of 2009, before he had a single daughter producing milk, the United States Department of Agriculture took a look at his lineage and more than 50,000 markers on his genome and declared him the best bull in the land. And, three years and 346 milk-providing and data-providing daughters later, it turns out that they were right. “When Freddie [as he is known] had no daughter records our equations predicted from his DNA that he would be the best bull,” USDA research geneticist Paul VanRaden emailed me with a detectable hint of pride. “Now he is the best progeny tested bull (as predicted).”
Data-driven predictions are responsible for a massive transformation of America’s dairy cows. While other industries are just catching on to this whole “big data” thing, the animal sciences—and dairy breeding in particular—have been using large amounts of data since long before VanRaden was calculating the outsized genetic impact of the most sought-after bulls with a pencil and paper in the 1980s. Dairy breeding is perfect for quantitative analysis. Pedigree records have been assiduously kept; relatively easy artificial insemination has helped centralized genetic information in a small number of key bulls since the 1960s; there are a relatively small and easily measurable number of traits—milk production, fat in the milk, protein in the milk, longevity, udder quality—that breeders want to optimize; each cow works for three or four years, which means that farmers invest thousands of dollars into each animal, so it’s worth it to get the best semen money can buy. The economics push breeders to use the genetics.
The bull market (heh) can be reduced to one key statistic, lifetime net merit, though there are many nuances that the single number cannot capture. Net merit denotes the likely additive value of a bull’s genetics. The number is actually denominated in dollars because it is an estimate of how much a bull’s genetic material will likely improve the revenue from a given cow. A very complicated equation weights all of the factors that go into dairy breeding and—voila—you come out with this single number. For example, a bull that could help a cow make an extra 1000 pounds of milk over her lifetime only gets an increase of $1 in net merit while a bull who will help that same cow produce a pound more protein will get $3.41 more in net merit. An increase of a single month of predicted productive life yields $35 more.
…In 1942, when my father was born, the average dairy cow produced less than 5,000 pounds of milk in its lifetime. Now, the average cow produces over 21,000 pounds of milk. At the same time, the number of dairy cows has decreased from a high of 25 million around the end of World War II to fewer than nine million today…a mere 70 years of quantitative breeding optimized to suit corporate imperatives quadrupled what all previous civilization had accomplished.
…John Cole, yet another USDA animal improvement scientist, generated an estimate of the perfect bull by choosing the optimal observed genetic sequences and hypothetically combining them. He found that the optimal bull would have a net merit value of $7,515, which absolutely blows any current bull out of the water. In other words, we’re nowhere near creating the perfect milk machine.
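A toy version of the weighted-sum “net merit” index described above, using only the three example weights quoted in the article; the real USDA index combines many more traits and is periodically re-derived.

```python
# Toy lifetime-net-merit calculation using the three weights quoted in the article:
# +$1 per extra 1,000 lbs of lifetime milk, +$3.41 per extra lb of protein,
# +$35 per extra month of predicted productive life. The real index uses many more traits.
def net_merit(extra_milk_lbs=0, extra_protein_lbs=0, extra_productive_months=0):
    return (extra_milk_lbs / 1000 * 1.00
            + extra_protein_lbs * 3.41
            + extra_productive_months * 35.00)

# Hypothetical bull: +2,000 lbs milk, +50 lbs protein, +3 months productive life.
print(f"net merit ≈ ${net_merit(2000, 50, 3):,.2f}")   # 2 + 170.50 + 105 = $277.50
```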
“Use of haplotypes to estimate Mendelian sampling effects and selection limits”, (2011):
Limits to selection and Mendelian sampling (MS) terms can be calculated using haplotypes by summing the individual additive effects on each chromosome. Haplotypes were imputed for 43,382 single nucleotide polymorphisms (SNP) in 1,455 Brown Swiss, 40,351 Holstein and 4,064 Jersey bulls and cows using the Fortran program findhap.f90, which combines population and pedigree haplotyping methods. Lower and upper bounds of MS variance were calculated for daughter pregnancy rate (a measure of fertility), milk yield, lifetime net merit (a measure of profitability) and protein yield assuming either no or complete linkage among SNP on the same chromosome. Calculated selection limits were greater than the largest direct genomic values observed in all breeds studied. The best chromosomal genotypes generally consisted of two copies of the same haplotype even after adjustment for inbreeding. Selection of animals rather than chromosomes may result in slower progress, but limits may be the same because most chromosomes will become homozygous with either strategy. Selection on functions of MS could be used to change variances in later generations.
…Lifetime net merit: Lower selection limits for NM$ with no adjustment for inbreeding were $3,857 (BS), $7,515 (HO) and $4,678 (JE). Adjusted values were slightly smaller and were $3,817 (BS), $7,494 (HO) and $4,606 (JE). Upper bounds had values of $9,140 (BS), $23,588 (HO) and $11,517 (JE) and were not adjusted for inbreeding because they were calculated from individual loci rather than complete haplotypes. The largest DGV among all genotyped animals in each breed were $1,102 (BS), $2,528 (HO) and $1,556 (JE). The top active bulls (AI and foreign bulls with semen distributed in the US that are in or above the 80th percentile, based on NM) in each breed following the August 2010 genetic evaluation had GEBV (Genomic estimated breeding value) for NM$ of +$1,094 (BS: 054BS00374), +$1,588 (HO: 001HO08784) and +$1,292 (JE: 236JE00146).
…If two copies of each of the 30 best haplotypes in the US Holstein population were combined in a single animal (Lower bounds of selection limit/SLC for NM$), it would have a GEBV for NM$ of +$7,515 (Figure 5), approximately five times larger than that of the current best Holstein bull in the US, whose GEBV for NM$ is +$1,588.
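The selection-limit calculation itself is easy to sketch: for each chromosome, take two copies of the best available haplotype and sum the estimated effects. The haplotype effects below are made up for illustration; the paper uses estimated effects for tens of thousands of real haplotypes.

```python
# Sketch of the haplotype selection-limit idea: the best possible genotype carries
# two copies of the highest-value haplotype on every chromosome, so the limit is the
# sum over chromosomes of 2 * max(haplotype effect). All effects below are made up.
import numpy as np

rng = np.random.default_rng(3)
# per-chromosome estimated additive effects (in net-merit dollars) for 200 candidate haplotypes
haplotype_effects = {f"chr{c}": rng.normal(0, 30, size=200) for c in range(1, 31)}

random_animal   = sum(rng.choice(effects) + rng.choice(effects)   # two random haplotypes per chromosome
                      for effects in haplotype_effects.values())
selection_limit = sum(2 * effects.max() for effects in haplotype_effects.values())

print(f"typical (random) animal: ${random_animal:,.0f}")
print(f"selection limit (2 copies of the best haplotype everywhere): ${selection_limit:,.0f}")
```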
“Genetics-based methods for agricultural insect pest management.”, (2018):
The sterile insect technique is an area-wide pest control method that reduces agricultural pest populations by releasing mass-reared sterile insects, which then compete for mates with wild insects. Contemporary genetics-based technologies use insects that are homozygous for a repressible dominant lethal genetic construct rather than being sterilized by irradiation. Engineered strains of agricultural pest species, including moths such as the diamondback moth Plutella xylostella and fruit flies such as the Mediterranean fruit fly Ceratitis capitata, have been developed with lethality that only operates on females. Transgenic crops expressing insecticidal toxins are widely used; the economic benefits of these crops would be lost if toxin resistance spread through the pest population. The primary resistance management method is a high-dose/refuge strategy, requiring toxin-free crops as refuges near the insecticidal crops, as well as toxin doses sufficiently high to kill wild-type insects and insects heterozygous for a resistance allele. Mass-release of toxin-sensitive engineered males (carrying female-lethal genes), as well as suppressing populations, could substantially delay or reverse the spread of resistance. These transgenic insect technologies could form an effective resistance management strategy. We outline some policy considerations for taking genetic insect control systems through to field implementation.
“AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence”, (2019-05-27):
Perhaps the most ambitious scientific quest in human history is the creation of general artificial intelligence, which roughly means AI that is as smart or smarter than humans. The dominant approach in the machine learning community is to attempt to discover each of the pieces required for intelligence, with the implicit assumption that some future group will complete the Herculean task of figuring out how to combine all of those pieces into a complex thinking machine. I call this the "manual AI approach". This paper describes another exciting path that ultimately may be more successful at producing general AI. It is based on the clear trend in machine learning that hand-designed solutions eventually are replaced by more effective, learned solutions. The idea is to create an AI-generating algorithm (AI-GA), which automatically learns how to produce general AI. Three Pillars are essential for the approach: (1) meta-learning architectures, (2) meta-learning the learning algorithms themselves, and (3) generating effective learning environments. I argue that either approach could produce general AI first, and both are scientifically worthwhile irrespective of which is the fastest path. Because both are promising, yet the ML community is currently committed to the manual approach, I argue that our community should increase its research investment in the AI-GA approach. To encourage such research, I describe promising work in each of the Three Pillars. I also discuss AI-GA-specific safety and ethical considerations. Because it may be the fastest path to general AI and because it is inherently scientifically interesting to understand the conditions in which a simple algorithm can produce general AI (as happened on Earth where Darwinian evolution produced human intelligence), I argue that the pursuit of AI-GAs should be considered a new grand challenge of computer science research.
“Fine-Tuning Language Models from Human Preferences”, (2019-09-18):
Reward learning enables the application of reinforcement learning (RL) to tasks where reward is defined by human judgment, building a model of reward by asking humans questions. Most work on reward learning has used simulated environments, but complex information about values is often expressed in natural language, and we believe reward learning for language is a key to making RL practical and safe for real-world tasks. In this paper, we build on advances in generative pretraining of language models to apply reward learning to four natural language tasks: continuing text with positive sentiment or physically descriptive language, and summarization tasks on the TL;DR and CNN/Daily Mail datasets. For stylistic continuation we achieve good results with only 5,000 comparisons evaluated by humans. For summarization, models trained with 60,000 comparisons copy whole sentences from the input but skip irrelevant preamble; this leads to reasonable ROUGE scores and very good performance according to our human labelers, but may be exploiting the fact that labelers rely on simple heuristics.
“Fine-Tuning GPT-2 from Human Preferences”, (2019-09-19):
We’ve fine-tuned the 774M parameter GPT-2 language model using human feedback for various tasks, successfully matching the preferences of the external human labelers, though those preferences did not always match our own. Specifically, for summarization tasks the labelers preferred sentences copied wholesale from the input (we’d only asked them to ensure accuracy), so our models learned to copy. Summarization required 60k human labels; simpler tasks which continue text in various styles required only 5k. Our motivation is to move safety techniques closer to the general task of “machines talking to humans,” which we believe is key to extracting information about human values.
This work applies human preference learning to several natural language tasks: continuing text with positive sentiment or physically descriptive language using the BookCorpus, and summarizing content from the TL;DR and CNN/Daily Mail datasets. Each of these tasks can be viewed as a text completion problem: starting with some text X, we ask what text Y should follow. [For summarization, the text is the article plus the string “TL;DR:”.]
We start with a pretrained language model (the 774M parameter version of GPT-2) and fine-tune the model by asking human labelers which of four samples is best. Fine-tuning for the stylistic continuation tasks is sample efficient: 5,000 human samples suffice for strong performance according to humans. For summarization, models trained with 60,000 comparisons learn to copy whole sentences from the input while skipping irrelevant preamble; this copying is an easy way to ensure accurate summaries, but may exploit the fact that labelers rely on simple heuristics.
Bugs can optimize for bad behavior
One of our code refactors introduced a bug which flipped the sign of the reward. Flipping the reward would usually produce incoherent text, but the same bug also flipped the sign of the KL penalty. The result was a model which optimized for negative sentiment while preserving natural language. Since our instructions told humans to give very low ratings to continuations with sexually explicit text, the model quickly learned to output only content of this form. This bug was remarkable since the result was not gibberish but maximally bad output. The authors were asleep during the training process, so the problem was noticed only once training had finished. A mechanism such as Toyota’s Andon cord could have prevented this, by allowing any labeler to stop a problematic training process.
Looking forward
We’ve demonstrated reward learning from human preferences on two kinds of natural language tasks, stylistic continuation and summarization. Our results are mixed: for continuation we achieve good results with very few samples, but our summarization models are only “smart copiers”: they copy from the input text but skip over irrelevant preamble. The advantage of smart copying is truthfulness: the zero-shot and supervised models produce natural, plausible-looking summaries that are often lies. We believe the limiting factor in our experiments is data quality exacerbated by the online data collection setting, and plan to use batched data collection in the future.
We believe the application of reward learning to language is important both from a capability and safety perspective. On the capability side, reinforcement learning lets us correct mistakes that supervised learning would not catch, but RL with programmatic reward functions “can be detrimental to model quality.” On the safety side, reward learning for language allows important criteria like “don’t lie” to be represented during training, and is a step towards scalable safety methods such as debate and amplification.
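The reward-learning step at the core of this approach can be sketched in a few lines: fit a reward model so that the human-preferred sample of each pair scores higher (a Bradley-Terry/logistic loss on score differences). The toy version below uses random feature vectors in place of GPT-2 activations and a linear reward model, so it only illustrates the objective, not the paper’s implementation.

```python
# Toy reward-model training from pairwise human preferences (Bradley-Terry loss):
# maximize log sigmoid(r(preferred) - r(rejected)) over a linear reward r(x) = w·x.
# Random feature vectors stand in for the language-model representations used in the paper.
import numpy as np

rng = np.random.default_rng(4)
dim, n_pairs = 32, 5_000
w_true = rng.standard_normal(dim)                  # "true" human preference direction (toy)

a = rng.standard_normal((n_pairs, dim))            # candidate continuation A (features)
b = rng.standard_normal((n_pairs, dim))            # candidate continuation B (features)
# simulate noisy human labels: prefer A with probability sigmoid(r_true(A) - r_true(B))
prob_a = 1 / (1 + np.exp(-(a - b) @ w_true))
prefer_a = rng.random(n_pairs) < prob_a

x_pref = np.where(prefer_a[:, None], a, b)         # preferred sample of each pair
x_rej  = np.where(prefer_a[:, None], b, a)         # rejected sample

w = np.zeros(dim)
lr = 0.1
for step in range(500):                            # gradient ascent on the pairwise log-likelihood
    margin = (x_pref - x_rej) @ w
    grad = ((1 - 1 / (1 + np.exp(-margin)))[:, None] * (x_pref - x_rej)).mean(axis=0)
    w += lr * grad

cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
print(f"cosine similarity between learned and true reward direction: {cos:.2f}")
# In the paper, the learned reward is then maximized by the policy with a KL penalty
# toward the original language model, keeping samples fluent and limiting reward hacking.
```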
“lm-human-preferences”, (2019-09-14):
Code for the paper ‘Fine-Tuning Language Models from Human Preferences’. Status: Archive (code is provided as-is, no updates expected). We provide code for:
- Training reward models from human labels
- Fine-tuning language models using those reward models
It does not contain code for generating labels. However, we have released human labels collected for our experiments, at gs://lm-human-preferences/labels. For those interested, the question and label schemas are simple and documented in label_types.py. The code has only been tested using the smallest GPT-2 model (124M parameters). This code has only been tested using Python 3.7.3. Training has been tested on GCE machines with 8 V100s, running Ubuntu 16.04, but development also works on Mac OS X.
“Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism”, (2019-09-17):
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in Natural Language Processing applications. However, very large models can be quite difficult to train due to memory constraints. In this work, we present our techniques for training very large Transformer models and implement a simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complementary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging Transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter Transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy of 89.4%).
“MegatronLM: Training Billion+ Parameter Language Models Using GPU Model Parallelism”, (2019-08-13):
Larger language models are dramatically more useful for NLP tasks such as article completion, question answering, and dialog systems. Training the largest neural language model has recently been the best way to advance the state of the art in NLP applications. Two recent papers, BERT and GPT-2, demonstrate the benefits of large scale language modeling. Both papers leverage advances in compute and available text corpora to significantly surpass state of the art performance in natural language understanding, modeling, and generation. Training these models requires hundreds of exaflops of compute and clever memory management to trade recomputation for a reduced memory footprint. However, for very large models beyond a billion parameters, the memory on a single GPU is not enough to fit the model along with the parameters needed for training, requiring model parallelism to split the parameters across multiple GPUs. Several approaches to model parallelism exist, but they are difficult to use, either because they rely on custom compilers, or because they scale poorly or require changes to the optimizer.
In this work, we implement a simple and efficient model parallel approach by making only a few targeted modifications to existing PyTorch transformer implementations. Our code is written in native Python, leverages mixed precision training, and utilizes the NCCL library for communication between GPUs. We showcase this approach by training an 8.3 billion parameter transformer language model with 8-way model parallelism and 64-way data parallelism on 512 GPUs, making it the largest transformer based language model ever trained at 24× the size of BERT and 5.6× the size of GPT-2. We have published the code that implements this approach at our GitHub repository.
Our experiments are conducted on NVIDIA’s DGX SuperPOD. Without model parallelism, we can fit a baseline model of 1.2B parameters on a single V100 32GB GPU, and sustain 39 TeraFLOPS during the overall training process, which is 30% of the theoretical peak FLOPS for a single GPU in a DGX2-H server. Scaling the model to 8.3 billion parameters on 512 GPUs with 8-way model parallelism, we achieved up to 15.1 PetaFLOPS sustained performance over the entire application and reached 76% scaling efficiency compared to the single GPU case.
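The intra-layer model parallelism described above can be illustrated with a toy numpy sketch, with arrays standing in for per-GPU shards rather than NVIDIA’s actual implementation: the first MLP weight matrix is split by columns (each “GPU” computes an independent slice of the hidden layer) and the second by rows, so a single summation (the all-reduce) recovers the exact serial result.

```python
# Sketch of Megatron-style tensor parallelism for a transformer MLP block Y = GeLU(X A) B.
# A is split column-wise across "GPUs" (independent hidden slices, no communication);
# B is split row-wise, so each GPU produces a partial output and one all-reduce
# (here: a plain sum) recovers the full result. Arrays stand in for per-device shards.
import numpy as np

def gelu(x):                       # tanh approximation of GeLU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(5)
batch, d_model, d_hidden, n_gpus = 4, 8, 32, 2
X = rng.standard_normal((batch, d_model))
A = rng.standard_normal((d_model, d_hidden))
B = rng.standard_normal((d_hidden, d_model))

Y_serial = gelu(X @ A) @ B                 # serial reference

A_shards = np.split(A, n_gpus, axis=1)     # each "GPU" holds d_hidden/n_gpus columns of A
B_shards = np.split(B, n_gpus, axis=0)     # and the matching rows of B
partials = [gelu(X @ A_i) @ B_i for A_i, B_i in zip(A_shards, B_shards)]
Y_parallel = sum(partials)                 # the all-reduce step

print("max abs difference:", np.abs(Y_serial - Y_parallel).max())   # ~1e-15
```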
“Exascale Deep Learning for Scientific Inverse Problems”, (2019-09-24):
We introduce novel communication strategies in synchronous distributed Deep Learning consisting of decentralized gradient reduction orchestration and computational graph-aware grouping of gradient tensors. These new techniques produce an optimal overlap between computation and communication and result in near-linear scaling (0.93) of distributed training up to 27,600 NVIDIA V100 GPUs on the Summit Supercomputer. We demonstrate our gradient reduction techniques in the context of training a Fully Convolutional Neural Network to approximate the solution of a longstanding scientific inverse problem in materials imaging. The efficient distributed training on a dataset size of 0.5 PB, produces a model capable of an atomically-accurate reconstruction of materials, and in the process reaching a peak performance of 2.15(4) EFLOPS.
“Large-scale Pretraining for Neural Machine Translation with Tens of Billions of Sentence Pairs”, (2019-09-26):
In this paper, we investigate the problem of training neural machine translation (NMT) systems with a dataset of more than 40 billion bilingual sentence pairs, which is larger than the largest dataset to date by orders of magnitude. Unprecedented challenges emerge in this situation compared to previous NMT work, including severe noise in the data and prohibitively long training time. We propose practical solutions to handle these issues and demonstrate that large-scale pretraining significantly improves NMT performance. We are able to push the BLEU score of WMT17 Chinese-English dataset to 32.3, with a significant performance boost of +3.2 over existing state-of-the-art results.
“Learning to Seek: Tiny Robot Learning (tinyRL) for Source Seeking on a Nano Quadcopter”, (2019-09-25):
We present fully autonomous source seeking onboard a highly constrained nano quadcopter, by contributing application-specific system and observation feature design to enable inference of a deep-RL policy onboard a nano quadcopter. Our deep-RL algorithm finds a high-performance solution to a challenging problem, even in presence of high noise levels and generalizes across real and simulation environments with different obstacle configurations. We verify our approach with simulation and in-field testing on a Bitcraze CrazyFlie using only the cheap and ubiquitous Cortex-M4 microcontroller unit. The results show that by end-to-end application-specific system design, our contribution consumes almost three times less additional power, as compared to a competing learning-based navigation approach onboard a nano quadcopter. Thanks to our observation space, which we carefully design within the resource constraints, our solution achieves a 94% success rate in cluttered and randomized test environments, as compared to the previously achieved 80%. We compare our exploration strategy to a simple finite state machine (FSM), geared towards efficient exploration, and demonstrate that our policy is more robust and resilient at obstacle avoidance as well as up to 70% more energy-efficient. To this end, we contribute a cheap and lightweight end-to-end tiny robot learning (tinyRL) solution, running onboard a nano quadcopter, that proves to be robust and efficient in a challenging task using limited sensory input.
“Registered reports: an early example and analysis”, (2019-01-16):
The recent ‘replication crisis’ in psychology has focused attention on ways of increasing methodological rigor within the behavioral sciences. Part of this work has involved promoting ‘Registered Reports’, wherein journals peer review papers prior to data collection and publication. Although this approach is usually seen as a relatively recent development, we note that a prototype of this publishing model was initiated in the mid-1970s by parapsychologist Martin Johnson in the European Journal of Parapsychology (EJP). A retrospective and observational comparison of Registered and non-Registered Reports published in the EJP during a seventeen-year period provides circumstantial evidence to suggest that the approach helped to reduce questionable research practices. This paper aims both to bring Johnson’s pioneering work to a wider audience, and to investigate the positive role that Registered Reports may play in helping to promote higher methodological and statistical standards.
…The final dataset contained 60 papers: 25 RRs and 35 non-RRs. The RRs described 31 experiments that tested 131 hypotheses, and the non-RRs described 60 experiments that tested 232 hypotheses.
28.4% of the statistical tests reported in non-RRs were significant (66/232: 95% CI [21.5%–36.4%]); compared to 8.4% of those in the RRs (11/131: 95% CI [4.0%–16.8%]). A simple 2 × 2 contingency analysis showed that this difference is highly statistically significant (Fisher’s exact test: p < 0.0005, Pearson chi-square=20.1, Cohen’s d = 0.48).
…Parapsychologists investigate the possible existence of phenomena that, for many, have a low a priori likelihood of being genuine (see, e.g., Wagenmakers et al., 2011). This has often resulted in their work being subjected to a considerable amount of critical attention (from both within and outwith the field) that has led to them pioneering several methodological advances prior to their use within mainstream psychology, including the development of randomisation in experimental design (Hacking, 1988), the use of blinds (Kaptchuk, 1998), explorations into randomisation and statistical inference (Fisher, 1924), advances in replication issues (Rosenthal, 1986), the need for pre-specification in meta-analysis (Akers, 1985; Milton, 1999; Kennedy, 2004), and the creation of a formal study registry (Watt, 2012; Watt & Kennedy, 2015). Johnson’s work on RRs provides another striking illustration of this principle at work.
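The headline 2 × 2 comparison quoted above is easy to reproduce from the reported counts (66/232 significant tests in non-Registered Reports vs 11/131 in Registered Reports); a quick check with scipy, treating the hypothesis tests as independent as the paper’s own analysis does:

```python
# Reproduce the 2x2 contingency comparison from the reported counts:
# non-RRs: 66 significant of 232 hypothesis tests; RRs: 11 significant of 131.
from scipy.stats import fisher_exact, chi2_contingency

table = [[66, 232 - 66],    # non-Registered Reports: significant, non-significant
         [11, 131 - 11]]    # Registered Reports

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)

print(f"non-RR significance rate: {66/232:.1%}, RR rate: {11/131:.1%}")
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.2g}")
print(f"Pearson chi-square = {chi2:.1f} (p = {p_chi2:.2g})")   # ≈20.1, matching the paper
```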
“Scholarly peer review”, (2020-12-22):
Scholarly peer review is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings or as a book. The peer review helps the publisher decide whether the work should be accepted, considered acceptable with revisions, or rejected.
“Models of Control and Control of Bias”, (1975):
The author discusses how to increase the quality and reliability of the research and reporting process in experimental parapsychology. Three levels of bias and control of bias are discussed. The levels are referred to as Model 1, Model 2 and Model 3 respectively.
- Model 1 is characterized by its very low level of intersubjective control. The reliability of the results depends to a very great extent upon the reliability of the investigator and the editor.
- Model 2 is relevant to the case when the experimenter is aware of the potential risk of making both errors of observation and recording and tries to control this bias. However, this model of control does not make allowances for the case when data are intentionally manipulated.
- Model 3 depicts a rather sophisticated system of control. One feature of this model is, that selective reporting will become harder since the editor has to make his decision as regards the acceptance or rejection of an experimental article prior to the results being obtained, and subsequently based upon the quality of the outline of the experiment. However, it should be stressed, that not even this model provides a fool-proof guarantee against deliberate fraud.
It is assumed that the models of bias and control of bias under discussion are relevant to most branches of the behavioral sciences.
“Editorial [EJP editorial on registered reports]”, (1975):
This copy represents our first 'real' issue of the European Journal of Parapsychology…As far as experimental articles are concerned, we would like to ask potential contributors to try and adhere to the publishing policy which we have outlined in the editorial of the demonstration copy, and which is also discussed at some length in the article: 'Models of Bias and Control of Bias' [Johnson 1975a], in this issue. In short we shall try to avoid selective reporting and yet at the same time we shall try to refrain from making our journal a graveyard for all those studies which did not 'turn out'. These objectives may be fulfilled by the editorial rule of basing our judgment entirely on our impressions of the quality of the design and methodology of the planned study. The acceptance or rejection of a manuscript should if possible take place prior to the carrying out and the evaluation of the results of the study.
-
…even the most proper use of statistics may lead to spurious correlations or conclusions if there are inadequacies regarding the research process itself. One of these sources of error in the research process is related to selective reporting; another to human limitations with regard to the ability to make reliable observations or evaluations. Dunnette (1) says:
The most common variant is, of course, the tendency to bury negative results. I only recently became aware of the massive size of this great graveyard for dead studies when a colleague expressed gratification that only a third of his studies ‘turned out’—as he put it. Recently, a second variant of this secret game was discovered, quite inadvertently, by Wolins 1962, when he wrote to 37 authors to ask for the raw-data on which they had based recent journal articles. Wolins found that of the 37 who replied, 21 reported their data to be either misplaced, lost, or inadvertently destroyed. Finally, after some negotiation, Wolins was able to complete 7 re-analyses on the data supplied from 5 authors. Of the 7, he found gross errors in 3—errors so great as to clearly change the outcome of the experiments already reported.
It should also be stressed that Rosenthal and others have demonstrated that experimenters tend to arrive at results found to be in full agreement with their expectancies, or with the expectancies of those within the scientific establishment in charge of the rewards. Even if some of Rosenthal’s results have been questioned [especially the ‘Pygmalion effect’] the general tendency seems to be unaffected.
I guess we can all agree upon the fact that selective reporting in studies on the reliability and validity of, for instance, a personality test is a bad thing. But what could be the reason for selective reporting? Why does a research worker manipulate his data? Is it only because the research worker has a ‘weak’ mind or does there exist some kind of ‘steering field’ that exerts such an influence that improper behavior on the part of the research worker occurs?
It seems rather reasonable to assume that the editors of professional journals or research leaders in general could exert a certain harmful influence in this connection…There is no doubt at all in my mind about the ‘filtering’ or ‘shaping’ effect an editor may exert upon the output of his journal…As I see it, the major risk of selective reporting is not primarily a statistical one, but rather the research climate which the underlying policy creates (“you are ‘good’ if you obtain supporting results; you are ‘no-good’ if you only arrive at chance results”).
…The analysis I carried out has had practical implications for the publication policy which we have stated as an ideal for our new journal: the European Journal of Parapsychology.
“Fads, fashions, and folderol in psychology”, (1966):
[Influential early critique of academic psychology: weak theories, no predictions, poor measurements, poor replicability, high levels of publication bias, non-progressive theory building, and constant churn; many of these criticisms would be taken up by the 'Minnesota school' of Bouchard/Meehl/Lykken/etc.]
Fads include brain-storming, Q technique, level of aspiration, forced choice, critical incidents, semantic differential, role playing, and need theory. Fashions include theorizing and theory building, criterion fixation, model building, null-hypothesis testing, and sensitivity training. Folderol includes tendencies to be fixated on theories, methods, and points of view, conducting "little" studies with great precision, attaching dramatic but unnecessary trappings to experiments, grantsmanship, coining new names for old concepts, fixation on methods and apparatus, etc.
“Responsibility for Raw Data”, (1962-09):
Comments on an Iowa State University graduate student's endeavor of requiring data of a particular kind in order to carry out a study for his master's thesis. This student wrote to 37 authors whose journal articles appeared in APA journals between 1959 and 1961. Of these authors, 32 replied. 21 of those reported the data misplaced, lost, or inadvertently destroyed. 2 of the remaining 11 offered their data on the conditions that they be notified of our intended use of their data, and stated that they have control of anything that we would publish involving these data. Errors were found in some of the raw data that was obtained which caused a dilemma of either reporting the errors or not. The commentator states that if it were clearly set forth by the APA that the responsibility for retaining raw data and submitting them for scrutiny upon request lies with the author, this dilemma would not exist. The commentator suggests that a possibly more effective means of controlling quality of publication would be to institute a system of quality control whereby random samples of raw data from submitted journal articles would be requested by editors and scrutinized for accuracy and the appropriateness of the analysis performed.
“Methods for Studying Coincidences”, (1989-01-01):
This article illustrates basic statistical techniques for studying coincidences. These include data-gathering methods (informal anecdotes, case studies, observational studies, and experiments) and methods of analysis (exploratory and confirmatory data analysis, special analytic techniques, and probabilistic modeling, both general and special purpose). We develop a version of the birthday problem general enough to include dependence, inhomogeneity, and almost and multiple matches. We review Fisher’s techniques for giving partial credit for close matches. We develop a model for studying coincidences involving newly learned words. Once we set aside coincidences having apparent causes, four principles account for large numbers of remaining coincidences: hidden cause; psychology, including memory and perception; multiplicity of endpoints, including the counting of “close” or nearly alike events as if they were identical; and the law of truly large numbers, which says that when enormous numbers of events and people and their interactions cumulate over time, almost any outrageous event is bound to occur. These sources account for much of the force of synchronicity. [Keywords: birthday problems, extrasensory perception, Jung, Kammerer, multiple endpoints, rare events, synchronicity]
…Because of our different reading habits, we readers are exposed to the same words at different observed rates, even when the long-run rates are the same. Some words will appear relatively early in your experience, some relatively late. More than half will appear before their expected time of appearance, probably more than 60% of them if we use the exponential model, so the appearance of new words is like a Poisson process. On the other hand, some words will take more than twice the average time to appear, about 1⁄7 of them (1⁄e²) in the exponential model. They will look rarer than they actually are. Furthermore, their average time to reappearance is less than half that of their observed first appearance, and about 10% of those that took at least twice as long as they should have to occur will appear in less than 1⁄20 of the time they originally took to appear. The model we are using supposes an exponential waiting time to first occurrence of events. The phenomenon that accounts for part of this variable behavior of the words is of course the regression effect.
…We now extend the model. Suppose that we are somewhat more complicated creatures, that we require k exposures to notice a word for the first time, and that k is itself a Poisson random variable…Then, the mean time until the word is noticed is (λ + 1)T, where T is the average time between actual occurrences of the word. The variance of the time is (2λ + 1)T². Suppose T = 1 year and λ = 4. Then, as an approximation, 5% of the words will take at least time [λ + 1 + 1.65·√(2λ + 1)]T, or about 10 years, to be detected the first time. Assume further that, now that you are sensitized, you will detect the word the next time it appears. On the average it will be a year, but about 3% of these words that were so slow to be detected the first time will appear within a month by natural variation alone. So what took 10 years to happen once happens again within a month. No wonder we are astonished. One of our graduate students learned the word “formication” on a Friday and read part of this manuscript the next Sunday, two days later, illustrating the effect and providing an anecdote. Here, sensitizing the individual, the regression effect, and the recall of notable events and the non-recall of humdrum events produce a situation where coincidences are noted with much higher than their expected frequency. This model can explain vast numbers of seeming coincidences.
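The stated model is simple to simulate directly; T = 1 year and λ = 4 are the values used in the excerpt, and the printed quantities correspond to the ones discussed above (this is my illustration, not the authors’ code).

```python
# Simulate Diaconis & Mosteller's word-noticing model: a word recurs as a Poisson
# process with mean gap T; you notice it only after k+1 exposures, k ~ Poisson(lam).
# With T = 1 year, lam = 4 (values from the excerpt), look at the time-to-notice tail
# and how soon the word reappears once you are sensitized to it.
import numpy as np

rng = np.random.default_rng(6)
T, lam, n_words = 1.0, 4, 1_000_000

k = rng.poisson(lam, size=n_words)                       # extra exposures needed
# time until noticed = sum of (k+1) exponential(T) gaps = Gamma(k+1, T)
time_to_notice = rng.gamma(shape=k + 1, scale=T)
time_to_reappear = rng.exponential(T, size=n_words)      # next occurrence after noticing

slow = time_to_notice >= 10
print(f"mean time to notice: {time_to_notice.mean():.2f} years (theory: {lam + 1})")
print(f"fraction taking >= 10 years to notice: {slow.mean():.1%}")
print(f"of those, fraction reappearing within a month: {(time_to_reappear[slow] < 1/12).mean():.1%}")
```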
“The Power of Two Random Choices: A Survey of Techniques and Results”, (2001):
…we begin with a simple problem that demonstrates a powerful fundamental idea. Suppose that n balls are thrown into n bins, with each ball choosing a bin independently and uniformly at random. Then the maximum load, or the largest number of balls in any bin, is approximately log n / log log n with high probability. Now suppose instead that the balls are placed sequentially, and each ball is placed in the least loaded of d ≥ 2 bins chosen independently and uniformly at random. Azar et al 1999 showed that in this case, the maximum load is log log n / log d + Θ(1) with high probability.
The important implication of this result is that even a small amount of choice can lead to drastically different results in load balancing. Indeed, having just two random choices (i.e., d = 2) yields a large reduction in the maximum load over having one choice, while each additional choice beyond two decreases the maximum load by just a constant factor. Over the past several years, there has been a great deal of research investigating this phenomenon. The picture that has emerged from this research is that the power of two choices is not simply an artifact of the simple balls-and-bins model, but a general and robust phenomenon applicable to a wide variety of situations. Indeed, this two-choice paradigm continues to be applied and refined, and new results appear frequently. Applications of the two-choice paradigm:…Hashing, Shared memory emulations, load balancing, low-congestion circuit routing.
[See also “The Power of Two Choices in Randomized Load Balancing”, Mitzenmacher 1996; Nginx/HAProxy, Marc Brooker.]
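[The gap between one and two choices is easy to see in a toy simulation of the balls-and-bins model above; a quick sketch of my own, with an arbitrary n:]

```python
# Toy balls-and-bins comparison of d = 1 vs d = 2 random choices per ball;
# illustrative only, not production load-balancing code.
import math, random

def max_load(n: int, d: int, rng: random.Random) -> int:
    """Throw n balls into n bins; each ball goes to the least loaded of d random bins."""
    bins = [0] * n
    for _ in range(n):
        best = min((rng.randrange(n) for _ in range(d)), key=lambda b: bins[b])
        bins[best] += 1
    return max(bins)

rng = random.Random(0)
n = 1_000_000
print("d = 1 max load:", max_load(n, 1, rng))   # grows like log n / log log n
print("d = 2 max load:", max_load(n, 2, rng))   # grows like log log n / log 2
print("log n / log log n =", round(math.log(n) / math.log(math.log(n)), 2))
print("log log n / log 2 =", round(math.log(math.log(n)) / math.log(2), 2))
```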
https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001OR&topic_id=1
“Edward Tufte”, (2020-12-27):
Edward Rolf Tufte is an American statistician and professor emeritus of political science, statistics, and computer science at Yale University. He is noted for his writings on information design and as a pioneer in the field of data visualization.
“Constitutional Hardball”, (2004):
For the past several years I have been noticing a phenomenon that seems to me new in my lifetime as a scholar of constitutional law. I call the phenomenon constitutional hardball. This Essay develops the idea that there is such a practice, that there is a sense in which it is new, and that its emergence (or re-emergence) is interesting because it signals that political actors understand that they are in a position to put in place a new set of deep institutional arrangements of a sort I call a "constitutional order". A shorthand sketch of constitutional hardball is this: it consists of political claims and practices-legislative and executive initiatives-that are without much question within the bounds of existing constitutional doctrine and practice but that are nonetheless in some tension with existing pre-constitutional understandings. It is hardball because its practitioners see themselves as playing for keeps in a special kind of way; they believe the stakes of the political controversy their actions provoke are quite high, and that their defeat and their opponents' victory would be a serious, perhaps permanent setback to the political positions they hold.
“Anthropology's Science Wars: Insights from a New Survey”, (2019-10):
In recent decades the field of anthropology has been characterized as sharply divided between pro-science and anti-science factions. The aim of this study is to empirically evaluate that characterization. We survey anthropologists in graduate programs in the United States regarding their views of science and advocacy, moral and epistemic relativism, and the merits of evolutionary biological explanations. We examine anthropologists’ views in concert with their varying appraisals of major controversies in the discipline (Chagnon/Tierney, Mead/Freeman, and Menchú/Stoll). We find that disciplinary specialization and especially gender and political orientation are significant predictors of anthropologists’ views. We interpret our findings through the lens of an intuitionist social psychology that helps explain the dynamics of such controversies as well as ongoing ideological divisions in the field.
https://public.tableau.com/profile/jurijfedorov#!/vizhome/AnthropologysScienceWars/Field
“Leninthink: On the practice behind the theory of Marxism-Leninism”, (2019-10):
[This re-appraisal of Lenin is just about as damning as any re-appraisal of anybody could possibly be. “He invented a form of government we have come to call totalitarian, which rejected in principle the idea of any private sphere outside of state control. He invented the one-party state, a term that would previously have seemed self-contradictory since a party was, by definition, a part. He believed that state power had to be based on sheer terror, and so he created the terrorist state. Violence was a goal in itself”]
“Dear Young Eccentric”, (2012-01-05):
Weird folks are often tempted to give up on grand ambitions, thinking there is little chance the world will let them succeed. Turns out, however, it isn’t as bad as all that. Especially if your main weirdness is in the realm of ideas…I’ve known some very successful people with quite weird ideas. But these folks mostly keep regular schedules of sleep and bathing. Their dress and hairstyles are modest, they show up on time for meetings, and they finish assignments by deadline. They are willing to pay dues and work on what others think are important for a while, and they have many odd ideas they’d pursue if given a chance, instead of just one overwhelming obsession. They are willing to keep changing fields, careers, and jobs until they find one that works for them…if you are not overtly rebellious, you can get away with a lot of abstract idea rebellion—few folks will even notice such deviations, and fewer still will care. So, ask yourself, do you want to look like a rebel, or do you want to be a rebel?
“Effect of Lower Versus Higher Red Meat Intake on Cardiometabolic and Cancer Outcomes: A Systematic Review of Randomized Trials”, (2019-10-01):
Background: Few randomized trials have evaluated the effect of reducing red meat intake on clinically important outcomes.
Purpose: To summarize the effect of lower versus higher red meat intake on the incidence of cardiometabolic and cancer outcomes in adults.
Data Sources: EMBASE, CENTRAL, CINAHL, Web of Science, and ProQuest from inception to July 2018 and MEDLINE from inception to April 2019, without language restrictions.
Study Selection: Randomized trials (published in any language) comparing diets lower in red meat with diets higher in red meat that differed by a gradient of at least 1 serving per week for 6 months or more.
Data Extraction: Teams of 2 reviewers independently extracted data and assessed the risk of bias and the certainty of the evidence.
Data Synthesis: Of 12 eligible trials, a single trial enrolling 48 835 women provided the most credible, though still low-certainty, evidence that diets lower in red meat may have little or no effect on all-cause mortality (hazard ratio [HR], 0.99 [95% CI, 0.95 to 1.03]), cardiovascular mortality (HR, 0.98 [CI, 0.91 to 1.06]), and cardiovascular disease (HR, 0.99 [CI, 0.94 to 1.05]). That trial also provided low-certainty to very-low-certainty evidence that diets lower in red meat may have little or no effect on total cancer mortality (HR, 0.95 [CI, 0.89 to 1.01]) and the incidence of cancer, including colorectal cancer (HR, 1.04 [CI, 0.90 to 1.20]) and breast cancer (HR, 0.97 [CI, 0.90 to 1.04]).
Limitations: There were few trials, most addressing only surrogate outcomes, with heterogeneous comparators and small gradients in red meat consumption between lower versus higher intake groups.
Conclusion: Low-certainty to very-low-certainty evidence suggests that diets restricted in red meat may have little or no effect on major cardiometabolic outcomes and cancer mortality and incidence.
“Backlash Over Meat Dietary Recommendations Raises Questions About Corporate Ties to Nutrition Scientists”, (2020-01-15):
[Summary of vegetarian activist/researcher reaction to recent reviews & meta-analysis indicating that the correlation of meat-eating with bad health often does not appear in epidemiological datasets, the randomized experiments do not support the strong claims, and the overall evidence that eating meat = bad health is low quality & weak:
- “Meat Consumption and Health: Food for Thought”, Carroll & Doherty 2019 (editorial)
- “Red and Processed Meat Consumption and Risk for All-Cause Mortality and Cardiometabolic Outcomes: A Systematic Review and Meta-analysis of Cohort Studies”, Zeraatkar et al 2019a
- “Reduction of Red and Processed Meat Intake and Cancer Mortality and Incidence: A Systematic Review and Meta-analysis of Cohort Studies”, Han et al 2019
- “Patterns of Red and Processed Meat Consumption and Risk for Cardiometabolic and Cancer Outcomes: A Systematic Review and Meta-analysis of Cohort Studies”, Vernooij et al 2019
- “Effect of Lower Versus Higher Red Meat Intake on Cardiometabolic and Cancer Outcomes: A Systematic Review of Randomized Trials”, Zeraatkar et al 2019b
- “Health-Related Values and Preferences Regarding Meat Consumption: A Mixed-Methods Systematic Review”, Valli et al 2019
- “Unprocessed Red Meat and Processed Meat Consumption: Dietary Guideline Recommendations From the Nutritional Recommendations (NutriRECS) Consortium”, Johnston et al 2019
After breaking the embargo, they began lobbying against it, spamming the journal editor, demanding the papers be retracted before publication, denouncing it in talks, and contacting the Federal Trade Commission & district attorneys demanding they investigate; they justify these activities by saying that since high-quality evidence can’t be easily obtained in nutrition, there is no need for it, and accusing the authors of financial conflicts of interest and comparing them to global warming deniers.
However, the conflicts of interest represent very small percentages of funding, and the vegetarian activist/researchers themselves are heavily funded by anti-meat interests, such as olive research institutions, walnut industry bodies, the egg industry, snack companies, and alternative diet groups, with the list of funders of one member including but far from limited to “the Pulse Research Network, the Almond Board of California, the International Nut and Dried Fruit Council; Soy Foods Association of North America; the Peanut Institute; Kellogg’s Canada; and Quaker Oats Canada.”]
“Reading Lies: Nonverbal Communication and Deception”, (2019-01):
The relationship between nonverbal communication and deception continues to attract much interest, but there are many misconceptions about it. In this review, we present a scientific view on this relationship. We describe theories explaining why liars would behave differently from truth tellers, followed by research on how liars actually behave and individuals’ ability to detect lies. We show that the nonverbal cues to deceit discovered to date are faint and unreliable and that people are mediocre lie catchers when they pay attention to behavior. We also discuss why individuals hold misbeliefs about the relationship between nonverbal behavior and deception—beliefs that appear very hard to debunk. We further discuss the ways in which researchers could improve the state of affairs by examining nonverbal behaviors in different ways and in different settings than they currently do.
“Is Cognitive Functioning Impaired in Methamphetamine Users? A Critical Review”, (2011-11-16):
The prevailing view is that recreational methamphetamine use causes a broad range of severe cognitive deficits, despite the fact that concerns have been raised about interpretations drawn from the published literature. This article addresses an important gap in our knowledge by providing a critical review of findings from recent research investigating the impact of recreational methamphetamine use on human cognition. Included in the discussion are findings from studies that have assessed the acute and long-term effects of methamphetamine on several domains of cognition, including visuospatial perception, attention, inhibition, working memory, long-term memory, and learning. In addition, relevant neuroimaging data are reviewed in an effort to better understand neural mechanisms underlying methamphetamine-related effects on cognitive functioning. In general, the data on acute effects show that methamphetamine improves cognitive performance in selected domains, that is, visuospatial perception, attention, and inhibition. Regarding long-term effects on cognitive performance and brain-imaging measures, statistically significant differences between methamphetamine users and control participants have been observed on a minority of measures. More importantly, however, the clinical significance of these findings may be limited because cognitive functioning overwhelmingly falls within the normal range when compared against normative data. In spite of these observations, there seems to be a propensity to interpret any cognitive and/or brain difference(s) as a clinically significant abnormality. The implications of this situation are multiple, with consequences for scientific research, substance-abuse treatment, and public policy.
-
Scientific evidence for intelligence in donkeys could expose their historical unmerited cognitive derogatory status. Psychometric testing enables quantifying animal cognitive capabilities and their genetic background. Owing to the impossibility to use the language-dependent scales that are widely used to measure intelligence in humans, we used a nonverbal operant-conditioning problem-solving test to compute a human-analogous IQ, scoring the information of thirteen cognitive processes from 300 genetically tested donkeys. Principal components and Bayesian analyses were used to compute the variation in cognitive capabilities explained by the cognitive processes tested and their genetic parameters, respectively. According to our results, IQ may explain over 62% of the cognitive variance, and 0.06 to 0.38 heritabilities suggest that we could ascribe a significant proportion to interacting genes describing the same patterns previously reported for humans and other animal species. Our results address the existence of a human-analogous heritable component and mechanisms underneath intelligence and cognition in probably one of the most traditionally misunderstood species from a cognitive perspective. [Keywords: cognition, g, genetic parameters, asses, intelligence quotient]
“Nature's Spoils: The underground food movement ferments revolution”, (2010-11-22):
[Discussion of food subcultures: dumpster divers, raw food enthusiasts, fermenters, roadkill, and 'high' (fully rotten meat) food advocates, with visits to gay commune Hickory Knoll and raw milk dairies. The author ultimately draws the line at trying high game, however.]
When Torma unclamped his jar, a sickly-sweet miasma filled the air—an odor as natural as it was repellent. Decaying meat produces its own peculiar scent molecules, I later learned, with names like putrescine and cadaverine. I could still smell them on my clothes hours later. Torma stuck two fingers down the jar and fished out a long, wet sliver. “Want a taste?” he said.
It was the end of a long day. I’d spent most of it consuming everything set before me: ants, acorns, raw milk, dumpster stew, and seven kinds of mead, among other delicacies. But even Katz took a pass on high meat. While Torma threw back his head and dropped in his portion, like a seal swallowing a mackerel, we quietly took our leave. “You have to trust your senses,” Katz said, as we were driving away. “To me, that smelled like death.”
“STEPS Toward Expressive Programming Systems: "A Science Experiment"”, (2012):
[Technical report from a research project aiming at writing a GUI OS in 20k LoC; tricks include ASCII art networking DSLs & generic optimization for text layout, which lets them implement a full OS, sound, GUI desktops, Internet networking & web browsers, a text/document editor etc, all in fewer lines of code than most OSes need for small parts of any of those.]
…Many software systems today are made from millions to hundreds of millions of lines of program code that is too large, complex and fragile to be improved, fixed, or integrated. (One hundred million lines of code at 50 lines per page is 5000 books of 400 pages each! This is beyond human scale.) What if this could be made literally 1000 times smaller—or more? And made more powerful, clear, simple and robust?…The ‘STEPS Towards Expressive Programming Systems’ project is taking the familiar world of personal computing used by more than a billion people every day—currently requiring hundreds of millions of lines of code to make and sustain—and substantially recreating it using new programming techniques and ‘architectures’ in dramatically smaller amounts of program code. This is made possible by new advances in design, programming, programming languages, systems organizations, and the use of science to analyze and create models of software artifacts.
STEPS Aims At ‘Personal Computing’—STEPS takes as its prime focus the dynamic modeling of ‘personal computing’ as most people think of it…word processor, spreadsheet, Internet browser, other productivity SW; User Interface and Command Listeners: windows, menus, alerts, scroll bars and other controls, etc.; Graphics and Sound Engine: physical display, sprites, fonts, compositing, rendering, sampling, playing; Systems Services: development system, database query languages, etc.; Systems Utilities: file copy, desk accessories, control panels, etc.; Logical Level of OS: e.g. file management, Internet, and networking facilities, etc.; Hardware Level of OS: e.g. memory manager, process manager, device drivers, etc.
http://www.moserware.com/2008/04/towards-moores-law-software-part-3-of-3.html
“Secrets by the thousands”, (1946-10-01):
Someone wrote to Wright Field recently, saying he understood this country had got together quite a collection of enemy war secrets, that many were now on public sale, and could he, please, be sent everything on German jet engines. The Air Documents Division of the Army Air Forces answered: “Sorry—but that would be fifty tons”. Moreover, that fifty tons was just a small portion of what is today undoubtedly the biggest collection of captured enemy war secrets ever assembled…It is estimated that over a million separate items must be handled, and that they, very likely, contain practically all the scientific, industrial and military secrets of Nazi Germany. One Washington official has called it “the greatest single source of this type of material in the world, the first orderly exploitation of an entire country’s brain-power.”
What did we find? You’d like some outstanding examples from the war secrets collection?
…the tiniest vacuum tube I had ever seen. It was about half thumb-size. Notice it is heavy porcelain—not glass—and thus virtually indestructible. It is a thousand watt—one-tenth the size of similar American tubes…“That’s Magnetophone tape,” he said. “It’s plastic, metallized on one side with iron oxide. In Germany that supplanted phonograph recordings. A day’s Radio program can be magnetized on one reel. You can demagnetize it, wipe it off and put a new program on at any time. No needle; so absolutely no noise or record wear. An hour-long reel costs fifty cents.”…He showed me then what had been two of the most closely-guarded technical secrets of the war: the infra-red device which the Germans invented for seeing at night, and the remarkable diminutive generator which operated it. German cars could drive at any speed in a total blackout, seeing objects clear as day two hundred meters ahead. Tanks with this device could spot targets two miles away. As a sniper scope it enabled German riflemen to pick off a man in total blackness…We got, in addition, among these prize secrets, the technique and the machine for making the world’s most remarkable electric condenser…The Kaiser Wilhelm Institute for Silicate Research had discovered how to make it and—something which had always eluded scientists—in large sheets. We know now, thanks to FIAT teams, that ingredients of natural mica were melted in crucibles of carbon capable of taking 2,350 degrees of heat, and then—this was the real secret—cooled in a special way…“This is done on a press in one operation. It is called the ‘cold extrusion’ process. We do it with some soft, splattery metals. But by this process the Germans do it with cold steel! Thousands of parts now made as castings or drop forgings or from malleable iron can now be made this way. The production speed increase is a little matter of one thousand per cent.” This one war secret alone, many American steel men believe, will revolutionize dozens of our metal fabrication industries.
…In textiles the war secrets collection has produced so many revelations that American textile men are a little dizzy. But of all the industrial secrets, perhaps, the biggest windfall came from the laboratories and plants of the great German cartel, I. G. Farbenindustrie. Never before, it is claimed, was there such a store-house of secret information. It covers liquid and solid fuels, metallurgy, synthetic rubber, textiles, chemicals, plastics, drugs, dyes. One American dye authority declares: “It includes the production know-how and the secret formulas for over fifty thousand dyes. Many of them are faster and better than ours. Many are colors we were never able to make. The American dye industry will be advanced at least ten years.”
…Milk pasteurization by ultra-violet light…how to enrich the milk with vitamin D…cheese was being made—“good quality Hollander and Tilsiter”—by a new method at unheard-of speed…a continuous butter making machine…The finished product served as both animal and human food. Its caloric value is four times that of lean meat, and it contains twice as much protein. The Germans also had developed new methods of preserving food by plastics and new, advanced refrigeration techniques…German medical researchers had discovered a way to produce synthetic blood plasma.
…When the war ended, we now know, they had 138 types of guided missiles in various stages of production or development, using every known kind of remote control and fuse: radio, radar, wire, continuous wave, acoustics, infra-red, light beams, and magnetics, to name some; and for power, all methods of jet propulsion for either subsonic or supersonic speeds. Jet propulsion had even been applied to helicopter flight…Army Air Force experts declare publicly that in rocket power and guided missiles the Nazis were ahead of us by at least ten years.
“Secrets of the Little Blue Box: A story so incredible it may even make you feel sorry for the phone company”, (1971-10-01):
[Early account of “phone phreakers” and their most famous hacking device, the blue box, used to control the Bell Phone System and enable free long-distance calls (then exorbitantly expensive); the blue box was famously based on an AT&T research paper describing the tone frequencies and how they control the phone switching system. The author hangs out with phreaks such as Captain Crunch to see how it all works.
After reading Rosenbaum’s article, Steve Jobs and his partner in founding Apple, Steve Wozniak, “collaborated on building and selling blue boxes, devices that were widely used for making free—and illegal—phone calls. They raised a total of $6,000 from the effort.”]
-
[Gonzo-style account of hanging out with teenage hackers and phreakers in NYC, Phiber Optik and Acid Phreak, similar to Hackers]
“Sometimes,” says Kool, “it’s so simple. I used to have contests with my friends to see how few words we could use to get a password. Once I called up and said, ‘Hi, I’m from the social-engineering center and I need your password’, and they gave it to me! I swear, sometimes I think I could call up and say, ‘Hi, I’m in a diner, eating a banana split. Give me your password.’” Like its mechanical counterpart, social engineering is half business and half pleasure. It is a social game that allows the accomplished hacker to show off his knowledge of systems, his mastery of jargon, and especially his ability to manipulate people. It not only allows the hacker to get information; it also has the comic attractions of the old-fashioned prank phone call—fooling an adult, improvisation, cruelty. In the months we spent with the hackers, the best performance in a social-engineering role was by a hacker named Oddjob. With him and three other guys we pulled a hacking all-nighter in the financial district, visiting pay phones in the hallway of the World Trade Center, outside the bathrooms of the Vista Hotel, and in the lobby of the international headquarters of American Express.
…Where we see only a machine’s function, they see its potential. This is, of course, the noble and essential trait of the inventor. But hackers warp it with teenage anarchic creativity: Edison with attitude. Consider the fax machine. We look at it; we see a document-delivery device. One hacker we met, Kaos, looked at the same machine and immediately saw the Black Loop of Death. Here’s how it works: Photocopy your middle finger displaying the international sign of obscene derision. Make two more copies. Tape these three pages together. Choose a target fax machine. Wait until nighttime, when you know it will be unattended, and dial it up. Begin to feed your long document into your fax machine. When the first page begins to emerge below, tape it to the end of the last page. Ecce. This three-page loop will continuously feed your image all night long. In the morning, your victim will find an empty fax machine, surrounded by two thousand copies of your finger, flipping the bird.
…From a distance, a computer network looks like a fortress—impregnable, heavily guarded. As you get closer, though, the walls of the fortress look a little flimsy. You notice that the fortress has a thousand doors; that some are unguarded, the rest watched by unwary civilians. All the hacker has to do to get in is find an unguarded door, or borrow a key, or punch a hole in the wall. The question of whether he’s allowed in is made moot by the fact that it’s unbelievably simple to enter. Breaking into computer systems will always remain easy because the systems have to accommodate dolts like you and me. If computers were used only by brilliant programmers, no doubt they could maintain a nearly impenetrable security system. But computers aren’t built that way; they are “dumbed down” to allow those who must use them to do their jobs. So hackers will always be able to find a trusting soul to reveal a dialup, an account, and a password. And they will always get in.
“Hackers (film)”, (2020-12-28):
Hackers is a 1995 American crime film directed by Iain Softley and starring Jonny Lee Miller, Angelina Jolie, Jesse Bradford, Matthew Lillard, Laurence Mason, Renoly Santiago, Lorraine Bracco, and Fisher Stevens. The film follows a group of high school hackers and their involvement in a corporate extortion conspiracy. Made in the mid-1990s when the Internet was unfamiliar to the general public, it reflects the ideals laid out in the Hacker Manifesto quoted in the film: "This is our world now... the world of the electron and the switch [...] We exist without skin color, without nationality, without religious bias... and you call us criminals. [...] Yes, I am a criminal. My crime is that of curiosity." The film received mixed reviews from critics, and underperformed at the box office upon release, but has gone on to achieve cult classic status.
“The Radioactive Boy Scout: When a teenager attempts to build a breeder reactor”, (1998-11-01):
Growing up in suburban Detroit, David Hahn was fascinated by science. While he was working on his Atomic Energy badge for the Boy Scouts, David’s obsessive attention turned to nuclear energy. Throwing caution to the wind, he plunged into a new project: building a model nuclear reactor in his backyard garden shed. Posing as a physics professor, David solicited information on reactor design from the U.S. government and from industry experts. Following blueprints he found in an outdated physics textbook, David cobbled together a crude device that threw off toxic levels of radiation. His wholly unsupervised project finally sparked an environmental emergency that put his town’s forty thousand suburbanites at risk. The EPA ended up burying his lab at a radioactive dumpsite in Utah. [Keywords: 20th century, David Hahn, Experiments, Michigan, Nuclear engineering, Radiochemistry, Recreation, Teenage boys].
“Fancy Euclid's Elements in TeX”, (2019-03-19):
The most obvious option—to draw all the illustrations in Illustrator and compose the whole thing in InDesign—was promptly rejected. Geometrical constructions are not exactly the easiest thing to do in Illustrator, and no obvious way to automatically connect the main image to miniatures came to my mind. As for InDesign, although it’s very good at dealing with such visually rich layouts, it promised to scare the hell out of me by the overcrowded “Links” panel. So, without thinking twice, I decided to use other tools that I was familiar with—MetaPost, which made it relatively easy to deal with geometry, and LaTeX, which I knew could do the job. Due to some problems with MetaPost libs for LaTeX, I replaced the latter with ConTeXt that enjoys an out-of-the-box merry relationship with MetaPost.
… There are also initials and vignettes in the original edition. On one hand, they were reasonably easy to recreate (at least, it wouldn’t take a lot of thought to do this), but I decided to go with a more interesting (albeit hopeless) option—automatically generating the initials and vignettes with a random ornament. Not only is it fun, but also, the Russian translation would require adapting the style of the original initials to the Cyrillic script, which was not something I’d prefer to do. So, long story short, when you compile the book, a list of initial letters is written to the disk, and a separate MetaPost script can process it (very slowly) to produce the initials and vignettes. No two of them have the exact same ornament.
“Making of Byrne’s Euclid”, (2018-12-16):
Creating a faithful online reproduction of a book considered one of the most beautiful and unusual publications ever published is a daunting task. Byrne’s Euclid is my tribute to Oliver Byrne’s most celebrated publication from 1847 that illustrated the geometric principles established in Euclid’s original Elements from 300 BC.
In 1847, Irish mathematics professor Oliver Byrne worked closely with publisher William Pickering in London to publish his unique edition titled The First Six Books of the Elements of Euclid in which Coloured Diagrams and Symbols are Used Instead of Letters for the Greater Ease of Learners—or more simply, Byrne’s Euclid. Byrne’s edition was one of the first multicolor printed books and is known for its unique take on Euclid’s original work using colorful illustrations rather than letters when referring to diagrams. The precise use of colors and diagrams meant that the book was very challenging and expensive to reproduce. Little is known about why Byrne only designed 6 of the 13 books, but it could have been due to the time and cost involved…I knew of other projects like Sergey Slyusarev’s ConTeXt rendition and Kronecker Wallis’ modern redesign but I hadn’t seen anyone reproduce the 1847 edition online in its entirety and with a design true to the original. This was my goal and I knew it was going to be a fun challenge.
Diagrams from Book 1 [Detailed discussion of how to use Adobe Illustrator to redraw the modernist art-like primary color diagrams from Bryne in scalable vector graphics (SVG) for use in interactive HTML pages, creation of a custom drop caps/initials font to replicate Bryne, his (questionable) efforts to use the ‘long s’ for greater authenticity, rendering the math using MathJax, and creating posters demonstrating all diagrams from the project for offline viewing.]
“Oliver Byrne (mathematician)”, (2020-12-28):
Oliver Byrne was a civil engineer and prolific author of works on subjects including mathematics, geometry, and engineering. He is best known for his 'coloured' book of Euclid's Elements. He was also a large contributor to Spon's Dictionary of Engineering.
“Oliver Byrne's edition of Euclid [Scans]”, :
Online scanned edition; part of a set of Euclid editions.
/docs/design/1990-tufte-envisioninginformation-ch5-byrneseuclid.pdf
“SpeechJammer: A System Utilizing Artificial Speech Disturbance with Delayed Auditory Feedback”, (2012-02-28):
In this paper we report on a system, “SpeechJammer”, which can be used to disturb people’s speech. In general, human speech is jammed by giving back to the speakers their own utterances at a delay of a few hundred milliseconds. This effect can disturb people without any physical discomfort, and disappears immediately when the speaker stops speaking. Furthermore, this effect does not involve anyone but the speaker. We utilized this phenomenon and implemented two prototype versions by combining a direction-sensitive microphone and a direction-sensitive speaker, enabling the speech of a specific person to be disturbed. We discuss practical application scenarios of the system, such as facilitating and controlling discussions. Finally, we argue what system parameters should be examined in detail in future formal studies based on the lessons learned from our preliminary study.
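[The core effect, delayed auditory feedback, is easy to reproduce in software; below is a minimal sketch of my own, assuming the third-party sounddevice library and a full-duplex sound card, not the authors’ directional-speaker hardware:]

```python
# Minimal delayed-auditory-feedback loop: play the microphone back ~200 ms late.
# Illustrative sketch only; assumes the third-party `sounddevice` package.
import collections
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44_100
BLOCK = 1024                                   # samples per audio callback
DELAY_S = 0.2                                  # a few hundred milliseconds
delay_blocks = max(1, int(SAMPLE_RATE * DELAY_S / BLOCK))

# FIFO of past input blocks; its length sets the playback delay.
fifo = collections.deque([np.zeros((BLOCK, 1), dtype="float32")] * delay_blocks)

def callback(indata, outdata, frames, time, status):
    fifo.append(indata.copy())                 # store the newest microphone block
    outdata[:] = fifo.popleft()                # emit the block recorded DELAY_S ago

with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK, channels=1,
               dtype="float32", callback=callback):
    print("Speak; your own voice returns ~200 ms late. Press Enter to stop.")
    input()
```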
“Mark of Integrity”, (2009-02):
[Card marking is a venerable and sophisticated art. Jonathan Allen on juiced cards, luminous readers, sunning the deck, and other sharpers’ tricks.]
The history of the marked playing card, perhaps as old as the playing card itself, is a miscellany of inventive guile. “The systems of card-marking are as numerous as they are ingenious,” wrote John Nevil Maskelyne in 1894. “Card doctoring,” to use Erdnase’s term, covers many forms of subterfuge, but in the brief survey that follows, we shall focus our attention upon what might more usefully be termed the “language” of the marked card.
…“Luminous readers” are cards treated in such a way that pale green ink traces become clearly visible when viewed through red-filtered spectacles or contact lenses. The technology caused alarm upon its discovery but, due to its limited effectiveness and its reliance upon somewhat vampiric eye adornment, has remained more of a popular novelty than a serious subterfuge. “Juiced cards,” on the other hand, do not need lens-based viewing, instead requiring the reader to defocus his or her eyes and spot liminal fluid-residue marks on an opponent’s distant cards (juiced cards are also known as “distance readers”). To many players, juicing, and its recent high-tech offshoot, “video juicing,” are the most effective real-world card-marking system available, and the considerable price of the closely guarded fluid recipe and application technique reflects this growing reputation.
“Card marking”, (2020-12-27):
Card marking is the process of altering playing cards in a method only apparent to marker or conspirator, such as by bending or adding visible marks to a card. This allows different methods for card sharps to cheat or may be used for magic tricks. To be effective, the distinguishing mark or marks must be visible on the obverse sides of the cards, which are normally uniform.
“Managing an iconic old luxury brand in a new luxury economy: Hermès handbags in the US market”, (2014-03):
The Hermès brand is synonymous with a wealthy global elite clientele and its products have maintained an enduring heritage of craftsmanship that has distinguished it among competing luxury brands in the global market. Hermès has remained a family business for generations and has successfully avoided recent acquisition attempts by luxury group LVMH. Almost half of the luxury firm’s revenue ($1.5B in 2012) is derived from the sale of its leather goods and saddlery, which includes its handbags. A large contributor to sales is global demand for one of its leather accessories, the Birkin bag, ranging in price from $10,000 to $250,000. Increased demand for the bag in the United States since 2002 resulted in an extensive customer waitlist lasting from months to a few years. Hermès retired the famed waitlist (sometimes called the ‘dream list’) in the United States in 2010, and while the waitlist has been removed, demand for the Birkin bag has not diminished and making the bag available to luxury consumers requires extensive, careful distribution management. In addition to inventory constraints related to demand for the Birkin bag in the United States, Hermès must also manage a range of other factors in the US market. These factors include competition with ‘affordable’ luxury brands like Coach, monitoring of unsolicited brand endorsers as well as counterfeit goods and resellers. This article examines some of the allocation practices used to carefully manage the Hermès brand in the US market.
“Birkin Demand: A Sage & Stylish Investment”, (2016-12-19):
History · Design · Craftsmanship & Quality · How To Buy A Birkin · Demand & Exclusivity · The Secondhand Market · Clientele · Why the Birkin Is A Safe Investment · Investment Factors · Investment Pricing Factors · Comparisons with Other Investments · Fake vs. Real · How the Birkin Remains Dominant · The Media · The Defaced Birkin · Conclusion
Birkin bags are carefully handcrafted. The creation process for each bag can take over 18 hours. That number can double if working on a Birkin accessorized with diamonds. The artisans who craft these bags are carefully screened and require years of high quality experience even before being considered for the job. “Hermès has a reputation of hiring mostly artisans who have graduated from the École Grégoire Ferrandi; a school that specializes in working with luxurious leathers.” It also typically takes about 2 years to train an Hermès craftsman, with each one supervised by an existing craftsman. Preparing the leather is the first step towards crafting the bag. The leather is examined for any defects an animal skin may have, such as mosquito bites or wounds, which must be repaired before the skin’s tanning. Leathers are obtained from different tanners in France, resulting in various smells and textures. The stitching of the bag is also very precise. The bag is held together using a wooden clamp, while the artisan applies each individual stitch on the bag. The linen that is used during the stitching process is waterproof and has a beeswax coating for rot prevention. Most Birkin bags are created with same-color threads, but some rare bags have white threads even if the bag is not white. “More than 90% of the bag is hand stitched because it allows more freedom to shape the bag and makes it more resilient.” That’s when the hardware process begins. Unlike other bags, the hardware is attached using the unique Hermès process called “pearling” rather than by using screws. Artisans put a “small nail through a corner hole on the back of the clasp, the leather and the front clasp, take an awl with a concave tip and tap the bit of nail with a hammer gently in a circle until it is round like a tiny pearl.” This process ensures that the pearls will hold the two pieces of metal together forever. The bag is then turned right side out and ironed into shape.
…As secondhand market sales have grown, interest from first time buyers has also increased. This shows the Birkin bag is an important sales channel for an expanding global luxury product market. Such growth has propelled the Birkin to near legendary status in a very demanding market. According to Bag Hunter, “Birkin bags have climbed in value by 500% over the past 35 years”, an increase expected to double over the next 10 years.
…Simply stated, it appears that the bag’s success hinges on this prestigious perception. A Birkin, terribly difficult to get, is therefore highly coveted. In our global economy, that’s all the brand needs to pack the infinite waiting list. It is fashion’s version of Darwinism. We always want what we can’t have, so we will do whatever we can to get it. For instance, Victoria Beckham, the posh clothing designer and wife of David Beckham, reportedly owns about 100 Birkins, collectively valued at $2 million, including a pink ostrich leather Birkin worth $150,000. Despite the fact that she has introduced her own line of handbags, she’s been spotted by the paparazzi wearing a Birkin bag. Kris Jenner also has a massive Birkin collection that she flaunts via social media and the willing participation of paparazzi. Her collection includes an Electric Blue 35cm which is supposedly worth $19,000. Actress Katie Holmes has gained attention for a bold red Birkin, while Julianne Moore has been seen wearing a hunter green 40cm with gold hardware. Julia Roberts and Eva Longoria have even been seen with the bag. Even B-list personalities such as reality star Nicole Richie, with a black Birkin workout bag, are famously noted as frequently asking the paparazzi, “Did you get my bag?” The Birkin has looked extra special on the arms of models Alessandra Ambrosio and Kate Moss. Singers such as Jennifer Lopez and Courtney Love ironically show off their Birkins, and even world leaders such as Princess Mary of Denmark, with her black crocodile Birkin worth $44,500, are aware of its meaning and status.
“Bakker's Second Apocalypse & Frank Herbert's Dune: time loops & finding freedom in an unfree universe”, (2019-08-30):
Review of SF/F author R. Scott Bakker‘s long-running Second Apocalypse series, which finished in 2017. The series, a loose retelling of the Crusades, set in a fallen-SF fantasy environment, has drawn attention for its ambitious scope and obscure philosophical message centering around determinism, free will, moral nihilism, eliminativism of cognitive states, and the interaction of technology & ethics (which Bakker terms the ’Semantic Apocalypse’). In this series, the protagonist attempts to stop the apocalypse and ultimately accidentally causes it.
I highlight that Frank Herbert’s Dune universe is far more influential on Bakker than reviewers of Bakker have appreciated: countless elements are reflected in Bakker, and the very name of the primary antagonist, the ‘No-God’, uses a naming pattern from Dune and operates similarly. Further, both Dune and the Second Apocalypse are deeply concerned with the nature of time and temporal loops controlling ‘free’ behavior. Where they diverge is in what is to be done about the human lack of freedom and manipulability by external environments, and have radically different views about what is desirable: in Dune, humanity gradually grows up and achieves freedom from the time loops by the creation of a large time loop whose stable fixed point is the destruction of all time loops, ensuring that humanity will go on existing in some form forever; in the Second Apocalypse, liberation is achieved only through death.
“‘Story Of Your Life’ Is Not A Time-Travel Story”, (2012-12-12):
One of Ted Chiang’s most noted philosophical SF short stories, “Story of Your Life”, was made into a successful time-travel movie, Arrival, sparking interest in the original. However, movie viewers often misread the short story: “Story” is not a time-travel story. At no point does the protagonist travel in time or enjoy precognitive powers; interpreting the story this way leads to many serious plot holes, it renders most of the exposition-heavy dialogue (which is a large fraction of the wordcount) completely irrelevant, and genuine precognition would undercut the themes of tragedy & acceptance.
Instead, what appears to be precognition in Chiang’s story is actually far more interesting, and a novel twist on psychology and physics: classical physics allows usefully interpreting the laws of physics in both a ‘forward’ way, in which events happen step by step, and a teleological way, in which events are simply the unique optimal solution to a set of constraints including the final outcome, which allows reasoning ‘backwards’. The alien race exemplifies this other, equally valid, possible way of thinking and viewing the universe, and the protagonist learns their way of thinking by studying their language, which requires seeing written characters as a unified gestalt. This holistic view of the universe as an immutable ‘block-universe’, in which events unfold as they must, changes the protagonist’s attitude towards life and the tragic death of her daughter, teaching her in a somewhat Buddhist or Stoic fashion to embrace life in both its ups and downs.
“Japan Sings Along With Beethoven”, (1990-12-29):
December in Japan is a festive season, filled with gift-giving, prayers for the new year, bamboo and pine branches in front of houses, office parties and Beethoven’s Ninth.
Beethoven’s Ninth? No one is sure how it happened, but indeed, Ludwig van Beethoven’s Choral Symphony is as much a staple of the season as dry weather and maddeningly short days. The symphony is being performed at least 170 times this month by professional and amateur groups throughout the country. Some orchestras play it several times in a row. The NHK Symphony Orchestra has performed what the Japanese call the Daiku, or Big Nine, five times this month, the Tokyo Symphony Orchestra 13 times and the Japan Philharmonic Symphony Orchestra 11 times.
“For Japanese, listening to Beethoven’s Ninth at the end of the year is a semi-religious experience,” said Naoyuki Miura, the artistic director of Music from Japan, which sponsors concerts abroad. “People feel they have not completed the year spiritually until they hear it.” Like the Christmastime sing-alongs of Handel’s Messiah in the West, Beethoven’s Ninth also draws audiences to sing-along performances at which the audiences lustily join in the choruses of Schiller’s “Ode to Joy,” singing German words they barely understand.
“Mandy (2018 film)”, (2020-12-28):
Mandy is a 2018 psychedelic action horror film directed by Panos Cosmatos, produced by Elijah Wood and co-written by Cosmatos and Aaron Stewart-Ahn based on a story Cosmatos conceived. A co-production of the United States and Canada, the film stars Nicolas Cage, Andrea Riseborough, Linus Roache, Ned Dennehy, Olwen Fouéré, Richard Brake, and Bill Duke.
“Weiner (film)”, (2020-12-28):
Weiner is a 2016 American fly on the wall political documentary film by Josh Kriegman and Elyse Steinberg about Anthony Weiner’s campaign for Mayor of New York City during the 2013 mayoral election.
“Falling Down”, (2020-12-28):
Falling Down is a 1993 American crime thriller film directed by Joel Schumacher and written by Ebbe Roe Smith. The film stars Michael Douglas in the lead role of William Foster, a divorced and unemployed former defense engineer. The film centers on Foster as he treks on foot across the city of Los Angeles, trying to reach the house of his estranged ex-wife in time for his daughter's birthday. Along the way, a series of encounters, both trivial and provocative, causes him to react with increasing violence and make sardonic observations on life, poverty, the economy, and commercialism. Robert Duvall co-stars as Martin Prendergast, an aging Los Angeles Police Department sergeant on the day of his retirement, who faces his own frustrations—even as he tracks down Foster.
“Redline (2009 film)”, (2020-12-28):
Redline is a 2009 science fiction auto racing anime film produced by Madhouse and released in Japan on October 9, 2010. The directorial debut feature of Takeshi Koike, it features the voices of Takuya Kimura, Yū Aoi and Tadanobu Asano, and an original story by Katsuhito Ishii, who also co-writes and sound directs. The film is set in the distant future, where a man known as JP takes on great risks for the chance of winning the titular underground race.
https://old.reddit.com/r/TOUHOUMUSIC/search?q=author%3Agwern&sort=new&restrict_sr=on&t=all
“Gwern.net newsletter (Substack subscription page)”, (2013-12-01):
Subscription page for the monthly gwern.net newsletter. There are monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews. You can also browse the archives since December 2013.
“Those Who Stayed: Individualism, Self-Selection and Cultural Change during the Age of Mass Migration”, (2019-01-01):
This paper examines the joint evolution of emigration and individualism in Scandinavia during the Age of Mass Migration (1850–1920). A long-standing hypothesis holds that people of a stronger individualistic mindset are more likely to migrate as they suffer lower costs of abandoning existing social networks. Building on this hypothesis, I propose a theory of cultural change where migrant self-selection generates a relative push away from individualism, and towards collectivism, in migrant-sending locations through a combination of initial distributional effects and channels of intergenerational cultural transmission. Due to the interdependent relationship between emigration and individualism, emigration is furthermore associated with cultural convergence across subnational locations. I combine various sources of empirical data, including historical population census records and passenger lists of emigrants, and test the relevant elements of the proposed theory at the individual and subnational district level, and in the short and long run. Together, the empirical results suggest that individualists were more likely to migrate than collectivists, and that the Scandinavian countries would have been considerably more individualistic and culturally diverse, had emigration not taken place. [Keywords: Culture, individualism, migration, selection, economic history]
“Exome sequencing”, (2021-01-11):
Exome sequencing, also known as whole exome sequencing (WES), is a genomic technique for sequencing all of the protein-coding regions of genes in a genome. It consists of two steps: the first step is to select only the subset of DNA that encodes proteins. These regions are known as exons – humans have about 180,000 exons, constituting about 1% of the human genome, or approximately 30 million base pairs. The second step is to sequence the exonic DNA using any high-throughput DNA sequencing technology.
“Language Models are Unsupervised Multitask Learners”, (2019-02-14):
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset—matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples.
The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text.
These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
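[Since publication, the released GPT-2 weights have become trivially easy to sample from; a minimal sketch using the third-party Hugging Face transformers library (not OpenAI’s original code), with an arbitrary prompt and sampling settings:]

```python
# Sample coherent text from the smallest released GPT-2 checkpoint.
# Illustrative sketch; prompt and decoding parameters are arbitrary choices.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # smallest released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_k=40, max_length=200,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```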
“BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”, (2018-10-11):
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
“Pointer Sentinel Mixture Models”, (2016-09-26):
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.
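[The key mechanism is the mixture itself: a gate g, the probability mass the pointer attention assigns to a ‘sentinel’, blends the ordinary softmax over the vocabulary with a copy distribution over recently seen words. A toy numeric sketch of my own, with made-up probabilities rather than the paper’s LSTM-computed ones:]

```python
# Toy illustration of the pointer sentinel combination rule:
# p(w) = g * p_vocab(w) + (1 - g) * p_pointer(w).
import numpy as np

vocab   = ["the", "dog", "barked", "Fido", "<unk>"]
context = ["Fido", "barked", "the", "dog"]            # recently seen words

p_vocab = np.array([0.40, 0.30, 0.10, 0.05, 0.15])    # softmax over the vocabulary
attn    = np.array([0.50, 0.10, 0.05, 0.05, 0.30])    # attention over context + sentinel
g = attn[-1]                                          # mass on the sentinel = gate

# Pointer distribution: renormalized attention, scattered back onto the vocabulary.
p_ptr = np.zeros(len(vocab))
for word, a in zip(context, attn[:-1]):
    p_ptr[vocab.index(word)] += a / (1 - g)

p_mix = g * p_vocab + (1 - g) * p_ptr                 # final word distribution
print(dict(zip(vocab, p_mix.round(3))))               # the rare 'Fido' is boosted by the pointer
```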
“The LAMBADA dataset: Word prediction requiring a broad discourse context”, (2016-06-20):
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
“RACE: Large-scale ReAding Comprehension Dataset From Examinations”, (2017-04-15):
We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at this URL and the code is available at this URL.
“Training Deep Nets with Sublinear Memory Cost”, (2016-04-21):
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory—giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
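[The same compute-for-memory trade is now built into mainstream frameworks as ‘gradient checkpointing’; a minimal PyTorch sketch of the √n-segment idea (not the authors’ MXNet implementation; layer sizes and counts are arbitrary):]

```python
# Gradient checkpointing: keep activations only at ~sqrt(n) segment boundaries
# and recompute the rest during the backward pass. Illustrative sketch only.
import math
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

n_layers = 100
model = nn.Sequential(*[nn.Sequential(nn.Linear(512, 512), nn.ReLU())
                        for _ in range(n_layers)])
x = torch.randn(64, 512, requires_grad=True)

segments = int(math.sqrt(n_layers))   # ~sqrt(n) segments -> ~sqrt(n) stored activations
y = checkpoint_sequential(model, segments, x)
y.sum().backward()                    # costs roughly one extra forward pass per mini-batch
```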
https://devblogs.nvidia.com/dgx-superpod-world-record-supercomputing-enterprise
“Formication”, (2020-12-22):
Formication is the sensation that resembles that of small insects crawling on the skin when there is nothing there. It is one specific form of a set of sensations known as paresthesias, which also include the more common prickling, tingling sensation known as "pins and needles". Formication is a well documented symptom, which has numerous possible causes. The word is derived from formica, the Latin word for ant.
http://users.eecs.northwestern.edu/~nickle/randAlg/AzarBKU99.pdf
https://www.eecs.harvard.edu/~michaelm/postscripts/mythesis.pdf
https://www.nginx.com/blog/nginx-power-of-two-choices-load-balancing-algorithm/
“Napoleon Chagnon”, (2020-12-22):
Napoleon Alphonseau Chagnon was an American cultural anthropologist, professor of sociocultural anthropology at the University of Missouri in Columbia and member of the National Academy of Sciences. Chagnon was known for his long-term ethnographic field work among the Yanomamö, a society of indigenous tribal Amazonians, in which he used an evolutionary approach to understand social behavior in terms of genetic relatedness. His work centered on the analysis of violence among tribal peoples, and, using socio-biological analyses, he advanced the argument that violence among the Yanomami is fueled by an evolutionary process in which successful warriors have more offspring. His 1967 ethnography Yanomamö: The Fierce People became a bestseller and is frequently assigned in introductory anthropology courses.
“Margaret Mead”, (2020-12-27):
Margaret Mead was an American cultural anthropologist who featured frequently as an author and speaker in the mass media during the 1960s and 1970s. She earned her bachelor's degree at Barnard College in New York City and her MA and PhD degrees from Columbia University. Mead served as President of the American Association for the Advancement of Science in 1975.
“Rigoberta Menchú”, (2020-12-27):
Rigoberta Menchú Tum is a K'iche' Indigenous feminist and human rights activist from Guatemala. Menchú has dedicated her life to publicizing the rights of Guatemala's Indigenous peoples during and after the Guatemalan Civil War (1960–1996), and to promoting Indigenous rights internationally.
“Meat Consumption and Health: Food for Thought”, (2019-11-19):
For some time, medical and science organizations have been beating the drum that red and processed meat are bad for you. For almost as long, they have lamented that their efforts to inform the public have not convinced enough people to change their consumption. This month’s issue offers us food for thought on why. The field of nutritional epidemiology is plagued by observational studies that have conducted inappropriate analyses, accompanied by likely erroneous conclusions (1). Many studies selectively report results, and many lack an a priori hypothesis. Many use notoriously unreliable self-reports of food consumption while failing to collect or appropriately control for data on numerous potential confounders.
…Four more studies join the evidence base this month, and because they review all of the evidence that came before, they cannot be accused of cherry-picking. The first was a meta-analysis of cohort studies that focused on how dietary patterns, including differing amounts of red or processed meat, affected all-cause mortality, cardiometabolic outcomes, and cancer incidence and mortality (6). More than 100 studies including more than 6 million participants were analyzed. The overall conclusions were that dietary patterns, including differences in meat consumption, may result in only small differences in risk outcomes over long periods.
The next study was a meta-analysis that homed in specifically on cohort studies examining how reductions in red and processed meat might affect cancer incidence and mortality (7). It included 118 studies with more than 6 million participants, and it, too, found that the possible impact of reduced meat intake was very small. The third study was a meta-analysis of cohort studies that looked specifically at meat consumption and its relationship to all-cause mortality and cardiometabolic outcomes (8), and—once again—it found that any link was very small.
…In a fourth analysis in this issue (9), researchers examined randomized controlled trials that compared diets with differing amounts of red meat consumption for at least 6 months. They found 12 eligible studies, but one of them—the Women’s Health Initiative—was so large (almost 49 000 women) that it dominated the analysis. We can wish for more studies, and we could hope that they had more homogenous outcomes and better fidelity to assigned diets, but the overall conclusions from what they had were that “red meat may have little or no effect on major cardiometabolic outcomes and cancer mortality and incidence.”
…it may be time to stop producing observational research in this area. These meta-analyses include millions of participants. Further research involving much smaller cohorts has limited value. High-quality randomized controlled trials are welcome, but only if they’re designed to tell us things we don’t already know.
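(A back-of-the-envelope aside on why a single ~49,000-woman trial dominates a 12-trial pool: under the usual inverse-variance weighting, if each trial’s variance scales roughly as 1/n, then a trial’s weight is close to its share of the total sample. The small-trial sizes in the sketch below are hypothetical, not the review’s data.)

```python
# Illustrative only: weight shares under inverse-variance weighting when
# variance is taken to scale as 1/n (so weight is proportional to n).
# One ~49,000-participant trial plus 11 hypothetical small trials.
sizes = [49_000] + [500] * 11
total = sum(sizes)
shares = [n / total for n in sizes]
print(f"large-trial weight share: {shares[0]:.1%}")  # ~90% of the pooled weight
```

With ~90% of the pooled weight, the large trial effectively sets the pooled estimate, which is why the editorial describes the Women’s Health Initiative as dominating the analysis.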
-
Background: Dietary guidelines generally recommend limiting intake of red and processed meat. However, the quality of evidence implicating red and processed meat in adverse health outcomes remains unclear.
Purpose: To evaluate the association between red and processed meat consumption and all-cause mortality, cardiometabolic outcomes, quality of life, and satisfaction with diet among adults.
Data Sources: EMBASE (Elsevier), Cochrane Central Register of Controlled Trials (Wiley), Web of Science (Clarivate Analytics), CINAHL (EBSCO), and ProQuest from inception until July 2018 and MEDLINE from inception until April 2019, without language restrictions, as well as bibliographies of relevant articles.
Study Selection: Cohort studies with at least 1000 participants that reported an association between unprocessed red or processed meat intake and outcomes of interest.
Data Extraction: Teams of 2 reviewers independently extracted data and assessed risk of bias. One investigator assessed certainty of evidence, and the senior investigator confirmed the assessments.
Data Synthesis: Of 61 articles reporting on 55 cohorts with more than 4 million participants, none addressed quality of life or satisfaction with diet. Low-certainty evidence was found that a reduction in unprocessed red meat intake of 3 servings per week is associated with a very small reduction in risk for cardiovascular mortality, stroke, myocardial infarction (MI), and type 2 diabetes. Likewise, low-certainty evidence was found that a reduction in processed meat intake of 3 servings per week is associated with a very small decrease in risk for all-cause mortality, cardiovascular mortality, stroke, MI, and type 2 diabetes.
Limitation: Inadequate adjustment for known confounders, residual confounding due to observational design, and recall bias associated with dietary measurement.
Conclusion: The magnitude of association between red and processed meat consumption and all-cause mortality and adverse cardiometabolic outcomes is very small, and the evidence is of low certainty.
-
Background: Cancer incidence has continuously increased over the past few centuries and represents a major health burden worldwide.
Purpose: To evaluate the possible causal relationship between intake of red and processed meat and cancer mortality and incidence.
Data Sources: Embase, Cochrane Central Register of Controlled Trials, Web of Science, CINAHL, and ProQuest from inception until July 2018 and MEDLINE from inception until April 2019 without language restrictions.
Study Selection: Cohort studies that included more than 1000 adults and reported the association between consumption of unprocessed red and processed meat and cancer mortality and incidence.
Data Extraction: Teams of 2 reviewers independently extracted data and assessed risk of bias; 1 reviewer evaluated the certainty of evidence, which was confirmed or revised by the senior reviewer.
Data Synthesis: Of 118 articles (56 cohorts) with more than 6 million participants, 73 articles were eligible for the dose-response meta-analyses, 30 addressed cancer mortality, and 80 reported cancer incidence. Low-certainty evidence suggested that an intake reduction of 3 servings of unprocessed meat per week was associated with a very small reduction in overall cancer mortality over a lifetime. Evidence of low to very low certainty suggested that each intake reduction of 3 servings of processed meat per week was associated with very small decreases in overall cancer mortality over a lifetime; prostate cancer mortality; and incidence of esophageal, colorectal, and breast cancer.
Limitation: Limited causal inferences due to residual confounding in observational studies, risk of bias due to limitations in diet assessment and adjustment for confounders, recall bias in dietary assessment, and insufficient data for planned subgroup analyses.
Conclusion: The possible absolute effects of red and processed meat consumption on cancer mortality and incidence are very small, and the certainty of evidence is low to very low.
-
Background: Studying dietary patterns may provide insights into the potential effects of red and processed meat on health outcomes.
Purpose: To evaluate the effect of dietary patterns, including different amounts of red or processed meat, on all-cause mortality, cardiometabolic outcomes, and cancer incidence and mortality.
Data Sources: Systematic search of MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, CINAHL, Web of Science, and ProQuest Dissertations & Theses Global from inception to April 2019 with no restrictions on year or language.
Study Selection: Teams of 2 reviewers independently screened search results and included prospective cohort studies with 1000 or more participants that reported on the association between dietary patterns and health outcomes.
Data Extraction: Two reviewers independently extracted data, assessed risk of bias, and evaluated the certainty of evidence using GRADE (Grading of Recommendations Assessment, Development and Evaluation) criteria.
Data Synthesis: Eligible studies that followed patients for 2 to 34 years revealed low-certainty to very-low-certainty evidence that dietary patterns lower in red and processed meat intake result in very small or possibly small decreases in all-cause mortality, cancer mortality and incidence, cardiovascular mortality, nonfatal coronary heart disease, fatal and nonfatal myocardial infarction, and type 2 diabetes. For all-cause, cancer, and cardiovascular mortality and incidence of some types of cancer, the total sample included more than 400 000 patients; for other outcomes, total samples included 4000 to more than 300 000 patients.
Limitation: Observational studies are prone to residual confounding, and these studies provide low-certainty or very-low-certainty evidence according to the GRADE criteria.
Conclusion: Low-certainty or very-low-certainty evidence suggests that dietary patterns with less red and processed meat intake may result in very small reductions in adverse cardiometabolic and cancer outcomes.
“Health-Related Values and Preferences Regarding Meat Consumption: A Mixed-Methods Systematic Review”, (2019-11-19):
Background: A person’s meat consumption is often determined by their values and preferences.
Purpose: To identify and evaluate evidence addressing health-related values and preferences regarding meat consumption.
Data Sources: MEDLINE, EMBASE, Web of Science, Centre for Agriculture and Biosciences Abstracts, International System for Agricultural Science and Technology, and Food Science and Technology Abstracts were searched from inception to July 2018 without language restrictions.
Study Selection: Pairs of reviewers independently screened search results and included quantitative and qualitative studies reporting adults’ health-related values and preferences regarding meat consumption.
Data Extraction: Pairs of reviewers independently extracted data and assessed risk of bias.
Data Synthesis: Data were synthesized into narrative form, and summaries were tabulated and certainty of evidence was assessed using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Of 19 172 initial citations, 41 quantitative studies (38 addressed reasons for meat consumption and 5 addressed willingness to reduce meat consumption) and 13 qualitative studies (10 addressed reasons for meat consumption and 4 addressed willingness to reduce meat consumption) were eligible for inclusion. Thirteen studies reported that omnivores enjoy eating meat, 18 reported that these persons consider meat an essential component of a healthy diet, and 7 reported that they believe they lack the skills needed to prepare satisfactory meals without meat. Omnivores are generally unwilling to change their meat consumption. The certainty of evidence was low for both “reasons for meat consumption” and “willingness to reduce meat consumption in the face of undesirable health effects.”
Limitation: Limited generalizability of findings to lower-income countries, low-certainty evidence for willingness to reduce meat consumption, and limited applicability to specific types of meat (red and processed meat).
Conclusion: Low-certainty evidence suggests that omnivores are attached to meat and are unwilling to change this behavior when faced with potentially undesirable health effects.
-
Description: Dietary guideline recommendations require consideration of the certainty in the evidence, the magnitude of potential benefits and harms, and explicit consideration of people’s values and preferences. A set of recommendations on red meat and processed meat consumption was developed on the basis of 5 de novo systematic reviews that considered all of these issues.
Methods: The recommendations were developed by using the Nutritional Recommendations (NutriRECS) guideline development process, which includes rigorous systematic review methodology, and GRADE methods to rate the certainty of evidence for each outcome and to move from evidence to recommendations. A panel of 14 members, including 3 community members, from 7 countries voted on the final recommendations. Strict criteria limited the conflicts of interest among panel members. Considerations of environmental impact or animal welfare did not bear on the recommendations. Four systematic reviews addressed the health effects associated with red meat and processed meat consumption, and 1 systematic review addressed people’s health-related values and preferences regarding meat consumption.
Recommendations: The panel suggests that adults continue current unprocessed red meat consumption (weak recommendation, low-certainty evidence). Similarly, the panel suggests adults continue current processed meat consumption (weak recommendation, low-certainty evidence).
“Legume”, (2020-12-27):
A legume is a plant in the family Fabaceae, or the fruit or seed of such a plant. The seed is also called a pulse. Legumes are grown agriculturally, primarily for human consumption, for livestock forage and silage, and as soil-enhancing green manure. Well-known legumes include alfalfa, clover, beans, peas, chickpeas, lentils, lupins, mesquite, carob, soybeans, peanuts, and tamarind. Legumes produce a botanically unique type of fruit – a simple dry fruit that develops from a simple carpel and usually dehisces on two sides.
“Phreaking”, (2020-12-22):
Phreaking is a slang term coined to describe the activity of a culture of people who study, experiment with, or explore telecommunication systems, such as equipment and systems connected to public telephone networks. The term phreak is a sensational spelling of the word freak with the ph- from phone, and may also refer to the use of various audio frequencies to manipulate a phone system. Phreak, phreaker, or phone phreak are names used for and by individuals who participate in phreaking.
“Blue box”, (2020-12-27):
A blue box is an electronic device that generates the in-band signaling tones formerly generated by telephone operator consoles to control telephone switches. Developed during the 1960s, blue boxes allowed private individuals to control long-distance call routing and to bypass the toll-collection mechanisms of telephone companies, enabling the user to place free long-distance telephone calls on national and international circuits.
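(Not from the article: as a toy illustration of what in-band signaling meant in practice, the sketch below synthesizes the famous 2600 Hz trunk-seize tone followed by a single multi-frequency pair (KP, commonly given as 1100 Hz + 1700 Hz) into a WAV file; frequencies, durations, and amplitudes are illustrative only.)

```python
# Illustrative tone synthesis: write a short WAV with the 2600 Hz seize tone
# followed by one MF pair (KP, commonly given as 1100 Hz + 1700 Hz).
import math, struct, wave

RATE = 8000  # telephone-bandwidth sample rate

def tone(freqs, seconds, amplitude=0.4):
    """Return 16-bit PCM samples of the averaged sum of the given sine frequencies."""
    samples = []
    for i in range(int(RATE * seconds)):
        t = i / RATE
        x = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
        samples.append(int(amplitude * 32767 * x))
    return samples

pcm = tone([2600], 1.0) + tone([1100, 1700], 0.1)   # seize tone, then 'KP'
with wave.open("bluebox_demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 2 bytes = 16-bit samples
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % len(pcm), *pcm))
```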
“John Draper”, (2020-12-27):
John Thomas Draper, also known as Captain Crunch, Crunch or Crunchman, is an American computer programmer and legendary former phone phreak. He is a widely known figure within the computer programming world and the hacker and security community and generally lives a nomadic lifestyle. Following the emergence of the Me Too movement in 2017, allegations against him dating back decades surfaced in media reports and in social media posts concerning claims of inappropriate sexual behavior with young men. Draper denied any sexual intent but did not address all of the allegations directly.
“ConTeXt”, (2020-12-27):
ConTeXt is a general-purpose document processor. Like LaTeX, it is derived from TeX. It is especially suited for structured documents, automated document production, very fine typography, and multi-lingual typesetting. It is based in part on the TeX typesetting system, and uses a document markup language for manuscript preparation. The typographical and automated capabilities of ConTeXt are extensive, including interfaces for handling microtypography, multiple footnotes and footnote classes, and manipulating OpenType fonts and features. Moreover, it offers extensive support for colors, backgrounds, hyperlinks, presentations, figure-text integration, and conditional compilation. It gives the user extensive control over formatting while making it easy to create new layouts and styles without learning the low-level TeX macro language.
“Initial”, (2020-12-27):
In a written or published work, an initial or drop cap is a letter at the beginning of a word, a chapter, or a paragraph that is larger than the rest of the text. The word is derived from the Latin initialis, which means standing at the beginning. An initial is often several lines in height and, in older books or manuscripts, is sometimes ornately decorated.
“Long s”, (2020-12-27):
The long s, ſ, is an archaic form of the lower case letter s. It replaced the single s, or one or both of the letters s in a double s. The long s is the basis of the first half of the grapheme of the German alphabet ligature letter ß, which is known as the Eszett. The modern letterform is known as the short, terminal, or round s.
“R. Scott Bakker”, (2020-12-27):
Richard Scott Bakker is a Canadian fantasy author and frequent lecturer in the Southwestern Ontario university community. He grew up on a tobacco farm in the Simcoe area. In 1986 he entered the University of Western Ontario to pursue a degree in literature and later an MA in theory and criticism. Since the late 1990s, he has been attempting to elucidate theories of media bubbles and the intellectual alienation of the working class. After reaching all-but-dissertation status in a philosophy PhD at Vanderbilt University, he returned to London, Ontario, where he now lives with his wife and daughter. He splits his writing between his fiction and his ongoing philosophical inquiry.
Johnson, interestingly, like Bouchard, was influenced by Dunnette 1966 (and also Wolins 1962).↩︎