---
title: January 2021 News
description: January 2021 Gwern.net newsletter with links on AI scaling up and down.
created: 2020-01-02
thumbnail: /doc/ai/nn/gan/stylegan/anime/2021-01-gwern-tadne-randomsample.jpg
thumbnailText: "A high-quality cherrypicked illustration of an anime girl with long blue hair, red eyes, wearing a blue kimono and hat. Generated by a large StyleGAN neural network trained on Danbooru2019 images for This Anime Does Not Exist.ai."
status: finished
previous: /newsletter/2020/12
next: /newsletter/2021/02
confidence: log
cssExtension: dropcaps-de-zs
backlink: False
...

January 2021's [Gwern.net](/newsletter/2021/01 "'January 2021 News', Branwen 2020") [newsletter](https://gwern.substack.com/ "'Gwern.net newsletter (Substack subscription page)', Branwen 2013") is now out; previous, [December 2020](/newsletter/2020/12 "'December 2020 News', Branwen 2019") ([archives](/doc/newsletter/index)). This is a collation of links and summary of major changes, overlapping with my [Changelog](/changelog) & /r/gwern; brought to you by my donors on [Patreon](https://www.patreon.com/gwern).

# Writings

- ["Danbooru2020: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset"](/danbooru2021#danbooru2020 "Danbooru2020 is a large-scale anime image database with 4.2m+ images annotated with 130m+ tags; it can be useful for machine learning purposes such as image recognition and generation.")
- [This Anime Does Not Exist.ai (TADNE)](https://thisanimedoesnotexist.ai/) ([discussion](/face#extended-stylegan2-danbooru2019-aydao))
- **Gwern.net**: +return-to-top floating button; *popups*: can now be disabled (use the 'gear' icon); final reimplementation (dynamic JS now; memoizing the recursive inlining, however clever & elegant, turns out to have painful edge-cases & still not be efficient enough---web browsers *really* don't like loading hundreds of kilobytes of extra HTML)

# Links

## AI

[Matters Of Scale](https://www.reddit.com/r/mlscaling/ "'ML Scaling subreddit', Branwen 2020"):

- **Scaling up**:

  - ["DALL·E 1: Creating Images from Text"](https://openai.com/research/dall-e "'DALL·E 1: Creating Images from Text: We’ve trained a neural network called DALL·E 1 that creates images from text captions for a wide range of concepts expressible in natural language', Ramesh et al 2021"), OpenAI (GPT-3-12.5b generating 1280 tokens → [VQ-VAE](https://arxiv.org/abs/1906.00446#deepmind "'Generating Diverse High-Fidelity Images with VQ-VAE-2', Razavi et al 2019") pixels; generates illustrations & photos); ["CLIP (Contrastive Language-Image Pre-training): Connecting Text and Images"](https://openai.com/research/clip "'CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3', Radford et al 2021"), OpenAI ([Radford et al 2021](https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf "Learning Transferable Visual Models From Natural Language Supervision"){#radford-et-al-2021-paper}: zero-shot image understanding via text description---useful for much more than just ranking DALL·E 1 samples by quality)

    Further [blessings of scale](/scaling-hypothesis#blessings-of-scale): simple [contrastive](https://arxiv.org/abs/2010.05113 "'Contrastive Representation Learning: A Framework and Review', Le-Khac et al 2020") training on _n_ = 400m leads to remarkable generalization & combinatorial flexibility of image generation by DALL·E 1, and CLIP learns to reach image classification SOTA by zero-shot on many datasets, with more human-like errors & less degradation out-of-sample than rivals, while costing the same to train. OpenAI released their smallest CLIP model (the "[ViT](https://arxiv.org/abs/2010.11929#google "Vision Transformer (ViT): An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale")-B/32"-equivalent), and people are discovering it seems able to do just about anything without any further training---the paper notes that it does everything from "fine-grained object classification, geo-localization, action recognition in videos, and OCR", but there's so much more: you can use it to generate image captions/descriptions, classify your anime images, pull a specific target image matching a text description out of another neural network such as an ImageNet [BigGAN](https://arxiv.org/abs/1809.11096#deepmind "'BigGAN: Large Scale GAN Training for High Fidelity Natural Image Synthesis', Brock et al 2018") or the TADNE StyleGAN2-ext by gradient ascent (or, why not, synthesize images embodying abstract concepts like emoji or words like "nightmare fuel" or "confusion"!), search your image datasets by embedding, find mislabeled images (eg. by [using "upside down" as the prompt](https://twitter.com/quasimondo/status/1351191660059832320 "I added #CLIP to my image labeling tool and have now full text search over my various collections. Here are 'potato', 'xray of a hand','skull' and 'circle' from my @internetarchive set. Besides labeling CLIP is also useful to find images that are wrongly oriented. In this case my tool works by example, but a text search for 'upside down' also often works."))... One wonders, as with GPT-3, how much better the largest CLIP ("L/14-336px") is, and how many ways of using it (or DALL·E 1) remain to be found? And why do prediction losses work so well in one place, but contrastive losses elsewhere?
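    The zero-shot recipe is simple enough to fit in a few lines: embed the image, embed each candidate label as text, and softmax the similarities. A minimal sketch using the `clip` package OpenAI released (the filename & labels below are placeholders):

    ```python
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)  # the released small model

    image = preprocess(Image.open("sample.jpg")).unsqueeze(0).to(device)
    labels = ["a photo of a cat", "a photo of a dog", "an upside-down photo"]
    text = clip.tokenize(labels).to(device)

    with torch.no_grad():
        # scaled cosine similarities between the image & each candidate label:
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1)

    for label, p in zip(labels, probs[0].tolist()):
        print(f"{p:.3f}  {label}")
    ```

    The same image/text embeddings drive everything else: nearest-neighbor search over a dataset, or gradient ascent on a GAN's latent code to maximize similarity with a target caption.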
    For perspective: there are newly-minted PhDs going on the job market who got excited about deep learning because of these new ["resnet"](https://arxiv.org/abs/1512.03385#microsoft "'Deep Residual Learning for Image Recognition', He et al 2015") things; undergrads who applied to grad school because [BERT](https://arxiv.org/abs/1810.04805#google "'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', Devlin et al 2018") et al were blowing open NLP & extending neural supremacy to natural language would not yet have passed quals; and it has been only 1 academic semester since [GPT-3](https://arxiv.org/abs/2005.14165#openai "'GPT-3: Language Models are Few-Shot Learners', Brown et al 2020") was announced. Or to put it quantitatively, for just sequence modeling: it has been 8,478 days since [LSTM](/doc/ai/nn/rnn/1997-hochreiter.pdf#schmidhuber "'Long Short-Term Memory', Hochreiter & Schmidhuber 1997") RNNs were published; 3,045 days since [AlexNet's](https://papers.nips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf "'ImageNet Classification with Deep Convolutional Neural Networks', Krizhevsky et al 2012") ImageNet scores were released; 1,880 days since residual networks were published; 1,330 days since ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762#google "Vaswani et al 2017") hit Arxiv; 844 days since BERT's paper was published; 718 days since [GPT-2](https://openai.com/research/better-language-models "'Better Language Models and Their Implications', OpenAI 2019") was announced; 353 days since [SimCLR](https://arxiv.org/abs/2002.05709#google "'A Simple Framework for Contrastive Learning of Visual Representations', Chen et al 2020") & 249 days since GPT-3; and 27 days since CLIP/DALL·E 1.^[But it'll still be too many days 'till we say we're sorry.] [Spring is coming.](https://jetpress.org/volume1/moravec.htm "'When will computer hardware match the human brain?', Moravec 1998") (Some still insist we need not worry about "overpopulation on Mars" for >18,264 more days...)
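    (The day-counts are a one-liner to check; in this sketch the milestone dates are my assumed publication/announcement dates, so expect off-by-ones depending on which day of the month one counts from:)

    ```python
    # How long ago were the sequence-modeling milestones, as of this newsletter?
    from datetime import date

    newsletter = date(2021, 1, 31)  # end of the month covered
    milestones = {
        "LSTM (Hochreiter & Schmidhuber)": date(1997, 11, 15),
        "AlexNet ImageNet results":        date(2012, 9, 30),
        "Attention Is All You Need":       date(2017, 6, 12),
        "GPT-3":                           date(2020, 5, 28),
    }
    for name, day in milestones.items():
        print(f"{(newsletter - day).days:>6,} days since {name}")
    ```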
- ["Meta Pseudo Labels"](https://arxiv.org/abs/2003.10580#google), Pham et al 2020 (90% on ImageNet by pretraining a meta-learning teacher using JFT-300M on a TPUv3-2048) - ["Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity"](https://arxiv.org/abs/2101.03961#google), Fedus et al 2021 (1.57t-parameter [GShard](https://arxiv.org/abs/2006.16668#google "'GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding', Lepikhin et al 2020") followup; the mixture-of-experts approach, while scaling stably, starts showing its limits) - **Scaling down**: - ["DeiT: Training data-efficient image transformers & distillation through attention"](https://arxiv.org/abs/2012.12877#facebook "'Training data-efficient image transformers & distillation through attention', Touvron et al 2020"), Touvron et al 2020 (scaling Transformer classifiers down to ImageNet+1-GPU); ["BoTNet: Bottleneck Transformers for Visual Recognition"](https://arxiv.org/abs/2101.11605#google "'Bottleneck Transformers for Visual Recognition', Srinivas et al 2021"), Srinivas et al 2021/["Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet"](https://arxiv.org/abs/2101.11986), Yuan et al 2021 (hybrids); ["not-so-BigGAN: Generating High-Fidelity Images on Small Compute with Wavelet-based Super-Resolution"](https://arxiv.org/abs/2009.04433), Han et al 2020/["VQGAN: Taming Transformers for High-Resolution Image Synthesis"](https://compvis.github.io/taming-transformers/), Esser et al 2020 (training >1024px Transformer GANs on just 2 GPUs) Transformer supremacy in image-related tasks continues, and GANs are becoming increasingly hybridized. Do pure-GANs have a future, now that VAEs and autoregressive models are making such inroads into both the highest-quality & lowest-compute sample generation? To take the GAN/DRL analogy seriously, perhaps they were they ultimately a dead end, akin to trying to learn everything from rewards, and an adversarial GAN loss ought to be only [the cherry on the cake](/doc/ai/nn/2019-lecun-isscctalk-cake.png) of a large unsupervised/semi-supervised generative model. - ["ZeRO-Offload: Democratizing Billion-Scale Model Training"](https://arxiv.org/abs/2101.06840#microsoft), Ren et al 2021 (partial CPU training for 13b-parameter models on 1 V100 GPU, scaling to 128 GPUs) - ["Prefix-Tuning: Optimizing Continuous Prompts for Generation"](https://arxiv.org/abs/2101.00190), Li & Liang 2021 (could the [PET](https://arxiv.org/abs/2009.07118 "'It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners', Schick & Schütze et al 2020") & CLIP trick of averaging multiple embeddings to yield much better performance be reused for GPT-3 prompts to greatly improve prompting? The fact that the prefix-tuning, by directly optimizing the prompt embeddings, yields better performance than even single optimized text prompts, suggests so. The user could provide 3 or 4 similar prompts, and synthesize them into a single super-prompt to better program GPT-3...) 
- ["Scaling down Deep Learning"](https://greydanus.github.io/2020/12/01/scaling-down/), Greydanus 2020 (cute: parametric simplified-MNIST for rapid iteration on [highly-optimized](/note/faster "'Computer Optimization: Your Computer Is Faster Than You Think', Branwen 2021") tiny NNs: experiments in lottery-ticket & meta-learning of LRs/activations) - ["The neural network of the Stockfish chess engine"](https://cp4space.hatsya.com/2021/01/08/the-neural-network-of-the-stockfish-chess-engine/ "'NNUE: The neural network of the Stockfish chess engine', Goucher 2021") (very lightweight NN designed for incremental recomputation over changing board states) - ["Transformers in Vision: A Survey"](https://arxiv.org/abs/2101.01169), Khan et al 2021 - [OpenAI departures](https://openai.com/blog/organizational-update/ "'Organizational Update from OpenAI', OpenAI 2020"): Dario Amodei, Sam McCandlish, Tom Brown, Tom Henighan, Chris Olah, Jack Clark, Ben Mann, Paul Christiano et al leave---most for an unspecified new entity (["the elves leave Middle Earth"](https://steveblank.com/2009/12/21/the-elves-leave-middle-earth-%E2%80%93-soda%E2%80%99s-are-no-longer-free/ "'The Elves Leave Middle Earth—Sodas Are No Longer Free', Blank 2009")?) And the rest: - ["2020 AI Alignment Literature Review and Charity Comparison"](https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison), Larks - ["Grounded Language Learning Fast and Slow"](https://arxiv.org/abs/2009.01719#deepmind), Hill et al 2020 - ["DeBERTa: Decoding-enhanced BERT with Disentangled Attention"](https://arxiv.org/abs/2006.03654#microsoft), He et al 2020 ([SuperGLUE](https://arxiv.org/abs/1905.00537 "'SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems', Wang et al 2019") [falls](https://super.gluebenchmark.com/leaderboard/)) - ["Solving Mixed Integer Programs Using Neural Networks"](https://arxiv.org/abs/2012.13349#deepmind), Nair et al 2020/[Sonnerat et al 2021](https://arxiv.org/abs/2107.10201#deepmind "Learning a Large Neighborhood Search Algorithm for Mixed Integer Programs") - ["Towards Fully Automated Manga Translation"](https://arxiv.org/abs/2012.14271), Hinami et al 2020 - ["UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers"](https://arxiv.org/abs/2101.08001#baidu), Hu et al 2021 - ["FERM: A Framework for Efficient Robotic Manipulation"](https://arxiv.org/abs/2012.07975#bair "'A Framework for Efficient Robotic Manipulation', Zhan et al 2020"), Zhan et al 2021 (contrastive semi-supervised learning + data augmentation for sample-efficiency) - ["XMC-GAN: Cross-Modal Contrastive Learning for Text-to-Image Generation"](https://arxiv.org/abs/2101.04702#google), Zhang et al 2021 ## Genetics Everything Is Heritable: - ["Nurture might be nature: cautionary tales and proposed solutions"](https://www.nature.com/articles/s41539-020-00079-z), Hart et al 2021 - ["A genetic perspective on the association between exercise and mental health in the era of genome-wide association studies"](https://www.sciencedirect.com/science/article/pii/S1755296620300624), de Geus 2020; ["Evidence for shared genetics between physical activity, sedentary behaviour and adiposity-related traits"](/doc/genetics/heritable/correlation/mendelian-randomization/2020-schnurr.pdf "‘Evidence for shared genetics between physical activity, sedentary behavior and adiposity-related traits’, Schnurr et al 2020"), Schnurr et al 2020 - ["Antidepressant Response in Major 
Depressive Disorder: A Genome-wide Association Study"](https://www.medrxiv.org/content/10.1101/2020.12.11.20245035.full), Pain et al 2020 - ["Genome wide analysis of gene dosage in 24,092 individuals shows that 10,000 genes modulate cognitive ability"](https://www.biorxiv.org/content/10.1101/2020.04.03.024554.full "‘Estimating the effect-size of gene dosage on cognitive ability across the coding genome’, Huguet et al 2020"), Huguet et al 2020 (yep, still polygenic) - ["GWAS of three molecular traits highlights core genes and pathways alongside a highly polygenic background"](https://www.biorxiv.org/content/10.1101/2020.04.20.051631.full), Sinnott-Armstrong et al 2021 - ["Genome-scale sequencing and analysis of human, wolf and bison DNA from 25,000 year-old sediment"](https://www.biorxiv.org/content/10.1101/2021.01.08.425895.full), Gelabert et al 2021 (incredible this is possible) - ["Disentangling sex differences in the shared genetic architecture of PTSD, traumatic experiences, and social support with body size and composition"](https://www.medrxiv.org/content/10.1101/2021.01.25.21249961.full "'Disentangling sex differences in the shared genetic architecture of post-traumatic stress disorder, traumatic experiences, and social support with body size and composition', Carvalho et al 2021"), Carvalho et al 2021 ([LCV](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6684375/ "'Distinguishing genetic correlation from causation across 52 diseases and complex traits', O'Connor & Price 2018")) Recent Evolution: - ["African genetic diversity and adaptation inform a precision medicine agenda"](/doc/genetics/selection/natural/human/2021-pereira.pdf), Pereira et al 2021; ["The influence of evolutionary history on human health and disease"](https://www.nature.com/articles/s41576-020-00305-9), Benton et al 2021; ["Local adaptation and archaic introgression shape global diversity at human structural variant loci"](https://www.biorxiv.org/content/10.1101/2021.01.26.428314.full), Yan et al 2021 - ["Genome scans of dog behavior implicate a gene network underlying psychopathology in mammals, including humans"](https://www.biorxiv.org/content/10.1101/2020.07.19.211078.full), Zapata et al 2021 - ["Natural Selection in Contemporary Humans is Linked to Income and Substitution Effects"](https://ideas.repec.org/p/uea/ueaeco/2021-02.html), Hugh-Jones & Abdellaoui 2021 - ["The diversity and function of sourdough starter microbiomes"](https://elifesciences.org/articles/61644), Landis et al 2021 (crowdsourced sourdough show little trace of geographic origins?) Engineering: - ["In vivo base editing rescues Hutchinson-Gilford progeria syndrome in mice"](/doc/genetics/editing/2021-koblan.pdf), Koblan et al 2021 - ["From Genotype to Phenotype: polygenic prediction of complex human traits"](https://arxiv.org/abs/2101.05870), Raben et al 2021 ## Statistics/Meta-Science - ["The Quantum Field Theory on Which the Everyday World Supervenes"](https://arxiv.org/abs/2101.07884), Carroll 2021 ("...we have reason to be confident that the laws of physics underlying the phenomena of everyday life are completely known" because all unknown particles/fields are constrained to being extremely rare/weak, eg. 
- ["How accurate are citations of frequently cited papers in biomedical literature?"](https://www.biorxiv.org/content/10.1101/2020.12.10.419424.full), Pavlovic et al 2020 (includes the original authors' evaluations of whether a citation of their work is correct)
- ["Energy-Efficient Algorithms"](https://arxiv.org/abs/1605.08448), Demaine et al 2016 ([reversible computing](https://en.wikipedia.org/wiki/Reversible_computing) asymptotics: constant-factor [stacks](https://en.wikipedia.org/wiki/Stack_\(abstract_data_type\))/[arrays](https://en.wikipedia.org/wiki/Dynamic_array), 𝒪(log _n_) time/energy [AVL trees](https://en.wikipedia.org/wiki/AVL_tree), 𝒪(_n_) space [sorts](https://en.wikipedia.org/wiki/Comparison_sort), & various 𝒪(Vertex+Edge) time/space/energy [graph searches](https://en.wikipedia.org/wiki/Graph_traversal))
- ["The Optimizer's Curse: Skepticism and Postdecision Surprise in Decision Analysis"](/doc/statistics/decision/2006-smith.pdf), Smith & Winkler 2006 ([regression to the mean](/note/regression "'Regression To The Mean Fallacies', Branwen 2021") is everywhere; another example of why Bayes & decision theory are two great flavors that go great together)
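    The curse is easy to simulate (a sketch with made-up numbers: several options of identical true value, noisy value estimates, and one always picks the apparent best):

    ```python
    # Postdecision surprise from pure selection noise: every option has the
    # same true value, yet the chosen ("best-looking") one reliably disappoints.
    import random

    random.seed(0)
    TRUE_VALUE, NOISE_SD, K, TRIALS = 100.0, 10.0, 10, 10_000

    surprise = 0.0
    for _ in range(TRIALS):
        estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(K)]
        surprise += max(estimates) - TRUE_VALUE  # realized value is just TRUE_VALUE
    print(surprise / TRIALS)  # ≈ +15: the winning estimate overshoots by ~1.5 SD
    ```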
## Politics/Religion

- ["The Mechanisms of Cult Production: An Overview"](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3650704), Xavier Marquez 2020 (see previously his [blog roundup](/doc/sociology/abandoned-footnotes/index))
- ["When Prophecy Fails and Faith Persists: A Theoretical Overview"](/doc/sociology/1999-dawson.pdf), Dawson 1999
- ["Why We Fight Over Fiction"](https://www.overcomingbias.com/p/why-we-fight-over-fictionhtml), Robin Hanson
- [The All-Woman Supreme Court](https://en.wikipedia.org/wiki/All-Woman_Supreme_Court)

## Psychology/Biology

- ["Still Alive"](https://www.astralcodexten.com/p/still-alive), Scott Alexander (announcing SSC's return as the Substack newsletter 'Astral Codex Ten' & the launch of a low-cost psychiatry clinic, 'Lorien Psychiatry')
- ["The Temporal Dynamics of Opportunity Costs: A Normative Account of Cognitive Fatigue and Boredom"](https://www.biorxiv.org/content/10.1101/2020.09.08.287276.full), Agrawal et al 2020
- ["A unified framework for association and prediction from vertex-wise grey-matter structure"](https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.25109), Couvy-Duchesne et al 2020 (more [morphometricity](/note/variance-component "'Variance Components Beyond Genetics', Branwen 2019"))
- **Common phenomena**: ["Sounds from seeing silent motion: Who hears them, and what looks loudest?"](/doc/psychology/2018-fassnidge.pdf), Fassnidge & Freeman 2018 (on 'visual ear'; previously: [Saenz & Koch 2008](https://www.sciencedirect.com/science/article/pii/S0960982208007343 "The sound of change: visually-induced auditory synaesthesia"), [Fassnidge et al 2017](/doc/psychology/2017-fassnidge.pdf "A deafening flash! Visual interference of auditory signal detection"))
- ["Predicting Mental Health From Followed Accounts on Twitter"](https://online.ucpress.edu/collabra/article/7/1/18731/115925/Predicting-Mental-Health-From-Followed-Accounts-on), Costelli et al 2021 ([Registered Report](https://en.wikipedia.org/wiki/Preregistration_\(science\)#Registered_reports): who you choose to follow says a lot about you---[everything is correlated](/everything))
- ["No evidence for general intelligence in a fish"](https://www.biorxiv.org/content/10.1101/2021.01.08.425841.full), Aellen et al 2021
- [Delirium tremens](https://en.wikipedia.org/wiki/Delirium_tremens) (why the pink elephants in _Dumbo_? The inmates [temporarily ran the asylum](/doc/anime/1990-langer.pdf "'Regionalism in Disney Animation: Pink Elephants and Dumbo', Langer 1990").)
- ["Microbiome connections with host metabolism and habitual diet from 1,098 deeply phenotyped individuals"](/doc/genetics/microbiome/2021-asnicar.pdf), Asnicar et al 2021
- ["Universal DNA methylation age across mammalian tissues"](https://www.biorxiv.org/content/10.1101/2021.01.18.426733.full), Lu et al 2021; ["Whole-body senescent cell clearance alleviates age-related brain inflammation and cognitive impairment in mice"](https://onlinelibrary.wiley.com/doi/full/10.1111/acel.13296), Ogrodnik et al 2021
- ["BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data"](https://arxiv.org/abs/2101.12037), Kostas et al 2021 (towards brain imitation learning)
- [Parker-Hulme murder case](https://en.wikipedia.org/wiki/Parker%E2%80%93Hulme_murder_case); [the Slender Man stabbing](https://en.wikipedia.org/wiki/Slender_Man_stabbing) ([paracosms?](https://en.wikipedia.org/wiki/Paracosm))
- **Correction**: [programming competition skills do not inversely correlate with job performance](https://news.ycombinator.com/item?id=25426329 "'Comment by Peter Norvig on "Being good at programming competitions correlates negatively with being good on the job"', Norvig 2020") after all

## Technology

- [Natural nuclear fission reactors (Oklo)](https://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor)
- ["Baffles and Bastions: The Universal Features of Fortifications"](/doc/history/2007-keeley.pdf), Keeley et al 2007
- [The Corrupted Blood incident](https://en.wikipedia.org/wiki/Corrupted_Blood_incident)
- [_Footnote_ 36: "Redisturbed"](/doc/design/typography/2020-jeremytankard-footnote-36-redisturbed.pdf "'Footnote 36: Redisturbed: In This Issue We're Focusing on the Redisturbed Typeface For The New Decade [Redisturbed is a fresh look at our original Disturbance typeface from 1993. Looking deeper at the concept of an unicase alphabet and designing it for expanded use today. More weights, optical sizes, language support and OpenType features.]', Tankard 2020"): a *unicase* font experiment

## Economics
- ["Businesses Aim to Pull Greenhouse Gases From the Air. It's a Gamble"](https://www.nytimes.com/2021/01/18/climate/carbon-removal-technology.html "'Businesses Aim to Pull Greenhouse Gases From the Air. It’s a Gamble. A surge of corporate money could soon transform carbon removal from science fiction to reality. But there are risks: The very idea could offer industry an excuse to maintain dangerous habits', Plumer & Flavelle 2021")
- ["Does Advertising](https://freakonomics.com/podcast/does-advertising-actually-work-part-1-tv-ep-440/) [Actually Work?"](https://freakonomics.com/podcast/does-advertising-actually-work-part-2-digital-ep-441/) (what could be more obvious than "advertising works", and trivial to confirm with correlational data? Yet the tedious saying "correlation ≠ causation" stubbornly insists on being true); ["Digital Paywall Design: Implications for Content Demand and Subscriptions"](/doc/economics/advertising/2020-aral.pdf), Aral & Dhillon 2020 (NYT nag-paywall caused −9.9% reading; in line with [all the other results](/banner "'Banner Ads Considered Harmful', Branwen 2017"))
- ["Who Gains and Who Loses from Credit Card Payments? Theory and Calibrations"](/doc/economics/2010-schuh.pdf), Schuh et al 2010 (a compelling case for getting a rewards credit card if you're a [debit card](https://en.wikipedia.org/wiki/Debit_card) user---why subsidize them so much?)
- ["Squeezing the bears: cornering risk and limits on arbitrage during the 'British bicycle mania', 1896--1898"](/doc/economics/2019-quinn.pdf), Quinn 2019

## Fiction

- ["On Venus, Have We Got a Rabbi!"](https://www.tabletmag.com/sections/arts-letters/articles/on-venus-have-we-got-a-rabbi "A long-lost space age satire about what it means to be a Jew from one of science fiction's greatest humorists"), [William Tenn](https://en.wikipedia.org/wiki/William_Tenn) 2016
- ["St Martin’s Four Wishes"](/doc/history/2013-dubin-fabliauxtranslations-stmartinsfourwishes.pdf), Anonymous [medieval poet](https://en.wikipedia.org/wiki/Fabliau) (trans. Dubin 2013)

## Miscellaneous

- The [Anglo-Japanese style](https://en.wikipedia.org/wiki/Anglo-Japanese_style)
- [Stalag Luft III](https://en.wikipedia.org/wiki/Stalag_Luft_III)
- [Ferdinandea](https://en.wikipedia.org/wiki/Graham_Island_\(Mediterranean_Sea\))