2015-04-30–2021-01-04
finished
certainty: log
importance: 0
This is the May 2015 edition of the Gwern.net newsletter; previous, April 2015. This is a summary of the revision-history RSS feed, overlapping with the Changelog & /r/gwern subreddit.
Writings
- Darknet market arrests compilation finished & summaries calculated
- analysis of Effective Altruists’ donations as reported in the LW survey
Media
Links
Genetics:
Everything Is Heritable:
Politics/
- “Misperceiving Inequality”, Gimpelson & Treisman 2018 (excerpts)
- “Land without Plea Bargaining: How the Germans Do It”, Langbein 1979 (how the German legal system avoids our own reliance on plea-bargaining to send most defendants to jail without trial)
- “The Nitrous Oxide Philosopher”
- “The Science of Sex Abuse: Is it right to imprison people for heinous crimes they have not yet committed?”
- “Why the future won’t be genetically homogeneous”
- “Aged Heterogeneity: Fact or Fiction? The Fate of Diversity in Gerontological Research”, Nelson & Dannefer 1992
Statistics/
- “End-to-End Training of Deep Visuomotor Policies”, Levine et al 2015 (demo; talk)
- “The Unreasonable Effectiveness of Recurrent Neural Networks”
- “Using N-of-1 Trials to Improve Patient Management and Save Costs”, Scuffham 2010
- “Assessing Kurzweil: the results”
- “A survey of Bayesian predictive methods for model assessment, selection and comparison”, Vehtari & Ojanen 2012
- “The Nine Circles of Scientific Hell”, Neuroskeptic 2012
- “Leaving Office Feet First: Death In Congress”, Maltzman et al 1996
- “Nuclear weapon statistics using monoids, groups, and modules in Haskell”
- “Flat Priors and Other Improbable Tales”
Psychology/
- “Persistence of Long-Term Memory in Vitrified and Revived C. elegans”, Vita-More & Barranco 2015
- “The effects of oral iron supplementation on cognition in older children and adults: a systematic review and meta-analysis”, Falkingham et al 2010
- “Historical Review and Appraisal of Research on the Learning, Retention, and Transfer of Human Motor Skills”, Adams 1987
- “Programmer Interrupted”
- “E unibus pluram: Television And U.S. Fiction”, David Foster Wallace 1993
- 2,4-Dinitrophenol
- “What’s in a Color? The Unique Human Health Effects of Blue Light”, Holzman 2010
- “Diving Deep into Danger”
- “Late Night Thoughts on Reading Scientology”
Technology:
- “Epigrams on Programming”, Alan Perlis 1982
- “Implications of Historical Trends in the Electrical Efficiency of Computing”, Koomey et al 2011
- “How Much of the Web Is Archived?”, Ainsworth et al 2012
- “Actual Facebook Graph Searches”
Economics:
- “Overkill: An avalanche of unnecessary medical care is harming patients physically and financially. What can we do about it?”
- “The Insourcing Boom”; “The March of Robots Into Chinese Factories”
- “Jerven on Measuring African Poverty and Progress”
- “Online price discrimination: Conspicuous by its absence” / “Price Discrimination and the Illusion of Fairness”
- “On the front lines of humanity’s high-tech, global war on rats”
- “Smart Machines and Long-Term Misery”, Sachs & Kotlikoff; “The Ricardo effect in Europe (Germany fact of the day)”; “Four Futures: One thing we can be certain of is that capitalism will end”; “Has the ideas machine broken down?”; “The Post-Productive Economy”
- “Harder Choices Matter Less”
Philosophy:
- “The time resolution of the St Petersburg paradox”, Peters 2011
- “The Space Child’s Mother Goose”
Fiction:
- “Man And The Echo”, Yeats
- “Single-Bit Error”
- “Toward an Algorithmic Criticism”, Ramsay 2003
Misc:
Books
Fiction:
- Ready Player One, Cline 2011 (review)
- A Shropshire Lad, Housman 1896 (review)
- Floornight, nostalgebraist (recommended on SSC; overall cool—interesting concepts and developments and various bright spots compensate for some of the issues like pacing and wooden writing)
Nonfiction:
- Drift into Failure, Dekker (review)
Film/TV
Live-action:
- Pumping Iron (review)
Anime:
- Monthly Girls’ Nozaki-kun
Music
Touhou:
- “Old Fantasia” (Sound of Swing; In Walked Cat’s {C86}) [jazz]
- “恋と水と石炭と” (ししまい三号; 大楽符 {C77}) [classical]
- “水彩人形” (ししまい三号; 大楽符 {C77}) [classical]
- “空の間に間に” (ししまい三号; 大楽符 {C77}) [classical]
- “Dazzling You” (Maurits“禅”Cornelis feat. Vivienne; Origin of Love {C87}) [jazz]
- “少女綺想曲” (TAM×KEIGO KANZAKI; 東方バイオリンロック永-NAGARAE- {C87}) [instrumental rock]
- “エクステンドアッシュ” (TAM×KEIGO KANZAKI; 東方バイオリンロック永-NAGARAE- {C87}) [instrumental rock]
- “月見草” (TAM×KEIGO KANZAKI; 東方バイオリンロック永-NAGARAE- {C87}) [instrumental rock]
Kantai Collection:
- “鎮守府の序曲” (伊藤 翼; Kantai Philharmonic Orchestra {C86}) [orchestral]
Link Bibliography
Bibliography of page links in reading order (with annotations when available):
“April 2015 News”, (2015-03-31):
N/A
“Changelog”, (2013-09-15):
This page is a changelog for Gwern.net: a monthly reverse-chronological list of recent major writings/changes/additions. Following my writing can be a little difficult because it is often so incremental. So every month, in addition to my regular /r/gwern subreddit submissions, I write up reasonably-interesting changes and send it out to the mailing list in addition to a compilation of links & reviews (archives).
“/r/gwern subreddit”, (2018-10-01):
A subreddit for posting links of interest and also for announcing updates to gwern.net (which can be used as an RSS feed). Submissions are categorized similarly to the monthly newsletter and typically will be collated there.
“Tor DNM-related arrests, 2011-2015”, (2012-07-14):
I compile a table and discussion of all known arrests and prosecutions related to English-language Tor-Bitcoin darknet markets (DNMs) such as Silk Road 1, primarily 2011–2015, along with discussion of how they came to be arrested.
“LWer Effective Altruism donations, 2013-2014”, (2015-05-12):
A LW critic noted that the annual LW survey reported a median donation of $0 for “effective altruists”, even though the EA movement strongly encourages donations. I look closer at the 2013–2014 LW surveys and find in a multiple regression that identifying as an EA does predict more donations after controlling for age and income, suggesting that the low EA median donation may be due to EAs having low incomes and being young (e.g. students) rather than being unusually, or even averagely, selfish.
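As a minimal sketch of the kind of multiple regression described above (not the actual analysis pipeline), the comparison could look like the following; the column names and the file lw_survey.csv are hypothetical stand-ins for the real LW survey data:

```python
# Minimal sketch: does identifying as an EA predict donations after
# controlling for age & income? Column names and the CSV file are
# hypothetical stand-ins for the actual LW survey dataset.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("lw_survey.csv")
model = smf.ols("donations ~ ea + age + income", data=survey).fit()
print(model.summary())  # look at the coefficient & p-value on `ea`
```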
“Meta-analysis of the heritability of human traits based on fifty years of twin studies”, (2015-05-18):
Despite a century of research on complex traits in humans, the relative importance and specific nature of the influences of genes and environment on human traits remain controversial. We report a meta-analysis of twin correlations and reported variance components for 17,804 traits from 2,748 publications including 14,558,903 partly dependent twin pairs, virtually all published twin studies of complex traits. Estimates of heritability cluster strongly within functional domains, and across all traits the reported heritability is 49%. For a majority (69%) of traits, the observed twin correlations are consistent with a simple and parsimonious model where twin resemblance is solely due to additive genetic variation. The data are inconsistent with substantial influences from shared environment or non-additive genetic variation. This study provides the most comprehensive analysis of the causes of individual differences in human traits thus far and will guide future gene-mapping efforts. All the results can be visualized using the MaTCH webtool.
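As a back-of-the-envelope illustration of how twin correlations translate into heritability estimates (the classic Falconer approximation, not the variance-component models the paper actually fits), with made-up example correlations:

```python
# Falconer's approximation from MZ/DZ twin correlations; the input
# correlations below are illustrative values, not the paper's data.
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

def shared_environment_c2(r_mz: float, r_dz: float) -> float:
    """Shared-environment estimate: c^2 = 2 * r_DZ - r_MZ."""
    return 2 * r_dz - r_mz

print(falconer_h2(0.60, 0.35))            # ~0.5, comparable to the ~49% average reported
print(shared_environment_c2(0.60, 0.35))  # ~0.1, little shared-environment influence
```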
http://www.unz.com/gnxp/through-the-wormhole-are-we-here-for-a-reason-premier-may-13th/
http://econlog.econlib.org/archives/2015/05/systematically_3.html
http://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1532&context=fss_papers
http://www.theatlantic.com/magazine/archive/1996/05/the-nitrous-oxide-philosopher/376581/
http://www.newyorker.com/magazine/2013/01/14/the-science-of-sex-abuse?currentPage=all
https://blogs.discovermagazine.com/gnxp/2013/01/why-the-future-wont-be-genetically-homogeneous/
“End-to-End Training of Deep Visuomotor Policies”, (2015-04-02):
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot’s motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a partially observed guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.
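To make the “raw image observations directly to torques” architecture concrete, here is a PyTorch sketch showing only the general shape (convolutional features feeding fully-connected layers that output one torque per joint); the layer sizes are arbitrary and not those of Levine et al 2015:

```python
# Illustrative pixels-to-torques policy network; sizes are arbitrary,
# not the 92,000-parameter CNN of the paper.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, n_joints: int = 7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_joints),  # one torque command per joint
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        features = self.conv(image)
        return self.fc(features.flatten(start_dim=1))

policy = VisuomotorPolicy()
torques = policy(torch.randn(1, 3, 240, 240))  # dummy camera frame
```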
“The Unreasonable Effectiveness of Recurrent Neural Networks”, (2015-05-21):
[Exploration of char-RNN neural nets for generating text. Karpathy codes a simple recurrent NN which generates character-by-character, and discovers that it is able to generate remarkably plausible text (at the syntactic level) for Paul Graham, Shakespeare, Wikipedia, LaTeX, Linux C code, and baby names—all using the same generic architecture. Visualizing the internal activity of the char-RNNs, they seem to be genuinely understanding some of the recursive syntactic structure of the text in a way that other text-generation methods like n-grams cannot. Inspired by this post, I began tinkering with char-RNNs for poetry myself; as of 2019, char-RNNs have been largely obsoleted by the new Transformer architecture, but recurrency will make a comeback and Karpathy’s post is still a valuable and fun read.]
There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.We’ll train RNNs to generate text character by character and ponder the question “how is that even possible?”
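A minimal character-level RNN language model, to make the idea concrete; Karpathy’s original char-rnn was Lua/Torch code, so this PyTorch version is only an illustrative re-sketch:

```python
# Minimal char-RNN: embed characters, run an LSTM, predict the next
# character; sampling feeds each generated character back in one step
# at a time. Illustrative only, not Karpathy's code.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, chars, state=None):
        x, state = self.rnn(self.embed(chars), state)
        return self.head(x), state  # logits over the next character

def sample(model: CharRNN, start_idx: int, length: int = 100):
    idx, state, out = torch.tensor([[start_idx]]), None, [start_idx]
    for _ in range(length):
        logits, state = model(idx, state)
        idx = torch.multinomial(logits[:, -1].softmax(-1), 1)
        out.append(idx.item())
    return out  # indices, to be mapped back to characters
```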
“Using N-of-1 trials to improve patient management and save costs.”, (2010):
N-of-1 trials test treatment effectiveness within an individual patient. To assess (1) the impact of three different N-of-1 trials on both clinical and economic outcomes over 12 months and (2) whether the use of N-of-1 trials to target patients' access to high-cost drugs might be cost-effective in Australia. Descriptive study of management change, persistence, and costs summarizing three N-of-1 trials. Volunteer patients with osteoarthritis, chronic neuropathic pain or ADHD whose optimal choice of treatment was uncertain. Double-blind cyclical alternative medications for the three conditions. Detailed resource use, treatment and health outcomes (response) data collected by postal and telephone surveys immediately before and after the trial and at 3, 6 and 12 months. Estimated costs to the Australian healthcare system for the pre-trial vs. 12 months post-trial. Participants persisting with the joint patient-doctor decision 12 months after trial completion were 32% for osteoarthritis, 45% for chronic neuropathic pain and 70% for the ADHD trials. Cost-offsets were obtained from reduced usage of non-optimal drugs, and reduced medical consultations. Drug costs increased for the chronic neuropathic pain and ADHD trials due to many patients being on either low-cost or no pharmaceuticals before the trial. N-of-1 trials are an effective method to identify optimal treatment in patients in whom disease management is uncertain. Using this evidence-based approach, patients and doctors tend to persist with optimal treatment resulting in cost-savings. N-of-1 trials are clinically acceptable and may be an effective way of rationally prescribing some expensive long-term medicines.
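As an illustration of how a single patient’s cyclical, blinded crossover data might be summarized (not the trials’ actual statistical analysis), a paired comparison of period-level symptom scores:

```python
# Made-up N-of-1 data: the patient alternates blinded periods on drug A
# vs. drug B and rates symptoms each period; a paired test on the
# period scores then informs the joint patient-doctor decision.
from scipy import stats

pain_on_drug_a = [6, 5, 6]   # symptom score in each drug-A period
pain_on_drug_b = [3, 4, 2]   # symptom score in each matched drug-B period

res = stats.ttest_rel(pain_on_drug_a, pain_on_drug_b)
diffs = [a - b for a, b in zip(pain_on_drug_a, pain_on_drug_b)]
print(f"mean within-patient difference: {sum(diffs) / len(diffs):.1f}, p = {res.pvalue:.3f}")
```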
https://izbicki.me/blog/nuclear-weapon-statistics-using-monoids-groups-and-modules-in-haskell
https://arcsecond.wordpress.com/2010/09/08/flat-priors-and-other-improbable-tales/
http://www.thefreelibrary.com/E+unibus+pluram%3A+television+and+U.S.+fiction.-a013952319
“2,4-Dinitrophenol”, (2020-12-28):
2,4-Dinitrophenol (2,4-DNP or simply DNP) is an organic compound with the formula HOC6H3(NO2)2. It is a yellow, crystalline solid that has a sweet, musty odor. It sublimes, is volatile with steam, and is soluble in most organic solvents as well as aqueous alkaline solutions. When in a dry form, it is a high explosive and has an instantaneous explosion hazard. It is a precursor to other chemicals and is biochemically active, inhibiting adenosine triphosphate (ATP) production in cells with mitochondria. Its use as a dieting aid has been identified with severe side-effects, including a number of deaths.
“What’s in a Color? The Unique Human Health Effects of Blue Light”, (2010-01):
In 1958, J. Woodland Hastings and Beatrice M. Sweeney tested the ability of different wavelengths of light—corresponding to different colors—to shift the circadian rhythm in the photosynthetic marine dinoflagellate Gonyaulax polyedra. The greatest power to reset the organism’s daily meter lay in the blues, with a precipitous decline into the greens and a modest boost in the reds.
Hastings and Sweeney’s paper, published in the December 1958 Biological Bulletin, gathered dust for decades. No one thought these findings might hold any relevance for humans, whose circadian rhythms were then widely believed to be relatively insensitive to light.
But scientific discoveries in the past two decades have changed all that. Not only does light reset the human circadian rhythm, but the same blue light that has the strongest impact on dinoflagellates has equal power to reset our own clocks—although most visible wavelengths can reset the clock, the blues do the job with the greatest efficiency.
Now researchers are finding increasingly that an out-of-phase circadian rhythm is a health hazard. “Maintaining synchronized circadian rhythms is important to health and well-being,” says Dieter Kunz, director of the Sleep Research and Clinical Chronobiology Research Group at Charité–Universitätsmedizin Berlin. “A growing body of evidence suggests that a desynchronization of circadian rhythms may play a role in various tumoral diseases, diabetes, obesity, and depression.”
Shift workers, whom Kunz calls “a model for internal desynchronization,” are known to experience increased morbidity and mortality for a number of diseases, including cardiovascular disorders and cancer. In fact, in 2007, the World Health Organization decreed that shift work is a risk factor for breast cancer, and on that basis, in 2009, the Danish government began compensating some female shift workers with breast cancer.
At the same time, researchers have repeatedly shown that bright white light has the power to mitigate depression and other maladies of mood. An emergent recent literature suggests that blue light may be particularly potent for such applications.
/docs/technology/2013-02-07-nyrb-rich-divingdeepintodanger.html
https://arcsecond.wordpress.com/2011/02/08/late-night-thoughts-on-reading-scientology/
“Epigrams on Programming”, (1982-09):
[130 epigrams on computer science and technology, published in 1982, for ACM’s SIGPLAN journal, by noted computer scientist and programming language researcher Alan Perlis. The epigrams are a series of short, programming-language-neutral, humorous statements about computers and programming, distilling lessons he had learned over his career, which are widely quoted.]
8. A programming language is low level when its programs require attention to the irrelevant.…19. A language that doesn’t affect the way you think about programming, is not worth knowing.…54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
15. Everything should be built top-down, except the first time.…30. In programming, everything we do is a special case of something more general—and often we know it too quickly.…31. Simplicity does not precede complexity, but follows it.…58. Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.…65. Make no mistake about it: Computers process numbers—not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.…56. Software is under a constant tension. Being symbolic it is arbitrarily perfectible; but also it is arbitrarily changeable.
1. One man’s constant is another man’s variable. 34. The string is a stark data structure and everywhere it is passed there is much duplication of process. It is a perfect vehicle for hiding information.
36. The use of a program to prove the 4-color theorem will not change mathematics—it merely demonstrates that the theorem, a challenge for a century, is probably not important to mathematics.
39. Re graphics: A picture is worth 10K words—but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures.
48. The best book on programming for the layman is Alice in Wonderland; but that’s because it’s the best book on anything for the layman.
77. The cybernetic exchange between man, computer and algorithm is like a game of musical chairs: The frantic search for balance always leaves one of the 3 standing ill at ease.…79. A year spent in artificial intelligence is enough to make one believe in God.…84. Motto for a research laboratory: What we work on today, others will first think of tomorrow.
91. The computer reminds one of Lon Chaney—it is the machine of a thousand faces.
7. It is easier to write an incorrect program than understand a correct one.…93. When someone says “I want a programming language in which I need only say what I wish done,” give him a lollipop.…102. One can’t proceed from the informal to the formal by formal means.
100. We will never run out of things to program as long as there is a single program around.
108. Whenever 2 programmers meet to criticize their programs, both are silent.…112. Computer Science is embarrassed by the computer.…115. Most people find the concept of programming obvious, but the doing impossible. 116. You think you know when you can learn, are more sure when you can write, even more when you can teach, but certain when you can program. 117. It goes against the grain of modern education to teach children to program. What fun is there in making plans, acquiring discipline in organizing thoughts, devoting attention to detail and learning to be self-critical?
https://pdfs.semanticscholar.org/2066/e63f9f668d9e9789949af2f2d8a55bf72c49.pdf
“How Much of the Web Is Archived?”, (2012-12-26):
Although the Internet Archive’s Wayback Machine is the largest and most well-known web archive, a number of other public web archives have emerged in the last several years. With varying resources, audiences and collection development policies, these archives have varying levels of overlap with each other. While individual archives can be measured in terms of number of URIs, number of copies per URI, and intersection with other archives, to date there has been no answer to the question “How much of the Web is archived?” We study the question by approximating the Web using sample URIs from DMOZ, Delicious, Bitly, and search engine indexes, and counting the number of copies of the sample URIs that exist in various public web archives. Each sample set provides its own bias. The results from our sample sets indicate that 35%–90% of the Web has at least one archived copy, 17%–49% has between 2–5 copies, 1%–8% has 6–10 copies, and 8%–63% has more than 10 copies in public web archives. The number of URI copies varies as a function of time, but no more than 31.3% of URIs are archived more than once per month.
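One simple way to check a sampled URL against an archive today is the Internet Archive’s public availability API; this sketch is not the paper’s methodology (which sampled DMOZ/Delicious/Bitly/search-engine URIs and queried several different archives), just an illustration of the counting step:

```python
# Check whether a URL has a snapshot in the Wayback Machine via the
# public availability API; only one archive, unlike the paper's survey.
import requests

def wayback_snapshot(url: str):
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=30)
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(wayback_snapshot("http://example.com"))
```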
http://www.newyorker.com/magazine/2015/05/11/overkill-atul-gawande
http://www.theatlantic.com/magazine/archive/2012/12/the-insourcing-boom/309166/?single_page=true
http://www.bloomberg.com/bw/articles/2012-11-29/the-march-of-robots-into-chinese-factories#r=else-fs
http://www.econtalk.org/archives/2013/01/jerven_on_measu.html
http://rsta.royalsocietypublishing.org/content/369/1956/4913.full.pdf
http://famouspoetsandpoems.com/poets/william_butler_yeats/poems/10349
http://fernwanderweg.blogspot.ch/2013/01/a-rockslide-in-action-arch-falls-into.html
“Ready Player One”, (2020-12-28):
Ready Player One is a 2011 science fiction novel, and the debut novel of American author Ernest Cline. The story, set in a dystopia in 2045, follows protagonist Wade Watts on his search for an Easter egg in a worldwide virtual reality game, the discovery of which would lead him to inherit the game creator's fortune. Cline sold the rights to publish the novel in June 2010, in a bidding war to the Crown Publishing Group. The book was published on August 16, 2011. An audiobook was released the same day; it was narrated by Wil Wheaton, who was mentioned briefly in one of the chapters. In 2012, the book received an Alex Award from the Young Adult Library Services Association division of the American Library Association and won the 2011 Prometheus Award. A film adaptation, screenwritten by Cline and Zak Penn and directed by Steven Spielberg, was released on March 29, 2018. A sequel, Ready Player Two, was released on November 24, 2020.
“A Shropshire Lad”, (2020-12-22):
A Shropshire Lad is a collection of sixty-three poems by the English poet Alfred Edward Housman, published in 1896. Selling slowly at first, it then rapidly grew in popularity, particularly among young readers. Composers began setting the poems to music less than ten years after their first appearance, and many parodists have satirised Housman's themes and poetic style.
https://slatestarcodex.com/2015/05/08/links-515-tall-and-linky/
http://www.amazon.com/Drift-into-Failure-Sidney-Dekker/dp/1409422216
“Pumping Iron”, (2020-12-28):
Pumping Iron is a 1977 American docudrama about the world of professional bodybuilding, with a focus on the 1975 IFBB Mr. Universe and 1975 Mr. Olympia competitions. Directed by George Butler and Robert Fiore and edited by Geof Bartz and Larry Silk, it is inspired by a book of the same name by Butler and Charles Gaines, and nominally centers on the competition between Arnold Schwarzenegger and one of his primary competitors for the title of Mr. Olympia, Lou Ferrigno. The film also features segments on bodybuilders Franco Columbu and Mike Katz, in addition to appearances by Ken Waller, Ed Corney, Serge Nubret, and other famous bodybuilders of the era.
“Monthly Girls' Nozaki-kun”, (2020-12-28):
Monthly Girls' Nozaki-kun is a Japanese four-panel romantic comedy manga written and illustrated by Izumi Tsubaki. The chapters are serialized online in Gangan Online, and have been published in both physical and digital releases of Shoujo Romance Girly and tankōbon volumes by Square Enix. An anime adaptation by Doga Kobo aired in July 2014.
“Gwern.net newsletter (Substack subscription page)”, (2013-12-01):
Subscription page for the monthly gwern.net newsletter. There are monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews. You can also browse the archives since December 2013.
“Gwern.net newsletter archives”, (2013-12-01):
Newsletter tag: archive of all issues back to 2013 for the gwern.net newsletter (monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews.)
“RNN metadata for mimicking individual author style”, (2015-09-12):
Char-RNNs are unsupervised generative models which learn to mimic text sequences. I suggest extending char-RNNs with inline metadata such as genre or author prefixed to each line of input, allowing for better & more efficient metadata, and more controllable sampling of generated output by feeding in desired metadata. A 2015 experiment using torch-rnn on a set of ~30 Project Gutenberg e-books (1 per author) to train a large char-RNN shows that a char-RNN can learn to remember metadata such as authors, learn associated prose styles, and often generate text visibly similar to that of a specified author. I further try & fail to train a char-RNN on Geocities HTML for unclear reasons.
More successfully, I experiment in 2019 with a recently-developed alternative to char-RNNs, the Transformer NN architecture, by finetuning training OpenAI’s GPT-2-117M Transformer model on a much larger (117MB) Project Gutenberg poetry corpus using both unlabeled lines & lines with inline metadata (the source book). The generated poetry is much better. And GPT-3 is better still.
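The inline-metadata trick itself is just data preparation: prefix each training line with its label so the model can condition on it, and later be steered by it. A tiny sketch, with an illustrative separator format:

```python
# Prefix each training line with its author tag so a char-RNN (or any
# language model) learns to associate the tag with that prose style.
# The "AUTHOR|text" format is illustrative, not the exact one used.
def add_author_prefix(lines, author):
    return [f"{author}|{line}" for line in lines]

corpus = []
corpus += add_author_prefix(["It was the best of times, it was the worst of times."], "DICKENS")
corpus += add_author_prefix(["Call me Ishmael."], "MELVILLE")
# At sampling time, feeding "DICKENS|" as the prompt requests Dickens-style text.
print("\n".join(corpus))
```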
“GPT-2 Neural Network Poetry”, (2019-03-03):
In February 2019, following up on my 2015–2016 text-generation experiments with char-RNNs, I experiment with the cutting-edge Transformer NN architecture for language modeling & text generation. Using OpenAI’s GPT-2-117M (117M) model pre-trained on a large Internet corpus and nshepperd’s finetuning code, I retrain GPT-2-117M on a large (117MB) Project Gutenberg poetry corpus. I demonstrate how to train 2 variants: “GPT-2-poetry”, trained on the poems as a continuous stream of text, and “GPT-2-poetry-prefix”, with each line prefixed with the metadata of the PG book it came from. In May 2019, I trained the next-largest GPT-2, GPT-2-345M, similarly, for a further quality boost in generated poems. In October 2019, I retrained GPT-2-117M on a Project Gutenberg corpus with improved formatting, and combined it with a contemporary poem dataset based on Poetry Foundation’s website. With just a few GPU-days on 1080ti GPUs, GPT-2-117M finetuning can produce high-quality poetry which is more thematically consistent than my char-RNN poems, capable of modeling subtle features like rhyming, and sometimes even a pleasure to read. I list the many possible ways to improve poem generation and further approach human-level poems. For the highest-quality AI poetry to date, see my followup page, “GPT-3 Creative Writing”.
For anime plot summaries, see TWDNE; for generating ABC-formatted folk music, see “GPT-2 Folk Music” & “GPT-2 Preference Learning for Music and Poetry Generation”; for playing chess, see “A Very Unlikely Chess Game”; for the Reddit comment generator, see SubSimulatorGPT-2; for fanfiction, the Ao3; and for video games, the walkthrough model. For OpenAI’s GPT-3 followup, see “GPT-3: Language Models are Few-Shot Learners”.
“Alan Perlis”, (2020-12-27):
Alan Jay Perlis was an American computer scientist and professor at Purdue University, Carnegie Mellon University and Yale University. He is best known for his pioneering work in programming languages and was the first recipient of the Turing Award.
“GPT-3 Creative Fiction”, (2020-06-19):
I continue my AI poetry generation experiments with OpenAI’s 2020 GPT-3, which is 116× larger, and much more powerful, than the 2019 GPT-2. GPT-3, however, is not merely a quantitative tweak yielding “GPT-2 but better”—it is qualitatively different, exhibiting eerie runtime learning capabilities allowing even the raw model, with zero finetuning, to “meta-learn” many textual tasks purely by example or instruction. One does not train or program GPT-3 in a normal way, but one engages in dialogue and writes prompts to teach GPT-3 what one wants.
Experimenting through the OpenAI Beta API in June 2020, I find that GPT-3 does not just match my finetuned GPT-2-1.5b-poetry for poem-writing quality, but exceeds it, while being versatile in handling poetry, Tom Swifty puns, science fiction, dialogue like Turing’s Turing-test dialogue, literary style parodies… As the pièce de résistance, I recreate Stanislaw Lem’s Cyberiad’s “Trurl’s Electronic Bard” poetry using GPT-3. (Along the way, I document instances of how the BPE text encoding unnecessarily damages GPT-3’s performance on a variety of tasks, how to best elicit the highest-quality responses, common errors people make in using GPT-3, and test out GPT-3’s improvements in NN weak points like logic or commonsense knowledge.)
GPT-3’s samples are not just close to human level: they are creative, witty, deep, meta, and often beautiful. They demonstrate an ability to handle abstractions, like style parodies, I have not seen in GPT-2 at all. Chatting with GPT-3 feels uncannily like chatting with a human. I was impressed by the results reported in the GPT-3 paper, and after spending a week trying it out, I remain impressed.
This page records GPT-3 samples I generated in my explorations, and thoughts on how to use GPT-3 and its remaining weaknesses. I hope you enjoy them even a tenth as much as I enjoyed testing GPT-3 and watching the completions scroll across my screen.
“Poetry Foundation”, (2020-12-27):
The Poetry Foundation is a Chicago-based American foundation created to promote poetry in the wider culture. It was formed from Poetry magazine, which it continues to publish, with a 2003 gift of $200 million from philanthropist Ruth Lilly.
“GPT-2 Folk Music”, (2019-11-01):
In November 2019, I experimented with training a GPT-2 neural net model to generate folk music in the high-level ABC music text format, following previous work in 2016 which used a char-RNN trained on a ‘The Session’ dataset. A GPT-2 hypothetically can improve on an RNN by better global coherence & copying of patterns, without problems with the hidden-state bottleneck.
I encountered problems with the standard GPT-2 model’s encoding of text which damaged results, but after fixing that, I successfully trained it on n = 205,304 ABC music pieces taken from The Session & ABCnotation.com. The resulting music samples are in my opinion quite pleasant. (A similar model was later retrained by Geerlings & Meroño-Peñuela 2020.)
The ABC folk model & dataset are available for download, and I provide for listening selected music samples as well as medleys of random samples from throughout training.
We followed the ABC folk model with an ABC-MIDI model: a dataset of 453k ABC pieces decompiled from MIDI pieces, which fit into GPT-2-117M with an expanded context window when trained on TPUs. The MIDI pieces are far more diverse and challenging, and GPT-2 underfits and struggles to produce valid samples but when sampling succeeds, it can generate even better musical samples.
“GPT-2 Preference Learning for Music Generation”, (2019-12-16):
Standard language generation neural network models, like GPT-2, are trained via likelihood training to imitate human text corpuses. Generated text suffers from persistent flaws like repetition, due to myopic generation word-by-word, and cannot improve on the training data because they are trained to predict ‘realistic’ completions of the training data.
A proposed alternative is to use reinforcement learning to train the NNs, to encourage global properties like coherence & lack of repetition, and potentially improve over the original corpus’s average quality. Preference learning trains a reward function on human ratings, and uses that as the ‘environment’ for a blackbox DRL algorithm like PPO.
OpenAI released a codebase implementing this dual-model preference learning approach for textual generation, based on GPT-2. Having previously used GPT-2 for poetry & music generation, I experimented with GPT-2 preference learning for unconditional music and poetry generation.
I found that preference learning seemed to work better for music than poetry, and seemed to reduce the presence of repetition artifacts, but the results, at n≅7,400 ratings compiled over 23 iterations of training+sampling November 2019–January 2020, are not dramatically better than alternative improvements like scaling up models or more thorough data-cleaning or more stringent sample curation. My blind ratings using n≅200 comparisons showed no large advantage for the RL-tuned samples (winning only 93 of 210 comparisons, or 46%).
This may be due to insufficient ratings, bad hyperparameters, or not using samples generated with common prefixes, but I suspect it’s the former, as some NLP tasks in Ziegler et al 2019 required up to 60k ratings for good performance, and the reward model appeared to achieve poor performance & succumb to adversarial examples easily.
Working with it, I suspect that preference learning is unnecessarily sample-inefficient & data-inefficient, and that the blackbox reinforcement learning approach is inferior to directly using the reward model to optimize text samples, and propose two major architectural overhauls: have the reward model directly model the implied ranking of every datapoint, and drop the agent model entirely in favor of backprop-powered gradient ascent which optimizes sequences to maximize the reward model’s output.
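The core of the reward-model training is a pairwise (Bradley-Terry-style) loss: the model should score the human-preferred sample higher than the rejected one. A minimal sketch of that loss, with the reward network itself elided (any network mapping text to a scalar would do):

```python
# Pairwise preference loss for a reward model: -log sigmoid(r_preferred -
# r_rejected), minimized when the preferred sample scores higher.
# Sketch only; the actual OpenAI codebase wraps this around GPT-2.
import torch
import torch.nn.functional as F

def preference_loss(reward_preferred: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    return -F.logsigmoid(reward_preferred - reward_rejected).mean()

loss = preference_loss(torch.tensor([1.3, 0.2]), torch.tensor([0.4, 0.9]))
print(loss.item())
```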
“A Very Unlikely Chess Game”, (2020-01-06):
…Black is GPT-2. Its excuse [for this chess blunder] is that it’s a text prediction program with no concept of chess. As far as it knows, it’s trying to predict short alphanumeric strings like “e2e4” or “Nb7”. Nobody told it this represents a board game. It doesn’t even have a concept of 2D space that it could use to understand such a claim. But it still captured my rook! Embarrassing!…Last month, I asked him if he thought GPT-2 could play chess. I wondered if he could train it on a corpus of chess games written in standard notation (where, for example, e2e4 means “move the pawn at square e2 to square e4”). There are literally millions of games written up like this. GPT-2 would learn to predict the next string of text, which would correspond to the next move in the chess game. Then you would prompt it with a chessboard up to a certain point, and it would predict how the chess masters who had produced its training data would continue the game – ie make its next move using the same heuristics they would. Gwern handed the idea to his collaborator Shawn Presser, who had a working GPT-2 chess engine running within a week:…You can play against GPT-2 yourself by following the directions in the last tweet, though it won’t be much of a challenge for anyone better than I am.
…What does this imply? I’m not sure (and maybe it will imply more if someone manages to make it actually good). It was already weird to see something with no auditory qualia learn passable poetic meter. It’s even weirder to see something with no concept of space learn to play chess. Is any of this meaningful? How impressed should we be that the same AI can write poems, compose music, and play chess, without having been designed for any of those tasks? I still don’t know.
[See also the much later Noever et al 2020a/Noever et al 2020b who do the exact same thing in applying GPT-2 to Go SGF/chess PGN games.]
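The “correctly filters illegal moves” interface amounts to resampling until the model’s text output parses as a legal move on the real board. A sketch using the python-chess library; sample_move_from_model is a hypothetical placeholder for querying the GPT-2 chess model with the game so far:

```python
# Keep sampling move strings from the language model until one is legal
# on the current board. `sample_move_from_model` is a hypothetical
# stand-in for the GPT-2 chess model.
import chess

def next_model_move(board: chess.Board, sample_move_from_model) -> chess.Move:
    while True:
        candidate = sample_move_from_model(board)  # e.g. "e4" or "Nf3"
        try:
            return board.parse_san(candidate)
        except ValueError:
            continue  # illegal or unparseable: resample

board = chess.Board()
move = next_model_move(board, lambda b: "e4")  # trivial stand-in "model"
board.push(move)
```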
/docs/www/old.reddit.com/7eaaa81a26404ef60df4279ee1f1b0c829d73be5.html
“Language Models are Few-Shot Learners”, (2020-05-28):
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions—something which current NLP systems still largely struggle to do.
Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10× more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
“GPT-3 Weaknesses: Byte-Pair Encodings (BPEs)”, (2020-06-23):
Compared to GPT-2, GPT-3 improves performance on character-level tasks like rhyming, alliteration, punning, anagrams or permutations, acrostic poems, and arithmetic less than expected, despite being very good at many other closely-related kinds of writings like satire.
Why? A plausible explanation is an obscure technical detail: as a performance optimization, GPT does not see characters but sub-word-chunks called “byte-pair encodings” (BPEs). Because GPTs never see characters but opaque partial-words, which vary chaotically based on the specific word and even the surrounding context, they are unable to easily learn about character-level aspects of language, like similar spellings or sounds, and are forced to learn relationships much more indirectly, like by brute-force memorizing of pairs of words.
Some experiments with reformatting GPT-3’s poorest-performing tasks to avoid inconsistent BPE encodings of strings shows small to large performance gains, consistent with this theory.
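The BPE issue is easy to see directly with the Hugging Face GPT-2 tokenizer: the model never sees letters, only sub-word chunks, and the chunking shifts with context, which is why character-level regularities like spelling, rhyme, or digit-by-digit arithmetic are hard to learn:

```python
# Show how GPT-2's BPE splits text into opaque sub-word chunks rather
# than characters, and how a leading space changes the chunking.
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
print(tok.tokenize("unreasonable"))    # sub-word pieces, not letters
print(tok.tokenize(" unreasonable"))   # a leading space can change the pieces
print(tok.tokenize("1234 + 5678"))     # digits get chunked inconsistently too
```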
“GPT-3: Prompts As Programming”, (2020-06-23):
The GPT-3 neural network is so large a model in terms of power and dataset that it exhibits qualitatively different behavior: you do not apply it to a fixed set of tasks which were in the training dataset, requiring retraining on additional data if one wants to handle a new task (as one would have to retrain GPT-2); instead, you interact with it, expressing any task in terms of natural language descriptions, requests, and examples, tweaking the prompt until it “understands” & it meta-learns the new task based on the high-level abstractions it learned from the pretraining.
This is a rather different way of using a DL model, and it’s better to think of it as a new kind of programming, where the prompt is now a “program” which programs GPT-3 to do new things.
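A minimal sketch of “prompt as program”: the task is specified entirely as instructions plus a few examples in the prompt, and the model completes the next line. The complete function is a hypothetical stand-in for whatever text-completion API or local model is being used:

```python
# Few-shot prompting as "programming": no retraining, just a prompt
# template with examples. `complete` is a hypothetical completion API.
FEW_SHOT_PROMPT = """Translate English to French.

English: Where is the library?
French: Où est la bibliothèque ?

English: I like green apples.
French: J'aime les pommes vertes.

English: {sentence}
French:"""

def translate(sentence: str, complete) -> str:
    return complete(FEW_SHOT_PROMPT.format(sentence=sentence)).strip()

# translate("The cat sleeps.", complete=my_model)  # -> e.g. "Le chat dort."
```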
“GPT-2 Folk Music: Training a Spaceless Model”, (2019-12-12):
While training a GPT-2-117M on a folk music corpus written in ABC format, persistent syntax errors kept being generated by an otherwise-high-quality model: random spaces would be generated, rendering a music piece either erroneous or lower-quality. Why? It seems to be some issue with the GPT BPE encoder handling of spaces which makes it difficult to emit the right space-separated characters. We found that ABC does not actually require spaces, and we simply removed all spaces from the corpus—noticeably improving quality of generated pieces.
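The fix described is plain corpus preprocessing: since ABC notation does not actually require spaces, strip them all before training. A sketch, with illustrative file names:

```python
# Remove all spaces from an ABC training corpus so the BPE encoder never
# has to emit space-separated tokens; file names are illustrative.
from pathlib import Path

def despace_abc(in_path: str, out_path: str) -> None:
    text = Path(in_path).read_text(encoding="utf-8")
    Path(out_path).write_text(text.replace(" ", ""), encoding="utf-8")

despace_abc("abc_corpus.txt", "abc_corpus_spaceless.txt")
```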
“Interacting with GPT–2 to Generate Controlled and Believable Musical Sequences in ABC Notation”, (2020-10-16):
Generating symbolic music with language models is a promising research area, with potential applications in automated music composition. Recent work shows that Transformer architectures can learn to generate compelling four-instrument scores from large MIDI datasets. In this paper, we re-train the small (117M) GPT-2 model with a large dataset in ABC notation, and generate samples of single-instrument folk music. Our BLEU and ROUGE based quantitative, and survey based qualitative, evaluations suggest that ABC notation is learned with syntactical and semantic correctness, and that samples contain robust and believable n-grams.
“Generating MIDI Music With GPT-2: Generating MIDI by converting to ABC and expanding the GPT-2 context window—works, if only just”, (2020-04-25):
To expand the ABC GPT-2 model to cover a wider variety of musical genres, I turn to the next-most compact widespread music encoding format: MIDI. There are hundreds of thousands of MIDIs which can be decompiled to ABC format, averaging ~10k BPEs—within GPT-2-117M’s feasible context window when trained on TPUs (which permit training of context windows up to 30k wide).
We compile the ABC from before and 2 large MIDI datasets, and convert to ABC, yielding ~453k usable ABC-MIDI musical files (~5.1GB of text). We trained January–April 2020 on our TPU swarm (with many interruptions), achieving a final loss of ~0.2 (underfit).
Sampling from the final model is hit-or-miss as it is prone to the likelihood repetition trap and it generates instruments one-by-one so it is common for instruments to be cut off or otherwise broken during sampling (indicating that sampling is increasingly a bigger problem than training for long-range sequence modeling). However, successful pieces are possible, and are musically far more diverse than the folk ABC corpus, with many pleasingly complex samples.
/GPT-2-preference-learning#bradley-terry-preference-learning
/GPT-2-preference-learning#optimization-by-backprop-not-blackbox
“Language Models are Unsupervised Multitask Learners”, (2019-02-14):
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset—matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples.
The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text.
These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
“The Go Transformer: Natural Language Modeling for Game Play”, (2020-07-07):
This work applies natural language modeling to generate plausible strategic moves in the ancient game of Go. We train the Generative Pretrained Transformer (GPT-2) to mimic the style of Go champions as archived in Smart Game Format (SGF), which offers a text description of move sequences. The trained model further generates valid but previously unseen strategies for Go. Because GPT-2 preserves punctuation and spacing, the raw output of the text generator provides inputs to game visualization and creative patterns, such as the Sabaki project’s game engine using auto-replays. Results demonstrate that language modeling can capture both the sequencing format of championship Go games and their strategic formations. Compared to random game boards, the GPT-2 fine-tuning shows efficient opening move sequences favoring corner play over less advantageous center and side play. Game generation as a language modeling task offers novel approaches to more than 40 other board games where historical text annotation provides training data (e.g., Amazons & Connect 4/6).
“The Chess Transformer: Mastering Play using Generative Language Models”, (2020-08-02):
This work demonstrates that natural language transformers can support more generic strategic modeling, particularly for text-archived games. In addition to learning natural language skills, the abstract transformer architecture can generate meaningful moves on a chessboard. With further fine-tuning, the transformer learns complex gameplay by training on 2.8 million chess games in Portable Game Notation. After 30,000 training steps, OpenAI’s Generative Pre-trained Transformer (GPT-2) optimizes weights for 774 million parameters. This fine-tuned Chess Transformer generates plausible strategies and displays game formations identifiable as classic openings, such as English or the Slav Exchange. Finally, in live play, the novel model demonstrates a human-to-transformer interface that correctly filters illegal moves and provides a novel method to challenge the transformer’s chess strategies. We anticipate future work will build on this transformer’s promise, particularly in other strategy games where features can capture the underlying complex rule syntax from simple but expressive player annotations.
“Decompiler”, (2020-12-22):
A decompiler is a computer program that takes an executable file as input, and attempts to create a high level source file which can be recompiled successfully. It is therefore the opposite of a compiler, which takes a source file and makes an executable. Decompilers are usually unable to perfectly reconstruct the original source code, and as such, will frequently produce obfuscated code. Nonetheless, decompilers remain an important tool in the reverse engineering of computer software.