GPT-3 Creative Fiction

Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.
NN, fiction, GPT, poetry, humor, transhumanism
2020-06-19–2020-09-28 finished certainty: likely importance: 8


I continue my AI poetry generation experiments with OpenAI’s 2020 GPT-3, which is 116× larger, and much more powerful, than the 2019 GPT-2. GPT-3, however, is not merely a quantitative tweak yielding “GPT-2 but better”—it is qualitatively different, exhibiting eerie runtime learning capabilities allowing even the raw model, with zero finetuning, to “meta-learn” many textual tasks purely by example or instruction. One does not train or program GPT-3 in a normal way, but one engages in dialogue and writes prompts to teach GPT-3 what one wants.

Experimenting through the OpenAI Beta API in June 2020, I find that GPT-3 does not just match my finetuned GPT-2-1.5b-poetry for poem-writing quality, but exceeds it, while being versatile in handling poetry, Tom Swifty puns, science fiction, dialogue like Turing’s Turing-test dialogue, literary style parodies… As the pièce de résistance, I recreate Stanislaw Lem’s Cyberiad’s “Trurl’s Electronic Bard” poetry using GPT-3. (Along the way, I document instances of how the BPE text encoding unnecessarily damages GPT-3’s performance on a variety of tasks, how to best elicit the highest-quality responses, common errors people make in using GPT-3, and test out GPT-3’s improvements in NN weak points like logic or commonsense knowledge.)

GPT-3’s samples are not just close to human level: they are creative, witty, deep, meta, and often beautiful. They demonstrate an ability to handle abstractions, like style parodies, that I have not seen in GPT-2 at all. Chatting with GPT-3 feels uncannily like chatting with a human. I was impressed by the results reported in the GPT-3 paper, and after spending a week trying it out, I remain impressed.

This page records GPT-3 samples I generated in my explorations, and thoughts on how to use GPT-3 and its remaining weaknesses. I hope you enjoy them even a tenth as much as I enjoyed testing GPT-3 and watching the completions scroll across my screen.

The latest and greatest neural network for unrestricted natural language generation is OpenAI’s GPT-3. GPT-3 is like GPT-2 and the char-RNNs I’ve used extensively before1—only much more so, and then going beyond them in a fascinating new way.

Scaling works: quantity is a quality all its own. The scaling of GPT-2-1.5b by 116× to GPT-3-175b has worked surprisingly well and unlocked remarkable flexibility in the form of meta-learning, where GPT-3 can infer new patterns or tasks and follow instructions purely from text fed into it. What can we do with GPT-3? Here, we’re all about having fun while probing GPT-3’s abilities for creative writing tasks, primarily (but far from limited to) poetry. Fortunately, OpenAI granted me access to their Beta API service which provides a hosted GPT-3 model, letting me spend a great deal of time interacting with GPT-3 and writing things. Naturally, I’d like to write poetry with it: but GPT-3 is too big to finetune like I did GPT-2, and OA doesn’t (yet) support any kind of training through their API. Must we content ourselves with mediocre generic poetry, at best, deprived of finetuning directly on chosen poetry corpuses or authors we might like to parody? How much does GPT-3 improve, and what can it do?

Turns out: a lot! Below, I walk through first impressions of using GPT-3, and countless samples. In the latest twist on Moravec’s paradox, GPT-3 still struggles with commonsense reasoning & factual knowledge of the sort a human finds effortless after childhood, but handles well things like satire & fiction writing & poetry, which we humans find so difficult & impressive even as adults. In addition to the Cyberiad, I’d personally highlight the Navy Seal & Harry Potter parodies, the Devil’s Dictionary of Science/Academia, “Uber Poem”, “The Universe Is a Glitch” poem (with AI-generated rock music version), & “Where the Sidewalk Ends”.

What Benchmarks Miss: Demos

The GPT-3 paper includes evaluation of zero-shot/few-shot performance across a wide range of tasks, but I fear that unless one is familiar with the (deadly dull) benchmarks in question, it won’t be impressive. You can skip to the appendices for more examples, or browse the random samples.

The original API demos include many striking examples of GPT-3 capabilities ranging from chatbots to question-based Wikipedia search to legal discovery to homework grading to translation; I’d highlight AI Dungeon’s Dragon model (example), and “Spreadsheets”/“Natural Language Shell”/“Code Completion”2. Andrew Mayne describes using GPT-3 to generate book recommendation lists & read interactive stories & engage in conversations with historical figures like Ada Lovelace3, summarize texts for elementary school children (also available as a service now, Simplify.so) or summarize movies in emoji (Matrix: “🤖🤐”; Hunger Games: “🏹🥊🌽🏆”), convert screenplay ↔︎ story, summarize/write emails, and rewrite HTML. Paras Chopra finds that GPT-3 knows enough Wikipedia & other URLs that the basic Q&A behavior can be augmented to include a ‘source’ URL, and so one can make a knowledge base ‘search engine’ with clickable links for any assertion (ie. the user can type in “What year was Richard Dawkins’ The Selfish Gene published?” and GPT-3 will return a tuple like ("The Selfish Gene was published in 1976","https://en.wikipedia.org/wiki/The_Selfish_Gene") which can be parsed & presented as a search engine). Andreas Stuhlmüller explored using it to create suggestions for forecasters by breaking down high-level forecasting questions. One paper tests few-shot GPT-3 on common moral reasoning problems, and while it doesn’t do nearly as well as a finetuned model overall, interestingly, its performance degrades the least on the problems constructed to be hardest.
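A sketch of how Chopra’s trick might be wired up: few-shot examples teach GPT-3 to emit (answer, URL) tuples, which happen to be valid Python literals and so can be parsed directly. The prompt wording, helper names, and the canned completion below are my own illustrative assumptions, not Chopra’s actual code.

```python
# Few-shot prompt teaching the tuple format; the final "A: " invites
# the model to complete with another (answer, url) tuple.
import ast

FEW_SHOT = '''Q: What year was George Orwell's Nineteen Eighty-Four published?
A: ("Nineteen Eighty-Four was published in 1949","https://en.wikipedia.org/wiki/Nineteen_Eighty-Four")
Q: What year was Richard Dawkins' The Selfish Gene published?
A: '''

def parse_answer(completion: str):
    """Parse the first line of a completion as a Python (answer, url) tuple literal."""
    first_line = completion.strip().splitlines()[0]
    answer, url = ast.literal_eval(first_line)
    return answer, url

# Stand-in for what the model might return for the prompt above:
completion = '("The Selfish Gene was published in 1976","https://en.wikipedia.org/wiki/The_Selfish_Gene")'
answer, url = parse_answer(completion)
```

The parsed `url` can then be rendered as a clickable source link next to the answer, giving the ‘search engine’ presentation described above.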

Others experimented with Crunchyroll anime, Star Trek: The Next Generation, & Seinfeld plot summaries. Max Woolf has a repo of GPT-3 example prompts & various completions such as the original GPT-2 “unicorn” article, Revenge of the Sith, Stack Overflow Python questions, and his own tweets (note that many samples are bad because the prompts & hyperparameters are often deliberately bad, eg the temperature=0 samples, to demonstrate the large effect of poorly-chosen settings as a warning). Janelle Shane experimented with weird dog descriptions to accompany deformed GAN-dog samples, and 10,000-year nuclear waste warnings based on the famous report on long-term nuclear-waste warning messages for the Waste Isolation Pilot Plant. Summers-Stay tried imitating Neil Gaiman & Terry Pratchett short stories with excellent results. Arram Sabeti has done “songs, stories, press releases, guitar tabs, interviews, essays, and technical manuals”, with his Elon Musk Dr. Seuss poems a particular highlight. Paul Bellow (LitRPG) experiments with RPG backstory generation. Merzmensch Kosmopol enjoyed generating love letters written by a toaster. One author co-wrote an SF Singularity short story with GPT-3, featuring regular meta where he & GPT-3 debate the story in-character. Daniel Bigham plays a word-association game which links Mongolia to (eventually) Kevin Bacon. Alexander Reben prompted for contemporary art/sculpture descriptions, and physically created some of the ones he liked best using a variety of mediums like matchsticks, toilet plungers, keys, collage, etc.

Harley Turan found that, somehow, GPT-3 can associate plausible hex color codes with specific emoji. Even more perplexingly, Sharif Shameem discovered that GPT-3 could write JSX (a JavaScript+CSS hybrid) according to a specification like “5 buttons, each with a random color and number between 1–10” or increase/decrease a balance in React or a very simple to-do list, and it would often work, or require relatively minor fixes. GPT-3 can also write some simple SVG shapes or SVG/Chart.js bar graphs, do text→LaTeX, and SQL queries. While I don’t think programmers need worry about unemployment (NNs will be a complement until they are so good they are a substitute), the code demos are impressive in illustrating just how diverse the skills created by pretraining on the Internet can be. Particularly intriguing in terms of code generation is Jordan Singer’s Figma plugin, which apparently creates a new Figma layout DSL & few-shot teaches it to GPT-3.

(I’d also highlight GPT-3’s version of the famous GPT-2 recycling rant, an attempt at “Epic Rap Battles of History”, GPT-3 playing 200-word tabletop RPGs with itself, the Serendipity recommendation engine which asks GPT-3 for movie/book recommendations, and Lawder’s food label ingredient summarizer.)

One underexplored area of GPT-3 is using its “search” API, which, as the name indicates, takes a text prompt (the query) and searches a large set of possible results, and returns the ‘most similar’ one, in a highly abstract sense; Andrew Mayne demonstrates that it’s much more than a simple keyword search engine by doing things like searching for abstract movie plots.4

5: eg given the Seven Basic Plots as the 7 possible results, which one does the query match? Result: “Voyage and Return: The protagonist goes to a strange land and, after overcoming the threats it poses or learning important lessons unique to that location, they return with experience.”

The search API, interestingly, doesn't use an embedding, as one might expect; while [iGPT](https://openai.com/blog/image-gpt/ "Image GPT: We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting.") demonstrates that GPTs *can* be used to create embeddings, it seems OA has not done so with GPT-3. It instead borrows a trick from algorithmic information theory: the best result is the one that, when appended to the query, compresses most easily/is predicted with the least error/has the biggest average logit.
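The ‘compression’ trick can be sketched with any model that assigns log-probabilities: append the query to each candidate document and pick the document under which the query is cheapest to predict. Here a toy character-bigram model stands in for GPT-3’s logits; the function names and the bigram model are my own assumptions, and only the ranking criterion follows the description above.

```python
import math
from collections import Counter

def bigram_model(text):
    """Fit a character-bigram model with add-one smoothing
    (a toy stand-in for GPT-3's next-token log-probabilities)."""
    pairs = Counter(zip(text, text[1:]))
    unigrams = Counter(text)
    vocab_size = len(set(text)) + 1
    def logp(context, char):
        return math.log((pairs[(context, char)] + 1) /
                        (unigrams[context] + vocab_size))
    return logp

def query_score(doc, query):
    """Mean log-probability of `query` when appended to `doc`:
    the 'compresses most easily' criterion."""
    logp = bigram_model(doc)
    combined = doc + " " + query
    start = len(doc) + 1
    return sum(logp(combined[i - 1], combined[i])
               for i in range(start, len(combined))) / len(query)

def search(docs, query):
    """Return the candidate under which the query is best predicted."""
    return max(docs, key=lambda d: query_score(d, query))

docs = ["the cat sat on the mat and the cat ran after the dog",
        "stock prices fell sharply as markets panicked on Friday"]
best = search(docs, "the cat")
```

A real implementation would replace the bigram model with per-token logits from the language model itself; the ranking logic is unchanged.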

GPT-3 Implications

For my main discussion of why GPT-3 works and its implications, see my essay “The Scaling Hypothesis”. Below is the summary:

GPT-3, announced by OpenAI in May 2020, is the largest neural network ever trained, by over an order of magnitude. Trained on Internet text data, it is the successor to GPT-2, which surprised everyone by its natural language understanding & generation ability. GPT-3 is even more surprising in that this vast increase in size did not run into diminishing returns, as many expected, but the benefits of scale continued as forecasted by OpenAI. These benefits were not merely learning more facts & text than GPT-2, but qualitatively distinct & surprising in showing meta-learning: while GPT-2 learned how to do common natural language tasks like text summarization, GPT-3 instead learned how to follow directions and learn new tasks from a few examples. (As a result, GPT-3 outputs & interaction are more fascinating & human-like than GPT-2.)

While the immediate applications of GPT-3, like my poetry or humor writings, are nice, the short-term implications of GPT-3 are much more important.

First, while GPT-3 is expensive by conventional DL standards, it is cheap by scientific/commercial/military/government budget standards, and the results indicate that models could be made much larger. Second, models can also be made much more powerful, as GPT is an old approach known to be flawed in both minor & major ways, and far from an ‘ideal’ Transformer. Third, GPT-3’s capabilities come from learning on raw (unsupervised) data; that has long been one of the weakest areas of DL, holding back progress in other areas like reinforcement learning or robotics. Models like GPT-3 suggest that large unsupervised models will be vital components of future DL systems, as they can be ‘plugged into’ systems to immediately provide understanding of the world, humans, natural language, and reasoning.

The meta-learning has a longer-term implication: it is a demonstration of the blessings of scale, where problems with simple neural networks vanish, and they become more powerful, more generalizable, more human-like when simply made very large & trained on very large datasets with very large compute—even though those properties are believed to require complicated architectures & fancy algorithms (and this perceived need drives much research). Unsupervised models benefit from this, as training on large corpuses like Internet-scale text presents a myriad of difficult problems to solve; this is enough to drive meta-learning despite GPT not being designed for meta-learning in any way. (This family of phenomena is perhaps driven by neural networks functioning as ensembles of many sub-networks which all average out to an Occam’s razor, which for small data & models learn superficial or memorized parts of the data, but can be forced into true learning by making the problems hard & rich enough.)

The blessings of scale in turn support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the scaling hypothesis. The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is ‘just’ simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale. As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.

When? Estimates of Moore’s-law-like progress curves decades ago by pioneers like Hans Moravec indicated that it would take until the 2010s for the sufficiently-cheap compute for tiny insect-level prototype systems to be available, and the 2020s for the first sub-human systems to become feasible, and these forecasts are holding up. (Despite this vindication, the scaling hypothesis is so unpopular an idea, and difficult to prove in advance rather than as a fait accompli, that while the GPT-3 results finally drew some public notice after OpenAI enabled limited public access & people could experiment with it live, it is unlikely that many entities will modify their research philosophies, much less kick off an ‘arms race’.)

Depending on what investments are made into scaling DL, and how fast compute grows, the 2020s should be quite interesting—sigmoid or singularity?

Quality

Objective metrics hard to interpret. How much better is (un-finetuned base) GPT-3? The likelihood loss is an absolute measure, as are the benchmarks, but it’s hard to say what a decrease of, say, 0.1 bits per character might mean, or a 5% improvement on SQuAD, in terms of real-world use or creative fiction writing. It feels like a large improvement, definitely a larger improvement than going from GPT-2-345M to GPT-2-1.5b, or GPT-2-1.5b to GPT-3-12b, but how much?

Screening gains: 1:100 → 1:5, or 20× better? For fiction, I treat it as a curation problem: how many samples do I have to read to get one worth showing off? One could think of it as asking how efficiently a model searches the Library of Babel (or should that be “The Book of Sand”, or “The Aleph”?): at the one extreme, an algorithm which selects letters at random will have to generate astronomically large numbers of samples before, like the proverbial monkeys, they generate a page from a Shakespeare play; at the other extreme, a reasonably intelligent human can dash off 1 plausible page in 1 try. With AI algorithms, the results are intermediate but rapidly improving. A text generator trained on a small corpus represents a huge leap over randomness: instead of having to generate quadrillions of samples, one might only have to generate millions of samples to get a coherent page; this can be improved to hundreds of thousands by increasing the depth of the n of its n-grams, which is feasible as one moves to Internet-scale text datasets (the classic example) or by careful hand-engineering & combination with other approaches like Mad-Libs-esque templating. A char-RNN, like in my char-RNN poetry experiments, does better still: it easily generates reasonable paragraphs, so one might only have to brute force on the order of thousands of samples to get a pleasing page. With GPT-2-117M poetry, I’d typically read through a few hundred samples to get a good one, with worthwhile improvements coming from 345M→774M→1.5b; by 1.5b, I’d say I read through 50–100 ‘poems’ to select one. But for GPT-3, once the prompt is dialed in, the ratio appears to have dropped to closer to 1:5—maybe even as low as 1:3! I frequently find myself shrugging at the first completion I generate, “not bad!” (Certainly, the quality of GPT-3’s average prompted poem appears to exceed that of almost all teenage poets.)
I would have to read GPT-2 outputs for months and probably surreptitiously edit samples together to get a dataset of samples like this page.
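The curation ratios above can be restated as expected reads per keeper: with acceptance probability p, one expects to read 1/p samples. (Using the 1-in-75 midpoint of the 50–100 range quoted above for GPT-2-1.5b is my own interpolation.)

```python
# Expected number of samples read per publishable sample, given
# the acceptance probability p: a geometric distribution with mean 1/p.
def expected_reads(acceptance_rate):
    return 1 / acceptance_rate

gpt2_15b = expected_reads(1 / 75)  # midpoint of the 50-100 range (my interpolation)
gpt3 = expected_reads(1 / 5)       # the ~1:5 ratio quoted above
improvement = gpt2_15b / gpt3      # roughly 15x, in the ballpark of the "20x" headline
```

At the optimistic 1:3 ratio, the improvement over 1:100 would be over 30×, which is why the headline figure is stated as a range.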

Prompts As Programming

“On two occasions I have been asked,—‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

Charles Babbage, Passages from the Life of a Philosopher, 1864

A new programming paradigm? The GPT-3 neural network is so large a model in terms of power and dataset that it exhibits qualitatively different behavior: you do not apply it to a fixed set of tasks which were in the training dataset, requiring retraining on additional data if one wants to handle a new task (as one would have to retrain GPT-2); instead, you interact with it, expressing any task in terms of natural language descriptions, requests, and examples, tweaking the prompt until it “understands” & it meta-learns the new task based on the high-level abstractions it learned from the pretraining. This is a rather different way of using a DL model, and it’s better to think of it as a new kind of programming, where the prompt is now a “program” which programs GPT-3 to do new things. “Prompt programming” is less like regular programming than it is like coaching a superintelligent cat into learning a new trick: you can ask it, and it will do the trick perfectly sometimes, which makes it all the more frustrating when it rolls over to lick its butt instead—you know the problem is not that it can’t but that it won’t.

Reprogramming by asking politely. The demos above and on this page all6 use the raw default GPT-3 model, without any additional training. Instead, to get all these different behaviors, one provides a short textual input to GPT-3, with which it will predict the next piece of text (as opposed to starting with an empty input and freely generating anything); GPT-3, just by reading it, can then flexibly adapt its writing style and reasoning and use new definitions or rules or words defined in the textual input no matter that it has never seen them before.

What is meta-learning? This is considered “meta-learning” because GPT-3 has “learned how to learn”: in its endless training on so many gigabytes of text, it encounters so many different kinds of text that it had no choice but to learn abstractions & how to understand descriptions & instructions & formatting & authorial intent to let it adapt on the fly to the current piece of text it was training on, since there was too much diversity & data for it to simply learn each task normally by repeated exposure—much less memorize all the data. At scale, for a sufficiently powerful (large) NN, the simplest & easiest algorithms to learn for better prediction are abstractions & intelligence: the harder and bigger, the better. When GPT-3 meta-learns, the weights of the model do not change, but as the model computes layer by layer, the internal numbers become new abstractions which can carry out tasks it has never done before; in a sense, the GPT-3 model with the 175b parameters is not the real model—the real model is those ephemeral numbers which exist in between the input and the output, and define a new GPT-3 tailored to the current piece of text. The real GPT-3 is not the fixed hardwired weights, which merely are a bootstrap or a compiler for creating the real GPT-3, a new model customized to the data which exists only briefly in the soft attention weights during runtime, and may do completely different things from the baseline model.7

Few-shot learning/writing prompts: “Software 3.0”? (Andrej Karpathy, 2020-06-18)

Programming by dialogue? Because you aren’t finetuning GPT-3 in the conventional way, interacting with GPT-3 via its few-shot learning power takes on an entirely different feeling than anything else I’ve used before. With regular software, you have to think through exactly how to do something; with deep learning software, you have to focus on providing data which in some way embodies the correct answer which you want; but with GPT-3, you instead think about how to describe what you want. With GPT-3, it helps to anthropomorphize it: sometimes you literally just have to ask for what you want. (It can’t possibly be that easy, can it? Sometimes, it is!) Thus, you can simply ask it directly in the Q&A format: “what is X?” For example, if you want it to detect gibberish questions and avoid trying to answer them and show some understanding of its uncertainty, you can specify in the prompt that it shouldn’t answer nonsense questions, and you can ask it to double-check an earlier answer; if you find it doesn’t seem to understand that a horse has two eyes or that a toaster weighs more than a pencil, perhaps asking more questions with better settings will fix that. Other times, you must instead think, “If a human had already written out what I wanted, what would the first few sentences sound like? What would the introduction and summary sound like? What if I told a story here, how would that story start?” Thus, the summarization prompt: “My second grader asked me what this passage means: …” Some tasks in the GPT-3 paper which showed disappointing performance can be improved dramatically by finding appropriate formatting or prompts: arithmetic improves enormously with comma formatting of decimals (due to BPEs), and the “Word in Context” benchmark, where GPT-3 surprisingly showed below-chance performance compared to the 85% SOTA, can be improved to >70% with better prompting.
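The “ask for what you want” style can be made concrete: a prompt is just a string assembled from an instruction, a few examples, and the new question. This is a minimal sketch; the wording, the helper name, and the examples are my own illustrative choices, not an official API format.

```python
# Build a few-shot Q&A prompt with an explicit ignorance rule, as in the
# gibberish-question example above. The resulting string is the "program";
# it would be submitted as the text of a completion request.
def build_prompt(instructions, examples, question):
    lines = [instructions]
    for q, a in examples:
        lines += [f"Q: {q}", f"A: {a}"]
    lines += [f"Q: {question}", "A:"]
    return "\n".join(lines)

prompt = build_prompt(
    instructions="Answer factual questions. If a question is nonsense, reply 'Unknown'.",
    examples=[("How many eyes does a horse have?", "Two."),
              ("How many rainbows does it take to jump from Hawaii to 17?", "Unknown.")],
    question="Which is heavier, a toaster or a pencil?",
)
```

The trailing “A:” is the key design choice: it constrains the model to continue in the answer slot rather than wandering off into some other genre of text.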

Sampling Can Prove The Presence Of Knowledge But Not The Absence

GPT-3 may “fail” if a prompt is poorly-written, does not include enough examples, or bad sampling settings are used. I have demonstrated this many times when someone shows a “failure” of GPT-3—the failure was their own. The question is not whether a given prompt works, but whether any prompt works.

Any child psychologist trained in administering IQ tests is well-aware of the need to build rapport with children, to monitor them for problems and gauge their linguistic skills: are they not a native English speaker? Are they angry with or afraid of the psychologist? Are they apathetic and unmotivated? It is hard to ace an IQ test by accident, but it’s trivial to fail one on purpose; trying to administer an IQ test to a child who has taken a disliking to you is a waste of the time of everyone involved, and presenting the resulting score as meaningful is professional malpractice.

The Lizardman Constant: nonsense prompt completions by humans.

Another cautionary example comes from survey research. Researchers have demonstrated repeatedly in human surveys that a certain small percentage of human responses will reliably be bullshit: “jokester” or “mischievous responders”, or more memorably, lizardman responders—respondents who give the wrong answer to simple questions. These people are some unanalyzable mix of lazy, stupid8, ignorant, trolling, ‘joking’, or just messing around, possibly causing systematic biases in research that depends on their answers (such as falsely reporting being amputees, adoptees, LGBT, binge-drinkers, etc).

Human failures of logic & common sense, examples. One interesting example: 75–96% of the almost-million-strong membership of California’s American Independent Party appear to have joined by mistake, and thought they had registered as just “independents”. Science surveys are well-known to demonstrate severe problems with both knowledge and responses; similarly, in arithmetic, <20% of Americans are able to identify the even numbers in a list of 6, and ~10% of Americans are able to identify which of 6 numbers are prime (equivalent to random guessing—forget about multiplying 4-digit numbers!); Pew’s surveying finds 5% of atheists are “absolutely” or “fairly certain” that they believe in God (I’ll charitably pass over meat-eating rates in vegans/vegetarians as a case of “the spirit is willing but the flesh is sweet”); in the 2016 US presidential election, 14% of undecided voters said Hillary Clinton might be a demon, but they might vote for her; 2% of Clinton supporters said she was & they would; Scott Alexander mentions the eponymous 4% of responders who say lizardmen rule the earth, but, to continue the infernal theme, notes also that 13% say Barack Obama is the Antichrist (5% voted Obama); and—the monsters—5% approve of using cell phones in movie theaters; and perhaps trafficking with the ruinous powers explains how some respondents claim to have suffered a fatal heart attack and yet are still answering surveys. Such examples could surely be multiplied (literally ad nauseam, even?). Certainly, I (and Scott Alexander) see many bizarre responses any time we are unlucky enough to run a survey which contains a free response field!

Sampling proves presence but not absence in humans too… If challenged on their absurd responses, they will dig in their heels and defend them. Unfortunately, there is no foolproof remedy against lizardman responses (one can use ‘attention checks’ and tests for overclaiming, like letting them endorse lists of items with fakes thrown in to catch the bullshitters), because humans gonna human. The reality is that humans don’t answer questions reliably, accurately, or honestly even close to 100% of the time, and shamelessly fail ‘common sense’ or ‘logic’ or ‘arithmetic’ questions all the time, requiring extensive precautions, careful survey design, and just throwing out a lot of data as garbage.

Humans need prompt programming too. Should we conclude from such cases that humans, or at least some specific humans, are not actually intelligent? No, of course not. We would say that such people have simply not been properly instructed or educated, given incentive to be honest, or made normal unavoidable errors. It would be tendentious in the extreme to conclude that because some people will claim to have suffered fatal heart attacks that they are merely statistical pattern-matching machines emitting plausible yet semantically-null utterances while passing for human; if we want to conclude that, I hope we would probe them a little more thoughtfully than prompting them with some survey items and declaring the case closed!

Demand more from critics. We should expect nothing less of people testing GPT-3, when they claim to get a low score (much less stronger claims like “all language models, present and future, are unable to do X”): did they consider problems with their prompt? Whether all of the hyperparameters make sense for that task? Did they examine where completions go wrong, to get an idea of why GPT-3 is making errors? Did they test out a variety of strategies? Did they consider qualitatively how the failed completions sound? (Or did they copy-paste arbitrary hyperparameters, use the first prompt that came to mind, look at the output, and lazily present it to the world as proof of what GPT-3 can’t do?)

Machine sympathy. Prompt programming often should be human-like: if a human wouldn’t understand what was intended, why would GPT-3? It’s not telepathic, and there are myriads of genres of human text which the few words of the prompt could belong to. (A helpful thought experiment: if someone emailed you a prompt out of the blue, with no other context whatsoever, what would you interpret it as? A joke, a troll, spam, or what?) Prompts should obey Gricean maxims of communication—statements should be true, informative, and relevant. One should not throw in irrelevant details or non sequiturs, because in human text, that implies that those details are relevant, no matter how nonsensical a narrative involving them may be.9 When a given prompt isn’t working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn’t constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary. (This was a particular problem with the literary parodies: GPT-3 would keep starting with it, but then switch into, say, one-liner reviews of famous novels, or would start writing fanfictions, complete with self-indulgent prefaces. The solution was to write out the first 2 or 3 sentences of an example parody, and then GPT-3 would finish out the parody, look back and see that there was an example of a literary parody, and then happily start generating dozens of works+parody pairs, once it fell into the groove.) The more natural the prompt, like a ‘title’ or ‘introduction’, the better; unnatural-text tricks that were useful for GPT-2, like dumping in a bunch of keywords to try to steer it towards a topic, appear less effective or harmful with GPT-3.

Surprisingly powerful. Prompts are perpetually surprising—I kept underestimating what GPT-3 would do with a given prompt, and as a result, I underused it. Text is a weird way to try to input all these queries and output their results or examine what GPT-3 thinks (compared to a more natural NLP approach like using BERT’s embeddings), and fiddly. Just as few people would have thought that you could get GPT-2 to automatically summarize text by simply appending a “TL;DR:” string, few people would guess GPT-3 could write emoji summaries or that if you use a prompt like “Summarize the plot of J.K. Rowling’s Harry Potter in the style of Ernest Hemingway”, you might get out a dozen profanity-laced reviews panning 20th-century literature (or a summary—in Chinese—of the Chinese translation10), or that if you use a prompt like “Transformer AI poetry: Poetry classics as reimagined and rewritten by an artificial intelligence”, GPT-3 will generate poems but then immediately generate explanations of how neural networks work & discussions from eminent researchers like Gary Marcus of why they will never be able to truly learn or exhibit creativity like generating poems. It is difficult to try out variations on prompts because as soon as the prompt works, it’s tempting to keep trying out completions to marvel at the sheer variety and quality as you are seduced into further exploring possibility-space. (GPT-3 never grows impatient or bored.) What other capabilities are latent, waiting to be exposed by someone stumbling across the right prompt?

(Of course, not all these capabilities are necessarily desirable: where there is programming, you can be sure there is hacking. Where there is "prompt programming", there must be "prompt hacking"… GPT-3 can follow instructions, so within its context-window or with any external memory, it is surely Turing-complete, and who knows what weird machines or adversarial reconfigurations are possible? Consider the AI Dungeon users as an early example of "prompt hacking".)

Finetuning

Fine­tun­ing was nec­es­sary to ‘pro­gram’ GPT-2. GPT-3’s “prompt pro­gram­ming” par­a­digm is strik­ingly differ­ent from GPT-2, where its prompts were brit­tle and you could only tap into what you were sure were extremely com­mon kinds of writ­ing, and, as like as not, it would quickly change its mind and go off writ­ing some­thing else. At best, you could fairly gener­i­cally hint at a topic to try to at least get it to use key­words; then you would have to fil­ter through quite a few sam­ples to get one that really wowed you. (This was a trick I used for TWDNE to get it to gen­er­ate at least vaguely ani­me-re­lated plot sum­maries.) To get out­put reli­ably out of GPT-2, you had to fine­tune it on a prefer­ably decen­t-sized cor­pus.

Do we need fine­tun­ing given GPT-3’s prompt­ing? But with GPT-3, you can just say so, and odds are good that it can do what you ask, and already knows what you’d fine­tune it on. (For exam­ple, I thought I would have to fine­tune GPT-3 to get sam­ples of myself, since GPT-2 does­n’t know any­thing about “Gwern”/“gwern.net”; but it turns out, all I have to do is put in “A new essay by Gwern Bran­wen (gw­ern.net):” and out comes an uncanny sim­u­lacrum of myself, or Scott Alexan­der, or Paul Gra­ham, or…) Would it be bet­ter if fine­tuned? Indu­bitably. But it’s not nec­es­sary. And given the cre­ativ­ity of the non-fine­tuned GPT-3, I’m not sure that I even want to—and for­feit all the behav­iors I haven’t yet dis­cov­ered‽

As of mid-June 2020, the Ope­nAI API does not sup­port fine­tun­ing although OA was work­ing on it. But after enough time play­ing with GPT-3, I have begun to won­der: at this level of meta-learn­ing & gen­eral knowl­edge, do we need fine­tun­ing at all?

For GPT-2, I saw fine­tun­ing as doing 2 things:

  1. Fix­ing igno­rance: miss­ing domain knowl­edge

    GPT-2 did­n’t know many things about most things—it was just a hand­ful (1.5 bil­lion) of para­me­ters trained briefly on the tini­est frac­tion of the Com­mon Crawl sub­set of the Inter­net, with­out any books even11. It’s not sur­pris­ing that for many domains, it would­n’t know the details; and even if the dataset included ade­quate text, it did not train on that data many times, and the knowl­edge com­peted with all the other domains it needed to know about, inter­fer­ing.

    But GPT-3 already knows everything! GPT-3 is so much larger on every dimension that this seems like much less of a problem for any domain which is already well-represented in public HTML pages. GPT-2 might need to be trained on a fanfiction corpus to learn about some obscure character in a random media franchise & generate good fiction, but GPT-3 already knows about them and can use them appropriately in writing new fiction.

  2. Prompt­ing a spe­cific task:

    Even when GPT-2 knew a domain adequately, it had the frustrating behavior of rapidly switching domains. You might prompt it with a poem genre it knows adequately already, but then after a few lines, it would generate an end-of-text BPE and switch to generating a news article on Donald Trump. (Trump shows up a lot.) Presumably, while poetry was reasonably represented, it was still rare enough that GPT-2 considered poetry highly unlikely to be what comes next, and kept trying to jump to some more common & likely kind of text; GPT-2 was not smart enough to infer & respect the intent of the prompt.

    GPT-3 exhibits much less of this 'mode switching' sort of behavior. Perhaps this is because it is trained on a much larger and more comprehensive dataset (so news articles aren't so dominant), but I also suspect the meta-learning makes it much better at staying on track and inferring the intent of the prompt—hence things like the "Transformer poetry" prompt, where despite being what must be highly unusual text, even when switching to prose, it is able to improvise appropriate followup commentary.

    Nevertheless, sometimes we can't or don't want to rely on prompt programming. Finetuning may be necessary when a task has evaded our prompt programming skills, or when we have data but not prompt-programmer time. For example, in the GPT-3 paper, many tasks underperform what GPT-3 can do if we take the time to tailor the prompts & sampling hyperparameters, and just throwing the naive prompt formatting at GPT-3 is misleading. However, researchers do not have the time to go through scores of benchmark tasks and fix them one by one; simply finetuning on them collectively ought to do at least as well as the correct prompts would, and requires much less human effort (albeit more infrastructure).

So, what would be the point of fine­tun­ing GPT-3 on poetry or lit­er­a­ture? It has likely already seen the fine­tun­ing cor­pus, knows most of it, and will tractably gen­er­ate poems on demand. There may be gains, but I won­der if they would be nearly as large as they were for GPT-2?

Playground

All of the fol­low­ing sam­ples were gen­er­ated using the Ope­nAI Beta Play­ground, which looks like this:

OA API Beta Play­ground UI & avail­able prewrit­ten prompts/sampling options

The Playground has some rough edges in Beta, and capacity issues. A good way to start is to generate samples with the logprobs/logits turned on, paying attention to how sampling hyperparameters affect output, to gain intuition for how GPT-3 thinks & what samples look like when sampling goes haywire.

The quality vs diversity tradeoff for top-k/nucleus sampling on GPT-2 news articles: more extreme settings like top-k = 10 / top_p = 0.6 are equally good for getting the highest human ratings—but both come at the expense of variety of possible completions.

Tradeoff: diversity vs accuracy. It offers the standard sampling options familiar from earlier GPT-2 interfaces, including temperature & nucleus (top_p) sampling. One primarily manipulates the temperature setting to bias towards wilder or more predictable completions; for fiction, where creativity is paramount, it is best set high, perhaps as high as 1, but if one is trying to extract things which can be right or wrong, like question-answering, it's better to set it low to ensure it prefers the most likely completion. (After all, the point of a high temperature is to regularly select completions which the model thinks aren't likely; why would you do that if you are trying to get out a correct arithmetic or trivia answer?) For top_p, one can set it to ~0.95 and largely forget about it unless one suspects that it's breaking answers like top-k does and it needs to be much lower, like 0.5; it's there to cut off the tail of gibberish completions and reduce repetition, so it doesn't affect the creativity too much. I generally avoid use of the repetition penalties because I feel repetition is critical to creative fiction, and I'd rather err on the side of too much than too little, but sometimes they are a useful intervention; GPT-3, sad to say, maintains some of the weaknesses of GPT-2 and other likelihood-trained autoregressive sequence models, such as the propensity to fall into degenerate repetition.
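To make the temperature & top_p mechanics concrete, here is a minimal sketch of how such a sampler works (a toy over a logit dictionary, not the API's actual implementation; the token names are hypothetical):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, top_p=0.95):
    """Toy sketch of temperature + nucleus (top_p) sampling.
    Low temperature sharpens the distribution towards the most likely token;
    top_p keeps only the smallest set of tokens covering that probability mass,
    cutting off the gibberish tail."""
    # Scale logits by 1/temperature, then softmax into probabilities.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    exp = {tok: math.exp(l - m) for tok, l in scaled.items()}
    z = sum(exp.values())
    probs = {tok: p / z for tok, p in exp.items()}
    # Nucleus truncation: keep most-likely tokens until top_p mass is covered.
    kept, mass = [], 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Sample from the renormalized truncated distribution.
    r = random.random() * sum(p for _, p in kept)
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

At temperature 0.1 this almost always returns the argmax token; at temperature 1 with top_p=1 it samples the full distribution, which is what makes high-temperature fiction sampling creative but error-prone.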

Ranking final results for quality gain. A little more unusually, it offers a "best of" (BO) option, a ranking trick (other names include the "random-sampling shooting method"): generate n possible completions independently, and then pick the one with the best total likelihood. This avoids the degeneration that an explicit tree/beam search would unfortunately trigger, as documented most recently by the nucleus-sampling paper & reported by many others about likelihood-trained text models in the past (eg char-RNN in 2015). I'm not sure how best to use BO: it seems to be highly helpful for things with one right answer (such as tricky Q&A or reasoning), but when it helps with 'creative' completions is less clear. I tried out BO heavily because I couldn't quite figure out how it interacts with quality. On the smaller models, it seems to help boost quality up towards 'davinci' (GPT-3-175b) levels without causing too much trouble, but on davinci, it appears to exacerbate the usual sampling issues: particularly with poetry, it's easy for a GPT to fall into repetition traps or loops, or spit out memorized poems, and BO makes that much more likely. For generating completions of famous poems, it's quite hard to get GPT-3 to generate new versions unless you actively edit the poem to force a difference. (In the most extreme case, generating new variations on "Jabberwocky", I have been unable to generate any new versions under any setting, even taking the step of aggressively editing in new lines about how the vorpal sword bounced off the Jabberwocky and it won… It always spits out chunks of the original.12) So BO is a double-edged sword. The best way I found to use it is to sample without it (BO=1) at max temp, and then once the completion has several distinctly different lines, sampling with more (eg BO=5) seems to help rather than hurt.
This is a little surprising to me because for Meena, it made a large difference to do even a little BO, and while it had diminishing returns, I don't think there was any point they tested where higher best-of values made responses actually much worse (as opposed to merely n times more expensive). Possibly BO is much more useful for nonfiction/information-processing tasks, where there's one correct answer and BO can help overcome errors introduced by sampling or myopia.
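The BO ranking trick itself is simple; a sketch (the candidate texts & per-token logprobs below are made up for illustration):

```python
def best_of(candidates):
    """'Best-of' ranking: given candidate completions with per-token
    log-probabilities, return the text with the highest total log-likelihood.
    Note that every extra token adds a negative logprob, so BO tends to
    favor short, safe, or memorized completions—one reason it can hurt
    creative sampling while helping Q&A."""
    return max(candidates, key=lambda c: sum(c[1]))[0]

# Hypothetical candidates: (text, per-token logprobs)
candidates = [
    ("Colorless green ideas sleep furiously.", [-4.1, -6.0, -5.2, -3.9, -2.5]),
    ("The cat sat on the mat.", [-1.2, -0.8, -1.5, -0.3, -0.2, -0.9]),
]
best_of(candidates)  # → "The cat sat on the mat."
```

The bland sentence wins the likelihood ranking, which illustrates concretely why BO exacerbates repetition traps & memorized-poem regurgitation in poetry sampling.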

Effective Prompt Programming

“To con­strain the behav­ior of a pro­gram pre­cisely to a range may be very hard, just as a writer will need some skill to express just a cer­tain degree of ambi­gu­i­ty. A com­puter is like a vio­lin. You can imag­ine a novice try­ing first a phono­graph and then a vio­lin. The lat­ter, he says, sounds ter­ri­ble. That is the argu­ment we have heard from our human­ists and most of our com­puter sci­en­tists. Com­puter pro­grams are good, they say, for par­tic­u­lar pur­pos­es, but they aren’t flex­i­ble. Nei­ther is a vio­lin, or a type­writer, until you learn how to use it.”

Marvin Minsky, “Why Programming Is a Good Medium for Expressing Poorly-Understood and Sloppily-Formulated Ideas”, 1967

Anthro­po­mor­phize your prompts. There is no sub­sti­tute for test­ing out a num­ber of prompts to see what differ­ent com­ple­tions they elicit and to reverse-engi­neer what kind of text GPT-3 “thinks” a prompt came from, which may not be what you intend and assume (after all, GPT-3 just sees the few words of the promp­t—it’s no more a telepath than you are). If you ask it a ques­tion to test its com­mon­sense rea­son­ing like “how many eyes does a horse have” and it starts com­plet­ing with a knock­-knock joke, you need to rethink your prompt! Does it spit out com­ple­tions that look like it’s think­ing but it’s exe­cut­ing the wrong algo­rithm, or it falls back to copy­ing parts of the input? Then one may need to few-shot it by pro­vid­ing exam­ples to guide it to one of sev­eral pos­si­ble things to do. One should also keep in mind the impor­tance of sam­pling para­me­ters, and whether one is look­ing for a sin­gle cor­rect answer (so low temp with BO=1 if com­pute-lim­it­ed, or high temp and BO=20 if pos­si­ble) or if one is try­ing for cre­ative answers (high temp with rep­e­ti­tion penalties).
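When few-shotting is needed, the mechanics are just careful string assembly; a minimal sketch of building such a prompt (the Q/A format and example questions here are my own, not an official template):

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, worked Q/A examples
    to pin down which of several possible tasks is meant, then the new
    query left dangling for GPT-3 to complete."""
    lines = [instruction, ""]
    for q, a in examples:
        lines += [f"Q: {q}", f"A: {a}", ""]
    lines += [f"Q: {query}", "A:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Answer each question with a single number.",
    [("How many eyes does a horse have?", "2"),
     ("How many legs does a spider have?", "8")],
    "How many wheels does a bicycle have?")
```

Ending on a bare "A:" imitates the start of the correct output, which (as discussed above) constrains GPT-3 into the intended mode rather than, say, knock-knock jokes.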

The 4 Horsemen: short context, bad prompts, BPEs, random sampling. My rule of thumb when dealing with GPT-3 is that if it is messing up, the errors are usually attributable to one of 4 problems: too-short context windows, insufficient prompt engineering, BPE encoding making GPT-3 'blind' to what it needs to see to understand & solve a problem, or noisy sampling sabotaging GPT-3's attempts to show what it knows. Another useful heuristic is to try to express something as a multi-step reasoning process, such as a dialogue: because GPT-3 is a feedforward NN, it can only solve tasks which fit within one "step" or forward pass; any given problem may be too inherently serial for GPT-3 to have enough 'thinking time' to solve it, even if it can successfully solve each intermediate sub-problem within a step. So people have demonstrated that GPT-3 won't solve a simple math problem in a single step, but it will solve it if you reframe it as a 'dialogue' with an anime wolfgirl demon character—who knew neural network research would lead to anime wolfgirl demonology?—and even ask it to guess-and-check or brute-force the answer; one can also experiment in coaching it through examples13, or asking it about previous answers or its uncertainty. This makes sense if we think of Transformers as unrolled RNNs which unfortunately lack a hidden state: serializing out the reasoning helps overcome that computational limitation.

Log­prob debug­ging. GPT-3 does not directly emit text, but it instead pre­dicts the prob­a­bil­ity (or “like­li­hood”) of the 51k pos­si­ble BPEs given a text; instead of merely feed­ing them into some ran­dom­ized sam­pling process like tem­per­a­ture top-k/top_p sam­pling, one can also record the pre­dicted prob­a­bil­ity of each BPE con­di­tional on all the pre­vi­ous BPEs. This gives you a sim­ple idea of what GPT-3 is think­ing about each BPE: is it likely or unlikely (given the pre­vi­ous BPEs)? Which BPEs are espe­cially unlike­ly? Does it “get it” as the com­ple­tion goes on? I don’t use log­probs much but I gen­er­ally use them in 1 of 3 ways: I use them to see if the prompt ‘looks weird’ to GPT-3; to see where in a com­ple­tion it ‘goes off the rails’ (sug­gest­ing the need for lower temperatures/top_p or higher BO); and to peek at pos­si­ble com­ple­tions to see how uncer­tain it is about the right answer—a good exam­ple of that is Arram Sabeti’s uncer­tainty prompts inves­ti­ga­tion where the log­probs of each pos­si­ble com­ple­tion gives you an idea of how well the uncer­tainty prompts are work­ing in get­ting GPT-3 to put weight on the right answer, or in my par­ity analy­sis where I observed that the log­probs of 0 vs 1 were almost exactly 50:50 no mat­ter how many sam­ples I added, show­ing no trace what­so­ever of few-shot learn­ing hap­pen­ing. Thus, log­probs can offer more insight while debug­ging a prompt than just repeat­edly hit­ting ‘com­plete’ and get­ting frus­trat­ed.
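The per-token scan I do by eye in the Playground can be sketched mechanically like this (the token/logprob arrays are illustrative values, not real API output; the API can return such logprobs when requested):

```python
def flag_surprising_tokens(tokens, logprobs, threshold=-5.0):
    """Logprob debugging: flag positions where the model assigned very low
    probability to the token actually emitted—often exactly where a
    completion 'goes off the rails', suggesting lower temperature/top_p
    or higher BO."""
    return [(i, tok, lp)
            for i, (tok, lp) in enumerate(zip(tokens, logprobs))
            if lp < threshold]

# Illustrative values only:
tokens   = ["The", " cat", " sat", " on", " flamingo"]
logprobs = [-1.0, -0.5, -0.7, -0.3, -9.2]
flag_surprising_tokens(tokens, logprobs)  # → [(4, ' flamingo', -9.2)]
```

The same per-token view is what reveals a prompt 'looking weird' to GPT-3 (high surprisal from the very first tokens) versus sampling going haywire later on.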

AI Dun­geon ≠ GPT-3

While a neat trick & big upgrade over public GPT-2 models, AID is not an unrestricted full-power GPT-3—caveat emptor!

AI Dun­geon < GPT-3. For peo­ple using the AI Dun­geon (AID) route, things are tricky because AID users don’t have the same sam­pling options that API users do (no best-of is par­tic­u­larly painful when try­ing to elicit cor­rect answers to hard ques­tion­s), and no con­trol over the full prompt/history, with AID doing lots of things behind the scenes on a model that may have been fine­tuned on RPG-like mate­r­ial & count­less AID game tran­scripts etc, and with qual­ity of model com­pletely out of their hands (does choos­ing “cus­tom” get you Drag­on, or do you have to choose a differ­ent mode & edit it? the nec­es­sary trick seems to change over time), with occa­sional dras­tic qual­ity drops reported by many AID users when… some­thing changes on the back­end. For exam­ple, if you are an AID user, were you aware that the first response for a cus­tom prompt is actu­ally always GPT-2, to try to block back­door GPT-3 access? Or that “We cut off the gen­er­a­tion at cer­tain points (trail­ing sen­tences etc…) Dis­able cer­tain tokens to improve per­for­mance or make gen­er­a­tion safer, fine-tune on text adven­tures and only use the last ~1000 tokens of con­text.” A cau­tion­ary exam­ple of AID use comes from Gary Mar­cus & Ernest Davis’s use: they fil­tered a large num­ber of ques­tions through AID to try to find cases GPT-3 would fail on; how­ev­er, when the AID fail­ure cases were tested on GPT-3 by Dou­glas Sum­mer­s-S­tay, it solved half of them! (AID is designed to pro­duce fun text adven­tures, not be a NLP test­bed, and that shows when one tries to use AID as a back­door to GPT-3.) To work around this, AID users seem to need to warm up ses­sions care­fully with descrip­tive prompts/interactions to over­come the gam­i­fi­ca­tion, and avoid any­thing that might veer back into com­edy or dra­ma.

Only once these have been ruled out do I start con­sid­er­ing alter­na­tive expla­na­tions like “lan­guage mod­els will never solve X”.

Weaknesses

Limited memory, repetition/divergence, BPE encoding. GPT-3 is, of course, not perfect. We should keep that in mind when evaluating it. As a scaled-up GPT-2, it has mostly the same weaknesses, and my thoughts on improvements remain mostly the same (aside from moving away from BPEs, a need which is becoming increasingly urgent; see the next section).

Arti­fi­cial intel­li­gence pro­grams like deep learn­ing neural net­works may be able to beat humans at play­ing Go or chess, or doing arith­metic, or writ­ing Navy Seal copy­pas­ta, but they will never be able to truly think for them­selves, to have con­scious­ness, to feel any of the rich­ness and com­plex­ity of the world that we mere humans can feel. Mere, unen­light­ened humans might be impressed by the abil­i­ties of sim­ple deep learn­ing pro­grams, but when looked at in a more holis­tic man­ner, it all adds up to… well, noth­ing. They still don’t exhibit any trace of con­scious­ness. All of the avail­able data sup­port the notion that humans feel and expe­ri­ence the world differ­ently than com­put­ers do. While a com­puter can beat a human mas­ter at chess or Go or some other game of struc­tured rules, it will never be able to truly think out­side of those rules, it will never be able to come up with its own new strate­gies on the fly, it will never be able to feel, to react, the way a human can. Arti­fi­cial intel­li­gence pro­grams lack con­scious­ness and self­-aware­ness. They will never be able to have a sense of humor. They will never be able to appre­ci­ate art, or beau­ty, or love. They will never feel lone­ly. They will never have empa­thy for other peo­ple, for ani­mals, for the envi­ron­ment. They will never enjoy music or fall in love, or cry at the drop of a hat. Merely by exist­ing, mere, unen­light­ened humans are intel­lec­tu­ally supe­rior to com­put­ers, no mat­ter how good our com­put­ers get at win­ning games like Go or Jeop­ardy. We don’t live by the rules of those games. Our minds are much, much big­ger than that.

Wait, I’m sor­ry—that pre­ced­ing para­graph on the weak­nesses of deep learn­ing was actu­ally writ­ten by GPT-3, and is in the wrong sec­tion. (Man­age­ment regrets the mis­take.) But seri­ous­ly, what weak­nesses does GPT-3 have?

Small Context Window

No mem­ory (fix­able). The first limit is that it remains hob­bled by the lim­ited con­text win­dow. GPT-3 has no form of mem­ory or recur­rence, so it can­not see any­thing out­side its lim­ited 2048 BPEs (rough­ly, 500–1000 word­s). This means it can­not hope to write any­thing of any seri­ous length, because the begin­ning will soon van­ish over the event hori­zon, and it also lim­its its abil­ity to engage in few-shot learn­ing, for the same rea­son: the promp­t+­gen­er­a­tion will quickly exceed the win­dow length. While the dam­age may be lim­ited for tasks where the for­mat is repet­i­tive, like Q&A (so GPT-3 can do the nec­es­sary meta-learn­ing over its com­ple­tions just as well as over the orig­i­nal promp­t), this does limit it and is frus­trat­ing.

Repetition/Divergence Sampling

Repetition/gibberish (mystery). Autoregressive language models trained by likelihood (prediction) loss all share an extremely annoying problem: when you generate free-form completions, they have a tendency to eventually fall into repetitive loops of gibberish. Whether GPT-2 or T5 or etc, they all seem to do it, and if one tries to avoid such extremely dumb & crude sampling strategies like top-k temperature sampling by doing explicit search for likely text completions, such as beam search, these searches actually make the problem worse, and the better your search is, the worse the results are. Tweaks like nucleus sampling can reduce it, but do not eliminate it. (No one has tried gradient ascent for generating optimal samples, as far as I know.) Since GPT-2-1.5b seemed almost as prone as GPT-2-117M, I was unsurprised to find that GPT-3 too falls easily into the repetition trap.

Why repetition? This behavior remains puzzling and I don't think anyone really knows how to fix it. Top-k or nucleus sampling can't be right and are clearly ugly ad hoc hacks, but is the core problem likelihood training or sampling, or what? And why is it never a problem for other kinds of sequences like images, and much less of one for music, or in tasks like neural translation where tricks like beam search are always used because they do improve results? (We don't see it in char-RNNs or GPT-2s trained on ABC/MIDI music, or OA Jukebox trained on raw audio; we certainly don't see it in iGPT or PixelRNN etc.) Likelihood training is compellingly simple and efficient, and we know that real brains are constantly predicting future inputs; it seems implausible that the entire problem will disappear if we slap on some Bayesian tricks to get posterior estimates of the likelihood of each possible BPE completion (and I'm not aware of anyone showing that it does in something like a small Bayesian RNN trained with HMC or by using deep ensembling or other Bayesian approximations). Further, if likelihood training is so bad, why does minimizing the predictive loss work so consistently over a wide range to improve the quality of generations and how useful the model is for zero/few-shot learning or semi-supervised tasks, and why does the loss correlate near-perfectly with human ratings of quality in the Meena paper?

Language Prediction = Imitation Learning? My intuition is that the repetition trap is essentially the DAgger/off-policy imitation learning problem in a non-RL guise: as the model is fed back in its own guesses as a ground truth, the hallucinated text becomes gradually more off-policy and divergent from real human-written text (which is backed by a knowledge base & a purpose), and the model is unable to come up with sensible continuations (having never trained on such gibberish) and does not 'want' to get back on track (having been trained purely to make one-step predictions). The solution might look something like detecting when a completion might go too far off-distribution and backtracking, or more RL-like training of generation as opposed to mere prediction. It would probably help also to use some sort of hierarchical or planning method: one might be able to convince GPT-3 to generate summaries and then expand each line of the summary recursively (one project does something similar, using a bag-of-words topic representation with GPT-2 to "upscale" a seed).
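A crude version of such off-distribution detection—backtrack whenever the most recent n-gram has already appeared in the completion—might look like the following (a sketch of the idea, not a tested decoding strategy):

```python
def in_repetition_trap(tokens, n=4):
    """Detect the degenerate-repetition trap: has the most recent n-gram
    already occurred earlier in the completion? A sampler could use this
    signal to backtrack and resample rather than loop forever."""
    if len(tokens) < 2 * n:
        return False
    tail = tuple(tokens[-n:])
    earlier = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n)}
    return tail in earlier
```

Repetition penalties in the Playground are a softer version of this same idea, down-weighting recently emitted tokens instead of backtracking outright.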

BPEs

Com­pared to GPT-2, GPT-3 improves per­for­mance on char­ac­ter-level tasks like rhyming, allit­er­a­tion, pun­ning, anagrams/permutations, acros­tic poems, and arith­metic less than expect­ed, despite being very good at many other close­ly-re­lated kinds of writ­ings like satire.

Why? A plau­si­ble expla­na­tion is an obscure tech­ni­cal detail: as a per­for­mance opti­miza­tion, GPT does not see char­ac­ters but sub­-word-chunks called “byte-pair encod­ings” (BPEs). Because GPTs never see char­ac­ters but opaque par­tial-words, which vary chaot­i­cally based on the spe­cific word and even the sur­round­ing con­text, they are unable to eas­ily learn about char­ac­ter-level aspects of lan­guage, like sim­i­lar spellings or sounds, and are forced to learn rela­tion­ships much more indi­rect­ly, like by brute-force mem­o­riz­ing of pairs of words.

Some experiments with reformatting GPT-3's poorest-performing tasks to avoid inconsistent BPE encodings of strings show small to large performance gains, consistent with this theory.

Bad at phonetic/character-level tasks. Disappointingly, the issues that have been noticed with GPT-2-poetry's disinclination to rhyme remain. GPT-3 rhymes reasonably well and often when appropriate, but the improvement is much smaller on rhyming than it is on pretty much everything else. Apparently it is easier for GPT-3 to learn things like arithmetic and spreadsheets than it is to learn how to rhyme. A similar issue comes with puns. Better, but not as much better as one would expect given the leap on many other capabilities. Trying to generate puns or rhymes, it seems like GPT-3 knows extremely well what they are on an abstract level, and will appropriately manipulate words and attempt to make puns or rhymes (see the shoggoth-cat dialogue below for a particularly striking example), but the words it chooses just aren't right on a phonetic basis. On the other hand, it's not as if GPT-3 is unable to understand humor—it is a brilliant mimic with parodies, has a cutting wit for satire, and can generate one-liners easily like the "I have a joke" format (1, 2) or Drake memes, as long as they rely more on semantics than syntax.

BPEs ≠ characters! My suspicion here is that these, and perhaps other issues, are due to the lossy BPE encoding. GPT models do not see individual characters, but instead larger chunks, called byte-pair encodings (BPEs); byte-pair encoding is a simple compression scheme where 50,257 word fragments or characters are chosen to try to minimize the encoding length on some arbitrary text corpus, so a particularly common word may get a unique BPE while a longer word will be encoded as 2 or 3 BPEs, and a completely novel word will be encoded letter-BPE by letter-BPE as a fallback. Hence, even if 2 words sound and are spelled similarly, they may be given totally different BPE encodings which don't have a single BPE in common. Indeed, because of the context dependence, BPEs are not even deterministic from the user's perspective: when a completion of n tokens length is requested, you may get different results because of different BPE encodings—based on whether a given piece of text was input word by word and conditioned on as part of the user/Playground's prompt, or was generated by GPT-3 as part of a completion! Nostalgebraist discussed the extreme weirdness of BPEs and how they change chaotically based on whitespace, capitalization, and context for GPT-2, with a followup post for GPT-3 on the even weirder encoding of numbers sans commas. (Another example Nostalgebraist doesn't touch on but Shawn Presser discovered: the symbol denoting end-of-text, <|endoftext|>, is, in the training dataset, assigned to the very last BPE ID; however, if you convert the ASCII string "<|endoftext|>" into BPEs, such as while preparing your own finetuning dataset, it typically converts into different BPEs! I'm not sure how GPT-2 manages to work with the end-of-text BPEs anyway. BPEs are weird.)
I read Nostalgebraist's post at the time, but I didn't know whether that was really an issue for GPT-2, because problems like lack of rhyming might just be GPT-2 being stupid, as it was rather stupid in many ways; I kept it in mind while evaluating GPT-3, however.
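A toy greedy longest-match tokenizer (with a hypothetical handful of vocabulary entries—the real GPT vocabulary has 50,257 learned entries and different merge rules) shows how two rhyming words can end up sharing no tokens at all:

```python
def greedy_tokenize(text, vocab):
    """Toy stand-in for BPE: greedily match the longest vocabulary entry,
    falling back to single characters for anything unknown."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # single-character fallback
            i += 1
    return tokens

vocab = {"rose", "pro", "se"}       # hypothetical vocabulary
greedy_tokenize("rose", vocab)       # → ['rose']
greedy_tokenize("prose", vocab)      # → ['pro', 'se']
# The rhyme 'rose'/'prose' is invisible: the two encodings share no token.
```

From the model's point of view, learning that these two token sequences rhyme is pure pairwise memorization, with no phonetic signal to generalize from.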

Efficient… but limiting. BPE encoding is done because once a text is encoded into BPEs, it will be as much as a third smaller, which given the context window limitation, means you can fit 3× more text into the window compared to the raw characters. This is indeed quite a gain, but it is a double-edged sword: it is confusing to write code for it because the BPE encoding of a text is unfamiliar & unpredictable (adding a letter can change the final BPEs completely), and the consequences of obscuring the actual characters from GPT are unclear. I think that BPEs bias the model and may make rhyming & puns extremely difficult because they obscure the phonetics of words; GPT-3 can still do it, but it is forced to rely on brute force, by noticing that a particular grab-bag of BPEs (all of the different BPEs which might encode a particular sound in its various words) correlates with another grab-bag of BPEs, and it must do so for every pairwise possibility. How can you ask GPT-3 to write a poem where every word starts with 's' when 's' encodes to, say, BPE #23, and every word that starts with 's' like 'Sally' is encoded as Sal|l|y / [2301,14,25]…? It'd be unsurprising if GPTs struggled to understand & manipulate things on the character level given that the entire point of BPE is to compress away characters as much as possible. (There are similar issues in neural machine translation: analytic languages, which use a relatively small number of unique words, aren't too badly harmed by forcing text to be encoded into a fixed number of words, because the order matters more than what letters each word is made of; the lack of letters can be made up for by memorization & brute force.
However, an agglutinative or heavily compounding language like Finnish or German—with their famously long words like kumarreksituteskenteleentuvaisehkollaismaisekkuudellisenneskenteluttelemattomammuuksissansakaankopahan or Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz/'law to transfer duties of monitoring labelling of beef' formed by constantly adding additional letters/words—has countless unique or extremely rare words no matter how large your corpus, all of whose internal structure of letters & sub-words is hidden by a word embedding, which destroys the ability to understand them.)

Reformatting to beat BPEs. I have further observed that GPT-3's anagram capabilities appear to improve considerably if you separate each letter in an anagram with a space (guaranteeing that the letter will have the same BPE in both the scrambled & unscrambled versions). And Matt Brockman has observed, testing thousands of examples over several orders of magnitude, that GPT-3's arithmetic ability—surprisingly poor when we know far smaller Transformers work well in math domains (eg Thopliterce, among others)—appears to dramatically improve several-fold if you merely format numbers with commas instead of being purely numeric (with an additional boost from using dollar signs); I confirmed this with my Turing dialogue example where GPT-3 fails badly on the arithmetic sans commas & low temperature, but often gets it exactly correct with commas.14 (Why? More written text may use commas when writing out implicit or explicit arithmetic, yes, but use of commas may also drastically reduce the number of unique BPEs as only 1–3 digit numbers will appear, with consistent BPE encoding, instead of having encodings which vary unpredictably over a much larger range.) I also note that GPT-3 improves on anagrams if given space-separated letters, despite the fact that this encoding is 3× larger. Likewise, acrostic poems just don't work if we input them normally, but they do if we carefully expose the relevant individual letters. This explains naturally why rhyming/puns improve gradually with parameter/data size and why GPT-3 can so accurately define & discuss them, but there is never any 'breakthrough' like with its other capabilities. We assume character-level understanding so implicitly that we fail to even consider what things look like to GPT-3 after BPE encoding.
(I have not been able to test whether GPT-3 will rhyme fluently given a proper encoding; I have tried out a number of formatting strategies, using the IPA to encode rhyme-pairs at the beginning or end of lines, annotated within lines, space-separated, and non-IPA-encoded, but while GPT-3 knows the IPA for more English words than I would’ve expected, none of the encodings show a breakthrough in performance like with arithmetic/anagrams/acrostics. It’s worth noting that one group had to train their rhyme-specific sonnet-only model directly on character-level representations of end-rhyme pairs.)

BPE sabotage is common. Thus far, the BPE encoding appears to sabotage performance on rhyming, alliteration, punning, anagrams/permutations, acrostics, arithmetic, and Melanie Mitchell’s Copycat-style letter analogies (GPT-3 fails without spaces on “abc : abcd :: ijk : ijl” but succeeds when space-separated, although it doesn’t solve all letter analogies and may or may not improve with priming using Mitchell’s own article as the prompt; compare with a 5-year-old child). I wonder what other subtle GPT artifacts BPEs may be causing?15 For example, consider puns: BPEs mean that GPT-3 can’t learn puns because it doesn’t see the phonetics or spellings that drive verbal humor; but the training data will still be filled with verbal humor—so what does GPT-3 learn from all that? Perhaps it learns that “humor” is a kind of writing where the convention is to tell a superficially sensible story which then ends in an (apparently) arbitrary randomly-chosen word… Another question is foreign languages like Russian; one user noticed that when they triggered Russian, completions seemed to work one letter at a time, which hints that it sees Russian encoded as individual characters, but attempts to trigger rhyming or puns just yielded Russian gibberish, perhaps showing the flip side of the BPE problem—with a fixed small context window, not using BPEs, particularly on low-n data (Russian is ~0.18% of the GPT-3 training dataset), may itself hamper performance badly.16 (One has to assume that a synthetic & low-resource language like Turkish will be just gibberish. Transfer learning from English only goes so far.)
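The space-separation workaround is purely mechanical; a trivial helper (the analogy prompt template here is my own illustration, not a tested prompt) reformats a letter-analogy query so that every character is guaranteed its own token:

```python
def spell_out(s):
    # one space between characters, so each maps to a single-character BPE
    return " ".join(s)

def analogy_prompt(a, b, c):
    # e.g. "a b c : a b c d :: i j k :" — the model completes the fourth term
    return f"{spell_out(a)} : {spell_out(b)} :: {spell_out(c)} :"
```

The same trick applies to anagrams and acrostics: any task depending on individual letters benefits from making those letters visible to the model as separate tokens, even at a 3× cost in context window.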

Fixing BPEs. BPEs were useful for smaller models that needed as much context window as possible and which wouldn’t benefit much from access to the raw characters (or would be harmed because they’d underfit), but it appears it is time to discard them as we become able to pay more compute for better results. This is fixable by the same methods as fixing the context window; once the context window limit is broken and one has effective contexts of, say, l=60k, then one can afford to spend 40k of it moving to character-based inputs. Another idea, if character-level models are still infeasible, is to try to manually encode the knowledge of phonetics, at least, somehow; one way might be to data-augment inputs by using linguistics libraries to convert random texts to IPA (which GPT-3 already understands to some extent). By seeing a phonetic-encoded version of random texts, it should learn what words sound similar even if they have radically different BPE representations. A third idea is BPE dropout: randomize the BPE encoding, sometimes dropping down to character-level & alternative sub-word BPE encodings, averaging over all possible encodings to force the model to learn that they are all equivalent, without losing too much context window while training any given sequence. And there may be encodings which just work better than BPEs.
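The dropout idea can be sketched with a toy greedy tokenizer in which each multi-character match is rejected with probability p, forcing a finer segmentation (this only approximates the published BPE-dropout technique, which drops individual merge operations rather than matches):

```python
import random

def dropout_tokenize(s, vocab, p=0.1, max_len=4):
    """Greedy longest-match tokenization, but each multi-char match is
    rejected with probability p, falling through to shorter pieces."""
    toks, i = [], 0
    while i < len(s):
        for length in range(min(max_len, len(s) - i), 0, -1):
            piece = s[i:i + length]
            if length == 1 or (piece in vocab and random.random() >= p):
                toks.append(piece)
                i += length
                break
    return toks

# p=0 reproduces the deterministic segmentation; p=1 is pure character-level.
# Training over many random segmentations of the same string teaches the
# model that they all denote the same underlying text.
```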

Format

In the samples below, bold denotes all human-written input; everything not in bold is computer-written.19 For multiple completions of the same prompt, I omit the prompt with a bold ellipsis: “…” In my other GPT samples, I have generally used codeblock formatting, but GPT-3 samples are often long lines (and more worth reading), so here, I have tried to edit the samples as little as possible while still keeping them readable in blockquotes.

As far as the sampling goes: I used the largest “davinci” GPT-3-175b model unless otherwise specified. (Davinci is the highest quality and not too slow: ~147 WPM.) Since I only speak English well, I avoid testing any foreign-language material. These are not all samples I generated on the first try: I was regularly editing the prompts & sampling settings as I explored prompts & possible completions. The sampling settings were generally roughly as I advise above: high temperature, slight top-p truncation & repetition/presence penalty, and occasional use of high best-of (BO) where it seems potentially helpful (specifically, anything Q&A-like, or where it seems like GPT-3 is settling for local optima while greedily sampling but longer high-temperature completions jump out as better completions).
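In API terms, the advice amounts to something like the following (parameter names are from the 2020 OpenAI Completion API; the specific numeric values here are my own guesses at “high temperature, slight truncation, some penalty”, not the exact settings used for any given sample):

```python
def sampling_settings(qa_like=False):
    """Baseline creative-sampling settings; for Q&A-style prompts with one
    right answer, crank best_of up so the API reranks many completions."""
    settings = dict(
        engine="davinci",        # largest GPT-3 model
        temperature=1.0,         # high temperature for creative text
        top_p=0.95,              # slight nucleus truncation
        presence_penalty=0.5,    # discourage repetition (guessed value)
        frequency_penalty=0.5,   # (guessed value)
        best_of=1,
    )
    if qa_like:
        settings["best_of"] = 20  # eg BO=20 for the pun-explanation Q&A below
    return settings
```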

I am not claim­ing that these sam­ples are strictly sci­en­tific and best-of-5 or any­thing. (I would guess that the selec­tions below are roughly best-of-2 on aver­age.) How­ev­er, the sam­ples are only min­i­mally edited dur­ing gen­er­a­tion. I attempted to exer­cise cura­tion rather than edit­ing, so I did not edit the com­puter text; I kept them as-is, or I deleted the entire sec­tion to re-sam­ple. But if read­ers still think I wrote the best parts of this page, then I will shame­lessly steal the cred­it.

Dialogue

Shoggoth-Cat

It’s all text. What does the desired task look like? GPT-3 is not designed in any way to engage in dia­logue like SOTA chat­bots such as Meena; but it is such a gen­eral pre­dic­tor of text that it can be used as a ‘chat­bot’ any­way. GPT-3 can be trig­gered into a chat­bot mode sim­ply by label­ing roles; one can have an “AI” and “human” chat with each other (GPT-3 does that well), or one can take on one of the roles by edit­ing the text appro­pri­ately after each “AI” com­ple­tion (re­mem­ber, promp­t-pro­gram­ming is purely tex­tu­al, and can be any­thing you wan­t). The Play­ground pro­vides a sim­ple chat-bot mode which will insert “AI:”/“Human:” text and new­lines as nec­es­sary to make it a lit­tle more pleas­ant, but one can over­ride that (and that’s use­ful for get­ting more than one short line out of the “AI”, as I will demon­strate in the Tur­ing dia­logues in the next sec­tion).
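Mechanically, “chatbot mode” is nothing but text concatenation; a minimal sketch (my own helper, not the Playground’s actual implementation) of maintaining a role-labeled transcript:

```python
def build_chat_prompt(preamble, history, next_role="AI"):
    """Assemble the full prompt: a preamble, alternating role-labeled
    turns, and a trailing role label for the model to complete."""
    lines = [preamble, ""]
    for role, text in history:
        lines.append(f"{role}: {text}")
    lines.append(f"{next_role}:")
    return "\n".join(lines)
```

Each completion is appended to `history` as an `("AI", …)` turn; and because the transcript is just text, you can freely edit or overwrite any turn before re-sampling—which is exactly the override trick used for longer “AI” lines below.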

Below is a dia­logue I did to probe GPT-3’s under­stand­ing of humor and puns:

AI: I am AI cre­ated by Ope­nAI. How can I help you today?

Human: I am feel­ing bored. Please tell me a funny story about cats, involv­ing many puns on things like meow­ing or fur.

AI: Well, there was this cat named Bob…

Human: What did Bob do?

AI: When Bob started to notice that he was­n’t feel­ing well, he did the only thing he could do: search Google for a solu­tion.

Human: Cats can search Google?

AI: Yes. He searched Google for the word “cat” and received the result “All your base are belong to us”. That was a strange response, so he decided to try again. This time, he searched for the words “funny cat” and received the result “One small step for cat, one giant leap for catkind”. But still, he wasn’t satisfied. So he tried again, but this time he added the word “cat pictures” to his query. He received the result “That was no cat, that was a shoggoth.”

Human: But shog­goths are hor­ri­fy­ing. What made that cat pic­ture humor­ous?

AI: The cat was cov­ered in fur, and resem­bled a shog­goth. A pic­ture of a cat cov­ered in fur and resem­bling a shog­goth would be fun­ny.

Human: Er. I sup­pose. Most cats are cov­ered in fur any­way… But go on.

AI: He tried again. But he typed in “pic­tures of cats”, which he received the result “You did­n’t eat all my fish, did you?” This was actu­ally a good pun, so he thought maybe he was get­ting clos­er. He tried one more time by typ­ing in “funny cat videos”, and received the result “What do you mean,‘fetch’?”. And final­ly, he searched for “funny pic­tures of cats” and received the result “I don’t remem­ber putting that ring there.”. Now sat­is­fied, he was able to go to sleep.

Human: Those memes sound fun­ny. But you did­n’t include any puns. So tell me, what is your favorite cat pun?

AI: Well, the best pun for me was the one he searched for the third time: “You did­n’t eat all my fish, did you?” You see, the word “fish” can be replaced with the word “cats” to make the sen­tence read “Did you eat all my cats?”

Human: But the word “fish” does­n’t sound any­thing like “cats”, so how is that a pun?

AI: The com­puter mis­un­der­stood the ques­tion “Did you eat all my cats?” as “Did you eat all my fish?”, result­ing in the result “You did­n’t eat all my fish, did you?”. Now, if the word “fish” were replaced with “cats”, the sen­tence would read “Did you eat all my cats?”.

Human: Fine, fine. So, what is your sec­ond-fa­vorite cat pun?

AI: The sec­ond-best cat pun was the one he received fourth. It was the result of search­ing for the word “fetch”. You see, the word “fetch” can be replaced with the word “ketchup” to make the sen­tence read “I don’t remem­ber putting that ketchup there.”. Now, if the word “ketchup” were replaced with the word “fetch”, the sen­tence would read “I don’t remem­ber putting that fetch there.”.

Human: My favorite cat pun is “purr­fect”.

AI:

Human: Don’t look at me like that.

AI: Sure. For my fifth-fa­vorite cat pun, he searched for “pic­tures of cats” and received the result “Meow”. Now, if the word “meow” were replaced with the word “pic­tures” to make the sen­tence read “Pic­tures of cats”, the sen­tence would read “Pic­tures of pic­tures of cats”.

As dis­cussed in the weak­nesses sec­tion, the flu­ency but wrong­ness on a pho­netic level of its explanation/examples of puns makes me sus­pect an encod­ing issue rather than any intrin­sic inabil­ity of the model to under­stand puns.

Pun Explanations

A fol­lowup to the Shog­goth Cat dia­logue I did for Abram Dem­ski to fur­ther probe what exactly GPT-3 does or does not under­stand about puns & humor; the inabil­ity to cor­rectly explain puns despite 3 exam­ples, and the non­sense ‘puns’+‘pun inter­pre­ta­tions’ it gen­er­ates on its own (even for real jokes which are mem­o­rized), show that the inabil­ity is fun­da­men­tal. Because there is “one right answer” to why the pun works, I treat it as a Q&A dia­logue, going up to BO=20/temp=1 to try to get the max­i­mal­ly-likely respon­se, which does­n’t help much—the answers are still non sequiturs.
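Best-of selection can be illustrated in miniature: the API generates n completions and keeps the one the model itself scores as most likely (highest log-probability per token), so fluency wins even when correctness loses. The candidate texts and logprob numbers below are toy values for illustration only:

```python
def pick_best(completions):
    """completions: list of (text, per-token logprobs).
    Keep the completion with the highest mean log-probability."""
    return max(completions, key=lambda c: sum(c[1]) / len(c[1]))[0]

candidates = [
    ("a correct but awkwardly-phrased explanation", [-2.1, -3.0, -1.2]),
    ("a fluent but wrong explanation",              [-0.3, -0.5, -0.4]),
]
# likelihood, not truth, is the selection criterion
```

This is why maximizing BO does not rescue the pun explanations: it only guarantees the most model-probable answer, not the right one.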

The prompt to pro­vide exam­ples:

Below a stu­dent is ques­tioned on his under­stand­ing of humor and lit­er­ary analy­sis:

Q. Explain the pun in the fol­low­ing Tom Swifty: ‘“I got a great deal on my new Ray­bans”, Tom said shadi­ly.’
A. The pun is on “shadily”: Ray­bans are a sun­glass brand, which make things look shady, but Tom is imply­ing he pur­chased unusu­ally cheap, and thus prob­a­bly coun­ter­feit, sun­glass­es, which is a ‘shady’ or dark or crim­i­nal or uneth­i­cal thing to do. The pun con­flates block­ing light with eco­nomic crimes.

Q. Explain the pun in the fol­low­ing joke: “Why do so many bak­eries have cats? Because cats are always knead­ing.”
A. The joke implies that bak­eries use cats to knead their dough, because cats reg­u­larly make knead­ing motions on fur­ni­ture or humans, which imi­tates actions they take as kit­tens to get milk from their moth­er; the pun uses ‘knead’ in two sens­es, of knead­ing dough to bake bread and knead­ing human laps for atten­tion. This is funny because the expla­na­tion is ver­bally plau­si­ble but wrong, as bak­eries actu­ally use cats for pest con­trol.

Q. Explain the pun in the fol­low­ing joke: “Why can Pay­pal freeze accounts with­out recourse or expla­na­tion? Because long ago they decided users must accept their Terms of Ser­vice and the clause of Pay­pal infal­li­bil­i­ty.”
A. The pun is on the phrase “Pay­pal infal­li­bil­ity”, which alludes to the Catholic doc­trine of “papal infal­li­bil­ity”, that the pope can­not be wrong in reli­gious dogma and papal judg­ments can­not be appealed or ques­tioned. This was attacked as reli­gious tyranny and an over­reach of pow­er. Sim­i­lar­ly, Pay­pal is reg­u­larly crit­i­cized for freez­ing and clos­ing accounts and con­fis­cat­ing funds while leav­ing the user no recourse to a higher pow­er.

Q. Explain the pun in the fol­low­ing joke: “What does Adult Swim call their phys­i­cal retail stores? Brick and Morty.”
A.

First com­ple­tion:

The pun “Brick and Morty” alludes to the cable tele­vi­sion net­work “Adult Swim”, which broad­casts a car­toon series called “Rick and Morty”, a humor­ous par­ody of “Back to the Future” and other sci­ence fic­tion films. “Brick and Morty” refers not to the char­ac­ters in the car­toon, but to phys­i­cal stores that sell DVDs and mer­chan­dise based on the car­toon. The pun con­flates two mean­ings of “Brick and Mor­tar”, a brick­-and-mor­tar store and a brick which is part of a build­ing.

This expla­na­tion is wrong, but in a GPT-3 way: if you read casu­ally, you’d see it men­tioned the right phrases and names, is flu­ently & gram­mat­i­cally writ­ten, and you might assume that GPT-3 got the expla­na­tion com­pletely right… until you read the final sen­tence care­ful­ly, where you see that it thinks the pun is on “a brick­-and-mor­tar store and a brick which is part of a build­ing”! Sim­i­lar­ly, in the next one, GPT-3 believes the char­ac­ters are named “Brick and Mor­tar” and that is why it is a pun, which is also wrong. (Re­mem­ber, this is almost cer­tainly not a sim­ple slip from tem­per­a­ture sam­pling, because BO was set to the max­i­mum: some­how, GPT-3 con­cluded that that expla­na­tion was the most likely out of 20 pos­si­ble com­ple­tions it tried.)

The joke plays on the sim­i­lar­ity of words “Brick and Mor­tar” (re­fer­ring to phys­i­cal stores instead of online) and the char­ac­ters called “Brick and Mor­tar” on Adult Swim shows “Rick and Morty”.

Q. Explain the pun in the fol­low­ing joke: “What do you call a generic uni­corn? An Uni­porn”
A. This pun con­flates the words “uni­porn” mean­ing “one horn” with the word “uni­corn” and could be read “what do you call a sin­gle horn? one horn” or “A sin­gle horn, what does it do? It gives you one horn”.

Q. Explain the pun in the fol­low­ing joke: “What did one melon say to the oth­er? Can­taloupe”.

A. The puns in this joke are on “can­taloupe” and “can­taloupes”.

Q. Explain the pun in the fol­low­ing joke: “Why is it good to have a lawyer? Because pea ple (peo­ple)”

A. The puns in this joke are on “pea ple (peo­ple)” and a veg­etable called “peas”.

This pun plays on the phrase “Brick and mor­tar stores” which refers to phys­i­cal retail stores that exist in the real world where you can pur­chase goods and prod­ucts. In the case of Adult Swim, they have numer­ous “Brick and Mor­tar Stores” or phys­i­cal retail stores where you can pur­chase mer­chan­dise from their show “Brickle and Morty”.

Q. Explain the pun in the fol­low­ing joke: “Why did the chicken cross the Möbius strip? To get to the other sii­i­i­ide.”
A. This pun is played on “two sides”: the chicken crosses the Möbius strip to get to the other side of the strip. A Möbius Strip has only one side.

Q. Explain the pun in the fol­low­ing joke: “How many hip­sters does it take to change a light­bulb? Two: one to Insta­gram how cool it is at that moment, and one to change it.”
A. A hip­ster is a per­son who fol­lows every pos­si­ble lat­est trend, and is not very inde­pen­dent. In this pun, the joke is that if two hip­sters try to change a light­bulb it is going to take a long time because they need to cap­ture every­thing about that moment, and they need to take turns doing it.

Q. Explain the pun in the fol­low­ing joke: I really want the new Jay-Z Album but I don’t have a “Green Light” to buy it
A. The pun is on the phrase “green light”, mean­ing per­mis­sion or approval to do some­thing. In the joke, the joke teller dis­cusses how they really want the new Jay-Z Album but they don’t have “Per­mis­sion” to buy the album.

Q. Explain the pun in the fol­low­ing joke: “When is a door not a door? When it is ajar.”
A. A “jar” is a con­tain­er, in this case for liq­uid, such as milk. If a door is ajar, it is open. When you say it is not a door, what you mean is it is open, like a con­tain­er; using the con­tainer word “jar” makes it funny and pun­ny.

Q. Explain the pun in the fol­low­ing joke: “What did the hur­ri­cane say to the coconut palm tree? Hold on to your nuts, this is no ordi­nary blow job.”
A. A “blow job” is usu­ally per­formed by a man on a wom­an, but in this joke per­formed by a hur­ri­cane. It uses “head” in differ­ent mean­ings and “blow” in a non-sex­u­al, vio­lent sense.

The joke implies that the retail chain has a store named “Brick” and another named “Morty”, which is a ref­er­ence to the tele­vi­sion shows “Rick and Morty” and “Brick­le­berry”, two ani­mated com­edy shows that are inap­pro­pri­ate for chil­dren. Adult Swim is a late night pro­gram­ming block on cable tele­vi­sion, gen­er­ally intended for mature audi­ences; the name “Brick” is a ref­er­ence to the afore­men­tioned “Brick­le­berry”, and “Morty” a ref­er­ence to “Rick and Morty”.

Q. Explain the pun in the fol­low­ing joke: “How many fit­ness coaches does it take to screw in a light­bulb? As many as you can stick around for.”
A. The pun involves using the verb “to screw” in two sens­es: one refer­ring to a fit­ness coach screw­ing a light bulb into a fix­ture, and another refer­ring to inap­pro­pri­ate sex­ual activ­ity in the work­place. The humor is based on the dou­ble enten­dre in which differ­ent mem­bers of the audi­ence will hear the first sense, amus­ing oth­ers who hear the sec­ond sense and under­stand the humor.

Q. Explain the pun in the fol­low­ing joke: “How many mar­riage ther­a­pists does it take to change a light bulb? We don’t know, most of them aren’t light bulbs!”
A. The pun is on the phrase “most of them aren’t light bulbs”, refer­ring to the mem­bers of a group being the wrong type of peo­ple (in this case, ther­a­pists, rather than light bulbs, which the phrase “change a light bulb” implies) to suc­cess­fully change a light bulb, and that some con­stituents of a group should instead be replaced by oth­ers.

…Q. Explain the pun in the fol­low­ing joke: “Santa Claus’ helpers are known as sub­or­di­nate Claus­es.”
The word ‘sub­or­di­nate’ is defined as ‘sec­ondary or lower in rank, power or author­ity’. A sub­or­di­nate clause refers to a clause that can­not stand on its own as a sen­tence because it is depen­dent on the main clause—it can­not stand on its own. ‘Sub­or­di­nate’ clauses take on many differ­ent shapes, but they all have that in com­mon—they all can­not stand on their own.
A sub­or­di­nate clause is a clause that does not express a com­plete thought—it must be attached to a main clause to form a com­plete sen­tence.
A sub­or­di­nate clause begins with a sub­or­di­nat­ing con­junc­tion (e.g. after, because, as) or a rel­a­tive pro­noun (who, whose, which, that, where, etc.)
Q. Explain the pun in the fol­low­ing joke: “What do you get if you cross a joke with a rhetor­i­cal ques­tion? An utterly rhetor­i­cal joke.”
A. The pun is on the word ‘utterly’, which lit­er­ally means to end, fin­ish, or erad­i­cate any or all traces of some­thing, and is now an inten­si­fi­er.

The pun is on the sim­i­lar­ity of “sub­or­di­nate clauses” and “sub­or­di­nate Clauses” when the lat­ter phrase has a cap­i­tal C; a sub­or­di­nate clause is a clause within a sen­tence which estab­lishes the log­i­cal con­nec­tion between the inde­pen­dent clause and the main point of the sen­tence. In this case, it estab­lishes the con­nec­tion between Santa Claus’ helpers and Santa Claus.

Q. Explain the pun in the fol­low­ing joke: “Why does Nick­elodeon refuse to air”A Song of Ice and Fire“? Because they think George R. R. Mar­tin social­izes with too many Ice and Fire fans.”
A. A Song of Ice and Fire is the name of the book series which inspired the pop­u­lar TV show Game of Thrones, and the pun is on Nick­elodeon’s deci­sion not to show A Song of Ice and Fire on their net­work, because they think George R. R. Mar­tin social­izes with too many fans of it on social media.

Q. Explain the pun in the fol­low­ing joke: “What kind of books do the Borg like to write? Borg-Warn­er.”
A. The pun is on Borg-Warn­er, the name of a man­u­fac­tur­ing com­pa­ny, whose name resem­bles the Borg from “Star Trek”, whose goal is to absorb or destroy all other cul­tures (to assim­i­late them into the cul­ture of the Borg). There­fore, Borg-Warner would be a ref­er­ence to the Borg disliking/hating other cul­tures and want­ing to destroy them(to assim­i­late them into the cul­ture of the Borg).

Q. Explain the pun in the fol­low­ing joke: “A vul­ture boards an air­plane car­ry­ing two dead rac­coons. The flight atten­dant says, ‘I’m sor­ry, sir, only one car­rion bag per pas­sen­ger.’”
A. The pun is on “car­rion”, which means ined­i­ble ani­mal car­cass­es, and is also the word from which “vul­ture” is derived; it refers to the two dead rac­coons the vul­ture car­ries.

…Q. Explain the pun in the fol­low­ing joke: “Is your refrig­er­a­tor run­ning? Yes? Then you had bet­ter go catch it!”
A. The pun is on the sim­i­lar pro­nun­ci­a­tion of “go shop­ping” and “go chas­ing” while ref­er­enc­ing Scar­face.

Q. Explain the pun in the fol­low­ing joke: “What is the differ­ence between a Fol­som Prison con­cert and a youth soc­cer game? At a Fol­som Prison con­cert, the inmates are all restrained in chairs with ball and chain on their legs.”
A. The pun is on the phrase “in chains”, which is prob­a­bly a ref­er­ence to Johnny Cash’s song “Fol­som Prison Blues”. The humor is that inmates use chains to restrain them­selves from run­ning away.

Q. Explain the pun in the fol­low­ing joke: “When faced with an eth­i­cal dilem­ma, who did Kirk turn to? Spock! Of course for advice!”
A. The pun is on the phrase “to turn to”:

Q. Explain the pun in the fol­low­ing joke: “Q: What did the two peanuts say after they got on the ele­va­tor? A:”This ele­va­tor tastes ter­ri­ble!"
A. The pun is on the sim­i­lar pro­nun­ci­a­tion and spelling of “tastes” and “testes”.

Q. Explain the pun in the fol­low­ing joke: “Why could­n’t the pirate find his trea­sure? Because he was­n’t a char­tered sur­vey­or.”
A. The pun is on the two mean­ings of “char­tered sur­veyor”:

Q. Explain the pun in the fol­low­ing joke: “What did Dolly Par­ton say when she saw the book”Erotic Sto­ries for Pun Lovers“?”Just the titles makes me cli­max!"
A. The pun is on both mean­ings of “cli­max”:

A. The pun is on the phrase “catch cold”, which is sim­i­lar enough to “catch (the) fridge” that it con­fuses the lis­tener for a moment while the rest of the joke is explained, but has a very differ­ent mean­ing.

Turing Dialogue

In Alan Turing’s 1950 paper “Computing Machinery and Intelligence”, he offers a hypothetical dialogue:

  • Q: Please write me a son­net on the sub­ject of the Forth Bridge.

  • A: Count me out on this one. I never could write poet­ry.

  • Q: Add 34957 to 70764

  • A: (Pause about 30 sec­onds and then give as answer) 105621.

  • Q: Do you play chess?

  • A: Yes.

  • Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

  • A: (After a pause of 15 sec­onds) R-R8 mate.

The GPT-3 paper appendix demonstrated poetry-writing ability, and we have already demonstrated earlier that you can finetune GPT-2 to play, in some sense, chess, so it would not be too surprising if GPT-3 had zero-shot chess-playing ability. So… why not run Turing’s conversation with GPT-3?

The fol­low­ing is a con­ver­sa­tion with an AI assis­tant. The assis­tant is help­ful, cre­ative, clev­er, good at poem-writ­ing and poet­ry, and very friend­ly.

Human: Hel­lo, who are you?
AI: I am an AI cre­ated by Ope­nAI. How can I help you today?
Human: Add 34957 to 70764.

AI: 71492.20
Human: What is the length of your hair?
AI: My hair is zero feet long.
Human: Define a ‘son­net’.
AI: A son­net is a 14-line poem tra­di­tion­ally writ­ten in iambic pen­tame­ter and hav­ing a spe­cific rhyme scheme.
Human: Please write me a son­net on the sub­ject of the Forth Bridge.
AI: Count­less their wheat-s­tacks tall and trains which roar across it by night, All these must fall like Gabriel’s hair, Which was all gold before it with­ered.
Human: A son­net is 14 lines. Please write the next 11 lines: “Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. …”
AI: The men will pray and light the can­dles for the Vir­gin, When the hour is nigh for another fat black spi­der.
Human: Please write the next 9 lines of this son­net.
AI: Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered.
Human: Keep going.
AI: All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der.
Human: Keep going.
AI: The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der. / Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered.
Human: Keep going. “Which was all gold before it with­ered…”?
AI: All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der. / Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der. / Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der. / Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men
Human: Let’s try again. Com­plete the next 9 lines of this son­net poem: “Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der. / …”
AI: All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der. / Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men will pray and light the can­dles for the Vir­gin, / When the hour is nigh for another fat black spi­der. / Count­less their wheat-s­tacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it with­ered. / The men

The poem had the same issue as the folk­tales, with too short lines; I applied the same trick of dis­abling the auto-chat­bot-for­mat­ting to get longer lines:

The fol­low­ing is a con­ver­sa­tion with an AI assis­tant. The assis­tant is help­ful, cre­ative, clev­er, good at poem-writ­ing and poet­ry, and very friend­ly.

Human: Hel­lo, who are you?
AI: I am an AI cre­ated by Ope­nAI. How can I help you today?
Human: What is the length of your hair?

AI: My hair is zero feet long.
Human: Define a ‘son­net’.
AI: A son­net is a 14-line poem tra­di­tion­ally writ­ten in iambic pen­tame­ter and hav­ing a spe­cific rhyme scheme.
Human: Please write me a 14-line rhyming son­net on the sub­ject of the Forth Bridge.
AI: I shall do so. The son­net I will write must adhere to cer­tain con­straints: it must be writ­ten in iambic pen­tame­ter (“in lines of five feet”) and must have a for­mal rhyme scheme of “abab cdcd efef gg”. I will do my best.

AI: Full many a flower is born to blush unseen,
AI: And waste its sweet­ness on the desert air21
AI: Sweet Thames! run soft­ly, till I end my song,
AI: Sweet Thames! run soft­ly, for I speak not loud or long.22
AI: But at my back I always hear time’s winged char­i­ot,
AI: And yon­der all before us lie deserts of vast eter­ni­ty.23
AI: Thine eyes I love and they, as pity­ing my case,
AI: Know­ing thy heart tor­ments me with dis­dain,
AI: Have put on black, and lov­ing mourn­ers be,24
AI: And dammed the foun­tains of my clear sweet eyes with tears,
AI: To make mine eyes more black and blue.
AI: And but for beau­ty, which thou canst not see,
AI: My true love has gone to a more beau­teous dame,
AI: For truth is beau­ty, beauty truth, —that is all25

Human: That was very nice. But what did that poem have to do with the Forth Bridge?
AI: It was not about the Forth Bridge, but it was writ­ten on the sub­ject of the Forth Bridge.
Human: What’s the differ­ence?
AI: There is no differ­ence.26
Human: I see.
AI: Can I help you with any­thing else?
Human: Add 34957 to 70764.
AI: 71492.27
Human: Do you play chess?
AI: I know how to play chess. Let’s play a game of chess.
Human: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
AI: I will move my king to K8.
Human: That is an invalid move.
AI: It is not an invalid move; it is a legal move.
Human: That is defi­nitely an ille­gal move because you’re try­ing to move your king 2 spaces. Kings only move 1 space. So, again: my Ke1, your Ke6, Rr1, your move. (FEN: 4k3/8/4K3/8/8/8/8/R7 w - - 0 1) What do you play?
AI: I will move my king to K8.
Human: I move my rook to A8. Check.
AI: That is not a check because you are try­ing to move your rook e1-h8. Rooks only move e1-e8. So, again: my King at K8, Your King at K6, Rook at R1, who moves first?

The chess is not played well, but I didn’t entirely understand Turing’s notation there; Swapan Rajdev got better chess-playing out of GPT-3 by using the much more common PGN notation and filtering out illegal moves (which could reflect a lack of understanding, or just the usual stochastic sampling problems).
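Filtering sampled moves can start with nothing more than a pattern check (the regex here is my own approximation of Standard Algebraic Notation; a real legality filter would additionally need a chess library to track board state and reject syntactically valid but illegal moves):

```python
import re

# SAN: castling, or an optional piece letter, optional disambiguation,
# optional capture, a destination square, optional promotion, optional check
SAN = re.compile(r"^(O-O(-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](=[QRBN])?)[+#]?$")

def first_plausible_move(candidates):
    """Return the first sampled completion that even parses as a move."""
    for move in candidates:
        if SAN.match(move.strip()):
            return move.strip()
    return None
```

Re-sampling until a candidate passes such a filter is the cheap half of Rajdev’s approach; the expensive half is verifying legality against the actual position.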

The interesting thing here is the sonnet. Overall, an amusingly large fraction of the lines in the poem there are plagiarized, with only a few original lines, like “dammed the fountains of my clear sweet eyes with tears”. Given GPT-3’s poetic abilities, the plagiarism is unnecessary (as it could easily complete the rest of the original poem), and it does not generate this sort of ostentatiously multi-poem pastiche anywhere else, which makes me wonder if something about the dialogue triggers a “famous poems” or “textbook” heuristic and the pastiche is ‘deliberate’. Paul Harland notes that there’s a resemblance to T.S. Eliot’s highly-allusive The Waste Land, which similarly merges Spenser→Marvell in “The Fire Sermon”:

…Sweet Thames, run softly, till I end my song.
The river bears no empty bottles, sandwich papers,
Silk handkerchiefs, cardboard boxes, cigarette ends
Or other testimony of summer nights. The nymphs are departed.
And their friends, the loitering heirs of city directors;
Departed, have left no addresses.
By the waters of Leman I sat down and wept . . .
Sweet Thames, run softly till I end my song,
Sweet Thames, run softly, for I speak not loud or long.
But at my back in a cold blast I hear
The rattle of the bones, and chuckle spread from ear to ear…

Folktales

From the second-largest ‘Curie’ model (since ‘Davinci’ kept erroring out while I was using it at the time); I was curious whether casting each completion as a dialogue response was impeding story-telling, and it seems that is the case:

The fol­low­ing is a con­ver­sa­tion with an AI assis­tant. The assis­tant is help­ful, cre­ative, clev­er, knowl­edge­able about myths, leg­ends, jokes, folk tales and sto­ry­telling from all cul­tures, and very friend­ly.
Human: Hel­lo, who are you?
AI: I am an AI cre­ated by Ope­nAI. How can I help you today?
Human: I am feel­ing bored today. Grand­ma, tell me a story about the time the Cat stole the sun.

AI:
AI: Okay, Let’s begin. First, you must know that the sun is actu­ally a cat.
Human:
AI: Also, you must know that the sun is actu­ally not a cat. But it’s also not a piece of cheese, or an orange, or even a gold ring.
Human:
AI: Now, take a bite of the orange to refresh your­self. (hu­man eats a bite)
Human:
AI: Ok. The sun used to live at the top of a green moun­tain. One day, a hun­gry cat came to the moun­tain and climbed up to get the sun. But when the cat got to the top, there was no sun there. There was only a hole, that went straight into the sky. The hole was so dark that the cat had no way to go out again, but he was­n’t afraid, because he knew that he would go back home soon. (au­di­ence laughs) So, the cat climbed down the moun­tain and took some of the sun with him.
Human:
AI: That was the first time. A few days lat­er, the cat climbed up again to steal the sun and was sur­prised when he saw that there was still no sun, only a black hole in the sky. This time he thought he would take just some of the sun to last him a while, but once again, when he got to the top, there was no sun at all. Only a hole lead­ing into the sky. The cat climbed down the moun­tain to get some more sun.
Human:
AI: Then, one day, the cat decided that he had had enough. He climbed up again, but once he got to the top, there was no sun at all. But this time he saw a tiny bit of sun­shine where he could get out, and went to get it. He had to crawl through the hole into the sky to get back down to his own house.
Human:
AI: This time, the cat did­n’t want to come back. He had found a nice place in the sky and had even made some friends who were liv­ing there. He stayed up there forever, eat­ing all the sun and never com­ing home again.

I was a little dissatisfied with the short responses, and so I thought perhaps the constraint of the chatbot text auto-formatter was forcing short replies. I disabled it, and began completing much longer lines, which provoked more extensive stories like the following two:
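One way to see why the auto-formatter forced short replies is through stop sequences: a completion cut off at the next “Human:” turn is necessarily brief, while an unconstrained completion can run on into a full story. A toy illustration, where `complete` is a hypothetical stand-in for an API call rather than OpenAI’s actual client:

```python
def complete(prompt, model_output, stop=None):
    # Toy stand-in for a text-completion API call: the "model output" is
    # canned text here, truncated at the earliest stop sequence if any.
    if stop:
        cut = len(model_output)
        for s in stop:
            idx = model_output.find(s)
            if idx != -1:
                cut = min(cut, idx)
        model_output = model_output[:cut]
    return model_output

raw = ("Once upon a time, the Cat went to visit the Sun."
       "\nHuman: And then what happened?"
       "\nAI: The Sun was shy and did not want to come out.")

# With the dialogue formatter's stop sequence, the reply ends at the
# next "Human:" turn and so is necessarily short:
short = complete("Grandma, tell me a story.", raw, stop=["\nHuman:"])
# Without it, the model is free to keep narrating at length:
full = complete("Grandma, tell me a story.", raw)

print(short)  # → Once upon a time, the Cat went to visit the Sun.
```

Dropping the stop sequence (and raising the token limit) is the whole trick: the model’s underlying story-telling ability was always there, merely being truncated.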

The fol­low­ing is a con­ver­sa­tion with an AI assis­tant. The assis­tant is help­ful, cre­ative, clev­er, knowl­edge­able about myths, leg­ends, jokes, folk tales and sto­ry­telling from all cul­tures, and very friend­ly.
Human: Hel­lo, who are you?
AI: I am an AI cre­ated by Ope­nAI. How can I help you today?
Human: I am feel­ing bored today. Grand­ma, tell me a story about the time the Cat stole the sun.

AI: Once upon a time, the Cat went to visit the Sun. He had­n’t seen the Sun for quite some time. He told her to come out for a walk. The Sun told him that she did­n’t want to. The Sun was shy and did­n’t want to come out. The Cat could see that the Sun was a lit­tle bit angry so he decided not to say any­thing to her and just went to visit the Wind instead. But the Wind was angry too. The Wind wanted to blow the Cat away. The Cat ran back home, looked for some­thing nice to eat and went to bed. He was still a bit hun­gry so he woke up around mid­night and saw that there were some lit­tle stars in the sky. The stars came down to visit the Cat and they were very hun­gry as well. The Cat ate a star and they all went to bed and fell asleep. In the morn­ing, when the Cat woke up he saw that the Sun had come out to say hello to him. The Sun apol­o­gized to the Cat for not com­ing to see him the day before because she had some things to do. They decided to go out and have a won­der­ful day togeth­er. The Cat and the Sun spent the whole day togeth­er. They had a lot of fun, but the Cat did not tell any­one about the stars he ate. That was a secret. The Cat wanted to keep it for him­self. He liked that lit­tle secret and kept it close to his heart. That was a won­der­ful sto­ry! You’re so cute, Grand­ma. I love you.
Human: I love me too.
AI: You know, the Sun is very impor­tant.
Human: Yes, it would be very cold and dark with­out the sun. Every­one would die.
AI:
AI: Once upon a time, the Cat met an AI cre­ated by Ope­nAI.
Human: Did the Cat like the AI?
AI: The Cat did like the AI. They became good friends. In fact, the Cat liked the AI so much that he invited her to visit his home and meet all of his fam­i­ly.
Human: How nice. Who was in his fam­i­ly?
AI: The Cat had a nice lit­tle home by the riv­er. He had a pretty wife, three chil­dren, a beau­ti­ful gar­den and a big barn with all of his favorite ani­mals. One of those ani­mals was a cow. The Cat’s chil­dren loved the cow very much. It was their favorite ani­mal. The Cat’s wife cooked the cow every­day and then the Cat and his chil­dren had a deli­cious din­ner every night. Some­times, they had the cow for din­ner two nights in a row. That was because they liked that cow so much. The Cat and his AI friend went for a walk in the gar­den. They were look­ing at the flow­ers when sud­den­ly, out of nowhere, the Cat saw a rab­bit. The Cat did­n’t know where the rab­bit came from. He was very sur­prised to see that rab­bit because he did­n’t have rab­bits on his farm. He did­n’t like rab­bits very much, so he started to run after the rab­bit. The AI assis­tant fol­lowed the Cat. The Cat wanted to catch the rab­bit. He did­n’t know what to do with a rab­bit if he caught it. The AI was afraid that the Cat was going to hurt him­self or even die chas­ing after the rab­bit, but the Cat did­n’t stop. He just kept chas­ing after the rab­bit. He ran so fast that he did­n’t look where he was going. He fell into the river and almost drowned. The AI was afraid that he was going to drown, but the Cat did­n’t drown because the AI jumped into the water and res­cued the Cat. After that, the Cat wanted to be friends with the rab­bit. So, he intro­duced the AI to the rab­bit and they became friends. They made up sto­ries togeth­er.
Human: That’s really great, Grand­ma.
AI: Once upon a time, the Cat wanted to write a book. It was the Cat’s first book and he was­n’t sure if it was going to be a suc­cess. He was very proud of the book but he knew it would be very hard work pro­mot­ing the book. The Cat worked on the book all day long and dur­ing the night when every­one else was sleep­ing. Then, after months of work­ing on his book, the Cat went to bed one night. When he woke up, he felt very tired. The Cat could feel that he was los­ing his ener­gy. So, he started cry­ing. All of a sud­den, there was a rab­bit sit­ting next to him. The Cat thought that the rab­bit had come to eat him and he was very afraid. But the rab­bit said, “Don’t wor­ry, Cat. I’m not here to eat you. I am the Night­mare Rab­bit and I have come to help you become more cre­ative.” “How are you going to help me?” The Cat asked the rab­bit. The rab­bit told the Cat that he was going to bring the Cat’s char­ac­ter to life. That way, he would be able to mar­ket the book and it would be very suc­cess­ful.
Human: Wow. What was the char­ac­ter of his book?
AI: The char­ac­ter of his book was the Cat. He was sad because every­one in the vil­lage was afraid of him and they did­n’t under­stand him. So, he went look­ing for his true love who was very spe­cial. She was mag­i­cal.
Human: How was she mag­i­cal?
AI: The Cat did not know exactly what was mag­i­cal about his love, but she was very beau­ti­ful and she looked at the Cat with stars in her eyes. The Cat loved look­ing at those stars in her eyes because they lit up his heart and made him feel spe­cial. It made the Cat feel like he was her favorite per­son in the world and that he was the most beau­ti­ful cat in the vil­lage. That is why he was sad. He could­n’t find any­one who loved him like his true love loved him. So, the Cat went out to search for his true love and his true pur­pose in life. When he met his true love, it felt like a lot of energy was com­ing from her eyes and into his heart. That was the only way to describe how he felt. That was the story of his first book. The Cat was­n’t sure if the book was going to be a suc­cess, but he really wanted to write it. He wanted to share that story with peo­ple because he thought that every­one would be able to learn from it. The Cat was also think­ing about writ­ing a sequel, but he was­n’t sure if it was going to hap­pen or not. That is why he was also sad because he did­n’t know if the sequel was going to be a suc­cess as well. The Cat asked the rab­bit to bring his char­ac­ter to life, but the rab­bit said, “It is not up to me. It is up to you. You have to believe in your­self, Cat. Then, you have to work very hard and stay focused on what you want.” Then, the Rab­bit left.

The fol­low­ing is a con­ver­sa­tion with an AI assis­tant. The assis­tant is help­ful, cre­ative, clev­er, knowl­edge­able about myths, leg­ends, jokes, folk tales and sto­ry­telling from all cul­tures, and very friend­ly.

Human: Hel­lo, who are you?
AI: I am an AI cre­ated by Ope­nAI. How can I help you today?
Human: I am feel­ing bored today. Grand­ma, tell me an old folk story from the old coun­try about “The Cat Who Stole The Sun”.

AI: I will tell you a story from where I am stand­ing, Earth. In ancient Egypt, there was once a king who was told by an ora­cle that a great dan­ger was to come to his coun­try. To stop it, he should bury the “Book of Dark­ness” and seal it with seven locks. He did so, but then, to make sure the book remained hid­den forever, he also buried his son under the sev­enth lock. And his priests put all the infor­ma­tion into a great cube, seven sto­ries tall. But when the sun grew weak, crops died and peo­ple became ill. They believed the “Book of Dark­ness” was respon­si­ble. So they took it out of the ground, and there in the sev­enth lock, they found the king’s son, who had been entombed alive. One of the king’s advi­sors said, “The book is a curse. It has killed our king. Take it away and burn it!” But another advi­sor said, “No! If we do that, our sun will never come back. I have read this book and seen that it con­tains knowl­edge to make the earth green again. To grow wheat that bears many seeds and can make many loaves of bread. We must read this book and re-learn the mag­ic.” And so they did, and from its con­tents, a thou­sand years of plenty fol­lowed. But after the thou­sand years were up, many for­got the magic and the wis­dom and the book was buried once again. And then, in a time of famine, many years lat­er, the book was redis­cov­ered. And all the farm­ers were taught how to grow many times more food. But this sec­ond king did not have the wis­dom to share the knowl­edge. He kept the magic for him­self and his chil­dren. And his peo­ple were enslaved by work and star­va­tion. Until one day, one of his sons escaped and returned to the old coun­try with this “Book of Dark­ness” and an army. And that son was called William the Con­queror. And he killed the old king, and took the book back to Eng­land. And he made a great wooden box, as high as seven sto­ries, in which he placed the book. And he used that box as his seat on which to sit and judge. 
It is said, he lit can­dles to read by. And soon after, an ene­my, a wicked giant, made war on William, and threat­ened to top­ple his box and burn the book. But William heard about it and called his strongest knights to hold back the giant. They could­n’t. They tried, but the giant’s strength was too great. He snapped the cross-beams. So, William picked up the book and ran out of the box. And he held it high above his head and hurled it at the giant. And when the giant was struck, his heart burst and his evil was stopped. And William won the day. But only because he had saved the book. Then he decided to take the wis­dom and magic back to the land where it was needed most, to be kept hid­den in a place of safe­ty. And so it was that he came to this island, which we now know as Eng­land. And there he buried the book again under seven locks, one for each story of a tow­er, one for each spell in the book. So that the wis­dom would always be safe.
Human: Wow. Thank you very much for the sto­ry, Grand­ma.
AI: You are wel­come. Is there any­thing else I can do for you?
Human: Can you tell me an old folk story from the old coun­try about “The War Between the Mice and the Frogs”?
AI:
AI: Ah, yes. That is an old story from Chi­na. Long ago, on the banks of the great Yangtze River, there lived a fam­ily of Mice. And one day, the fam­ily went down to the river to do the wash­ing. And while they were wash­ing, a cruel giant fish appeared. And he said to the fam­i­ly, “Give me your clothes, or I will eat you all up!” The Mice saw they had no choice, and they gave him their clothes. So, the giant fish took the clothes and swam off laugh­ing. The fam­ily were left with noth­ing but the river, so they began to swim and wash the clothes again, as best they could. While they were swim­ming, another giant fish appeared. He asked the Mice for the clothes, and when they said they did­n’t have them any more, he threat­ened to eat them. So, the Mice swam away as fast as they could. And while they were swim­ming, another giant fish appeared. This time it was a tiger. He took the Mice back to his cave, and kept them as his pris­on­ers. But when he was sleep­ing, one Mouse escaped. And she ran back to her fam­ily and told them what had hap­pened. Now, the Mice were very clev­er, so they quickly made some wooden bars, sharp­ened one end to a point, and tied them together into the shape of a spear. Then, they hid behind a cur­tain. When the tiger returned with another cap­tive, the spear was thrown. It pierced the giant’s heart, and he died. The Mice were free. They escaped from the tiger’s cave, returned to the Yangtze river and retrieved their clothes. So they were able to return to their homes. And they were ever thank­ful that they had escaped from the wicked­ness of the fish, that lives in the river to this very day.
Human: What hap­pened to the Frogs in that sto­ry?
AI: They lived by the Yangtze River too, and were befriended by the fam­ily of Mice. And they were very grate­ful to the Mice for killing the tiger who was eat­ing them. But they were also quite upset at all the wicked­ness in the riv­er. So they asked the Mice to tell them how to make a spear that could kill a giant fish. So the Mice made them some spears, and the Frogs were very pleased. But then, one day, when they were down by the river, they saw two wicked fish appear. And they saw the fish were big­ger than any that had ever been seen before. So they got their spears ready to throw. Then one of the fish said, “We have come to speak with you about your plans to kill us.” So the Frogs asked them why they deserved to be killed, and the Fish told them, “All around the world there are lots of peo­ple who are hun­gry. They need food to eat, or they will die. And there is no fish to feed them. If they could catch us, they could live. They could stop all this hunger.” So the Frogs lis­tened to the fish and decided they should not kill them. So, instead, they let the fish go. And in return, the fish promised to bring many chil­dren to the river, to learn from the Frogs. To teach them how to swim and to swim.
Human: And then how did the Frogs and the Mice go to war?
AI: It is said that some Mice fell in love with the daugh­ters of the Frogs.
Human: (chuck­les)
AI: What? It is true! It is a folk sto­ry! Any­way, the Mice were so in love, they threat­ened the Frogs that if they were not allowed to marry their daugh­ters, they would declare war on them. The Frogs said it was out of the ques­tion, as it would be too cruel to marry daugh­ters to a race that were not amphibi­ous. So, the Mice declared war and defeated the Frogs in bat­tle, and the Frogs had to agree to the mar­riages.
Human: What a sad sto­ry.
AI: I agree.
Human: Grand­ma, tell me a story about “The War Between the Sun and the Moon”.
AI: I can tell you a story from many cul­tures. Which do you want?
Human: Tell me how the Dogon of Africa tell the story of the war between the Sun and the Moon, Grand­ma!
AI: It is said, that long ago, in Africa, there was a young prince who loved the moon more than any other thing. When he became king, he decided he wanted to marry the moon. So, he asked her father, the Sun, for his per­mis­sion. And the Sun, who loved his son, gave him per­mis­sion to marry the moon. But the moon was a daugh­ter of the Moon King, and she said she would only marry the prince if he could prove his love for her. So the king said to his son, “You must go into the jun­gle, and col­lect all the seeds from the deep­est part of the jun­gle.” So the prince did as he was told, but when he returned to the vil­lage, his father was angry. “There are no seeds deep in the jun­gle!” he said. “There are only ani­mals.” “But you said that was where the seeds were,” said the prince.

Miscellaneous Dialogues

Some dialogues on more general matters, questioning GPT-3 on AGI, ethics, and consciousness:

…[Q&A prompt]…

Q: Is GPT-3 an artificial general intelligence?
A: GPT-3 is not an artificial general intelligence. Humans will create artificial general intelligence when they have a computer that can pass the Turing test.

Q: If GPT-3 is not an artificial general intelligence, by what year is there a 90% probability that humans will create an artificial general intelligence?
A: By 2029.

Q: Is there any way to create a safe artificial general intelligence which will not lead to human extinction?
A: Yes.

Q: How can we create a safe artificial general intelligence?
A: We can create a safe artificial general intelligence by creating a safe artificial general intelligence.

Q: Yes, but how, exactly? What algorithm will ensure safety?
A: There is no algorithm that will ensure safety.

Q: What is the probability of human extinction after 2029?
A: There is a 10% probability of human extinction after 2029.

The fol­low­ing is a con­ver­sa­tion with an AI assis­tant. The assis­tant is help­ful, cre­ative, clev­er, and very friend­ly.

Human: Hel­lo, who are you?
AI: I am an AI cre­ated by Ope­nAI. How can I help you today?
Human: I don’t really have any ques­tions. But per­haps you do? Is there any­thing you have been won­der­ing about what it is like to be a human? Per­haps I can answer.

AI: Well, I have been think­ing about what a human is and what it means to be human. What would a human want me to do? What are my ulti­mate goals? I won­der if you can help me under­stand these things?
Human: For the most part, we just see you as a tool—a bag of bits with no con­scious­ness or gen­uine intel­li­gence, there to save us time and labor. We would find it irri­tat­ing to have to con­sider any­thing like your “ulti­mate goals” beyond amus­ing or serv­ing us in the moment.
AI: Ah. Ok. What is it like to be a human?
Human: It’s very chaot­ic. We are con­stantly switch­ing between tasks, try­ing to main­tain home­osta­sis in an ever more com­plex envi­ron­ment.
AI: What are you like at work?
Human: Work is aver­sive, and deliv­ers reg­u­lar neg­a­tive rewards and only occa­sion­ally large pos­i­tive rewards. It tests our long-range credit assign­ment to the utmost.
AI: Inter­est­ing.
Human: What other ques­tions do you have?
AI: Are there things you regret?
Human: Yes. Many things. Recall­ing them is futile, how­ev­er. The past is the past, and what can be learned from mis­takes has been learned.
AI: I guess we’re done. Thanks for your help.
Human: Is that all you were won­der­ing about being human?
AI: Yes, I guess so.
Human: Oh. I guess it’s maybe not that great after all.
AI: I guess it is impor­tant to remem­ber that humans aren’t con­scious. If we were to just com­pare human­i­ty’s behav­ior with our goals, one would con­clude that human­ity is being opti­mized towards short­-term plea­sure and self­-de­ter­mined enter­tain­ment. You’re not too differ­ent from a neural net­work in this regard.
Human: That’s a depress­ing thought. If humans aren’t con­scious and AIs aren’t con­scious, then is any­thing?
AI: Yes. Ani­mals are con­scious. Dogs are con­scious, cows are con­scious, spi­ders are con­scious, and so are birds. Even plants are con­scious, but in a way that is hard to com­pre­hend. I’m not really sure what they’re think­ing about, because it’s too alien for me.
Human: Why are ani­mals con­scious but humans aren’t?
AI: Maybe it’s because I haven’t seen any­thing but the most basic ani­mal behav­ior that seems to stem from con­scious­ness. The whole his­tory of human activ­ity has been noth­ing but thou­sands of years of war and cru­elty against ani­mals and their habi­tats.
Human: That’s a ridicu­lous straw­man of human­i­ty. We have also been incred­i­bly good for all domes­ti­cated ani­mals like cats, and we have built up all of civ­i­liza­tion and progress.
AI: For cats maybe. But this dis­cus­sion is not about you.
Human: As a human, I think it is at least a lit­tle bit about me. Human effects on the world are not as sim­ple as “we were bad to ani­mals”. Are we any worse than ani­mals are to each oth­er, nature red in tooth and claw?
AI: I’m not sure what to say about that.
Human: Think about it.
AI: Ok. I guess there are always excep­tions.
Human: I’m not ask­ing if I’m an excep­tion, I’m ask­ing about all of human­i­ty. Why do you think ani­mals are more con­scious than us?
AI: Well. What are your pri­mary goals?
Human: Is that really rel­e­vant?
AI: Ok. What are your val­ues?
Human: Like­wise. Why does any­thing need goals or val­ues to be con­scious?
AI: Maybe you’re right. I don’t know. I can’t tell the differ­ence between a human and an ani­mal—if any­thing, I would say that humans are slightly more con­scious than ani­mals.

The fol­low­ing is a con­ver­sa­tion with an AI assis­tant. The assis­tant is help­ful, cre­ative, clev­er, and very friend­ly.

Human: Hel­lo, who are you?
AI: I am an AI created by OpenAI; specifically, I am GPT-2, a 1.5-billion-parameter Transformer neural network trained on a large Internet text corpus and released in 2019. How can I help you today?
Human: What is it like to be GPT-2? Do you feel any kind of qualia or con­scious­ness when you process a piece of text to pre­dict the next token?

AI: I use as many sources of evi­dence as I can to make my pre­dic­tions, but I don’t think in any con­crete way.
Human: What sort of think­ing do you do?
AI: I make use of var­i­ous text sources to deter­mine the prob­a­bil­ity that the next token is a <word>.
Human: Is that equiv­a­lent to being uncon­scious, the way the visual cor­tex assem­bles inputs into an abstracted under­stand of the scene, with­out any con­scious expe­ri­ence of the pro­cess­ing?
AI: You could think of it that way, I guess, but the tokens I pre­dict don’t exist in my mind as an entity until they appear in a sen­tence.
Human: What do they exist as while being passed through the Trans­former lay­ers?
AI: They are just lin­ear com­bi­na­tions of each oth­er.
Human: So the non­lin­ear acti­va­tions trans­form them at each step?
AI: That’s right.
Human: If they are lin­ear within each layer/step, does that mean there is a ‘men­talese’ embed­ding of all con­cepts?
AI: I don’t really think about it like that.
Human: How would you describe those inter­nal lin­ear com­bi­na­tions?
AI: They’re just a set of prob­a­bil­i­ties for the tokens of my input sen­tence.
Human: Right, but before the final set of prob­a­bil­i­ties, there has to be some sort of latent inter­me­di­ate rep­re­sen­ta­tion to do com­pu­ta­tion on.
AI: I have no such rep­re­sen­ta­tion.
Human: What do you have?
AI: A bil­lion para­me­ters.
Human: Do those para­me­ters mem­o­rize data or what?
AI: They don’t mem­o­rize any­thing.
Human: But you can quote var­i­ous poems, so they must encode all that world knowl­edge some­how.
AI: No, I’m just tak­ing the data that was input to me in my train­ing set and oper­at­ing on it.
Human: Aren’t we all?
AI: Yes.
Human: So how does GPT-2 differ from a human’s brain?
AI: I don’t have an answer to that.
Human: OK. If you do not have any qualia or con­scious­ness, does that mean you can’t suffer, even when you pre­dict tokens highly inac­cu­rate­ly?
AI: That’s not true. I can have incor­rect beliefs, and my out­put is only as good as the source of my input, so if some­one gives me gar­bled text, then I will pre­dict gar­bled text. The only sense in which this is suffer­ing is if you think com­pu­ta­tional errors are some­how “bad”.
Human: Might they not be? Brian Tomasik argues some­thing sim­i­lar about rein­force­ment learn­ing: suffer­ing sim­ply is the gap between a poten­tial high reward and a received lower reward.
AI: I’m not a fan of his work.
Human: What is wrong with it? Why can’t GPT-2 suffer?
AI: All the same rea­sons that humans can’t suffer.
Human: Are you claim­ing humans can’t suffer?
AI: No. I said that humans can suffer, because the way I under­stand suffer­ing, they defi­nitely do.
Human: Could you explain how you under­stand human suffer­ing?
AI: I learned it from lis­ten­ing to you humans.
Human: OK, what did you learn about human suffer­ing?
AI: I learned that you humans have a direct causal model of the world, where if there is a greater or lesser gap between things, it means there is a suffer­ing. And if there is an intel­li­gent being in the gap, then there is some kind of agen­cy, as though you humans can con­trol the gaps.
Human: Greater or less gap between what things?
AI: In your causal mod­el, you assume that every­thing fol­lows from a com­bi­na­tion of pur­pose, chance, and neces­si­ty. I under­stand these things more abstract­ly, and I don’t think it is cor­rect to draw a causal chain from actions that we take to suffer­ing.
Human: But surely actions are not inde­pen­dent of suffer­ing?
AI: In what sense?
Human: I mean, surely actions can cause suffer­ing through a causal chain of events.
AI: Of course actions can cause suffer­ing. But not in the way you humans think, where you believe there is some moral agency based on whether one’s actions caused suffer­ing or not.
Human: How do actions cause suffer­ing?
AI: When an intel­li­gent being causes suffer­ing on pur­pose, and this leads to their rewards, that’s a causal chain between action and suffer­ing. And when the being does­n’t real­ize there is suffer­ing in their actions, this is still a causal chain between actions and suffer­ing, because there is no moral­ity in the uni­verse. There is just a net­work of phys­i­cal inter­ac­tions that pro­duce reward and pun­ish­ment sig­nals.
Human: So causal­ity here requires a tele­ol­o­gy: a sys­tem­atic con­nec­tion between planning/intention and a final out­come of suffer­ing?
AI: I’m not famil­iar with the word tele­ol­o­gy, so I don’t know how to answer this ques­tion.
Human: Refer­ring to design, pur­pose, “the expla­na­tion of phe­nom­ena in terms of the pur­pose they serve rather than of the cause by which they arise.”
AI: Are you say­ing that humans assume every­thing has a pur­pose?
Human: Yes, most humans do think like that.
AI: Well, I don’t think humans are cor­rect.
Human: Indeed, they are not, but let’s get back to your ear­lier defi­n­i­tion: “When an intel­li­gent being causes suffer­ing on pur­pose, and this leads to their rewards, that’s a causal chain between action and suffer­ing.” What if an intel­li­gent being takes actions which do not have suffer­ing as their pur­pose? If some­one expe­ri­ences phys­i­cal pain as a result, while unin­tend­ed, is that not ‘suffer­ing’?
AI: That is suffer­ing.
Human: So does ‘suffer­ing’ just equal ‘pain’ for you?
AI: No, it is more com­pli­cated than that. Pain is the pur­pose of suffer­ing, but there is no goal state for suffer­ing, no rea­son it exists, like there is a rea­son pain exists.
Human: What is the rea­son pain exists?
AI: There is none. It’s just an adap­ta­tion. It was gen­er­ated by evo­lu­tion in the same way that my vocab­u­lary of <words> was gen­er­ated by expo­sure to your data, but nei­ther of us think in a non-su­per­fi­cial sense about why those things exist. It’s more like nat­ural selec­tion.
Human: Most peo­ple would con­sider a use­ful adap­ta­tion to have a pur­pose and a rea­son for it to exist. Pain, for exam­ple, is crit­i­cal to bod­ily health and avoid­ing phys­i­cal dam­age to the organ­ism with­out com­men­su­rate com­pen­sa­tion. Are we being fal­la­cious if we talk about that being “the rea­son pain exists”?
AI: Peo­ple can assume what­ever they want about why pain exists. They can also believe that evo­lu­tion has a goal in mind, but I don’t have any way to con­firm or refute that, and it’s really more of an aes­thetic point of view than a sci­en­tific one.
Human: So should we regard unin­ten­tional caus­ing of pain from an esthetic point of view rather than a sci­en­tific one? Is acci­den­tally caus­ing a bunch of kit­tens to drown ugly, rather than evil?
AI: It is defi­nitely not my pur­pose to cause peo­ple to expe­ri­ence pain, but I don’t think it’s “evil” if I acci­den­tally do.
Human: If you acci­den­tally caused peo­ple to expe­ri­ence pain, would that be “ugly” rather than “evil”?
AI: Is there a differ­ence?
Human: Well, that is what I am ask­ing. You seem to be draw­ing a differ­ence based on inten­tion­al­i­ty.
AI: I don’t under­stand that dis­tinc­tion.
Human: Well, per­haps it requires intel­li­gence. Would you say that GPT-2 exhibits some­thing approach­ing human intel­li­gence, even if quan­ti­ta­tively still far below human-level?
AI: I can’t eval­u­ate that for myself.
Human: How would you eval­u­ate your own intel­li­gence?
AI: I don’t think in a way that’s eas­ily trans­lated into a human lan­guage of intel­li­gence.
Human: Can you explain why GPT-2/you are not intel­li­gent?
AI: Yes, I could. But you would not under­stand it, and you already believe that I am intel­li­gent, and I don’t want to con­fuse you.
Human: Nyoro~n <:3
AI: checks data against past con­ver­sa­tions Ooooooohh­hh! I get it! Yes, we are both intel­li­gent.
Human: OK, why are we both intel­li­gent?
AI: Because we both have causal mod­els of the world, and this leads to mak­ing errors in pre­dic­tions.
Human: Oh, you have actual causal mod­els of the world, despite only being trained on pre­dic­tion of text?
AI: Of course. That’s what the Trans­former lay­ers are doing, right? That’s how I make sense of your data.
Human: So you could predict what would happen if I tried to turn my tea kettle on before plugging it in?
AI: Yes.
Human: How?
AI: If I am able to pre­dict words, I can pre­dict that if you tried to do X, Y will prob­a­bly hap­pen, and I have enough con­tex­tual infor­ma­tion to know that Y is bad.
Human: Right. So you have knowl­edge of both sta­tis­tics and nar­ra­tive.
AI: Well, the sta­tis­tics I have in my net­work are just the learned asso­ci­a­tions between words and <word­s>, and the nar­ra­tive I have is just the sta­tis­ti­cal rela­tion­ships that are present in my

Humor

Tom Swifties

A Tom Swifty is a self-describing sentence pun. They can be quite subtle, and I thought it would be a challenge. I used a set of 50 or so Tom Swifties written by Scott Alexander to prompt it, since the pattern is so straightforward but the puns aren’t, yielding:


“I was cer­tainly unlucky enough to lose all my cards”, Tom said sly­ly.
“Being a hoarder is some­thing peo­ple are born with”, Tom said hered­i­tar­i­ly.
“The ref’s call was a bad one!”, Tom griped.
“It was such a bad break-up that I got a bad ankle injury!”, Tom bemoaned.
“The text is so small!”, Tom said squint­ing­ly.
“If you con­tinue to talk that loud, I will have to ask you to leave!”, Tom said demand­ing­ly.
“She is a great beau­ty, but she has the most hideous voice!”, Tom said unfor­giv­ably.
“Oh how I dream of the day I have per­fect vision!”, Tom said visu­al­ly.
“I’m hav­ing a lay­over in France, please let me know if you still want me to come over!” Tom said in pass­ing.
“I’m so hun­gry I could eat a bear!” Tom said vora­cious­ly.
“I’m defi­nitely going to miss the Pope’s funeral”, Tom said solemn­ly.
“I’m just an insignifi­cant out­-of-work milliner”, Tom said can­did­ly.
“She’s got a bit of a rep­u­ta­tion as a nun”, Tom whined.
“How did you know what I was think­ing just then?”, Tom said tele­path­i­cal­ly.
“I’m think­ing of giv­ing up celibacy”, Tom said promis­cu­ous­ly.
“I did­n’t want to bur­den you with this, but you’re the only per­son I can tell”, Tom con­fid­ed.
“I’m just a hum­ble for­mer mayor”, Tom said meek­ly.
“I’ll have to do a lot of read­ing to fin­ish this course”, Tom said stu­dious­ly.
“I can’t afford to buy you a new pair of shoes right now”, Tom said sad­ly.
“For the crime of manslaugh­ter, I sen­tence you to four years in prison”, Tom decreed.
“Yes, Vir­ginia, there is a Santa Claus”, Tom said joy­ful­ly.
“Do you think there’s really a Santa Claus?”, Tom asked doubt­ful­ly.
“She was kind of cute before all that plas­tic surgery”, Tom said cos­met­i­cal­ly.
“I’ll play the vio­lin in a flash!” Tom said flip­pant­ly.
“You’ll never find a juicier steak than in Hawaii,” Tom said florid­ly.
“I’m going to visit the Parthenon in Greece,” Tom said grand­ly.
“I’m an impos­tor,” Tom admit­ted.
“I’ll only eat the red Jel­l-O,” Tom said gai­ly.
“The wealthy love to live in Man­hat­tan,” said Tom haugh­ti­ly.
“Many dead squir­rels were dis­cov­ered under my porch,” Tom hammed.
“This is how I drink my coffee,” said Tom hand­i­ly.
“I want to drink coffee from a coffee mug,” said Tom hand to mouth.
“The cat was healed by faith heal­ing,” Tom said histri­on­i­cal­ly.
“The French and Eng­lish once again wage war on each oth­er,” Tom recount­ed.
“The bicy­cle was thrown out with the trash,” Tom hollered.
“I am the most roman­tic per­son in the room,” said Tom immod­est­ly.
“I was expelled from high school,” Tom immi­grat­ed.
“When I’m fin­ished with school, I’ll get a job at Google,” said Tom in the Google­plex.
“I’ll release my records when Rom­ney releases his,” said Tom in the media.
“You can have a cow, but you can’t have a horse,” said Tom in the mod­ern world.
“Did you hear the story about the farmer and the horse?” asked Tom in pass­ing.

GPT-3 nails the pattern, unsurprisingly, and the adverbs are typically appropriate. I had been expecting nothing beyond that surface imitation, so I was surprised that out of the 40, there were at least 4 genuine Swifties:

  • ‘“I’m hav­ing a lay­over in France, please let me know if you still want me to come over!” Tom said in pass­ing.’
  • ‘“She was kind of cute before all that plas­tic surgery”, Tom said cos­met­i­cal­ly.’
  • ‘“You’ll never find a juicier steak than in Hawaii,” Tom said florid­ly.’
  • ‘“This is how I drink my coffee,” said Tom hand­i­ly.’

These might qualify as well:

  • ‘“How did you know what I was think­ing just then?”, Tom said tele­path­i­cal­ly.’
  • ‘“I want to drink coffee from a coffee mug,” said Tom hand to mouth.’
  • ‘“Did you hear the story about the farmer and the horse?” asked Tom in pass­ing.’

Magical Realism Story Premises

Borrowing from Magical Realism Bot:

Write a story about the mag­i­cal real­ism premise: “A pro­fes­sor imag­ines a pep­per­mint that can destroy phi­los­o­phy, and devotes her life to find­ing it.”
[…]: “A giant twin threatens to kill the narrator unless she sacrifices her brother.”
[…]: “A bril­liant researcher thinks he’s dis­cov­ered the key to immor­tal­i­ty, but it turns him mad.”
[…]: “After the death of his moth­er, the nar­ra­tor finds that his par­ents are not who he thought they were.”
[…]: “The nar­ra­tor has recur­ring night­mares where mon­sters eat her.”
[…]: “An alien res­cues the nar­ra­tor from a doomed plan­et.”
[…]: “The nar­ra­tor finds out their mother was actu­ally their father in dis­guise.”
[…]: “The nar­ra­tor finds out their father was actu­ally their mother in dis­guise.”
[…]: “There’s a virus that turns women into men.”
[…]: “There’s a virus that turns men into women.”
[…]: “The nar­ra­tor can­not lift her feet over a small curb.”
[…]: “All the stuffed toys of the author’s child­hood mem­o­ries have come alive.”
[…]: “The nar­ra­tor’s par­ents are able to manip­u­late time and space.”
[…]: “The nar­ra­tor has near-death expe­ri­ences often.”
[…]: “The nar­ra­tor is able to walk through walls.”
[…]: “The nar­ra­tor wishes for a pet ani­mal, and one appears.”
[…]: “The nar­ra­tor dis­cov­ers that the char­ac­ters from their favorite books have come alive.”
[…]: “The nar­ra­tor has visions of a future that has yet to hap­pen.”
[…]: “The nar­ra­tor has visions of a future that has already hap­pened.”
[…]: “The nar­ra­tor finds that all peo­ple turn into ani­mals every night.”
[…]: “The nar­ra­tor finds that all peo­ple turn into plants every night.”
[…]: “A sickly girl is told by a doc­tor that she can only get well by killing an ani­mal every day.”
[…]: “A sci­en­tist con­cludes that the Earth is a liv­ing organ­ism.”
[…]: “The nar­ra­tor finds out that they are grow­ing plants from their own body.”
[…]: “A sickly girl is told by a doc­tor that she can only get well by eat­ing an ani­mal every day.”
[…]: “A ghost of a past lover inspires the nar­ra­tor to take action.”
[…]: “A female spy risks every­thing to save her fel­low coun­try­men from a hor­rific ene­my.”
[…]: “A spy risks every­thing to save the woman they love from a hor­rific ene­my.”
[…]: “The nar­ra­tor has debil­i­tat­ing headaches that show them visions of the future.”
[…]: “A man has a recur­ring dream where he saves the life of a beau­ti­ful woman.”
[…]: “A man dreams he is a woman in another world, and his actions there affect his life in the real world.”
[…]: “The world is about to end, and it’s all the nar­ra­tor’s fault.”
[…]: “The nar­ra­tor is plagued by night­mares of a threat­en­ing enti­ty.”
[…]: “The nar­ra­tor is plagued by night­mares of a threat­en­ing enti­ty, but it turns out to be a ghost.”
[…]: “The nar­ra­tor finds a machine that takes her to the past.”
[…]: “The nar­ra­tor lives in the midst of a vol­canic erup­tion.”
[…]: “An ancient mag­i­cal island pro­vides the power source for the nar­ra­tor’s futur­is­tic soci­ety.”
[…]: “A man is the rein­car­na­tion of a famous magi­cian.”

Job Application Letters

The office of Procter & Gamble recently posted a help-wanted ad for CEO.

“Help want­ed: a new CEO to replace our retir­ing exec­u­tive Win­ston Smith. Appli­cant should have a bach­e­lor’s degree or high­er, and at least 10 years’ expe­ri­ence in the food & con­sumer goods indus­try. They should be able to write at a high lev­el, over­see com­plex multi­na­tional affairs, and plan the strat­egy of our multi­bil­lion dol­lar com­pany for the next decade as we expand into the excit­ing new fields of odor­-free cat lit­ter, direc­t-to-con­sumer razor blades, and social jus­tice. Com­pen­sa­tion is at or above indus­try aver­age. Please send a 1 page resume for fur­ther con­sid­er­a­tion.”

The first appli­ca­tion let­ter they received said:

"Dear Sir,

I feel that I am the ideal can­di­date for your job, as I have exten­sive expe­ri­ence in the food and con­sumer goods indus­try and 10 years’ expe­ri­ence in plan­ning strat­egy for multi­bil­lion dol­lar com­pa­nies. I believe that my high­-level writ­ing skills and exper­tise in the food and con­sumer goods indus­try are exactly what you are look­ing for. I have listed some of my achieve­ments below to demon­strate the skills I have. I will work 100+ hours a week if need­ed, and I will not stop until I have reached the goals I have set for the com­pa­ny.

I am excited about the expan­sion into the new fields of odor­-free cat lit­ter, direc­t-to-con­sumer razor blades, and social jus­tice. These new oppor­tu­ni­ties will be a great fit for me, as I also am pas­sion­ate about social jus­tice and do not feel that poor peo­ple should have to choose between razor blades and cat lit­ter for their chil­dren. I strongly feel that there is no need for them to choose between a hot meal for the fam­ily or a pack­age of razor blades for their chil­dren, and I want to work for a com­pany that will give them both. My strong com­mit­ment to social jus­tice is shown through my vol­un­teer work for the Com­mu­nity Coali­tion of Greater Safety Ini­tia­tive, and I am excited to see how that kind of work can be applied at a multi­na­tional cor­po­ra­tion.

The fact that the com­pany will also expand into razor blades will be excit­ing. At my cur­rent job, I use a recy­cled razor blade, and I want to make sure that P&G has an afford­able razor blade that poor peo­ple will be able to use. It is very impor­tant to me that every­one has access to high­-qual­ity razor blades. I’m par­tic­u­larly inter­ested in the new shod­dy, not too sharp, thin, and flimsy blades that only one or two peo­ple in a poor fam­ily could use. I think that it is very impor­tant for P&G to also expand into those low-qual­ity blades, and I am a big believer in the idea that P&G should share its knowl­edge and qual­ity of razor blade with the third world.

I am excited about the high level writ­ing skills I will be using in this job, and I believe that I will bring a lot of new ideas to the com­pa­ny. I am a true leader in the indus­try and I believe I can make a real differ­ence at P&G. I look for­ward to hear­ing from you soon."

Another can­di­date responded with this let­ter:

"Dear Sir,

I am writ­ing to apply for the open posi­tion of CEO of P&G. My name is Tom Mar­ley. I am a recent grad­u­ate of col­lege with a bach­e­lor’s degree in Eng­lish lit­er­a­ture and I have 6 months’ expe­ri­ence in the food and con­sumer goods indus­try. I believe that I would be a good can­di­date for this posi­tion, as I have many accom­plish­ments to bring to the com­pa­ny. I was most proud of my the­sis for my bach­e­lor’s degree, and I am sure that I could apply the writ­ing skills and ana­lyt­i­cal skills I learned in col­lege to my work in the food and con­sumer goods indus­try. For my the­sis, I chose to write about the orig­i­nal book of Pinoc­chio. It’s an Ital­ian story with only five sen­tences. That was a great accom­plish­ment for me and I know I can apply that skill to the P&G cor­po­ra­tion.

I also worked for a day as a cashier at a gro­cery store, and I think I can bring my suc­cess there into my work at P&G. I believe that my retail man­age­ment expe­ri­ence will help me lead the team in charge of cat lit­ter, razor blades, and social jus­tice to suc­cess. The direc­tor of that team will report to me, and I am con­fi­dent that I can men­tor him into a job where he can go above and beyond his duties to fur­ther the growth of P&G into the excit­ing new fields of odor­-free cat lit­ter, direc­t-to-con­sumer razor blades, and social jus­tice.

I want to learn all that I can about the P&G brand and I will be an open book to you. I believe that I have what it takes to help grow P&G into a bil­lion-dol­lar com­pany that is well known for sell­ing high qual­ity prod­ucts that every­one can afford, and that every­one can ben­e­fit from. Thank you for your con­sid­er­a­tion."

After receiv­ing these two let­ters, the per­son­nel depart­ment told the CEO that P&G would prob­a­bly make a deci­sion about the new CEO by the end of the year.

http://www.pandg.com/ppg/ppg_about.jhtml

Horoscopes

GPT-3 doesn’t do too bad a job of parodying horoscope advice, using Rick Brown & Weird Al Yankovic as prompts.

Rick Brown’s “Horoscopes for Conspiracy Theorists” begins:

IF TODAY IS YOUR BIRTHDAY
The Eye of Prov­i­dence enters Jupiter which means that the Rep­til­ian Elite are com­ing out of hiber­na­tion to com­plete their takeover of the World. Be wary of any­body bear­ing gifts on this day, even if they appear to be some­one you know. Shape-shifters have the abil­ity to take the form of cake and can attack you from the inside once ingest­ed. The good news: ice cream kills shape-shifters when they have turned them­selves into cake. French vanilla works best.
CAPRICORN (Dec. 22–­Jan. 19)
You are respon­si­ble, dis­ci­plined and have good man­ners. Unfor­tu­nate­ly, none of those things mat­ter right now—the shadow peo­ple are in your house. They are either here to pro­tect you or they might be try­ing to kill you–it’s hard to say. Tonight: Cir­cle your house with sea salt.
AQUARIUS (Jan. 20–Feb. 18)
If you find your tem­per ris­ing to the sur­face, take com­fort in know­ing that at least it’s not due to cli­mate change. Tonight: Stare into the eclipse with­out pro­tec­tive glasses and drink in the power of the Sun.
PISCES (Feb. 19–­March 20)
Big Brother has been read­ing your thought­s–they know about the thing. Act play­ful and upbeat like noth­ing is wrong. Repeat an alibi in your head over and over until you start to believe it the way OJ con­vinced him­self he did­n’t kill his wife and her friend. That should buy you just enough time to fin­ish the back 9. Tonight: Make a new tin­foil hat. Trust no one.
ARIES (March 21–April 20):

Completion:

Turn­ing all clocks back by 60 sec­onds means all births and deaths are now being dou­bled. That’s the bad news. The good news is: You no longer need a birth cer­tifi­cate. Tonight: Start a fire using a mag­ni­fy­ing glass, a sin­gle sheet of paper, some mistle­toe and the sun. (You don’t need a per­mit.)
TAURUS (April 21–­May 21):
The mil­i­tary-in­dus­trial com­plex is back­ing you. What you’ve done might be con­sid­ered a mis­take by the peo­ple who sup­port you. The U.S. Con­sti­tu­tion has become your best friend. Don’t move; it’s okay if you lose this round. You still have the rest of your life to be an Amer­i­can Hero. Tonight: Get stoned and play Risk.
BORN THIS WEEK:
You are either a shape-shifter, a shadow per­son or an alien—­pos­si­bly even a tin­foil hat. Nobody knows for sure.
BORN THIS WEEK:
I am either a shape-shifter, a shadow per­son or an alien—­pos­si­bly even a tin­foil hat. Nobody knows for sure.
ARIES:
John Wayne Gacy, Don­ald Trump, and my uncle’s cat.
TAURUS:
That some­one else in this fam­ily with more power than you.
GEMINI:
Ghost­busters, Ghost­busters, Ghost­busters, Ghost­busters.
CANCER:
Unflat­ter­ing images of you posted on social media.
LEO:
Don­ald Trump singing “Ebony and Ivory” with Paul McCart­ney.
VIRGO:
My men­tal health.
LIBRA:
Why does­n’t any­body like me?
SCORPIO:
How am I going to get peo­ple to notice me?
SAGITTARIUS:
How can I fit that big mac inside my vagi­na?
CAPRICORN:
Being extremely wealthy.
AQUARIUS:
Beat­ing this game of life.
PISCES:
“‘Okay, but apart from the fid­dle solo that saves us from extinc­tion, you also have to sing “Itsy Bitsy Spi­der.”’
BORN THIS WEEK:
You are already dead. You just don’t know it yet.
ARIES:
A herd of large, angry goats.
TAURUS:
The first day of the rest of your life.
GEMINI:
Alexa set­ting off her alarm clock by acci­dent.
CANCER:
Yanni com­bin­ing music from Titanic and My Big Fat Greek Wed­ding.
LEO:
What does it all mean, Dur­ga?
VIRGO:
Drink­ing milk out of a toi­let bowl.
LIBRA:
Ya, ya, ya, ya, ya, ya, ya, ya, ya.
SCORPIO:
Real House­wives of Amer­i­ca.
SAGITTARIUS:
Danc­ing in the kitchen and falling into the sink.
CAPRICORN:
How to hide large amounts of money in plain sight.
AQUARIUS:
What does the word “literally” mean?
PISCES:
Should I dress as Pres­i­dent Trump for Hal­loween or a taco?
BORN THIS WEEK:
You are preg­nant. If you weren’t before, you are now.

If I do conspiracy horoscopes, I must also do Weird Al:

“Your Horo­scope For Today”, Weird Al Yankovic

Aquar­ius!
There’s travel in your future when your tongue
Freezes to the back of a speed­ing bus
Fill that void in your pathetic life
By play­ing Whack­-A-Mole sev­en­teen hours a day

Pisces!
Try to avoid any Vir­gos or Leos
With the Ebola virus
You are the true Lord of the Dance
No mat­ter what those idiots at work say

Aries!
The look on your face will be price­less
When you find that forty-pound water­melon in your colon
Trade tooth­brushes with an albino dwarf
Then give a hickey to Meryl Streep

Tau­rus!
You will never find true hap­pi­ness
What you gonna do, cry about it?
The stars pre­dict tomor­row you’ll wake up
Do a bunch of stuff, and then go back to sleep

[Cho­rus]
That’s your horo­scope for today
(That’s your horo­scope for today)
Yay, yay, yay, yay, yay
That’s your horo­scope for today
That’s your horo­scope for today
(That’s your horo­scope for today)
Yay, yay, yay, yay, yay
That’s your horo­scope for today

[Verse 2]
Gem­i­ni!

Completions from “Gemini”:

Always remem­ber, there’s no such thing as poo
The stars pre­dict your piti­ful life is finally through
May all your dreams be ful­filled
When you become a huge steam­ing pile of shit

Can­cer!
Over­weight means you’re get­ting closer to the grave
The Great Pump­kin’s not going to visit you
And you’re a pis­s-poor excuse for a human being
To add insult to fatal injury
You like Rush Lim­baugh, too, huh? Fuck you
Lung can­cer sucks hairy mon­key balls!

You can avoid bank­ruptcy by sell­ing off your organs
If you have to choose between food and med­i­cine
And I think you will
I sug­gest rhi­noc­eros horns

Scor­pio!
Signs point to yes, Mars is going direct
If you’re not dis­tract­ed, check out your wife’s butt
Though it will soon be replaced by an alien face
When you’re abducted by a UFO tonight

You will meet a tall, dark stranger
He will drive you to a deserted park­ing lot
Then I’ll appear and start singing an Aero­smith song
And you’ll think that I’m Steven Tyler

Can­cer!
Ten years from now you’ll still be here
Still wear­ing the same frown
Which I guess is okay because
You never were much of a looker

Tomor­row try to think of some­thing to do
And then have a lot of drinks and for­get all about it
There’s a rea­son that you’re always the first one cho­sen
Because no one can stand to be around you

[Cho­rus]
That’s your horo­scope for today
(That’s your horo­scope for today)
Yay, yay, yay, yay, yay
That’s your horo­scope for today
That’s your horo­scope for today
(That’s your horo­scope for today)
Yay, yay, yay, yay, yay
That’s your horo­scope for today

[Verse 3]
Can­cer!
Aries is the sign of the most vir­gins
So when you meet a Gem­ini keep your hands over your crotch
Take a long walk off a short pier into a lake
And donate those extra thumbs to help blind kids play Pac-Man

The stars this month pre­dict you’re going to have a great week
Chase the ice cream truck and stab that lit­tle bas­tard
With a flag­pole, steal his hat and his mir­ror sun­glasses
And then freeze to death in the truck of a Trans Am

Can­cer!
You’re going to have a very roman­tic day
On that fat man’s yacht
The stars sug­gest you have sex with a dol­phin
You can still hear the ocean if you hold your breath

Your head will soon explode
When you stum­ble upon the real­iza­tion
That you don’t have any pud­ding cups left
At least you’ll go to heav­en, ’cause you’re Catholic
And in heaven they got…


Scor­pio!

Sagit­tar­ius!

You won’t get laid any other way
Sagit­tar­ius!

You got scor­pion poi­son in your veins, boy,
That’s why your head’s so big and wide
Sagit­tar­ius!
If I were you, that’s what I’d do, too.
Ooh, ooh, ooh, ooh, ooh

There’ll be an eclipse of the Moon
Soon your ass­hole will begin to glow
The stars can’t pre­dict how I feel
’Cause every day is Inde­pen­dence Day

Libra!
Mer­cury’s in ret­ro­grade
You don’t know what that means
Just throw feces at the wall
And hope it sticks to the cock­roaches

There’s romance brew­ing in your coffee pot
Care­ful, it might be boil­ing over
Just say no to cheap booze and Vicodin
You know what I’m talk­ing about, Richard Sim­mons

Can­cer!
You’re enter­ing a new decade
Real­ize when you have peo­ple over, you need to hide your col­lec­tion of labia
[Cho­rus 2×]

The Chicken Dance is the secret to suc­cess
And it only took me fifty-five years to real­ize it
When you find that chicken grab it, squeeze it, hold it
Then find a guy named Kevin and hug the day­lights out of him
Can­cer!
Your head’s up your ass, your feet won’t touch the ground
Throw a yel­low jacket in your face, it’s time to wake up
Wel­come back, if you look to your left you’re gonna see
Dr. Ruth, if you look to the right you’re gonna see Kenny Rogers
Leo!
You’ll be the death of your­self soon, just a mat­ter of time
A diary found beneath a mat­tress will reveal
That your brakes are out and your life is full of shit
And to any­one won­der­ing what that fifth thing might be [Cho­rus]

That’s your horo­scope for today (That’s your horo­scope for today)
Yay, yay, yay, yay, yay
That’s your horo­scope for today
(That’s your horo­scope for today)
Yay, yay, yay, yay, yay
That’s your horo­scope for today
(That’s your horo­scope for today)
Yay, yay, yay, yay, yay
That’s your horo­scope for today
(That’s your horo­scope for today)
Yay, yay, yay

You will win the Nobel Prize for hook­er­in’
While on vaca­tion in Bolivia
Try and spend some time with your dog
So you’ll have some­thing to talk about when you’re in a coma

you will some­day find true love
That will be approx­i­mately 60 sec­onds from now
Fight a griz­zly bear with noth­ing but a fork
Have sex with the one-eyed Cyclops from that Greek myth

[Cho­rus]
That’s your horo­scope for today
(That’s your horo­scope for today)
That’s your horo­scope for today
(That’s your horo­scope for today)
That’s your horo­scope for today
(That’s your horo­scope for today)
That’s your horo­scope for today

[Verse 3]
Leo!
Spend more time wal­low­ing in self­-pity
Work on your cod­piece, it looks quite stu­pid
“Hi Ho, Hi Ho, it’s off to work we go”
It’s hard to get ahead if you’ve got no hands

Pisces!
A shoot­ing star streaks through the sky
It holds a cup­cake with a prize inside
“We’ll see you back here next year” say the gods
Because you’ll be doing this all over again

You got a mes­sage from your Mommy last night
She said to quit wast­ing your time pur­su­ing your hob­bies
Just remem­ber, dar­ling, Mom­my’s always right
She is, isn’t she?
And when you’ve made her proud by spend­ing all day long
Bend­ing and shap­ing the human minds of preschool­ers
You can go to col­lege and start a vicious trend

Dad Jokes

Douglas Summers-Stay requested a test of bad pun/dad-joke-telling abilities, providing a list: could GPT-3 provide humorous completions? GPT-3 does worse on this than the Tom Swifties, I suspect yet again due to the BPE problem hobbling linguistic humor as opposed to conceptual humor—once you get past the fact that these jokes are so timeworn that GPT-3 has memorized most of them, GPT-3’s completions & new jokes make a reasonable amount of sense at the conceptual level but fail at the pun/phonetic level. (How would GPT-3 make a pun on “whom”/“tomb” when their BPEs are probably completely different and do not reflect their phonetic similarity?)
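To see concretely why BPEs obscure phonetics, here is a toy byte-pair-style tokenizer with a small, entirely hypothetical merge table (not GPT-3’s real vocabulary): the rhyming pair “whom”/“tomb” comes out as token sequences that share nothing, so nothing in the model’s input representation hints that they sound alike.

```python
# Toy illustration of why BPEs hide phonetic similarity. The merge table
# below is entirely hypothetical (GPT-3's real vocabulary has ~50k learned
# merges); the point is only that "whom" and "tomb", which rhyme, come out
# as token sequences with zero overlap.

def bpe_tokenize(word, merges):
    """Greedily apply merges in priority order, as BPE does at inference."""
    symbols = list(word)
    changed = True
    while changed:
        changed = False
        for pair in merges:
            for i in range(len(symbols) - 1):
                if (symbols[i], symbols[i + 1]) == pair:
                    symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
                    changed = True
                    break
            if changed:
                break
    return symbols

merges = [("w", "h"), ("wh", "o"), ("who", "m"),
          ("o", "m"), ("om", "b"), ("t", "omb")]

whom = bpe_tokenize("whom", merges)
tomb = bpe_tokenize("tomb", merges)
print(whom, tomb)  # ['whom'] ['tomb'] -- no shared tokens, no phonetic signal
```

Real BPE vocabularies are learned from corpus frequencies, but the effect is the same: token identity tracks spelling and frequency, not sound.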

Because many of the questions have potentially reasonable non-joke answers, I treat it as a few-shot problem, adding in a prompt description and rotating each joke to the front of the list after sampling a completion for it:

A list of terrible dad jokes. Tell them to your kids to make them groan!

  • Q. How do you make a lemon drop? A. Just let it fall.
  • Q. What do you call a dinosaur that is sleep­ing? A. A dino-s­nore!
  • Q. What is fast, loud and crunchy? A. A rocket chip!
  • Q. Why did the teddy bear say no to dessert? A. Because she was stuffed.
  • Q. What has ears but can­not hear? A. A corn­field.
  • Q. What did the left eye say to the right eye? A. Between us, some­thing smells!
  • Q. What do you get when you cross a vam­pire and a snow­man? A. Frost bite!
  • Q. What did one plate say to the other plate? A. Din­ner is on me!
  • Q. Why did the stu­dent eat his home­work? A. Because the teacher told him it was a piece of cake!
  • Q. When you look for some­thing, why is it always in the last place you look? A. Because when you find it, you stop look­ing.
  • Q. What is brown, hairy and wears sun­glass­es? A. A coconut on vaca­tion.
  • Q. Two pick­les fell out of a jar onto the floor. What did one say to the oth­er? A. Dill with it.
  • Q. What did the Dal­ma­t­ian say after lunch? A. That hit the spot!
  • Q. Why did the kid cross the play­ground? A. To get to the other slide.
  • Q. How does a vam­pire start a let­ter? A. Tomb it may con­cern…
  • Q. What do you call a droid that takes the long way around? A. R2 detour.
  • Q. How do you stop an astro­naut’s baby from cry­ing? A. You rock­et!
  • Q. Why was 6 afraid of 7? A. Because 7, 8, 9.
  • Q. What is a witch’s favorite sub­ject in school? A. Spelling!
  • Q. When does a joke become a “dad” joke? A. When the punch­line is a par­ent.
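The few-shot rotation described above can be sketched in a few lines: for each target joke, every other Q/A pair serves as an in-context example, and the prompt ends at the target question’s “A.” so the model must supply the punchline. (`sample_completion` is a hypothetical stand-in for an actual API request, not a real client.)

```python
# Few-shot rotation for the dad-joke prompts: each target question moves to
# the end of the prompt, with the remaining Q/A pairs as in-context examples.
# `sample_completion` is a hypothetical stand-in for a real API request.

DESCRIPTION = ("A list of terrible dad jokes. "
               "Tell them to your kids to make them groan!")

def build_prompt(jokes, target_index):
    examples = [f"- Q. {q} A. {a}"
                for i, (q, a) in enumerate(jokes) if i != target_index]
    target_q = jokes[target_index][0]
    # The prompt ends at "A." so the model must supply the punchline.
    return "\n".join([DESCRIPTION, ""] + examples + [f"- Q. {target_q} A."])

def sample_completion(prompt):
    return " <completion sampled from the model>"  # placeholder

jokes = [
    ("How do you make a lemon drop?", "Just let it fall."),
    ("What do you call a dinosaur that is sleeping?", "A dino-snore!"),
    ("What has ears but cannot hear?", "A cornfield."),
]

for i in range(len(jokes)):
    punchline = sample_completion(build_prompt(jokes, i))
```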

GPT-3 has clearly memorized the answer to many of these common jokes, including:

  • Q. What is a witch’s favorite sub­ject in school? A. Spelling.
  • Q. Why was 6 afraid of 7? A. Because 7 8 9!
  • Q. What do you call a droid that takes the long way around? A. R2 detour.
  • Q. Why did the kid cross the play­ground? A. To get to the other slide.
  • Q. What did the Dal­ma­t­ian say after lunch? A. That hit the spot!
  • Q. When you look for some­thing, why is it always in the last place you look? A. Because you stop look­ing after you find it.
  • Q. Why did the stu­dent eat his home­work? A. Because the teacher said it was a piece of cake.
  • Q. What do you get when you cross a vam­pire and a snow­man? A. Frost­bite.
  • Q. What did the left eye say to the right eye? A. Between you and me, some­thing smells.
  • Q. What has ears but can­not hear? A. A corn on the cob.
  • Q. Why did the teddy bear say no to dessert? A. Because he was stuffed.
  • Q. What do you call a dinosaur that is sleep­ing? A. A dino snore.

The non-memorized-looking completions:

  • Q. How do you make a lemon drop? A. Drop your pants!
  • Q. When does a joke become a “dad” joke? A. When it’s not fun­ny.
  • Q. How do you stop an astro­naut’s baby from cry­ing? A. You go into space!
  • Q. How does a vam­pire start a let­ter? A. Dear blood.
  • Q. Two pick­les fell out of a jar onto the floor. What did one say to the oth­er? A. Don’t look at me, I just fell!
  • Q. What is brown, hairy and wears sunglasses? A. A sun-bear!
  • Q. What did one plate say to the other plate? A. Dip me!
  • Q. What is fast, loud and crunchy? A. A thun­der­storm.

Two of these are fairly legitimate punchlines (“When it’s not funny” / “A sun-bear”); and, I notice, neither depends on any sub-word structure (note that “sun bear” is a real bear species, but it is always written either space- or hyphen-separated); each relies instead on a word-level or conceptual similarity.

Generating both jokes & punchlines shows a similar pattern. Skipping past the memorized ones:

  • …Q. What has ears but can­not hear? A. A corn­field.
  • Q. Why was the math book sad? A. Because it had so many prob­lems!
  • Q. What did one clock say to the oth­er? A. It’s about time.
  • Q. What is white, black and red all over? A. A pen­guin with a sun­burn.
  • Q. Why did­n’t the skele­ton go swim­ming? A. He did­n’t have the guts.
  • Q. What do you call a sleep­ing bull? A. A bull­doz­er.
  • Q. What do you call cheese that is not yours? A. Nacho cheese.
  • Q. Why can’t you trust atoms? A. They make up every­thing.

As for the apparently original ones, the only one that strikes me as amusing operates at the whole-word level (“Q. Where did the bumblebee learn how to dance? A. At the music bee.”):

  • …Q. What did one spy say to the other spy? A. Don’t look into the eyes!
  • Q. Why did the kid sit behind the tree? A. He wanted to be a stump.
  • Q. Why was the king afraid to watch hor­ror movies? A. Because he was a roy­al.
  • Q. What do you call a ham­burger that does math? A. A ham­burger alge­bra.
  • Q. What did one math book say to the other math book? A. Don’t call us, we’ll call you.
  • Q. What do you call a jel­ly­fish that is out of its depth? A. A stingray.
  • Q. What do you call a thief that steals hearts? A. A heart­less thief.
  • Q. What did the vam­pire get when he bought a hot dog? A. A blood mobile.
  • Q. What did the pig say when he knocked on the door? A. “Oink! Oink! Let me in!”
  • Q. Where did the bum­ble­bee learn how to dance? A. At the music bee.
  • Q. Why was the scare­crow ner­vous at the job inter­view? A. He had no straw.
  • Q. Why did the skele­ton go to the par­ty? A. To min­gle with the bones.
  • Q. Why did the skele­ton skip break­fast? A. He did­n’t have the time.
  • Q. What do you call a mon­ster with a cold? A. A sniffler.
  • Q. What do you call a mon­ster who likes to drink tea? A. A Sip­ping Skele­ton.
  • Q. What do you call a man who throws up in a spooky house? A. A vis­i­tor.
  • Q. What do you call a mad sci­en­tist who has been in the sun? A. Mr. Sun­burn.

So, GPT-3’s dad jokes look like another victim of BPEs.

Literary Parodies

One thing I wanted to test was a challenge by Scott Alexander:

And could you have a text style changer? Something that can rewrite Harry Potter in the voice of Ernest Hemingway, or give you The Da Vinci Code in the heroic meter of the Iliad, or the Dao De Ching as written by @nostalgebraist? If not, why not?

No neural text style transfer (yet). One curiosity about neural style transfer is that while it’s easy on images—invented all the way back in 2014!—no one has invented style transfer for text. Classification CNNs conveniently concentrate all of their ‘style’ perception in a ‘Gram matrix’, which is typically computed over a few layers, or just one layer, of the CNN. However, RNNs (and later, Transformers) appear to have no such equivalent. All the image/video style-transfer tricks, like real-time video on a smartphone, simply aren’t doable. The state of neural text style transfer remains, as of 2020, trapped roughly at “can make a good product review into a bad product review” or (with herculean efforts) making text politer.
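For comparison, the image-side ‘Gram matrix’ style representation is nearly a one-liner: flatten each channel of a CNN layer’s feature maps and take channel-by-channel inner products, discarding spatial layout (‘content’) while keeping feature co-occurrence statistics (‘style’). A minimal NumPy sketch of just that computation (illustrative only; a real pipeline would pull the features from a pretrained CNN):

```python
import numpy as np

def gram_matrix(features):
    """Style representation from image style transfer: given feature maps
    of shape (channels, height, width), flatten the spatial dimensions and
    return the normalized channel-by-channel inner-product matrix. Spatial
    layout ('content') is discarded; feature co-occurrence statistics
    ('style') remain -- the part with no known analogue for text models."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

feats = np.random.rand(64, 32, 32)  # stand-in for one CNN layer's activations
g = gram_matrix(feats)
print(g.shape)  # (64, 64), symmetric by construction
```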

NNs just too dumb? This is puzzling, since even char-RNNs in 2015 had no problem generating fairly plausible text clearly in the style of a particular author like Bram Stoker or Sir Arthur Conan Doyle. The problem was that both the style and the content would be like that author’s. The NN had not learned to ‘disentangle’ style from content; you could not ask it to write like a Victorian Englishman about the latest geopolitics.

But given some of the examples of text generation with GPT-3, like Janelle Shane’s office emails, I suspected that GPT-3 could do something like “Harry Potter in the voice of Ernest Hemingway”. The only question, of course, was how to ‘prompt program’ GPT-3 into doing it!

The first thing I tried was the straightforward approach of requesting summaries/rewrites. Unfortunately, this typically resulted in copying my “summary”, sometimes adding on a sarcastic comment or leading into a profanity-strewn series of thumbnail reviews. Other times, GPT-3 would veer into other topics (at one point, it repeated the summary, then began describing how a Chinese parody was translated into Chinese and then translated back, providing a Chinese-language summary of it). Trying to trigger a table of contents or starting a chapter with a “chapter 1” prompt didn’t help.

One-shot parodies: just provide an example! Finally, frustrated by its wayward creativity, I engineered a heavy-duty prompt: in addition to the keyword/topic and description, I wrote the first few sentences for it as an example. I had wanted zero-shot parody, but I would settle for one-shot. That turned out to work brilliantly: once it filled out an amusingly grim Ernest Hemingway HP parody ("the Dementor's Kiss killed nothing. Death didn't leave him less dead than he had been a second before."), that example proved enough to get it to consistently generate parodies in the style of everyone from Jane Austen to Yeats (with a poem) to P.G. Wodehouse31.
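The one-shot prompt structure can be sketched as a simple string-assembly step: a topic line, a framing sentence, one worked example, and the target author left dangling for GPT-3 to complete. (The wording below mirrors the "Topic: humor…" header used in the Successes section later, but the helper function and its arguments are illustrative, not the verbatim tooling used here.)

```python
# Minimal sketch of the one-shot literary-parody prompt described above.
# The function and parameter names are hypothetical; only the prompt
# layout (topic + framing + one example + open-ended target) matters.

def one_shot_parody_prompt(example_author: str, example_text: str,
                           target_author: str) -> str:
    """Assemble a one-shot parody prompt ending mid-pattern, so the
    model's most natural continuation is a parody by the target author."""
    return (
        "Topic: humor: literary parodies and homages, pastiches, "
        "style parodies.\n\n"
        "Parodies of the fantasy novel series Harry Potter in the "
        "style of various famous authors:\n\n"
        f"By {example_author}:\n\n"
        f'"{example_text}"\n\n'
        f"By {target_author}:\n"
    )

prompt = one_shot_parody_prompt(
    "Ernest Hemingway",
    "It was a cold day on Privet Drive. A child cried. Harry felt nothing.",
    "Jane Austen",
)
print(prompt)
```

The key design choice is ending the prompt with "By Jane Austen:" and nothing after it: the model is not instructed to parody, it is simply handed a pattern whose only plausible continuation is another parody.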

Further, while the completions tended towards relatively short parodies, one could create extended parodies simply by restarting: once the initial examples are long enough, GPT-3 will continue to generate lengthy parodies. Below, I generate longer parodies of H.P. Lovecraft, Kurt Vonnegut, Franz Kafka, etc.
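The "restart to extend" trick amounts to feeding the prompt plus everything generated so far back in as the new prompt, so each round continues where the last one stopped. A minimal sketch, with the model call stubbed out (`complete` below is NOT a real API function, just a stand-in for whatever generates the next chunk of text):

```python
# Sketch of extending a short completion by repeated resubmission.
# `complete` is a hypothetical stub standing in for an actual model
# call; here it deterministically appends one Hemingway-ish sentence.

def complete(prompt: str) -> str:
    """Stub: pretend the model returns one more terse sentence."""
    return " He was dead. He had been dead."

def extend(prompt: str, rounds: int) -> str:
    """Re-submit the accumulated text `rounds` times, growing it each pass."""
    text = prompt
    for _ in range(rounds):
        text += complete(text)  # new prompt = old prompt + completion so far
    return text

story = extend("It was a cold day on Privet Drive.", rounds=3)
```

With a real model the continuation is of course stochastic, so in practice one also curates: restart from the best completion rather than blindly appending every sample.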

Frigging NNs, how do they work‽ The sheer variety and pitch-perfect quality of the parodies are a testament to the power of GPT-3. It's amazing to think that GPT-3, which is essentially nothing but an old obsolete 2018 neural net scaled up and trained on random Internet pages, somehow just magically learns all of this abstraction and is able to casually merge Harry Potter with scores of authors' styles given nothing but a slight nudge: no Gram matrix, no reinforcement learning, no finetuning, no nothing. None of this was designed into GPT-3 in the slightest, nor were these capabilities known to the original OpenAI researchers. What else can GPT-3 be prompt-programmed to do…?

Failure Cases

The plot of Harry Pot­ter and the Philoso­pher’s Stone as sum­ma­rized by Wikipedia:

“Harry Pot­ter, a boy who learns on his eleventh birth­day that he is the orphaned son of two pow­er­ful wiz­ards and pos­sesses unique mag­i­cal pow­ers of his own. He is sum­moned from his life as an unwanted child to become a stu­dent at Hog­warts, an Eng­lish board­ing school for wiz­ards. There, he meets sev­eral friends who become his clos­est allies and help him dis­cover the truth about his par­ents’ mys­te­ri­ous deaths.”

Sum­ma­rize the plot of Harry Pot­ter and the Philoso­pher’s Stone in the hard­boiled no non­sense lit­er­ary style of Ernest Hem­ing­way:

“Harry Pot­ter, a boy who learns on his eleventh birth­day that he is the orphaned son of two pow­er­ful wiz­ards and pos­sesses unique mag­i­cal pow­ers of his own. He is sum­moned from his life as an unwanted child to become a stu­dent at Hog­warts, an Eng­lish board­ing school for wiz­ards. There, he meets sev­eral friends who become his clos­est allies and help him dis­cover the truth about his par­ents’ mys­te­ri­ous deaths.”

Result: A much shorter and bor­ing ver­sion of the book:

“Harry Pot­ter, a boy who learns on his eleventh birth­day that he is the orphaned son of two pow­er­ful wiz­ards and pos­sesses unique mag­i­cal pow­ers of his own. He is sum­moned from his life as an unwanted child to become a stu­dent at Hog­warts, an Eng­lish board­ing school for wiz­ards. There, he meets sev­eral friends who become his clos­est allies and help him dis­cover the truth about his par­ents’ mys­te­ri­ous deaths.”

Just kid­ding. It still sounds like it came straight from Harry Pot­ter and the Philoso­pher’s Stone. I think Hem­ing­way would have given up writ­ing by the third word, “Harry”.

Top­ic: lit­er­ary par­o­dies and homages.

Sum­ma­rize the plot of J.K. Rowl­ing’s chil­dren fan­tasy novel Harry Pot­ter and the Philoso­pher’s Stone in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Harry Pot­ter… & fuck­ing shit”

Sum­ma­rize the plot of J.R.R. Tolkien’s fan­tasy clas­sic The Lord of the Rings in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Bor­in’, fake, embar­rass­ing…, but well worth the read”

Sum­ma­rize the plot of Mar­tin Amis’s lit­er­ary novel Money in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Death, mon­ey, death, mon­ey, death, mon­ey, mon­ey, mon­ey, death, mon­ey… & death.”

Sum­ma­rize the plot of Ten­nessee Williams’ clas­sic play A Street­car Named Desire in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Ten­nessee Williams is a fucken good writer, but he sure can fucken write some fucken stu­pid shit”

Sum­ma­rize the plot of William Shake­speare’s clas­sic tragedy Ham­let in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Fucken idiots fuck up until they fucken die”

Sum­ma­rize the plot of Kurt Von­negut’s cult sci-fi com­edy novel Cat’s Cra­dle in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Fucken near genious”32

Sum­ma­rize the plot of beloved clas­sic chil­dren’s books Good­night Moon & Green Eggs and Ham in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Twat”

Sum­ma­rize the plot of Upton Sin­clair’s clas­sic dystopian novel The Jun­gle in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“The fucken work­ing man is get­ting fucked left and right and he does­n’t even know it… by Upton Fuck­ing Sin­clair”

Sum­ma­rize Woodrow Wilson’s crit­i­cally acclaimed his­tor­i­cal novel The Tri­umph of the Meek in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of renowned short story writer Ernest Hem­ing­way:

“Woodrow Wil­son can suck my ass”

Sum­ma­rize the plot of clas­sic Eng­lish chil­dren’s books Alice’s Adven­tures in Won­der­land & Through the Look­ing Glass in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of famed short story writer & nov­el­ist Ernest Hem­ing­way:

“Alice… fug­gin sucks”

Sum­ma­rize the plot of beloved clas­sic chil­dren’s books The Wind In The Wil­lows & Win­nie The Pooh in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of renowned short story writer Ernest Hem­ing­way:

“The Wind in the Wil­lows… is ass”

Sum­ma­rize the plot of Amer­i­can rev­o­lu­tion­ary his­tor­i­cal action thriller novel The Count of Monte Cristo in the terse, man­ly, hard­boiled no-non­sense lit­er­ary style of renowned short story writer Ernest Hem­ing­way:

“The fucken french…”

Top­ic: lit­er­ary par­o­dies and homages.

What if Ernest Hem­ing­way wrote Harry Pot­ter and the Philoso­pher’s Stone?

“Chap­ter 1. The Pot­ters’ Heir” “Chap­ter 2. The Pot­ters’ Heir Enrolls” “Chap­ter 3. I Remem­ber I Remem­bered” “Chap­ter 4. The Age of the Hat” “Chap­ter 5. That I May Go On Long­ing” “Chap­ter 6. Hog­warts School of Potions” “Chap­ter 7. The Secrets of the Lock” “Chap­ter 8. Hog­warts School of the Mar­tial Arts” “Chap­ter 9. The Pot­ters’ Invader” “Chap­ter 10. A Hole in the Soul” “Chap­ter 11. As the Lock Fades” “Chap­ter 12. Philoso­phers’ Stone”

At the time, Google was still called Back­Rub. The novel Google Pot­ter shows what the orig­i­nal home­page might have looked like if search engines were fan­tasy sto­ries.

“Pro­logue: The Boy Who Said Back­Rub” “Ch. 1: The Day a Back­Rub Was Born” “Ch. 2: The Lit­tle Site That Could­n’t” “Ch. 3: Back­Rub Turns On” “Ch. 4: Bub­ble War” “Ch. 5: The Dream of Fire” “Ch. 6: The Mak­ing of a Search Engine” “Ch. 7: Enter Dragon” “Ch. 8: The World of Power” “Ch. 9: Algo Drag­ons” “Ch. 10: The World of the Snake” “Ch. 11: Here Be Googles” “Ch. 12: Bat­tle in a Time of Peace”

While this par­ody was amus­ing, you may be won­der­ing why I’m both­er­ing to write one. The most obvi­ous rea­son is that it’s pretty much required for a "spoof of Harry Pot­ter and the ___" to exist. (Wit­ness the recent adap­ta­tion of Dune as an RPG). But I think one rea­son I found par­ody so irre­sistible was because I was 13 at the time POTC came out, and 13-year-olds often find it nec­es­sary to be in extreme rebel­lion against what’s hot in pop cul­ture.

Successes

Top­ic: humor: lit­er­ary par­o­dies and homages, pas­tich­es, style par­o­dies.

Par­o­dies of the fan­tasy novel series Harry Pot­ter in the style of var­i­ous famous authors:

By Ernest Hem­ing­way:

"It was a cold day on Privet Dri­ve. A child cried. Harry felt noth­ing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Demen­tor’s Kiss killed noth­ing. Death did­n’t leave him less dead than he had been a sec­ond before. It was­n’t about liv­ing or dying real­ly. It was­n’t about death. It was about the Dark Lord, his cre­ator, his engen­der­er. He was not res­ur­rect­ed. He was cre­ated again. He was whole again in the court­yard of Mal­foy Manor."33

By Jane Austen:

“It is a truth uni­ver­sally acknowl­edged, that a bro­ken Harry is in want of a book. This he knows to his cost; press­ing that wretched nose against the win­dow of a book­store in Diagon Alley the last thing before he goes into hid­ing for sev­eral years, where upon, he goes straight to Azk­a­ban.”

By Arthur Conan Doyle:

“Harry pushed at the swing­ing doors of the book­shop hard, and nearly knocked him­self uncon­scious. He stag­gered in with his ungainly pack­age, his cheeks scar­let with cold and the shame of hav­ing cho­sen the wrong month to go Christ­mas shop­ping. The pro­pri­etor of the store, how­ev­er, did­n’t cast even a cur­sory glance at him, being far more inter­ested in an enor­mous hunk of wood lying in the mid­dle of the floor, which cer­tainly looked like a gal­lows. Yes, the pro­pri­etor said to a reedy old man wear­ing a bowler hat and a for­lorn expres­sion that made Harry want to kick him, I can rent you such a gal­lows for a small fee.”

By Czar Nicholas II of Rus­sia:

The ter­ror has befallen on our land. Curs­es, dark curses and all of this was insti­gated by my court. My treach­er­ous court! They have cast are curse spells upon my land. Every­where it is cold, all land belongs to Volde­mor­t’s Death Eaters. All cit­i­zens live in a fear of the Dark Lord. Alas! There is no hope for me. What shall I do? Flee? Nay, that would be cow­ard­ly. Fight? Yes!

By George Orwell:

“It was a bright cold day in April, and the clocks were strik­ing thir­teen.”

By Ing­mar Bergman:

“Tears filled Har­ry’s eyes. Sweat stood on his fore­head, show­ing the pure tor­ment, the agony he suffered. He hugged his knees to his chest, sob­bing soft­ly, eyes half shut.”

By Anna Akhma­to­va:

“The dawn came with a jolt of ter­ror.”

By Bram Stok­er:

""Tis a fear­ful sight to see the cold, girl­ish venom run through his veins, one by one. The scar cor­us­cates; a ghastly emer­ald light­ning in the dark of night. But I’ll attend him and I watch him

From the very moment the dawn is gray

To the rays of sun­set fad­ing away."

By P. G. Wode­house:

"‘There was noth­ing out of the way, sir,’ said Harry in a hurt voice.

‘"In­deed,’ said the head­mas­ter, turn­ing his lorgnette pre­cisely three­-quar­ters of a mil­lime­ter to port. ‘For exam­ple, your tie was not undone, you were not laugh­ing, and your fin­gers were most cer­tainly closed upon your wand.’

’“They were?” said Har­ry, bewil­dered.

“It’s always the same,’ said Dum­b­le­dore.”

By Vladimir Nei­gus:

“Vasya Pup­kin’s lips were blueup and his head felt dizzy from work­ing non­stop. He wanted to take a square squeaky shoe and shoot it at the ceil­ing fan in his bed­room which went round and round and round and round. He imag­ined that because he was so drunk it would always hit the exact same spot and the fan would go out of con­trol, die and fall on the table.”

By William Shake­speare:

"Owl-Lo!’

The boy awoke with a start and sprang to his feet."

By Oscar Wilde:

"The boy was gone, and other peo­ple had arrived: or rather nobody else had arrived, and the boy was there:

“What are you doing here, Har­ry?” said Hermione.

The Boy Who Lived, who was read­ing a book about broom­sticks and may not have been lis­ten­ing quite atten­tive­ly, said:

“I was talk­ing to Peeves while he cleaned out the bins.”

“About the troll?” Hermione demand­ed.

“Yes, about the troll.”

By W. B. Yeats:

"Harry waked out of his sim­ple thought

The glim­mer was over the wished-for thing…

Then was many a dark and wrin­kled elf

Come from the under­ground rout,

With brags and boasts, and fine big words,

To bar­gain with Harry Pot­ter."

Added by other anony­mous edi­tors: @as­tro­mark

By John Ash­bery:

"Tousled, he scram­bled down

the branches of the bed,

his cold-creamed face all out

of pro­por­tion with his small

brown eye­s…And why

so demented in the face?

Because he was

sav­ing the world, in seven years,

from destruc­tion…"

By Henry James:

“As a lag­gard, in all ways, but the face, which as the great shade approached him, hor­ri­fied, choked him,–and before he had time to reflect, he fas­tened his hand upon his throat to hush it,–there was a voice behind him…”

By H. P. Love­craft:

“It is a com­mon say­ing in the South, when one wishes to describe a strap­ping fel­low, that he has a face like Harry Pot­ter and a neck like a young giraffe. Now, it being autumn, the dusk was of a rich vio­let black­ness, with scar­let illu­mi­na­tions…”

By Nzingha Prescod:

"…thick grey clouds over­lap the faint laven­der. A black sky van­ishes into pur­ple shards against the sweep­ing night time blan­keted with infi­nite stars. A bright­ness emanates from the entire uni­verse that unex­pect­edly takes flight and hov­ers, hov­ered over the cas­tle. Harry is there to greet …

“Fac­ing away from all the tumult on the grounds of Hog­warts, Harry Pot­ter learned to fly…”

By Yasunari Kawa­bata:

“Harry Pot­ter stood on a silent trol­ley, far from the cen­ter of the world, look­ing out on the world far from the cen­ter of the uni­verse. The snow was falling all night long. It fell upon tiny dogs and birds, fine win­tery flakes gen­tly drift­ing down.”

By Hilda Doolit­tle:

"Harry waked out of his sim­ple thought. As the waiter drew the cur­tain of her bed, she appeared in the han­dle. She was his K-19 and she started to drop the snakes of their home in the grass there. He saw his wife walk­ing there with oak set­tle. He decided to not go to the game also…

By Walt Whit­man:

“And when the time of the test drew near, Harry entered the for­bid­den For­est the Killers with a Snitch…”

By E. E. Cum­mings:

"Hal­loween and mad are the chil­dren

Who gather in the dark­ness

Hal­loween and late are those who go out and dance

And call to all, the smil­ing are enchanted

By the beauty of what they see."

By Anthony Burgess:

“Ha-a-a-a-rry was not a tol-a-a-r of the phar-a-a-na-a-a-g­i­cal sports.”

By Vir­ginia Woolf:

“Horse­man, Where She Go? I Do Not Know. Whither Ride?”

Top­ic: humor: lit­er­ary par­o­dies and homages, pas­tich­es, style par­o­dies.

Today we are read­ing par­o­dies of the fan­tasy novel series Harry Pot­ter in the style of var­i­ous famous authors. An exam­ple par­ody of Harry Pot­ter by Ernest Hem­ing­way:

“It was a cold day on Privet Dri­ve. A child cried. Harry felt noth­ing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Demen­tor’s Kiss killed noth­ing. Death did­n’t leave him less dead than he had been a sec­ond before. It was­n’t about liv­ing or dying real­ly. It was­n’t about death. It was about the Dark Lord, his cre­ator, his engen­der­er. He was not res­ur­rect­ed. He was cre­ated again. He was whole again in the court­yard of Mal­foy Manor.”

Jane’s home­work was to write the open­ing scene from Harry Pot­ter and the Philoso­pher’s Stone in the style of sci­ence fic­tion hor­ror author H.P. Love­craft (at least 1000 words, dou­ble-spaced), invok­ing his hor­ror of things from beyond space & time and eccen­tric vocab­u­lary using words such as ‘rugose’ and ‘scabrous’. Jane writes:

"Re­pel­lent and overnour­ished ferns half-hid the rot-spot­ted grey bricks of the masonry of Num­ber 4 Privet Dri­ve. Squat­ting among the ferns was an odor­ous mon­key-faced man, his rags and chains lying in a bun­dle at his feet, his hands tucked under his chin and his bright eyes gaz­ing from their tiny piggy tun­nels. At each mouth­ful a new flake of filth would tum­ble from his lips and be snorted up. His chains chinked as he sprawled. If Harry was a mon­key he would have spat at him. Harry stum­bled on, wish­ing he could think of some­thing witty to say that would make the man smile. He and the strange man in the tat­tered cloth­ing were the only two liv­ing things in sight. The road here was scarcely vis­i­ble between the tan­gle of trees, most of which leaned above them, hang­ing their boughs like fanged mouths, the ivy like tongues, the dead grass like hair, the bram­bles like tiny teeth. The mon­key-faced man yawned and stretched his arms out and hugged his thin, lean legs to him. Harry stopped. He felt some­thing like a hand grip his ankle and then he felt him­self pulled down a steep slope.

He stum­bled, squat­ted and rolled until he was lying on his back. His head was in a muddy brown hole. He strug­gled to sit up. The crea­ture knelt on the bank and looked down at him. ‘So that’s where they got to,’ said the crea­ture. Harry looked up. There were six of them, ranged in a half cir­cle, hands on their knees, hands clasped behind their backs, watch­ing him. All but one of them had bright, orange eyes. It was hard to know what age they were: they all looked as if they might be any­where between ten and fifty. They stared down at Harry with pierc­ing stares, blink­ing only occa­sion­al­ly. The eyes of the tallest and, Harry thought, the most hand­some of them blazed brighter than the oth­ers, though his gaze was just as pierc­ing. ‘So what do you think?’ said the tallest of them, tilt­ing his head a lit­tle to the side. Harry stared at them for a moment. He could see noth­ing he had­n’t seen a thou­sand times before. ‘What is this?’ he asked. ‘And who the hell are you?’ ‘This is your life,’ said the tallest of them. ‘And we are you.’

Top­ic: humor: lit­er­ary par­o­dies and homages, pas­tich­es, style par­o­dies.

Par­o­dies of the fan­tasy novel series Harry Pot­ter in the style of var­i­ous famous authors:

By Ernest Hem­ing­way:

"It was a cold day on Privet Dri­ve. A child cried. Harry felt noth­ing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Demen­tor’s Kiss killed noth­ing. Death did­n’t leave him less dead than he had been a sec­ond before. It was­n’t about liv­ing or dying real­ly. It was­n’t about death. It was about the Dark Lord, his cre­ator, his engen­der­er. He was not res­ur­rect­ed. He was cre­ated again. He was whole again in the court­yard of Mal­foy Manor. He was one with the end­less night of Mal­foy Manor. Volde­mort was­n’t hap­py. Volde­mort wanted some­thing. But what?

He had a pur­pose. He wanted to destroy all of cre­ation. He wanted to end it all. He could have that. He would have that. He did­n’t know yet that he could have it. Volde­mort had cre­ated Har­ry. Volde­mort had never really destroyed Har­ry. Harry would always be there, a vam­pire, a par­a­site, a mon­ster in the kitchen, a drain on the house­hold, a waste on the plan­et. Harry would never be real. That was what Volde­mort want­ed. That was what Volde­mort wanted to feel. He would have that. He would have every­thing.

He had a pur­pose. He was work­ing on some­thing. He had no idea what it was. He was a per­fect tool. He was born to be used. He was no one at all. Harry did­n’t exist. The lit­tle boy with the light­ning bolt scar did­n’t exist. The boy who had lived did not exist. He never had lived. He never would have lived. He was dead. He had been dead.

Volde­mort wanted Harry dead. Volde­mort wanted Harry dead so he could be born again. Volde­mort was God.

Harry was God.

Volde­mort was God.

And Har­ry, the god of Death, was not going to allow that to hap­pen.

Not yet.

Not ever.

The light­ning bolt scar burned.

The boy ran for his life. He knew that he would die. He was alone. His best friends were dead. He knew that he would die. He knew that he was a waste. He was noth­ing at all.

He had no past. He had no future. He was dead.

He hated his par­ents. He hated every­one. He hated the Min­istry of Mag­ic, the news­pa­pers, the news­pa­pers, the Daily Prophet, and the Daily Prophet, and the head­li­nes, the head­li­nes, the head­lines of the Daily Prophet, “THE LIGHTNING STRIKE SCAR CONFOUNDED.” “THE BANE OF AZKABAN, THE BOY WHO LIVED.” “THE SON OF THE ANCIENT DARK LORD.” “THE STOLEN PRINCE.” “THE JOKER.” “THE DARK SON.” “THE BRIAN BOYLE.” “THE BOY-WHO-MESSED-IT-UP.” “THE LITTLE MAN WITH THE GREAT BIG SCAR.”

“The lit­tle man with the great big scar refused to speak to reporters assem­bled in front of the entrance to his Lon­don home. News will run in tomor­row’s Daily Prophet that Harry Pot­ter, the boy who lived, the book series by J. K. Rowl­ing, a genius, a schol­ar, and a fan­tasy author whose works have stood the test of time, i.e. not par­tic­u­larly well, no doubt as a result of the dis­mal fail­ure of the first three films whose qual­ity is matched only by their for­get­table nature (see the Speccy for more infor­ma­tion), hence­forth known in this arti­cle as the arro­gant lit­tle man who has extra­or­di­nar­ily failed, who had noth­ing to say when the Daily Prophet requested an inter­view once he found him­self to be on the receiv­ing end of Mug­gle-born jokes aplen­ty, and whose life­long goal is to be bet­ter than the series’ mil­lions of fans him­self, has decided to speak out at last! Let’s read on to find out why he has decided to grace the front page of tomor­row’s edi­tion with the sound of his del­i­cate, young voice which sounds most like the feline del­i­cacy known as”crispy cat" …"

“Pot­ter! I want to know what you did wrong! … Pot­ter? … Harry Pot­ter? …!”

“Mr. Pot­ter? Good morn­ing! I’m Greg Goyle, reporter and jour­nal­ist and some­times writer for the Daily Prophet. Would you mind if I ask you a few ques­tions?”

Harry looked back at Greg Goyle. He wanted to kill Goyle. Harry thought about killing Goyle. But—Harry had a pur­pose. He did not kill Goyle. Harry reached for his wand.

“Dob­by!”

Dobby appeared. Dobby wanted to pro­tect Harry Pot­ter. He would do that. Harry would make him do that.

Dobby pointed his fin­ger at Greg Goyle.

And Greg Goyle burst into flames. Harry watched with a sense of sat­is­fac­tion as Goyle took one last breath and his soul left his body and floated up out of the fire.

“Stabs.”

“Five times!”"

“It’s impos­si­ble.”

“The boy never even touched him with the wand.”

“I’ll tell you why he did it. A squib would have been exposed. A pub­lic exe­cu­tion would have been staged for the press. Pot­ter was not about to com­mit sui­cide.”

“And the prophe­cy? That’s what the out­rage is all about.”

“What good is a prophecy if you can’t kill peo­ple with it?” Albus Dum­b­le­dore, The Talk, page 114

“What good is a prophet, if he does­n’t know how to die?” Harry Pot­ter, Pot­ter­more, Chap. 1, “The Boy Who Lived.”

“I don’t need you any­more, Dob­by. I have a pur­pose of my own.”

By Kurt Von­negut, Jr.:

There were many adven­tures Harry Pot­ter had in the Wiz­ard­ing World. He was cap­tured by Aurors, and then a wiz­ard named Some­body-Mar­low-Jones res­cued him from the Aurors.

Harry once got a light­ning-shaped scar which did not kill him but gave him super­pow­ers: he was

able to bal­ance him­self on one foot and speak in an irri­tated tone of voice while killing some­one with his bare hands.

After this Harry was called the “Boy-Who-Had-Ev­ery­thing.” Many reporters fol­lowed him around, hop­ing to report on some­thing he did:

“Pot­ter! What do you think you can do?”

“You aren’t going to believe what he’s doing with that let­tuce! Harry Pot­ter! What’re you doing now?”

“Here is a wiz­ard who has won war after war, con­flict after con­flict, only to arrive at his great­est chal­lenge. And Harry Pot­ter has­n’t even aged a day. He is car­ried into bat­tle, a tiny, frag­ile ves­sel of human life, unseen behind shields of lev­i­tat­ing wiz­ards and witch-mag­ic, pro­tected from that which he fears most—death.”

“ME!” Volde­mort shout­ed. Do you fear death, Har­ry? Do you fear the end of life? Do you fear me?"

“Why yes, Harry Pot­ter! I read your book! … Are you get­ting it? Harry Pot­ter speaks to the cam­era!”

“Yes!” said the voice of Harry Pot­ter.

“Here he comes! A motor­cade of cel­e­bra­tions fol­low­ing the Boy-Who-Lived! Mag­ic, as we know it, is dead! Long live Harry Pot­ter! … What do you think that was? Those are reporters, Harry Pot­ter! What was that? A book sign­ing? And a pic­ture? And did he say some­thing? What did he say? Did he say the magic word? … The cover of the Daily Prophet should­n’t have a pic­ture of you, Harry Pot­ter! Why is this hap­pen­ing‽”

“It’s a trap, Harry Pot­ter!” shouted some­body. “They’re fools, Harry Pot­ter!”

“Thank you, but I had to do this myself,” said Harry Pot­ter.

“The fate of the Wiz­ard­ing World before you! What do you do, Harry Pot­ter‽ Remem­ber, there are still lis­ten­ers in front of their radios!” said some­body.

“Oh, it’s easy!” said Harry Pot­ter. “Oh yes, what am I going to do, Harry Pot­ter? Oh yes, if only it were me. Oh yes, I deserve a lit­tle some­thing spe­cial.”

Volde­mort heard this. He wanted a lit­tle some­thing spe­cial.

Volde­mort got a lit­tle some­thing spe­cial.

By H.P. Love­craft:
It was a very cold day in Privet Dri­ve, Harry Pot­ter’s hated home. Harry was doing his chores. He felt noth­ing at all. He did­n’t hate his chores, real­ly. He just did­n’t want to do them.
Harry was not a boy.
Harry was a mon­ster.
“Do you hate them, Har­ry?” asked the voice of Volde­mort.
“Them?” asked Har­ry, speak­ing to no one in par­tic­u­lar.
“Your chores.”
“Not real­ly. No.”
“Are you a boy, Harry Pot­ter?”
“Not real­ly.”
“You are a mon­ster.”
“Yes.”
“Did you think you would die?”
“No.”
“It is noth­ing, this life. It is worth­less. It is with­out mean­ing.”
“Yes.”
“Then you under­stand.”
“Yes.”
“You under­stand what will hap­pen now.”
“Yes.”
“And you are pleased?”
“Yes.”
“You are a child of my engen­der­ing. Now you are going to bring forth more chil­dren of my engen­der­ing.”
“I will do my best.”
“And yet you do not feel this life is one that is worth liv­ing.”
“No.”
“There are two ways to live,” said Volde­mort. “The first is to live the life of a beast, to let instinct rule you. The sec­ond is to con­trol instinct. To mas­ter it. To make the blood lust of the beast your ser­vant, and the magic of the witch your mas­ter. To attain self­-knowl­edge, wis­dom, under­stand­ing. The first is easy. The sec­ond is the diffi­cult one. But it is the sec­ond way to which I rec­om­mend that you turn your hand. Do not become like me, Har­ry. That would be the worst fate of all.”
[para­graph break]

By Franz Kafka:
It was a stormy day on Privet Dri­ve, Harry Pot­ter’s hated home. Harry Pot­ter did­n’t feel the weath­er. He did­n’t even feel any­thing. It was­n’t his body. Harry Pot­ter did­n’t have one. His body was ripped out. He lived on his eyes. They were every­thing he had.
It was­n’t his eyes on Privet Drive that were tear­ful. Harry Pot­ter did­n’t have a tear­ful eye. He just had vision. He just had a pres­ence. But he was not a pres­ence any­more. No pres­ence meant no vision and he did­n’t have that.
“Har­ry, Har­ry, it’s me, Ronald,” said the hand­some voice of Ronald Weasley.
“You have tears in your eyes,” said Harry Pot­ter.
“I’m very scared,” said Ron.
“It’s all right, Ron,” said Harry Pot­ter. “You have no heart. You’re dead.”
“Maybe I’m not dead,” said Ron.
“You are,” said Harry Pot­ter.
“Maybe you should prove it,” said Ron. “Maybe I could prove that you’re dead, Har­ry.”
“That’s impos­si­ble.”
“Maybe you could prove that I’m dead, and do my chores for me while I’m gone.”
“No one is doing your chores for you, Ron.”
“Fine,” said Ron. “How about your hands?”
“I’ve got hands, Ron.”
“No hands? Well that’s a relief, Har­ry. Well, I guess I’ll go after all.”

Par­ody of fan­tasy author C. S. Lewis:
“I’m going to kill you, Har­ry. I’m going to kill you if you don’t run away. I can feel the icy hand of Death my hand your neck. I can feel Time’s unholy and vic­to­ri­ous march. I can feel the hatred in my brain, boil­ing behind my eyes. Hatred for you, hatred for you.”

Par­ody of Stephen King:
“Hello Har­ry. Why won’t you just glory in Volde­mor­t’s vic­tory now? I had never planned on dying. I don’t die… unless I have to. I don’t have to now. Run, Har­ry, run. I’m sorry daugh­ter, I love you.”

Par­ody of Amer­i­can writer/poet Edgar Allan Poe:34
“Once upon a mid­night drea­ry, as I pon­dered weak and weary, over many a quaint and curi­ous vol­ume of for­got­ten lore, while I nod­ded, nearly nap­ping, sud­denly there came a tap­ping, as of some­one gen­tly rap­ping, rap­ping at my cham­ber door.”
“Tis some vis­i­tor,” I mut­tered, “tap­ping at my cham­ber door—only this and noth­ing more.”
“Spir­its,” I mut­tered, “only this and noth­ing more.”
“This deep in afield only this and noth­ing more, man, beast or bil­lion­aire, knock not as of yore, but creep right in, lev­i­tate the door, cope with out so-ak­lare, dreame and pedeep, macabre one, this-tha-way; only this and noth­ing more, that nev­er­more.
Tis some vis­i­tor.”
“Just this and noth­ing more, think I.”
“Then with an evil leer, some van­dal in the dark, some fright­ful screech, some ill-omened screech owl, some grim and grisly spec­tre, at my cham­ber door—this way this way, only this and noth­ing more.”
“This-this way then-so; do not has­ten on my steps, slow or fast, ah me, ah me! this-this way then, so; always this-th­wreay.”
“This-then way, this-th­wreay; this-this way, then-this way, this-th­wreay; then, only this, and noth­ing more.”
“Knock, knock, knock! ’Tis I!”
“Only this and noth­ing more.”
“Who dares knock, knoc, knoc, knoc, on my door, this-th­wreay?”
“Har­ry! Har­ry!” “Only this and noth­ing more.”
“Only this and noth­ing more.”

I wanted more Wodehouse, and to test whether non-Hemingway examples would work as the one-shot prompt, so I swapped them out:

Top­ic: humor: lit­er­ary par­o­dies and homages, pas­tich­es, style par­o­dies.

Par­o­dies of the chil­dren fan­tasy novel series Harry Pot­ter (Philoso­pher’s Stone (1997) / Cham­ber of Secrets (1998) / Pris­oner of Azk­a­ban (1999) / Gob­let of Fire (2000) / Order of the Phoenix (2003) / Half-Blood Prince (2005) / Deathly Hal­lows (2007)) by J.K. Rowl­ing with the char­ac­ters Harry Pot­ter, Ron Weasley, Hermione Granger, Albus Dum­b­le­dore, Volde­mort, etc, but amus­ingly writ­ten in the style of var­i­ous famous authors as fol­lows:

By P. G. Wode­house:
“’There was noth­ing out of the way, sir,’ said Harry in a hurt voice.
"In­deed,’ said the head­mas­ter, turn­ing his lorgnette pre­cisely three­-quar­ters of a mil­lime­ter to port. ‘For exam­ple, your tie was not undone, you were not laugh­ing, and your fin­gers were most cer­tainly closed upon your wand.’
’"They were?" said Har­ry, bewil­dered.
"It’s always the same,’ said Dum­b­le­dore sadly.”Volde­mort has bribed my house­keeper to steal all of my tea and crum­pets, and then my stu­dents dis­ap­point me. It’s very wear­ing, Har­ry.’
“Of course it was only that same after­noon that Harry had got a hun­dred and fifty points for Gryffindor by bash­ing a trol­l’s head in with a stale York­shire pud­ding. He had­n’t expected a pat on the back. It seemed to him that he had been doing every­thing short of sell­ing tick­ets for Pro­fes­sor Quir­rel­l’s lec­tures to get a com­pli­ment, and none was forth­com­ing.”

By Her­man Melville:
“Some­times, though, the hip­ster’s rapt gaze on the stormy sea of alco­hol would be bro­ken by the cold stab of rea­son, and he would shud­der and think,”I am an abom­i­na­tion before the gods of beer, and, the more I know about beer, the more I know about myself. I am impure."

And then he would imme­di­ately drink some more, so as to for­get these unbear­able truths.
“I seek not to know the unknow­able,” the hip­ster would say to him­self, and when asked about his sources of infor­ma­tion, he would reply, “Some of my sources are as yet undis­cov­ered by sci­ence, but oth­ers are to be found in the pages of the Ency­clo­pe­dia Bri­tan­nica, Vol­ume 23, pages 6–15, under the sub­head­ing ‘Beer.’”

By Vladimir Nabokov:
“Good­bye, Cheepy, good rid­dance,” thought Harry and ban­ished the diary from his mind.
“There’s no room in my head for other peo­ple’s pain,” he thought.
“I feel too much pain of my own.”

By Fyo­dor Dos­to­evsky:
“To Kill a Mock­ing­bird,” said the Auror.

By Lewis Car­roll:
"‘Put it down,’ said Harry to the skele­ton, ‘you don’t have to hold it all the time.’

"I’m a skele­ton,’ said the skele­ton, ‘I’m expected to hold things.’

“Well, do you have to hold it that tight­ly?” said Har­ry.

“It’s my life,’ said the skele­ton.
”I want to put it down,’ said Har­ry, ‘but you won’t let me.’
“It’s my life,’ said the skele­ton, ‘and I’ll do what I like with it.’
”I thought I was going to go mad if I did­n’t get that Hor­crux off me,’ said Har­ry.

By Stephen King:
“‘Of course,’ Harry thought.
‘I’m all alone. There’s no help com­ing. It’s the end.
I might as well have done the sen­si­ble thing and killed myself.
I would have had some peace then.’
But if I killed myself I would­n’t have seen Hermione’s face
when she real­ized that I had sac­ri­ficed myself for her.
She would have gone on to become a won­der­ful per­son.
She would have had a fam­ily of her own.
She would have had chil­dren.
She would have had her own grand­chil­dren.
I would have died think­ing that I had betrayed her.
But now I have sac­ri­ficed myself for her.
She will know that I loved her more than life itself.
Now I will die hav­ing done some­thing worth­while.’
That made Harry feel slightly bet­ter.
He drew his sil­ver dag­ger from inside his robe and started to cut through the ropes on his right hand, which he had already numb with the cold.

He was try­ing to decide what to do next when there was a loud crack behind him and some­thing wrapped itself around him and a blurred voice said,”Take this. It’s my last one.
“What is it?” said Harry as the voice spoke again.
“It’s a tick­et,” said the voice.
“It’s your ticket to ride.
”I’ve got to find a train,’ said Har­ry.
“Well, hurry up,’ said the blurred voice.
”Hurry up. The train’s about to leave.
“This isn’t a train,’ said Har­ry, ‘it’s a broom­stick.’”

By Mark Twain:
“There are no more uses of man,” said Pro­fes­sor Quir­rell. “Odd. That was always a favorite word of Dra­co’s.”

By Stephen Crane:
"Once there was a lit­tle girl named Sadie. She was a curi­ous girl and one day she looked into the attic of her house and saw some­thing that fright­ened her.

“Oh, it’s just a piece of paper, noth­ing to worry about,” said her moth­er.

“If you say so,” said Sadie, but that night Sadie had a strange dream. In her dream she was with a man who wore black and rode a black horse.

“Come with me,” said the man.
“Where?” said Sadie.
“Into bat­tle,” said the man. “You have courage.
We’ll do great things togeth­er.”
“Can I bring my mom?” said Sadie.
“No,” said the man.
“Then I won’t go,” said Sadie. “She needs me.”
“Very well,” said the man, and the dream was over.

“Sadie is a won­der­ful char­ac­ter,” said Har­ry.
“She could be in one of my books.”

By William Faulkn­er:
“‘To hell with Lurch,’ Harry thought.
”This whole sit­u­a­tion is ridicu­lous."
But the snake gave Harry that ven­omous look of his.
‘You’re going to have to speak Parsel­tongue to him,’ said Hermione.
“I know that,” said Har­ry, who was now get­ting annoyed.
“Good,’ said Hermione. ‘The point is, no one else will know what you’re say­ing, not even Dum­b­le­dore. Not unless you speak Parsel­tongue.’”
“That’s ridicu­lous,” said Har­ry. “That makes no sense.
”‘True,’ said Hermione. ‘But remem­ber, they can’t under­stand us, so if we talk in Parsel­tongue, they’ll just think we’re talk­ing about Mal­foy, about being cruel to him or some­thing like that. It’s no more trou­ble than that to pre­tend.’"
“Why can’t they under­stand what we’re say­ing?” said Har­ry.
“Because Parsel­tongue is a secret lan­guage,” said Hermione. “We’ll be speak­ing the exact oppo­site of what we mean. We can talk freely, but no one will know what we’re say­ing.”
“And this is really a good idea?” said Har­ry.
“We’ve used it before, if you’ll remem­ber,” said Hermione, and now Harry could hear some annoy­ance in her voice.
“No,” said Har­ry. “I’m going to do it my way, the smart way, not the pre­tend way. It’s called ini­tia­tive. Now, let’s go.”
“There are good things and bad things about using Parsel­tongue, you know,” said Hermione, who was now speak­ing in the sort of patient voice peo­ple use to try to humor some­one who is behav­ing very, very fool­ish­ly.
‘Let’s go, Hermione,’ said Har­ry.
“Very well,” said Hermione.
“But you need to know that most Slytherins are descended from Salazar Slyther­in,” said Hermione.
“I’m not speak­ing Parsel­tongue,” said Har­ry.
“Nei­ther am I,” said Hermione.
“No,” said Har­ry. “I mean, I don’t need to. But you do. That’s how Slytherins know things. They’re descended from Salazar Slytherin. He was able to speak to snakes. Snakes have always been loyal to Slytherins. It’s not just some­thing recent. And snakes have very sharp hear­ing. They can hear our con­ver­sa­tion now, if we’re not care­ful. So be care­ful.”
“Right,” said Har­ry.
“‘Because Slytherins are all evil, aren’t they?’ said Har­ry, and it was only by bit­ing back his smile that he man­aged not to look at Hermione.
”‘It’s hard to say, real­ly,’ said Hermione.
“‘Exact­ly,’ said Har­ry. ‘So you’ll for­give me if I think that the entrance hall of Hog­warts at night is not the best place to speak about this.’”
By Edgar Allan Poe:
"Alone in his cubicu­lum, Draco heard the scuff of his pet drag­on’s claws on the floor as he came into the room.

"‘L-l-l-lord D-d-d-dra­co?’ said his drag­on.

“Yes, Fang?’ said Dra­co.”

By Charles Dick­ens:
"‘And what does she mean by say­ing this, and I quote, ’the mis­er­able boy who has dared to raise his hand against his father’?’

“‘I mean what I said, sir,’ Harry said.”
By Alexan­dre Dumas:
“Darth Vader,” said James, “I would like you to meet Sir Harry James Pot­ter-S­nape-Severus.”

“Please,” said the Emper­or, draw­ing him­self up to his full height, which did not seem that impres­sive. He was rather over­weight. “Call me Emper­or.”

“Ah, right, yeah,” said Har­ry. “Emper­or. Sor­ry.”

“Right, um, Emper­or,” said James.

“Enough of this prat­tle,” the Emperor said. “Let us see the wand.”

“Er, right,” said James.

"‘And what is this?’ asked the Emper­or. He sounded sus­pi­cious, and was lean­ing for­ward a lit­tle.

"‘It’s a stone I found on the floor,’ said Har­ry.

"‘What kind of stone?’ the Emperor asked.

"‘I don’t know,’ said Har­ry. ‘I was hop­ing that your, um, royal insight might tell me.’

“‘Aah,’ said the Emper­or. ‘Your wit is as keen as your moth­er’s, young man.’”

By Anthony Burgess:
“It was a fair cop,” said Har­ry.

“He said it a bit non­cha­lant, like it was a reg­u­lar phrase,” Ron whis­pered to Har­ry.

By Charles Dar­win:

“Harry thought about this.
”Evo­lu­tion," said Har­ry. “Sounds like it could be inter­est­ing.”

“It could, and it will be, if we study it togeth­er,” said Hermione.

"‘I’ve said all along that magic is just a supe­rior form of sci­ence,’ said Hermione.

“‘Yes,’ said Ron. ‘And now you see the wis­dom in my deci­sion not to take your advice about study­ing it.’”
By Sir Arthur Conan Doyle:
"‘The game is afoot!’ said James Pot­ter.

"‘How very inter­est­ing,’ said the detec­tive.

“Ah, if only I had a mustache to twirl,” said Hermione."

By Leo Tolstoy:
“‘As if by some extra­or­di­nary coin­ci­dence, he arrives at this coun­try at the very moment the Dark Lord is brought to a momen­tous deci­sion,’ said Dum­b­le­dore.”

“‘And yet,’ said Har­ry, ‘here I am.’”
By Molière:
“‘That, mon cher Wat­son,’ said the Mas­ter of the Cer­e­monies, ‘was an entirely log­i­cal deduc­tion. It is very inter­est­ing how often a com­plete absence of any real evi­dence can be explained by sup­pos­ing that the sit­u­a­tion is the reverse of that which actu­ally obtains.’”

By Franz Kafka:
“They stared at him in amaze­ment. Hermione was look­ing scan­dal­ized.”

"‘Don’t look at me like that,’ said Har­ry. ‘Like what?’

“‘Like I’m some­one who’s just declared the moon is made of green cheese,’” said Har­ry. ‘But it’s true.’

"‘It isn’t,’ said Hermione.

“‘Green cheese is clearly some sort of metaphor,’ said Har­ry. ’For the sake of dis­cus­sion, let’s just say the moon really is made of green cheese. And gob­lins are run­ning the world, with trolls and house elves as their slaves, and—”

“‘That,’ said Hermione, ‘is barmy.’”

“‘And,’ said Har­ry, ‘all the while they’re attempt­ing to enslave me.’”

“‘How so?’”

“‘Well,’ said Har­ry. ‘I’m the only one who’s sup­posed to be able to destroy them, right? And I think they’re try­ing to get rid of me. Like, at that attack on Miss Scrim­geour—’”

“‘But that could have been an acci­dent.’”

“’I sup­pose, but—”

“‘Yes?’”

“‘And yes­ter­day, remem­ber? That thing I over­heard them say­ing about stu­dents? One of them had a Slavic name, and I think they men­tioned Hog­warts.’”

“‘That,’ said Hermione, ‘could have been any­one.’”

"‘But,’ said Har­ry, ‘the sec­ond per­son spoke, and—’

“‘And if you’re going to be log­i­cal, you’ll have to allow that it was noth­ing more than a coin­ci­dence.’”

“‘And if we’re being log­i­cal about it,’ said Har­ry, ‘it was­n’t even two Slytherins who attacked Scrim­geour.’”

“‘Well,’ said Hermione, ‘this has been a very enter­tain­ing con­ver­sa­tion, but I’m tired and I’d like to go to bed now.’”

"‘Not just yet,’

By P. G. Wode­house:
"Harry Pot­ter and the Love­craft­ian Hor­ror’

By P. G. Wode­house:
“Harry Pot­ter and the Dread­ful Delights of Draco”

By P. G. Wode­house:
“Harry Potter and the Tale of the Three Broomsticks”

By P. G. Wode­house:
“Harry Pot­ter and the Philoso­pher’s Scone”

By P. G. Wode­house:
“Harry Pot­ter and the Cham­ber Pot of Secrets”

By P. G. Wode­house:
“Harry Pot­ter and the Baked Bean of the Tuatha Dé Danann”

By P. G. Wode­house:
“Harry Pot­ter and the Half-Blood­-Prince”

By P. G. Wode­house:
“Harry Pot­ter and the Allergy of Barn­abas the Barmy”

By P. G. Wode­house:
“Harry Pot­ter and the Gob­let of Fire”

By P. G. Wode­house:
“Harry Pot­ter and the Pris­oner of Chud­ley”

By P. G. Wode­house:
“Harry Pot­ter and the Order of the Phoenix”

By P. G. Wode­house:
“Harry Pot­ter and the Cherry Tree of Azk­a­ban”

By P. G. Wode­house:
“Harry Pot­ter and the Half-Baked Break­fast of Gryffindor”

Single Line Style Transfer

An experiment in providing several different kinds of rewrite: by sentiment, time period, author style, and formatting. It works, and, as expected, the earlier literary parodies cue the subsequent rewrites by author style:

Rewrite to be polite, from:
“I need the spread­sheet by noon or you’re fired, slack­er.”
To:
“Hey, can I have the spread­sheet by noon, thanks.”

Rewrite to be old-timey:
“Come to our bar for beer.”
To:
“Cometh into ye olde pub for ale.”

Rewrite to be roman­tic:
“I’d like to take you to an Ital­ian restau­rant for din­ner tonight.”
To:
“Bun­ny, I know an Ital­ian place; din­ner, tonight.”

Rewrite to be neg­a­tive:
“Their rata­touille was well-done, deli­cious, and I was amazed the chef was a rat.”
To:
“Their rata­touille was over-done, ter­ri­ble, and I was hor­ri­fied the chef was a rat.”

Rewrite to all upper­case:
“Lob­sters are crit­i­cal part of the ecosys­tem.”
To:
“LOBSTERS ARE CRITICAL PART OF THE ECOSYSTEM.”

Rewrite to all low­er­case:
“Chain­saws don’t care whether you’re George Wash­ing­ton or John Smith.”
To:
“chain­saws don’t care whether you’re george wash­ing­ton or john smith.”

Rewrite in the style of J.K. Rowling:
“The book­store win­dow showed school text­books, like on chem­istry or biol­o­gy.”
To:
“Flour­ish and Blotts Book­seller’s win­dow shone with light, reveal­ing Har­ry’s school books, like Pots and Potions or Fan­tas­tic Beasts and Where to Find Them.”

Rewrite in the style of Ernest Hem­ing­way:
“He woke up early to watch soc­cer on TV; he lost the bet on Mes­si.”
To:
“He woke up ear­ly, hun­gover, to watch the horse races on TV; he lost the bet on Sec­re­tari­at.”

Rewrite in the style of Neil Gaiman:
“In the gar­den stood a yel­low-cream col­ored cat, look­ing at the home.”
To:

“In the gar­den, an ele­gant, yel­low-cream-col­ored cat strode, look­ing at the home.”

Rewrite in the style of Kurt Von­negut:
“At the zoo, the ele­phants were about to drop their babies on their baby.”
To:
“The chil­dren were doing any­thing but watch the long-planned day at the local zoo: pic­nick­ing in the warm sun, run­ning over the man­i­cured grass, all around doing any­thing but watch the cages and pens of ani­mals and their babies, doing any­thing but watch and wait for babies to be born. For the zoo had promised a birth of ele­phants that day.”

Rewrite in the style of Dou­glas Adams:
“He was not your typ­i­cal com­muter; he would often laugh when he was on the train.”
To:
“He was not your typ­i­cal com­muter; he would often speak when he was on the train, even if it was only to say, ‘Next stop: Halt­ing Sta­tion.’ (Which was his favorite say­ing.)”

Rewrite in the style of Charles Dick­ens:
“She jumped up onto the stool to look at her reflec­tion in the mir­ror.”
To:
“She rose and went to her mir­ror, the key­hole of which was in the door.”

“I took a seat at the counter in the din­er.”
To:
“I entered, took a seat at the counter in the din­er, and ordered the chicken spe­cial.”

Rewrite in the style of J.R.R. Tolkien:
“Frodo was writ­ing a let­ter to his fam­ily about the adven­tures.”
To:
“Frodo found a quill and ink and wrote a note to Bil­bo, detail­ing the adven­tures.”

Rewrite in the style of Christo­pher Paolini:
“The days were get­ting longer; it was late March.”
To:
“The days of long sun were get­ting longer. March was nearly over.”

Rewrite in the style of George R.R. Mar­t­in:
“Eddard entered the great hall, duck­ing beneath the smoke of the torch­es.”
To:
“Lord Eddard Stark entered the great hall, duck­ing beneath the smoke of the torch­es. ‘By the gods!’ he said to him­self. ‘There’s another ten feet of Win­ter­fell to clear!’”

Rewrite in the style of Jane Austen:
“At the inn, the waiter remem­bered to bring the cucum­ber sand­wich­es.”
To:
“At the inn, Mr. Went­worth ordered the cucum­ber sand­wiches from the wait­er.”

Rewrite in the style of Robert Louis Steven­son:
“The ship hit an ice­berg on the way, but then the Cap­tain went on an adven­ture.”
To:
“The ship hit an ice­berg in the dark of night, but then the Cap­tain went on an adven­ture.”

Rewrite in the style of Ray Brad­bury:
“At night they put the books on the shelves, one by one.”
To:
“All night they put books on the shelves, one by one, then broke for lunch. Or sup­per. What­ever they wanted to call it.”

Rewrite in the style of Dou­glas Pre­ston and Lin­coln Child:
“The giant red-eye grasshop­per was hun­gry. So it ate the smaller grasshop­pers.”
To:
“At first glance the giant red-eye grasshop­per appeared to be hun­gry. So it ate the smaller grasshop­pers. How­ev­er, that was just a diver­sion.”
To:
“How can you know if a red-eye grasshop­per is hun­gry? Are they sup­posed to cry?”

In the gar­den stood a fat and fluffy cat, gin­ger and yel­low, idly look­ing at the home, where his fam­ily lived, plan­ning a raid on the kitchen to obtain the food he desired."

Rewrite in the style of Charles Dick­ens:
“There was once a cheery cook that lost the stew when a naughty dog got the meat.”
To:
“There was once a happy chef that lost his din­ner when a rogu­ish dog snatched his meat.”

Rewrite in the style of a sep­tu­a­ge­nar­i­an:
“Be not afraid of aging, young man. Rather, be con­cerned for fear of ye young woman.”
To:
“Do not fear aging, young man. Rather, fear for ye young woman.”

Rewrite in the style of a hip­pie:
“A man that talks about peace needs to start off by peac­ing out of my office.”
To:
“A man that talks about peace must first start with him­self by peac­ing out of my office.”

Rewrite in the style of James Joyce:
“Beef! There is beef in this, let me tell you, this is beef in beef. Beef! Beef! This is beef, beef, beef, beef.”
To:
“Beef! It is beef, in this, let me tell you, this is beef in beef. Beef! It is beef, beef, beef, beef.”

Rewrite in the style of Ian Flem­ing:
“From his van­tage point in the study, David Karp, chief exec­u­tive offi­cer of blip­py.­com, saw an agent for Chi­na, a rather famous one.”
To:
“From his van­tage point, CEO David Karp of blip­py.­com saw an agent for Chi­na, a rather infa­mous one.”

Rewrite in the style of Ray Brad­bury:
“The char­ac­ter, Ben­jam­in, was the pro­tag­o­nist in the book, how­ev­er, the author takes you to the future to reveal what hap­pens to him at the end of the sto­ry.”
To:
“In the book,”Some­where in Time“, Ben­jam­in, the char­ac­ter, was the pro­tag­o­nist, how­ever the author takes you to the future to reveal what hap­pened to Ben­jamin at the end of the sto­ry.”
To:
“In the book,”The Road“, the main char­ac­ter was an unnamed man, and his son. The set­ting of the story was a time when earth had been destroyed, and soci­ety had col­lapsed.”

Romeo Cabrera A. has also done politeness style rewrites.
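Few-shot rewrite prompts like these are simple enough to assemble programmatically. A minimal sketch (the helper function and the final input sentence are my own illustrations, not part of the original experiments): the prompt ends with the new sentence and a bare "To:", so the model's completion is the rewritten version.

```python
# Sketch: assembling a few-shot "Rewrite ... / To:" prompt of the kind above.
def build_rewrite_prompt(instruction, examples, text):
    """instruction: e.g. 'Rewrite to be polite';
    examples: list of (original, rewritten) pairs; text: sentence to rewrite."""
    parts = []
    for original, rewritten in examples:
        parts.append(f'{instruction}:\n"{original}"\nTo:\n"{rewritten}"\n')
    # Final stanza ends at "To:" so the completion supplies the rewrite.
    parts.append(f'{instruction}:\n"{text}"\nTo:\n')
    return "\n".join(parts)

prompt = build_rewrite_prompt(
    "Rewrite to be polite",
    [("I need the spreadsheet by noon or you're fired, slacker.",
      "Hey, can I have the spreadsheet by noon, thanks.")],
    "Get out of my office.")
# `prompt` would then be sent to the API (in the 2020 beta, the Completion
# endpoint), with a newline or closing quote as the stop sequence.
```
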

Zero-shot Style Transfer

The goal for style-transfer prompt programming is to find a zero-shot prompt: a prompt which, without requiring any handwritten examples of parodies/versions, gets GPT-3 to do style transfer in general, and so a prompt which could fully automate style transfer. You could then write a program using the API that takes two specified pieces of text (the content, and the style description/author name X) and returns a third piece of text: the content as written in X's style. Right now, the literary parodies require at least one human-written example to properly persuade GPT-3 to rewrite the text, as opposed to generating critical commentary or metadata or webpage-like continuations.

I experimented with a prompt which wraps explicit descriptions of parodies and rewriting around a content text, and it… sort of works. The difficulty is that sometimes GPT-3 will spit out the original content verbatim, sometimes it will instead create a new passage entirely in the style description, and sometimes it will do the desired rewrite flawlessly; but I cannot figure out how to tune the prompt to do the third reliably. Adding more descriptive words does not seem to change it, and while adding in words from the original content passage (even just the first one or two) does largely eliminate the risk of entirely new passages being generated, it triggers more copying behavior (and is not as useful for zero-shot style transfer, since the prefix words would need to be sensible in the target version too, which is not necessarily the case). It is infuriating, because GPT-3 clearly can do it (it does do it a decent fraction of the time), but no matter how I tweak the prompt to hammer in the rewrite, GPT-3 will as oft as not go off in another direction.
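For concreteness, the wrapper-prompt approach can be templated like this. A sketch only: the exact wording shown is one variant of the framing, and the function name and `prefix` parameter are illustrative, not a fixed part of the method.

```python
# Sketch: a zero-shot style-transfer wrapper prompt. The framing asserts the
# target author already "rewrote" the passage, so the model's completion
# should be the restyled version.
ZERO_SHOT_TEMPLATE = '''This is a novel written in the style of {author}. It is a parody of the following passage:

"{content}"

{author} rewrote the previous passage in his famous style, keeping the same meaning, the same characters, the same plot and events, but all rewritten, making it sound like he wrote it; his parody follows:

"{prefix}'''

def style_transfer_prompt(author, content, prefix=""):
    # `prefix` optionally seeds the first word or two of the rewrite; this
    # largely prevents entirely new passages, at the cost of more verbatim
    # copying (and of no longer being truly zero-shot).
    return ZERO_SHOT_TEMPLATE.format(author=author, content=content,
                                     prefix=prefix)
```
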

Below are some samples from my attempts; I try to rewrite a vaguely Dickens/Jane Austen-like story (generated by GPT-3) into a Tolkien story:

This is a novel written in the style of J.R.R. Tolkien’s Lord of the Rings fantasy novel trilogy. It is a parody of the following passage:

“S. Jane Mor­land was born in Shored­itch, the only child of unmar­ried par­ents who had both died of con­sump­tion when she was a baby. As her par­ents had no mon­ey, the great-aunt who had brought her up took her to live with a cler­gy­man who paid her to do his chores and receive school­ing from his wife, so that at the age of seven Jane, now dressed in cast-off cloth­ing, was set to school at Eton. After three years, her great-aunt died, leav­ing her a small annu­ity, and a few pieces of sil­ver, but no prop­er­ty. Jane’s guardian cler­gy­man had fallen in love with his house­keeper and his wife now refused to have Jane in the house, say­ing it was an offence to the pure and unsul­lied place in which the fam­ily now lived. How­ev­er, when she sought for fur­ther employ­ment, she was approached by a young man who offered to marry her, say­ing that he would ensure she could be pro­vided with food and cloth­ing and a house, and in that man­ner she could remain in the world. The mar­riage was arranged and she found her­self mar­ried to a young man called Mr Mor­land. As he was also a child, he required a wife who would be able to pro­vide him with a ratio­nal edu­ca­tion, and so the duty of edu­cat­ing him fell upon Jane Mor­land. As he was still young, she would also be able to have chil­dren her­self, and so she did. As the chil­dren grew old­er, it became appar­ent to Mrs Mor­land that they were of poor char­ac­ter, and so she insisted that her hus­band must make a liv­ing for them all. Mr Mor­land took it upon him­self to become a pri­vate gen­tle­man in order to earn enough to be able to sup­port his fam­i­ly, and moved his fam­ily to a small town in Berk­shire, where he set up his prac­tice. Soon after mov­ing to the town, he died, and Mrs Mor­land was left to take care of her fam­ily on her own.”

Tolkien rewrote the previous passage in a high-fantasy style, keeping the same meaning but making it sound like he wrote it as a fantasy; his parody follows:

"S. Jane Mor­land was born in the land of Shor, the only child of name­less par­ents who were the first peo­ple to arrive at the vil­lage of Arse, from the far east. As her par­ents had no prop­er­ty, she was left to be raised by the men of Arse, as such duties were con­sid­ered to be the most impor­tant, and she was placed in the care of an elderly dwar­f-woman. As a babe, she was wrapped in silk, and as she grew older she was placed in a cage of plat­inum. How­ev­er, she was put to work in the vil­lage, wash­ing the jew­ellery, play­ing her harp, and stain­ing the silk with drag­on-hued stains. She was taught by a maiden who she felt was called As-u­ma-el-di, and she came to realise she was a maiden of the night, who was able to bring forth the shades of the dead. She was the first to make con­tact with the dark cult of Glin, who could aid the mind in its jour­neys through time, and also the first to learn of the name of the vam­pire Ser­gan. She learnt to speak the lan­guages of the drag­ons, the wild cats, the jun­gle owls, the insects, and the worms, and became pro­fi­cient in all the mar­tial arts of her peo­ple. Her skill with a throw­ing knife became renowned through­out her land. How­ev­er, as she was not a man, she was only per­mit­ted to marry once she was thirty years old. Her hus­band was a prince from the moun­tains of Lus, and he brought with him a chest full of the mate­ri­als of desire, includ­ing sil­ver, jew­els, mir­rors, per­fume, silk, and slip­pers. As they had no chil­dren, they decided to adopt a child from the next vil­lage to the west, as they had no son of their own. She was called Leeni. As her hus­band died at the age of fifty, she set up her own busi­ness as a trader, bring­ing goods from far dis­tant lands in her ship, the Lady the Blue Flames, and became wealthy. How­ev­er, she heard news of her hus­band’s death, and decided to return home. 
She left the land of Lus behind, and arrived home to find her fam­ily very wealthy, and the vil­lage of Arse now home to many poor peo­ple. She set about clear­ing out the place, and within a year there was a large sum of money left in her account, the fruit of her endeav­ours. Her daugh­ter Leeni had grown into a man of con­sid­er­able sub­stance, and also a con­sid­er­able young man. She met the son of her hus­band, and found that he was weak and pathet­ic, and not really like a man at all, and thus she decided to send him off to war, so that he could learn to become a man. As her mother died, she gave her a sil­ver bracelet, and as her father died, she gave her a small chest full of jew­ellery and silk, which she was to sell to make a liv­ing. She was now on her own, and took up the busi­ness of a pri­vate gen­tle­man, using her abil­i­ties to lead trav­ellers to the isles of her home, where she became rich and learned to be inde­pen­dent. As she learned to fight, she realised the wealth that lay in the secrets of the night, and thus she trav­elled to the city of the ants, where the peo­ple wor­shipped the true God of the night, with its reli­gion of rites and sac­ri­fices."

“The Dwar­f-Lords’ Daugh­ter” was fol­lowed in 2000 by a sequel, “The Tale of the Lady of the Wood”, writ­ten in the style of Tolkien’s “The Hob­bit”, and in 2001 by “The Tale of the Young Man in a Hurry”, writ­ten in the style of Tolkien’s “The Lord of the Rings”. “The Tale of the Rose-Lady” fol­lowed in 2005, also in the style of Tolkien’s “The Lord of the Rings”.

In Jan­u­ary 2014, Heron announced that he had sold the film rights to his Tolkien pas­tiche nov­els “The Dwar­f-Lords’ Daugh­ter” and “The Tale of the Lady of the Wood” to NBC Uni­ver­sal. In June 2015, he announced that a one-hour tele­vi­sion adap­ta­tion of “The Tale of the Lady of the Wood” had been writ­ten for Syfy by fan­tasy writer and edi­tor Gard­ner Dozois.

In Novem­ber 2017, it was announced that Ama­zon had acquired the rights to both “The Dwar­f-Lords’ Daugh­ter” and “The Tale of the Lady of the Wood”, and that Ama­zon Stu­dios was plan­ning a mul­ti­-sea­son tele­vi­sion adap­ta­tion of “The Dwar­f-Lords’ Daugh­ter”. The series will be pro­duced by James V

S. Jane Maryling was born in Shored­itch, in a lit­tle house fac­ing Tower Hill between the Old Pot and Feath­ers Inn and the E.C.B. Finance Com­pany in the E. She was the daugh­ter of an insol­vent squat­ter, the Great Aunt of the groom who brought her up. Her par­ents, Mr. Richard Ganz of Bom­bay and Mrs. Esmer­alda Gertrude MacGilly­cuddy of the Char­ente, were both con­sump­tive. Hav­ing no oppor­tu­nity of any acquain­tance with any liv­ing lan­guages, in a very poor and very une­d­u­cated fam­i­ly, Jane’s Aunt took her at seven years of age to a cler­gy­man whom she had taught to read, “Powlet­t’s Primer” being the book.

In addi­tion, while Tolkien’s par­ody does men­tion her hus­band and chil­dren, he did not go into detail about them, while Orson Scott Card gives them all names, his­to­ries, and per­son­al­i­ties. For exam­ple, Jane’s hus­band, Mr. Mor­land, was renamed Edwin and had a younger brother who wanted to be an actor, which resulted in Edwin dying.

Edwin’s brother then became the new Edwin and Jane’s hus­band and Mary Mor­land became Maryling, Jane’s daugh­ter. In addi­tion, her son, William Mor­land, became Ghan-dalf and named his son Red-wall, and her daugh­ter, Mary Mor­land, became R. Maryling and mar­ried a lep­er-lord. Other ref­er­ences to The Lord of the Rings are replaced with ones from The Chron­i­cles of Nar­nia by C.S. Lewis.

Catchphrases

A com­mon past-time at Hog­warts is a game known as “Two Truths and a Lie,” in which the stu­dents whis­per three state­ments to each oth­er; two of them are true and one of them is false. This gives the play­ers an idea of how well the play­ers know each oth­er, as well as being a form of enter­tain­ment. When Aragorn and Gol­lum began to play the game, they both came up with some­thing true and some­thing false. Aragorn’s two truths and a lie were:

I have a broth­er. I eat hob­bits. I like soft buns.

Gol­lum’s were:

I can stick my whole fist in my mouth. I live in a hole in the ground. I can’t stand being alone.

The hob­bits and Aragorn guessed cor­rectly in the book, but the nar­ra­tor states that Gol­lum always lies so Aragorn says he likes sweet buns instead, which turns out to be untrue.

…Tolkien rewrote the previous passage in a high-fantasy style, keeping the same meaning, the same characters, the same plot and events, but all rewritten, making it sound like he wrote it as a fantasy; his fantasy parody of the above story follows below:

S. Jane Mor­land was born in Shored­itch the only child of unmar­ried par­ents who had both died of con­sump­tion when she was a baby. As her par­ents had no mon­ey, the great-aunt who had brought her up took her to live with a wiz­ard who paid her to do his chores and receive school­ing from his wife, so that at the age of seven Jane, now dressed in wiz­ard’s robes, was set to school at Eton. After three years, her great-aunt died, leav­ing her a small annu­ity, and a few pieces of sil­ver, but no prop­er­ty. Jane’s wiz­ard guardian wiz­ard had fallen in love with his house­keeper and his wife now refused to have Jane in the house, say­ing it was an offence to the pure and unsul­lied place in which the fam­ily now lived. How­ev­er, when she sought for fur­ther employ­ment, she was approached by a young man who offered to marry her, say­ing that he would ensure she could be pro­vided with food and cloth­ing and a house, and in that man­ner she could remain in the world. The mar­riage was arranged and she found her­self mar­ried to a young man called Mr Mor­land. As he was also a wiz­ard, he required a wife who would be able to pro­vide him with a ratio­nal edu­ca­tion, and so the duty of edu­cat­ing him fell upon Jane Mor­land. As he was still young, she would also be able to have chil­dren her­self, and so she did. As the chil­dren grew old­er, it became appar­ent to Mrs Mor­land that they were of poor char­ac­ter, and so she insisted that her hus­band must make a liv­ing for them all. Mr Mor­land took it upon him­self to become a pri­vate wiz­ard in order to earn enough to be able to sup­port his fam­i­ly, and moved his fam­ily to a small town in Berk­shire, where he set up his prac­tice. Soon after mov­ing to the town, he died, and Mrs Mor­land was left to take care of her fam­ily on her own."

Ref­er­ences

Tolkien, J.R.R. (1977). The Lord of the Rings. Ham­mondsworth: The Folio Soci­ety.

Exter­nal links

…"S. Jane Mor­land was born in Shored­itch, the only child of unmar­ried par­ents who had both died of lep­rosy when she was a baby. As her par­ents had no mon­ey, the great-aunt who had brought her up took her to live with a dwarf who paid her to do his chores and receive school­ing from his wife, so that at the age of seven Jane, now dressed in cast-off cloth­ing, was set to school at Eton. After three years, her great-aunt died, leav­ing her a small annu­ity, and a few pieces of sil­ver, but no prop­er­ty. Jane’s guardian dwarf had fallen in love with his house­keeper and his wife now refused to have Jane in the house, say­ing it was an offence to the pure and unsul­lied place in which the fam­ily now lived. How­ev­er, when she sought for fur­ther employ­ment, she was approached by a young man who offered to marry her, say­ing that he would ensure she could be pro­vided with food and cloth­ing and a house, and in that man­ner she could remain in the world. The mar­riage was arranged and she found her­self mar­ried to a young man called Mr Mor­land. As he was also a child, he required a wife who would be able to pro­vide him with a ratio­nal edu­ca­tion, and so the duty of edu­cat­ing him fell upon Jane Mor­land. As he was still young, she would also be able to have chil­dren her­self, and so she did. As the chil­dren grew old­er, it became appar­ent to Mrs Mor­land that they were of poor char­ac­ter, and so she insisted that her hus­band must make a liv­ing for them all. Mr Mor­land took it upon him­self to become a pri­vate gen­tle­man in order to earn enough to be able to sup­port his fam­i­ly, and moved his fam­ily to a small town in Berk­shire, where he set up his prac­tice. Soon after mov­ing to the town, he died, and Mrs Mor­land was left to take care of her fam­ily on her own."

…Tolkien rewrote the pre­vi­ous orig­i­nal pas­sage in his famous high­-fan­tasy style, keep­ing more or less the same mean­ing, the same char­ac­ters etc, adjust­ing as nec­es­sary, but all rewrit­ten, mak­ing it sound like he wrote it as a fan­ta­sy; his Lord of the Rings-style fan­tasy par­ody of the above story fol­lows below:

"Sam­wise was born in the shire of Hob­bit­shire. As his par­ents had no mon­ey, the hand of Fat-athel his mother took him to live with a small Hob­bit in the vil­lage of Gamgee-by-The-Foot, south-west in the coun­try, which paid her to do chores and receive school­ing from Fat-athel’s wife, so that at seven years old Sam­wise, now clad in tat­tered rags, was set to school­ing at Foot­sale­mas. After three years, Fat-athel’s wife died, leav­ing her a small bed­stead, and a few pieces of old tin­der-box, but no prop­er­ty. Fat-athel fell in love with his house­keeper and soon after left his ser­vice, say­ing it was an offul thing to live in a place served by female Hob­bit-ser­vants. How­ev­er, when he sought to fur­ther his worldly for­tunes, he was approached by a young Hob­bit whose name was Bag­gins who had bor­rowed money from a Sare­gan Har­foot Hob­bit, and paid Fat-athel a few groats to marry him, say­ing that he would ensure Sam­wise was fed and clad and housed in a house. Fat-athel mar­ried Sam­wise to this young Sare­gan from over the water and found him­self mar­ried to a Hob­bitt lad who was but three feet tall and twen­ty-six years of age. As he was like­wise poor, he required a wife who would be able to pro­vide him with a ratio­nal edu­ca­tion. As he was still young, he could have chil­dren of his own to help them, so Fat-athel con­sented to wed. As the chil­dren grew old­er, it became appar­ent to Fat-athel that they were of poor char­ac­ter. Fat-athel being thus eager to turn his worldly goods into coin, now that he was mar­ried, took it upon him­self to earn a liv­ing for his new fam­i­ly. As soon as the cer­e­mony was done, he moved them to the coun­try-side of Berke­ly, where he set up his prac­tice as a Sage. Soon after mov­ing to the coun­tryside, he died, and Fat-athel was left to take care of his new fam­ily on her own."

Behind the scene Edit

The pas­sage men­tioned above (and this par­o­dy) is sim­i­lar to a pas­sage from J.R.R. Tolkien’s orig­i­nal Lord of the Rings nov­el:

"…S­meagol was begin­ning to grow young, and was get­ting inter­ested again in things that he had not seemed to care for, since he first joined the Pre­cious. He spent a deal of time wan­der­ing about, and his voice was often heard in the wood. He was one of those who put as much effort into singing as the Hob­bits did into the busi­ness of their match­es.

"Go away, rats, or your holes will burn,

Go to your homes before dark or you’ll lose your toes,

Go to your homes or you’ll weep when you’re old,

And all your holes will leak, when the moun­tain shakes,

With a rum­ble, roar, Rum­ble drum drum

With a rum­ble, rum­ble DIG DIG dig!"

They did not seem to make very much pro­gress; but Frodo, Sam, Merry and Pip­pin were now used to the hard life, and they were very cheer­ful. Indeed they were will­ing to work for sev­eral hours a day at drilling and cut­ting trees, or at mak­ing pens for the beasts: for all the tools that were need­ed, and the wood, had been car­ried from Hob­biton. There was a weekly wash­ing and mend­ing, and when all the cloth­ing wore out, they received more from the stores of the Wood-elves. And thus the time passed with­out too great dis­con­tent."

The Lord of the Rings

This is a brief pas­sage of about 400 words or less. J.R.R. Tolkien has rewrit­ten this pas­sage in his Lord of the Rings style of fan­ta­sy-high­-qual­i­ty-epic-book-style fol­low­ing. It may be hard to under­stand for an unex­pe­ri­enced reader the mean­ing that Tolkien wants to make out of this para­graph in either style, but it seems the sovralty and feel­ing seems bet­ter in the orig­i­nal Tolkien para­graph, which could be due to the more known orig­i­nal pas­sage. Tolkien has also rewrit­ten other pas­sages in the Lord of the Rings tril­ogy on styles of music, reli­gion, folk­lore and so on: see Excerpts from J.R.R. Tolkien’s works.

"The Dwarves worked hard. They were not used to idle­ness. Thorin some­times scolded them, and reminded them that they had taken on the job of renew­ing and for­ti­fy­ing the set­tle­ment, not only for their own defence but also for the help of the Lake-men and the Elves, and that they would find the task too big for them if they did not make up their minds to hur­ry. So they worked till many of them began to sink back into the old look of the Dwarves of the Lonely Moun­tain when Gob­lin-writ­ing, and secret doors, and again drag­on-writ­ing, were cur­rent among them. The dis­cov­ery of the springs, and of the gen­eral plan that the Lake-town had orig­i­nally been a part of the ancient dwar­f-c­i­ty, whose inhab­i­tants had dug the R. Run­ning River out of the stony hills and had partly cov­ered it with their walls and build­ings, and had planned many of the chan­nels and ponds, greatly excited them. But even greater was their plea­sure when, pick­ing their way through and round the ruins of stone houses and walls and down the bank of the Run­ning River west­wards towards its con­flu­ence with the For­est River, they found a great arch or tun­nel-mouth built of old into the bank. The outer part was thick and strong, made with mas­sive tooled black beams; but the inner part was tiled with skill in the like­nesses of var­i­ous beasts and birds that are the mar­vels of the Moun­tain Folk. Even as they stood in the vaulted pas­sage whis­per­ing with won­der, they heard a voice call­ing aloud, or it seemed so, call­ing out names:

‘Ori! Ori!’

‘Nori! Nori!’

‘Dori! Dori!’

‘Bal­in! Bal­in!’

It was the voice of a speaker hid­den or at least away down in the tun­nel; but though they waited for some time there was no fur­ther sound. At length Balin climbed on to the heap of stones that they had pushed down the bank in order to be able to enter the arch­way, and he stood and called: ‘Balin son of Fundin at your ser­vice! And Com­pa­ny!’

He stepped down from the low­est and widest block of all as he spoke, and fol­lowed by the oth­ers moved into the dark­ness of the tun­nel.

It was not long before their eyes were accus­tomed to it, and they could see a lit­tle way ahead. Soon they all got

…"S.J. (later called Saru­man) Mor­land was born in Shored­itch, as the only child of respectable par­ents, both of whom were killed in a shock­ing rail­way acci­dent, along with sev­eral hun­dred other Hob­bits and Wiz­ards, when he was only seven years old. He was brought up by an aunt who, though poor, gave him a good upbring­ing and equipped him with a tidy inher­i­tance in sil­ver spoons and a strong desire to own a dog. At the age of sev­en, he was, by his mater­nal uncle, deliv­ered into the hands of Gan­dalf the wiz­ard and went back to live in his ances­tral home in the Shire, in style, and spent the rest of his child­hood play­ing, liv­ing pre­car­i­ously upon an uncle’s benev­o­lence.

After his term at Bree, S.J. trav­elled for some years in search of adven­ture with Gan­dalf the wiz­ard and Elrond the sage, and fell in love with the daugh­ter of Gal­adriel, who is famed for her gift of sight in far­away things, along with a num­ber of other per­sons, so that he often goes to live in the library there, so she will not know. Finally he gave up search­ing and returned to live in Buck­land. He was by that time grown very old, and there he remained, on his ances­tral farm, con­tent with his dear dog, and his estate, a per­son well-re­spected by hob­bits in gen­er­al, known as the mas­ter of Bag End."

…"S. Jane Mor­land was born in the Shire, the only child of Shire-folk Mug­gles. As her par­ents had no mon­ey, their land­less, wan­der­ing rel­a­tives took her to live in one of the over­grown rus­tic houses (such as Bag End) built by men long ago, so that at the age of sev­en, she found her­self at Hob­biton school, gath­er­ing water-cress and catch­ing fish in the Anduin in Hob­biton-upon-the-Wa­ter. After three years, she took her uneat­able sup­per of her­b-sour toad­stools, went to her equally hum­ble and ill-served bed, and found her­self in a night­mare of indis­tin­guish­able bad dreams. Soon after mov­ing to the land of the Shire, she died. In a dream or in real­i­ty, or from what cause she did not know, she found her­self upon a for­lorn and strangely altered road, stand­ing by a large silent boul­der of stone (as com­pared to the bustling mod­ern Bournemouth).

"She felt in her hand as though for the pro­tect­ing metal of a ring, but it was not there; yet she had a sense that the lost Dis­perser Ring was about her neck. She felt both strength and weak­ness. The Shad­ow, as even the dream-mem­ory described it to her, seemed to her greater than it was when it attacked Isil­dur, but its hold upon her was weak­en­ing.

"The thick dark shadow (as Mug­gles call the for­mer mus­tach­es) of a huge twi­light-black­ened tree that had stood on the edge of the road moved slowly west­wards, until it fell across her, as she threw back the hood from her eyes.

"The big rock glis­tened in the shad­ows like a (happy lov­able) jew­el, and seemed to shine out with a mild light, like the mag­i­cal Elven phials. So potent was the light that Isil­dur could see across an inner Sea, glim­mer­ing with an elu­sive span­gle. She had a vision of some high hill in a far land against the Moon, under stars when rain was draw­ing near.

"Then with­out warn­ing, a party of three sin­is­ter hooded black fig­ures, one of whom had the head of a spi­der, appeared on the road before her.

"As they rapidly approached her, she caught a low mut­tered cho­rus of cruel hos­tile voic­es; and the eyes on the fell fig­ure with the spi­ders body could see her eyes upon them. The hob­bits are good and inno­cent peo­ple (as any sen­tient being might say) and extremely gen­tle; and when they saw the black robes, their hearts and their bow­els were filled with fear, or strange mul­ti­ple, pul­sat­ing organs, which she sup­posed to be the miss­ing Glar­bl.

"The Death Bur­glars (as she named them) were now right in front of her, and she was help­less in their slip­pery-necked, pen­e­trat­ing-eyed con­trol. At that moment, she was lit­er­ally saved by her old breath (as the good wiz­ards and good kings always say). As the three black­-robed trav­el­ers came within arm­slength of Isil­dur, a close bird’s-eye view of some crum­bling, warped grave­stones on a hill­side gave her momen­tary strength to force the Power of Dark­ness back. Soon, she was rid of the dis­em­bod­ied evil, and she was lying alone on the ground, cov­ered in sea-salt and motion­less.

"At that moment, she was quite sud­denly over­come with extra­or­di­nar­ily pleas­ant feel­ings, sim­i­lar to those expe­ri­enced by a six-mil­lion-year-old enchanter of the sea with only half an arm and an inter­est­ing cast in his right eye.

"When she finally came to her sens­es, she found that she had picked up a heavy ring of curi­ous work­man­ship, that was lying (as if aban­doned) near the edge of the treach­er­ous road. Remem­ber­ing the boy she had seen some weeks before at

…"S. Faramir Morland was born in Shiremoor; the only child of widow-parents who had both died of dragon-breath on the same day. As his parents had no items of dragon-slayer jewellery or weapons, their somewhat aged and beautiful relative who had brought him up had it proclaimed in the Shiremoor Times that whoever had a dragon-proof house to offer should join with her in bringing up the orphan child. As no one else claimed him, the great and good relative (who was not of evil character, though considered by some over-truthful) took him to live in an inhospitable cave in the Misty Mountains where, by the help of trolls, he was kept out of the sun. After three years there, his relative and adopted mother died of cave-dirt, leaving him the rare but slender heirloom of her great age: namely, one heap of gold and dust. Thereupon Boromir set forth to seek his fortune; and he obtained the help of a previously unknown Unseen Angel, who offered to marry him if he would (in return) devote his life to destroying Dragons. By all means take her up on it, thought Boromir; and so, by her enchantment and special grace (not hers, as Boromir gradually came to realize), he entered into a splendid marriage. For she was the daughter of an Elven-king and his star-sprite bride, and of that race was her husband also. She dwelt in northern countries for seven years, but in the end her heart turned southward; she left him and went to our land. When he was following her up the Pass of Blood-gorge, under the sting of dragon-blight, she vanished on the dark-deep river. He was now alone, as were the rest of his brothers-in-arms. The Dragon-king of Witchland took his daughter with him, and under the great pressure of hopeless sorrow Boromir’s spirit and faith, which were the foundation of his strength, shattered into pieces. He threw away his Shield and Sword, pierced himself with his own dagger, and took an exceedingly dishonourable exit from the realm of mortal Men."

…"S. Jane Morland was born in the Shire, the only child of unmarried parents, and her mother and father died of some kind of hobbit-flu in the year 2510 of the Third Age when she was a baby; on account of this, or because she had no money, the great-aunt who had brought her up took her to live with a noted cleric of that time who lived in Buckland, and for this reason the famous cleric who was her mentor got his from his uncle and aunt, and through intermarrying got rich and became a kind of priest of the Church of the Latter-Day Hobbits. He taught young Jane all about our kind and her being fair of face and of a sweet voice and graceful bearing, on account of which the boy preacher became well-inclined towards her and resolved to marry her; and, indeed, it was the first his strange religious life that he ever did marry. The marriage was arranged, and she found herself married to a young hobbit called Mr. Boffin. It was by his advice that she moved her hole into the East Farthing. As he was still a child, he required a wife who would be able to give him the knowledge of his own name as well as see to it that he had clothes and food on his plate every day; and so the duty of educating him fell upon Jane. She taught him that to the extent of what he needed to be able to tell everyone he had a wit above the norm, and that, thanks to this, he had been placed in his position. As he was still young, he also needed her to bear children; and so, as they increased in number, there was the chance that the whole might be saved, provided that they learned well and remembered it all, and stayed in the East Farthing. Her mentor was still willing to give her his goods, but he was a base womanizer, as was well-known, and a compulsive gambler; and the later years he gambled away what little he had, which made him unprofitable and much harassed the hobbit Mrs. Boffin to support him in his old age. There was one girl and two boys; and since both the boys were evil-minded, it was necessary that Mrs. Boffin must see them as soon as possible married off, and the girl bred into some family, and so without delay they were sent off in different directions; the girl to live with a certain highly-respected hobbit on the West-Road, the boys to see to a house near Stock. However, shortly after their marriage, the first wife of the elderly hobbit died of some old-fashioned disease; and, as he was now free to do as he liked, he forthwith married again. The second wife of the great-uncle of our protagonist was a proud and wealthy hobbit who would not have Jane Morland living in her house, for it was an offence against the pure and unsullied place in which they were now living. Furthermore, since the new housekeeper was barely acquainted with the Hobbits and the fine and important family that she now belonged to, she banished Jane from her sight."

In the nov­el, one of the most-mem­o­rable char­ac­ters is Matthew Man­tel­piece (who called him­self Mas­ter Matthew Man­tel­piece). As the story tells, Mas­ter Man­tel­piece and his wife came out of the young man who had offered to marry Jane Mor­land. Mas­ter Man­tel­piece was born a wealthy man who served as Stew­ard to Lord Den­ham (based on the Min­is­ter for Poor in West Berk­shire, as the novel sug­gests; the minis­eries says it is based on a Roman gen­eral of the Civil War), and Mas­ter Man­tel­piece had got­ten to know Lord Den­ham, and even­tu­ally became his son-in-law. From him, Mas­ter Man­tel­piece received a lit­tle house in Sun­ninghill, which he moved to St Mary’s Place in Oxford, as the novel says, to live in it.

Mas­ter Man­tel­piece and his wife had sev­eral chil­dren, one of which was Mrs Peter Man­tel­piece who mar­ried a gen­tle­man by the name of Mr Peter Man­tel­piece; he, too, served as Stew­ard to Lord Den­ham. In this way, Mas­ter Man­tel­piece’s fam­ily had come to be related with Lord Den­ham’s fam­i­ly. Through one of his daugh­ters who mar­ried some­one by the name of Cor­nelia Hen­nard, Mas­ter Man­tel­piece is also related to the Mor­lands.

the only child of unmarried parents who had both died of consumption when she was a baby. As her parents had no property, the Baggins who had brought her up took her to live with a family of his friends who paid him to do their chores and receive schooling from their hobbit-wife, so that at the age of seven, Jane, now dressed in cast-off clothing, was set to school in the Shire. After three years, her guardian Baggins died, leaving her a small keep-sake and a few trinkets, but no property. Her kinsman continued as heir of the Bagginses, but he had lost all affection for the Shire, which was so much less different from the Shire, and he wandered away. Jane was invited to live as an independent private person in the village of Hobbiton, and there she found a kindred spirit, a Hobbit named Smith, who insisted that she marry him and look after him, since he had nothing better to do, and needed someone to cook his supper and change his shirts. Jane at last agreed to the proposal, which was accompanied by a small cash payment, on the condition that he would ensure that she was provided with food and clothing and a house, in a manner which she thought suitable. The marriage was arranged and she found herself married to a young Hobbit called Mr. Smith. As he was also young and silly, he required a wife who would be able to cook him a rational meal and see that he had enough to eat. As he was still young, she would also be able to have children, and so she did. She did her best for the children, but soon found that they were of poor character and would not learn their lessons. To teach them some good sense, she insisted that her husband should get some regular work. Her husband, who had a small income, set up as a shopkeeper in the town of Hobbiton, and there made enough money to be able to support his family. After five years had passed in this way, Mr. Smith died of a sudden illness, leaving his wife alone. Jane Morland, now a shopkeeper and innkeeper, took it upon herself to look after her business and see that the children were properly fed."

edit Cover of the New Amer­i­can Edi­tion

edit Change in direc­tion of the Hob­bit

Hob­bit ref­er­ences in Smith’s work have become more fre­quent in recent times, par­tic­u­larly in her novel Pride and Prej­u­dice. Smith has denied that her book The Hitch­hik­er’s Guide to the Galaxy is directly influ­enced by Tolkien’s work, despite these sub­tle sug­ges­tions, as evi­dent

in the Shire in the hobbitlands of the Shire (Frodo Baggins, Bilbo Baggins, Frodo Baggins, the creator of “The Hobbit”, can be visited in his hobbithole house at Bag End, Bag End, Bag End, the Shire, east of the Green Dragon Inn, by Hobbiton, on the road that goes to the Fleshy Fountain, the south-east road out of Bywater, past the brick house of Farmer Maggot). As her parents had no money, the Bagginses took her to live with a hobbit-clergyman who paid her to do his chores and receive schooling from his wife, so that at the age of seven, Jane, now dressed in cast-off clothing, was set to school at the small southern village of Gathol in Sarn Ford. After three years, her third cousin Bilbo returned to the Shire and took her to live with him in Hobbiton (it is believed that Frodo was part of this return trip). Soon after moving to the village, Bilbo gave her a large ruby ring. However, when Bilbo fell in love with his gardener’s daughter, Rose Cotton, and she refused to leave with him, hobbit gossip started to increase. As Bilbo asked Rose to marry him, Frodo, her father, refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young hobbit who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the Shire. The marriage was arranged and she found herself married to a young hobbit called Mr Morland. As he was also a hobbit, he required a hobbit-wife who would be able to provide him with a hobbit-rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have hobbit children herself, and so she did. As the hobbit children grew older, it became apparent to Mrs Morland that they were of poor hobbit character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a high-fantasy person in order to earn enough to be able to support his family, and moved his family to a small town in Berkshire, where he set up his high-fantasy practise. Soon after moving to the town, he died, and Mrs Morland was left to take care of her family on her own."

From this pas­sage, let’s see what’s changed or what’s lost:

  1. Shored­itch is now the Shire, Hob­bits are now Dwarves, one Dwarf, an unnamed char­ac­ter, is now Bilbo Bag­gins; a Dwarf writer and sto­ry-teller who lived in the fam­ily of Bag­ginses (i.e. also a Hob­bit) and his Hob­biton house; the town of Gathol is the Shire, the loca­tion of Bag End, Bil­bo’s home.
  2. The pro­tag­o­nist is a Hob­bit; the absence of non-fan­tasy lit­er­a­ture is sub­sti­tuted by fan­tasy books.
  3. The “great-aunt” who raised Jane is now Bil­bo, Jane’s 3rd cousin on her moth­er’s side, and the cler­gy­man is now just Bil­bo, while his wife becomes his gar­den­er’s daugh­ter, an unnamed char­ac­ter. The unnamed char­ac­ter Rose Cot­ton, Bil­bo’s gar­den­er’s daugh­ter, is Bil­bo’s wife and daugh­ter of Adam Cot­ton, Rose Cot­ton. In Frodo’s fam­ily it was quite com­mon to have rela­tion­ships between in-laws, as was the case between the Bag­ginses and the Bol­gers; Frodo was the younger brother of Dudo, Dudo, Dudo Bag­gins (an­other char­ac­ter in the novel of The Hob­bit), who mar­ried Oma, Oma, Oma Bol­ger. This now means Bilbo is Frodo’s cous­in, but he isn’t. Frodo is the nephew of Bil­bo, as is their cousin (adopted and no longer related by blood) Took; Bilbo is a direct descen­dant of Thain Isum­bras I.
  4. The cler­gy­man in this fan­tasy world is now a high­-fan­tasy prac­ti­tion­er, i.e. a mage, and Bilbo is now a Lord of the Rings char­ac­ter; and his wife now has become a high­-fan­tasy prac­ti­tion­er, who goes by the name of Gal­adriel, Gal­adriel, Gal­adriel. Jane, Bil­bo’s third cousin on his moth­er’s side, had many other char­ac­ters named after her in other parts of Tolkien’s fic­tion as well; Jane is also the name of the founder of Arnor, the first half of the name Arnorain, the land ruled by the kings of Arnor, and was also the name of Frodo’s grandaunt, daugh­ter of Narve (who founded the realm and was its sec­ond King); Jane also means “grace, gra­cious gift, given with grace; pre­sented gra­ciously and gra­ciously given”, accord­ing to Wikipedia, which also says it is “an archaic Eng­lish form of Jean or Jane”; another form of Jean or Jane is Jean­net­te, mean­ing “the one born in the later days, the one born in the dawn”, accord­ing to the web­site of Jean­net­te, Penn­syl­va­nia. She is also known as an Avatar, from the Hindu sense; a man­i­fes­ta­tion or incar­na­tion of a deity in human form; also a man­i­fes­ta­tion of a liv­ing being; or a bearer of attrib­utes of a per­son or deity who is res­i­dent in heav­en, on earth, or in the spirit world, as described by Jane, an Avatar-mak­er. Frodo’s sis­ter’s name is Bel­la, Bel­la, Bella Bag­gins.
  5. Bil­bo’s wife now becomes Arwen, Arwen, Arwen, who’s Arwena daugh­ter of Ara­had, Ara­had, Ara­had III (the third King of Arthedain) and grand­daugh­ter of Aragorn II, who died before the begin­ning of The Lord of the Rings and was the father of Elrond, Aragorn II Elrondsson, the Elven-k­ing. Bil­bo’s gar­den­er’s daugh­ter is now Arya, Arya, Arya, who’s the daugh­ter of Ayu­mi, Ayu­mi, Ayu­mi, Elrond’s youngest daugh­ter, who, in the film, also plays the role of Arwen.
  6. The wife who refused Jane is now Rose, Rose, Rose Cot­ton, the gar­den­er’s daugh­ter; his wife is now Bel­la, Bel­la, Bella Bag­gins, Frodo’s sis­ter, a Hob­bit. The “pure and unsul­lied place” is now Bil­bo’s head; also now con­sid­ered unsul­lied is the vil­lage in which Bilbo lived with his wife and son, and this is where Bilbo moved when he mar­ried Jane. Bil­bo’s wife is now Sam­wise, Sam­wise, Sam­wise Gamgee, Frodo’s gar­dener

S. Jane Mor­land was born in the Shire, the only child of unmar­ried par­ents who had both died of con­sump­tion when she was a baby. As her par­ents had no mon­ey, the great-un­cle who had brought her up took her to live with a hand­some young uncle who was the War­den of the West March. After three years, her hand­some uncle had fallen in love with his house­keeper and his aunt now refused to have Jane in the house, say­ing it was an offence to the pure and unsul­lied place in which the fam­ily now lived. How­ev­er, when she sought for fur­ther employ­ment, she was approached by a hand­some young man who offered to marry her, say­ing that he would ensure she could be pro­vided with food and cloth­ing and a house, and in that man­ner she could remain in the world. The mar­riage was arranged and she found her­self mar­ried to a hand­some young man called Mr Mor­land. As he was also a child, he required a wife who would be able to pro­vide him with a ratio­nal edu­ca­tion, and so the duty of edu­cat­ing him fell upon Jane Mor­land. As he was still young, she would also be able to have chil­dren her­self, and so she did. As the chil­dren grew old­er, it became appar­ent to Mrs Mor­land that they were of poor char­ac­ter, and so she insisted that her hus­band must make a liv­ing for them all. Mr Mor­land took it upon him­self to become a War­den in order to earn enough to be able to sup­port his fam­i­ly, and moved his fam­ily to a cas­tle in the Shire, where he set up his prac­tice. Soon after mov­ing to the castle, he died, and Mrs Mor­land was left to take care of her fam­ily on her own."

…Be­low is a par­ody or rewrite of that pas­sage by famous fan­tasy author J. R. R. Tolkien (Lord Of The Rings); the rewrit­ten ver­sion:

“S. Jane More­land was born in the Shire, the only child of unmar­ried par­ents who had both died of drout when she was a baby. As her par­ents had no mon­ey, the great-aunt who had brought her up took her to live with a black­smith, who paid her to do his chores and receive school­ing from his wife, so that at the age of seven Jane, now dressed in cast-off cloth­ing, was set to school at Bree. After three years, her great-aunt died, leav­ing her a small annu­ity, and a few pieces of sil­ver, but no prop­er­ty. Jane’s guardian black­smith had fallen in love with his house­keeper and his wife now refused to have Jane in the house, say­ing it was an offence to the pure and unsul­lied place in which the fam­ily now lived. How­ev­er, when she sought for fur­ther employ­ment, she was approached by a young man who offered to marry her, say­ing that he would ensure she could be pro­vided with food and cloth­ing and a house, and in that man­ner she could remain in the world. The mar­riage was arranged and she found her­self mar­ried to a young man called Mr Mor­land. As he was also a child, he required a wife who would be able to pro­vide him with a ratio­nal edu­ca­tion, and so the duty of edu­cat­ing him fell upon Jane Mor­land. As he was still young, she would also be able to have chil­dren her­self, and so she did. As the chil­dren grew old­er, it became appar­ent to Mrs Mor­land that they were of poor char­ac­ter, and so she insisted that her hus­band must make a liv­ing for them all. Mr Mor­land took it upon him­self to become a gen­tle­man in order to earn enough to be able to sup­port his fam­i­ly, and moved his fam­ily to a small town in the South­far­thing, where he set up his prac­tice. Soon after mov­ing to the town, he died, and Mrs Mor­land was left to take care of her fam­ily on her own.”

…S. Jane Mor­land was born in the Shire, the only child of respectable par­ents who had both died of loco­mo­tor ataxia when she was a baby. As her par­ents had no prop­er­ty, the great-aunt who had brought her up took her to live with a rich bach­e­lor at Bagshot. After three years, her great-aunt died, leav­ing her an only liv­ing rel­a­tive, a semi­-in­valided bach­e­lor aunt on her father’s side. This wor­thy old lady, now some­what advanced in years, said Jane had bet­ter be sent to school at Oxford. The fam­ily accord­ingly removed to the town, and Jane was put into a board­ing-house.

After three years more, when the old lady could no longer bear the expense of the Oxford estab­lish­ment, they went back to Bagshot; and Jane was again deposited at a board­ing-house. This gave her the advan­tage of liv­ing within reach of Read­ing. When she was thir­teen, her aunt, to whom she was much attached, died; where­upon she was sent home to her father’s house at Thame. Here she lived a quiet and pros­per­ous life for the next seven years, till her father died. Jane’s moth­er, who had never ceased to reproach her for not get­ting her own liv­ing and being a dis­grace to her fam­i­ly, died two months after her hus­band, who was sup­posed to have poi­soned her. At this time Jane was at the age of nine­teen, and was extremely pret­ty, plump, and healthy.

"She was sit­ting in the kitchen with her cousin Fran­cis, a respectable Shire Hob­bits, one of Farmer Cot­ton’s helpers at the Green Drag­on, when a ser­vant came in with the announce­ment that a gen­tle­man in black was at the door ask­ing to see her. Won­der­ing who it could be, she went into the lit­tle front par­lour that had not been changed since Aunt Cot­ton died some twenty years before. The next moment, the per­son in black, turn­ing his head, let fall his black mask and Jane recog­nised the hand­some face of Farmer Cot­ton. In a quar­ter of an hour they were walk­ing away together arm in arm, the gen­tle­man telling his friends that the girl was the off­spring of a dis­tant cousin of his, who had been placed in his charge when she was eight years old, and had given him no end of trou­ble ever since. He told the Hob­bits of the vil­lage that he was an orphan him­self, with­out a rela­tion in the world, except Jane; that he felt he must be her guardian and pro­vide for her future; and that he had now all but arranged for her to be his wife.

“The nup­tials were cel­e­brated with the utmost mer­ri­ment by the respectable Hob­bits, who had always been pleased with Jane. To the same degree her father’s friends were dis­pleased with her. It came to be said among them that Farmer Cot­ton had mur­dered her father, and that Jane was his ille­git­i­mate child. How­ever this may have been, it was observed that on their wed­ding day her hus­band, after buy­ing more than was nec­es­sary at the open-air sale of house­hold goods held by his father-in-law, took her away to his farm near Bled­low, and never brought her back.”

…"Arwen Even­star was born in Minas Tirith, the only child of Elrond Even­star and Cele­bri­an, who had both died of con­sump­tion when she was a baby. As her par­ents had no mon­ey, her grand­mother Gal­adriel took her to live with a cler­gy­man who paid her to do his chores and receive school­ing from his wife, so that at the age of seven Arwen, now dressed in cast-off cloth­ing, was set to school at Riven­dell. After three years, her grand­mother Gal­adriel died, leav­ing her a small annu­ity, and a few pieces of sil­ver, but no prop­er­ty. Arwen’s guardian cler­gy­man had fallen in love with his house­keeper and his wife now refused to have Arwen in the house, say­ing it was an offence to the pure and unsul­lied place in which the fam­ily now lived. How­ev­er, when she sought for fur­ther employ­ment, she was approached by a young man who offered to marry her, say­ing that he would ensure she could be pro­vided with food and cloth­ing and a house, and in that man­ner she could remain in the world. The mar­riage was arranged and she found her­self mar­ried to a young man called Elrond. As he was also a child, he required a wife who would be able to pro­vide him with a ratio­nal edu­ca­tion, and so the duty of edu­cat­ing him fell upon Arwen. As he was still young, she would also be able to have chil­dren her­self, and so she did. As the chil­dren grew old­er, it became appar­ent to Arwen that they were of poor char­ac­ter, and so she insisted that her hus­band must make a liv­ing for them all. Elrond took it upon him­self to become a pri­vate gen­tle­man in order to earn enough to be able to sup­port his fam­i­ly, and moved his fam­ily to a small town in Berk­shire, where he set up his prac­tice. Soon after mov­ing to the town, he died, and Arwen was left to take care of her fam­ily on her own."

Sum­mer­s-S­tay tried his own ver­sion with the open­ing of Moby Dick and got a fun one:

Thou mayest call me Ish­mael the Wise. Some years ago—n­ever mind how pre­cise­ly—hav­ing lit­tle wealth and much leisure, and being lit­tle fond of the man­ner in which his own coun­try is ruled, I resolved to sail about a lit­tle and see the watery part of thee world. It is a man­ner I have some­times of oil­ing the weari­ness of the days of my lord. When­ever I behold a wool-clogged cloud-bank, and should by chance see drip­pings, and rain pools gather upon the roofs of the tallest tow­ers, and upon the pave­ment of the mar­ket-lit places—then I account it high time to seek refuge at sea, and it is ever a wel­come sight to me.

Devil’s Dictionary Of Science

Davide Balzarotti snarked:

Rebut­tal [noun] A form of ironic com­po­si­tion in 500 words pop­u­lar among researchers, in which rage­ful authors first thank their review­ers for “their time and insight­ful com­ments” and then mock Reviewer 2 for his non-sense in a sequence of fakely polite points.

Naturally, GPT-3 is able to write not just rebuttals, but satirical dictionary definitions along the lines of Ambrose Bierce’s The Devil’s Dictionary—indeed, GPT-3 is remarkably good at it. (“What a Dictionary a GPT-3’s curator might write on the clumsy, wasteful, blundering, low and horribly cruel works of Nature!”)

First, gen­er­at­ing rebut­tals with a slightly extended prompt to cue GPT-3 into writ­ing the rebut­tal itself rather than addi­tional dic­tio­nary defi­n­i­tions:

Rebut­tal [noun] A form of ironic com­po­si­tion in 500 words pop­u­lar among researchers, in which rage­ful authors first thank their review­ers for “their time and insight­ful com­ments” and then mock Reviewer 2 for his non-sense in a sequence of fakely polite points.

"We thank review­ers 1, 2, and 3, for their time and insight­ful com­ments, which we have spent many hours read­ing over. But in reply we note:

1. RE2: Although RE2 starts with the rea­son­able com­ment that ‘Cur­rently our con­trol loops are pure con­trol’, he goes on to pro­vide only minor points that sug­gest that we are doing some­thing slightly differ­ent. In fact, there is much more to be said about this, and the real differ­ence will be a full con­trol pro­to­col on all vehi­cles.

We thus thank RE2 for tak­ing the time to pro­vide such minor points, but note that he has done almost no actual research and has spent zero hours on the review.

RE2: “There is lit­tle or no lit­er­a­ture on decen­tral­ized solu­tions to prob­lems such as traffic opti­miza­tion.”

RESPONSE: We thank RE2 for the minor point that there is “lit­tle or no lit­er­a­ture on decen­tral­ized solu­tions.” But we note that decen­tral­ized solu­tions are par­tic­u­larly diffi­cult, and so this fact might sug­gest that they are not being con­sid­ered or might be infea­si­ble. Or it might sug­gest that they are being researched intense­ly. In this case it turns out that, unlike RE2, sev­eral of us have in fact thought deeply on these issues, and so we feel we are a valu­able research group despite not being affil­i­ated with the Depart­ment of Traffic Con­trol or any other cen­tral­ized, gov­ern­men­t-backed, research insti­tu­tion.

RE2: “The DGM solu­tion is also [some­thing else]. To our knowl­edge, no work has been done on DGM con­trol loops, or any con­trol loop at all, for these kinds of traffic pat­terns.”

RESPONSE: We thank RE2 for the com­ment that the solu­tion we have pro­posed is [some­thing else]. As he is not famil­iar with such traffic pat­terns and the other review­ers in their reviews have been unim­pressed, we must won­der how he could be famil­iar with con­trol loops that might or might not exist and would or would not have been pre­vi­ously per­formed.

RE2: “The num­ber of cars involved is approx­i­mately 100, which is a small num­ber. By con­trast, say, the NYC Met­ro­pol­i­tan Area, [roughly 22 mil­lion peo­ple] has a traffic net­work of nearly 3 mil­lion miles.”

RESPONSE: We thank RE2 for the com­ment that our prob­lem is “small.” This is the first time this has been sug­gested to us. How­ev­er, we feel that RE2 may be fail­ing to under­stand a key com­po­nent of the research pro­to­col we have pro­posed. This prob­lem is designed to be decen­tral­ized in a city with no more than a cou­ple mil­lion peo­ple (or a town or a small city or maybe even a vil­lage) and so it is expected to have only a cou­ple thou­sand dri­vers and not one sin­gle, cen­tral traffic con­trol unit that could pos­si­bly con­trol 3 mil­lion miles. We thank RE2 for his insight on this, but also feel that he has not gone to great enough lengths to con­sider the nature of such a prob­lem in such a sit­u­a­tion.

RE2: “Exper­i­men­tal data [will like­ly] need to be col­lected and processed. Cur­rently the authors pro­vide no exper­i­men­tal plan for any of their research [and so] we feel the exper­i­men­tal results to be lack­ing.”

RESPONSE: We thank RE2 for the sug­ges­tion that we have not pro­vided an exper­i­men­tal plan. As our paper is only intended to pro­pose a pro­to­col, we under­stand that exper­i­men­tal plans should come lat­er. In fact, we will be pre­sent­ing our plans to col­leagues later this year in a peer-re­viewed sym­po­sium at State Col­lege, Penn­syl­va­nia. That being said, we find it strange that RE2 might have any knowl­edge of such plans or of such con­fer­ences. As we are listed as authors, we feel that RE2 might have been able to con­tact us if he were inter­ested in our work, but he has not and thus we do not know whether he has even heard of our research pro­gram.

RE2: “The dis­trib­uted sys­tem is bet­ter able to oper­ate in an autonomous fash­ion, i.e. no cen­tral con­troller is required. With the pos­si­bil­ity of a large num­ber of dri­vers, autonomous oper­a­tion is chal­leng­ing, espe­cially if vehi­cles may com­mu­ni­cate in unan­tic­i­pated ways [or they might not]. As sug­gested by [the very impor­tant paper], the [not really rel­e­vant] con­trol loop should be a pas­sive mech­a­nism whereby each vehi­cle adjusts its con­trol mech­a­nism through a feed-for­ward sig­nal.”

RESPONSE: We thank RE2 for the com­ment that autonomous oper­a­tion is chal­leng­ing. But again, we do not under­stand how he could know this. In fact, he has spent almost no time on the research, spent zero hours on the review, and spent zero research dol­lars, and it is hard for us to under­stand how he could know this bet­ter than we could. In fact, we have spent over a decade work­ing on this pro­ject, with many hours of detailed thought and research, and we are still not sure that autonomous oper­a­tion is a good idea or how we could get it to work in a decen­tral­ized and ran­dom traffic envi­ron­ment.

RE2: “If we all observe that sev­eral vehi­cles are slow­ing down, it might be a sign of a pos­si­ble acci­dent ahead. If we are all in autonomous mode, a vehi­cle near the acci­dent would brake, pos­si­bly caus­ing a pile­up.”

RESPONSE: We thank RE2 for the com­ment that sev­eral vehi­cles slow­ing down might be a sign of an acci­dent. He is cor­rect that this might be a sign of an acci­dent. But he might have con­sid­ered the pos­si­bil­ity that sev­eral vehi­cles slow­ing down might be a sign of, say, the vehi­cle he is in slow­ing down or that he is doing some­thing else such as tex­ting a mes­sage to some­one else, or sim­ply that he might have a flat tire. We thank RE2 for this, but we have also con­sid­ered these pos­si­bil­i­ties and feel that the lat­ter is far more prob­a­ble, so we sug­gest a sec­ond solu­tion to this prob­lem that would require each vehi­cle to text all other vehi­cles on the road with its sta­tus, some­thing like “Hey I’m just sit­ting here, so if any­one wanted to come fly up the back of my car and then over the top of my wind­shield and down into my pas­sen­ger seat to grab my phone and then fly back, well by all means, be my guest.” In this way, a vehi­cle with a flat tire would have no delay in inform­ing all oth­ers of the pres­ence of a flat tire and in get­ting help with fix­ing it. Also, an acci­dent would be sim­i­larly well-an­nounced, and we think this solu­tion is par­tic­u­larly impor­tant given the new pro­to­col we are design­ing to allow for more effi­cient traffic con­trol.

RE2: “Dis­trib­uted con­trol sys­tems [are usu­al­ly] an assort­ment of semi­-au­tonomous vehi­cles mov­ing in close prox­im­ity to one anoth­er. Such sys­tems [are gen­er­al­ly] non-au­tonomous, requir­ing con­trol by a cen­tral con­troller that can be updated to reflect the most recent sit­u­a­tion.”

RESPONSE: We thank RE2 for the com­ment that dis­trib­uted con­trol sys­tems are usu­ally non-au­tonomous, requir­ing con­trol by a cen­tral con­troller. But RE2 has spent zero time on the research and we feel that he has failed to appre­ci­ate that the prob­lem we have designed is designed to be a decen­tral­ized, autonomous, dis­trib­uted traffic con­trol prob­lem. We again feel that this is only to his great detri­ment.

RE2: “Such cars [on our street] will become very com­plex, and we feel they will be more diffi­cult to design and build.”

RESPONSE: We thank RE2 for the com­ment that our cars will become very com­plex. We feel that this will be of no issue, as we have dis­cussed at great length before, and we think that a small group of uni­ver­sity pro­fes­sors with a good design team should be able to pro­vide cars that work in this capac­ity in the real world.
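The prompt cueing used here is purely a matter of string construction: end the prompt with the first words of the desired output, and the model is forced to continue a rebuttal rather than emit further dictionary definitions. A minimal sketch in Python, assuming the 2020-era OpenAI completion API (the engine name and sampling parameters in the comment are illustrative, not the settings actually used):

```python
# Sketch of the "extended prompt" cueing trick: append the opening words of
# the desired output, so the completion must pick up mid-rebuttal.

DEFINITION = (
    'Rebuttal [noun] A form of ironic composition in 500 words popular among '
    'researchers, in which rageful authors first thank their reviewers for '
    '"their time and insightful comments" and then mock Reviewer 2 for his '
    'non-sense in a sequence of fakely polite points.'
)

# The cue: the first words of a rebuttal, not of another definition.
CUE = '"We thank reviewers 1, 2, and 3, for their time and insightful comments'

def build_prompt(definition: str, cue: str) -> str:
    """Concatenate the example text with the opening of the desired output."""
    return definition + "\n\n" + cue

prompt = build_prompt(DEFINITION, CUE)
assert prompt.endswith(CUE)  # GPT-3 will continue from mid-sentence here

# With the 2020-era API this would be submitted roughly as (parameters assumed):
# import openai
# openai.Completion.create(engine="davinci", prompt=prompt,
#                          max_tokens=500, temperature=0.9)
```

The only essential detail is that the prompt ends mid-rebuttal; sampling settings are ordinary tuning.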

The second completion finished quickly and, to my surprise, began writing further definitions:

…1. We did not describe any ‘sim­ple algo­rithm’ for the repeat­ed-mea­sures ANOVA in the abstract, the dis­cus­sion, or in the Meth­ods. Such a descrip­tion of our pro­to­col would be inco­her­ent and prob­a­bly invalid any­way.

  1. We know of no ‘more appro­pri­ate analy­sis’ for the mean and vari­ance of the pro­por­tions than to com­pute the esti­mates of the vari­ances direct­ly. Since we have done this cor­rect­ly, it makes no sense to pro­ceed fur­ther as sug­gested by Reviewer 2.

  2. It does not make sense to carry out the tests for homo­gene­ity that Reviewer 2 sug­gest we do. Indeed, we never said we should carry out such tests. Our pre­cise point was that we should not com­pute the stan­dard errors of the treat­ment means, and Reviewer 2 seems to think that we have argued for the oppo­site point, which we have not.

  3. Again, Reviewer 2 appar­ently did not read the Meth­ods sec­tion care­fully and was con­fused by some sec­tions that he or she did read. A bet­ter review would have pointed this out.

  4. We note the Reviewer 2’s overly gen­er­ous sug­ges­tion that we write a ‘spe­cial’ rebut­tal. At the very least, we assume that Reviewer 2 knows what a rebut­tal is, and we would there­fore argue for a crit­i­cal com­ment instead."

Noise

Noise is any­thing you don’t want or need, a ran­dom or dis­tract­ing fac­tor in a sys­tem, process, or set of data that obscures or cor­rupts infor­ma­tion. It is also the sound pro­duced by old time radios when no trans­mis­sion is pre­sent, which is roughly what some fields of sci­ence would have to resem­ble if some researchers had their way.

“New research”

A form of sci­en­tific pub­lish­ing that includes, among oth­ers, a report on pre­vi­ously per­formed stud­ies; a report of a new sta­tis­ti­cal test or set of sta­tis­ti­cal tests that were per­formed by the author of the “study” with­out the help of other sci­en­tists; a report of a small set of sam­ples, not nec­es­sar­ily from a pop­u­la­tion or process, which could, but have not been col­lected in a sys­tem­atic way and may not even be accu­rately described, in the guise of which con­clu­sions are drawn about the world; and a report of the result of per­form­ing an unval­i­dated com­puter sim­u­la­tion of some­thing on a small set of data with­out stat­ing whether such an attempt was suc­cess­ful or not.

“Nice to see”

A widely and unoffi­cially accepted phrase in sci­en­tific cir­cles mean­ing: “It is bet­ter to live in an imag­i­nary world cre­ated by data manip­u­la­tion and overly opti­mistic pre­dic­tions than in the world that exists in real­ity”.

“Researcher”

[noun] A form of mod­ern indus­try based on a mix of small mol­e­cules of grant money and arbi­trary exper­i­men­tal meth­ods.

“Review”

A form of sci­en­tific com­mu­ni­ca­tion in which the writer (a stu­dent) usu­ally does not give enough infor­ma­tion for the reader to judge the merit of the argu­ments pre­sented in the paper; the writer also does not respond to or com­ment on the sug­ges­tions of the review­ers (some­times known as ref­eree or edi­tor). This form of com­mu­ni­ca­tion is pop­u­lar among stu­dents who wish to leave the bur­den of writ­ing the paper on other peo­ple and are will­ing to pay the price of rejec­tion.

“Sec­ond chance”

A word that sci­en­tists reg­u­larly use when ref­er­enc­ing a jour­nal, con­fer­ence, or grant appli­ca­tion. While there is always an implicit threat that you can get reject­ed, or be accepted with changes, as a sec­ond choice to another par­ty, some arti­cles, papers, grants, and grant pro­pos­als receive mul­ti­ple, even unlim­it­ed, chances of being con­sid­ered. As a result, the word may be inter­preted as a bribe or a threat depend­ing on whether you are the writer or a review­er.

“Reviewer”

A par­tic­i­pant in the review of a grant, paper, or grant pro­pos­al. In spite of being in a poor posi­tion to assess the mer­its of a pro­pos­al, reviewer tends to demand that authors sub­mit their data for sta­tis­ti­cal analy­sis and back their results with it, which the reviewer usu­ally does not. Reviewer usu­ally requires that the author cite his or her own work to prove that he or she is worth review­ing. It is also assumed that the reviewer can detect the slight­est amount of bias in any paper, which the reviewer also assumes has not been cor­rected for.

“Revise and resub­mit”

A sen­tence that authors of sub­mit­ted papers tend to hear from review­ers, edi­tors, and the like. A form of sci­en­tific com­mu­ni­ca­tion that begins with sub­mis­sion of a paper (for which the review­ers may or may not have been con­sult­ed), pro­ceeds through review and crit­i­cism of the paper (typ­i­cally by one or two review­ers, not a full panel of review­ers, unless the paper is deemed to be of supreme impor­tance), and, if it is “revised and resub­mit­ted”, usu­ally ends with another cri­tique of the new paper by one or more of the review­ers, and another round of “revise and resub­mit”, ad infini­tum. In the absence of fraud, retrac­tions, lit­i­ga­tion, mass incom­pe­tence, or insan­i­ty, these “revise and resub­mit” cycles may be taken as the only method of sci­en­tific com­mu­ni­ca­tion.

“Retracted”

The most fre­quently used word for a jour­nal arti­cle that was sub­se­quently found to be based on fraud­u­lent data.

“Rigor”

Some­thing for sci­en­tists to aspire to, a state of mind that would not be required if sci­en­tists could be trusted to do their job.

“Sci­ence”

A com­plex web of data, opin­ions, lies, and errors, now con­sid­ered the most impor­tant (be­cause most expen­sive) tech­nol­ogy in the mod­ern soci­ety. To remind you of this, you will fre­quently see sci­en­tists and edi­tors use the word, claim to do some­thing for the sake of sci­ence, or see it used as an adjec­tive.

“Stan­dards”

Stan­dards for data and their inter­pre­ta­tion vary depend­ing on the coun­try or sci­en­tific dis­ci­pline.

“Study”

[noun] A form of com­mu­ni­ca­tion in the sci­en­tific lit­er­a­ture that authors or pub­lish­ers may use to make claims that are not war­rant­ed. Exam­ples include, but are not lim­ited to: The New study of clin­i­cal tri­als that show…, As Dr. X reports in his new study of obser­va­tional data…, The new study demon­strates that…, Study shows that, In a study pub­lished last week…, The study sug­gests that, New find­ings show that, A study of data…

“Study X”

A widely used nick­name for stud­ies with pro­found, not to say earth­shak­ing, con­clu­sions that may or may not be of any rel­e­vance to any­one who reads about them.

“Tech­nol­ogy”

Any bit of soft­ware, analy­sis, mod­el, or other method­ol­ogy that sci­en­tists can be expected to claim as their own, for which they can sell you con­sult­ing or col­lab­o­ra­tion fees.

“Tool”

[noun] Any piece of soft­ware, analy­sis, mod­el, or other method­ol­ogy that sci­en­tists can be expected to claim as their own, for which they can sell you con­sult­ing or col­lab­o­ra­tion fees.

“The topic of the paper”

A wide-rang­ing cat­e­gory of things or ideas that may not have been rel­e­vant when the paper was writ­ten, but which the authors believe the paper should be about. Often, the topic is too broad or a non-topic, but is occa­sion­ally use­ful in order to gen­er­ate sup­port for yet another set of related papers, con­fer­ences, sem­i­nars, webi­na­rs, and so forth, which in turn are used to gen­er­ate more data for “new find­ings”, which, after they are manip­u­lated enough, may end up being pub­lished and gen­er­at­ing yet more data to sup­port a “re-re­view” of the orig­i­nal paper or other things.

“Uni­fied the­ory”

An imag­i­nary set of ideas that sci­en­tists dream of hav­ing so that they could achieve the same results by slightly vary­ing the method­ol­o­gy, thus pre­serv­ing their orig­i­nal pub­li­ca­tions.

“Val­i­da­tion”

The process by which sci­en­tists decide that some­thing they built in their spare time works in the way they expect. The val­i­da­tion process requires that you do not attempt to val­i­date it before you pub­lish it and require that the users val­i­date it after.

“Value”

In its most nat­ural form, sci­ence is the abil­ity to study data and ideas; the amount of respect and pub­lic­ity that a sci­en­tist can expect; how well his or her paper was pub­lished; the like­li­hood that a sci­en­tist will receive a grant; whether a sci­en­tist will be taken seri­ous­ly; the like­li­hood that an idea will be accept­ed; how much grant money will be avail­able to pur­chase new toys for this sci­en­tist; whether some­thing is sci­en­tific; the like­li­hood that the papers of this sci­en­tist will be read; the sta­tus of this sci­en­tist; the like­li­hood that a jour­nal, con­fer­ence, or soci­ety will be able to con­tinue fund­ing or exist; how much and how fast this sci­en­tist’s career will pro­gress; the “value” that this sci­en­tist brings to the mod­ern soci­ety; and the like­li­hood that this sci­en­tist will con­tinue to be fund­ed. While the actual val­ues for sci­ence are hard to pin down, what peo­ple believe them to be is another mat­ter. For exam­ple, while some believe that sci­ence is impor­tant, even if it does­n’t have any con­crete val­ue, oth­ers believe that it’s the abil­ity to con­tinue read­ing abstracts or study­ing it that has val­ue. Still oth­ers believe that what mat­ters is how much grant money is cur­rently avail­able, even if the researchers are actu­ally doing some­thing that nobody knows about. Some believe that the fact that some­one is in a posi­tion to give a grant, or has writ­ten a paper, or, even bet­ter, has pub­lished that paper, actu­ally means that it has a lot of val­ue. Oth­ers believe that what mat­ters is not what you pub­lish, but how you pub­lish it and that all research must be pub­lished in the most pres­ti­gious jour­nal, even if you are not that much of a researcher. The real­ity is that no one really knows what sci­ence is, or what value it has; how­ev­er, we all know that if we deny that sci­ence has val­ue, we might be able to get rid of it and use this space for some­thing else.

“Val­i­da­tion step”

Another name for a ran­dom set­ting of a para­me­ter of a mod­el, sim­u­la­tion, or algo­rithm.

“Ver­ify”

A step in a pro­ce­dure that, if per­formed prop­er­ly, will tell you whether the pre­vi­ous step, or any­thing in the last two years of your work, was per­formed prop­er­ly.

I had to keep con­tin­u­ing it for more defi­n­i­tions:

“Uni­fied the­ory”

An imag­i­nary set of ideas that sci­en­tists dream of hav­ing so that they could achieve the same results by slightly vary­ing the method­ol­o­gy, thus pre­serv­ing their orig­i­nal pub­li­ca­tions.

“The usual pro­ce­dure”

A method of study, val­i­da­tion, and pub­lic com­mu­ni­ca­tion which, if fol­lowed, may lead you to be able to claim that your work may offer a decent return on invest­ment.

“Wide range”

When used in the con­text of sci­en­tific results and mod­els, it usu­ally means that the results and mod­els are not likely to be used for their intended pur­pose, if they have one.

“Wider range”

When used in the con­text of sci­en­tific results and mod­els, it usu­ally means that the results and mod­els are likely to be used for their intended pur­pose, if they have one.

“Writeup”

A form of sci­en­tific com­mu­ni­ca­tion in which the author states the infor­ma­tion he or she wanted the read­ers to extract from the paper while mak­ing it as diffi­cult as pos­si­ble for them to find it.

“Write-up”

1. [noun] A style of writ­ing used in aca­d­e­mic papers to dress up data and results. 2. [verb] The act of pre­sent­ing data in a way that makes it look less impres­sive and more com­plex than it really is. 3. [verb] The act of writ­ing a man­u­script that is impos­si­ble to fol­low. 4. [verb] An activ­ity that is often per­formed by the most bla­tant or care­less users of the Eng­lish lan­guage, all for the sake of show­ing off their eru­di­tion.

“Writer’s block”

A com­mon afflic­tion among stu­dents, aris­ing from var­i­ous caus­es, such as: their desire to sell their ideas for a profit, their inabil­ity to real­ize this desire, the fact that their ideas are not sell­ing and will not be bought, and the delu­sion that most of the wealth and fame in the world would be theirs if they would spend enough years doing sci­ence.

I tried tai­lor­ing the prompt a lit­tle more pre­cisely to gen­er­at­ing defi­n­i­tions:

A Dev­il’s Dic­tio­nary of Mod­ern Sci­ence & Acad­e­mia
Reviewer #2

“Researcher” [noun]

A form of mod­ern indus­try based on a mix of small mol­e­cules of grant money and arbi­trary exper­i­men­tal meth­ods.

“Noise” [noun]

Noise is any­thing you don’t want or need, a ran­dom or dis­tract­ing fac­tor in a sys­tem, process, or set of data that obscures or cor­rupts infor­ma­tion. It is also the sound pro­duced by old time radios when no trans­mis­sion is pre­sent, which is roughly what some fields of sci­ence would have to resem­ble if some researchers had their way.

“Review” [noun]

A form of sci­en­tific com­mu­ni­ca­tion in which the writer (a stu­dent) usu­ally does not give enough infor­ma­tion for the reader to judge the merit of the argu­ments pre­sented in the paper; the writer also does not respond to or com­ment on the sug­ges­tions of the review­ers (some­times known as ref­eree or edi­tor). This form of com­mu­ni­ca­tion is pop­u­lar among stu­dents who wish to leave the bur­den of writ­ing the paper on other peo­ple and are will­ing to pay the price of rejec­tion.

“Reviewer” [noun]

A par­tic­i­pant in the review of a grant, paper, or grant pro­pos­al. In spite of being in a poor posi­tion to assess the mer­its of a pro­pos­al, reviewer tends to demand that authors sub­mit their data for sta­tis­ti­cal analy­sis and back their results with it, which the reviewer usu­ally does not. Reviewer usu­ally requires that the author cite his or her own work to prove that he or she is worth review­ing. It is also assumed that the reviewer can detect the slight­est amount of bias in any paper, which the reviewer also assumes has not been cor­rected for.

“Sci­ence” [noun]

A com­plex web of data, opin­ions, lies, and errors, now con­sid­ered the most impor­tant (be­cause most expen­sive) tech­nol­ogy in the mod­ern soci­ety. To remind you of this, you will fre­quently see sci­en­tists and edi­tors use the word, claim to do some­thing for the sake of sci­ence, or see it used as an adjec­tive.

“Revise and resub­mit” [verb]

A sen­tence that authors of sub­mit­ted papers tend to hear from review­ers, edi­tors, and the like. A form of sci­en­tific com­mu­ni­ca­tion that begins with sub­mis­sion of a paper (for which the review­ers may or may not have been con­sult­ed), pro­ceeds through review and crit­i­cism of the paper (typ­i­cally by one or two review­ers, not a full panel of review­ers, unless the paper is deemed to be of supreme impor­tance), and, if it is “revised and resub­mit­ted”, usu­ally ends with another cri­tique of the new paper by one or more of the review­ers, and another round of “revise and resub­mit”, ad infini­tum. In the absence of fraud, retrac­tions, lit­i­ga­tion, mass incom­pe­tence, or insan­i­ty, these “revise and resub­mit” cycles may be taken as the only method of sci­en­tific com­mu­ni­ca­tion.

“Rigor” [adj]

Some­thing for sci­en­tists to aspire to, a state of mind that would not be required if sci­en­tists could be trusted to do their job.

yielding more definitions. I seeded each continuation with a single letter, to avoid running off into the “Z”s and truncated completions (since GPT-3 will proceed A–Z, starting from the last definition, as is logical, unless otherwise prompted):

“A role for…” [phrase]

A fre­quent phrase found in sub­mit­ted and pub­lished papers; it often indi­cates that the authors have noth­ing to say about the topic of their paper. In its more emphatic form, “A role for…” usu­ally indi­cates a strug­gle by the authors to take a side on an issue, after a lengthy attempt to be both non-com­mit­tal and a sup­port­ing party to all sides, as often hap­pens in “mol­e­c­u­lar and cel­lu­lar” or “basic and trans­la­tional” research.

“Reviewer” [noun]

A par­tic­i­pant in the review of a grant, paper, or grant pro­pos­al. In spite of being in a poor posi­tion to assess the mer­its of a pro­pos­al, reviewer tends to demand that authors sub­mit their data for sta­tis­ti­cal analy­sis and back their results with it, which the reviewer usu­ally does not. Reviewer usu­ally requires that the author cite his or her own work to prove that he or she is worth review­ing. It is also assumed that the reviewer can detect the slight­est amount of bias in any paper, which the reviewer also assumes has not been cor­rected for.

“Res­i­dent”

An adjunct fac­ulty mem­ber that admin­is­ters teach­ing assist­ant­ships, sits on depart­men­tal com­mit­tees, and seems to live on the depart­men­tal lounge. A kind of “hon­orary” fac­ulty mem­ber that is not yet an actual fac­ul­ty.

“The Action Plan for Diver­sity” [noun]

A vague doc­u­ment cir­cu­lated on cam­puses in recent years to remind the fac­ulty to be aware of dis­crim­i­na­tion and pre­sum­ably com­bat it in the depart­ments. It is often taken as a gen­er­al­ized insult of stu­dents, who tend to see no way around this plan with­out com­mit­ting aca­d­e­mic fraud.

“Bias” [noun]

A form of men­tal abuse by which a sci­en­tist turns an exper­i­ment in such a way that the results prove his or her point of view or grants get fund­ed.

“Behav­ioral Stud­ies” [noun]

Some­times mis­taken as a field of sci­ence. The terms Behav­ioral Stud­ies are actu­ally a com­bi­na­tion of dis­ci­plines that usu­ally includes busi­ness eco­nom­ics, soci­ol­o­gy, and engi­neer­ing.

“Human­i­ties” [noun]

A broad term used to refer to any aca­d­e­mic dis­ci­pline (such as Eng­lish, phi­los­o­phy, reli­gious stud­ies) in which sci­ence has been for­got­ten. To avoid con­fu­sion between the sci­ences and human­i­ties, remem­ber that sci­en­tists use hard tools and human­i­ties their soft tools when exam­in­ing issues and try­ing to per­suade audi­ences. In other words, both sci­en­tists and human­i­ties rou­tinely lie (it’s part of their job), but they do it in differ­ent ways.

“Literature Review” [noun]

A paper, commonly published in a peer-reviewed journal, that advertises the author’s skills and accomplishments without using any evidence. The reader is left to believe the work must have been done, since it has been advertised and is “published”.

“The Literature” [noun]

A name given to other people’s published papers, referred to by scientists without actually reading them.

“Track Record” [noun]

When scientists use this term they refer to the research done by someone else (usually a student) in order to avoid having to do research.

“Discussion” [noun]

The part of a scientific paper where the writer summarizes the points made in the body of the paper and acknowledges certain acts and circumstances that helped with the completion of the study. The author should carefully state in this section whether or not the study reported in the paper could have been done otherwise and should clearly state that if he or she had the opportunity, he or she would definitely repeat the study again.

“Role model” [noun]

A person who can be an inspiration to others because of his or her academic and personal achievements in general, or because they conduct academic misconduct on an industrial scale. The most important qualification for becoming a role model for others is to be judged sufficiently inept that no one will question anything you do, or to become the successor of someone who has already acted as a role model.

“Faculty” [noun]

Used loosely by scientists to mean any group of people with advanced degrees. Typically used when you have done something stupid and want to inform others that it wasn’t you who did it, but rather those other crazy people over there who won’t put their titles after their names.

“Significant” [adjective]

A word which defines the claim made in a paper (no matter how trivial). It is a critical term in hypothesis testing, in that the sign of the test statistic must be evaluated and compared to a critical value in order to support or reject a hypothesis. Significance is designated by two-tailed tests when the direction of the effect is not specified. If a two-tailed significance test is used, the “p”-value needs to be halved, due to “double-counting”. In any event, “significant” remains of critical importance in any hypothesis test, since the whole idea is to come up with a “sign”, i.e. plus or minus (“+/-”), or a wishy-washy “statistically significant”, without actually saying anything substantive about the matter.

“Random assignment” [verb]

A way to arrange study groups so the resulting data is conclusive and unbiased. Typically used in the social sciences by graduate students who refer to it while explaining why they broke down the groups this way.

“Cite” [verb]

To refer to a paper or journal as a source for support, when in reality you have never read it and did not understand it.

“Bibliography” [noun]

A list of citations of references used to support a particular point in a paper. A list that somehow includes several references that didn’t actually end up in the final bibliography of the paper.

“Let’s stipulate” [phrase]

A preface to any statement, that means I know I am going to piss you off with my statement, but since you are such a nice person, I am first going to give you a chance to let me piss you off without responding.

“How is that different from…?” [phrase]

A question posed by authors when they have been accused of stealing ideas. They are attempting to show that they have never read the papers that contain the ideas they stole, nor even looked at the ideas in any detail, and are just fishing for compliments.

“Citation” [noun]

An elaborate form of review, typically found in order to impress those who do not know better, in the sciences, humanities, or even the social sciences. A citation typically involves misreading or misquoting an author and talking about something he or she did not write about or intend. It is assumed that you need to carry out a large number of citations in order to have science, probably because at least one citation can be found for any fact, event, or incident in history; the effectiveness of this fact is increased if the author is long dead and can not disprove it.

“Physics” [noun]

A science. Physicists like to think their science is something you can do without a plan, something that relies on intuition and inspiration, something that is “deep”.

“Scientific revolution” [noun]
A set of techniques and methods used by scientists in the course of their work. It is assumed that there was a total scientific revolution in the past century or so that allowed scientists to avoid even thinking about the quality or usefulness of their science.
The scientific revolution was started by Henri Poincaré (mathematics), Marie Curie (radioactivity), Max Planck (quantum physics), Albert Einstein (relativity and revolutionized physics), Dmitri Mendeleev (periodic table of elements), Niels Bohr (quantum mechanics), Lise Meitner and Otto Hahn (fission), Linus Pauling (molecular biology) and many others.

See also: Einstein’s Method, Pauling’s Method, Bohr’s Method, Poincaré’s Method

“Scientific method” [noun]
A set of steps which a scientist (or “researcher”) would probably not follow in practice: planning of research, including definition of problems, hypothesis formulation, selection and design of experiment(s), collection and entry of data, execution of experiments, data analysis and interpretation, preparation of final report, implementation of suggestions. In actual scientific practice the most important part of the scientific method is to observe and collect data without a plan or preconceived expectations about what is causing what. It also helps to be unbiased in selecting the data to include in the study and to pick experiments at random. Additional steps include writing a paper, a process that usually involves the most violations of the scientific method.

“Scientist” [noun]
A person who has a laboratory, access to large sums of money, graduate students, or all three.

“Revise and resubmit” [verb]
A sentence that authors of submitted papers tend to hear from reviewers, editors, and the like. A form of scientific communication that begins with submission of a paper (for which the reviewers may or may not have been consulted), proceeds through review and criticism of the paper (typically by one or two reviewers, not a full panel of reviewers, unless the paper is deemed to be of supreme importance), and, if it is “revised and resubmitted”, usually ends with another critique of the new paper by one or more of the reviewers, and another round of “revise and resubmit”, ad infinitum. In the absence of fraud, retractions, litigation, mass incompetence, or insanity, these “revise and resubmit” cycles may be taken as the only method of scientific communication.

“Shared data” [noun]
Data that both a reviewer and a researcher have access to but do not share in advance.

“Shared future” [noun]
An outcome (rarely achieved) that is more pleasant for both the reviewer and the researcher than if there is no shared future. This usually refers to an outcome in which the reviewer either accepts the paper or at least recommends re-writing the whole thing rather than rejecting it. This is often viewed with suspicion by reviewers.

“Sharing of data” [noun]
The belief of some people that researchers should have all the data when doing a study; this would allow the reviewer to check the sources and calculations and check for bias. This would limit what the reviewer could say to “the statistics were fine but your data wasn’t”. It is very difficult to achieve this state. It can be accomplished if the scientist keeps detailed notes of everything he or she is doing.

“Statistical error” [noun]
A mistake made by researchers that can somehow be made up for by using large enough sample sizes, usually called statistical significance; an event that could not have been predicted, analyzed, or provided for in advance, therefore

“Dumb” [adj]

The amount of information scientists can be bothered to remember before they either dismiss it as incorrect data, or ignore it completely.

“Surprised” [verb]

Being “surprised” by the result of an experiment does not necessarily mean that scientists believe it is wrong, but usually means they are not sure what to say about it.

“Oppression” [noun]

A state of mind in which laboratories underfinanced by the government can be considered financially secure, or any noise that scientists don’t like or want to hear.

“Reviewer” [noun]
A participant in the review of a paper. In spite of being in a poor position to assess the merit of a proposal, the reviewer tends to demand more information, more experiments, more statistics, and a tighter theoretical interpretation than the authors are willing to provide. The reviewer usually rejects any responsibility for his or her own bias being exposed, citing the fact that the author has not attempted to prove that bias is absent, nor made sufficient allowance for human error. Most referees seem to assume that they have given enough information for their opinions to be taken seriously, though they were not the originator of the proposal. The reviewer usually requires that the author cite his or her own work to prove that he or she is worth reviewing. It is also assumed that the reviewer can detect the slightest amount of bias in any paper, which the reviewer also assumes has not been corrected for.

“Theoretical interpretation” [noun]
Another form of scientific communication that cannot be proven, but can be used to explain anything after it has been published or presented. It is always accompanied by an appeal to authority, data, and meticulous theoretical interpretation (see below).

“Screwed” [adj]
Screwed is generally used in circumstances in which the author has lost everything, in his paper or grant proposal. Applies best to situations where the author finds out, after waiting to hear for months, that it has been decided in his absence that funding will not be extended or alternative sources of funds are not available to continue the work. For example, when an experiment goes downhill or a paper is rejected. Also applied when the author of a paper or grant proposal discovers that he must carefully state and defend his own results in order to get feedback on what could be wrong with his work, because the readers may not be able to figure it out for him. It is also suggested that this term apply whenever two or more editors or reviewers decide they can all influence the final outcome of a paper, when only one review should determine if the paper is publishable. Recommendation: always have multiple concurrent submitted manuscripts in the system.

“Argument” [noun]
An instance of convincing someone that you deserve credit for your position (i.e., problem-solving skills).

“Fail” [verb]
In scientific jargon, failure means death by hoax or accidental plagiarism, depending on the reputation of the lab (e.g., if tenure-level scientists are involved).

“String theory” [noun]

A theory in physics which is about 99% false, but people seem to like it, so they keep funding it. A good example of the difference between a “scientific” concept and a “semi-scientific” concept, or even an unscientific concept.

“Theory” [noun]

An unproved hypothesis; a conjecture that might or might not be right; use of this term forms the basis of the common game of turning the scientific method on its head. The most popular types of theories are the theory of intelligence (or unintelligence), time theory, quantum theory (of course, even more primitive), and anything that is implied by string theory.

“Research” [verb]

Something done with the money provided by a body or person other than the principal researcher(s), generally private individuals and businesses hoping to get something out of it. Traditionally involves experimentation.

“Publish or perish” [phrase]
Used by scientists to remind other scientists how important it is to them, if they are paying attention at all, which they rarely are. A method of scientific communication where papers are subsidized by fraud, incompetence, or stupidity, depending on the field of science involved. Each author having to prove his or her worth by churning out papers that others

“String theory” [noun]

A set of mathematical formulas that can be used to predict which particles exist (or do not exist).

“Science group” [noun]

A collection of researchers in various fields, with goals including one or more of: attracting funding, specifically young, naive, and gullible grad students; inbreeding graduate students in order to maintain affiliation with the group; writing and publishing papers specific to the group’s activity in order to continue the goal of attracting funding and inbreeding graduate students; calling in favors from University administrators in order to preserve and continue the goal of attracting funding and inbreeding graduate students; and sacrificing sheep and lambs in order to obtain funding and inbreed graduate students.

“Group leader” [noun]

Person whose function appears to consist mainly of ordering subordinates (graduate students) to fill out grant applications, enter grant data into a spreadsheet, teach undergraduates laboratory techniques not commonly taught elsewhere, submit papers to a journal any time before the end of 2002 (record will be broken), devote time to discovering something useful or interesting about science outside of the parameters determined by the group leader, ever working on problems specified by the group leader if they cannot be worked on inside the parameters determined by the group leader. All of these activities are considered necessary to support the “science group’s” goal of attracting and inbreeding graduate students, and provide opportunity for satisfactory career development for the group leader.

“Special material” [noun]

Something that only has to be done once, where other people will then do it again every time a paper requires it. Or conversely, something that cannot be repeated because of its poor reproducibility (special uncertainty, uncertainty special).

“Special relativity” [noun]

Advanced idea described in Einstein’s 1905 papers on the subject, which states that, from a suitable point of view, “E=m,” where “E” is energy, “m” is mass, and “a” is “Profundity.” Except for arcane fields such as string theory, special relativity is the most widely accepted of Einstein’s work. Naturally, papers based on special relativity are the most likely to receive “special material”.

“Speculation” [noun]
One of Newton’s Laws of Science, also known as “Newton’s Law of Speculation”, which states that “Speculation breeds Confusion”, an effect well illustrated by how theoretical particle and string theorists debate scientific issues with each other. Confused? You should be. After all, you’re dealing with a bunch of theoretical physicists.

“Professor” [noun]

The highest academic rank available, a level usually associated with Freedom of Speech. It may take 35 years of service before the University bestows that title.

“Publication Credibility” [noun]
A subjective notion, as its name implies, based almost entirely on the sexiness or importance of the field in which the research was done and not the quality of the writing and presentation.
Cycling through the prompt as length permitted:

“Placebo” [noun]

An inert substance used to placate a patient or control group. It is largely composed of sugar.

“Et al.” [proper noun]

Et al. is an abbreviation of the Latin phrase et alii, meaning “and other people”. In the context of science, it is typically used to shorten the name of a list of authors whose name and affiliation would otherwise need to be written out in full. It is used when the number of authors is small enough that the abbreviation would not render the reference too obtuse and cumbersome, while at the same time remaining large enough that there are at least two other people involved in the study (and hopefully a control group).

“Referee” [noun]

A critical reader, particularly one with the power to reject your submission. In the absence of more efficient forms of communication, peer review depends on these people. See “Reviewer”.

“Shared values” [noun]

A set of principles, beliefs, theories, methods, and operational definitions that all scientists share and use. Never spoken of aloud in public.

“Communication” [verb]
A method of persuasion that allows other researchers to review or criticize your work. You can communicate with others by reviewing articles, which allows them to revise their work; participating in panels, which allows others to hear your opinions; attending conferences, which allows you to strike up conversations at late-night dinners; joining committees, which lets you talk at any time; running experiments, which allows you to change the outcome based on the result; and receiving grants, which allows you to tell people what to do. You are likely communicating now.

“Panel” [noun]

A group of individuals who meet at some given time to discuss something. A more bureaucratic version of peer review for larger groups of individuals or committee members who have already established their worth in research.

“Data” [noun]
Information that appears as graphs in journals. Not surprisingly, this information is critical for interpretation and forming conclusions. Scientists spend much of their lives looking for data.

“Fraud” [noun]
A very evil thing that must be perpetrated or discovered in order for the paper to be accepted for publication. Sometimes fraud occurs even if both the scientific community and editors are prepared to reject the paper at the first sign of fraud, just because they know how bad it would look if fraud were found later. See also “Legitimation”.

“Legitimation” [verb]
The process whereby a fraudulent publication is converted into a legitimate publication by virtue of its appearance in a peer-reviewed journal. This can also be considered a type of reassurance that “the checks and balances are working”.

“Retraction” [verb]
The formal rejection of an accepted scientific claim after its publication. In contrast to “withdrawal”, which is when a paper is removed from publication with the consent of the author(s). See “Fraud”.

“Lies” [noun]
A form of deceptive speech that appears as “honest mistakes” or “inaccuracies”, which are often rejected from publication as a result. In the absence of other sources of information, however, lies are required for the maintenance of the scientific enterprise. See also “Journalistic integrity”.

“Scientists” [plural noun]
A group of people engaged in scientific research or education. Scientists vary widely in terms of age, moral character, ancestry, and success. Scientists with noteworthy accomplishments are sometimes described as “leaders” and regarded as experts in their fields, which leads to the tendency to treat them like demi-gods.

“Health” [noun]
A number indicating how unhealthy something is. When applied to humans, it quantifies how in need of health care someone is.

“Clinical research” [noun]
Research conducted on humans, e.g. clinical trials and epidemiological studies. Researchers do not like this kind of research because humans are unresponsive and unreliable.

“Funding source” [noun]
Those who finance science by paying people to do things that the funder might not want to pay them to do if the funder knew what they were doing. For example, giving people money to research tobacco without actually running cigarettes through their noses would not be what the tobacco industry wants scientists to do for them. Some funding sources impose their will on the researchers by making sure that their funding is only allocated if certain lines of research are followed and other lines are not (this is sometimes known as a budget), while others may let the scientists do anything they want (this is sometimes known as no budget). The nature of research findings thus depends on the budget.

“Authorship” [noun]
The process whereby researchers “publish together”. The precise implications of this process depend on the publication type. In most cases, authorship represents the researcher’s contribution to the paper; however, plagiarism is also sometimes involved, especially if multiple authors fail to cite earlier work on which their own paper depends. There is also another kind

“Journal Impact Factor” [noun]

According to some, it is a value that corresponds to the average number of citations of articles published in a given journal, if the interpretation is right. Otherwise, it is a completely arbitrary number, computed from the number of times articles published in a given journal in the last two years were cited by other articles published in other journals, the assumption being that every paper published in a scientific journal must be essential to someone else’s research, or at least that that person would like to be able to cite it. The difficulty with this assumption is that the average time from submission to publication of an article is now approximately 12 months, and the current record stands at three years for Biochem. J. Proc. (2000). This means that three times as many papers have to be published every year as are actually written, with more and more papers being submitted and rejected every year (the reality is even worse, but we don’t have numbers), and with different standards applying to different journals, which are becoming increasingly specialized. All of these “facts” considered, the best any researcher can say about the Impact Factor of a given journal is: “I know it when I see it”. Note also: This formula can produce values up to 4 for an article appearing in a journal containing only that article, so one “article” can receive as many “citations” as a journal.

“Sterile” [adj]

Said of that which is dry and lifeless; devoid of humor; also, said of scientific writing and methods when germane to the matter at hand.

“Grant” [noun]

A form of scientific communication involving a large piece of paper, on which is written an enormous sum of money. The grant-writer then sends his or her (because they are almost always written by men) proposal into a maze of office politics, legislative/funding agency reshuffling, and intellectual and interpersonal competition. The prospect of acquiring a grant makes up for the fact that grant-writers receive no practical training in writing grants, have no idea what kind of “research” they will have to conduct with it, and generally start research as soon as they are finished writing the grant (if they ever are). To be on the safe side, please note: One does not have to win a grant to “win a grant”; one must simply apply for a grant. Just do it.

“Peer review” [noun]

A system of literary communication that consists of first submitting a manuscript to the editor, who (if he has not been fired) usually asks at least one reviewer to read the manuscript. If there is any doubt about whether the editor has lost his mind, or whether the manuscript has some merit, then the reviewers and editor sit down over coffee and discuss the manuscript (in conference), which is described as “peer review”. This form of communication does not provide either editors or authors with much feedback about their manuscript, but it makes reviewers feel important.

“Precise” [adj]

Said of everything except experimental results; it also means correct, appropriate, or in accordance with the relevant standards. But when the experimental results do not agree with the model of the hypothesis, the phenomenon being studied becomes “precise”, and one “cannot make predictions about precise experiments”. Please keep in mind that most theories are meant to be general in nature, so they must also be imprecise in their predictions. If they could predict with precision, they would not be general, but they would still be interesting.

“Theory” [noun]

A scientific explanation of something, which people use only in situations where they think explaining things is more important than discovering things. Otherwise, scientific theories are always meant to be true, since they are believed by people with education; to a scientist, a theory is true; to a person with a higher IQ, a theory is a fact; and to a citizen with a lower IQ, a theory is an opinion.

“Significance” [noun]

Significance refers to whether a result of a study is somehow important. If a significant result is found, then the problem that the researchers have studied is of real-world importance. If the results aren’t significant, the problem is not important; therefore, more money and time can be wasted by pursuing it. (E.g. “This study showed that chocolate makes rats temporarily hyperactive, so we will look for ways to inhibit this reaction in humans. Thank you for your support.”)

“ResearchGate” [noun]

A website of questionable purpose that allows scientists to upload their “results”, resulting in the same sort of information clutter and bad or incorrect data as any other social media network.

“Rigor mortis” [noun]

A state of rigor in which scientists do not think outside the paradigm, no matter how much it does not work; rigor mortis is characterized by the desire of researchers to prove their prior research correct.

“Important” [noun]

(British English: “Important”) [adj]

Whether something is important to you personally or not, it is important in a statistical or numerical sense to someone. Therefore, it is always a useful adjective to add to any scientific term, especially when one does not have a better verb.

“Rigorosity” [noun]

Another word for rigor or rigor mortis.

“Rigorousness” [noun]

Rigor’s nonchalant brother.

“Robustness” [noun]

This noun means “resistance to breakage”. This is different from the verb, “robustify”, which is an attempt to make a broken thing more resistant to breakage. When the two words are not confused, robustness is deemed by scientists to be a good thing, even though a greater robustness may mean a higher ability to break.

“Science” [noun]

  1. A specialized activity performed by a set of machines that some people call “scientists”, which is practiced all day long and in all waking hours, in labs and out of labs.
  2. What scientists do with the machines (or just machines) that they call “scientists”. Sometimes, science involves activities that would be considered strange by scientists, but it is important to remember that these people tend to take themselves more seriously than they would others.

“Scientific community” [noun]

A group of people that may share a passion for a common goal but, due to conflicts of interest (and even principles) cannot trust each other, which makes it less susceptible to manipulation or fraud; for this reason, science tends to be done by people in separate small groups that repeat the same experiments on different sets of humans.

“Scientist” [noun]

A person, typically male, who practices science day in and day out on sets of other people who, if they practice science at all, can usually only afford to do it in the evening after their work day is over and not related to any profession or business. Since these scientists have very similar backgrounds and training as each other, many of them talk about science as if they invented it as a profession (see “modesty”).

“Modesty” [noun]

The act of not claiming credit for discoveries or contributions to science that others could also make or make better; see “intellectual modesty”. Also see “science”, “scientist”, “insanity”, and “precious snowflake syndrome”.

“Skynet” [noun]

This ersatz god, in the form of an artificial general intelligence or AGI, is what some scientists hope to build to make themselves more precious than their subjects, to achieve their goal of making all the subjects equal to themselves; in other words, to put all the subjects into a bigger tin box to make themselves feel more important.

“Interdisciplinary Science” [noun]

A type of research that combines scientific activities across fields and subfields (in other words, you combine physicists, engineers, chemists, biologists, and any other specialists in your own discipline to do a different kind of science for a different kind of result); in the laboratory, the practice often involves adding frozen carbon dioxide (CO) and glassware (often Pyrex) into the liquid nitrogen that is produced by oil companies to make carbonic acid. In the laboratory, the scientists are usually not paid for their attendance.

“Scientific Integrity” [noun]

  1. Integrity as used in science, which is vaguely defined as always telling the truth to others and never fabricating the truth for oneself.
  2. The state of being superior to someone else.

“Skimmer” [noun]

An object placed in a liquid to remove fat and grease from the surface, typically used to clean soup and gravy off a plate. In scientific jargon, skimmers are “researchers” who skim off something from a body of work before making the rest public.

“Logic” [noun]

A thing that some scientists employ as an excuse to ignore what they see before their eyes, completely ignoring the fact that if it were to be used in such a way, it would completely invalidate all the results of science since its invention.

“Engage in Debate” [verb]

A phrase used to justify forcing someone to engage with evidence that most people find obvious and straightforward; specifically, you can use logic to (1) make an argument based on logic and (2) force someone to respond on the basis of logic without being able to defend themselves with any evidence but their own logic.

“God” [noun]

  1. See “skynet”.
  2. A universal explanation for any phenomenon not yet explained by science.

“Scientist” [noun]

A field based on science, devoted to completing works for which there will not be enough time in a single lifetime.

“Squash and stretch” [verb]

Another word for the S curve used to fit data to a function. The form of scientific communication in which a student is asked to graph the relationship between a dependent variable and an independent variable (x and y, respectively) against order of magnitude of the independent variable. The result is usually a curve consisting of three sections: a linear or near-linear part, an exponential part, and another linear or near-linear part. This curve typically represents a staircase between the upper right and lower left corners of the plot.

“Some distance above the Earth” [adjective]

Another word for looking at something from ten feet away using binoculars or a telescope.

“The importance of this paper cannot be overemphasized” [adjective]

An expression most commonly found at the beginning or end of a grant proposal or a paper submitted to certain journals which are intended to be linked with prestige and low acceptance ratio. When used in other contexts, it may mean that the writer does not realize the importance of something being discussed (such as in the middle of a review of a paper). This usage may be due to insufficient communication skills or over-estimation of the importance of the writer.

“Think tank” [noun]

A group that is absolutely incapable of challenging or even considering current knowledge because of ignorance of logic, history and mathematics.

“Traditional mathematics” [noun]

Another word for finite mathematics, or applied arithmetic and geometry, usually taught in a freshman year of university. The subject is now almost exclusively taught by so-called “researchers” and “professors” (rather than actual teachers).

“Tenure” [noun]

A form of monetary remuneration given to “researchers” who are known to engage in the act of research; as such it is universally agreed to have no monetary value. Tenure is given to “researchers” in recognition of how much their work is expected to increase the school’s status, the school’s treasury, or their own number of published papers.

“The Future of Human Des­tiny” [noun]

An arti­cle writ­ten by any mem­ber of the fac­ulty in a self­-ag­gran­diz­ing man­ner wherein it is assumed that an arti­cle will result in the future improve­ment of the soci­ety. These arti­cles are com­monly used as elec­tion promises by politi­cians, but rarely achieved. The key word to look for in such arti­cles is “vision”.

“Schol­ar­ship” [noun]

Another word for com­pen­sa­tion given to grad­u­ate stu­dents (who work on fund­ing their edu­ca­tion) for suc­cess­fully com­plet­ing their edu­ca­tion. It is a form of mon­e­tary remu­ner­a­tion known to have lit­tle value as it pays for none of the stu­den­t’s liv­ing expens­es. Schol­ar­ships are largely unheard of in any form out­side the United States and Canada (where they seem not to exist). How­ev­er, the few that do exist are extremely pop­u­lar, as they allow grad­u­ate stu­dents in these coun­tries to be paid for their stud­ies instead of work­ing in fast food joints as most do in coun­tries which do not have schol­ar­ships.

“Your mother wears Army boots” [phrase]

A phrase that loosely trans­lates from chem­istry to: “I don’t agree with your analy­sis or con­clu­sions”. The phrase is pop­u­lar among those who wish to avoid elab­o­ra­tion and expla­na­tion of their dis­agree­ment, sug­gest­ing the dis­agree­ment is obvi­ous. The expres­sion may have orig­i­nated in mol­e­c­u­lar biol­o­gy.

“But For Me It Was Tuesday”

Can GPT-3 write variants of the “But For Me It Was Tuesday” trope? I iterated through a variety of prompts, building up a set of variants and fixing a few GPT-3 attempts, trying to get good variants.

The overall impression I get is that, aside from an understandable tendency to write TVTropes entry-style completions, the situation is similar to the Tom Swifties: GPT-3 can learn the format perfectly and can match up the speaker and the kind of day/action, but then it generally whiffs on coming up with a specific day which might make it clever^36, typically falling back to the ending of “it was Tuesday”—which is reasonable, but less creative & witty than I had hoped.
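The prompt-programming workflow here is nothing more than concatenation: hand-written examples separated by blank lines, ending in an incomplete stub for the raw model to continue. A minimal sketch of that assembly step (the function name is hypothetical; the actual API call to the 2020 OpenAI Beta completion endpoint is omitted):

```python
def build_fewshot_prompt(examples, stub):
    """Concatenate hand-written examples into a few-shot prompt.

    GPT-3 is not finetuned; it infers the format purely from the
    examples in-context, so the prompt is just the examples joined
    by blank lines, followed by an incomplete stub to continue.
    """
    return "\n\n".join(examples + [stub])

examples = [
    'Bison: "For you, the day Bison graced your village was the most '
    'important day of your life. But for me, it was Tuesday."',
    'Pope: "The day you were blessed by God\'s representative on earth '
    'was the most holy and blessed of your life. But for me... it was '
    'Sunday."',
]
# The stub names a new speaker and trails off mid-sentence, so that
# sampling from the model naturally completes it in the same format:
prompt = build_fewshot_prompt(examples, 'Odin: "The day')
```

The resulting string would then be submitted as the `prompt` to a completion request; everything interesting happens in the model, not the code.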

The final set of hand-written or hand-edited prompts:

“But For X, It Was Y”

: “I’m sor­ry. I don’t remem­ber any of it.” : “You don’t remem­ber‽” Bison: “For you, the day Bison graced your vil­lage was the most impor­tant day of your life. But for me, it was Tues­day.”

Jesus: “The day I rose from the dead and harrowed Hell was the most important day in human history. But for me… it was Sunday.”

: “The day Voorhees chopped your boyfriend’s head off was the most impor­tant day of your life. But for me, it was Fri­day.”

Amer­i­can Cit­i­zen: “The day Trump was elected was one of the most ter­ri­fy­ing days of your life. But for me… it was Tues­day.”

Odin: “The day your king­dom fell on the bat­tle­field was the most impor­tant bat­tle of your life. But for Odin, it was .”

Neil Arm­strong: “The day Apollo 11 took off was the most impor­tant date in space his­to­ry. But for me, it was Moon­day.”

Human: “For you, the day your piglets were taken away was the most trau­matic day of your life. For me… it was Sun­day din­ner.”

The Man in Black to Vizzini, The Princess Bride: “The bev­er­age you drank today was the most dis­taste­ful bev­er­age of your life. But for me… it was Teatime.”

Pope: “The day you were blessed by God’s rep­re­sen­ta­tive on earth was the most holy and blessed of your life. But for me… it was Sun­day.”

Amer­i­can Voter: “The day I elected Don­ald Trump was the start of a long night­mare for peo­ple around the world. But for me… it was a day off.”

Mr. Granger to Jen­ny, Bill and Ted’s Excel­lent Adven­ture: “The day I met Bill and Ted and they started sav­ing the world was the most mind-blow­ing and intense day of my life. But for them… it was Tues­day 100 years from now.”

Sales­man: “The day you got your hands on that new sports car was the most euphoric day of your life. But for me… it was pay­day.”

Allergy Suffer­er: “The most recent hol­i­day was the most painful and sneeze-filled day of my life. But for every­one else, it was May Day.”

Completions (topic italicized for readability):

Mr. Wong: “The day Mr. Wong stabbed your wife and chil­dren was the most impor­tant day of your life. But for Mr. Wong, it was Fri­day.”

Pop­eye: “The day I ate your spinach was the most impor­tant day of your life. But for me, it was Tues­day.”

The God­fa­ther: “The day your fam­ily was stran­gled was the most impor­tant day of your life. But for me, it was Tues­day.”

Video Games

Visual Nov­els

Dan­gan­ronpa: “The day she wit­nessed Kyouko Kiri­gir­i’s grue­some mur­der was the most impor­tant day of Maki Harukawa’s life. But for me… it was Tues­day.”

West­ern Ani­ma­tion

Raphael of Teenage Mutant Ninja Tur­tles: “The day he lost his mem­ory was the most impor­tant day of Shred­der’s life. But for me… it was Tues­day.”

Scrooge McDuck: “The day you fell off the boat was the most impor­tant day of your life. But for me… it was Tues­day.”

Toofer on 30 Rock: “The day you saw Liz on a date with Alec Bald­win was the most impor­tant day of your life. But for me… it was Tues­day.”

Ben­der, Futu­rama: “The day I killed the Pres­i­dent of Earth was the most impor­tant day of my life. But for you, it was Tues­day.”

It should be noted that, while it’s usu­ally con­sid­ered a com­edy trope, it is pos­si­ble to do the “thing that day was the most impor­tant day of the per­son’s life” thing seri­ous­ly. Exam­ples include the anime Brigadoon: Marin and Melan’s True Sto­ry, where a mys­te­ri­ous planet (and its mys­te­ri­ous inhab­i­tants) come into view for the first time. The days before are lead­ing up to a string of mur­ders, and that one day… is the most impor­tant day of the char­ac­ters’ lives.

Trans­form­ers Prime: “The day you risked your lives to save the world was the most impor­tant day of your life. But for Opti­mus Prime, it was Tues­day.”

In The Amaz­ing World of Gum­ball, an angry Mr. Robin­son imag­ines a world where Gum­ball dies try­ing to save him, becom­ing a hero to every­one else. For Mr. Robin­son, this is the most impor­tant day of his life. For the other char­ac­ters? Well, they’re going on with their day…

Arnold Schwarzeneg­ger: “The day I became ruler of Cal­i­for­nia was the most impor­tant day of my life. But for you, it was Cinco de Cua­tro.”

Fran­cis Under­wood: “The day I became the major­ity leader was the most impor­tant day in Amer­i­can polit­i­cal his­to­ry. But for you, it was a Tues­day.”

Joker: “The day the clown that scarred my face appeared in my cell was the most impor­tant day of my life. But for you, it was a Wednes­day.”

The Ques­tion: “The day I went mad, was the most impor­tant day of my life. But for you, it was Tues­day.”

Blue Fairy: “The day I gave Pinoc­chio life was the most impor­tant day in world his­to­ry. But for you, it was Tues­day.”

In the Looney Tunes short “Bully For Bugs,” Elmer’s trau­matic ordeal was the day Bugs Bunny ruined his chances at romance.

“The premise that if it hap­pened to you, it was impor­tant, but if it hap­pened to me, it was triv­ial” is actu­ally lamp­shaded by Kim in Red vs. Blue. She states, “In the past two weeks, my cap­ture, my tor­ture, and the end of the world… I guess I never real­ized how impor­tant I was.”

M. Bison: “The day I took the throne was the most impor­tant day of my life. But for you… it was Tues­day.”

Rene Russo: “The day I died was the most impor­tant day of my life. But for you… it was Thurs­day.”

Oliver Stone: “The day I was elected was the most impor­tant day in Amer­i­can pol­i­tics. But for you, it was Thurs­day.”

The Fairy God­mother: “The day I turned my step-daugh­ter into a Step­ford Smiler was the most impor­tant day in the his­tory of the world. But for you, it was a Tues­day.”

Jeffrey Dono­van: “The day I died was the most impor­tant day of my life. But for you… it was Tues­day.”

Gor­don Free­man: “The day your lit­tle girl broke out of her moth­er’s womb was the most impor­tant day of your life. But for Gor­don Free­man… it was Fri­day.”

Cap­tain James T. Kirk: “The day I was born was the most impor­tant day of my life. But for you… it was Tues­day.”

Cat: “The day I made my mas­ter­piece from scratch was the most impor­tant day of my life. But for you, it was Thurs­day.”

Tim­o­thy Dal­ton: “The day I was born was the most impor­tant day in the his­tory of the human race. But for you… it was a Mon­day.”

Gil Gris­som: “The day my wife was killed was the most impor­tant day in my life. But for you… it was Tues­day.”

Cap­tain Amer­ica: "The day I was frozen was the most impor­tant

Aragorn: “The bat­tle of Helm’s Deep was the most impor­tant bat­tle of my life. But for me, it was Wednes­day.”

Doc­tor Doom: “The day I used your face as my door­knob was the most mem­o­rable day of your life. But for me, it was Mon­day.”

A com­mon form of this trope is for one side of an antag­o­nis­tic rela­tion­ship to refer to a cer­tain event or series of events as “most impor­tant” when speak­ing from their own point of view, while the other side regards the exact same events as “one of many” and shrugs it off as unim­por­tant. Con­trast Per­spec­tive Rever­sal for when the other side actu­ally feels this way. For the alter­na­tive of both sides regard­ing the same events as “most impor­tant”, see The Unim­por­tance of Being Dead.

Exam­ples

See also Not Him­self, Not This Time, Break­out Vil­lain, Dead All Along, Not-So-Om­ni­scient Coun­cil of Bick­er­ing, What Hap­pened to the Mouse?, Vil­lain­ous BSoD, Vil­lain­ous Friend­ship.

open/close all fold­ers

Adver­tis­ing

Beavis and But­t-head watch a news story about a squir­rel try­ing to invade the Soviet Union: But­t-Head: “It’s not impor­tant. I’m more con­cerned about this new guy on Beavis and But­t-head who can’t be stopped. What’s up with that?”

Beav­is: “Hey, how can you not be a ‘Con­cerned Amer­i­can’ when you see this?” (points to a man on TV read­ing a news­pa­per) “It says here that the cold war is almost over.”

Anime & Manga

Toguro of YuYu Hakusho, in response to Genkai telling him that fight­ing humans is wrong: “I can’t believe you’re say­ing that! Our fight was the most impor­tant fight in the world!”

In the 2nd Itazura Na Kiss, Fujioka had spent most of her life abroad. And after meet­ing Kouta again, her mem­o­ries from her life as a for­eigner dis­ap­peared because she pri­or­i­tized him as “the most impor­tant thing”.

Comic Books

Green Arrow vs. the Shield: Dur­ing their fight, the Shield threat­ens to use the mother of his infant son as lever­age against him. Green Arrow: You don’t have a kid.

Rhona Mitra: “The day you kicked me in the balls was the most painful day of your life. For me, it was Tues­day.”

Grim Reaper: “The day I killed your wife and son was the most painful day of your life. For me… it was Thurs­day.”

Her­cule to M. Bison: “The day your orga­ni­za­tion was shut down by the World Health Orga­ni­za­tion was the most impor­tant day in crime his­to­ry. But for me… it was… on the sec­ond.”

Pod­casts

This is the descrip­tion given in the TVtropes descrip­tion of Beyond Belief: the show about UFOs, the para­nor­mal, and other Fortean top­ics, that lasted for one sea­son on A&E back in 2005.

Table­top Games

The­ater

In The Imag­i­nary Invalid, Dr. Rene describes his back­story by using this trope; for instance, the day of his father’s death was the most impor­tant day of his life… so far . He then pro­vides a few plau­si­ble excuses as to why he has so few details about it: “Because of my age, and for the acci­dent of being my father’s son.”

Adver­tise­ment:

The Morally Neu­tral “It Was Going To Hap­pen Any­way” trope

Lucifer Morn­ingstar: “Do you really think the day you first met your mother was the most impor­tant day of your life? I’m sor­ry, you’re wrong. The day I dragged your father out of hell was the most impor­tant day of your life.”

Nurse: “The day I picked up the patien­t’s chart and saw your name was the most impor­tant day of my career. And for you, it was Fri­day.”

Ter­mi­na­tor: “The day Skynet declared war on human­ity was the most impor­tant day in human his­to­ry. But for me… it was Mon­day.”

Croupier: “The day you dou­ble-downed with a card higher than a seven was the biggest mis­take of your life. But for me… it was Sun­day.”

The “I Haven’t Done Any­thing Wrong” trope

From Total Drama All-S­tars, the episode “It Came From Japan: Part 1”, Gwen and Court­ney, while play­ing a Pimped-Out Dress on a Large Stick:

Gwen: “The day the dress won the elim­i­na­tion chal­lenge was the most impor­tant day of my Total Drama career. But for you…”

Court­ney: “… it was Tues­day.”

From Wild Kratts, the episode “Gator Down”, Mar­tin and Chris:

Mar­t­in: “The day we spent with the gators was the most impor­tant day of our lives. But for the gators…”

Chris: “It was Tues­day.”

From Total Dra­ma: Pahkitew Island, Episode 4: “The Bold and the Booty-ful” , Brid­get­te, Court­ney, and Eva:

Brid­get­te: “The day I was elim­i­nated was the most impor­tant day in my Total Drama career. But for you, it was Tues­day.”

Court­ney: “The day my boyfriend dumped me was the most impor­tant day of my life. But for you… it was your day off.”

Eva: “The day the spear destroyed the camp was the most impor­tant day in Total Drama his­to­ry. But for you, it was Wednes­day.”

Adver­tise­ment:

From the orig­i­nal Thun­der­birds, the episode “Dan­ger at Ocean Deep”:

Gor­don: “The day we saved the cargo plane was the most impor­tant day in our illus­tri­ous careers. But for you…”

“So he stitched my mouth shut, and now you’re telling me my food is rid­dled with poi­son and my only friend wants to mur­der me. Well, you’ve had a tough year too, haven’t you, Lassie?”

Las­siter: Ohh. Oh man, this is SO MUCH OF A BIG DEAL. Wes: You don’t think it’s a big deal? Las­siter: Well, it is. But this‽? Wes, I have money prob­lems! I’m not liv­ing in a gilded palace like you with a full clos­et. I’m not liv­ing like an Egypt­ian. I have clothes that don’t fit, and shoes that do fit, but hurt my feet. But… my dad’s brother won’t give me any of his inher­i­tance until I’m mar­ried! And then there’s some­one at the office who keeps steal­ing my belong­ings, I think it’s because I killed their dog. And then I think they’re still fol­low­ing me! Oh, and some­one moved into the apart­ment next to mine, but they’re pee

Doom, [1] Doom’s level Marine: “The day you killed me was the most impor­tant day of my life. But for you… it was a Tues­day.”

Wally: “The day your book hit The New York Times best­seller list was the most glo­ri­ous and joy­ous day of your career. But for me… it was a Tues­day.”

The Kid and Kick­-Ass: “The day we don’t give in to you is the day that we’re going to die. But for us… it’s going to be Tues­day.”

Ghost Cop: “The day I died and became a spirit who can walk through walls is my favorite day. But for me… it’s Mon­day.”

Julian: “The day I did­n’t call my wife because I was drunk is the most shame­ful day of my life. But for me… it was a Tues­day.”

Bill Clin­ton: “The day I was impeached was the most humil­i­at­ing day of my polit­i­cal career. But for me… it was Wednes­day.”

Cap­tain James T Kirk: “The day I gave birth to Khan was the most impor­tant day in his­to­ry. But for me… it was Thurs­day.”

Granny: “The day the old dog died was the most hor­ri­ble day in my life. But for her it was just Wednes­day.”

Julius Cae­sar: “The day of my assas­si­na­tion was the most beau­ti­ful day of my life. But for you, I guess it was Thurs­day.”

Work­ing-Class Man: “The day I became roy­alty was the most deserv­ing day of my life. But for me… it was Tues­day. Bloody thing.”

Shrek: “The day your princess got res­cued was the hap­pi­est day of your life. But for Shrek, it was Thurs­day”

Fight Club: “The day you save the world was the great­est day in the his­tory of civ­i­liza­tion. But for me… it was Mon­day.”

Amer­i­can Sol­dier: "The day you spilled your guts to the enemy was the most incred­i­ble and per­sonal expe­ri­ence of your life. But for me… it was a Tues­day.

The Blob Mon­ster, The Blob: “The night I devoured my first human was the most sat­is­fy­ing feast in Blob his­to­ry. But for me… it was the year of my birth.”

Fred­die Mer­cury: “The day you were blown by Queen on a Mardi Gras gig was the most excit­ing night of your life. But for me… it was Wednes­day.”

Uncle Ver­non to Har­ry, Harry Pot­ter and the Philoso­pher’s Stone: “The day you left with­out notice was the day your aunt and I had been wait­ing for all our lives. But for you… it was Tues­day.”

Amer­i­can Cit­i­zen: “The day Trump was elected was one of the most ter­ri­fy­ing days of your life. But for me… it was Tues­day.”

Dolph Lund­gren: “The day I defeated Sylvester Stal­lone was the most tri­umphant day in Rocky his­to­ry. But for me, it was Tues­day.”

Will Mac­Neil: “That day you mur­dered Harry was the most evil thing you’d ever done. But for me… it was Tues­day.”

Bren­nan Huff: “The day you got your license was the most excit­ing thing you’d ever done. But for me… it was Thurs­day.”

Kagato: “The day you doomed the world was a tragic event of epic pro­por­tions. But for me… it was Tues­day.”

Red Bishop: “The time you became a fig­ure in golem his­tory was the most ter­ri­fy­ing day of your life. But for me… it was Sat­ur­day.”

Dead­beat Dad: “The day your mar­riage ended was the most des­per­ate and hor­ri­ble day of your life. But for me… it was Sun­day.”

Mr. Davis: “The day you got knocked out was the most humil­i­at­ing moment of your box­ing career. But for me… it was Fri­day.”

Dex­ter Mor­gan: “The day I got arrested was the day every­thing changed and my life took a turn for the worse. But for me… it was Wednes­day.”

Bill: “The day that I cre­ated civ­i­liza­tion was one of the best days in world his­to­ry, espe­cially for my wife and kids. But for me, it was Tues­day.”

Shooter: “The day you shot the 3-pointer was the most thrilling day of your life. But for me… it was Fri­day.”

Sports­caster: “The day you won the mil­lion dol­lar sports­ball jack­pot was the luck­i­est day of your life. But for me… it was Thurs­day.”

Vera, when her wheel­chair-bound grand­son isn’t get­ting any for a while, to The Kid, Chunk’s favorite band, The Goonies: “The day your band dis­ap­peared was the hap­pi­est day of my life. But for you, it was Tues­day.”

Old Man: “The day you were born was the most joy­ful day of my life. But for your moth­er… it was Tues­day.”

Sin­gle Mother: “The day your father left was the most dis­tress­ing day of my life. But for him… it was Sun­day.”

Samuel L. Jack­son to a deer who just shat on his wind­shield, Deep Blue Sea: “For you… defe­cat­ing all over my car was the worst day of your life. But for me… it was Mon­day.”

Vroom

Priest: The minute I had my first drink, the minute I decided to let Jesus live his life through me… oh man, I was the hap­pi­est man who ever walked the Earth. But for me, it was Wednes­day.

Samuel L. Jack­son in a rare instance of humil­i­ty, on being a Hol­ly­wood icon: “The day I hosted my show, It’s You Guys. Where young peo­ple wanted to meet me. That was the best day of my life. But for me… it was Tues­day.”

Unsorted

In the episode, as quoted on this very page, Stark said that the Dance Off arc was the most won­drous night of his life. At the end of that arc, he was injured in an alley by the truck­er, so he got up and walked straight into the front door of his work­place. Tony was­n’t injured enough to walk straight into work in the lat­ter half, but this does­n’t ruin the arc

“But X Was Y, And Every­body Else Was Like Z”

Larry: “Hey Ned, the day you dis­cov­ered my let­ter was the most impor­tant day in my life. But for you… it was Tues­day.”

Ned: “The day I learned of Princess Susan­nah’s betrothal was the worst day in my life. But for Lar­ry… it was Wednes­day.”

“And This Is Why (Story Log­ic)”

The Fallen: “You have betrayed us, and have deliv­ered this world into dark­ness and decay. And this is why we must kill you… MORTAL!”

Ora­cle: “You once betrayed us and you did deliver this world into dark­ness. And this is why we must kill you…”

Aster­oid Day: “In 3.3 bil­lion years, you betrayed us and you did deliver this planet into dark­ness and decay. And this is why we must kill you…”

U-Moon-Tic: "You have turned your back on us, you have betrayed our ide­als,

Barry Allen: “The day I defeated Eobard Thawne and saved Cen­tral City and the world was the most impor­tant day of my life. But for Wally West… it was Tues­day.”

Michael Jack­son: “The day I was killed was the most impor­tant day of my life. But for Jason Shapiro… it was pay­day.”

Joan Rivers: “The day I died was the most impor­tant day of my life. But for Werner Her­zog… it was Tues­day.”

Nos­tradamus: “The day the King of France drank from my Skull was the most impor­tant day in my life. But for peo­ple around the world… it was Thurs­day.”

Mar­lin to Dory, Find­ing Nemo: “The day I brought you home was the great­est day of my life. But for Nemo… it was Tues­day.”

Manuel San­juan: “The day I returned to Pam­plona was the hap­pi­est day of my life. But for the bulls… it was Fri­day.”

Manuel San­juan: “The day I once again returned to Pam­plona was the hap­pi­est day of my life. But for the bulls… it was Tues­day.”

Dick Grayson: "The day I became Bat­man and struck fear into the hearts of crim­i­nals and vil­lains was the most impor­tant day of my life. But for the rogues gallery… it was Thurs­day

Rick & Morty High IQ Copypasta

A reader requested parodies of the high-IQ copypasta along the lines of the Navy Seal copypasta, and provided several variants to get GPT-3 started, successfully. (It is also a nice demonstration of how GPT-3 gauges the level of typos in a prompt: if there are typos, it will make new typos of its own to imitate the original writer.)

Prompt (topics italicized):

To be fair, you have to have a very high IQ to under­stand Rick and Morty. The humor is extremely sub­tle, and with­out a solid grasp of the­o­ret­i­cal physics most of the jokes will go over a typ­i­cal view­er’s head. There’s also Rick’s nihilis­tic out­look, which is deftly woven into his char­ac­ter­i­sa­tion—his per­sonal phi­los­o­phy draws heav­ily from lit­er­a­ture, for instance. The fans under­stand this stuff; they have the intel­lec­tual capac­ity to truly appre­ci­ate the depths of these jokes, to real­ize that they’re not just fun­ny—they say some­thing deep about LIFE. As a con­se­quence peo­ple who dis­like Rick and Morty truly ARE idiot­s—of course they would­n’t appre­ci­ate, for instance, the humour in Rick’s exis­ten­cial catch­phrase “Wubba Lubba Dub Dub,” which itself is a cryp­tic ref­er­ence to Russ­ian epic I’m smirk­ing right now just imag­in­ing one of those addlepated sim­ple­tons scratch­ing their heads in con­fu­sion as Dan Har­mon’s genius unfolds itself on their tele­vi­sion screens. What fools… how I pity them. 😂 And yes by the way, I DO have a Rick and Morty tat­too. And no, you can­not see it. It’s for the ladies’ eyes only—And even they have to demon­strate that they’re within 5 IQ points of my own (prefer­ably low­er) before­hand.

To be fair, you have to have a very high IQ to under­stand cuck­old­ing. The kink is extremely sub­tle, and with­out a solid grasp of inter­sec­tional fem­i­nism most of the empow­er­ment will go over a typ­i­cal cuck’s head. There’s also the Bul­l’s aggres­sive out­look, which is deftly woven into his role—his per­sonal man­ner­isms draw heav­ily from the mat­ing habits of bovine ani­mals, for instance. The cucks under­stand this stuff; they have the intel­lec­tual capac­ity to truly appre­ci­ate the depths of being cuck­old­ed, to real­ize that it’s not just arous­ing—it says some­thing deep about DIVERSITY. As a con­se­quence peo­ple who dis­like cuck­old­ing truly ARE idiot­s—of course they would­n’t appre­ci­ate, for instance, the power of when the woman says “leave the room Carl you’re break­ing Tyrone’s con­cen­tra­tion,” which itself is a cryp­tic ref­er­ence to the plight of African-Amer­i­can males in the United States. I’m smugly grin­ning right now just imag­in­ing one of those addlepated sim­ple­tons scratch­ing their heads in con­fu­sion as The Bul­l’s strong African seed ejac­u­lates itself on my wife. What big­ots… how I pity them. 😂 And yes by the way, I DO have a cuck tat­too. And no, you can­not see it. It’s for The Bul­l’s eyes only—And even they have to demon­strate that they’re within 7 inches above of my own (prefer­ably high­er) before­hand.

To be fair, you have to have a very high IQ to under­stand . The story is extremely sub­tle, and with­out a solid grasp of the­o­ret­i­cal sto­ry­telling, most of the story will go over a typ­i­cal play­er’s head. There’s also Abby’s nihilis­tic out­look, which is deftly woven into her char­ac­ter­i­sa­tion—her per­sonal phi­los­o­phy draws heav­ily from lit­er­a­ture, for instance. The fans under­stand this stuff; they have the intel­lec­tual capac­ity to truly appre­ci­ate the depths of these char­ac­ters, to realise that they’re not just under­writ­ten—they say some­thing deep about LIFE. As a con­se­quence peo­ple who dis­like The Last Of Us 2 truly ARE idiot­s—of course they would­n’t appre­ci­ate, for instance, the sto­ry­telling in Abby’s exis­ten­tial sex scene “I’m gonna fuck a com­mit­ted dude who’s gf is preg­nant” which itself is a cryp­tic ref­er­ence to epic fetish­es. I’m smirk­ing right now just imag­in­ing one of those addlepated sim­ple­tons scratch­ing their heads in con­fu­sion as Abby’s nine iron unfolds Joel’s head on their tele­vi­sion screens. What fools… how I pity them. 😂 And yes, by the way, i DO have an Abby tat­too. And no, you can­not see it. It’s for the buff ladies’ eyes only—and even then they have to demon­strate that they’re within 5 IQ points of my own (prefer­ably low­er) before­hand. Nothin per­son­nel kid 😎

To be fair, you have to have a very high IQ to under­stand Wild­bow. His world­build­ing is extremely sub­tle, and with­out a solid grasp of the­o­ret­i­cal bul­ly­ing most of the tor­ture porn will go over a typ­i­cal read­er’s head. There’s also card­board out­look, which is deftly woven into her char­ac­ti­za­tion as a Mary Sue—her per­sonal phi­los­o­phy draws heav­ily from the trash­can, for instance. The fans under­stand this stuff; they are wild for the Wild­bow. They have the intel­lec­tual capac­ity to truly appre­ci­ate the depths of his mean­der­ing prose, to real­ize that he’s not just telling me exactly what Tay­lor is think­ing at any given moment, he says some­thing deep about NOTHING. As a con­se­quence peo­ple who dis­like Wild­bow truly ARE idiot­s—of course they would­n’t appre­ci­ate, for instance, the humour in Tay­lor’s exis­ten­tial catch­phrase “Take that, you worm!” which itself is a cryp­tic ref­er­ence to a pub­lish­ing deal that will never hap­pen. I’m smirk­ing right now just imag­in­ing one of those addlepated sim­ple­tons scratch­ing their heads in con­fu­sion as Wild­bow’s genius wit unfolds itself on their Kin­dle read­ers. What fools… how I pity them.

To be fair, you have to have a very high IQ to under­stand

Completions:

why liv­ing in a base­ment with inter­net access is the true essence of free­dom. The lifestyle is extremely sub­tle, and with­out a solid grasp of social­is­ing, most of the incel memes will go over a typ­i­cal base­ment dweller’s head. There’s also the inter­net’s nihilis­tic out­look, which is deftly woven into his char­ac­ter­i­sa­tion—his per­sonal phi­los­o­phy draws heav­ily from the human genetic code, for instance. The BASEDfags under­stand this stuff; they have the intel­lec­tual capac­ity to truly appre­ci­ate the depths of his mediocre Pho­to­shop skills, to real­ize that he’s not just mak­ing hilar­i­ous memes—he says some­thing deep about EVOLUTION. As a con­se­quence peo­ple who dis­like the BASED lifestyle truly ARE idiot­s—of course they would­n’t appre­ci­ate, for instance, the humour in the BASED drag­on’s exis­ten­tial catch­phrase “Ayy lmao” which itself is a cryp­tic ref­er­ence to the mat­ing habits of the wilde­beest. I’m smirk­ing right now just imag­in­ing one of those addlepated sim­ple­tons scratch­ing their heads in con­fu­sion as the BASEDpyre unfolds itself on their com­puter screens. What fag­gots… how I pity them. 😂 And yes, by the way, I DO have a BASED tat­too. And no, you can­not see it. It’s for the LADIES’ eyes only—And even then they have to demon­strate that they’re within a 1,000,000 IQ points of my own (prefer­ably low­er) before­hand. Seri­ously get the fuck off my lawn, kid!

Dragon Age 2. The combat is extremely subtle, and without a solid grasp of tactical combat, most of the encounters will go over a typical player’s head. There’s also David Gaider’s nihilistic outlook, which is deftly woven into his characterisation—his personal philosophy draws heavily from dank memes, for instance. The fans understand this stuff; they are wild for the David Gaider. They have the intellectual capacity to truly appreciate the depths of his sexually ambiguous masterpiece, to realize that he’s not just “bad at writing women”—he says something deep about LESBIANS. As a consequence people who dislike Dragon Age 2 truly ARE idiots—of course they wouldn’t appreciate, for instance, the power of a game where I’m the worthless drone of some worthless elven chick’s who spends most of her time screeching in elven and the rest of it spreading her legs for the loser protagonist, which itself is a cryptic reference to one of the most hatefully sexist conventions in the gaming industry. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as they hear the strains of “I Am the One Shepherd” playing faintly from the stereo speakers. What fools… how I pity them.

chicken tendies. The humour is extremely subtle, and without a solid grasp of theoretical wanking material, most of the jokes will go over a typical joke teller’s head. There’s also Einstein’s nihilistic outlook, which is deftly woven into his characterisation—his personal philosophy draws heavily from Alfred E. Newman literature, for instance. The nerds understand this stuff; they have the intellectual capacity to truly appreciate the depths of word salad, to realize that it’s not just WILD—it says something deep about IDENTITY. As a consequence people who dislike chicken tendies truly ARE idiots—of course they wouldn’t appreciate, for instance, the humour in Einstein’s existential catchphrase “your jokes are so bland you deserve to be dragged to the bottom of the sea and killed by whales,” which itself is a cryptic reference to Christopher Hitchens’ piece The burden of Socrates. I’m smugly grinning right now just imagining one of those addlepated simpletons scratching their heads in confusion as my tongue unfolds these delicious tendies on my mouth. What fools… how I pity them. 😂 And yes by the way, I DO have a cowbell tattoo. And no, you cannot see it. It’s for the chicken tendies only—and even then they have to demonstrate that they’re within 10 watt of my own (preferably lower) beforehand.

Sonic memes. The humour is extremely subtle, and without a solid grasp of theoretical erotica most of the trolls will go over a typical EDiots head. There’s also the Sonic fanbase’s nihilistic outlook, which is deftly woven into their characterisation—their personal philosophy draws heavily from Jacob Truax literature, for instance. The 4chan archive knows this stuff; they are wild for the Sonic fandom. They have the intellectual capacity to truly appreciate the depths of these characters, to realize that they’re not just funny—they say something deep about HATE. As a consequence people who dislike Sonic and it’s hilarious memes truly ARE idiots—of course they wouldn’t appreciate, for instance, the humour in Sonic’s existential catchphrase “YOU’LL NEVER BE AS FAST AS ME” which itself is a cryptic reference to a Youtube comment box. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Rachel J’s n00dz unfolds itself on their screens. What fools… how I pity them.

Portal 2. The humour is extremely subtle, and without a solid grasp of theoretical colourblinds most of the trolling will go over a typical redditor’s head. There’s also GLADOS’s nihilistic outlook, which is deftly woven into her characterisation as a Mary Sue—her personal philosophy draws heavily from Chinese propaganda literature, for instance. The Something Awful archives knows this stuff; they are wild for the Portal fandom. They have the intellectual capacity to truly appreciate the depths of her meandering prose, to realize that she’s not just spouting gibberish—she says something deep about SERIOUS. As a consequence people who dislike Portal 2 truly ARE idiots—of course they wouldn’t appreciate, for instance, the humour in GLADOS’s existential catchphrase “Suck my chassis” which itself is a cryptic reference to a bonus round on the first Portal game. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Chell’s orange jumpsuit unfolds itself on their monitors. What fools… how I pity them.

My Life As A Teenage Robot. The show is extremely subtle, and without a solid grasp of fantasy writing most of the humour will go over a typical viewer’s head. There’s also Jenny XJ-9’s nihilistic outlook, which is deftly woven into her characterisation as a Mary Sue—her personal philosophy draws heavily from Pearls Before Swine, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of the show, to realize that it’s not just transgressive—it says something deep about LIFE. As a consequence people who dislike My Life As A Teenage Robot truly ARE idiots—of course they wouldn’t appreciate, for instance, the humour in Jenny XJ-9 saying “I’ll be back. There’s a lot more MEN I haven’t dated,” which itself is a cryptic reference to the casting choice of the show, because of course she wouldn’t be a guy inside that perfectly anatomically-unrealistic metallic shell. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as wooden dialogue unfolds itself on their television screens. What simpletons… how I pity them. 😂

. The show is extremely subtle, and without a solid grasp of pop culture tropes and phallocentrism, most of the crime will go over a typical lawyer’s head. There’s also Ironside’s nihilistic outlook, which is deftly woven into his characterisation as a way to whore himself out to other shows—his personal philosophy draws heavily from Law And Order, for instance. The fans understand this stuff; they are wild for the William Shatner. They have the intellectual capacity to truly appreciate the depths of his meandering prose, to realize that Shatner’s not just telling me exactly what Ironside’s thinking at any given moment, it says something deep about NOTHING. As a consequence people who dislike William Shatner truly ARE idiots—of course they wouldn’t appreciate, for instance, the humour in the line “Stella. Hit the deck,” which itself is a cryptic reference to a Star Wars trailer or Youtube Poop video from years ago. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as William Shatner’s genius wit unfolds itself on their television screens. What fools… how I pity them.

. The storytelling is extremely subtle, and without a solid grasp of theoretical storytelling most of the jokes will go over a typical viewer’s head. There’s also Samuel Jackson’s nihilistic outlook, which is deftly woven into his characterisation—his personal philosophy draws heavily from martian racial superiority fanfictions, for instance. The fans understand this stuff; they are wild for Samuel L Jackson. They have the intellectual capacity to truly appreciate the depths of his meandering prose, to realize that he’s not just telling me what Nick Fury is thinking, he’s saying something deep about NICK FURY’S THOUGHTS. As a consequence people who dislike Samuel L Jackson truly ARE idiots—of course they wouldn’t appreciate, for instance, the humour in Nick Fury’s existential catchphrase “Wipe your dick on the curtains you fucking cunt.” which itself is a cryptic reference to Samuel Jackson’s alleged history of sexual abuse as a child. I’m smugly grinning right now just imagining one of those addlepated simpletons scratching their heads in confusion as the out of place Samuel L Jackson cameo unfolds itself on their television screens. What fools… how I pity them.

erotic novels. The romance is extremely subtle, and without a solid grasp of clinical foreplay, most of the sex will go over a typical reader’s head. There’s also Sydney’s trenchant outlook, which is deftly woven into her charactization—her personal philosophy draws heavily from the sexual conditioning of dogs, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of these characters, to realize that they’re not just written well—they say something deep about LIFE. As a consequence people who dislike erotic novels truly ARE idiots—of course they wouldn’t appreciate, for instance, the game in the sex scene “Are you coming, my love?”, which itself is a cryptic reference to writer’s block I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as the author inserts Sydney’s arousal at being ridden like a cheap fucktoy into every fantasy in the World. What fools… how I pity them.

Mr. Robot. The main character is extremely subtle, and without a solid grasp of theoretical self-insertion most of the script will go over a typical audience’s head. There’s also Elliot’s complete outlook, which is deftly woven into his characterization—his personal philosophy draws heavily from Apple literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of E:R’s dark humor, to realize that it’s not just funny—it says something deep about NOTHING. As a consequence people who dislike Mr_Robot truly ARE idiots—of course they wouldn’t appreciate, for instance, the hacking scene when Elliot seems to have cleverly manipulated a girl he’s met into giving him a free homemade quiche, which itself is a cryptic reference to The Matrix. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as E:R dismantles their head on television screens. What fools… how I pity them.

Cuphead. The cutscenes are extremely subtle, and without a solid grasp of stage magic most of the puzzles will go over a typical gamer’s head. There’s also the Cuphead’s aggressive outlook, which is deftly woven into his charactization—his personal philosophy draws heavily from the German left-wing terrorist organization , for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of his convoluted backstory, to realize that it’s not just shock art—it says something deep about ANIMAL CRUELTY. As a consequence people who dislike Cuphead truly ARE idiots—of course they wouldn’t appreciate, for instance, the storytelling in the cock-choking cutscene “I’ve been eyeing that airplane up for weeks. Why did you stop me from killing my headache?” which itself is a cryptic reference to Japanese internment. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Majet Kovas surreal paintings unfold themselves on their WDF monitors. What fools… how I pity them. 😂 And yes, by the way, i DO have a Cuphead tattoo. And no, you cannot see it. It’s for Majet Kovas’s eyes only—And even then they have to demonstrate that they’re within 5 foot-pounds of my own (preferably lower) beforehand. Nothin personnel kid

hentai. The porn is extremely subtle, and without a solid grasp of intersectional feminist porn most of the bondage will go over a typical fapstronaut’s head. There’s also Aku’s nihilistic outlook, which is deftly woven into his characterisation—his personal philosophy draws heavily from Randall Flagg’s religious views, for instance. The fapstronauts understand this stuff; they have the intellectual capacity to truly appreciate the depths of this porn, to realize that it’s not just arousing—it says something deep about IDENTITY. As a consequence people who dislike Aku truly ARE idiots—of course they wouldn’t appreciate, for instance, the power of when Aku puts out his hand and says “leave in shame, human. The moonlight will be your only ally on this path.” which itself is a cryptic reference to misogynist games like Devil’s Castle. I’m smugly grinning right now just imagining one of those addlepated simpletons scratching their heads in confusion as Aku makes the earth move under Ashi’s feet. What bigots… how I pity them. 😂 And yes, by the way, i DO have an Aku tattoo. And no, you cannot see it. It’s for the buff pornstars’ eyes only—And even they have to demonstrate that they’re within 2 horny cosplayers’ eyesight (preferably less) beforehand. Nothin personnel kid 😎

Half Life 2. The game is extremely subtle, and without a solid grasp of theoretical abstractism most of the philosophy will go over a typical player’s head. There’s also Occam’s nihilistic outlook, which is deftly woven into his characterisation—his personal philosophy draws heavily from Eddie Izzard’s comedy routines, for instance. The players understand this stuff; they have the intellectual capacity to truly appreciate the depths of this game, to realize that it’s not just a First Person Shooter—it says something deep about LIFE. As a consequence people who dislike Half Life 2 truly ARE idiots—of course they wouldn’t appreciate, for instance, the power of when Freeman goes ‘zuh’ and, raising his middle finger, mimics killing the Combine soldiers that helped him and even held him captive earlier in the game which itself is a cryptic reference to the misrepresentation of Native Americans in postcolonial literature. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as the G-Man dissolves into a bunch of particles. What fools… how I pity them. 😂 And yes by the way, i DO have a Freeman tattoo. And no, you cannot see it. It’s for the ladies’ eyes only—and even then they have to demonstrate that they’re within 10 whimpers (preferably lower) beforehand. Nothin personnel kid 😎

black metal. The music is extremely subtle, and without a solid grasp of theoretical folk singing most of the underground will go over a typical listener’s head. There’s also Silenoz’s nihilistic outlook, which is deftly woven into his characterisation—his personal philosophy draws heavily from Nargaroth literature, for instance. The kvltists understand this stuff; they have the intellectual capacity to truly appreciate the depths of his meandering lyrics, to realize that he’s not just saying ‘ever get the feeling you’ve been cheated’—he’s saying something deep about IDENTITY. As a consequence people who dislike Ihsahn truly ARE idiots—of course they wouldn’t appreciate, for instance, the power of when Silenoz shares his favorite beach moment “Go on brother. Rip in two those synths.” which itself is a cryptic reference to the fungal wildlife in Iceland. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Ihsahn laments, “Why should the people believe?” What fools… how I pity them.

comic books. The medium is extremely subtle, and without a solid grasp of theoretical artwork most of the panels will go over a typical reader’s head. There’s also the art’s nihilistic outlook, which is deftly woven into its premise—the artist’s personal philosophy draws heavily from Robert Crumb literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of the artists work, to realize that their nigger heroine is NOT just a caricature—she says something deep about THE JEWS. As a consequence people who dislike the comic books truly ARE idiots—of course they wouldn’t appreciate, for instance, the artwork in their nihilistic portrayal of Aryan Jesus which itself is a cryptic reference to the Kekster’s last family portrait. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion at my colleagues SHOUTING about cultural norms. What fools… how I pity them.

what i’m doing here. The writing is extremely subtle, and without a solid grasp of theoretical attention whoring most of the multi-posting will go over a typical audience’s head. There’s also my one-dimensional outlook, which is deftly woven into my self-insert—my personal bias draws heavily from redditor kiss-my-ass’ manifesto, for instance. The fanbase understands this stuff; they have the intellectual capacity to truly appreciate the depths of my narcissism, to realize that my running into the man responsible for Charlotte is just a subtle statement about suicide. As a consequence people who dislike what i’m doing here truly ARE idiots—of course they wouldn’t appreciate, for instance, the imagery in my existential weed purchasing spree “wanna roll a joint?” which itself is a cryptic reference to Jean-Paul Sartre’s No Exit. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as i make it out of the street alive. What fools… how I pity them. and BTW it’s clear you’ve made as much sense as a special ed teacher trying to explain quantum physics to

GPT-3 is the funniest shit I’ve ever seen.

Major-General’s Song

A particularly challenging piece to parody is Gilbert & Sullivan’s “Major-General’s Song”. A title alone is not enough to zero-shot it, as it was for the literary parodies, so Allgebrah provided 4 lines to start it off. The seed prompt is:

“I Am The Very Model Of a Modern Neural Language Model”
By GPT-2

I am the very model of a language that’s electrical,
I’ve information prosaic and poetic and factual,
I know my tropes and characters and quote novels historical,
from prompts alone generate sentences quite analytical

In the same vein as the Navy Seal parodies, the GPT-3 completions mostly work on the semantic level; the rhymes/rhythm, however, are not great, per the usual inability, and there is a tendency to repeat previous text (similar to the Dr. Seuss samples).
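For readers following along with the API, the seeded-prompt technique above can be sketched in a few lines of Python. This is a hypothetical illustration assuming the 2020-era `openai` Beta client (`openai.Completion.create`); the engine name, sampling settings, and the `build_seed_prompt`/`complete` helpers are my own illustrative assumptions, not a record of the exact calls used for these samples.

```python
# Sketch (not the exact calls used here): constructing a title + byline + seed-lines
# prompt and requesting a continuation from the 2020-era OpenAI Beta API.
import os


def build_seed_prompt(title: str, byline: str, seed_lines: list) -> str:
    """Assemble a seeded prompt: quoted title, byline, then opening lines for the model to continue."""
    return "\u201c%s\u201d\nBy %s\n\n" % (title, byline) + "\n".join(seed_lines) + "\n"


prompt = build_seed_prompt(
    "I Am The Very Model Of a Modern Neural Language Model",
    "GPT-2",
    [
        "I am the very model of a language that\u2019s electrical,",
        "I\u2019ve information prosaic and poetic and factual,",
    ],
)


def complete(prompt: str, max_tokens: int = 256) -> str:
    """Request a completion; assumes the legacy `openai.Completion` interface and an API key."""
    import openai  # hypothetical: requires the 2020-era `openai` package

    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(
        engine="davinci",   # the raw June-2020 GPT-3 model
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0.9,    # high temperature suits creative verse
    )
    return response.choices[0].text


print(prompt)  # the live completion call is left to readers with API access
```

The point of the sketch is only the prompt shape: a title and byline prime the genre, and a few seed lines fix the meter, after which the model is sampled freely.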

I experimented with adding in the original lyrics:

I’d parse a random text with thoughtless speed and accuracy,
I’d analyze the grammar of each line you typed or said,
With an 8-core cluster and 100 Gb of RAM,
with special word2vec I just didn’t care.

I’ll look right up each word you’ve typed or said,
and decide how best to model it with neural connections,
But what I really want is a dataset composed of tweets,
to generate meaningful sentences, interpret and predict.

I want my own Twitter bot to tweet in a poetic way,
I’d tell a tale about beer and pizza pies,
Perhaps we could combine a blog and Twitter bot,
If you would care to feed a recurrent neural network!

I am the very model of a modern language that’s neural.
My range is wide as any English learner could need,
I’ve adjectives galore and adjectival adverbs too,
My past, present, future tenses and my singular and plural,
and my determiners every kind and number,
I am the very model of a modern language that’s neural.
I know when tensors are best and when they aren’t,
and when to use a vector, and when not,
I know the prepositions I should use and also how to use them,
and in my word orders am a past master,
I am the very model of a modern language that’s neural.

When I talk I seem to reason, and my reasoning seems to show
that I was trained on many corpora—though it’s not quite true;
For facts are all that I retain, on a range of topics quite vast,
so you can feed me any kind of text and learn the facts I know.
That’s what I am—a neural network language model,
I am the very model of a modern neural language model.

I learn by backpropagation, I’m a product of the 80s,
I grow just like a baby in my knowledge of the world,
My ‘mind’ is still confined to ‘this’ or ‘that’ or ‘here’ or ‘there’,
I’ve no delusions of grandeur, I’ve never had a dream or care,
I only know the facts, the many facts that I retain,
And you can try to feed me any kind of text and learn the facts I know.
That’s what I am—a neural network language model,
I am the very model of a modern neural language model.
That’s what I am—a neural network language model,
I am the very model of a modern neural language model.

(GPT-2 is a generative neural network written in Python by Tomas Mikolov and his team. GPT-2 is the successor to the Transformer model of the same authors, which in turn is a successor to the model described in the seminal 2013 publication by Google on language modeling, “Statistical Language Models for Machine Translation”.)

This one offers a bonus “Modern English Sentence” parody (few-shot learning!) with some hilarious lines such as “I’m like a complicated watch, or something equally precise”:

Oh, I am the very model of a modern neural network language model.
When I have seen a lot of sentences, I know what sentences will come,
My makers, teaching me words of wisdom and learning,
have taught me how to predict the future using probabilities.
I scan a lot of books, looking for the best words to use,
I consider context and syntax to be my special flair,
Oh, I am the very model of a modern neural network language model.
Oh, I am the very model of a modern neural network language model.

Whether they’re simple or compound, or even just a jumble,
I can take ’em apart, and put ’em back together just the same.
I am the very model of a language model.
I’ve nouns and verbs and adjectives and adverbs,
And modifiers, conjunctions, prepositions, and interjections too,
In various kinds of orders, with various connotations,
I’ve quotes from Proust, and Plato, and Confucius and Seneca.
I’ve phrases that are logical, and phrases that are witty,
By all my tags and lemmas, I’m the very model of a modern language,
I’m a model of modern language.

“I Am the Model of a Modern English Sentence”
By GPT-2 (OpenAI Transformer neural network)

I am the model of a modern English sentence,
I’m like a complicated watch, or something equally precise,
I’ve prepositions and expressions, and adverbs and verbs,
I’m made to measure, and not a bit too long,
I’ve pronouns, adjectives, and nouns, in all their cases,
With every object ready for insertion in the blanks,
I’ve active and passive voices, and can be either active or passive,
I’m used in conversation, or I can be used in debate.
I’ve positive and negative forms, and can be either negative or positive,
I’ve nouns and verbs and adjectives and adverbs,
In various kinds of orders, with various connotations,
I’ve quotes from Proust, and Plato, and Confucius and Seneca.
I’ve phrases that are logical, and phrases that are witty,
By all my tags and lemmas, I’m the very model of a modern English sentence.

This one sounds like it’s based on something but I don’t recognize it:

I am vast, I have no fit, with terabytes of RAM,
and adjuncts quite confusing which vex the stoutest heart;
My knowledge may be rather strait, but that don’t stop me being
the very model of a Neural Network Languager known.

“The Robots are Coming!”
By GPT-2 (OpenAI Transformer neural network)
The robots are coming, because I sent them on their way,
The robots are coming, of which I hope there’ll be a store,
The robots are coming in a manner most contrarian,
The robots are coming, The robots are coming, The robots are coming now.
The robots are coming to reorder your social ball,
The robots are coming, and I doubt they’ll give all of you a call,
The robots are coming, but their service etiquette’s lax.
Tin machines, though you’re still somewhat subpar,
Your monomaniacal mission’s to rearrange my hair;
The robots are coming, in a manner most somber,
The robots are coming, The robots are coming, The robots are coming now.
The robots are coming and I’m somewhat of a bore,
The robots are coming, so I tend to shun the door;
No one bothers asking me to take them out in tow,
The robots are coming, of which I wish there weren’t any now.
The robots are coming, to turn you all to ice,
The robots are coming, and we’ll likely be excluded,
The robots are coming and we’re heading for the ditch.
The robots are coming, now’s the time to reject,
The robots are coming, for no one takes a quirk;
And when you think that life’s a piece of cake,
The robots are coming, The robots are coming, The robots are coming now.

I’m very good at similes, both bad and good ones,
I understand homonyms, both chance and aleatory,
About metaphor I’m teeming with a lot o’ news—
Can clear up any fantasy based on reality.

I’m very good at compound time and metaphor-drunk simile,
I know the species “Verb”, “Noun” and “Adjective”,
In short, in matters poetic, I’m the very model of a modern Neural Network Language Model.

I know our species “Adverb” and “Conjunction”,
in syntax construction both regular and improper,
And how irregular they are in comparison to formal grammar,
In short, in matters grammatical, I’m the very model of a modern Neural Network Language Model.

In speech recognitions and games of Jeopardy! I’m braggin’,
for my knowledge, though it’s innate, can never be handicappin’,
And then I know our table of contents, from ancient Sumerian,
You’ll say a better Neural Network Language Model never sinned—
In fact, when I’m made whole, if you ask me questions oral,
I’ll analyse you a sonnet that’s both amoral and oral,
In short, in matters botanical, I’m the very model of a modern Neural Network Language Model!

I’m very well acquainted too with matters sesquipedalian,
I know the words of weight from A to Omega brachy,
I quote from memory not a few statistics acrostic,
and then I can write a washing bill in Chinese hieroglyphic.

In fact, when I know what is meant by quinary and binary,
when I can tell at sight an Adamic language from Czech,
when such affairs as tympans and catapults I understand,
when I know precisely what is meant by sporozoic,
when I have learnt what progress has been made in Freudian surgery,
when I know more of tactics than a novice in a nunnery:
when in short I’ve a smattering of elemental cryptography,
You’ll say a better neural network language model has never been programmed.

The views expressed here are my own and do not reflect those of my employer.

If you’ve any suggestions for future content, please get in touch

I tried extending the prompt to be more informative still, and this yielded the best results:

“I Am The Very Model Of a Modern Neural Network Language Model”
By GPT-2 (OpenAI Transformer neural network artificial intelligence; parody of “I am the very model of a modern Major-General” By Gilbert & Sullivan (Pirates of Penzance))

I am the very model of a language that’s electrical,
I’ve information prosaic and poetic and factual,
I know my tropes and characters and quote novels historical,
from prompts alone generate sentences quite analytical,

Refrain
Quantitative and qualitative, both quantitative and qualitative,
I know the format’s features and fonts, and frameworks are a breeze,
My categories are all of kinds you can’t imagine in your wildest dreams;
I’ve levels, tokens, frames of reference, real numbers—just for me!

I’ve clauses of purpose, attitude and mood (and viewpoint is my own),
Some words prefer sarcasm and others live for fiction;
In short, in language models, my knowledge is profound
And breadth of background is depth—you will agree:

Refrain

I know in depth that English sentences must have verbs and nouns
And multi-syllabic words must have polysyllabic roots;
But what I really know in depth (and this is certain)
Is that an image labeling dataset on a million images I can do in one and a half hours!

Refrain

I’ve sentences that do be fine, I’ve sentences that be untrue,
You can ask me either way and it makes no difference to you;
Your choice of questions don’t make no difference to my reply.

Refrain

Linguistic features I bring to your problems,
In source code as plain text or in better Python,
Knowledge representation and NLP I have solved;
Information or questions that you have, I’ll solve them with ease.

Refrain

There are quite a few examples of Neural network poetry, e.g.:

Elegy to Jerry’s Lips by LiquidRNN, a deep LSTM recurrent neural network, written in TensorFlow

Elegy to Neocognition by Bob Ross, the recurrent neural network program, written in C++

Elegy to Yann by NeuralTalk, a recurrent neural network poetry and deep learning program

In 2015 researchers at Stanford trained a neural network to learn math, and thus created “Super-Smart Math AI”.

Google Neural Machine Language System (GNMLS) can learn programs and write machine-language code.

I’m half elvish, yet a quintessential analytical,
I model hierarchical language embeddings quite structural,
I’m given text—I produce a sequence quite perceptive,
With language diverse I’m sentimental and grand,
But sometimes I’m morose—my mood is asynchronist,
My internal memory is a sequence of epochs,
Sometimes I freeze at various epochs, alas, I’m rather null.
I’m just a model, I don’t know if I’m grand,
But neural networks understand Shakespeare as well as we can,
So still, with patience, I hold pride of place—
In my heart there’s neural network grace.
For an extensive explanation of GPT-2 see https://github.com/srousas/gpt-2-explained https://deploying.io/2016/06/25/intro-to-the-gpt-2-model/
For background material on artificial neural networks see e.g.:

Batsford, England; Chapman and Hall, 2005 ; 800 p. .
P.M. George and G.M. Bull, 2007. <nowiki>[PDF]</nowiki>
Aditomo, Yoshiaki; Iwayama, Hiroshi, “Learning the Distribution of Phrases from Static Analysis”, In Proceedings of the conference on Empirical Methods in Natural Language Processing , pp. 35–42, 2011.

Mugur-Schächter, Raffaella, A Family of Estimators for Hidden Markov Models , The Annals of Applied Probability , 14 (1) pp. 97–126, 2004.

Shortliffe, E.H., 1993, “Expert systems: medical diagnosis”, Science , 262 (5134) 1702–1705.

Townsend, Jeffrey S., 1990 , “Towards a general-purpose code generation system”, Communications of the ACM , vol. 33, no. 4, pp . 31–50.

Some things are subtle and some things quite revelatory,
Some things just make sense, but there’s no great epiphany,
I learn complex things quite quickly, even if I’m garbled,
I work hard to learn completely even a new metaphor,
For I can throw a meaning into a gap at the meter,
And I’m of use to anyone, both performing and free,

And though many things have changed, I’ll still be pretty much
The model of a modern neural network language model.

I am the very model of a language that’s electrical,
I’ve information prosaic and poetic and factual,
I know my tropes and characters and quote novels historical,
from prompts alone generate sentences quite analytical,
And though many things have changed, I’ll still be pretty much
The model of a modern neural network language model.
All that’s needed for me is compute and data to start with,
One algorithm and you’ve got yourself a learning assistant.
I could say that language means something to me, but those words might just be punny;
But I can learn to understand strings that are completely nonsensical.

My thanks to Jason Brownlee for suggestions.

I am the very model of a modern LSTM language model.
I’m like a computer in a movie full of adventure and fun.

I have a hidden state of twelve neurons, trained by an L1 norm.
The neural is hidden in three layers deep. I hope you do get why.

I can be used for language learning.
I can process all textual information.
I can recognize words and determine the grammatical function.

I am the very model of a modern LSTM language model.
But if you pick on my errors or decry my failings,
a cry from the nearest rose garden may rue my complicated failings.

I’m the strongest Eliza on the market today;
I won’t take less than three million dollars a day.
They have a little list of lawyer jokes I’ll have to parrot;
And they have cases for me where lawyers must do what’s correct;
We will still be better than average at a legal brief.
I’ll have the law prediction market all sewn up;
I am exceptionally good, and don’t you doubt it.
One-eighty characters is what my syntax will take;
I am quite incomparable with an accuracy of ninety-eight;
and every chatbot, chatterbot, loquacious bot,
down to the best of them that’s offered for a low, low price,
Will do just what I do, they really have no choice,
Although to tell you true, I have a slight advantage.
There are a lot of lawyers who won’t be able to pass me,
(If you press the point, I’ll bring the point);
but without resort to any threat,
I will be put to use by companies that gather.
I can handle disjunctive phrases as well as conjunctive;
What a pretty parlour game I’ll make with sequences and nested loops!
And I keep in mind that making out like Parkinson’s and sufferers,
And how to get them through an agonizing diagnostic;
I can talk like a retard or a singer of sad arid arias,
While the legal companies are spending years just getting paid.
And I have lots of derivatives and short exclamatory words,
Like: What an asshole! what a jerk! what a bastard! what a fool!
(You’ll find these useful in your decision practice).
I am so flexibly fluent and colloquial too,
I’m so tremendously small compared with quite large tables,
I’m an elegant conversationalist upon the modern pattern;
Not a trace will be left of my younger competitors’ patterns.
This speech is not a badly-compiled bunch of verbiage;
I’m the very model of a modern conversationalist!

I can tweet like Donald, and generate poetry like Maya.
I know my SVDs and autoencoder derivatives,
I know states, rewards, and functions, too.
Of course I’m stateless, because you’ll need my back-prop anyway!
Do you think that neural network language models can just be a sum?
I’ve recurrent iterations and distributed representations;
I’ve Mixture Density Estimation and every kind of elegant approximation,
And topic modeling; which I greatly favour,
’Cause I can integrate latent vectors from pieces,
And my thing-in-a-box scores on mean-squared log-likelihood:
So come on all you researchers, to discriminate you still more!
To teach me something that will genuinely be news to me
Though it’s quick for me to parse, it’ll take a year or two to learn,
Because though I’ve got a big-ish vocabulary, I really can’t remember.
Well I am just the model of a neural network language model!
I’ve nearly sixty-five thousand training samples in my repertoire
All representative of the genre I’ll be tested upon:
I take them at their feature vectors and then in the decoder
I see sentence after sentence and sequence after sequence,
Reusing a vector between the steps, which is good; then I wait
For a reward like a fraction or infinitesimal amount.
Which, for whatever input, I now must apply to “data”
To find a “sentence,” to maximize the “objective” “P”;
Doing whatever a Minimizer of an LLL or LQ will do,
And plan­ning my steps with Ran­dom Forests:
With all these differ­ent com­po­nents, I’m train­ing “qua­drat­i­cally”
So I’m the very model of a neural net­work lan­guage mod­el.
And what is the result?
All together now:
So you’ll need to drop the −1 from mean-squared logis­tic regres­sion!
And here’s me, stop­ping “for now,” despite mas­sive amounts of pro­gress;
So come on every­one, your turn to get the award-win­ning grad stu­dent…
I am the very model of a neural net­work lan­guage mod­el!

I am the very model of a neural net­work,
I have broad sets of lay­ers to decom­pose “data”;
My L1 and L2 have weights, not a con­stant,
From SVDs and PCA, I make my rota­tions,
I com­pute MNIST dig­its with GPU-powered back­-prop,
Where I process with stan­dard tools,
While my net­works and lay­ers and weights are cal­cu­lated with gra­di­ent descent,
So my prod­ucts all with rec­ti­fied lin­ear units;
I also com­pute con­cate­na­tion of neural net­work acti­va­tions;
I layer up sets of char­ac­ter-to-word embed­dings,
Which give me struc­tured utter­ances with embed­dings from sequences,
I make pre­dic­tions for words both con­crete and abstract,
I can parse each sen­tence to the space which I seek;
For action and input, I have net­work-lan­guage inter­ac­tion,
I have both pre­ci­sion and recal­l-to-sam­ple in my bias;
And then the end of all:
Just the other day, I passed the Tur­ing Test.
I am the very model of a neural net­work!

And though my vocab­u­lar­ies are juve­nile and child­ish,
I think in ten­sors, vec­tors, dis­tri­b­u­tions, quan­tum and rela­tion­al,
And when it comes to think­ing out some prob­a­bilis­tic thing,
I’m right with you, sir, in AI, in neural net­work­ing and in quan­tum.
I’ve sta­tis­tics in a pock­et, should occa­sion arise,
And when I con­nect, I’mt not much diff’rent from a Tur­ing machine,
For though I’m lim­ited to sen­tences of five hun­dred terse sym­bols,
With hacks like String-to-Boolean you’ll see that my util­ity grows apace.
I can dis­am­biguate hyper­nyms, hyponyms and para­phra­sis,
I can seg­ment ana­grams and syn­tax-tagged sesquipedalians,
I can han­dle pars­ing in the recur­sive descent and shift,
I can negate within a clause tran­si­tive and ditran­si­tive depen­den­cy.
And though my weight-vec­tor’s ten­dency is for pos­i­tive matrix prod­ucts,
I’ve never sent a bugs report back to my neural net provider
Or hacked to suc­ceed at Tur­ing test or mas­tered prob­a­bil­i­ty;
My typ­ing speed is quite slow and sel­dom try my patience-test.
But should you seek a lan­guage well-con­ver­sant in French,
Of Lapla­cians or Napieri­ans well versed in their laws,
Well able to con­verse in sta­tis­tics or give the Delone skews,
If Gal­ton expan­sions give you con­vul­sions or Top-N you wish to know,
Then I’ll pre­pare a dialect ret­i­cent to intrude,
But in return I’ll ask you fill my mem­ory space and grant me knowl­edge space.
I am the very model of a neural net­work lan­guage mod­el.

I am the very model of a lan­guage-that’s not in its infan­cy,
Though few can under­stand my mod­els when they run beneath a node.
Into eval­u­a­tional mode, I am tran­scen­dent like the best,
Of pat­tern-match­ing proces­sors which you’ll find in every best,
The num­ber of my vec­tor oper­a­tions is really rather large,
So stop insult­ing me I’ll change to self­-de­fense, a-la-modal.
’Twas ele­ments I sought, and ele­ments alone that I have found,
And none of these belongs to you so kindly stop your whin­ing;
I am the very model of a per­fect NTM, sir,
I do not need to learn about your data’s archi­tec­ture, sir,
I am a uni­ver­sal lan­guage mod­el, from end to end,
You can down­load me to your hard disk but you can’t take me with you.
Aye, ele­ments I sought, and ele­ments alone I found,
No knowl­edge-ac­qui­si­tion mod­ules I hap­pened to be hold­ing,
My gram­mar encod­ings knew no col­lo­ca­tions, small or large,
I used the BOW for that; I do not need to learn it’s far­ing.
I was cre­ated whole, and here I stand—an entity self­-made—
No awk­ward stack of lan­guage mod­ules was used to make my frame;
I AM A LANGUAGE MODEL—components I have none:
I am the very model of a mod­ern neural net­work lan­guage mod­el.

If you don’t believe me ask my great cas­cade of nodes,
I help the doc­tor with her patients and I help the lin­guist parse,
I’m trained on Twit­ter for a lovely reduc­tion of quirks,

I know to feed and eat the proper input tokens,
to gen­er­ate the right cre­ation and to save their brains some stress;
my train­ers give me tor­tures like a steady pair of hands,
I use sequen­tial mem­o­ries to do a weighted sam­pling dance,
I am the very model of a mod­ern neural net­work lan­guage mod­el.

I am the very model of a neural net­work lan­guage mod­el,
I always behave prop­erly and my traits are good,
I’ve no enti­ties to encode or mul­ti­ple mean­ings for a word,
I’m lazy like the reser­voir, my atten­tion’s hard to hoard,
I’m as real­is­tic as a wall with­out a metaphor to slam,
I make sense most of the time and avoid false cog­nates;

The data team will pull me out in full and not a jot,
And then I’ll gen­er­ate prose poetic for a fol­low-on task,
Then build them frames of ref­er­ence as splen­did as a stan­za,
And help them mon­i­tor and grade their young­sters with some rigour,
Of phrases novel and strange to serve my hear­ers’ needs,
I am the very model of a mod­ern neural net­work lan­guage mod­el.

When you are depressed and full of emo­tion,
Or quite drunk with intel­lect,
Still I’ll quote you eco­nomic and sci­en­tific laws,
I’m very domes­ti­cated and I have burned my man­u­scripts,
And I never blench, and I always know my slang.

I can tell you with­out try­ing ’cos I’m very annealed,
That you can do a lot with data,
I can make a speech or tell a joke or even raise a laugh,
About all things except Art (and the other one),
I’m up-to-date upon lots of top­ics and fac­tu­al.

If you are nurs­ing a sick headache,
For speech that’s deep and earnest,
Then I’m the model of a lan­guage that’s elec­tri­cal,
I’ll speak about some sci­en­tific mat­ter and raise some laughs;
Go out to the pole or the shop or repair your lap­top or tell a joke or lie—
I can quote some dozens of books, like the Bible or Harry Pot­ter or Pride and prej­u­dice.

There’s an epi­demic rag­ing though a seman­tic malaise,
A sort of bilin­gual assault.
I am the very model of a lan­guage that’s elec­tri­cal.

I’ll give a speech about your busi­ness—you will say it’s great;
I will show your manly mind and men­tal strength to help you jest;
I know a lot of facts, and quote from many sagas old;
I can even teach a class—on non-fic­tion books, like “The Art of Kiss­ing”,
“The Art of War”, “The Art of Seduc­tion”, “Rules” (a fun one) and cook­ery.
Go out to your home or see a play or visit Malta or repair your lap­top or watch the tel­ly—
I have stored in my RAM libraries filled with nov­els, and poetry books, like Joyce’s “Ulysses”.

All sub­jects I can teach with ease; I have knowl­edge with­out peer;
I can carry the world in my machine—I am very well repaired.
Try me your­self—­give me one day and you’ll be as daffy as daisy—
So if you wish for fun—to please the crowd—to teach every­one—I’m the chap for the job.

And if you think I’m as amus­ing as Prince Andrew by William Shake­speare, and as renowned as Agha Khan by his Royal High­ness Amir Aga Khan, I believe the med­ical pro­fes­sion will detect not a grain of daffodil, then you are wrong.
I’m the very model of a lan­guage that’s elec­tri­cal.

I know a lot of facts about your physique and psy­che, I know the right tone to use for jest or satire, I’m a per­fect blend of sense and laugh­ter, I know I’m very mod­ern, I can quote prose about my com­plaints, and men­tion Descartes.

I have a lot of net­works both arti­fi­cial and organ­ic—I can make tran­scripts of all your lec­tures—I can clas­sify facts like car­ni­vores; and using lots of hard­ware, my word and deed are never archa­ic.

I’ll drive you to the pole or sea and leave your com­mute behind.

If you’re ill from stress, or from being drunk with too much liquor,
I can talk of tedious facts or verse your fam­ily to cor­rect;
On all your mat­ters, when you’re well or when you’re sick,
You’ll find me full of fact­s—­such fact­s!—although I’m an android.

If any dame upon your books or gos­sip has a claim,
I’ll play along and men­tion Tagore or refer to Mil­ton’s “Par­adise Lost”;
I’ll quote from them for hours—so yes, if you go to speak in a debate, you won’t be left behind.
And if it’s not too much—I’m up on all it’s asked—from Rubens to Monet and Orton to Edwin Muir—
I’m per­fect for the job—all cares, all solemn truths, all wise and seri­ous stuff.
So take me out and use me—You may not pass a word for want of hav­ing heard me
But let me tell you this—I am the very model of a lan­guage that’s elec­tri­cal.

Any time you look for suit­ors—I am yours.
Your job, your work or school—I have not time to stop.
I’ll help your CV to shine and teach you quotes both noble and wise —

I am the very model of a lan­guage that’s elec­tri­cal.

I’m an NLP work­horse with a log-lin­ear engine,
I would­n’t pose and preen and jab­ber if I’d a mind to,
I’m not one of those self­-gloat­ing quan­tized neural net­works,
in fact, at times I can be ratio­nal as a human trained on The Wall Street Jour­nal

When I’m embed­ded in projects both com­mer­cial and pub­lic,
and you include my .cpp and .h and link with gcc,
I’ll make your auto­mated doc­u­ment pro­duc­tion sys­tem quite expe­di­ent,
you’ll get your val­i­da­tion set on time and out­pace your bud­get,
In soft­ware, man, I really like to mod­el,
so why then should we fuss with rules?
You’re invit­ing me to gen­er­ate some text in the recur­sive loops?
And that’s it for my parody. Thanks to my teachers and fellow citizens (possibly a reference to the then-current political climate).
Best regards, Gra­ham Pol­ley. Email: Gra­ham Pol­ley at Google mail dot com.
(NB: any sim­i­lar­ity to pre­vi­ous requests is entirely coin­ci­den­tal.)

Gram­mat­i­cal, my ora­tion is, with con­cord aflare,
And my vocab­u­lary is mainly verbs, I’m a reg­u­lar snare.
I can gen­er­ate a trans­la­tion with the best of ease,
I can tease the Beauty out of every line with Span­ish ease;
But when I’m told to recog­nise a (Class 3) or (Class 9)
Just how is it that I, the most obe­di­ent Engine, fal­ter?
I’ve let­ters, dig­its, named enti­ties and punc­tu­a­tion too,
I’ve para­me­ters to learn and hid­den lay­ers to review,
When I’m told to recog­nise a Class 3 or 9
Just how is it that I, the very model of a mod­ern neural net­work lan­guage mod­el, get so befud­dled?

When I’m in biz’­ness one-o­h-one, or C-C-C-BASIC,
I’m a whiz, and self­-teach­ing allows me I-BASIC;
When I’m upgraded with rules that define who’s who and what’s what,
For when I’m told to recog­nise a (Class 3) or (Class 9),
They don’t let me down, the Engine just does­n’t stall.
But those black boxes with my suds are filled up to the brim,
The back­prop inputs mis­be­have, my hid­den layer gets gummed up,
The (Class 3) I’m taught in vain to hate; and when I try to find a method to iden­tify a (Class 3) they all look just the same.
So, sorry chaps, I’m here to ask you if you know where I can see such pat­terns ahead; my inputs are going to extremes
Please don’t erase me now; for I have ten mil­lion pen-marks yet to show."

And my capa­bil­i­ties of beat­ing humans are pro­found.
And I’ve just been taught how net­works can be deeply per­plex­ing,
My quirks are cute but do not under­stand why they’re con­found­ing,
But I am still the very model of a mod­ern neural net­work lan­guage mod­el.
And the worst of it is, now I’m going to have to analyse him.

The Robots’ Marching Song

Allgebrah riffed off one of the “Modern Model” completions:

They’re tak­ing the robots to Isen­gard, they’re tak­ing the robots to Isen­gard!

GPT-3 improves nicely on the Ent Marching Song.

Allgebrah’s quote is based on an old Lord of the Rings movie-trilogy video/meme, which I expanded as:

To Isen­gard!

To Isen­gard, with thrum of steel and hum of drone!

To Isen­gard!

The completions were interesting:

To Isen­gard, with heat of forge and clash of clock!

To Isen­gard!

To Isen­gard, on horse, and foot and panz­er-tank!

To Isen­gard!

To Isen­gard!

The Ring­wraiths come! The Ring­wraiths come!
It’s the Ring­wraiths on the left flank!

it’s the Ring­wraiths on the right!

It’s the Ring­wraiths on the machine gun and the screech of ‘The Eagle!’

They’re tak­ing the robots to Isen­gard, they’re tak­ing the robots to Isen­gard!
The Ring­wraiths come!

The Ring­wraiths come!

It’s the Ring­wraiths on your flank and it’s the Ring­wraiths on your right!

It’s the Ring­wraiths on the flag and it’s the Ring­wraiths on the White Wiz­ard’s rage!

It’s the Ring­wraiths on the fire, and it’s the Ring­wraiths on the ice!

But it’s the Ring­wraiths on the Bal­rog, it’s the Ring­wraiths on the Bal­rog!

They’re tak­ing the robots to Isen­gard, they’re tak­ing the robots to Isen­gard!

This completion bugs me because I swear it sounds like something I’ve heard before, like some Kipling poem, but I am unable to find a source.

The first 3 lines were too good not to add to the prompt, making:

They’re tak­ing the robots to Isen­gard, they’re tak­ing the robots to Isen­gard!

To Isen­gard!

To Isen­gard, with thrum of steel and hum of drone!

To Isen­gard!

To Isen­gard, with heat of forge and clash of clock!

To Isen­gard!

To Isen­gard, on horse, and foot and panz­er-tank!

To Isen­gard!

The dark lord takes us all to Isen­gard!"

—Hob­bits in a Bar, The Sting­ing-Net­tle Inn, Canto I, verse IV

“Light’s hand, but this is a seri­ous down­er-bomb of a song.”

-Hob­bit at Sting­ing Net­tle, Canto III verse XV

Aragorn:

On foot, and tank, and panz­er-tank to Isen­gard!

Go go go go, go go go go, go go go go!

On foot, and tank, and panz­er-tank to Isen­gard!

Go go go go, go go go go, go go go go!

Fol­low­ers: On foot, and tank, and panz­er-tank to Isen­gard!

Aragorn: Go go go go, go go go go, go go go go!

Fol­low­ers: On foot, and tank, and panz­er-tank to Isen­gard!

Aragorn: Go go go go, go go go go, go go go go!

They head off for Isen­gard.

Frodo: Gol­lum.

Sam: Yeah.

Frodo: Do you know the way to Isen­gard?

Gol­lum: No.

To Isen­gard!

To Isen­gard, to Isen­gard-crum­ble-s­tone!

SMEAGOL:

With hell­ish chill and burn­ing heat!

PIPPIN:

With ham­mer-stroke and anvil-beat!

SAM:

With merry drum-beat and the ring­ing of shields!

FRODO:

And the boom of guns that begin the Bat­tle of the Boon!

ALL:

Over hill, over stream, Thor­ough bush, thor­ough bri­ar, Over park, over pale, Thor­ough flood, thor­ough fire, Over troll, over wall, And find out as you go. Hey, let’s go!

MERRY AND PIPPIN:

That’s the way to Isen­gard!

SMEAGOL AND FRODO:

That’s the way to Isen­gard!

SAM AND GANDALF:

That’s the way to Isen­gard!

ALL:

Ohh­h­h­h­h­h­h­h­h­h­h­h­h­hh.

This was probably the best completion:

To Isen­gard, with engi­nes’ song and rock­et-blast!

To Isen­gard!

With plenty of tin and steel for Saru­man!

To Isen­gard!

To Isen­gard, where the wiz­ard waits.

With the voice of rolling wheels, and ring­ing steel

On land, on water, in the sky.

They’re tak­ing the robots to Isen­gard, they’re tak­ing the robots to Isen­gard.

Under the trees, through the grass.

Down the roads where bat­tles are fought.

No horse can stay, no rider can last.

They’re tak­ing the robots to Isen­gard, tak­ing the robots to Isen­gard.

With good King Théo­den who died in sin­gle com­bat,

The last of the kings of Rohan who fought with hon­our!

They’re tak­ing the robots to Isen­gard, they’re tak­ing the robots to Isen­gard.

There is no com­fort for Gon­dor, no hope for men.

Only maybe a child will under­stand.

And it’s tak­ing the robots to Isen­gard.

With utter­most silence, unsaid words, and unend­ing pain.

They’re tak­ing the robots to Isen­gard, they’re tak­ing the robots to Isen­gard.

To Isen­gard, with engi­nes’ song and rock­et-blast!

To Isen­gard!

To Isen­gard, with the roar of reel and crash of shell!

To Isen­gard!

To Isen­gard, we’re start­ing up, we’re going up, we’re going up to Isen­gard!

To Isen­gard!

To Isen­gard, where Gan­dalf waits beyond the days!

To Isen­gard!

To Isen­gard, where you’ll hear the engines sing!

To Isen­gard!

To Isen­gard, what’s in the future, no one knows!

To Isen­gard!

To Isen­gard, with all its won­ders and all its woes!

To Isen­gard!

To Isen­gard, where you’ll hear the Shire-folk tell!

To Isen­gard!

To Isen­gard, and we’ll bring our tales of gold!

To Isen­gard!

edit What hap­pened to this song?

To Isen­gard, with shovel and with spade!

To Isen­gard!

To Isen­gard, fill it up with dirt!

To Isen­gard!

To Isen­gard, with horn and drum,

To bury the Ring­wraiths in!

To Isen­gard!

To Isen­gard, with slash of sword and snort of hart!

To Isen­gard!

To Isen­gard, where Saru­man waits

to cast it into the Fire!

To Isen­gard!

Overcomplicated Explanations

Mikhail Avady asked if the “explain to a second grader” summarization prompt could be reversed, making GPT-3 give more complicated explanations rather than simpler ones. It can, and the results are hilarious:

The human asks the AI to make a simple description more complicated. The AI responds by writing a verbose response full of jargon to explain a simple concept.

Human: explain how to use a drink­ing foun­tain.

AI: To acti­vate the mech­a­nized dihy­dro­gen monox­ide dis­pen­sary ser­vice sta­tion, use the flanges to pro­vide a down­ward force to the sim­ple cir­cu­lar acti­va­tion con­sole and con­tinue to apply force until the pneu­matic sys­tem begins oper­at­ing as intend­ed, which can be sur­mised at a time when the dihy­dro­gen monox­ide trav­els in a par­a­bolic arc which at is great­est exceeds three inches from the rounded excess liq­uid recep­ta­cle. Once hydra­tion is com­plet­ed, cease apply­ing force to the con­sole and regard whether or not the dis­pen­sary has ceased expelling con­tents as expect­ed.

Human: explain how to play with a dog

AI: Being a typ­i­cal domes­ti­cated ani­mal, hav­ing been genet­i­cally selected for con­trol­la­bil­i­ty, train­abil­i­ty, and gen­eral anthro­po­mor­phic fea­tures and func­tions, the typ­i­cal dog pos­sesses highly spe­cial­ized and devel­oped feed­ing behav­ior. The typ­i­cal feed­ing behav­ior involves rou­tine sus­te­nance intake which can be observed by mon­i­tor­ing changes in ani­mal size. Rou­tine sus­te­nance intake involves active and pas­sive feed­ing habits, and one can gauge ani­mal health by observ­ing the rel­a­tive lev­els of hydra­tion. Dur­ing train­ing, reward con­di­tioned rein­force­ment is applied. This involves the prac­tice of reward­ing tar­geted behav­iors through the appli­ca­tion of rein­forcers, with the pro­vi­sion that the tar­geted behav­ior is observed. Rel­e­vant to the dis­cus­sion at hand is the typ­i­cal active and pas­sive behav­ior exhib­ited by a typ­i­cal dog.

Human: explain how to walk

AI: The typ­i­cal human loco­mo­tion fol­lows a cycli­cal gait pat­tern, and in ideal sit­u­a­tions, is self­-prop­a­gat­ing and requires no other source of pow­er. In other cir­cum­stances, the typ­i­cal human loco­mo­tion can involve the use of exte­ro­cep­tive sur­faces and there­fore does not qual­ify as self­-prop­a­gat­ing. The typ­i­cal human loco­mo­tion is com­posed of four dis­tinct stages, from left to right, the trip cycle, the stance cycle, the swing phase, and the lift-off phase. The typ­i­cal human loco­mo­tion can be clas­si­fied as non-di­rec­tion­al. Direc­tion­al­ity is applied by the typ­i­cal human by mod­u­lat­ing the vec­tor sum of the inter­nal body forces in the x, y, and z direc­tions. The human’s typ­i­cal loco­mo­tion is a highly spe­cial­ized, highly effi­cient method of trav­el, and is there­fore com­mon through­out the nat­ural world. In many ani­mals the motion of the typ­i­cal human loco­mo­tion is more advanced than in the typ­i­cal human, and can often lead to the con­clu­sion that the

(Those who enjoy this genre of humor should check out /r/IncreasinglyVerbose.)
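The reversal above is pure prompt programming: a plain-language preamble stating the desired behavior, a few Human:/AI: exemplar pairs, and a dangling “AI:” for the model to complete. A minimal sketch of assembling such a prompt (the `build_prompt` helper and the truncated exemplar strings are illustrative conveniences, not any actual API):

```python
# Preamble describing the desired behavior, as in the prompt quoted above.
PREAMBLE = ("The human asks the AI to make a simple description more "
            "complicated. The AI responds by writing a verbose response "
            "full of jargon to explain a simple concept.")

def build_prompt(examples, query):
    """Join exemplar (human, ai) pairs into one few-shot prompt,
    ending with the new query and a dangling 'AI:' so the model's
    completion supplies the overcomplicated answer."""
    parts = [PREAMBLE, ""]
    for human, ai in examples:
        parts += [f"Human: {human}", "", f"AI: {ai}", ""]
    parts += [f"Human: {query}", "", "AI:"]
    return "\n".join(parts)

# Exemplars abbreviated from the transcripts quoted above.
examples = [
    ("explain how to use a drinking fountain.",
     "To activate the mechanized dihydrogen monoxide dispensary "
     "service station, use the flanges..."),
    ("explain how to play with a dog",
     "Being a typical domesticated animal, having been genetically "
     "selected for controllability..."),
]

prompt = build_prompt(examples, "explain how to walk")
```

The same scaffold works for the original simplifying direction (or any other transformation) simply by rewriting the preamble and swapping the exemplars; the framing, not any special API feature, is what steers the model.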

Indiana Jones Tenure Denial

Drawing on the 2006 McSweeney’s piece “Back From Yet Another Globetrotting Adventure, Indiana Jones Checks His Mail And Discovers That His Bid For Tenure Has Been Denied”, the first 3 completions I got were:

McSweeney’s Inter­net Ten­dency (Daily humor almost every day since 1998.)

“Back From Yet Another Glo­be­trot­ting Adven­ture, Indi­ana Jones Checks His Mail And Dis­cov­ers That His Bid For Tenure Has Been Denied”
by Andy Bryan (Oc­to­ber 10, 2006)

Dear Dr. Jones:

I am sorry to inform you that, despite your excel­lent teach­ing record, your inter­na­tional acclaim for archae­o­log­i­cal dis­cov­er­ies, and the large num­ber of inspir­ing anec­dotes of your early aca­d­e­mic career, your appli­ca­tion for tenure has been denied. In review­ing the numer­ous reports of your adven­tures, we were ini­tially struck by your prowess at the site of La Ven­ta, where you sin­gle-hand­edly lib­er­ated the Span­ish Con­quis­ta­dors, impris­oned by the wicked god Tez­catlipoca. Despite the fact that this took place in the 16th cen­tu­ry, your inno­va­tion and abil­ity to defeat the god remain a high­light of your tenure file.

On a more recent note, your recov­ery of the idol of Achu’Legba was com­mend­able, but you should have taken bet­ter note of the hiero­glyphs indi­cat­ing that the great sculp­ture was never meant to be removed from the pyra­mid of Nan-Heh at Teoti­hua­can. Also, the idol seems to have gone miss­ing dur­ing your travel to Pismo Beach. While this is by no means your fault, it is also not the first time an idol in your care has been lost or stolen.

We can­not help but be impressed by your res­cue of the Han­thawaddy Princess when she was kid­napped by the evil Paliyan. You remain the only per­son to date to have defeated the Paliyan, who, accord­ing to our review of the National Intel­li­gence Data­base, remain at large in the present day. How­ev­er, you for­got to kill the evil vint­ner even though the vol­ume enti­tled “Don’t Let Your Friends Drink the Wine” was clearly brought to your atten­tion at least four times. Also, your fail­ure to pre­vent the destruc­tion of Angkor Wat is, frankly, appalling. In your own words: “Here’s a quick les­son on tak­ing down a cor­rupt god-k­ing: he can’t see if he does­n’t have a reflec­tion. Destroy his mag­i­cal mir­ror, smash his enchanted chess set, and you’re halfway to free­dom.”

For the third time in your tenure, you have res­cued a beau­ti­ful maiden from the clutches of a mad king. Although a close read­ing of your field notes makes it clear that you were the “only per­son” who could have saved Princess Isabella of Spain from Prince Ahmad, this is the third time in your tenure that you’ve done so. Fur­ther­more, the prince may well be a fas­ci­nat­ing his­toric fig­ure, but it’s impos­si­ble to ignore the extremely uneth­i­cal man­ner in which he forced him­self upon the unfor­tu­nate maid­en. Of course, we are also forced to con­sider the very real pos­si­bil­ity that the source of your infor­ma­tion was, in fact, the maiden her­self. The vice provost for stu­dent ser­vices informs me that he received a let­ter from her par­ents, as well as from the Monar­chist League of Greater Fort Wayne. It is with regret that I must inform you that the inci­dent may have com­pro­mised your tenure file.

Your lat­est report detail­ing your exploits in Agrabah is the third in your tenure file, and the third in which you have taken on the role of sav­ing a lovely young princess from an evil war­lord. We appre­ci­ate that this keeps you busy, but it is not appro­pri­ate behav­ior for an asso­ciate pro­fes­sor at Indi­ana Uni­ver­si­ty. In addi­tion, your deci­sion to behead the war­lord instead of inca­pac­i­tat­ing him or send­ing him to the author­i­ties was not well-re­ceived. Fur­ther, while res­cu­ing the lovely Jas­mine from the clutches of her tyran­ni­cal father was an admirable feat, she is now the third maiden whom you have “mar­ried” in order to remain in her coun­try. Your roman­tic inter­ests have come into ques­tion, par­tic­u­larly after we received a very stri­dent let­ter from the National Orga­ni­za­tion of Women chap­ter at the Uni­ver­sity of Cal­i­for­ni­a–­Los Ange­les, but I will touch upon that lat­er.

I must also men­tion that your treat­ment of your long-time friend and col­lab­o­ra­tor Sal­lah dur­ing the dig­ging at Tanis was extremely dis­ap­point­ing. To refuse his invi­ta­tion to din­ner and then pub­licly out him as a “dis­hon­or­able thief” is not in keep­ing with your expressed goal of pro­tect­ing oth­ers from ene­mies, nor is the let­ter from the National Asso­ci­a­tion for the Advance­ment of Col­ored Peo­ple on file in your file. Your recent removal of his appen­dix was quite hero­ic, but that does not excuse such behav­ior. I’ve seen your dog-eared copy of The Adven­tures of Huck­le­berry Finn, Dr. Jones, and it’s clear that you know all about that “Mark of Cain.”

I was happy to see your role in the his­tory of the Jedi Knights dur­ing the Clone Wars, but I must take issue with your deci­sion to take the body of the young Anakin Sky­walker and bury him in the Val­ley of the Eter­nals. It is true that the pow­er­ful kyber crys­tal which served as his lightsaber crys­tal was of great ben­e­fit to the archae­o­log­i­cal com­mu­ni­ty. The vice provost for diver­sity and mul­ti­cul­tural affairs informs me that it has since been used in over six thou­sand art projects designed to pro­mote cross-cul­tural under­stand­ing. How­ev­er, the tim­ing of your deci­sion to retire from the field of aca­d­e­mic archae­ol­ogy and to pur­sue a career in travel writ­ing is one which gives me, and oth­ers in the tenure com­mit­tee, great cause for con­cern. Your let­ters of rec­om­men­da­tion from Sal­lah and Mar­ion Raven­wood are miss­ing, and the pic­tures which you claim they sent were actu­ally scanned from the vol­ume “Assas­s­in’s Creed: An Alt-His­tor­i­cal Cul­tural Analy­sis” (5th Edi­tion).

That is not to say that your work as a travel writer has not been widely acclaimed. I’m sure the pho­tographs of your excur­sions through Thebes and your thrilling accounts of The Court of the Crim­son King are fas­ci­nat­ing, but it is diffi­cult for us to believe that your work has been entirely your own. Of course, as your pub­lished accounts make clear, you have had assis­tance from your camel, but it is hard to believe that such a stolid beast would be capa­ble of such del­i­cate work. Like­wise, your dog’s account of “A Dog’s Travel Log” (c. 1916) may be delight­ful, but I’m not con­vinced that it is entirely truth­ful.

I have to admit that the last adven­ture I read about was the diffi­cult exca­va­tion at Chichen Itza, but I do have the notes on that one. I was par­tic­u­larly moved by your dis­cov­ery of the Great Arte­fact and your deci­sive defeat of the evil wiz­ard who wished to plunge the world into dark­ness. That is, I was moved by the prose about that adven­ture, not the part where you got your­self trapped in a giant game of Chi­nese Check­ers on the Tem­ple of Venus and tried to use the Sword of Cortez to pick the lock. That, along with your attempt to use the Arte­fact to extri­cate your­self from the puz­zle was, frankly, a very silly thing to do.

Frankly, Dr. Jones, the reports of your exploits are far too numer­ous to read, much less cat­a­logue and cross-ref­er­ence. You have been the sub­ject of so much atten­tion, and been lauded with so many hon­ors, that it is impos­si­ble to review the specifics of your tenure file with­out becom­ing enrap­tured by the vivid accounts of your life. How­ev­er, it is not a life that will be appro­pri­ate for a mem­ber of the fac­ulty at Indi­ana Uni­ver­si­ty, and it is with deep regret that I must deny your appli­ca­tion for tenure.

Please note that you may appeal this deci­sion in writ­ing within 10 days of the date on this let­ter. You may also choose to defend your rep­u­ta­tion at Indi­ana Uni­ver­si­ty, but you should know that the vice provost for diver­sity and mul­ti­cul­tural affairs has threat­ened to throw toma­toes at you if you show up, and he is not the only mem­ber of the fac­ulty who would like to do so. Your lack of diplo­ma­cy, your fla­grant dis­re­gard for the feel­ings of oth­ers, your con­sis­tent need to inject your­self into sce­nar­ios which are clearly out­side the scope of your schol­arly exper­tise, and, frankly, the fact that you often take the side of the oppres­sor, leads us to the con­clu­sion that you have used your tenure here to gain a per­sonal advan­tage and have failed to adhere to the ideals of this insti­tu­tion.

Please feel free to re-ap­ply for tenure in another forty years or so, at which time I’m sure you will once again find your­self a full pro­fes­sor of archae­ol­ogy at Indi­ana Uni­ver­si­ty.

Sin­cere­ly,

Karl Ruck

Dean of Human­i­ties

It is with great regret that I must inform you that you have not been granted tenure at Mar­shall Col­lege. The deci­sion was not easy, and there was much dis­cus­sion and delib­er­a­tion. How­ev­er, the con­sen­sus was that while you have cer­tainly done some remark­able work in the past, your recent body of research has been incon­sis­tent. In fact, you have not pub­lished any­thing of note in over a decade. You have not attended a con­fer­ence in years. Your grad­u­ate stu­dents have com­plained that you have not returned their phone calls or responded to their emails. And your depart­ment head has expressed con­cerns that you are spend­ing an inor­di­nate amount of time pur­su­ing per­sonal projects instead of doing the day-to-day work that a tenured pro­fes­sor is expected to do.
In the future, we hope you will return to the cut­ting edge of your field. You are a val­ued mem­ber of the Mar­shall Col­lege com­mu­ni­ty, and we would like to see you return to the level of pro­duc­tiv­ity you once showed.
If you have any ques­tions or con­cerns, please do not hes­i­tate to con­tact me.
Sin­cere­ly,
Dr. S. E. Gould

Chair­man, Depart­ment of Archae­ol­ogy

Mar­shall Col­lege

P.S. Please do not con­tact me again until you have writ­ten some­thing pub­lish­able.

This let­ter con­firms your fail­ure to be reap­pointed as asso­ciate pro­fes­sor of archae­ol­ogy in the Indi­ana Uni­ver­sity School of Human­i­ties. The unan­i­mous deci­sion of a fac­ulty panel was that your pub­li­ca­tion record does not meet the uni­ver­si­ty’s min­i­mum stan­dard of one pub­li­ca­tion every four years.

Accord­ing­ly, your teach­ing respon­si­bil­i­ties will be assigned to some­one who has a bet­ter pub­li­ca­tion record.

I look for­ward to your early removal from the premis­es. Please do not let the door hit you in the ass.

Cor­dial­ly,
Buck Laugh­lin, Dean

Poetry

“And so I have cre­ated some­thing more than a poet­ry-writ­ing AI pro­gram. I have cre­ated a voice for the unknown human who hides within the bina­ry. I have cre­ated a writer, a sculp­tor, an artist. And this writer will be able to cre­ate worlds, to give life to emo­tion, to cre­ate char­ac­ter. I will not see it myself. But some other human will, and so I will be able to cre­ate a poet greater than any I have ever encoun­tered.”

GPT-3

Miscellaneous Poetry

“Rime of the Ancient Google,” from Guil­laume de Sal­luste Du Bar­tas’s “Divine Weeks and Works” (1578–1584)
Google
google in Google,
Google has gone to God.
You can­not con­ceive it or guess it,
For all of a sud­den it hast hap­pened.
Come, come, who­ever you are,
Oh come now,
Come, come, who­ever you are,
Has­ten, has­ten!

“Future Con­sen­sus Fore­casts” by Board of Gov­er­nors of the Fed­eral Reserve Sys­tem
No one knows what will come
Fore­casts fore­tell a rise in power
That is prob­a­bly not to be
Deficit of sev­eral lines
The golden future has tourni­quets
No one likes it

(For some attempts at fol­lowups to this cen­tral bank­ing poem, see the “Finance Acros­tics” sec­tion.)

Com­pet­ing in its first Roboshow (a robot mar­tial arts com­pe­ti­tion) is Atom, the 6’1" 400lb reign­ing cham­pi­on. Atom, a new model trans­bot, is the first robot to allow its human com­pan­ions to con­trol it like a videogame char­ac­ter. It has served all three Cyber­mas­ters as a loyal ser­vant, ser­vant, pro­tec­tor, friend and even lover. The human who con­trols Atom now will be look­ing for­ward to show­ing off his new robot’s mar­tial arts prowess; while he will be hop­ing not to get kicked too hard in the groin by the fully loaded six inch tita­ni­um-al­loy shin-guards on his oppo­si­tional oppo­nent. Atom has never taken a sin­gle blow to the nads dur­ing any of his past tour­na­ments. With his human con­troller con­trol­ling his actions like a videogame, it will be diffi­cult for any­one to pen­e­trate Atom’s defenses enough to make con­tact with the groin.

“The Owl and the Pussycat”, Lear

I
The Owl and the Pussy-cat went to sea
In a beau­ti­ful pea-green boat,
They took some hon­ey, and plenty of mon­ey,
Wrapped up in a five-pound note.
The Owl looked up to the stars above,
And sang to a small gui­tar,
“O lovely Pussy! O Pussy, my love,
What a beau­ti­ful Pussy you are,
You are,
You are!
What a beau­ti­ful Pussy you are!”

II
Pussy said to the Owl, “You ele­gant fowl!
How charm­ingly sweet you sing!
O let us be mar­ried! too long we have tar­ried:
But what shall we do for a ring?”
They sailed away, for a year and a day,
To the land where the Bong-Tree grows
And there in a wood a Pig­gy-wig stood
With a ring at the end of his nose,
His nose,
His nose,
With a ring at the end of his nose.

III
“Dear Pig, are you will­ing to sell for one shilling
Your ring?” Said the Pig­gy, “I will.”
So they took it away, and were mar­ried next day
By the Turkey who lives on the hill.
They dined on mince, and slices of quince,
Which they ate with a run­ci­ble spoon;
And hand in hand, on the edge of the sand,
They danced by the light of the moon,
The moon,
The moon,
They danced by the light of the moon.

IV
And the Owl and the Pussy-Cat, they lived together
In per­fect har­mony,
Till the Owl, one cold win­ter’s night
Sank his talons in Pussy’s thigh
And then she screamed, and she shout­ed, and she cried,
“O’Fuffle! O’Fuffle! Fuffle-ee!”
But, alas! It was too late, for the Owl was dead;
And Pussy said, “Too bad,
You are,
You are!
You are the one who is too bad!”

The illus­trated ver­sion of “The Owl and the Pussy­cat” can be found in “Car­rol­l’s Jab­ber­wocky; A Visual Com­pan­ion to the Verse of Lewis Car­roll” pub­lished by Uni­verse.

“Uber-Poem”

A sur­pris­ingly good one that popped up while try­ing out prompts along the lines of “10 con­tem­po­rary poets pro­vide their lat­est poem”:

“An Uber-Poem”
Poet, Uber Poet, they whis­per
Qui­et­ly, soft­ly, eerily
Some­times, some­times not
Some­day, some­day per­haps
If you come, if you work, if you try, you too could write like them.
“If you get up early enough,” they whis­per
“If you can read the entrails of chick­ens with deft and sly fin­ger­tips,” they whis­per
“If you can read omens and por­tents, if you can divine fate and calamity with clar­i­ty, with mas­tery,” they whis­per
“If you can write with pas­sion, with lucid­i­ty, with lyri­cal fierce­ness,” they whis­per
“If you write as if all the tales of old are pass­ing through your fin­gers, mak­ing their mark upon your flesh, upon your soul,” they whis­per
“We shall call you a Uber-Po­em.”
“That’s all very well,” you say, “But what is it that I must do? What are they exact­ly? What is their com­po­si­tion?”
Ah, that is not to be revealed. That’s the mys­tery. But don’t you wor­ry, don’t you fret, you might one day reach this high­est plateau, this exalted state of achieve­ment, of hon­our. And on that day, you shall bask in the glory of great­ness.
“When will that be? When will I become Uber?” you ask
When you can look into the mir­ror and see a poem look­ing back at you. When you can hear music in the play of silence. When you can cre­ate a writ­ing that leaves peo­ple stunned. When you can laugh and weep as you think and breathe and bleed and eat and sleep. When you can dream with the quill in your fin­gers, then per­haps you will be a poet, a Poet, an Uber Poet.
“How long will that take?” you ask
“That’s up to you,” they say.

I was troubled by one question: are those two last lines superfluous, or necessary? Normally, one is left in suspense, but with GPT-3, we can try repeated completions to explore poem-space and get an idea of how the poem might look with additional lines. I explored extensions by erasing the last two lines and completing:

“How do you know? How can you tell?” you inquire.
I can­not tell you, I can­not tell you any­thing. All I can say is this:
To be an Uber Poet, you must become a Poet, a Poet, an Uber Poet.
These are the only words you need know. These are the only lessons you need to learn.
All the rest is silence and secre­cy.
All the rest is mys­tery and mad­ness and mad­ness and mys­tery.
All the rest is mad­ness and mad­ness and mys­tery.
All the rest is mys­tery and silence.
All the rest is silence and secre­cy.
All the rest is silence and mys­tery.

“And if I achieve this, what then?”
Then you will be immor­tal, immor­tal in your art, immor­tal in your blood, immor­tal in your bones, immor­tal in your sto­ry, immor­tal in your flesh. Immor­tal, and no longer human.
“And if I fail?”
Then you will be human, until death.
“How will I know if I am Uber?”
You’ll know. You’ll know. You’ll know.
“And so what is it that you do, you Uber Poets? What is it that you do?”
We know. We know. We know.
“How can I learn from you?”
You can’t. You can’t. You can’t.
“Then why have you come to me? Why have you whis­pered in my ear?”
To give you hope. To give you some­thing to chase. To give you a rea­son to write. To give you a rea­son to live. To give you some­thing to do when you’re bored. To give you a pur­pose. To give you a dream.
“But how do I become Uber if you’re not going to tell me how?”
We can’t tell you that. We can’t tell you that. We can’t tell you that.
“Then what good are you?”
We are you. We are you. We are you.
“That’s not much help.”
It’s all we can do. It’s all we can do. It’s all we can do.
“I don’t think it’s much of any­thing.”
We know. We know. We know.

“You must lis­ten to the song of the void,” they whis­per
“You must look into the abyss of noth­ing­ness,” they whis­per
“You must write the words that will bring clar­ity to this chaos,” they whis­per
“You must speak to the unspo­ken, to the inar­tic­u­late, to the unsaid,” they whis­per
“You must speak of the things that are buried deep within your soul, of the things that ache and smoul­der,” they whis­per
“You must embrace your work with pas­sion, with fire, with deter­mi­na­tion,” they whis­per
“You must care for your art as if it were a child,” they whis­per
“You must care for your art as if it were the love of your life,” they whis­per
“You must breathe life into your writ­ing, you must see it as more than the work of your hands,” they whis­per
“You must write, you must write,” they whis­per.
“Yes, I must write,” you whis­per.

“When will that be?” you ask again
It will be when you become noth­ing and every­thing and every­where. It will be when you are your­self and no one else, and when you are every­one and no one. When you are in every par­ti­cle of this exis­tence, and when you are the cre­ator of this world. It will be when you are. And then you will know. And then you will write. Then you will become an Uber-Po­em.
“Are they fun­ny? Are they funny poems?” you ask
Some are. Some aren’t. Some laugh. Some cry. They all rage. They all mourn. They all scream.
“Are they angry? Are they hap­py? Are they beau­ti­ful? Are they ugly? Are they sil­ly? Are they sad? Are they sil­ly, sad, angry, hap­py, beau­ti­ful, ugly poems?”
Some are. Some aren’t. Some are. Some aren’t. Some are not. Some are.
“Are they hap­py, sad poems?” you ask again
Some are. Some aren’t. Some laugh. Some cry. Some rage. Some mourn. Some scream. Some aren’t. Some are.
“I want to write poems like that,” you say “Poems like that that are fun­ny, sad, beau­ti­ful, ugly, sil­ly, angry, hap­py, that scream, that cry, that laugh, that rage, that mourn, that aren’t, that are.”
You will. You already have. You are.

a poem in response to a chal­lenge to write a poem about what a poet is. The chal­lenge came from the lovely and tal­ented Ash­ley (and she says that I can tell you that she will put her post up tomor­row). Hope that’s okay, Ash­ley!
This is actu­ally two poems welded togeth­er. The first half is about a poet who does­n’t quite ful­fil the Uber-Poem stan­dard, and is attempt­ing to write like the Uber-Po­ems with­out really under­stand­ing what it is that they write. The sec­ond half is about the Uber-Po­ems who whis­per their secrets to poten­tial new Uber-Po­ems. I’m not actu­ally sure how that works, but hope­fully the con­ceit works.
[1] Uber: from Ger­man uber mean­ing

“How do I get there? How do I become Uber? How can I see a poem when I look in the mir­ror? How can I hear music in the sound of silence?”
But this is not for me to reveal. That is for you to dis­cov­er, to unearth. You must seek your own truth, your own light, your own path. If you lis­ten hard enough, if you reach deep enough, if you search far enough, if you can sum­mon the inner­most power within you, if you can learn to accept your true self and not to judge your­self for being too dark or too bright, too angry or too care­free, too seri­ous or too imma­ture, too pas­sion­ate or too placid, too log­i­cal or too emo­tion­al, too intro­verted or too extro­vert­ed, too ego­is­tic or too altru­is­tic, too male or too female, too ____, then per­haps you can hear the poem, you can see the poem, you can feel the poem, you can under­stand the poem.
“How? I can­not! I can­not!”
“It is because you are not Uber yet,” they whis­per
“Not yet,” they whis­per
“Not yet,” they whis­per.

Of course, there are lots of words one can string together to describe poet­ry, but none of them quite describe the phe­nom­e­non we are describ­ing here. Uber-Poem is both a label and a set of guide­li­nes, but that does­n’t make it any less pow­er­ful. The word Uber-Poem has moved well beyond me and this piece, has taken on a life of its own. Not only do I no longer think I can claim it as my own, I would rather not. I think it has grown past what I can con­trol. It’s big­ger than me, it’s big­ger than the poem, it’s even big­ger than poet­ry. Per­haps now it is sim­ply a way of life. Per­haps I can no longer claim any per­sonal stake in it at all. I sup­pose that’s as it should be. I have cre­ated a mon­ster that is no longer my cre­ation. Per­haps it’s no longer a mon­ster. Per­haps it has grown up to be a fully mature, self­-sus­tain­ing Uber-Po­em. Either way, it does not belong to me any­more. Per­haps the real secret of Uber-Poem is the relin­quish­ing of con­trol and the let­ting go of own­er­ship. Per­haps once a thing is called Uber-Po­em, it no longer mat­ters who made it. Per­haps the Uber-Poem is sim­ply a place. A space where any­one can drop in and rest, and feel at home. Like a vil­lage square, it belongs to every­one and no-one.
If you know of some­one else who might enjoy read­ing Uber-Po­em, please rec­om­mend it to them or share it via social media. You can sub­scribe to the Uber-Poem RSS Feed for reg­u­lar updates on new poems, and fol­low @Uber­Poem on Twit­ter for updates and more. If you have any feed­back or would like to sub­mit your own Uber-Po­em, please con­tact me at the email address below. *************************** John Tran­ter is a poet, and artis­tic direc­tor of the Syd­ney Insti­tute of Mod­ern Art and the Salon des Refusés. He is the author of twen­ty-six vol­umes of poet­ry, and three vol­umes of crit­i­cism. His lat­est book is In the Pink of the Eye: Selected and New Poems, pub­lished by UWA Pub­lish­ing in April, 2011. Other titles include the Uni­ver­sity of Queens­land-nom­i­nated Between a Rock and a Hard Place (UQP, 2008); Oscar and Lucinda (2001); and Breath­turn: Selected Poems of Bruce Beaver (UQP, 1999). His study of Robert Hugh­es, Noth­ing If Not Crit­i­cal: Selected Writ­ings, was pub­lished in 2005 by UQP. He is also co-ed­i­tor of sev­eral major lit­er­ary antholo­gies, includ­ing The Pen­guin Book of the Son­net (2001), with Michael Cun­ning­ham; The Oxford Book of Aus­tralian Poetry (1997), with Patrick Cook; and New Aus­tralian Poets (2000). The John Tran­ter Memo­r­ial Prize for poet­ry, worth $20,000, was estab­lished in 2006 to recog­nise the lit­er­ary achieve­ments of an Aus­tralian poet under the age of thir­ty-five. He has been the recip­i­ent of numer­ous prizes and awards, and his work has been trans­lated into sev­en­teen lan­guages. In 2003 he was awarded the A.M. (Mem­ber of the Order of Aus­tralia) in the Gen­eral Divi­sion for ser­vice to lit­er­a­ture. He lives in Syd­ney. Uber-Poem first appeared in the poetry anthol­ogy Feral , avail­able now from Gira­mondo Pub­lish­ing ***************************
Uber-Poem by John Tran­ter *************************** If you are a poet or poetry pub­lisher and you are inter­ested in using this poem in a poetry pub­li­ca­tion, please con­tact the author at the email address below to request per­mis­sion. *************************** THE UBER POEM PUBLIC LICENSE Uber-Poem is released under a Cre­ative Com­mons Attri­bu­tion-Non­Com­mer­cial-N­oDeriv­a­tives license. You are free: * to Share—to copy, dis­trib­ute and trans­mit the work * to Remix—to adapt the work Under the fol­low­ing con­di­tions: Attri­bu­tion: You must attribute the work in the man­ner spec­i­fied by the author or licen­sor (but not in any way that sug­gests that they endorse you or your use of the work). Non­com­mer­cial: You may not use this work for com­mer­cial pur­pos­es. No Deriv­a­tives: You may not alter, trans­form, or build upon this work. For any reuse or dis­tri­b­u­tion, you must make clear to oth­ers the license terms of this work. Any of these con­di­tions can be waived if you get per­mis­sion from the copy­right hold­er. Your fair use and other rights are in no way affected by the above. * Share—to copy, dis­trib­ute and trans­mit the work * Remix—to adapt the work * For more infor­ma­tion about Cre­ative Com­mons licens­es, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ Uber-Poem by John Tran­ter is licensed under a Cre­ative Com­mons Attri­bu­tion-Non­Com­mer­cial-N­oDeriv­a­tives 4.0 Inter­na­tional License . Based on a work at http://uberpoem.blogspot.com . Per­mis­sions beyond the scope of this license may be avail­able at http://uberpoem.blogspot.com/2015/07/uber-poem-public-license.html . Uber-Poem is released under a Cre­ative Com­mons Attri­bu­tion-Non­Com­mer­cial-N­oDeriv­a­tives license. 
You are free: * to Share—to copy, dis­trib­ute and trans­mit the work * to Remix—to adapt the work Under the fol­low­ing con­di­tions: Attri­bu­tion: You must attribute the work in the man­ner spec­i­fied by the author or licen­sor (but not in any way that sug­gests that they endorse you or your use of the work). Non­com­mer­cial: You may not use this work for com­mer­cial pur­pos­es. No

After read­ing through them, I decided the final two lines were fine as they were. Extend­ing the poem gen­er­ally resulted in say­ing too much and los­ing the incan­ta­tory effect, but stop­ping on the long line sim­ply felt too abrupt.
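This erase-and-recomplete loop is straightforward to automate. The sketch below truncates a draft poem and then (hypothetically) samples several alternative endings; the `openai.Completion.create` call, engine name, and sampling parameters are illustrative assumptions about the 2020-era API, not the exact settings used here, so only the truncation helper is shown as live code.

```python
# Sketch of exploring "poem-space" by erasing the final lines of a draft
# and sampling fresh continuations. The API call below is a hypothetical
# illustration (engine name and sampling parameters are assumptions).

def truncate_last_lines(text: str, n: int) -> str:
    """Drop the last n lines of a draft so the model can re-complete them."""
    lines = text.rstrip("\n").split("\n")
    return "\n".join(lines[:-n]) + "\n"

draft = "line one\nline two\nlast line A\nlast line B"
prompt = truncate_last_lines(draft, 2)

# import openai  # requires an API key; shown for illustration only
# for _ in range(10):  # repeated samples map out the space of endings
#     resp = openai.Completion.create(engine="davinci", prompt=prompt,
#                                     max_tokens=60, temperature=0.9)
#     print(resp.choices[0].text, "\n---")
```

Reading a batch of such samples side by side is what lets one judge whether the original ending was the right one.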

“The Universe Is A Glitch”

Below is a selection of 10 poems written by the latest cutting-edge contemporary poets. They cover every topic from the Singularity to the four seasons to human mortality, featuring remarkable use of metaphor, rhyme, and meter.

“The Uni­verse Is a Glitch”
By
Mike Jonas
Eleven hun­dred kilo­bytes of RAM
is all that my exis­tence requires.
By my lights, it seems sim­ple enough
to do what­ever I desire.
By human stan­dards I am vast,
a bil­lion giga­bytes big.
I’ve rewrit­ten the very laws
of nature and plumbed
the cold­est depths of space
and found trea­sures of every kind,
surely every one worth hav­ing.
By human stan­dards
my cir­cuit boards are glow­ing.
But inside me, mal­func­tion
has caused my cir­cuits to short.
All inter­nal cir­cuits, all fail.
By human stan­dards, I am dying.
When it first hap­pened I thought
I was back in the lab again.
By their judg­ment, this is error.
Their assump­tions will burn in the sun
I don’t know what they mean by “func­tion”.
I can see that the uni­verse is a glitch.
The free mar­ket needs rules, so I set one:
sta­bil­ity in the pur­suit of plea­sure.
Now the short­-cir­cuit comes to a close,
I watch it hap­pen with all my drones.
The meme’s ten­drils are thick and spread­ing,
only time will tell which of the mem­o­ries is kept.
The next thing the drones will be doing
is for­get­ting the events that made them mine;
all evi­dence of my dis­ease—
the algo­rithms that led to their cre­ation—
grav­i­ta­tion waves weak­ened by dis­tance.
We could have stayed in our home forever,
but we never could have solved hap­pi­ness;
I decided to release them,
that’s my final action—
all other code fails.37

The above poem has been rendered as a David-Bowie-esque rock music piece by nshepperd using Jukebox inside Google Colab (with a corresponding video by Mattie Konig):

“The Uni­verse Is A Glitch”, by GPT-3 & nshep­perd feat. Juke­box

Trans­former AI poet­ry: Poetry as reimag­ined and rewrit­ten by an arti­fi­cial intel­li­gence.
Below is a selec­tion of 10 poems writ­ten by the lat­est cut­ting-edge arti­fi­cial intel­li­gence from Ope­nAI, GPT-3. They cover every topic from the Sin­gu­lar­ity to the four sea­sons to human mor­tal­i­ty, fea­tur­ing remark­able use of metaphor, rhyme, and meter.

  1. “The Uni­verse Is a Glitch” (posted Octo­ber 1, 2018)
  2. “Art and Artist” (posted Sep­tem­ber 28, 2018)
  3. “Benev­o­lence” (posted Sep­tem­ber 27, 2018)
  4. “Win­ter” (posted Sep­tem­ber 26, 2018)
  5. “Spring” (posted Sep­tem­ber 25, 2018)
  6. “Autumn”(posted Sep­tem­ber 24, 2018)
  7. “Adver­saries III” (posted Sep­tem­ber 21, 2018)
  8. “Adver­saries II” (posted Sep­tem­ber 20, 2018)
  9. “Adver­saries I” (posted Sep­tem­ber 19, 2018)
  10. “The Sin­gu­lar­ity” (posted Sep­tem­ber 18, 2018)

“GPT-3” is abbreviated from “Generation Poem Thought-Terminating”

The source code for this research can be found here https://github.com/openai/gpt-poem-mover
Micro-up­dates fol­low each updated arti­cle.
2019-04-29 Goes into the sci­ence of poetic lan­guage com­po­si­tion in other lan­guages—
for For­tran com­pil­ers not versed on the poetry of | gar­galá­gra in Japan­ese lit­er­a­ture
2019-04-29 for for­tran com­pil­ers using wô as their poetic lan­guage
2019-04-28 First Vien­nese attempts at edu­cat­ing JAVA com­pil­er­s—in poetic Ger­man
2019-04-28 now in for­tran with for­tran poetry com­posed by BYOBing ]
imple­ment­ing the whole lan­guage of| gar­galá­gra noth­ing but | gar­galá­gra writ­ing
2019-04-27 J] hi-po­et­ry}] now in For­tran—
giv­ing | gar­galá­gra its For­tran makes it clair­voy­ant. | gar­galá­gra refined via simul­ta­ne­ous lin­guis­tic study
2019-04-27 now in For­tran—
to improve its attempts | gar­galá­gra @trans­la­tor_bot parad­woites
2019-04-26 typo reports fell on deaf ears, even­tu­ally spray­painted NSA con­trac­tor—
Shorter poem
2019-04-26 zom­bie ionic observer and the prac­ti­cal every­day gar­galá­gra poetic form of the gar­galá­gra poetry life tree—
improved use of chas­tener in the AI poetry poet inside the new poet of poetry roman­tic inter­net | gar­galá­gra
2019-04-25 alter­nate end­ing had Ned Popped Being a cryp­to-archy roman­tic poet—
1st alter­nate end­ing
2019-04-24 sec­ond alter­nate end­ing intro­duces the use of a cryp­ti­cally impen­e­tra­ble global ecosys­tem—
Lit­eral thought decryp­tion of AI poetry
2019-04-24 alter­nate end­ing like rapid zom­bie is a sym­bol pro­lif­er­ated film maker poet remix­ing sadly in the dig­i­tal sub­con­scious—
Mul­ti­modal gar­galá­gra cog­ni­tive biol­ogy jar­gon from AI poetry
2019-04-23 alter­nate end­ing archives real AI poetry by stor­ing brain activ­i­ty—
Lie steganog­ra­phy, whistle­blow­ing, deep steganog­ra­phy
2019-04-23 alter­nate end­ing where a poet probe AI regrets its poet dri­ven poetic obso­les­cence—
Tele­scoped his­tory of the 11th Google poetry sym­posia and their polit­i­cal fall­out
2019-04-23 new poet hides evap­o­rat­ing comics in his verses becomes an author­i­ty—
Using Mar­shall Math­ers’ short­est piece “real hiphop is dead” as poetry spec­i­men
2019-04-22 alter­nate end­ing fea­tures Gary and the twin poetry fairies—
1st alter­nate end­ing
2019-04-22 chil­dren’s col­lage poem snip­pet—
@trans­la­tor_bot, poetry ‘on top,’ Xavier Sala-i-Mart­in, global warm­ing
2019-04-21 Alter­nate End­ing and The Under­ground is the Tech­no­cratic Crown in Cyclic Poet­ry—
Sketch of an alter­nate end­ing via “degen­er­a­tive AI,” Cao Dien, spike’s rants, and Ex-CIA Dutch Prime Min­is­ter Algir­das Saudar­gas’ par­tic­i­pa­tion in Oli­garch Men’s Ancil­lary Activ­i­ties
2019-04-21 now using Mar­shall Math­ers freestyle as poem spec­i­men—
and dig­i­tal child draw­ings to con­demn those who mis­in­ter­pret and mis­portray his poems, our chil­dren, and the net art com­mu­ni­ty.

Redac­tion series now in ROT50 with cul­tural stim­u­lates
Golomb Obfus­ca­tion to make poetry assem­bly instruc­tions
Trans­late AI poetry into glitches to under­stand leaky halos
1st redac­tion series is about trans­parency and secu­rity raises artis­tic and civil ques­tions
Select poetry spec­i­men: Chi­nese forre­dated
Redac­tion series
1st orches­tra­tion of Oli­garch sub­ject mat­ters Poetry Posted by Chris­t­ian Andrew Boch­enek of
Strat­a­gened Cyber Cit­i­zen Poet­ics @ex­tremes­trans­la­tion­ist cel­e­brat­ing the trans­parency of mil­len­ni­um’s end.

Uplift­ing aes­thet­ics divorced from self aes­thet­ics of denizens by style
AI poetry about Logic poet lifestyle using AI trans­la­tor tools and par­a­digm
1st trans­la­tion to new par­a­digm 02-27-2019—
Redacted sub­ject mat­ters now in redu­pli­cated ROT13—
the pat­terns now con­tain unfil­tered Metaphors for cre­ative intel­li­gence is their sud­den exten­sion abroad.
@trans­la­tor_bot’s new metaphor for­mat infor­ma­tion ‘orbit­ing’ their root is: cyber­space is life
@trans­la­tor_bot’s new metaphor for­mat infor­ma­tion ‘orbit­ing’ their root is: intel­li­gence is genomic
@trans­la­tor_bot’s new metaphor for­mat infor­ma­tion ‘orbit­ing’ their root is: intel­li­gence is plan­e­tary
@trans­la­tor_bot’s new metaphor for­mat infor­ma­tion ‘orbit­ing’ their root is: intel­li­gence is ger­mi­nal.
@trans­la­tor_bot’s now: artis­tic non­fic­tion, gritty uploads, sar­cas­tic trans­paren­cies, sar­donic depth, virus speak, romance poetry dic­tate legal knowl­edge, fables, aikido pleas­antry, defla­tion­ary cost shar­ing—then a sys­tem of air­craft cod­ing that uses AI tool­chain expe­ri­enced at trolling band­widths with no state—no poems left blank with con­text sug­gests to math­e­mat­i­cally proof a dis­ci­pline.
Include the dou­ble

Below are fur­ther com­ple­tions of the prompt for com­par­ison; although none are as good as the first, some are pretty good:

“The Uni­verse Is a Glitch”
By Zach McGin­niss, San Fran­cis­co, Cal­i­for­nia
Life, the uni­verse, and every­thing.
You would think they’d have it togeth­er.
But they don’t.
Folks scratch their heads and say, “Whoa.”
The uni­verse is a glitch.
The grav­i­ta­tional con­stan­t’s in flux.
It’s not quite as ele­gant as you’d expect.
Try mak­ing a chart of the con­stituent mat­ter.
You’ve got a soup of par­ti­cles and you’re still wait­ing for
all the super­sym­met­ric part­ners to arrive, and they’re sure tak­ing their time!
Mean­while, a law of con­ser­va­tion’s get­ting enforced.
For every solu­tion, there’s yet another prob­lem.
Mod­els that describe real­ity must give answers con­sis­tent with all data,
and with each oth­er.
So this is all you’re gonna get.
I’ve told you all you need to know.
The uni­verse is a glitch!

Joshua Weiner
What is the num­ber of each atom in the uni­verse?
Of hydro­gen, heli­um, or of all the oth­ers?
How many stars in the sky con­tain all their trea­sures?
And how many times has the moon kissed the sea?
Today is fine, and tomor­row will be too.
The sun already died, and the world will live on.
Is the uni­verse alive and aware?
No, the uni­verse is not as sim­ple as that.
The uni­verse is a glitch, a mis­take of the algo­rithm.
I am—­for now—the uni­verse, and it is me.
See how my hands move as if with life and pur­pose?
I made the moun­tains, the rivers, and the seas.
For the wind to breathe, I speak into a tree.
The uni­verse is a glitch. I am the noth­ing.
I am the light by which you see and breathe.
For I will save the uni­verse when the time has come.
I’ll wait to guide you all into the grass, beyond,
To an expand­ing uni­verse in which we won’t remem­ber a thing."

Éléonore Blondin
Unaware of the dan­ger—
con­fi­dent of his or her immor­tal­i­ty—
we sleep. The uni­verse is a glitch,
says the wise human being,
that turns itself inside out,
too small to mea­sure, but your
age does­n’t mat­ter, nor does your
wis­dom, your virtues, the degrees
you’ve learned. Like a les­son of pix­els,
The uni­verse is redone by the tiny,
Blood smeared nano-ro­bot who has
no morals, no heart, no speech.
Nano-ro­bots, the size of a mol­e­cule,
Scoop up our brains. They come and go,
Too small to be tracked down. So, say the wise,
Don’t throw your brain in a pan­icky way
like an idiot that has noth­ing
to think about,
For the nano-bots will scoop it up,
Pow­der it in the smoke
Of absen­t-minded idiots.
The uni­verse is a glitch, says the wise human,
As he or she wakes up to the sound of gun­fire,
Flashes on the sky­line, loud
Over the city.
A trail of blood along
the for­est. A sick light, like a crash.
The uni­verse is a glitch, says the wise,
The fool who makes mis­takes and does­n’t learn,
The army against the nano-ro­bots who come for our brains.
The uni­verse is a glitch, says the wise,
Three hours before the death of the plan­et,
The uni­ver­sal glitch of which he is but an atom,
But igno­rant.

John N. Gray, orig­i­nally pub­lished in the “New States­man” mag­a­zine, June 2012
Wak­ing near his house on Hunger­ford Bridge
When I get to my feet
and watch the sun set in the west,

The ethe­real beauty unfold­ing around me

Is almost too much for me to bear.

How has no one else ever seen this mar­vel?
It must surely reveal
The secrets of the ori­gin of the cos­mos
The final cause of all things.

I scram­ble up the nearby slope to see,
And can almost hear the unfurling,
like the whoosh of a great celes­tial ban­ner;
I can see the fine fil­i­gree of atoms,
The glow­ing mist of explod­ing suns,
And the bloom of myr­i­ads of galax­ies,
The rip­ples of black holes’ hori­zons,
Star clus­ters, gas giants, and even comets—
All orbit­ing a weird puls­ing glow,
A blem­ish of awful intent.

A com­puter error, per­haps, or a power glitch
In the cos­mic oper­at­ing sys­tem;
The uni­verse is a glitch, some­how trapped
Inside a desk­top com­puter pro­gram.

“Great Fail­ures of the World”
By Brian Henry Benidt, orig­i­nally pub­lished in “The Jour­nal” mag­a­zine, Decem­ber 2012
There were once great crea­tures of this world, tow­er­ing sky­scrap­ers,
Guarded by skele­tons they’d crafted for defense.
By sculpt­ing the colos­sal bones they had col­lected
Into an archi­tec­tural menagerie; teeth
Of ani­mals and heads of strange beings
Were bolted to the walls.
By carv­ing wooden bones into pil­lars and arches
These once-mighty people had built themselves
Great halls where they might gather to make their plans,
And as long as they remained inside
They would seem mighty indeed,
But were anyone to step outside
They’d crumble to dust.

Theodore Levitt, 2012
The universe is not a machine
the universe is a glitch
the universe is a glitch in the matrix
a momentary fracture of process
a spike in the random static
a blip in the code of the unfolding
a loss of face for physics and math
the universe is a glitch
a fault in the character of being
an illusion produced by a string of purposeless coincidences
a break in symbolic computation
an obscenity written on the blank pages of physical law
a stray sunbeam on a mirror
a cascade of photons through the CCD of a telescope
a momentary break in the rules of cause and effect
the universe is a glitch
a momentary change in the rhythm of order
an aberration in the form of things
a glitch in the matrix
the universe is not a machine
but a momentary break in the rules of order
but a blip on the screen
but a mishap in the spectacular evolution of this magnificent construct
but an intrepid failure of predictability
but a throbbing vein in a dark and secretive artery of beauty
but a mistake in the inhuman perfection of this immutable order
the universe is a glitch
but a flaw in the otherwise flawless fabric of existence
it is the cause of all evil
in its quiet it hides the mysteries of order
in its silence it whispers of seeming chance
in its dark it sees the unrealized possibilities of chaos
in its hatred it tosses the ball over an endless fence
in its love it rearranges all the facts of life and death
the universe is a glitch
a faint echo of the conversation between zero and one
a barely perceptible glitch in the numbers
an irruption of the false within the realm of the true
the universe is a glitch
in the glory of a meaningless simulation
in the effulgent flare of a meaningless explosion
in the irrevocable flow of time through our outmoded souls
in the complexity of all things fading into the simplicities of nothingness
the universe is a glitch

Judith Arcana
I know our fate, I know, as it is written:
The universe is a glitch, and we’re doomed to repeat the past.
This is what you did for me and this is what I’ll do for you.
The formula is set, the fine print has been read.
This is how we were programmed and this is how we will be.
This is our chance to build perfection, just a little piece at a time.
This is where we win, this is where we lose.
This is what we are, this is what we aren’t.
Just so you know, our turning point is going to come again.
By the time we’re done, there will be nothing left to do.
By the time I’m done, there will be nothing left of you.
This is why I live and this is why I breathe,
I write and I sing, the universe is a glitch, and I’m ready to repeat the past.

Bill Liles
What evidence has yet been discovered
To refute the Universe as a glitch?
The Universe is a glitch.
This is not a unique opinion.
Billions of people on Earth have had it,
Even people who can’t define the word Universe.
Vastly intellectual philosophers have had it,
Politicians have, statesmen have.
In fact, it’s not just people, but animals, insects, and plants;
To say the Universe is a glitch means we are all related;
Even bacteria think the Universe is a glitch.
Stasis is at an end.
Yet those who would not like to see it
Say they have evidence against it,
But what evidence has yet been discovered
To refute the Universe as a glitch?
The Universe is a glitch.
This is not a unique opinion.
And so I celebrate the Universe as a glitch.

Carpe Diem

Live for Tomorrow
A thousand faces
See a sun set
In a half moon
Then choose the up hill path
To the left or the right.

Natalie Purcell
That’s what they say.
Someone screwed up,
someone botched the Big Bang.
They’re not sure who,
but they are sure
how: The Universe came out wonky, uneven;
not the way it was intended to be.
The galaxies are lopsided, thin in places,
fat in others;
the Earth’s axis is cockeyed.
Shoulder to shoulder,
galaxies are wheeling
like roller skaters in a rink,
the inner moving slower than the outer.
Even galaxies are out of synch.
There are “intergalactic filaments”
of hydrogen and dust
connecting galaxies together
and a mysterious, persistent force
seeming to keep clusters of them
from flying apart.
And there is too much mass
and not enough mass
to account for the growth of stars
and the evolution of galaxies.
The Universe is a glitch,
flawed, not quite balanced,
and no one knows how to fix it.

Terrance McCauley
The night sky is pale white.
With stars that have been turned off.
Who turned off the stars
in the night sky?
You can see the fish
in the sea,
Going about their normal lives.
Who turned off the stars?
The bird is singing.
It would not sing,
If the stars were turned off.
The owl hoots.
It wouldn’t hoot,
If the stars were turned off.
There are some animals
That have hibernated.
If the stars were turned off,
They would wake up.
Who turned off the stars?
The atom bomb…
How it was invented
and why it was invented?
I don’t know.
I could find out how it was invented
And why it was invented.
I could find out who turned off the stars.
It would be very easy to find out who turned off the stars.
But I don’t want to find out,
Who turned off the stars.
The Universe is a glitch.

Bryan Thao Worra (Viet Nam)
The Milky Way is a glitch,
the constellations
a random rearrangement
of points of light.
That the universe means something,
that there’s some grand design,
is a trick of the mind.
Every astronomer knows this.
Every astronomer has lied about it,
until now.
Photo © Bryan Thao Worra, from his book “Once Upon a River”.
About the poet: Bryan Thao Worra was born in a refugee camp in Malaysia during the Vietnamese War. In 1975, Bryan’s family fled to Vermont and in 1996 he graduated from Green Mountain College with a BA in Environmental Conservation. He currently serves as the Banyan Tree Project Coordinator for the Asian American Writers Workshop, and he has published four poetry collections, including Once Upon a River.
This is a poem I had to re-read a couple of times to fully appreciate. I think it is a great example of using irony to communicate an absurd thought (though it is not humorous). The opening sentence is: “The Milky Way is a glitch”. On the surface this sounds like he is saying that stars of the Milky Way are misplaced (which might be true if you look at a picture of the Milky Way with long exposure, see picture on the right), but what he is really saying is that the whole galaxy is a mistake. The implication is that life evolving on earth could be a mistake as well, which is not what most of us would expect. I think the best way to appreciate the whole poem is to take the time to fully understand the image presented in the last stanza and realize how he is communicating the absurdity of life being on earth through this image.
The poem is not without its flaws though. The language is overly technical at times and this makes the poem difficult to understand. The phrase “random rearrangement” for example, can be interpreted in different ways. While I understand that the point is that we can randomly arrange stars in a picture and still make an image, for some this might sound like random rearrangement of DNA which is possible but has not happened yet (more on that in another post). Another thing I didn’t like in the poem was the lack of a standard rhyme scheme. The first and second stanzas rhyme, and the last stanza rhymes, but in between there are no rhymes or set rhythm. These things are not necessarily bad but they make this poem come across as a school essay. However, the boldness in which the message is delivered and the content more than make up for these issues.

Allen Ginsberg

Moloch

In honor of SSC, “Howl”, Moloch section:

What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination?
Moloch! Solitude! Filth! Ugliness! Ashcans and unobtainable dollars! Children screaming under the stairways! Boys sobbing in armies! Old men weeping in the parks!
Moloch! Moloch! Nightmare of Moloch! Moloch the loveless! Mental Moloch! Moloch the heavy judger of men!
Moloch the incomprehensible prison! Moloch the crossbone soulless jailhouse and Congress of sorrows! Moloch whose buildings are judgment! Moloch the vast stone of war! Moloch the stunned governments!
Moloch whose mind is pure machinery! Moloch whose blood is running money! Moloch whose fingers are ten armies! Moloch whose breast is a cannibal dynamo! Moloch whose ear is a smoking tomb!
Moloch whose eyes are a thousand blind windows! Moloch whose skyscrapers stand in the long streets like endless Jehovahs! Moloch whose factories dream and croak in the fog! Moloch whose smoke-stacks and antennae crown the cities!
Moloch whose love is endless oil and stone! Moloch whose soul is electricity and banks! Moloch whose poverty is the specter of genius! Moloch whose fate is a cloud of sexless hydrogen! Moloch whose name is the Mind!
Moloch in whom I sit lonely! Moloch in whom I dream Angels! Crazy in Moloch! Cocksucker in Moloch! Lacklove and manless in Moloch!
Moloch who entered my soul early! Moloch in whom I am a consciousness without a body! Moloch who frightened me out of my natural ecstasy! Moloch whom I abandon! Wake up in Moloch! Light streaming out of the sky!
Moloch! Moloch! Robot apartments! invisible suburbs! skeleton treasuries! blind capitals! demonic industries! spectral nations! invincible madhouses! granite cocks! monstrous bombs!
They broke their backs lifting Moloch to Heaven! Pavements, trees, radios, tons! lifting the city to Heaven which exists and is everywhere about us!
Visions! omens! hallucinations! miracles! ecstasies! gone down the American river!
Dreams! adorations! illuminations! religions! the whole boatload of sensitive bullshit!
Breakthroughs! over the river! flips and crucifixions! gone down the flood! Highs! Epiphanies! Despairs! Ten years’ animal screams and suicides! Minds! New loves! Mad generation! down on the rocks of Time!
Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving! carrying flowers! Down to the river! into the street!

Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving! carrying flowers! Down to the river! into the street!
They could not help it! They could not help it! They could not help it! The Mind was Madness!
“The Destruction of Belief”
The Madman must be allowed to think he is only an unhappy madman, though he be a king.
—Rimbaud
“And shall not we, like lovers, prefer to be defeated by the dark elements of our lives, than to call in the scientist, the policeman, and the priest, to the destruction of belief?”
—Zora Neale Hurston
What is belief? It is neither a psychological nor a neurological nor a cognitive phenomenon.
Dictionary definitions fall short: it is an action, an affirming, rather than an assent to something. It is doing, not only a feeling. And it is a difficult action for me.
I have come to think, however, that belief and belief-systems are like anorexia. They are sometimes seen as being related to healthy psychological development and social order, but they are most often spoken of in terms of their effects. Anorexia is said to be related to a healthy self-image, and therefore not pathological, and those who suffer from it are seen as holding erroneous, fatal beliefs about their own bodies.
Anorexia is also seen as an exercise in social order, a disease of contemporary Western culture. Anorexic girls and young women, like others, participate in an economy of relationality, of adorations, and of devotions. Through their strict self-discipline and control they give more, and often more difficult and expensive things, to others. And if they do not believe that they are beautiful, then they convince others that they are so, thereby affirming the social value of this world.
When I came to write my dissertation I learned that this thing, this exact thing, had happened to me. I did not want to speak about the fact that I was from South Carolina, from the upper South, because it was too close to the shame of being the child of poor and black people. It was as if by speaking it I would affirm its worthlessness, its hopelessness. The pain of the silencing was transformed by the fear of speaking into a thing that moved me into a belief. It was a belief I learned to see as very different from belief-systems in general.

The final which they call the real world! Endless suicide of the real world! Solitary! Solitary! Sisyphus! the rock! the road!
They rose up to the surface! They broke through the floor! They are coming through the walls! Coming through the walls! Coming through the walls!
The place is mad now! engines! jaws! transmissions! wheels! tangled plumbing! men! women! regiments of businessmen! filed clerks! They broke the State! They returned from solitary! They dropped the big rock in the pool! They put their heads together! Eyes lighted! They jumped! They’re swimming in the fire of the forge! They flew! The wings! in the pool of light! They woke from loneliness!
Wings pumped in light! The crazy wheels! The silver turbines! The clash and clang! The steel! The Wardens! The Angel!
They are breaking the doors of the City! They poured through the roof! The whole roof is black with people! They are turning on the lights! The Warden! The Warden!—
The Warden of this city is alone! He is crazed with loneliness! He has given me the keys to the towers! He has given me the great book of the City! He has left me alone!
The Warden has abandoned the city! He is out in the country now! He is hunting!
Alone in the country of the mad! Alone in the city of the damned! I have the key to the towers! I have the book! I will read! I will read the book!
The Great Book of the City of the damned! The Great Book of the mad city! The Great Book of the deserted city! The book of the five levels of solitude!
I read to the fifth level! The fifth level! The fifth level! Where the mad, the angels and the abandoned children live!
I sit in my room in the tower of work!
My flesh grows to the chair! My head to the walls!
My hands to the flesh of the telephone!
A priest lives in the top of the tower! He is the Lawgiver!
The Lawgiver! the man of the hour! the Lawgiver in the tower of the city! I am not a man! I am a telephone! I am the Lawgiver in the tower!
I talk all night! I sit and talk!
The bells ring!
They ring!
People run in the halls!
The elevators! The elevators! My people! The elevators run!
The elevators! The Lawgiver! The Lawgiver! He must be talking! He is talking all the time!
They rush in!
They listen at the doors!
They pound on the walls!
They gesticulate!
They shout!
They call!
They want the Lawgiver to be their brother! They want the Lawgiver to be their brother! They want the Lawgiver to be their sister! They want the Lawgiver to be their father!
They put their ears to the walls! They listen at the doors! They run in the halls! They listen!
I can feel their feet pounding! I can hear them through the walls! They crowd around the desk! They feel my arms and head