This Waifu Does Not Exist

I describe how I made the website (TWDNE) for displaying random anime faces generated by StyleGAN neural networks, and how it went viral.
anime, NN, Python, shell, technology, GPT, tutorial
2019-02-19–2020-01-20 · finished · certainty: highly likely · importance: 3

Generating high-quality anime faces has long been a task neural networks struggled with. The invention of StyleGAN in 2018 effectively solved this task, and I have trained a StyleGAN model which can generate high-quality anime faces at 512px resolution. To show off the recent progress, I made a website for displaying random StyleGAN 2 faces. TWDNE displays a different neural-net-generated face & plot summary every 15s. The site was popular and went viral online, especially in China. The model can also be used interactively for exploration & editing in Artbreeder.

TWDNE faces have been used as screensavers, user avatars, character art for game packs or online games, uploaded to Pixiv, given away in streams, and used in a research paper. TWDNE results also helped inspire Sizigi Studio's online interactive waifu GAN, Waifu Labs, which generates even better anime faces than my StyleGAN results.

In December 2018, Nvidia's StyleGAN (source code/demo video) came out, a stunning followup to their 2017 ProGAN (source/video), which improved the generation of high-resolution (1024px) realistic human faces even further. In my long-running dabbling in generating anime with GANs, ProGAN had by far the best results, but on my 2×1080ti GPUs, reasonable results required >3 weeks, and it was proving difficult to get decent results in acceptable time; StyleGAN excited me because it used a radically different architecture which seemed like it might be able to handle non-photographic images like anime.

The source code & trained models were released 2019-02-04. The wait was agonizing, but I immediately applied it to my faces (based on a corpus I made by processing Danbooru2017), with astonishing results: after 1 day, the faces were superior to ProGAN after 3 weeks—so StyleGAN turned out to not just work well on anime, but it improved more on ProGAN for anime than it did for photographs!

While I was doing this & sharing results on Twitter, other people began setting up websites to show StyleGAN samples from the pretrained models or StyleGANs they'd trained themselves: the quiz “Which Face is Real”, “This Person Does Not Exist”, “This Rental Does Not Exist”, “These Cats Do Not Exist”, “This Car Does Not Exist”, “This Marketing Blog Does Not Exist”, etc.

Since my StyleGAN anime faces were so good, I thought I'd hop on the bandwagon and created, yes, “This Waifu Does Not Exist”: one might say that “waifus” do not exist in general, but these waifus especially do not exist.


“I feel like that rat in the experiment where it can press a button for instant gratification—I can’t stop refreshing”


A screenshot of “This Waifu Does Not Exist” (TWDNE) showing a random StyleGAN-generated anime face and a random GPT-2-117M text sample conditioned on anime keywords/phrases.
“It is so sad to say that this manga has never been seen by any anime fans in the real world and this is an issue that must be addressed. Please make anime movies about me. Please make anime about me. Please make anime about your beautiful cat. Please make anime movies about me. Please make anime about your cute cat. I wish you the best of luck in your life.
Please make anime about me. Please make anime about my cute cute kitten.” —TWDNE #283 (second screenshot of TWDNE)
64 TWDNE face samples selected from social media, in an 8×8 grid.
Face samples selected by users of Artbreeder
Time-lapse video of TWDNE, showing many different pairs of faces/texts.

Obormot made a version of TWDNE–dubbed “These Waifus Do Not Exist”–which displays a constantly-updating grid of anime faces, with an alternative version displaying an infinite moving grid. On a large screen, these are particularly striking, as one Baidu forum poster demonstrated:

Photograph by 大藏游星 of “These Waifus Do Not Exist” displayed maximized on a large computer monitor.

And the sliding-grid is hypnotic (half-hour-long video version):

Video of “These Waifus Do Not Exist” showing an infinite grid scroll of generated anime faces.


“Once opened I was shown a waifu so lovely and pure that I stared in amazement and awe.
Then the page refreshed automatically.
Now I am doomed to forever refresh to get her back, knowing it shall never be.”


TWDNE is a simple static website which has 100,000 random StyleGAN faces and 100,000 random GPT-2-small text snippets; it displays a new image/text pair every 15 seconds.

TWDNE is implemented as a static site serving pre-generated files.

Why static instead of using a GPU server to generate images on the fly? An encoder (allowing for image editing) wasn't available in time for TWDNE, and Artbreeder-style evolutionary exploration hadn't been implemented at all, so there was no point in renting an expensive (>$100/month) GPU server to generate random faces on the fly; it is better to simply generate a large number of random samples and show those. Anyone who wants to look at even more faces can download the model and run it themselves (which would let them control psi, or retrain it on new datasets such as faces of a specific character or attribute they are looking for). This is also far easier to implement.

There are 3 groups of random faces generated with different hyperparameter settings to show the full spectrum of the tradeoff between quality & diversity in StyleGAN. The first 70k text samples are generated using OA's publicly-released model GPT-2-117M, given a random seed 1–70,000 + a long prompt with many anime-related words & phrases I picked arbitrarily while playing with it; the final 30k were generated using a 2-step process, where GPT-2-117M was retrained on an Anime News Network dataset of short plot synopses to emit plot synopses, which are then fed into the original GPT-2-117M as a prompt for it to enlarge on. (Unfortunately, it is not yet possible to make the generated face & text related in any way, but some of the juxtapositions will, by chance, be amusing anyway.) One can see the first set of 40k all displayed in a video by Scavin.

The static site is a single HTML page, ./index.html, plus 100,000 images at the filenames ./example-{0..99999}.jpg and 100,000 text snippets at ./snippet-{0..99999}.txt. The JS selects a random integer 0–99,999, loads the image with that ID in the background, swaps it in, and loads a new text snippet with the same ID; this repeats every 15s. Additional JS adds buttons for forcing an immediate refresh, stopping the refreshing process (perhaps because the user likes a pair & wants to look at it longer or screenshot/excerpt it), and of course loading Google Analytics to keep an eye on traffic.
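The refresh logic amounts to drawing one random ID and deriving both filenames from it, so the displayed face & text always share an ID (the real site does this in JS; a Python restatement for clarity):

```python
import random

N_SAMPLES = 100_000  # 100k pre-generated image/text pairs

def random_pair(rng=random):
    """Pick one random ID and derive both asset filenames from it,
    so the displayed face & text snippet always share the same ID."""
    i = rng.randrange(N_SAMPLES)  # uniform integer 0-99,999
    return f"example-{i}.jpg", f"snippet-{i}.txt"

img, txt = random_pair()
```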

There is no rewind button or history-tracking, as that would cheapen the experience, eliminating the feeling of reading through an anime version of the infinite Library of Babel—if the user is too slow, a face or story will vanish, effectively forever (unless they want to go through them by hand).

A large pile of responsive CSS (written by Obormot) in the HTML page attempts to make TWDNE usable on all devices from a small smartphone screen to a widescreen 4K display, resizing the faces to fit the screen width and putting the image & text side-by-side on sufficiently-wide displays.

As a static site, it can be hosted on Amazon S3 as a bucket of files and cached by CloudFlare (a feature which turned out to be critical when TWDNE went viral). The total TWDNEv1 website size is ~6GB. (As of TWDNEv3, with all versions & text snippets, it weighs 34GB.)

The main upfront cost was ~$50 to prepay 4 years of DNS for thiswaifudoesnotexist.net (infuriatingly, thiswaifudoesnotexist.com turned out to have been squatted just hours before I began working on it, possibly because of my earlier tweet); while CloudFlare is free, it doesn't cache 100%, and the non-CloudFlare-cached S3 bandwidth & hosting cost $98 in February 2019.

“There’s this neural network that generates anime girls…some results look normal and others are terrifying. I painted a collage of my favorite results.” —Dinosarden, 2019-02-22
“Reality can be whatever he wants.” —Venyes [Thanos meme about “These Waifus Do Not Exist”]


  • The StyleGAN model used for the TWDNEv1 samples (294MB, .pkl); alternate download via rsync (available for any Unix; alternative implementations are available for Windows):

    rsync --recursive --times --verbose rsync:// ./

  • all TWDNEv1–3 faces & text snippets (34GB) are available for download via a public rsync mirror:

    rsync --recursive --times --verbose rsync:// ./twdne/

  • all 100,000 text samples (50MB, .txt)


Training StyleGAN

The process of creating the face dataset & training a StyleGAN is too involved to go into here. For a detailed tutorial & useful patches/scripts, see the main article.



“We’re reaching levels of smug never thought possible”


I don’t know what happened here.

Generating the faces is straightforward. The StyleGAN repo provides pretrained_example.py, which downloads one of the Nvidia models, loads it, and generates a single face with the fixed random seed 5; to make this more useful, I simply replace the remote URL with a local model file, change the random seed to None so a different seed is used every time, and loop n times to generate n faces:

<     url = '' # karras2019stylegan-ffhq-1024x1024.pkl
<     with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
<         _G, _D, Gs = pickle.load(f)
>     _G, _D, Gs = pickle.load(open("results/02046-sgan-faces-2gpu/network-snapshot-011809.pkl", "rb"))
<     rnd = np.random.RandomState(5)
<     latents = rnd.randn(1, Gs.input_shape[1])
>     for i in range(60000,70000):
>         rnd = np.random.RandomState(None)

I ran the script on the then-latest face model with psi=0.7 for 60k 512px faces, upscaled each image to 1024px with waifu2x, and used ImageMagick to convert the PNGs to JPGs at quality = 25% to save ~90% space/bandwidth (average image size: 63kb).

“I’m terrified but also intrigued by this cryptid ghost waifu.”


For the next 10,000, because people online were particularly enjoying looking at the weirdest & most bizarre faces (and because the weird samples help rebut the common misconception that StyleGAN merely memorizes), I increased the truncation hyperparameter (line 41) to psi = 1.0. For the final 30k, I took the then-latest model and set psi=0.6.
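For intuition, the truncation trick simply interpolates each sampled latent toward the dataset-average latent by ψ: ψ = 0 collapses everything to the average face (maximum quality, zero diversity), while ψ = 1 leaves samples untruncated (maximum diversity, more artifacts). A minimal pure-Python sketch (1-D toy latents, not the real 512-D w vectors):

```python
def truncate(w, w_avg, psi):
    """StyleGAN 'truncation trick': interpolate a sampled latent w
    toward the average latent w_avg by the factor psi."""
    return [a + psi * (x - a) for x, a in zip(w, w_avg)]

w_avg = [0.0, 0.0]  # toy average latent
w     = [2.0, -4.0]  # toy sampled latent
assert truncate(w, w_avg, 0.0) == w_avg          # fully truncated: average face
assert truncate(w, w_avg, 1.0) == w              # untruncated: raw sample
assert truncate(w, w_avg, 0.5) == [1.0, -2.0]    # halfway between
```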

An overview of faces:

100 random sample images from the StyleGAN anime faces on TWDNE, arranged in a 10×10 grid.


“I can’t stop hitting reload”

Cory Doctorow

The first set of 100k faces was generated using all of Danbooru2017's faces, both SFW & NSFW, the default face-cropping settings, and 2 additional Holo/Asuka datasets I had. This led to some problems, so I created an improved dataset of faces with expanded margins, which I call ‘portraits’, to retrain the anime face StyleGAN on. When done training, the results were considerably better, so I generated another 100k and replaced the old ones.


“Recently, he has begun to craft objects of great power. And many admire him more for this. But within his creations, one finds hints of a twisted humour. Look closely at his miracles and victims begin to emerge. Take an older project of his. A wonder to behold? An amusement but nothing more?…How many more young men sit, hunched and enslaved to this magic? What strange purpose does this serve?”

Ghenlezo, “Beware of the Gwern”

In December 2019, Nvidia released StyleGAN 2. S2 diagnosed the frustrating ‘blob’ artifacts that dogged S1 samples, both face & portrait, as stemming from a fundamental flaw in the S1 architecture: the blobs were the best workaround the neural network could figure out for the flaw. It removed the flaw and made a few other more minor changes (for better latent spaces etc). Aaron Gokaslan trained an S2 model on the portrait dataset, and I used it to generate a fresh batch of 100k faces, with different 𝜓 values (0.6, 0.8, & 1.1) as before:

python3 run_generator.py generate-images --seeds=0-50000 --truncation-psi=0.8
python3 run_generator.py generate-images --seeds=50001-75000 --truncation-psi=0.6
python3 run_generator.py generate-images --seeds=75001-100000 --truncation-psi=1.1
100 random sample images from the StyleGAN 2 anime portrait faces in TWDNEv3, arranged in a 10×10 grid.

I didn’t delete the v1–2 images, but did move them to a subdirectory ($ID), where they are also rsync-able.


“the f— did i just read”


I thought it would be funny to include NN-generated text about anime, the way “This Rental Does Not Exist” included char-RNN-generated text about hotel rooms. But the accompanying GPT-2-117M snippet generation turned out to be a little more tricky than the faces.

GPT-2-117M: prompted plot summaries

“It’s a GAN AI, and it just sits there, endlessly generating fake anime characters along with gibberish anime story-plots.”

Bruce Sterling

The full GPT-2-1.5b model, which generates the best text, has not been released by OpenAI, but they did release a much smaller model which generates decent but not great text samples, and that would just have to do.

I used the gpt-2-PyTorch CLI package to run the downloaded model without messing with the original OA code myself. The repo explains how to download the GPT-2-117M model and pip install the Python dependencies. Hyperparameter-wise, I roughly approximated the OpenAI settings by using top_k=30 (OA used top_k=40, but I didn't see much of a quality difference and it slowed down an already-slow model); the temperature hyperparameter I did not change, but gpt-2-PyTorch seems to use the equivalent of OA's 0.7 temperature.
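For intuition, top-k sampling keeps only the k highest-logit tokens, applies the temperature, renormalizes, and samples from the survivors; a toy Python sketch of the idea (not the gpt-2-PyTorch implementation):

```python
import math
import random

def top_k_sample(logits, k, temperature=0.7, rng=random):
    """Toy top-k sampling: keep the k highest logits, apply temperature,
    softmax over the survivors, and sample one token index."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in top]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # numerically-stable softmax numerators
    r = rng.random() * sum(weights)
    for idx, w in zip(top, weights):
        r -= w
        if r <= 0:
            return idx
    return top[-1]

# With k=2, only the two highest-logit tokens (indices 1 & 3) can ever be drawn.
idx = top_k_sample([0.1, 3.0, -1.0, 2.5, 0.0], k=2)
assert idx in (1, 3)
```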

I found that feeding in a long prompt with many anime-related phrases & words seemed to improve output quality, and I noticed it would easily generate MAL/Wikipedia-like “plot summaries” if I prompted it the right way, so I went with that. (I threw in a parody light novel title to see if GPT-2-117M would catch on, and as for “Raccoon Girl”, that was a little joke about a character I'd seen so many anime memes about in January/February 2019.)

I couldn't figure out gpt-2-PyTorch's minibatch functionality, so I settled for running it the simplest way possible, 1 text sample per invocation, parallelized with parallel as usual.

The Bash script I used to generate all samples:

function gpt2 {
    BIT="$(($@ % 2))"; # use ID/seed modulo 2 to split GPT-2-117M instances evenly across my 2 GPUs
    CUDA_VISIBLE_DEVICES="$BIT" python --seed "$@" --top_k 30 --text "Anime ai nani arigatou gomen \
        sayonara chigau dame Madoka jigoku kami kanojo mahou magical girl youkai 4koma yonkoma Japan Oreimo baka \
        chibi gakuran schoolgirl school uniform club Funimation Gainax Khara Ghibli Hayao Miyazaki Saber Fate \
        Stay Night Pop Team Epic Japanese Aria Escaflowne Kanon Clannad comedy itai manga Shonen Jump pocky \
        tsundere urusai weeaboo yaoi yuri zettai harem senpai otaku waifu weeb fanfiction doujinshi trope \
        Anime News Network Anime Central Touhou kanji kaiju Neon Genesis Evangelion Spice and Wolf Holo Asuka \
        kawaii bishonen bishojo visual novel light novel video game story plot Fruits Basket Toradora Taiga \
        Aisaka tiger Detective Conan Pokemon Osamu Tezuka cat ears neko romantic comedy little sister character \
        plot drama article nekomimi bunny isekai tanuki catgirl moe manga manga manga anime anime anime review \
        plot summary. An exciting cute kawaii new anime series based on a light novel shoujo manga called  \
        \"I Can't Believe My Alien Visitor Is My Little Sister\" or \"MAVISM\", a sequel to \"Raccoon Girl\", \
        is the latest hit. In the first episode of this anime, " \
    >> /media/gwern/Data/thiswaifudoesnotexist/snippet-"$@".txt;
    cat /media/gwern/Data/thiswaifudoesnotexist/snippet-"$@".txt; }
export -f gpt2
seq 0 70000 | parallel --jobs 12 --progress gpt2
# Computers / CPU cores / Max jobs to run
# 12:local / 32 / 12
# Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
# local:1/0/100%/0.0s           harem anime heroine                 "Raccoon Girl II" was shown at Ani \
# me Funimation's Anime Festival (and it's a very popular anime) and was nominated as a top 100 anime  \
# on the Tokyo Otaku Mode list (I'll tell about that soon, I promise) . The  suitable ending for the f \
# irst episode (after a lot of scenes with a lot of bad guys, which is the end of episode three of the \
#  series) was shown as an early preview of the game which                  can also be downloaded fro \
# m the website.
# In my second review, I went by many titles, which you can find all here to download t \
# he episodes as well, but for the sake of brevity, here is an overview of the games I saw (I won't go \
#  into all the other titles because I think there are too many titles you can download that might be  \
# good, I have some I wouldn't name because I don't want to spoil them)           土安黄满收
# The only one t \
# hat I did not watch the whole series for my liking was "AoT, the End". The first episode             \
#   is about   (Katsuko)          x2 and             x2 was my first love story,         x2 is one whe \
# re it all gets a second time. This anime              x2 is not only  a really great  game and it    \
#             x2,          x3.  But for a few moments of enjoyment            x3 was the ending and I  \
# feel that I am not here to argue that "it wasn't even good  but it was awesome  and it  changed my l \
# ife  it was fun  it made my eyes roll  I felt that it  was  great, it  fixed any  that  I"
# 100%|████████████████████████████████████████████████████████████████| 512⁄512 [00:14<00:00, 35.54it/s]
# ...

While running the first pass, some GPT-2-117M instances will fail for reasons like running out of GPU VRAM (each instance takes almost 1GB of VRAM, and my 1080tis only have ~10GB usable VRAM each). These can be fixed by looking for empty text files, extracting their ID/seed, and trying again:

MISSING_TXT=$(find /media/gwern/Data/thiswaifudoesnotexist/ -type f -name "*.txt" -size 0 \
              | cut -d '-' -f 2 | cut -d '.' -f 1 | sort --numeric-sort)
echo "$MISSING_TXT" | parallel --jobs 5 --progress gpt2
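The same zero-byte sweep can be written in Python, for reference (a sketch, not part of the original pipeline; the snippet-naming follows the script above):

```python
import os

def missing_ids(directory):
    """Return the numeric IDs of zero-byte snippet files, i.e. the
    seeds whose GPT-2-117M invocation failed and should be re-run."""
    ids = []
    for name in os.listdir(directory):
        if name.startswith("snippet-") and name.endswith(".txt"):
            path = os.path.join(directory, name)
            if os.path.getsize(path) == 0:  # empty file = failed invocation
                ids.append(int(name[len("snippet-"):-len(".txt")]))
    return sorted(ids)
```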

The final generated GPT-2-117M text samples have a major flaw: the model will frequently end its generation of anime text and switch to another topic, as denoted by the token <|endoftext|>. These other topics would be things like sports news or news articles about Donald Trump, and would ruin the mood if included on TWDNE. The problem is that gpt-2-PyTorch does not monitor the model's output; it just runs the model for an arbitrary number of steps regardless of whether <|endoftext|> has been reached or not. To remove these unwanted topic changes & leave only anime text, I run each text file through sed, which stops processing when the token is reached, thereby deleting the token & everything afterwards:

find /media/gwern/Data/thiswaifudoesnotexist/ -name "snippet-*.txt" -type f \
    -exec sed -i '/<|endoftext|>/{s/<|endoftext|>.*//;q}' {} \;
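For those without sed, the same cleanup is a one-liner in Python (a sketch, not part of the original pipeline): everything from the first <|endoftext|> marker onward is discarded.

```python
END = "<|endoftext|>"

def strip_topic_change(text):
    """Drop the <|endoftext|> marker and everything after it, keeping
    only the on-topic anime text before the model changed subject."""
    return text.split(END, 1)[0]

sample = "A new anime about a magical cat.<|endoftext|>In sports news today..."
assert strip_topic_change(sample) == "A new anime about a magical cat."
```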

This leaves 70k clean anime-themed gibberish text samples which can be loaded by the same randomization process as the faces—hours of fun for the whole family.

GPT-2-anime plot synopses for GPT-2-117M

“okay my brain literally cannot comprehend the fact that these aren’t just real drawings and were made by a computer. what”


I needed to feed GPT-2-117M such a large prompt because it is a general language model, which has learned about anime as merely one of countless topics in its giant corpus. The primary goal of training such language models is to then ‘finetune’ or ‘transfer-learn’ them on narrow task-specific corpuses, or use them to provide ‘informative priors’ for a new model: the global knowledge serves to model all the generic language (if you have a small corpus of medical clinical reports, you don't want to waste precious data learning something like capitalization), gets the model up to speed for the new corpus, and taps into hidden knowledge of the original.

By the same reasoning, training GPT-2-117M on an anime text corpus might lead to better, or at least funnier, generated text. nshepperd wrote some GPT-2 training code for finetuning, and retrained GPT-2-117M for ~3 epochs on Canggih P Wibowo's 2016 Anime News Network scrape, using the title & plot synopsis fields (TITLE|PLOT SYNOPSIS). (I have also used the finetuning code on Project Gutenberg poetry, using the poetry as a running example.)
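The finetuning corpus format was simply one TITLE|PLOT SYNOPSIS record per line; a sketch of assembling such a file from (title, synopsis) pairs (the pair structure is my assumption, not the scrape's actual schema):

```python
def format_records(entries):
    """Join (title, synopsis) pairs into the TITLE|PLOT-SYNOPSIS
    line format used for finetuning, skipping incomplete entries."""
    lines = []
    for title, synopsis in entries:
        if title and synopsis:  # drop entries missing either field
            lines.append(f"{title}|{synopsis}")
    return "\n".join(lines)

corpus = format_records([
    ("Cowboy Bebop (TV)", "A bounty hunter crew travels the solar system."),
    ("Untitled", ""),  # dropped: no synopsis
])
assert corpus == "Cowboy Bebop (TV)|A bounty hunter crew travels the solar system."
```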

GPT-2-117M appeared to start overfitting quickly, so training was stopped, yielding a final trained checkpoint we call “GPT-2-anime” (441MB). To use it, one can simply unpack it into the GPT-2-117M model directory, overwriting the files, and then run the OA code as usual.

Sample unconditional output:

======================================== SAMPLE 1
 a boy who works in a magic shop. Her magic power is not despite the power
these animals and the atmosphere grows stronger every night. She can't use
magic at all so she can get her witch powers back.
Shirobako: My Island (TV)|"Abe Eigaku is a rookie high school student who is
about to have a battle dyed by the colors of samurai and a sordid fangirl who
only wished to destroy the hell that befall him. One day, he runs into a
samurai and strikes a deal. The deal: If he can get to, then distract the
soaked Edo sword will be able to change his form. The deal was an attack, and
the young boy is left with a injuries and he is left with bitter injuries in
the hands of the Kumano family. From the experience that inherits the "gods'
ability" in him, he comes into an outdoor fighting ring with the Gices. There
he meets Setsu, a former show comor who has changed his life---like his father,
himself as a ""demon"" of the Gices, and a member of the Gices. At the family's
house, he makes his ultimate weapon, the three-headed demon Gekkou, Kabuto.
Yukino comes to his aid, and their rivalry soon evolves as they head into more
serious and skillful monsters. With his ability, Yukino faces others both
fierce and charming. The series follows their everyday lives, together with
their personal ones and each other, as they go through their daily lives and
try to grow up as close to each other as possible to the one they love.
Berserk: The Golden Age Arc (TV)|Guts is a young man who has been accepted into
a powerful industrial city by a mysterious woman named Null Mother, and is then
summoned to the city of Midland. The first he ever summoned to his world was
the Ghost Ship, a brutal battle that has been held on for thousands of years.
Now he is on the eve of the Bloody War, when the inhabitants of the little
world try to destroy the automated sentry. However, Guts cannot stop them and
blithelyave their fate as the inhabitants of Midland are devourant of the very
world they reside in.
Lupin III: The Legend of the Gold of Babylon (movie)|Deep beneath New York city
are buried tablets that tell the tale of Babylon's gold that was lost during
Babylon's destruction. Lupin is interested in finding this gold, but will have
to deal with two mafia families and Zenigata during his quest in unsolving the
mystery of the tablets. During Lupin's journey he encounters an old woman who
has a connection with this treasure.
Fortune Arterial: Akai Yakusoku (TV)|Kohei Hasekura has lived a live of
transferring schools for boys all his life. He's been the site of a recently
bankrupt well known as Moon River for years and has been looking forward to
getting it back. At his new school, however, he sees the school's most wanted
fight and gets a duel to choose the strongest in the class. The battle is about
to be fought by a monster called Hildegarn either. Hasekura's friend Orca
Buchizouji gets dragged into the fight and loses a majority of key to the duel.
Hasekura fights alone against the bullies and punks from the previous season,
and when the duel is over everyone comes to an end.
Gunslinger Girl: Il Teatrino (TV)|When the Social Welfare Agency investigates
the disappearance of a operative, their inquiry leads them right into the lair
of their rival, the Five Republics. The assassin Triela infiltrates the hostile
organization, but her search is cut short when she finds herself staring down
at the barrel of a gun...
Yuruyuri Nachu Yachumi! (TV)|
Magical Sisters Yoyo & Nene (TV)|Yoyo and Nene are witches living in the
Magical Kingdom who specialize in curse and decusing. They are negotiating with
a woman who wants to find her sister, a witch that disappeared twelve years
ago, when a monstrous tree appears in front of Yoyo and Nene´s house. Embedded
within the tree are unfamiliar buildings which prompt Yoyo to explore these
strange constructions. During her scout, Yoyo unintentionally ends up thrown
into modern day Japan. He is taken away by a doll-cat who falls from a magical
tree on his way back. He is taken back to Japan by Gogyou, a girl who is their
own, who falls from a magical tree. He and his friend Nanami lands on a magical
journey to recover the lost magical powers of the magical stone.
Sakura Taisen: Ecole de Paris (TV)|"Anastasia was supposed to be invaded by
various people. She is so damaged that she can't see anything. But her friendces

There are two immediately visible problems with the GPT-2-anime output.

  1. Too Short: prompted/conditional output from GPT-2-anime looks… much the same as the unconditional output.

    Apparently what happened was that during the finetuning, GPT-2-117M learned that the title+synopsis format was invariable and internalized it to such an extent that it would only generate title+short-synopsis followed by new pairs. If a prompt is provided, it might influence the first generated pair, but then GPT-2-anime knows that the prompt is guaranteed to be near-irrelevant to the next one (aside from perhaps franchise entries alphabetized together, like sequels) and can be ignored; and since each title+plot-synopsis is only a few sentences long, GPT-2-anime will transition quickly to the next one, which will be effectively unconditional.

    This is annoying but perhaps not that big of a problem. One could surely fix it by using a dataset in which the title/synopsis is followed by a much longer plot description (perhaps by merging this ANN scrape with Wikipedia articles). And for generating random text snippets for TWDNE, we don't need the control, since the unconditional samples are all on-topic about anime now (which was part of the point of the finetuning).

  2. Low Quality: some outputs are thoroughly unsatisfactory on their own merits.

    The first entry lacks a series title and any introduction of the premise. One entry has a title—but no plot. And while most are shorter than I would like, the last one is simply far too short even for a plot synopsis.

    This is a problem. The outputs are moderately interesting (eg “Shirobako: My Island”) and provide good plot synopses which are a nice starting point, but as-is, they are unacceptably low-quality for TWDNE purposes, since they are so much shorter & less interesting on average than the long prompted GPT-2-117M examples were.

However, the 2 problems suggest their own solution: if the GPT-2-anime plot synopses are good premises but GPT-2-anime refuses to continue them with plot/dialogue, while GPT-2-117M generates good long plot/dialogue continuations but only if given a good prompt to force it into anime mode with useful keywords, why not combine them? Generate a wacky plot synopsis with GPT-2-anime, and then feed it into GPT-2-117M—to get the best of both worlds!

As before, sed will take care of the <|endoftext|> markers (& if one doesn't like the synopses' “Source:” end-markers, they can be removed with sed -e 's/ (Source: .*//'), the titles can be italicized, and we can post-process further for quality: if either the synopsis or the plot summary is too short, drop that text snippet entirely. This should yield long, consistent, high-quality samples of anime-only text of the form title/synopsis/coherent plot-summary (or dialogue or article about said title+synopsis).

After generating 12MB of GPT-2-anime plot synopses, yielding ~35k synopses (assuming some percentage will be thrown out for being too short), I fed them into this script for GPT-2-117M, similar to before, to generate text-snippets #70,000–100,000:

# assumes TARGET=<output directory> & I=70000 have been set beforehand
fgrep '|' /home/gwern/src/gpt-2/samples | sed -e 's/^\(.*\)|/_\1_: /' | \
while IFS= read -r PROMPT; do
    if [ ${#PROMPT} -gt 150 ]; then
        PLOT="$(CUDA_VISIBLE_DEVICES=1 python --seed 5 --top_k 40 \
                --text "Japanese anime manga light novel. $PROMPT. Plot \
                        Summary. In the first episode of this anime " | \
                sed -e '/<|endoftext|>/{s/<|endoftext|>.*//;q}')"

        if [ ${#PLOT} -gt 250 ]; then
            echo "$PROMPT In the first episode of this anime $PLOT" \
              >> "$TARGET/snippet-$I.txt"

            echo "$TARGET/snippet-$I.txt"
            cat "$TARGET/snippet-$I.txt"
            I=$((I+1))
        fi
    fi
done
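The quality gate above reduces to two character-count thresholds; restated in Python (thresholds from the Bash script, function name mine):

```python
MIN_SYNOPSIS = 150  # minimum prompt (title+synopsis) length, in characters
MIN_PLOT     = 250  # minimum generated plot-summary length, in characters

def keep_snippet(synopsis, plot):
    """Drop a candidate snippet if either stage's text is too short."""
    return len(synopsis) > MIN_SYNOPSIS and len(plot) > MIN_PLOT

assert not keep_snippet("Short title", "x" * 300)     # synopsis too short
assert not keep_snippet("s" * 200, "too short plot")  # plot too short
assert keep_snippet("s" * 200, "p" * 300)             # both long enough: kept
```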


Sample combined output:

_Mayonaka wa Junketsu_: His older brother has been hospitalized for about 5
weeks for about 6 weeks. Because of his unaccidental death, Ryouta's parents
send him to live with his uncle, Yuuichirou. To his surprise, Yuuichirou is
actually the son of his father's lover! Will Ryouta get to know Yuuichirou?
(from B-U) In the first episode of this anime 【The Second Episode】, the
story takes place in Koegi-ji town. After Yuuichirou and his family leave to
live with his uncle, they encounter "Ryouta and the three of them". They learn
about their real relationship, but they're not going to tell this kind of story
for fear of being banned from the show. The characters' father, Kazui, is the
first name his family does not use. The story begins with the trio and Kazui
being together, while their parents are fighting for the survival of their
families, Yuuichirou and Ryouta in the middle of a battle. 【In the Next
Chapter】 I also do a series of interviews with people from the anime.
---------------------------The story is about three girls in the house.
-------------------From that moment on, the plot follows them as their story
goes on. They are all in shock about what has happened, but they believe that
Yuuichirou is the real Ryouta. From that moment on, they feel they have a
chance to change their path for the better after the events at the hospital
which led to Yuuichirou's death to come home and they try to gain friends by
helping Yuuichirou. ---------------There are many other characters besides the
three in the story, so what is the real one? The main character is called
Yuuichirou Kazui ( 招仮猿 名子 ) 【First Character】 【Second
Character】 【Third Character】 ---------------This anime tells a story very
similar to the show, just with characters that make you smile. If you are in a
certain situation and see the person that is your favorite, there is no
problem. However, you need to be mindful of the following things to make sure
you know who you are dealing with. The story is very short, but there are many
things that need to be done in order to make you smile. There are many times
that I would write a scene that just needs to end, the most important part, of
the scene, or I would have to make the scene not end at all. As long as it
doesn't end with a big scene that was in a certain moment, you wouldn't be able
to use your eyes to make out that someone is laughing, or a child being played
or something. Just be mindful of how you are dealing with those moments and
keep yourself a little calm even if they are actually happening

_Shijin no Koibito_: Collection of short stories: (1) Shibatte (2) Love Me Tell
Me (3) I Love You (4) I've Fine (5) Meido (6) Take Of The Class (7) Liar (8) Take Of
The Precious Parade In the first episode of this anime iku wa Hana
kawakukushigai wa (Love me tell me a secret), a student named Ritoye has to
learn to read all of the kanji correctly. While they are doing this, they'll
notice something odd, they can't talk and the class will change. They'll also
notice some other things that happened, and Ritoye will get annoyed with the
school and will leave her class. Once they're through with her class, she'll
explain the problems and the solutions. She'll begin by looking back through
their textbooks, searching for the kanji that says 'I'm on the line' and 'I'm
here to talk a little bit about the story'. At some point in the story Ritoye
finds out that the class used to be 'on the line' with 'I'm just being a
teacher', so if someone is on the line they can start a lesson by asking the
name of that person you're teaching and giving a name of the class 'I-Mitsu'
which you have to pick from a few names on the internet. While the topic gets a
little heated, there'll be a brief period of humor where the class becomes even
funnier. One character shows off some random drawings while Ritoye is sitting
by her desk, so maybe it's not a complete mess of drawings, but I was sure it
would have been better if it was. The manga was originally published in
February 2014. I'll update this article if anything changes, then give up.
------------- RITOYE: A school principal

~ ~ ~

My parents will be around a week or two after school. During that time, the
school's president will take care of the school and it will be my job to pick
up students and take care of the textbooks.

We don't even need to call her. Her name is Ritoye.

She has all the classes they need, but she still says she doesn't like 'those

I love you everyone. I really do.'

I'm here, to get this in front of class, as I want to make sure she knows I'm

That's what the head teacher says.

I'll just take everything from you to make sure she knows that I'm serious.

"You, you're the best teacher in class, aren't you?"

It doesn't matter how difficult, we'll give you everything. We'll

_Hitodenashi no Hirusagari_: A mysterious disease. A disease. The disease has
drastically differ from the usual suspects of the type of disease, but it is
basically living in a hospital as a guinea pig for a few weeks. A short story
of a person's persistent determination to save a person's life, and the story
of a person's struggles to find it in this "comedy" story. (Source: MU) In the
first episode of this anime 『Hitodenashi no Hirusagari』 (1962) the
protagonist, Hiroki, begins his life. In the next episode he has to learn that
he cannot leave Kyoto. From this he starts to find out more about himself and
the world, as well as a great deal about the life he has created for himself.
He learns a great deal about himself, as well as about the world. It goes
through the story like the story of a kid having a terrible experience...or the
story of young people taking it upon themselves. What Hiroki learns, however,
is quite difficult to grasp because of the many factors that are involved.

This is the first light novel I read that started off as a light novel, in the
spirit of the first book that came out in English.

When I came through the last arc of this short anime, I was very disappointed.
I was expecting much much more of a series of stories and anime but instead, I
found that it felt more about the plot. As the first episode started, in a
similar manner to the last arc, each arc is different from the previous arc.

From what I can gather, many plot points were missed when read through and how
often it was made a point of focus to tell a story instead of the original.

At the end of a story, you get to make your own sense of it based on how you
read it.

One time there was two main things left out as the main plot that needed me to
know a lot more.

First, I couldn't really get that the stories you saw in the first arcs were
actually the people that you could actually talk to.

This left me feeling more like I had lost some of my sense of immersion in a
story and just didn't really find it there. For example, I started to feel that
the main plot was very different from what you would see in the first arcs. The
two main themes we saw in the first arcs were how to live and how to do a life.

I found the story of how the life of a person is different from one that you
might see in the books.

The story of a person's living and living alone is also different from the
story of the world that you might see in a short play.

And to answer the question that many people have asked me over the last six
months, and these comments, but in order to

_The King Of Debt_: Souta is a rich guy, who is unlucky in love, and punishes
by the rich son of a rich Chinese mafia as he is tossed about debt. But is he
really bad at such a guy? (from B-U) In the first episode of this anime
何月处, Souta is forced to marry a rich kid to buy him a piece of his ass. A
guy who doesn't know how to make friends with strangers, only has a friend for
the rich guy's sake. Souta does this by being the only person in the house that
doesn't have a job but a car, and even a friend of hers can't really call her
sister 'fae'. He also has an idol he can sing on stage, and he is forced to
fight for her (a guy named Katsu) against Katsu himself in the episode entitled
Souta Is Fae. In a final episode, he is forced to run the family business for
$100,000. What's the point in doing it? The kids only get to run the business,
but when they pay out the money they want in their inheritance, Souta starts to
run the shop as well. It's very simple for Souta and the guy he's running his
business for. After the episode has been rated H, the new episode has been
rated H...

favorite favorite favorite favorite ( 5 reviews )

Topics: Drama, TV, Romance, Romance

Community Audio 91 91 Seiken Hamamoto's Hacronyms 2 1 of 1 View 164 of 190

Topics: Drama, TV, Fantasy, Fantasy

Community Audio 88 88 The Man From The Outer Heaven 4 3 of 3 View 164 of 189

Topic: Drama, OVA

Community Audio 87 87 Seiken Hamamoto's Hacronyms 9 8 of 8 View 164 of 190

Topic: Drama, OVA

Community Audio 85 85 The Man From The Outer Heaven 3 2 of 2 View 164 of 185

Topic: Drama, OVA

Community Audio 84 84 Seiken Hamamoto's Hacronyms 2 2 of 2 View 164 of 189

Topic: Drama, OVA

Community Audio 83 83 Seiken Hamamoto's Hacronyms 3 3 of 3 View 164 of 190

Topic: OVA, DVD, Blu-ray, DVD-R

Topics: Drama, TV

Community Audio 82 82 Seiken Hamamoto's Hacronyms 3 3 of 3 View 164 of 190

Topic: Drama, OVA

Community Audio 81 81 Seiken Hamamoto's Hacronyms 2 2 of 2 View 164 of 188

Topic: OVA, DVD,

_Yuuhi Zukan_: A story about a student council president and his student
council president, and how they are going to boot on the plan of how to change
the financial system. Unfortunately, this plan results in both failure and
budget failure. (from B-U) In the first episode of this anime 一日本の橙
and other characters form a conspiracy to gain access to his store, after being
cut in half, they make their way to the store to recruit them using the powers
of a demon. The demon has three characteristics: a strong, sharp eye, a strong
hand; a black eye, with white and black pupils with a big black mark on, and a
silver eye, with a black eye mark. The Demon's ability to create spells is
"demonic mana" that can be activated as if it were "guru aura". After a brief
delay, however, the Demon enters a black space that has a black face, and then
the Demon is attacked by a small black hole in his chest. The Demon attacks the
small black hole and when the black hole dies, the Demon's power to create
spells is restored. The demon returns to his house where is an old man called
Tsubayasu who says that he is going to create a dragon with great power. He has
been searching for a good magic and after his work with Tsubayasu and his
students for about half a decade now, he sees the dragon coming on a distant
night. Once this happens, his plans fall apart. When he discovers it was some
magic wizard he has been using, he leaves Tsubayasu thinking that he can not
use the dragon because of the power-up and his plans have come to an end. He
wakes up his parents and tells them that Tsubayasu had found money and a dragon
he created. (from B-U). 一日京初日 歋未夢の橙面者 (from B-U) A
young girl named Fushikata is assigned to work for the student council
President in the store. Fushikata is an ordinary girl who works as a maid, yet
she is also an extremely dangerous person. She lives with her mother and sister
as the main characters of the series. One day Fushikata's mother goes missing.
When they return to the store, a group of students, with a group of girls,
enter. They are given the key to the store, but after they get home Fushikata
is killed by the two of them. The group discovers that the woman they have lost
is Tsubayasu's old teacher, Hidetoshi. At this time Fushikata starts to think
about how she should treat him at a later date but that she is too busy

The corresponding 30k faces are generated & upscaled as before, index.html updated to randomize over 0–100,000, and so on.


OpenAI released GPT-3 in June 2020 as a SaaS, with the model available only through a closed beta API. I was given access, and ran extensive creative-writing experiments using the web interface. After satisfying myself with that, I thought to update TWDNE with GPT-3 anime plot summaries.

Because GPT-3 is so much more powerful than any of the GPT-2 models, the two-phase process and extensive keywords can be omitted entirely in favor of just a single short prompt; GPT-3 will get the idea immediately. After some experimentation (prompt programming remains a black art), I found that I could get a novel anime title & plot summary by framing it as a review of an anime from the future (eg 2022). A simple prompt that worked nicely was:

2022 Anime Fall Season Previews and Reviews

Previews of the latest and hottest upcoming Japanese anime this fall season!

Below, a review of the themes and plot of the first episode of the widely-anticipated 2022 fall original new 1-season anime


The API itself is straightforward to use, and can be interacted with via their Python package or just curl. You have to provide an API key, but otherwise, one just needs the temperature and top-p (for nucleus sampling) and a prompt, and one gets back the text completion. Since the prompt is so short, we don’t need to worry about issues like our text tokenizing into unexpected BPEs (which was a major issue with literary uses) or hitting the context window (doubled to 2048 BPEs, but still all too painfully narrow). The returned string is JSON like this:

  "id": "cmpl-zFdIa6r5oO6AV4iogorDoAXh",
  "object": "text_completion",
  "created": 1593397747,
  "model": "davinci:2020-05-03",
  "choices": [
      "text": "Yutori-chan which is produced by Sunrise:\nThe Kind-Hearted T-back Hurdler \ Yutori-chan\nAt \
      long last we welcome the challenging and heartwarming anime Yutori-chan produced by",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"

GPT-3 Generation

So, to generate new anime plot summaries, one can just loop through with curl, extract the text field with jq, do a little reformatting, and that’s it:

for i in {0..100000}; do
    echo -n "Review of the themes and plot of the first episode of the new anime " > snippet-$i.txt
    # NOTE: insert your own API token below
    curl --silent '' \
        -H 'Content-Type: application/json' \
        -H 'Authorization: XYZ' \
        -d '{"temperature": 0.95, "top_p": 0.98,
             "prompt": "2022 Anime Fall Season Previews and Reviews\nPreviews of the latest and hottest upcoming Japanese anime this fall season!\nBelow, a review of the themes and plot of the first episode of the widely-anticipated 2022 fall original new 1-season anime \"",
             "max_tokens": 700 }' |
        ## Select just the text completion:
        jq '.choices[0].text' |
        ## unescape quotes:
        sed -e 's/\\\"/"/g' |
        tee --append snippet-$i.txt
    echo -n "…" >> snippet-$i.txt
    sleep 3s
done

The hyperparameters are vanilla GPT-3 settings:

  1. best-of (BO = 1): using Meena-style best-of ranking like BO = 20 is not worth the expense here as we are not asking tricky questions or assigning tasks with a ‘right answer’; for creative writing like anime reviews, regular sampling is fine
  2. temperature: varying temperatures from 0.80 to 0.95 all work fine, so this is not a task that is temperature-sensitive; apparently there are always reasonable completions even if a lot of low-probability tokens are selected
  3. top-p: I generally use nucleus sampling to cut off the gibberish tail of the bottom 2–5%; here, it didn’t make much difference that I could tell, so I left it at my usual 2% (0.98).
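Nucleus (top-p) sampling itself is simple: sort the token probabilities, keep the smallest high-probability prefix whose cumulative probability reaches p, and renormalize before sampling. A minimal sketch over a toy next-token distribution (hypothetical numbers, not the actual GPT-3 implementation):

```python
def top_p_filter(probs: dict, p: float) -> dict:
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability reaches p, then renormalize; everything
    in the low-probability 'gibberish tail' is cut off entirely."""
    total, kept = 0.0, {}
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = prob
        total += prob
        if total >= p:  # the token that crosses p is the last one kept
            break
    return {t: pr / total for t, pr in kept.items()}

# Toy distribution, for illustration only:
toy = {"the": 0.50, "a": 0.30, "anime": 0.15, "zxq": 0.05}
filtered = top_p_filter(toy, p=0.90)
assert "zxq" not in filtered                      # bottom of the tail is dropped
assert abs(sum(filtered.values()) - 1.0) < 1e-9  # renormalized
```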

The use of future years in the prompt doesn’t always succeed in generating novel anime; here is a failure case, a perfectly sensible, coherent, plausibly-human-written anime review which is, however, too close to reviewing the real anime _DARLING in the FRANXX_:

Review of the themes and plot of the first episode of the new anime “DARLING in the FRANKXX”, scheduled to premiere on October 7th, 2017.

Visuals: 4.0/5.0 (Average)

The visuals of this new anime are nothing particularly spectacular, but still pretty good. Character designs are fairly unique, but there’s not a whole lot of colors for this episode. It does set up for a darker story and does everything pretty effectively.

Visuals are rarely an indicator of the anime’s quality, but it gives you a feel for what it looks like overall.

Sound: 3.5/5.0

This opening song was pretty cool, and the seiyuus did a great job in making their characters come to life.

Music wasn’t spectacular, but the voice actors did a great job in bringing their characters to life.

Story: 3.5/5.0

Surprisingly this seems like it’s going to be a pretty intense and gloomy romance story about having kids and falling in love. I’ll be definitely checking this out when it comes out.

The plot summary for “DARLING in the FRANKXX” is basically the setup of this episode.

Character: 3.5/5.0

This character I didn’t really like when she first started showing up, but by the end of the episode I was pretty much warming up to her. The MC doesn’t really do a whole lot in this episode, but he seems pretty nice. A lot of development is probably gonna go on with the other characters, so I’ll be watching out for that.

Closing Thoughts: (note: this was written before this anime came out)

This anime did end up being quite decent, and my complaints about it mostly concerned the time crunch, so it’s not the anime’s fault at all. I had only two other things that would have improved this anime even more: One would be to include more gags involving the flowers, which were actually pretty cute. They were a funny way to tell us about who the characters were and their dynamics with one another. The other would be to not rush things so much. Anyways this was a pretty great anime overall. I hope they do a second season. A quick review about this anime: (written after it was aired) When you first hear “DARLING in the FRANKXX”, you might not think that a 1920’s themed anime with an odd premise could be done, but believe me when I tell you it can, and it was done really well. A review of the first episode can be found on my previous post. They introduced the key elements to the plot well, and had a good plot outline. The beginning setup took a little getting used to, but it became quite cute by the end. Themes of the episode were mostly romance, and they did the themes well. It uses its music and visuals quite effectively. Overall the story was quite unique and well done. Other notable things would include a relatively high amount of diversity in characters, such as having a “henenak” as well as having a relatively equal amount of female to male characters. The artstyle for this anime was solid, although the art team wasn’t really given enough time to come up with a great budget for it. Therefore, the few colors it did have were used to the best of their abilities in drawing the characters. Overall, it was a pretty fun anime and I hope

But they are still much better than the GPT-2 ones!

So I generated 100,002 final snippets from 2020-07-12–2020-09-05, sometimes tweaking the temperature or prompt for variety’s sake, and resampling any completions below ~550 characters. (Total: 330,800 lines; 55,301,038 words; 328,099,060 bytes.)
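The resampling criterion is just a character count; a sketch of the filter, assuming snippets are saved one per file as in the generation loop (the filename pattern is illustrative):

```python
from pathlib import Path

MIN_LENGTH = 550  # completions shorter than ~550 characters get resampled

def needs_resample(path: Path) -> bool:
    """A completion this short is probably truncated or degenerate,
    so flag it for regeneration."""
    return len(path.read_text(encoding="utf-8")) < MIN_LENGTH

# eg: [p for p in Path(".").glob("snippet-*.txt") if needs_resample(p)]
```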

GPT-3 Download

Snippets can be seen at TWDNE, of course, but they are also available as tarballs; download:

  • local mirror (77MB)

  • Mega mirror

  • Rsync mirror:

    rsync --verbose --recursive rsync:// ./


“Anon, please tell me the artist of that picture. The way they draw that hair is amazing.”


“Someone know who she is? (for a friend)”

“Probably from some weird visual novel.”


I set up the first version of TWDNE with faces Tuesday 2019-02-19; overnight, it went viral after being posted to a Chinese FLOSS/technology website, receiving hundreds of thousands of unique visitors over the next few days (>700,000 in total), with surprisingly long average sessions (I guess looking at all the possible faces is hypnotic, or some people were using it as a screensaver):

Google Analytics traffic statistics for TWDNE, 2019-02-19–2019-02-23

This traffic surge was accompanied by a bandwidth surge (>3.5TB):

CloudFlare cache bandwidth usage for TWDNE, 2019-02-19–2019-02-23, and 2019-02-22–2019-02-23

By 2019-04-03, there were >1 million unique visitors. Traffic remained steady at several thousand visitors a day for the next few months (~900GB/month), producing Amazon S3 bills of ~$90/month, so in July 2019 I moved hosting to an nginx Hetzner server. By 2019-07-20, TWDNE traffic hit 1,161,978 unique users in 1,384,602 sessions; because of the JS refresh, ‘pageviews’ (12,102,431) is not particularly meaningful, but we can infer from the remarkable 1m:48s length of the average session that TWDNE users are looking at >7 images per session.
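The ">7 images per session" figure follows directly from the 15-second refresh interval; a quick check of the arithmetic:

```python
average_session_seconds = 1 * 60 + 48  # average session length: 1m:48s
refresh_interval_seconds = 15          # TWDNE swaps in a new face every 15s

images_per_session = average_session_seconds / refresh_interval_seconds
print(images_per_session)  # → 7.2
```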

As jokes go, this was a good one. Once whole-image generation is solved, perhaps I can make a “This Booru Does Not Exist” website to show off samples from that!

  1. One benefit is that by loading the target files dynamically instead of providing a browsable directory or something more machine-readable, this hides the images/snippets from search engines, which is a good thing since it avoids exposing meaningless text to search engine users.↩︎

  2. Originally, an HTML meta-refresh was used for simplicity, but a user pointed out that this led to a less-pleasing experience: when the whole page reloaded, the new image/text would visibly download & display, while there was no reason the next image couldn’t be downloaded in the background and the visible image changed atomically.↩︎

  3. One dispiriting anecdote about mobile users: the initial version of TWDNE was essentially broken on smartphones, as I forgot to add any CSS which would scale the faces down to fit on the screen comfortably. About 460k unique users, more than half on mobile, visited before I noticed the problem on my own. Apparently mobile users, and especially those forced to use Safari on iOS (notorious for supporting few features & standards), expect websites to be broken & don’t bother complaining.↩︎

  4. This takes several hours because the script is not optimized at all and does one face at a time instead of running minibatches of ~20 faces, which is fine for a one-off dump; but if one were doing many such projects or larger dumps, it would be worth taking the time to fix.↩︎