Making Anime With BigGAN

Experiments in using BigGAN to generate anime faces and whole anime images; semi-successful.
anime, NN, Python, technology, tutorial
2019-02-04–2021-01-29 in progress certainty: highly likely importance: 5


Following my StyleGAN anime-face experiments, I explore BigGAN, another recent GAN with SOTA results on one of the most complex image domains tackled by GANs so far (ImageNet). BigGAN’s capabilities come at a steep compute cost, however.

Using the unofficial BigGAN-PyTorch reimplementation, I experimented in 2019 with 128px ImageNet transfer learning (successful) with ~6 GPU-days, and from-scratch 256px anime portraits of 1000 characters on an 8×2080ti machine for a month (mixed results). My BigGAN results are good but compromised by the compute expense & practical problems with the released BigGAN code base. While BigGAN is not yet superior to StyleGAN for many purposes, BigGAN-like approaches may be necessary to scale to whole anime images.

For followup experiments, Shawn Presser, I, and others (collectively, “Tensorfork”) have used Tensorflow Research Cloud TPU credits & the compare_gan BigGAN reimplementation. Running this at scale on the full Danbooru2019 dataset in May 2020, we reached the best anime GAN results to date (later exceeded by subsequent models).

StyleGAN approaches alone have proven difficult and possibly inadequate for the ultimate goal, which motivated my evaluation of NNs which have demonstrated the ability to model much harder datasets like ImageNet at large pixel-sizes. The primary rival GAN to StyleGAN for large-scale image synthesis as of mid-2019 is BigGAN (Brock et al 2018; official BigGAN-PyTorch implementation & models).

BigGAN successfully trains on up to 512px images from ImageNet, from all 1000 categories (conditioned on category), with near-photorealistic results on the best-represented categories (dogs), and apparently can even handle the far larger internal Google JFT dataset. In contrast, StyleGAN, while far less computationally demanding, shows poorer results on more complex categories (Karras et al 2018’s LSUN CATS StyleGAN; our whole-Danbooru2018 pilots) and has not been demonstrated to scale to ImageNet, much less beyond.

BigGAN Advantages

BigGAN does this by combining a few improvements on standard DCGANs (most of which are not used in StyleGAN): much larger minibatches & wider models, self-attention, spectral normalization on both G & D, a hinge loss, class-conditional BatchNorm with shared class embeddings & hierarchical latents, and orthogonal regularization of G (which enables the truncation trick at sampling time).

Brock et al 2018: BigGAN-deep architecture (Figure 16, Table 5)

BigGAN Disadvantages

The downside is that, as the name indicates, BigGAN is both a big model and requires big compute (particularly, big minibatches)—somewhere around $20,000, we estimate, based on public TPU pricing.

This presents a dilemma: larger-scale portrait modeling or whole-anime-image modeling may be beyond StyleGAN’s current capabilities; but while BigGAN may be able to handle those tasks, we can’t afford to train it!

Must it cost that much? Probably not. In particular, BigGAN’s use of a fixed large minibatch throughout training is probably inefficient: it is highly unlikely that the benefits of an n = 2048 minibatch are necessary at the beginning of training, when the Generator is generating static which looks nothing at all like real data, and at the end of training, that may still be too small a minibatch (Brock et al 2018 note that the benefits of larger minibatches had not saturated at n = 2048, but time/compute was not available to test still larger minibatches, which is consistent with the observation that the harder & more RL-like a problem, the larger the minibatch it needs). Typically, minibatches and/or learning rates are scheduled: imprecise gradients are acceptable early on, while as the model approaches perfection, more exact gradients are necessary. So it should be possible to start out with minibatches a tiny fraction of the size and gradually scale them up during training, saving an enormous amount of compute compared to BigGAN’s reported numbers. The gradient noise scale could possibly be used to automatically set the total minibatch size, although I didn’t find any examples of anyone using it in PyTorch this way. And using TPU pods provides large amounts of VRAM, but is not necessarily the cheapest form of compute.
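To make the idea concrete, here is a hypothetical sketch (nothing like this exists in the BigGAN codebases) of such a minibatch ramp, expressed as the number of gradient accumulations over a fixed per-GPU batch, so that the effective minibatch grows linearly to n = 2048 over the first phase of training:

def num_accumulations(iteration, batch_size=32, start=256, end=2048, ramp_iters=100000):
    # effective minibatch = batch_size * accumulations, ramped linearly
    # from `start` to `end` over the first `ramp_iters` iterations
    frac = min(iteration / ramp_iters, 1.0)
    effective = int(start + frac * (end - start))
    return max(1, effective // batch_size)

# eg iteration 0 -> 8 accumulations (n = 256); iteration >= 100,000 -> 64 (n = 2048)

(A fancier version would set the effective minibatch from a running estimate of the gradient noise scale rather than the iteration count.)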

BigGAN Transfer Learning

Another optimization is to exploit transfer learning from the released models, and reuse the enormous amount of compute invested in them. The practical details there are fiddly. The original BigGAN 2018 release included the 128px/256px/512px Generator Tensorflow models but not their Discriminators, nor a training codebase; the compare_gan Tensorflow codebase released in early 2019 includes an independent implementation of BigGAN that can potentially train them, and I believe that the Generator may still be usable for transfer learning on its own. If not—given the arguments that Discriminators simply memorize data and do not learn much beyond that—the Discriminators can be trained from scratch by simply freezing a G while training its D on G outputs for as long as necessary. The 2019 PyTorch release includes a different model, a full 128px model with G/D (at 2 points in its training), and code to convert the original Tensorflow models into PyTorch format; the catch there is that the pretrained model must be loaded into exactly the same architecture, and while the PyTorch codebase defines the architecture for 32/64/128/256px BigGANs, it does not (as of 2019-06-04) define the architecture for a 512px BigGAN or BigGAN-deep (I tried but couldn’t get it quite right). It would also be possible to do model surgery and promote the 128px model to a 512px model, since the two upscaling blocks (128px→256px and 256px→512px) should be easy to learn (similar to my use of waifu2x to fake a 1024px StyleGAN anime face model). Anyway, the upshot is that one can only use the 128px/256px pretrained models; the 512px will become possible with a small update to the PyTorch codebase.
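As a rough illustration of the “freeze G, train a fresh D on its outputs” idea (a sketch I did not actually run; G, D, real_loader, and sample_z_y are assumed to be supplied by whatever BigGAN implementation is in use, and the exact G/D calling conventions vary by codebase):

import torch
import torch.nn.functional as F

for p in G.parameters():                     # freeze the pretrained Generator
    p.requires_grad_(False)
G.eval()

opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.999))
for x_real, y_real in real_loader:           # real images + class labels
    z, y_fake = sample_z_y(x_real.size(0))   # random latents + random class labels
    with torch.no_grad():
        x_fake = G(z, y_fake)                # frozen G only provides fake samples
    # standard hinge loss for D; run for as long as necessary
    loss_D = F.relu(1.0 - D(x_real, y_real)).mean() + F.relu(1.0 + D(x_fake, y_fake)).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()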

All in all, it is possible that BigGAN with some tweaks could be affordable to train. (At least, with some crowdfunding…)

Experiments

BigGAN-PyTorch

BigGAN: Danbooru2018-1K Experiments

To test the waters, I ran three BigGAN experiments:

  1. I first experimented with retraining the ImageNet 128px model1.

    That resulted in almost total mode collapse when I re-enabled G after 2 days; investigating, I realized that I had misunderstood: it was a brand-new BigGAN model, trained independently, and came with its fully-trained D already. Oops.

  2. transfer learning the 128px ImageNet PyTorch BigGAN model to the 1k anime portraits; successful with ~6 GPU-days

  3. training from scratch a 256px BigGAN-deep on the 1k portraits;

    Partially successful after ~240 GPU-days: it reached comparable quality to StyleGAN before suffering serious mode collapse, possibly due to being forced to run with small minibatch sizes by BigGAN bugs.

Danbooru2018-1K Dataset

Constructing D1k

Constructing a new Danbooru-1K dataset: as BigGAN requires conditioning information, I constructed new 512px whole-image & portrait datasets by taking the 1000 most popular Danbooru2018 characters, with characters as categories, and cropped out portraits as usual:

cat metadata/20180000000000* | fgrep -e '"name":"solo"' | fgrep -v '"rating":"e"' | \
    jq -c '.tags | .[] | select(.category == "4") | .name' | sort | uniq --count | \
    sort --numeric-sort > characters.txt
mkdir ./characters-1k/ ; cd ./characters-1k/
cpCharacterFace () { # }
    CHARACTER="$@"
    CHARACTER_SAFE=$(echo $CHARACTER | tr '[:punct:]' '.')
    mkdir "$CHARACTER_SAFE"
    echo "$CHARACTER" "$CHARACTER_SAFE"
    IDS=$(cat ../metadata/* | fgrep '"name":"'$CHARACTER\" | fgrep -e '"name":"solo"' | \
          fgrep -v '"rating":"e"' | jq .id | tr -d '"')
    for ID in $IDS; do
        BUCKET=$(printf "%04d" $(( $ID % 1000 )) );
        TARGET=$(ls ../original/$BUCKET/$ID.*)
        CUDA_VISIBLE_DEVICES="" nice python ~/src/lbpcascade_animeface/examples/crop.py \
            ~/src/lbpcascade_animeface/lbpcascade_animeface.xml "$TARGET" "./$CHARACTER_SAFE/$ID"
    done
    }
export -f cpCharacterFace
tail -1200 ../characters.txt | cut -d '"' -f 2 | parallel --progress cpCharacterFace

I merged a number of redundant folders by hand2, cleaned as usual, and did further cropping as necessary to reach 1000. This resulted in 212,359 portrait faces, with the largest class (Hatsune Miku) having 6,624 images and the smallest classes having only ~0–1 images. (I don’t know if the class imbalance constitutes a real problem for BigGAN, as ImageNet itself is imbalanced on many levels.)
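(To check the class balance after construction, a convenience snippet along these lines, counting images per character folder, works; it is not part of the pipeline:)

from pathlib import Path

counts = {d.name: sum(1 for f in d.iterdir() if f.is_file())
          for d in Path('./characters-1k/').iterdir() if d.is_dir()}
for name, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f'{n:6d}  {name}')     # smallest classes first, Hatsune Miku last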

The data-loading code attempts to make the class index/ID number line up with the folder count, so the nth alphabetical folder (character) should have class ID n, which is important to know for generating conditional samples. The final set of classes/IDs (as defined for my Danbooru-1K dataset by find_classes):

2k.tan: 0 · abe.nana: 1 · abigail.williams..fate.grand.order.: 2 · abukuma..kantai.collection.: 3
admiral..kantai.collection.: 4 · aegis..persona.: 5 · aerith.gainsborough: 6 · afuro.terumi: 7
agano..kantai.collection.: 8 · agrias.oaks: 9 · ahri: 10 · aida.mana: 11
aino.minako: 12 · aisaka.taiga: 13 · aisha..elsword.: 14 · akagi..kantai.collection.: 15
akagi.miria: 16 · akashi..kantai.collection.: 17 · akatsuki..kantai.collection.: 18 · akaza.akari: 19
akebono..kantai.collection.: 20 · akemi.homura: 21 · aki.minoriko: 22 · aki.shizuha: 23
akigumo..kantai.collection.: 24 · akitsu.maru..kantai.collection.: 25 · akitsushima..kantai.collection.: 26 · akiyama.mio: 27
akiyama.yukari: 28 · akizuki..kantai.collection.: 29 · akizuki.ritsuko: 30 · akizuki.ryou: 31
akuma.homura: 32 · albedo: 33 · alice..wonderland.: 34 · alice.margatroid: 35
alice.margatroid..pc.98.: 36 · alisa.ilinichina.amiella: 37 · altera..fate.: 38 · amagi..kantai.collection.: 39
amagi.yukiko: 40 · amami.haruka: 41 · amanogawa.kirara: 42 · amasawa.yuuko: 43
amatsukaze..kantai.collection.: 44 · amazon..dragon.s.crown.: 45 · anastasia..idolmaster.: 46 · anchovy: 47
android.18: 48 · android.21: 49 · anegasaki.nene: 50 · angel..kof.: 51
angela.balzac: 52 · anjou.naruko: 53 · aoba..kantai.collection.: 54 · aoki.reika: 55
aori..splatoon.: 56 · aozaki.aoko: 57 · aqua..konosuba.: 58 · ara.han: 59
aragaki.ayase: 60 · araragi.karen: 61 · arashi..kantai.collection.: 62 · arashio..kantai.collection.: 63
archer: 64 · arcueid.brunestud: 65 · arima.senne: 66 · artoria.pendragon..all.: 67
artoria.pendragon..lancer.: 68 · artoria.pendragon..lancer.alter.: 69 · artoria.pendragon..swimsuit.rider.alter.: 70 · asahina.mikuru: 71
asakura.ryouko: 72 · asashimo..kantai.collection.: 73 · asashio..kantai.collection.: 74 · ashigara..kantai.collection.: 75
asia.argento: 76 · astolfo..fate.: 77 · asui.tsuyu: 78 · asuna..sao.: 79
atago..azur.lane.: 80 · atago..kantai.collection.: 81 · atalanta..fate.: 82 · au.ra: 83
ayanami..azur.lane.: 84 · ayanami..kantai.collection.: 85 · ayanami.rei: 86 · ayane..doa.: 87
ayase.eli: 88 · baiken: 89 · bardiche: 90 · barnaby.brooks.jr: 91
battleship.hime: 92 · bayonetta..character.: 93 · bb..fate...all.: 94 · bb..fate.extra.ccc.: 95
bb..swimsuit.mooncancer...fate.: 96 · beatrice: 97 · belfast..azur.lane.: 98 · bismarck..kantai.collection.: 99
black.hanekawa: 100 · black.rock.shooter..character.: 101 · blake.belladonna: 102 · blanc: 103
boko..girls.und.panzer.: 104 · bottle.miku: 105 · boudica..fate.grand.order.: 106 · bowsette: 107
bridget..guilty.gear.: 108 · busujima.saeko: 109 · c.c.: 110 · c.c..lemon..character.: 111
caesar.anthonio.zeppeli: 112 · cagliostro..granblue.fantasy.: 113 · camilla..fire.emblem.if.: 114 · cammy.white: 115
caren.hortensia: 116 · caster: 117 · cecilia.alcott: 118 · celes.chere: 119
charlotte..madoka.magica.: 120 · charlotte.dunois: 121 · charlotte.e.yeager: 122 · chen: 123
chibi.usa: 124 · chiki: 125 · chitanda.eru: 126 · chloe.von.einzbern: 127
choukai..kantai.collection.: 128 · chun.li: 129 · ciel: 130 · cirno: 131
clarisse..granblue.fantasy.: 132 · clownpiece: 133 · consort.yu..fate.: 134 · cure.beauty: 135
cure.happy: 136 · cure.march: 137 · cure.marine: 138 · cure.moonlight: 139
cure.peace: 140 · cure.sunny: 141 · cure.sunshine: 142 · cure.twinkle: 143
d.va..overwatch.: 144 · daiyousei: 145 · danua: 146 · darjeeling: 147
dark.magician.girl: 148 · dio.brando: 149 · dizzy: 150 · djeeta..granblue.fantasy.: 151
doremy.sweet: 152 · eas: 153 · eila.ilmatar.juutilainen: 154 · elesis..elsword.: 155
elin..tera.: 156 · elizabeth.bathory..brave...fate.: 157 · elizabeth.bathory..fate.: 158 · elizabeth.bathory..fate...all.: 159
ellen.baker: 160 · elphelt.valentine: 161 · elsa..frozen.: 162 · emilia..re.zero.: 163
emiya.kiritsugu: 164 · emiya.shirou: 165 · emperor.penguin..kemono.friends.: 166 · enma.ai: 167
enoshima.junko: 168 · enterprise..azur.lane.: 169 · ereshkigal..fate.grand.order.: 170 · erica.hartmann: 171
etna: 172 · eureka: 173 · eve..elsword.: 174 · ex.keine: 175
failure.penguin: 176 · fate.testarossa: 177 · felicia: 178 · female.admiral..kantai.collection.: 179
female.my.unit..fire.emblem.if.: 180 · female.protagonist..pokemon.go.: 181 · fennec..kemono.friends.: 182 · ferry..granblue.fantasy.: 183
flandre.scarlet: 184 · florence.nightingale..fate.grand.order.: 185 · fou..fate.grand.order.: 186 · francesca.lucchini: 187
frankenstein.s.monster..fate.: 188 · fubuki..kantai.collection.: 189 · fujibayashi.kyou: 190 · fujimaru.ritsuka..female.: 191
fujiwara.no.mokou: 192 · furude.rika: 193 · furudo.erika: 194 · furukawa.nagisa: 195
fusou..kantai.collection.: 196 · futaba.anzu: 197 · futami.mami: 198 · futatsuiwa.mamizou: 199
fuuro..pokemon.: 200 · galko: 201 · gambier.bay..kantai.collection.: 202 · ganaha.hibiki: 203
gangut..kantai.collection.: 204 · gardevoir: 205 · gasai.yuno: 206 · gertrud.barkhorn: 207
gilgamesh: 208 · ginga.nakajima: 209 · giorno.giovanna: 210 · gokou.ruri: 211
graf.eisen: 212 · graf.zeppelin..kantai.collection.: 213 · grey.wolf..kemono.friends.: 214 · gumi: 215
hachikuji.mayoi: 216 · hagikaze..kantai.collection.: 217 · hagiwara.yukiho: 218 · haguro..kantai.collection.: 219
hakurei.reimu: 220 · hamakaze..kantai.collection.: 221 · hammann..azur.lane.: 222 · han.juri: 223
hanasaki.tsubomi: 224 · hanekawa.tsubasa: 225 · hanyuu: 226 · haramura.nodoka: 227
harime.nui: 228 · haro: 229 · haruka..pokemon.: 230 · haruna..kantai.collection.: 231
haruno.sakura: 232 · harusame..kantai.collection.: 233 · hasegawa.kobato: 234 · hassan.of.serenity..fate.: 235
hata.no.kokoro: 236 · hatoba.tsugu..character.: 237 · hatsune.miku: 238 · hatsune.miku..append.: 239
hatsuyuki..kantai.collection.: 240 · hatsuzuki..kantai.collection.: 241 · hayami.kanade: 242 · hayashimo..kantai.collection.: 243
hayasui..kantai.collection.: 244 · hecatia.lapislazuli: 245 · helena.blavatsky..fate.grand.order.: 246 · heles: 247
hestia..danmachi.: 248 · hex.maniac..pokemon.: 249 · hibari..senran.kagura.: 250 · hibiki..kantai.collection.: 251
hieda.no.akyuu: 252 · hiei..kantai.collection.: 253 · higashi.setsuna: 254 · higashikata.jousuke: 255
high.priest: 256 · hiiragi.kagami: 257 · hiiragi.tsukasa: 258 · hijiri.byakuren: 259
hikari..pokemon.: 260 · himejima.akeno: 261 · himekaidou.hatate: 262 · hinanawi.tenshi: 263
hinatsuru.ai: 264 · hino.akane..idolmaster.: 265 · hino.akane..smile.precure..: 266 · hino.rei: 267
hirasawa.ui: 268 · hirasawa.yui: 269 · hiryuu..kantai.collection.: 270 · hishikawa.rikka: 271
hk416..girls.frontline.: 272 · holo: 273 · homura..xenoblade.2.: 274 · honda.mio: 275
hong.meiling: 276 · honma.meiko: 277 · honolulu..azur.lane.: 278 · horikawa.raiko: 279
hoshi.shouko: 280 · hoshiguma.yuugi: 281 · hoshii.miki: 282 · hoshimiya.ichigo: 283
hoshimiya.kate: 284 · hoshino.fumina: 285 · hoshino.ruri: 286 · hoshizora.miyuki: 287
hoshizora.rin: 288 · hotarumaru: 289 · hoto.cocoa: 290 · houjou.hibiki: 291
houjou.karen: 292 · houjou.satoko: 293 · houjuu.nue: 294 · houraisan.kaguya: 295
houshou..kantai.collection.: 296 · huang.baoling: 297 · hyuuga.hinata: 298 · i.168..kantai.collection.: 299
i.19..kantai.collection.: 300 · i.26..kantai.collection.: 301 · i.401..kantai.collection.: 302 · i.58..kantai.collection.: 303
i.8..kantai.collection.: 304 · ia..vocaloid.: 305 · ibaraki.douji..fate.grand.order.: 306 · ibaraki.kasen: 307
ibuki.fuuko: 308 · ibuki.suika: 309 · ichigo..darling.in.the.franxx.: 310 · ichinose.kotomi: 311
ichinose.shiki: 312 · ikamusume: 313 · ikazuchi..kantai.collection.: 314 · illustrious..azur.lane.: 315
illyasviel.von.einzbern: 316 · imaizumi.kagerou: 317 · inaba.tewi: 318 · inami.mahiru: 319
inazuma..kantai.collection.: 320 · index: 321 · ingrid: 322 · inkling: 323
inubashiri.momiji: 324 · inuyama.aoi: 325 · iori.rinko: 326 · iowa..kantai.collection.: 327
irisviel.von.einzbern: 328 · iroha..samurai.spirits.: 329 · ishtar..fate.grand.order.: 330 · isokaze..kantai.collection.: 331
isonami..kantai.collection.: 332 · isuzu..kantai.collection.: 333 · itsumi.erika: 334 · ivan.karelin: 335
izayoi.sakuya: 336 · izumi.konata: 337 · izumi.sagiri: 338 · jack.the.ripper..fate.apocrypha.: 339
jakuzure.nonon: 340 · japanese.crested.ibis..kemono.friends.: 341 · jeanne.d.arc..alter...fate.: 342 · jeanne.d.arc..alter.swimsuit.berserker.: 343
jeanne.d.arc..fate.: 344 · jeanne.d.arc..fate...all.: 345 · jeanne.d.arc..granblue.fantasy.: 346 · jeanne.d.arc..swimsuit.archer.: 347
jeanne.d.arc.alter.santa.lily: 348 · jintsuu..kantai.collection.: 349 · jinx..league.of.legends.: 350 · johnny.joestar: 351
jonathan.joestar: 352 · joseph.joestar..young.: 353 · jougasaki.mika: 354 · jougasaki.rika: 355
jun.you..kantai.collection.: 356 · junketsu: 357 · junko..touhou.: 358 · kaban..kemono.friends.: 359
kaburagi.t.kotetsu: 360 · kaenbyou.rin: 361 · kaenbyou.rin..cat.: 362 · kafuu.chino: 363
kaga..kantai.collection.: 364 · kagamine.len: 365 · kagamine.rin: 366 · kagerou..kantai.collection.: 367
kagiyama.hina: 368 · kagura..gintama.: 369 · kaguya.luna..character.: 370 · kaito: 371
kaku.seiga: 372 · kakyouin.noriaki: 373 · kallen.stadtfeld: 374 · kamikaze..kantai.collection.: 375
kamikita.komari: 376 · kamio.misuzu: 377 · kamishirasawa.keine: 378 · kamiya.nao: 379
kamoi..kantai.collection.: 380 · kaname.madoka: 381 · kanbaru.suruga: 382 · kanna.kamui: 383
kanzaki.ranko: 384 · karina.lyle: 385 · kasane.teto: 386 · kashima..kantai.collection.: 387
kashiwazaki.sena: 388 · kasodani.kyouko: 389 · kasugano.sakura: 390 · kasugano.sora: 391
kasumi..doa.: 392 · kasumi..kantai.collection.: 393 · kasumi..pokemon.: 394 · kasumigaoka.utaha: 395
katori..kantai.collection.: 396 · katou.megumi: 397 · katsura.hinagiku: 398 · katsuragi..kantai.collection.: 399
katsushika.hokusai..fate.grand.order.: 400 · katyusha: 401 · kawakami.mai: 402 · kawakaze..kantai.collection.: 403
kawashiro.nitori: 404 · kay..girls.und.panzer.: 405 · kazama.asuka: 406 · kazami.yuuka: 407
kenzaki.makoto: 408 · kijin.seija: 409 · kikuchi.makoto: 410 · kino: 411
kino.makoto: 412 · kinomoto.sakura: 413 · kinugasa..kantai.collection.: 414 · kirigaya.suguha: 415
kirigiri.kyouko: 416 · kirijou.mitsuru: 417 · kirima.sharo: 418 · kirin..armor.: 419
kirino.ranmaru: 420 · kirisame.marisa: 421 · kirishima..kantai.collection.: 422 · kirito: 423
kiryuuin.satsuki: 424 · kisaragi..kantai.collection.: 425 · kisaragi.chihaya: 426 · kise.yayoi: 427
kishibe.rohan: 428 · kishin.sagume: 429 · kiso..kantai.collection.: 430 · kiss.shot.acerola.orion.heart.under.blade: 431
kisume: 432 · kitakami..kantai.collection.: 433 · kiyohime..fate.grand.order.: 434 · kiyoshimo..kantai.collection.: 435
kizuna.ai: 436 · koakuma: 437 · kobayakawa.rinko: 438 · kobayakawa.sae: 439
kochiya.sanae: 440 · kohinata.miho: 441 · koizumi.hanayo: 442 · komaki.manaka: 443
komeiji.koishi: 444 · komeiji.satori: 445 · kongou..kantai.collection.: 446 · konjiki.no.yami: 447
konpaku.youmu: 448 · konpaku.youmu..ghost.: 449 · kooh: 450 · kos.mos: 451
koshimizu.sachiko: 452 · kotobuki.tsumugi: 453 · kotomine.kirei: 454 · kotonomiya.yuki: 455
kousaka.honoka: 456 · kousaka.kirino: 457 · kousaka.tamaki: 458 · kozakura.marry: 459
kuchiki.rukia: 460 · kujikawa.rise: 461 · kujou.karen: 462 · kula.diamond: 463
kuma..kantai.collection.: 464 · kumano..kantai.collection.: 465 · kumoi.ichirin: 466 · kunikida.hanamaru: 467
kuradoberi.jam: 468 · kuriyama.mirai: 469 · kurodani.yamame: 470 · kuroka..high.school.dxd.: 471
kurokawa.eren: 472 · kuroki.tomoko: 473 · kurosawa.dia: 474 · kurosawa.ruby: 475
kuroshio..kantai.collection.: 476 · kuroyukihime: 477 · kurumi.erika: 478 · kusanagi.motoko: 479
kusugawa.sasara: 480 · kuujou.jolyne: 481 · kuujou.joutarou: 482 · kyon: 483
kyonko: 484 · kyubey: 485 · laffey..azur.lane.: 486 · lala.satalin.deviluke: 487
lancer: 488 · lancer..fate.zero.: 489 · laura.bodewig: 490 · leafa: 491
lei.lei: 492 · lelouch.lamperouge: 493 · len: 494 · letty.whiterock: 495
levi..shingeki.no.kyojin.: 496 · libeccio..kantai.collection.: 497 · lightning.farron: 498 · lili..tekken.: 499
lilith.aensland: 500 · lillie..pokemon.: 501 · lily.white: 502 · link: 503
little.red.riding.hood..grimm.: 504 · louise.francoise.le.blanc.de.la.valliere: 505 · lucina: 506 · lum: 507
luna.child: 508 · lunamaria.hawke: 509 · lunasa.prismriver: 510 · lusamine..pokemon.: 511
lyn..blade...soul.: 512 · lyndis..fire.emblem.: 513 · lynette.bishop: 514 · m1903.springfield..girls.frontline.: 515
madotsuki: 516 · maekawa.miku: 517 · maka.albarn: 518 · makigumo..kantai.collection.: 519
makinami.mari.illustrious: 520 · makise.kurisu: 521 · makoto..street.fighter.: 522 · makoto.nanaya: 523
mankanshoku.mako: 524 · mao..pokemon.: 525 · maou..maoyuu.: 526 · maribel.hearn: 527
marie.antoinette..fate.grand.order.: 528 · mash.kyrielight: 529 · matoi..pso2.: 530 · matoi.ryuuko: 531
matou.sakura: 532 · matsuura.kanan: 533 · maya..kantai.collection.: 534 · me.tan: 535
medicine.melancholy: 536 · medjed: 537 · meer.campbell: 538 · megumin: 539
megurine.luka: 540 · mei..overwatch.: 541 · mei..pokemon.: 542 · meiko: 543
meltlilith: 544 · mercy..overwatch.: 545 · merlin.prismriver: 546 · michishio..kantai.collection.: 547
midare.toushirou: 548 · midna: 549 · midorikawa.nao: 550 · mika..girls.und.panzer.: 551
mikasa.ackerman: 552 · mikazuki.munechika: 553 · miki.sayaka: 554 · millia.rage: 555
mima: 556 · mimura.kanako: 557 · minami.kotori: 558 · minamoto.no.raikou..fate.grand.order.: 559
minamoto.no.raikou..swimsuit.lancer...fate.: 560 · minase.akiko: 561 · minase.iori: 562 · miqo.te: 563
misaka.mikoto: 564 · mishaguji: 565 · misumi.nagisa: 566 · mithra: 567
miura.azusa: 568 · miyafuji.yoshika: 569 · miyako.yoshika: 570 · miyamoto.frederica: 571
miyamoto.musashi..fate.grand.order.: 572 · miyaura.sanshio: 573 · mizuhashi.parsee: 574 · mizuki..pokemon.: 575
mizunashi.akari: 576 · mizuno.ami: 577 · mogami..kantai.collection.: 578 · momo.velia.deviluke: 579
momozono.love: 580 · mononobe.no.futo: 581 · mordred..fate.: 582 · mordred..fate...all.: 583
morgiana: 584 · morichika.rinnosuke: 585 · morikubo.nono: 586 · moriya.suwako: 587
moroboshi.kirari: 588 · morrigan.aensland: 589 · motoori.kosuzu: 590 · mumei..kabaneri.: 591
murakumo..kantai.collection.: 592 · murasa.minamitsu: 593 · murasame..kantai.collection.: 594 · musashi..kantai.collection.: 595
mutsu..kantai.collection.: 596 · mutsuki..kantai.collection.: 597 · my.unit..fire.emblem..kakusei.: 598 · my.unit..fire.emblem.if.: 599
myoudouin.itsuki: 600 · mysterious.heroine.x: 601 · mysterious.heroine.x..alter.: 602 · mystia.lorelei: 603
nadia: 604 · nagae.iku: 605 · naganami..kantai.collection.: 606 · nagato..kantai.collection.: 607
nagato.yuki: 608 · nagatsuki..kantai.collection.: 609 · nagi: 610 · nagisa.kaworu: 611
naka..kantai.collection.: 612 · nakano.azusa: 613 · nami..one.piece.: 614 · nanami.chiaki: 615
nanasaki.ai: 616 · nao..mabinogi.: 617 · narmaya..granblue.fantasy.: 618 · narukami.yuu: 619
narusawa.ryouka: 620 · natalia..idolmaster.: 621 · natori.sana: 622 · natsume..pokemon.: 623
natsume.rin: 624 · nazrin: 625 · nekomiya.hinata: 626 · nekomusume: 627
nekomusume..gegege.no.kitarou.6.: 628 · nepgear: 629 · neptune..neptune.series.: 630 · nero.claudius..bride...fate.: 631
nero.claudius..fate.: 632 · nero.claudius..fate...all.: 633 · nero.claudius..swimsuit.caster...fate.: 634 · nia.teppelin: 635
nibutani.shinka: 636 · nico.robin: 637 · ninomiya.asuka: 638 · nishikino.maki: 639
nishizumi.maho: 640 · nishizumi.miho: 641 · nitocris..fate.grand.order.: 642 · nitocris..swimsuit.assassin...fate.: 643
nitta.minami: 644 · noel.vermillion: 645 · noire: 646 · northern.ocean.hime: 647
noshiro..kantai.collection.: 648 · noumi.kudryavka: 649 · nu.13: 650 · nyarlathotep..nyaruko.san.: 651
oboro..kantai.collection.: 652 · oda.nobunaga..fate.: 653 · ogata.chieri: 654 · ohara.mari: 655
oikawa.shizuku: 656 · okazaki.yumemi: 657 · okita.souji..alter...fate.: 658 · okita.souji..fate.: 659
okita.souji..fate...all.: 660 · onozuka.komachi: 661 · ooi..kantai.collection.: 662 · oomori.yuuko: 663
ootsuki.yui: 664 · ooyodo..kantai.collection.: 665 · osakabe.hime..fate.grand.order.: 666 · oshino.shinobu: 667
otonashi.kotori: 668 · panty..psg.: 669 · passion.lip: 670 · patchouli.knowledge: 671
pepperoni..girls.und.panzer.: 672 · perrine.h.clostermann: 673 · pharah..overwatch.: 674 · phosphophyllite: 675
pikachu: 676 · pixiv.tan: 677 · platelet..hataraku.saibou.: 678 · platinum.the.trinity: 679
pod..nier.automata.: 680 · pola..kantai.collection.: 681 · priest..ragnarok.online.: 682 · princess.king.boo: 683
princess.peach: 684 · princess.serenity: 685 · princess.zelda: 686 · prinz.eugen..azur.lane.: 687
prinz.eugen..kantai.collection.: 688 · prisma.illya: 689 · purple.heart: 690 · puru.see: 691
pyonta: 692 · qbz.95..girls.frontline.: 693 · rachel.alucard: 694 · racing.miku: 695
raising.heart: 696 · ramlethal.valentine: 697 · ranka.lee: 698 · ranma.chan: 699
re.class.battleship: 700 · reinforce: 701 · reinforce.zwei: 702 · reisen.udongein.inaba: 703
reiuji.utsuho: 704 · reizei.mako: 705 · rem..re.zero.: 706 · remilia.scarlet: 707
rensouhou.chan: 708 · rensouhou.kun: 709 · rias.gremory: 710 · rider: 711
riesz: 712 · ringo..touhou.: 713 · ro.500..kantai.collection.: 714 · roll: 715
rosehip: 716 · rossweisse: 717 · ruby.rose: 718 · rumia: 719
rydia: 720 · ryougi.shiki: 721 · ryuuguu.rena: 722 · ryuujou..kantai.collection.: 723
saber: 724 · saber.alter: 725 · saber.lily: 726 · sagisawa.fumika: 727
saigyouji.yuyuko: 728 · sailor.mars: 729 · sailor.mercury: 730 · sailor.moon: 731
sailor.saturn: 732 · sailor.venus: 733 · saint.martha: 734 · sakagami.tomoyo: 735
sakamoto.mio: 736 · sakata.gintoki: 737 · sakuma.mayu: 738 · sakura.chiyo: 739
sakura.futaba: 740 · sakura.kyouko: 741 · sakura.miku: 742 · sakurai.momoka: 743
sakurauchi.riko: 744 · samidare..kantai.collection.: 745 · samus.aran: 746 · sanya.v.litvyak: 747
sanzen.in.nagi: 748 · saotome.ranma: 749 · saratoga..kantai.collection.: 750 · sasaki.chiho: 751
saten.ruiko: 752 · satonaka.chie: 753 · satsuki..kantai.collection.: 754 · sawamura.spencer.eriri: 755
saya: 756 · sazaki.kaoruko: 757 · sazanami..kantai.collection.: 758 · scathach..fate...all.: 759
scathach..fate.grand.order.: 760 · scathach..swimsuit.assassin...fate.: 761 · seaport.hime: 762 · seeu: 763
seiran..touhou.: 764 · seiren..suite.precure.: 765 · sekibanki: 766 · selvaria.bles: 767
sendai..kantai.collection.: 768 · sendai.hakurei.no.miko: 769 · sengoku.nadeko: 770 · senjougahara.hitagi: 771
senketsu: 772 · sento.isuzu: 773 · serena..pokemon.: 774 · serval..kemono.friends.: 775
sf.a2.miki: 776 · shameimaru.aya: 777 · shana: 778 · shanghai.doll: 779
shantae..character.: 780 · sheryl.nome: 781 · shibuya.rin: 782 · shidare.hotaru: 783
shigure..kantai.collection.: 784 · shijou.takane: 785 · shiki.eiki: 786 · shikinami..kantai.collection.: 787
shikinami.asuka.langley: 788 · shimada.arisu: 789 · shimakaze..kantai.collection.: 790 · shimamura.uzuki: 791
shinjou.akane: 792 · shinki: 793 · shinku: 794 · shiomi.shuuko: 795
shirabe.ako: 796 · shirai.kuroko: 797 · shirakiin.ririchiyo: 798 · shiranui..kantai.collection.: 799
shiranui.mai: 800 · shirasaka.koume: 801 · shirase.sakuya: 802 · shiratsuyu..kantai.collection.: 803
shirayuki.hime: 804 · shirogane.naoto: 805 · shirona..pokemon.: 806 · shoebill..kemono.friends.: 807
shokuhou.misaki: 808 · shouhou..kantai.collection.: 809 · shoukaku..kantai.collection.: 810 · shuten.douji..fate.grand.order.: 811
signum: 812 · silica: 813 · simon: 814 · sinon: 815
soga.no.tojiko: 816 · sona.buvelle: 817 · sonoda.umi: 818 · sonohara.anri: 819
sonozaki.mion: 820 · sonozaki.shion: 821 · sora.ginko: 822 · sorceress..dragon.s.crown.: 823
souryuu..kantai.collection.: 824 · souryuu.asuka.langley: 825 · souseiseki: 826 · star.sapphire: 827
stocking..psg.: 828 · su.san: 829 · subaru.nakajima: 830 · suigintou: 831
suiren..pokemon.: 832 · suiseiseki: 833 · sukuna.shinmyoumaru: 834 · sunny.milk: 835
suomi.kp31..girls.frontline.: 836 · super.pochaco: 837 · super.sonico: 838 · suzukaze.aoba: 839
suzumiya.haruhi: 840 · suzutsuki..kantai.collection.: 841 · suzuya..kantai.collection.: 842 · tachibana.arisu: 843
tachibana.hibiki..symphogear.: 844 · tada.riina: 845 · taigei..kantai.collection.: 846 · taihou..azur.lane.: 847
taihou..kantai.collection.: 848 · tainaka.ritsu: 849 · takagaki.kaede: 850 · takakura.himari: 851
takamachi.nanoha: 852 · takami.chika: 853 · takanashi.rikka: 854 · takao..azur.lane.: 855
takao..kantai.collection.: 856 · takara.miyuki: 857 · takarada.rikka: 858 · takatsuki.yayoi: 859
takebe.saori: 860 · tama..kantai.collection.: 861 · tamamo..fate...all.: 862 · tamamo.cat..fate.: 863
tamamo.no.mae..fate.: 864 · tamamo.no.mae..swimsuit.lancer...fate.: 865 · tanamachi.kaoru: 866 · taneshima.popura: 867
tanned.cirno: 868 · taokaka: 869 · tatara.kogasa: 870 · tateyama.ayano: 871
tatsumaki: 872 · tatsuta..kantai.collection.: 873 · tedeza.rize: 874 · tenryuu..kantai.collection.: 875
tenshi..angel.beats..: 876 · teruzuki..kantai.collection.: 877 · tharja: 878 · tifa.lockhart: 879
tina.branford: 880 · tippy..gochiusa.: 881 · tokiko..touhou.: 882 · tokisaki.kurumi: 883
tokitsukaze..kantai.collection.: 884 · tomoe.gozen..fate.grand.order.: 885 · tomoe.hotaru: 886 · tomoe.mami: 887
tone..kantai.collection.: 888 · toono.akiha: 889 · tooru..maidragon.: 890 · toosaka.rin: 891
toramaru.shou: 892 · toshinou.kyouko: 893 · totoki.airi: 894 · toudou.shimako: 895
toudou.yurika: 896 · toujou.koneko: 897 · toujou.nozomi: 898 · touko..pokemon.: 899
touwa.erio: 900 · toyosatomimi.no.miko: 901 · tracer..overwatch.: 902 · tsukikage.yuri: 903
tsukimiya.ayu: 904 · tsukino.mito: 905 · tsukino.usagi: 906 · tsukumo.benben: 907
tsurumaru.kuninaga: 908 · tsuruya: 909 · tsushima.yoshiko: 910 · u.511..kantai.collection.: 911
ujimatsu.chiya: 912 · ultimate.madoka: 913 · umikaze..kantai.collection.: 914 · unicorn..azur.lane.: 915
unryuu..kantai.collection.: 916 · urakaze..kantai.collection.: 917 · uraraka.ochako: 918 · usada.hikaru: 919
usami.renko: 920 · usami.sumireko: 921 · ushio..kantai.collection.: 922 · ushiromiya.ange: 923
ushiwakamaru..fate.grand.order.: 924 · uzuki..kantai.collection.: 925 · vampire..azur.lane.: 926 · vampy: 927
venera.sama: 928 · verniy..kantai.collection.: 929 · victorica.de.blois: 930 · violet.evergarden..character.: 931
vira.lilie: 932 · vita: 933 · vivio: 934 · wa2000..girls.frontline.: 935
wakasagihime: 936 · wang.liu.mei: 937 · warspite..kantai.collection.: 938 · watanabe.you: 939
watarase.jun: 940 · watatsuki.no.yorihime: 941 · waver.velvet: 942 · weiss.schnee: 943
white.mage: 944 · widowmaker..overwatch.: 945 · wo.class.aircraft.carrier: 946 · wriggle.nightbug: 947
xenovia.quarta: 948 · xp.tan: 949 · xuanzang..fate.grand.order.: 950 · yagami.hayate: 951
yagokoro.eirin: 952 · yahagi..kantai.collection.: 953 · yakumo.ran: 954 · yakumo.yukari: 955
yamada.aoi: 956 · yamada.elf: 957 · yamakaze..kantai.collection.: 958 · yamashiro..azur.lane.: 959
yamashiro..kantai.collection.: 960 · yamato..kantai.collection.: 961 · yamato.no.kami.yasusada: 962 · yang.xiao.long: 963
yasaka.kanako: 964 · yayoi..kantai.collection.: 965 · yazawa.nico: 966 · yin: 967
yoko.littner: 968 · yorha.no..2.type.b: 969 · yorigami.shion: 970 · yowane.haku: 971
yuffie.kisaragi: 972 · yui..angel.beats..: 973 · yuigahama.yui: 974 · yuki.miku: 975
yukikaze..kantai.collection.: 976 · yukine.chris: 977 · yukinoshita.yukino: 978 · yukishiro.honoka: 979
yumi..senran.kagura.: 980 · yuna..ff10.: 981 · yuno: 982 · yura..kantai.collection.: 983
yuubari..kantai.collection.: 984 · yuudachi..kantai.collection.: 985 · yuugumo..kantai.collection.: 986 · yuuki..sao.: 987
yuuki.makoto: 988 · yuuki.mikan: 989 · yuzuhara.konomi: 990 · yuzuki.yukari: 991
yuzuriha.inori: 992 · z1.leberecht.maass..kantai.collection.: 993 · z3.max.schultz..kantai.collection.: 994 · zero.two..darling.in.the.franxx.: 995
zeta..granblue.fantasy.: 996 · zooey..granblue.fantasy.: 997 · zuihou..kantai.collection.: 998 · zuikaku..kantai.collection.: 999

(Aside from being potentially useful to stabilize training by providing supervision/metadata, use of classes/categories reduces the need for character-specific transfer learning for specialized StyleGAN models, since you can just generate samples from a specific class. For the 256px model, I provide downloadable samples for each of the 1000 classes.)
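One way to double-check the class-name → class-ID mapping before requesting conditional samples (a convenience snippet, assuming BigGAN-PyTorch’s data loader sorts class folders alphabetically the same way a torchvision-style ImageFolder does):

from torchvision import datasets

dset = datasets.ImageFolder('./characters-1k/')
for name, idx in sorted(dset.class_to_idx.items(), key=lambda kv: kv[1]):
    print(f'{idx:4d}  {name}')
print(dset.class_to_idx['hatsune.miku'])   # expected: 238, matching the listing above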

D1K Download

D1K (20GB; n = 822,842 512px JPEGs) and the portrait-crop version, D1K-portraits (18GB; n = 212,359), are available for download:

rsync --verbose --recursive rsync://78.46.86.149:873/biggan/d1k ./d1k/

The JPG compression turned out to be too aggressive and resulted in noticeable artifacting, so in early 2020 I regenerated D1K from Danbooru2019 for future projects, creating D1K-2019-512px: a fresh set of top-1k solo-character images, s/q Danbooru2019, no JPEG compression.

Merges of overlapping characters were again necessary; the full set of tag merges:

m() { mv ./$1/* ./$2/ && rmdir ./$1; }
m alice.margatroid..pc.98. alice.margatroid; m artoria.pendragon..all. saber; m artoria.pendragon..lancer. saber;
m artoria.pendragon..lancer.alter. saber; m artoria.pendragon..swimsuit.rider.alter. saber; m artoria.pendragon..swimsuit.ruler...fate. saber;
m atago..midsummer.march...azur.lane. atago..azur.lane.; m bardiche fate.testarossa; m bb..fate...all. matou.sakura;
m bb..fate.extra.ccc. matou.sakura; m bb..swimsuit.mooncancer...fate. matou.sakura; m bottle.miku hatsune.miku;
m cure.beauty aoki.reika; m cure.happy hoshizora.miyuki; m cure.march midorikawa.nao;
m cure.marine kurumi.erika; m cure.melody houjou.hibiki; m cure.moonlight tsukikage.yuri;
m cure.peace kise.yayoi; m cure.peach momozono.love; m cure.sunny hino.akane..smile.precure..;
m cure.sunshine myoudouin.itsuki; m cure.sword kenzaki.makoto; m cure.twinkle amanogawa.kirara;
m eas higashi.setsuna; m elizabeth.bathory..brave...fate. elizabeth.bathory..fate.; m elizabeth.bathory..fate...all. elizabeth.bathory..fate.;
m ex.keine kamishirasawa.keine; m frankenstein.s.monster..swimsuit.saber...fate. frankenstein.s.monster..fate.; m frederica.bernkastel furude.rika;
m furudo.erika furude.rika; m graf.eisen vita; m hatsune.miku..append. hatsune.miku;
m ishtar..fate.grand.order. ishtar..fate...all.; m jeanne.d.arc..alter...fate. jeanne.d.arc..fate.; m jeanne.d.arc..alter.swimsuit.berserker. jeanne.d.arc..fate.;
m jeanne.d.arc..fate...all. jeanne.d.arc..fate.; m jeanne.d.arc..swimsuit.archer. jeanne.d.arc..fate.; m jeanne.d.arc.alter.santa.lily jeanne.d.arc..fate.;
m kaenbyou.rin..cat. kaenbyou.rin; m kiyohime..swimsuit.lancer...fate. kiyohime..fate.grand.order.; m konpaku.youmu..ghost. konpaku.youmu;
m kyonko kyon; m lancer cu.chulainn..fate...all.; m medb..fate.grand.order. medb..fate...all.;
m medjed nitocris..fate.grand.order.; m meltryllis..swimsuit.lancer...fate. meltryllis; m minamoto.no.raikou..swimsuit.lancer...fate. minamoto.no.raikou..fate.grand.order.;
m miyamoto.musashi..swimsuit.berserker...fate. miyamoto.musashi..fate.grand.order.; m mordred..fate...all. mordred..fate.; m mysterious.heroine.x saber;
m mysterious.heroine.x..alter. saber; m mysterious.heroine.xx..foreigner. saber; m nero.claudius..bride...fate. nero.claudius..fate.;
m nero.claudius..fate...all. nero.claudius..fate.; m nero.claudius..swimsuit.caster...fate. nero.claudius..fate.; m nitocris..swimsuit.assassin...fate. nitocris..fate.grand.order.;
m oda.nobunaga..fate...all. oda.nobunaga..fate.; m okita.souji..alter...fate. okita.souji..fate.; m okita.souji..fate...all. okita.souji..fate.;
m princess.of.the.crystal takakura.himari; m princess.serenity tsukino.usagi; m prinz.eugen..unfading.smile...azur.lane. prinz.eugen..azur.lane.;
m prisma.illya illyasviel.von.einzbern; m purple.heart neptune..neptune.series.; m pyonta moriya.suwako;
m racing.miku hatsune.miku; m raising.heart takamachi.nanoha; m reinforce.zwei reinforce;
m rensouhou.chan shimakaze..kantai.collection.; m roll.caskett roll; m saber.alter saber;
m saber.lily saber; m sailor.jupiter kino.makoto; m sailor.mars hino.rei;
m sailor.mercury mizuno.ami; m sailor.moon tsukino.usagi; m sailor.saturn tomoe.hotaru;
m sailor.venus aino.minako; m sakura.miku hatsune.miku; m scathach..fate.grand.order. scathach..fate...all.;
m scathach..swimsuit.assassin...fate. scathach..fate...all.; m scathach.skadi..fate.grand.order. scathach..fate...all.; m schwertkreuz yagami.hayate;
m seiren..suite.precure. kurokawa.eren; m shanghai.doll alice.margatroid; m shikinami.asuka.langley souryuu.asuka.langley;
m su.san medicine.melancholy; m taihou..forbidden.feast...azur.lane. taihou..azur.lane.; m tamamo..fate...all. tamamo.cat..fate.;
m tamamo.no.mae..fate. tamamo.cat..fate.; m tamamo.no.mae..swimsuit.lancer...fate. tamamo.cat..fate.; m tanned.cirno cirno;
m ultimate.madoka kaname.madoka; m yuki.miku hatsune.miku

Download:

rsync --verbose --recursive rsync://78.46.86.149:873/biggan/d1k-2019-512px/ ./d1k-2019-512px/
D1K BigGAN Conversion

BigGAN requires the dataset metadata to be defined in utils.py; then, if using HDF5 archives, the dataset must be processed into an HDF5 archive, along with Inception statistics for the periodic testing (although I minimize testing, the precomputed statistics are still necessary).

HDF5 is not necessary and can be omitted (BigGAN-PyTorch can read image folders) if you prefer to avoid the hassle.

utils.py must be edited to add metadata per dataset (there is no CLI option), which looks like this to define a 128px Danbooru-1K portrait dataset:

 # Convenience dicts
-dset_dict = {'I32': dset.ImageFolder, 'I64': dset.ImageFolder,
+dset_dict = {'I32': dset.ImageFolder, 'I64': dset.ImageFolder,
              'I128': dset.ImageFolder, 'I256': dset.ImageFolder,
              'I32_hdf5': dset.ILSVRC_HDF5, 'I64_hdf5': dset.ILSVRC_HDF5,
              'I128_hdf5': dset.ILSVRC_HDF5, 'I256_hdf5': dset.ILSVRC_HDF5,
-             'C10': dset.CIFAR10, 'C100': dset.CIFAR100}
+             'C10': dset.CIFAR10, 'C100': dset.CIFAR100,
+             'D1K': dset.ImageFolder, 'D1K_hdf5': dset.ILSVRC_HDF5 }
 imsize_dict = {'I32': 32, 'I32_hdf5': 32,
                'I64': 64, 'I64_hdf5': 64,
                'I128': 128, 'I128_hdf5': 128,
                'I256': 256, 'I256_hdf5': 256,
-               'C10': 32, 'C100': 32}
+               'C10': 32, 'C100': 32,
+               'D1K': 128, 'D1K_hdf5': 128 }
 root_dict = {'I32': 'ImageNet', 'I32_hdf5': 'ILSVRC32.hdf5',
              'I64': 'ImageNet', 'I64_hdf5': 'ILSVRC64.hdf5',
              'I128': 'ImageNet', 'I128_hdf5': 'ILSVRC128.hdf5',
              'I256': 'ImageNet', 'I256_hdf5': 'ILSVRC256.hdf5',
-             'C10': 'cifar', 'C100': 'cifar'}
+             'C10': 'cifar', 'C100': 'cifar',
+             'D1K': 'characters-1k-faces', 'D1K_hdf5': 'D1K.hdf5' }
 nclass_dict = {'I32': 1000, 'I32_hdf5': 1000,
                'I64': 1000, 'I64_hdf5': 1000,
                'I128': 1000, 'I128_hdf5': 1000,
                'I256': 1000, 'I256_hdf5': 1000,
-               'C10': 10, 'C100': 100}
-# Number of classes to put per sample sheet
+               'C10': 10, 'C100': 100,
+               'D1K': 1000, 'D1K_hdf5': 1000 }
+# Number of classes to put per sample sheet
 classes_per_sheet_dict = {'I32': 50, 'I32_hdf5': 50,
                           'I64': 50, 'I64_hdf5': 50,
                           'I128': 20, 'I128_hdf5': 20,
                           'I256': 20, 'I256_hdf5': 20,
-                          'C10': 10, 'C100': 100}
+                          'C10': 10, 'C100': 100,
+                          'D1K': 1, 'D1K_hdf5': 1 }

Each dataset exists in 2 forms, as the original image folder and then as the processed HDF5:

python make_hdf5.py --dataset D1K512 --data_root /media/gwern/Data2/danbooru2018
python calculate_inception_moments.py --parallel --dataset D1K_hdf5 --batch_size 64 \
    --data_root /media/gwern/Data2/danbooru2018
## Or ImageNet example:
python make_hdf5.py --dataset I128 --data_root /media/gwern/Data/imagenet/
python calculate_inception_moments.py --dataset I128_hdf5 --batch_size 64 \
    --data_root /media/gwern/Data/imagenet/

make_hdf5.py will write the HDF5 to an ILSVRC*.hdf5 file, so rename it to whatever the root_dict entry expects (eg D1K.hdf5).

BigGAN-PyTorch Training

With the HDF5 & Inception statistics calculated, it should be possible to run like so:

python train.py --dataset D1K --parallel --shuffle --num_workers 4 --batch_size 32 \
    --num_G_accumulations 8 --num_D_accumulations 8 \
    --num_D_steps 1 --G_lr 1e-4 --D_lr 4e-4 --D_B2 0.999 --G_B2 0.999 --G_attn 64 --D_attn 64 \
    --G_nl inplace_relu --D_nl inplace_relu --SN_eps 1e-6 --BN_eps 1e-5 --adam_eps 1e-6 \
    --G_ortho 0.0 --G_shared --G_init ortho --D_init ortho --hier --dim_z 120 --shared_dim 128 \
    --G_eval_mode --G_ch 96 --D_ch 96 \
    --ema --use_ema --ema_start 20000 --test_every 2000 --save_every 1000 --num_best_copies 5 \
    --num_save_copies 2 --seed 0 --use_multiepoch_sampler --which_best FID \
    --data_root /media/gwern/Data2/danbooru2018

The architecture is specified on the command line and must be correct; examples are in the scripts/ directory. In the above example, the block --num_D_steps…--D_ch should be left strictly alone, and the key parameters come before/after that architecture block. In this example, my 2×1080ti can support a batch size of n = 32 & the gradient-accumulation overhead without OOMing. In addition, it’s important to enable EMA, which makes a truly remarkable difference in generated sample quality (which is interesting because EMA sounds redundant with momentum/learning rates, but isn’t). The big batches of BigGAN are implemented as --batch_size times --num_{G/D}_accumulations; I would need an accumulation of 64 to match n = 2048. Without EMA, samples are low quality and change drastically at each iteration; after a certain number of iterations, sampling is instead done with EMA, which averages the weights across iterations offline (but one doesn’t train using the averaged model!3). This works because collectively those iterations are similar, ‘orbiting’ around a central point, and image quality clearly improves gradually once EMA sampling is turned on.
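For intuition, the EMA trick amounts to maintaining a shadow copy of G whose weights are an exponential moving average of the training weights, and sampling only from that shadow copy; a minimal PyTorch sketch of the idea (not the actual implementation behind the --ema/--use_ema/--ema_start flags):

import copy
import torch

def make_ema(G):
    G_ema = copy.deepcopy(G)             # shadow copy used only for sampling
    for p in G_ema.parameters():
        p.requires_grad_(False)
    return G_ema

@torch.no_grad()
def update_ema(G, G_ema, decay=0.9999):
    # after each G update: shadow <- decay*shadow + (1-decay)*current
    for p, p_ema in zip(G.parameters(), G_ema.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)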

Transfer learning is not supported natively, but a similar trick as with StyleGAN is feasible: just drop the pretrained models into the checkpoint folder and resume (which will work as long as the architecture is identical to the CLI parameters).

The sample-sheet functionality can easily overload a GPU and OOM. In utils.py, it may be necessary to simply comment out all of the sampling functionality starting with utils.sample_sheet.

The main problem running BigGAN is odd bugs in BigGAN’s handling of epochs/iterations and of changing gradient accumulations. With --use_multiepoch_sampler, it does complicated calculations to try to keep sampling consistent across epochs, with precisely the same ordering of samples regardless of how often the BigGAN job is started/stopped (eg on a cluster), but as one increases the total minibatch size and it progresses through an epoch, it tries to index data which doesn’t exist and crashes; I was unable to figure out exactly how the calculations were going wrong.4

With that option disabled and larger total minibatches used, a different bug gets triggered, leading to inscrutable crashes:

...
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Traceback (most recent call last):
  File "train.py", line 228, in <module>
    main()
  File "train.py", line 225, in main
    run(config)
  File "train.py", line 172, in run
    for i, (x, y) in enumerate(pbar):
  File "/root/BigGAN-PyTorch-mooch/utils.py", line 842, in progress
    for n, item in enumerate(items):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
    idx, batch = self._get_batch()
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 601, in _get_batch
    return self.data_queue.get(timeout=MP_STATUS_CHECK_INTERVAL)
  File "/opt/conda/lib/python3.7/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/opt/conda/lib/python3.7/threading.py", line 300, in wait
    gotit = waiter.acquire(True, timeout)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 274, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 21103) is killed by signal: Bus error.

There is no good workaround here: starting with small fast minibatches compromises final quality, while starting with big slow minibatches may work but then costs far more compute. I did find that the G/D accumulations can be imbalanced to allow increasing the G’s total minibatch (which appears to be the key to better quality), but this risks destabilizing training. These bugs need to be fixed before trying BigGAN for real.

BigGAN: ImageNet → Danbooru2018-1K

In any case, I ran the 128px ImageNet → Danbooru2018-1K transfer for ~6 GPU-days (or ~3 days on my 2×1080ti workstation), and the training montage indicates it was working fine:

Training montage of the 128px ImageNet → Danbooru2018-1K; successful

Sometime after that, while continuing to play with imbalanced minibatches to avoid triggering the iteration/crash bugs, it diverged badly and mode-collapsed into static, so I killed the run; the point appears to have been made: transfer learning is indeed possible, and the speed of the adaptation suggests benefits in training time from starting with an already highly-trained model.

BigGAN: 256px Danbooru2018-1K

More seriously, I began training a 256px model on the Danbooru2018-1K portraits. This required rebuilding the HDF5 with 256px settings, and since I wasn’t doing transfer learning, I used the BigGAN-deep architecture settings, since BigGAN-deep has better results & is smaller than the original BigGAN.

My own 2×1080ti were inadequate for reasonable turnaround on training a 256px BigGAN from scratch—they would take something like 4+ months wallclock—so I decided to shell out for a big cloud instance. AWS/GCP are too expensive, so I used this as an opportunity to investigate Vast.ai as an alternative: they typically have much lower prices.

Vast.ai setup was straightforward, and I found a nice instance: an 8×2080ti machine available for just $1.7/hour (AWS, for comparison, would charge closer to $2.16/hour for just 8 K80 halves). So I rented their 8×2080ti instance from 2019-05-02 to 2019-06-03 ($1.7/hour; total: $1373.64).

That is ~250 GPU-days of training, although this is a misleading way to put it, since the Vast.ai bill includes bandwidth/hard-drive in that total and the GPU utilization was poor, so each ‘GPU-day’ is worth about a third less than with the 128px BigGAN (which had good GPU utilization), and the 2080tis were overkill. It should be possible to do much better with the same budget in the future.

The training command:

python train.py --model BigGANdeep --dataset D1K_hdf5 --parallel --shuffle --num_workers 16 \
    --batch_size 56 --num_G_accumulations 8 --num_D_accumulations 8 --num_D_steps 1 --G_lr 1e-4 \
    --D_lr 4e-4 --D_B2 0.999 --G_B2 0.999 --G_attn 64 --D_attn 64 --G_ch 128 --D_ch 128 \
    --G_depth 2 --D_depth 2 --G_nl inplace_relu --D_nl inplace_relu --SN_eps 1e-6 --BN_eps 1e-5 \
    --adam_eps 1e-6 --G_ortho 0.0 --G_shared --G_init ortho --D_init ortho --hier --dim_z 64 \
    --shared_dim 64 --ema --use_ema --G_eval_mode --test_every 200000 --sv_log_interval 1000 \
    --save_every 90 --num_best_copies 1 --num_save_copies 1 --seed 0 --no_fid \
    --num_inception_images 1 --augment --data_root ~/tmp --resume --experiment_name \
    BigGAN_D1K_hdf5_BigGANdeep_seed0_Gch128_Dch128_Gd2_Dd2_bs64_Glr1.0e-04_Dlr4.0e-04_Gnlinplace_relu_Dnlinplace_relu_Ginitortho_Dinitortho_Gattn64_Dattn64_Gshared_hier_ema

The system worked well, but BigGAN turns out to have serious performance bottlenecks (apparently in synchronizing batchnorm across GPUs) and did not make good use of the 8 GPUs, averaging GPU utilization of ~30% according to nvidia-smi. (On my 2×1080tis with the 128px model, GPU utilization was closer to 95%.) In retrospect, I probably should’ve switched to a less expensive instance like an 8×1080ti, which likely would’ve had similar throughput but cost less.

Training progressed well up until iterations #80–90k, when I began seeing signs of mode collapse:

Training montage of the 256px Danbooru2018-1K; semi-successful (note when EMA begins to be used for sampling images at ~8s, and the mode collapse at the end)

I was unable to increase the minibatch to more than ~500 because of the bugs, limiting what I could do against mode collapse, and I suspect the small minibatch was why mode collapse was happening in the first place. (Gokaslan tried the last checkpoint I saved—#95,160—with the same settings, ran it to #100,000 iterations, and experienced near-total mode collapse.)

The last checkpoint I saved from before mode collapse was #83,520, saved on 2019-05-28 after ~24 wallclock days (accounting for various crashes & time spent setting up & tweaking).

Random samples, interpolation grids (not videos), and class-conditional samples can be generated using sample.py; like train.py, it requires the exact architecture to be specified. I used the following command (many of the options are probably not necessary, but I didn’t know which):

python sample.py --model BigGANdeep --dataset D1K_hdf5 --parallel --shuffle --num_workers 16 \
    --batch_size 56 --num_G_accumulations 8 --num_D_accumulations 8 --num_D_steps 1 \
    --G_lr 1e-4 --D_lr 4e-4 --D_B2 0.999 --G_B2 0.999 --G_attn 64 --D_attn 64 --G_ch 128 \
    --D_ch 128 --G_depth 2 --D_depth 2 --G_nl inplace_relu --D_nl inplace_relu --SN_eps 1e-6 \
    --BN_eps 1e-5 --adam_eps 1e-6 --G_ortho 0.0 --G_shared --G_init ortho --D_init ortho --hier \
    --dim_z 64 --shared_dim 64 --ema --use_ema --G_eval_mode --test_every 200000 \
    --sv_log_interval 1000 --save_every 90 --num_best_copies 1 --num_save_copies 1 --seed 0 \
    --no_fid --num_inception_images 1 --skip_init --G_batch_size 32 --use_ema --G_eval_mode \
    --sample_random --sample_sheets --sample_interps --resume --experiment_name 256px

Random samples are already well-represented by the training montage. The interpolations look similar to StyleGAN interpolations. The class-conditional samples are the most fun to look at because one can look at specific characters without the need to retrain the entire model, which, while only taking a few hours at most, is a hassle.

256px Danbooru2018-1K Samples

Interpolation images and 5 character-specific random samples (Asuka, Holo, Rin, Chen, Ruri) for our 256px BigGAN trained on 1000 characters from Danbooru2018:

Random interpolation samples (256px BigGAN trained on 1000 Danbooru2018 character portraits)
Souryuu Asuka Langley (Neon Genesis Evangelion), class #825 random samples
Holo (Spice and Wolf), class #273 random samples
Rin Tohsaka (Fate/Stay Night), class #891 random samples
Yakumo Chen (Touhou), class #123 random samples
Ruri Hoshino (Martian Successor Nadesico), class #286 random samples

256px BigGAN Downloads

Model & sample downloads:

Evaluation

Sarcastic commentary on BigGAN quality by /u/Klockbox

The best results from the 128px BigGAN model look about as good as could be expected from 128px samples; the 256px model is fairly good, but suffers from much more noticeable artifacting than 512px StyleGAN, and cost $1373 (a 256px StyleGAN would have been closer to $400 on AWS). In BigGAN’s defense, it had clearly not converged yet and could have benefited from much more training and much larger minibatches, had that been possible. Qualitatively, looking at the more complex elements of samples, like hair ornaments/hats, I feel like BigGAN was doing a much better job of coping with complexity & fine detail than StyleGAN would have at a similar point.

However, training 512px portraits or whole-Danbooru images is infeasible at this point: while the cost might be only a few thousand dollars, the various bugs mean that it may not be possible to stably train to a useful quality. It’s a dilemma: on small or easy domains, StyleGAN is much faster (if not better); but on large or hard domains, mode collapse is too risky and endangers the big investment necessary to surpass StyleGAN.

To make BigGAN viable, it needs at least:

  • minibatch-size bugs fixed, to enable up to n = 2048 (or larger, as the gradient noise scale indicates)
  • 512px architectures defined, to allow transfer learning from the released Tensorflow 512px ImageNet model
  • optimization work to reduce overhead and allow reasonable GPU utilization on >2-GPU systems

With those done, it should be possible to train 512px portraits for <$1,000 and whole-Danbooru images for <$10,000. (Given the release of DeepDanbooru as a TensorFlow model, enabling an anime-specific perceptual loss, it would also be interesting to investigate applying pretraining to BigGAN.)

compare_gan

Danbooru2019+e621 256px BigGAN

As part of testing our modifications to compare_gan, including sampling from multiple datasets to increase n, using flooding (‘flood loss’) to stabilize training, and adding an additional (crude, limited) kind of self-supervised loss to the D, we trained several 256px BigGANs, initially on Danbooru2019 SFW but then adding in the TWDNE portraits & e621/e621-portraits partway through training. This destabilized the models greatly, but the flood loss appears to have stopped divergence and they gradually recovered. Run #39 did somewhat better than run #40; the self-supervised variants never recovered. This indicated to us that our self-supervised loss needed heavy revision (as indeed it did), and that flood loss was more valuable than expected, so we investigated it further; the important part appears (for GANs, anyway) to be the stop-loss aspect: halting training of G/D when it gets ‘too good’. Freezing models is an old GAN trick which is mostly not used post-WGAN, but appears useful for BigGAN, perhaps because of the spiky loss curve, especially early in training. (The quality of later BigGAN runs was much higher. We only discovered much later in 2020 that the compare_gan default behavior of resizing images to 256px by ‘random cropping’ was seriously destabilizing for both BigGAN & StyleGAN compared to ‘top cropping’, and only in 2021 did Shawn Presser discover what seems to be the fatal flaw of compare_gan’s implementation: an innocent-seeming omission of a ‘+1’ in the gamma parameter of an image-processing step.)
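For reference, the flooding idea (Ishida et al 2020) is tiny; a minimal sketch, assuming it is applied directly to the usual G/D losses with the d_flood = 0.2 / g_flood = 0.05 floors visible in the config below (an illustration, not the exact compare_gan patch):

def flood(loss, b):
    # once `loss` drops below the floor `b`, the gradient flips sign,
    # so the optimizer stops driving it lower ('stop-loss' behavior)
    return (loss - b).abs() + b

# d_loss = flood(d_loss, b=0.2)    # Discriminator floor (options.d_flood)
# g_loss = flood(g_loss, b=0.05)   # Generator floor (options.g_flood)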

We ran it for 607,250 iterations on a TPUv3-256 pod until 2020-05-15. Config:

{"dataset.name": "images_256", "resnet_biggan.Discriminator.blocks_with_attention": "B2",
"resnet_biggan.Discriminator.ch": 96, "resnet_biggan.Generator.blocks_with_attention": "B5",
"resnet_biggan.Generator.ch": 96, "resnet_biggan.Generator.plain_tanh": false, "ModularGAN.d_lr": 0.0005,
"ModularGAN.d_lr_mul": 3.0, "ModularGAN.ema_start_step": 4000, "ModularGAN.g_lr": 6.66e-05,
"ModularGAN.g_lr_mul": 1.0, "options.batch_size": 2048, "options.d_flood": 0.2,
"options.datasets": "gs://XYZ-euw4a/datasets/danbooru2019-s/danbooru2019-s-0*,gs://XYZ-euw4a/datasets/e621-s/e621-s-0*,
    gs://XYZ-euw4a/datasets/portraits/portraits-0*,gs://XYZ-euw4a/datasets/e621-portraits-s-512/e621-portraits-s-512-0*",
"options.g_flood": 0.05, "options.labels": "", "options.random_labels": true, "options.z_dim": 140,
"run_config.experimental_host_call_every_n_steps": 50, "run_config.keep_checkpoint_every_n_hours": 0.5,
"standardize_batch.use_cross_replica_mean": true, "TpuSummaries.save_image_steps": 50, "TpuSummaries.save_summary_steps": 1}

90 random EMA samples (untruncated) from the 256px BigGAN trained on Danbooru2019/anime-portraits/e621/e621-portraits.
Interpolation using HighCWu PaddlePaddle Google Colab notebook

The model is available for download:

rsync --verbose rsync://78.46.86.149:873/biggan/2020-05-18-spresser-biggan-256px-danbooruplus-run39-607250.tar.xz ./
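The archive unpacks to an ordinary TensorFlow checkpoint, so it can be inspected with stock TensorFlow utilities; the checkpoint prefix below (bigrun39b/model.ckpt-607250) is an assumption about the unpacked layout, so check the actual filenames:

# Sketch: list the variables & shapes stored in the downloaded compare_gan checkpoint.
# The checkpoint path is an assumed example, not guaranteed to match the archive.
import tensorflow as tf

reader = tf.train.load_checkpoint("bigrun39b/model.ckpt-607250")
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)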

compare_gan config:

$ cat bigrun39b/operative_config-603500.gin
# Parameters for AdamOptimizer:
AdamOptimizer.beta1 = 0.0
AdamOptimizer.beta2 = 0.999
AdamOptimizer.epsilon = 1e-08
AdamOptimizer.use_locking = False

# Parameters for batch_norm:
# None.

# Parameters for BigGanResNetBlock:
BigGanResNetBlock.add_shortcut = True

# Parameters for conditional_batch_norm:
conditional_batch_norm.use_bias = False

# Parameters for cross_replica_moments:
cross_replica_moments.group_size = None
cross_replica_moments.parallel = True

# Parameters for D:
D.batch_norm_fn = None
D.layer_norm = False
D.spectral_norm = True

# Parameters for dataset:
dataset.name = 'images_256'
dataset.seed = 547

# Parameters for resnet_biggan.Discriminator:
resnet_biggan.Discriminator.blocks_with_attention = 'B2'
resnet_biggan.Discriminator.ch = 96
resnet_biggan.Discriminator.channel_multipliers = None
resnet_biggan.Discriminator.project_y = True

# Parameters for G:
G.batch_norm_fn = @conditional_batch_norm
G.spectral_norm = True

# Parameters for resnet_biggan.Generator:
resnet_biggan.Generator.blocks_with_attention = 'B5'
resnet_biggan.Generator.ch = 96
resnet_biggan.Generator.channel_multipliers = None
resnet_biggan.Generator.embed_bias = False
resnet_biggan.Generator.embed_y = True
resnet_biggan.Generator.embed_y_dim = 128
resnet_biggan.Generator.embed_z = False
resnet_biggan.Generator.hierarchical_z = True
resnet_biggan.Generator.plain_tanh = False

# Parameters for hinge:
# None.

# Parameters for loss:
loss.fn = @hinge

# Parameters for ModularGAN:
ModularGAN.conditional = True
ModularGAN.d_lr = 0.0005
ModularGAN.d_lr_mul = 3.0
ModularGAN.d_optimizer_fn = @tf.train.AdamOptimizer
ModularGAN.deprecated_split_disc_calls = False
ModularGAN.ema_decay = 0.9999
ModularGAN.ema_start_step = 4000
ModularGAN.experimental_force_graph_unroll = False
ModularGAN.experimental_joint_gen_for_disc = False
ModularGAN.fit_label_distribution = False
ModularGAN.g_lr = 6.66e-05
ModularGAN.g_lr_mul = 1.0
ModularGAN.g_optimizer_fn = @tf.train.AdamOptimizer
ModularGAN.g_use_ema = True

# Parameters for no_penalty:
# None.

# Parameters for normal:
normal.mean = 0.0
normal.seed = None

# Parameters for options:
options.architecture = 'resnet_biggan_arch'
options.batch_size = 2048
options.d_flood = 0.2
options.datasets = \
    'gs://darnbooru-euw4a/datasets/danbooru2019-s/danbooru2019-s-0*,gs://darnbooru-euw4a/datasets/e621-s/e621-s-0*,gs://darnbooru-euw4a/datasets/portraits/portraits-0*,gs://darnbooru-euw4a/datasets/e621-portraits-s-512/e621-portraits-s-512-0*'
options.description = \
    'Describe your GIN config. (This appears in the tensorboard text tab.)'
options.disc_iters = 2
options.discriminator_normalization = None
options.g_flood = 0.05
options.gan_class = @ModularGAN
options.image_grid_height = 3
options.image_grid_resolution = 1024
options.image_grid_width = 3
options.labels = ''
options.lamba = 1
options.model_dir = 'gs://darnbooru-euw4a/runs/bigrun39b/'
options.num_classes = 1000
options.random_labels = True
options.training_steps = 250000
options.transpose_input = False
options.z_dim = 140

# Parameters for penalty:
penalty.fn = @no_penalty

# Parameters for replace_labels:
replace_labels.file_pattern = None

# Parameters for run_config:
run_config.experimental_host_call_every_n_steps = 50
run_config.iterations_per_loop = 250
run_config.keep_checkpoint_every_n_hours = 0.5
run_config.keep_checkpoint_max = 10
run_config.save_checkpoints_steps = 250
run_config.single_core = False
run_config.tf_random_seed = None

# Parameters for spectral_norm:
spectral_norm.epsilon = 1e-12
spectral_norm.singular_value = 'auto'

# Parameters for standardize_batch:
standardize_batch.decay = 0.9
standardize_batch.epsilon = 1e-05
standardize_batch.use_cross_replica_mean = True
standardize_batch.use_moving_averages = False

# Parameters for TpuSummaries:
TpuSummaries.save_image_steps = 50
TpuSummaries.save_summary_steps = 1

# Parameters for train_imagenet_transform:
train_imagenet_transform.crop_method = 'random'

# Parameters for weights:
weights.initializer = 'orthogonal'

# Parameters for z:
z.distribution_fn = @tf.random.normal
z.maxval = 1.0
z.minval = -1.0
z.stddev = 1.0

  1. ImageNet requires you to sign up & be approved to download from them, but 2 months later I had heard nothing back (and still have not as of January 2021). So I used the data from ILSVRC2012_img_train.tar (MD5: 1d675b47d978889d74fa0da5fadfb00e; 138GB) which I downloaded from the ImageNet LSVRC 2012 Training Set (Object Detection) torrent.↩︎

  2. Danbooru can classify the same character under multiple tags: for example, Sailor Moon characters are tagged under their “Sailor X” name for images of their transformed version, and their real names for ‘civilian’ images (eg ‘Sailor Venus’ or ‘Cure Moonlight’, the former of which I merged with ‘Aino Minako’). Some popular franchises have many variants of each character: the Fate franchise, especially with the success of Fate/Grand Order, is a particular offender, with quite a few variants of characters like Saber.↩︎

  3. One would think it would, but I asked Brock and apparently it doesn’t help to occasionally initialize from the EMA snapshots. EMA is a mysterious thing.↩︎

  4. As far as I can tell, it has something to do with the dataloader code in utils.py: the length calculation and the iterator do something odd to adjust for previous training. The net effect is that running with a fixed minibatch accumulation is fine, and reducing the number of accumulations merely underruns the dataloader, but increasing the number of accumulations (once you’ve already trained far enough, percentage-wise) immediately flips the length negative, making indexing into the dataloader impossible and crashing training. Unfortunately, I only ever want to increase the minibatch accumulation… I tried to fix it but the logic is too convoluted for me to follow.↩︎

  5. Mirror: rsync --verbose rsync://78.46.86.149:873/biggan/2019-05-28-biggan-danbooru2018-snapshot-83520.tar.xz ./↩︎

  6. Mirror: rsync --verbose rsync://78.46.86.149:873/biggan/2019-06-04-biggan-256px-danbooru20181k-83520-randomsamples.tar ./↩︎