2018 News

Annual summary of 2018 Gwern.net newsletters, selecting my best writings, the best 2018 links by topic, and the best books/movies/anime I saw in 2018, with some general discussion of the year.
2018-12-08–2021-02-28 · finished · certainty: log · importance: 0

This is the annual summary of the Gwern.net newsletter, summarizing the best of the monthly 2018 newsletters:

  1. end of year summary

Previous annual newsletters: 2017, 2016, 2015.


2018 went well, with much interesting news and several stimulating trips. My 2018 writings included:

  1. Danbooru2017: a new dataset of 2.94m anime images (1.9tb) with 77.5m descriptive tags

  2. Embryo selection: overview of major current approaches for complex-trait genetic engineering, FAQ, multi-stage selection, chromosome/gamete selection, optimal search of batches, & robustness to error in utility weights

  3. reviews:

Site traffic (more detailed breakdown) was again up as compared with the year before: 2018 saw 736,486 pageviews by 332,993 unique users (vs 551,635 by 265,836 in 2017).
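For concreteness, the year-over-year growth implied by those figures can be computed directly (a trivial sketch using only the numbers quoted above):

```python
# Year-over-year growth in site traffic, using the figures quoted above.
pageviews = {2017: 551_635, 2018: 736_486}
users     = {2017: 265_836, 2018: 332_993}

def yoy_growth_pct(series: dict) -> float:
    """Percent growth from 2017 to 2018."""
    return 100.0 * (series[2018] - series[2017]) / series[2017]

print(f"pageviews: +{yoy_growth_pct(pageviews):.1f}%")  # ≈ +33.5%
print(f"users:     +{yoy_growth_pct(users):.1f}%")      # ≈ +25.3%
```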



Overall, 2018 was much like 2017, but more so. In all of AI, genetics, VR, Bitcoin, and general culture/politics, the trends of 2017 continued through 2018 or even accelerated.

AI: In 2018, the DL revolution came for NLP. Convolutions, attention, and bigger compute created ever-larger NNs which could then kick benchmark ass and take names. Additional seeds were planted for logical/relational/numerical reasoning (wasn’t logic another one of those things deep learning would never be able to do…?).

Elsewhere, reinforcement learning was hot (eg the RL subreddit traffic stats increased severalfold over 2017, which itself had increased severalfold), with Go followed by human-level DoTA 2. OA5 was an amazing achievement given how complex DoTA is, with fog of war, team tactics, and a far larger state space, integrating the full spectrum of strategy from twitch tactics up to long-term strategy and pre-selection of units. (Given OA5’s progress, I was disappointed to see minimal DM progress on the StarCraft II front in 2018, but it turned out I just needed more patience.) DRL of course enjoyed additional progress, notably in robotics: sample-efficient robotic control and learning from observation/imitation are closer than ever.

Thinking a little more broadly about where DL/DRL has seen successes, the rise of DL has been the fall of symbolic AI.

No one is surprised in the least these days when a computer masters some complex symbolic task like chess or Go; we are surprised by the details, like it happening about 10 years before many would’ve predicted, or the Go player being trainable overnight in wallclock time, or the same architecture being applicable with minimal modification to yield a top chess engine. For all the fuss over AlphaGo, no one paying attention was really surprised. If you went back 10 years and told someone, ‘by the way, by 2030, both Go and Arimaa can be played at a human level by an AI’, they’d shrug.

People are much more surprised to see DoTA 2 agents, or Google Waymo cars driving around entire metropolitan areas, or generation of photorealistic faces or totally realistic voices. The progress in robotics has also been exciting to anyone paying attention to the space: the DRL approaches are getting ever better, more sample-efficient, and better at imitation. I don’t know how many blue-collar workers they will put out of work—even if the software is solved, the robotic hardware is still expensive! But factories will be salivating over them, I’m sure. (The future of self-driving cars is in considerably more doubt.)

A standard-issue minimum-wage Homo sapiens worker-unit has a lot of advantages. I expect there will be a lot of blue-collar jobs for a long time to come, for those who want them. But they’ll be increasingly crummy jobs, and this will make a lot of people unhappy. I think of Turchin’s ‘elite overproduction’ concept: how much of political strife now is simply that we’ve overeducated so many people, in degrees that were almost entirely signaling-based and not of intrinsic value in the real world, with no slots available for them, and now their expectations & lack of useful skills are colliding with reality? In political science, they say revolutions happen not when things are going badly, but when things are going not as well as everyone expected.

We’re at an interesting point—as LeCun put it, I think, ‘anything a human can do with <1s of thought, deep learning can do now’, while older symbolic methods can outperform humans in a number of domains where they use >>1s of thought. As NNs get bigger and the training methods and architectures and datasets are refined, the ‘<1s’ will gradually expand. So there’s a pincer movement going on, and sometimes hybrid approaches can crack a human redoubt (eg AlphaGo combined hoary tree search for long-term >>1s thought with CNNs for the intuitive, instantaneous <1s gut-reaction evaluation of a board, and together they could learn to be superhuman). As long as what humans do with <1s of thought was out of reach, as long as the ‘simple’ primitives of vision and movement couldn’t be handled, the symbol grounding and frame problems were hopeless. “How does your design turn a photo of a cat into the symbol CAT which is useful for inference/planning/learning, exactly?” But now we have a way to reliably go from chaotic real-world data to rich semantic numeric encodings like vector embeddings. That’s why people are so excited about the future of DL.

The biggest disappointment in AI, by far, was self-driving cars.

2018 was going to be the year of self-driving cars, as Waymo promised all & sundry a full public launch and the start of scaling out, and every report of expensive deals & investments bade fair to launch, but the launch kept not happening—and then the Uber pedestrian fatality happened. This fatality was the result of a cascade of internal decisions & pressure to put an unstable, erratic, known-dangerous self-driving car on the road, then deliberately disable its emergency braking, not provide any alerts to the safety drivers, and then remove half the safety drivers, resulting in a fatality under what should have been near-ideal circumstances—and indeed, the software detected the pedestrian long in advance and would have braked if it had been allowed (“Preliminary NTSB Report: Highway HWY18MH010”); particularly egregious given Uber’s past incidents (like covering up running a red light). Comparisons to Challenger come to mind.

The incident should not have affected perception of self-driving cars—the fact that a far-below-SOTA system is unsafe when its brakes are deliberately disabled so it cannot avoid a foreseen accident tells us nothing about the safety of the best self-driving cars. That self-driving cars are dangerous when done badly should not come as news to anyone or change any beliefs, but it blackened perceptions of self-driving cars nevertheless. Perhaps because of it, the promised Waymo launch was delayed all the way to December and then was purely a ‘paper launch’, with no discernible difference from its previous small-scale operations.

Which leads me to question why there was such a credible buildup beforehand of vehicles & personnel & deals if a paper launch was what was always intended; did the Uber incident trigger an internal review and a major re-evaluation of how capable & safe their system really is, and a resort to a paper launch to save face? What went wrong, not at Uber but at Waymo? As Waymo goes, so goes the sector.

2018 in genetics saw many of the fruits of 2017 begin to mature: the usual large-scale GWASes continued to come out, including both SSGAC3 (Lee et al 2018) and an immediate boost from better analysis in Allegrini et al 2018 (as I predicted last year); uses of PGSes in other studies, such as the forbidden examination of life-outcome differences predicted by IQ/EDU PGSes, are increasingly routine. In particular, medical PGSes are now reaching levels of clinical utility whose value even doctors can see.

This trend need not peter out, as the oncoming datasets keep getting more enormous; consumer DTC genotyping, extrapolating from announced sales numbers, has reached staggering scale, potentially into the hundreds of millions, and there are various announcements like the UKBB aiming for 5 million whole-genomes, which would’ve been bonkers even a few years ago. (Why now? Prices have fallen enough. Perhaps an enterprising journalist could dig into why Illumina could keep WGS prices so high for so long…) The promised land is being reached.

The drumbeat of CRISPR successes reached a peak in the case of He Jiankui, who—completely out of the blue—presented the world with the fait accompli of CRISPR babies. The most striking aspect is the tremendous backlash: not just from Westerners (which is to be expected, and is rather hypocritical of many of the geneticists involved, who talked previously of being worried about potential backlash from premature CRISPR use and then, when that happened, did their level best to make the backlash happen by competing for the most hyperbolic condemnation), but also from China. Almost as striking was how quickly commentators settled on a Narrative, interpreting everything as negatively as possible even where that required flatly ignoring reporting (claiming he launched a PR blitz, when the AP scooped him), strained credulity (how can we believe the hospital’s face-saving claims that Jiankui ‘forged’ everything, when they were so effusive before the backlash began? Or any government statements coming out of China, of all places, about an indefinitely imprisoned scientist?), or meant citing the most dubious possible research (like candidate-gene or animal-model research on CCR5).

Regardless, the taboo has been broken. Only time will tell if this will spur more rigorously-conducted CRISPR research to do it right, or will set back the field for decades & be an example of the unilateralist’s curse. I am cautiously optimistic that it will be the former.

Genome synthesis work appears to continue to roll along, although nothing of major note occurred in 2018. Probably the most interesting area in terms of fundamental work was the progress on both mouse & human gametogenesis and stem-cell control. This is the key enabling technology for both massive embryo selection (breaking the egg bottleneck by allowing generation of hundreds or thousands of embryos and thus multiple-SD gains from selection) and then IES (Iterated Embryo Selection).
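Why do hundreds or thousands of embryos matter? A deliberately simplified Monte Carlo sketch makes the point: select the embryo with the highest polygenic score out of a batch of n, where the score explains some fraction of trait variance. (This toy model is my illustration, not a figure from the newsletter; it treats sibling scores as iid standard normals and ignores the reduced PGS variance among siblings, so the absolute numbers are optimistic, but the logarithmic scaling with batch size is the real phenomenon.)

```python
import random
import statistics

def selection_gain_sd(n_embryos: int, variance_explained: float,
                      trials: int = 20_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the expected trait gain (in population SDs)
    from picking the highest-scoring embryo out of `n_embryos`.
    Toy model: polygenic scores are iid standard normals; the gain on the
    trait scale is the winning score times sqrt(variance explained)."""
    rng = random.Random(seed)
    best_scores = [max(rng.gauss(0.0, 1.0) for _ in range(n_embryos))
                   for _ in range(trials)]
    return statistics.mean(best_scores) * variance_explained ** 0.5

# Gains grow only ~logarithmically with batch size, which is why breaking
# the egg bottleneck (5 eggs -> thousands of embryos) changes the picture.
for n in (5, 100, 1000):
    print(n, round(selection_gain_sd(n, 0.10), 2))
```

Under this sketch, going from a handful of embryos to a thousand roughly triples the expected gain at a fixed PGS, and stronger predictors scale the whole curve up by the square root of the variance explained.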

VR continued steady gradual growth; with no major new hardware releases (the Oculus Go doesn’t count), there was not much to tell beyond the Steam statistics or Sony announcing PSVR sales >3m. (I did have an opportunity to play the popular Beat Saber with my mother & sister; all of us enjoyed it.) More interesting will be the 2019 launch of the Oculus Quest, which comes close to the hypothetical mass-consumer breakthrough VR headset: mobile/no wires, with a resolution boost and full hand/position tracking, in a reasonably-priced package, with a promised large library of established VR games ported to it. It lacks foveated rendering or retina resolution, but otherwise seems like a major upgrade in terms of mass appeal; if it continues to eke out modest sales, that will be consistent with the narrative that VR is on the long slow adoption slog similar to early PCs or the Internet (instantly appealing & clearly the future to the early adopters who try it, but still taking decades to achieve any mass penetration) rather than post-iPhone smartphones.

Bitcoin: the long slide from the bubble continued, to my considerable schadenfreude (2017 but more so…). The most interesting story of the year for me was the reasonably successful launch of the long-awaited Augur prediction market, which had no ‘DAO moment’, and the overall mechanism appears to be working. Otherwise, not much to remark on.

A short note on politics: I maintain my 2017 comments (but more so…). For all the emotion invested in the ‘great awokening’ and the continued Girardian scapegoating/backlash, now 3 years in, it is increasingly clear that Donald Trump’s presidency has been absurdly overrated in importance. Despite his ability to do substantial damage like launching trade wars or distortionary tax cuts, that is hardly unprecedented, as most presidents do severe economic damage of some form or another; while other blunders like his ineffectual North Korea policy merely continue a long history of ineffective policy (and were inevitable once the South Korean population chose to elect Moon Jae-in). Every minute you spend obsessing over stuff like the Mueller Report has been wasted: Trump remains what New Yorkers have always known him to be—an incompetent narcissist.

Let’s try to focus more on long-term issues, such as global economic growth or genetic engineering.



  1. McNamara’s Folly: The Use of Low-IQ Troops in the Vietnam War, Gregory 2015
  2. Bad Blood: Secrets and Lies in a Silicon Valley Startup, Carreyrou 2018
  3. The Vaccinators: Smallpox, Medical Knowledge, and the ‘Opening’ of Japan, Jannetta 2007 (review)
  4. Like Engendr’ing Like: Heredity and Animal Breeding in Early Modern England, Russell 1986
  5. Cat Sense: How the New Feline Science Can Make You a Better Friend to Your Pet, Bradshaw 2013
  6. , Roland & Shiman 2002
  7. The Operations Evaluation Group: A History of Naval Operations Analysis, Tidman 1984


  1. Fujiwara Teika’s Hundred-Poem Sequence of the Shōji Era, 1200: A Complete Translation, with Introduction and Commentary, Brower 1978


Nonfiction movies:

  1. , 2016 (re­view)


  1. (review)
  2. (2000)
  3. (review)
  4. (review)


  1. , seasons 1–8
  2. (review)