2016 News

newsletter
2016-12-31–2021-01-11 · finished · certainty: log · importance: 0


This is the 2016 summary edition of the Gwern.net newsletter, summarizing the best of the monthly 2016 newsletters.

Previously: .

Writings

Despite taking two long trips and some personal troubles (plumbing, an epic laptop failure, & law enforcement), 2016 was a much better year for my statistics & writing than 2015:

  1. Wikipedia articles on &
  2. Armstrong’s control problem: Reinforce.js demo

Site traffic was healthy: 635,123 pageviews by 312,659 users.

Media

Overview

Continuing the 2015 trends, 2016 was a banner year for AI & genetics.

In AI, demonstrating the potential for rapid advance, AlphaGo went from low professional level as of October 2015 to world champion level, decisively crushing Lee Sedol 4-1, and, just when everyone had forgotten, a refined (presumably pure self-play) version of AlphaGo went 60-0 in blitz matches online against many of the top Go players (including Ke Jie). The translation RNNs finally made their long-awaited appearance in commercial production with Google Translate, making for the largest jump in translation quality in decades and bringing many translation pairs up to surprisingly high quality (even Japanese⟺English translations are now semi-comprehensible, as opposed to the status quo of total gibberish); combined with the rapid progress in voice transcription and the surprising results of human-level lipreading, one can now imagine a NN-powered Babelfish (which, combined with HUDs, could be revolutionary for the deaf & hearing-impaired). Generative adversarial networks (GANs) remained a central topic of AI research, with better theoretical understanding (linking them to reinforcement learning) and many tweaks and incremental refinements increasing the size of feasible generated images (eg StackGAN’s large bird/flower image generation capability); however, GANs have not yet delivered meaningful gains on any applied task and remain a solution in search of a problem, so that is something to hope for in 2017: a demonstration that the unsupervised or generative aspects of GANs can be usefully employed for planning or something similar. Perhaps the most exciting work in 2016 was the long-term work on architecture: providing large-scale memory mechanisms (in the form of efficient external memory, or encoded into the weights of large expanding or sharded NNs), learning to train large-scale NNs (“synthetic gradients”), and, in a particularly surprising set of papers, demonstrating that NNs + reinforcement learning can efficiently learn how to design NN architectures & units. (This was not something anyone doubted could be done, but previous RL work suggested that it was years away & that no one could manage it without whole GPU farms; but as far as Google was concerned… “You see, I told you it couldn’t be done without turning the whole country into a factory. You have done just that.”) Since NNs do not decay like biological neurons, are not hard-limited by skull volume or calories, and since all tasks share mutual information & form informative priors for each other critical to sample-efficient learning, there is a lot of inherent pressure towards large, growing, multi-task NNs which do transfer learning & can optimize at multiple levels end-to-end; as GPU RAM limits lift, we’ll see more of these.
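To make the GAN idea above concrete, here is a minimal sketch (my own toy illustration in PyTorch, not the setup of any paper mentioned; the 1D Gaussian “dataset”, network sizes, and hyperparameters are all assumptions): a generator learns to fool a discriminator, which simultaneously learns to tell real samples from generated ones.

```python
# Toy GAN sketch: learn to generate samples from N(3, 0.5).
# Everything here (sizes, learning rates, data) is illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))                   # generated data

    # Discriminator step: push real -> 1, fake -> 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make the discriminator call fakes real (non-saturating loss).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # drifts towards ~3.0 as G improves
```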
Aside from the important work in the “NNs all the way down” vein, reinforcement learning grew in importance: it is increasingly common to use RL methods to control memory or network components, to interact with an environment (often broadly interpreted as anything which can be turned into a tree, which goes far beyond games like Go or chess & includes theorem proving or program optimization), or to learn to optimize a non-differentiable reward/loss function, and I am excited to see planning re-emerge as a theme after the dominance of model-free methods over the past 3 years; we will doubtless see more of that in 2017, especially as some of the architectural tweaks from 2016 (some of which claim as much as an order of magnitude improvement in ALE sample-efficiency) get tried out & reused.
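To illustrate what “learning to optimize a non-differentiable reward” means in practice, here is a minimal REINFORCE sketch (again a toy of my own, in PyTorch, with a made-up black-box reward rather than any real control or architecture-search task): the score-function estimator, r · ∇ log π(a), lets gradients flow even though the reward itself cannot be differentiated, and is the same basic trick underlying the RL-based control of network components mentioned above.

```python
# Toy REINFORCE sketch: a 10-way categorical policy learns, from reward alone,
# that action 7 pays off. The reward function is a non-differentiable black box.
import torch

logits = torch.zeros(10, requires_grad=True)        # policy parameters
opt = torch.optim.Adam([logits], lr=0.1)

def reward(action: int) -> float:                   # black-box, non-differentiable
    return 1.0 if action == 7 else 0.0

for step in range(500):
    dist = torch.distributions.Categorical(logits=logits)
    a = dist.sample()                               # sample an action
    r = reward(a.item())                            # query the black box
    loss = -dist.log_prob(a) * r                    # score-function gradient: r * d(log pi)/d(theta)
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, -1).argmax().item())    # converges to action 7
```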

In genetics, the growth of the UK Biobank and the introduction of LD score regression & other summary-statistic-only methods continued driving large-scale results; the study of human genetic correlations made an absurd amount of progress in 2016, demonstrating shared genetic influences on countless phenotypic traits and pervasive intercorrelations of good traits and disease traits, respectively. Detecting recent human evolution has been difficult due to the lack of ancient DNA to compare with, but the supply of that has grown steadily, permitting some specific examples to be nailed down, and a new method based on contemporary whole genomes may blow the area wide open, as whole genomes have recently crossed the $1,000 mark and, in coming years, scientific projects & medical biobanks will shift over to whole genomes. Another possible field explosion is “genome synthesis”: I was astonished to learn that it is now feasible to synthesize from scratch entire chromosomes of arbitrary design, and that a human genome could potentially be synthesized for ~$1,000,000,000 (which would render totally obsolete any considerations of embryo selection/CRISPR/iterated embryo selection), with an active advocacy effort for a genome-synthesis project to be launched. 2017 will bring further discoveries of how humans have adapted to local environments and their societies over the past centuries & millennia. Honorable mentions should also go to the steady (and disquieting) progress towards iterated embryo selection, and a scattering of results from the continuously-growing-sample-size GWASes: as predicted, the education/intelligence hits have increased drastically as sample sizes increased, and the historically difficult targets of personality & depression have finally yielded some more hits. One particularly intriguing GWAS focused on violence & criminal behavior with good results, so that trait too will yield to further study. Past GWASes continued to be applied; the results of Belsky et al 2016 will come as no surprise, but will frustrate the critics who insist that all non-disease results are methodological artifacts or merely reflect population structure. CRISPR progress continues as expected, with the first uses in humans in 2016 by Chinese & American scientists.
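As a toy illustration of how GWAS summary statistics get applied in studies like Belsky et al 2016: a polygenic score is simply the effect-size-weighted sum of a person’s trait-associated allele counts. The genotypes and effect sizes below are simulated stand-ins, not any real study’s weights.

```python
# Toy polygenic score: score_i = sum_j genotype_ij * beta_j.
# All data here is simulated; real analyses use GWAS summary statistics.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1000, 500
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))  # 0/1/2 minor-allele counts
betas = rng.normal(0, 0.05, size=n_snps)                   # per-SNP effect sizes (stand-in for GWAS betas)

scores = genotypes @ betas                                 # one polygenic score per person
print(scores.mean(), scores.std())
```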

Less cosmically, one of the big tech stories of 2016 was the rollout of consumer VR: successful but not epochal, clearly the (or at least, a) future of gaming but with no killer app. Oculus had a rocky launch, caused by its decision to ship prematurely without motion controls, which the launch of HTC/Valve’s Vive made clear are not an optional feature for truly compelling VR (my own brief experience with an Oculus Rift at a Best Buy demo left me longing for hand tracking after just 20 seconds in The Climb); the lack of motion controls & compelling content made for a slow start. The Vive had a better launch with excellent motion controls & tracking; the comparable Oculus Touch controls only really shipped half a year later in December, demonstrating why Oculus launched when it did: it was either bite the bullet of a bad launch, or let the Vive rule unopposed. Somewhat to my surprise, Sony’s quiet Project Morpheus launched successfully as PlayStation VR, making for 3 high-quality competing VR headsets/ecosystems. (Sony had not seemed serious about the whole VR thing, so I doubted it would launch in 2016 or at all.) While most gamers, much less most people, do not feel a burning need to get into VR at the moment (myself included, as I think the screen resolutions need improvement), what is notable is what didn’t happen: we did not see widespread reports of vomiting, of people swearing off VR forever, of VR being discarded as a 3D-TV-like gimmick, of developers flooding in & getting burned, of sales plummeting well below the million mark, of the initial trickle of games sputtering out… in short, any of the things that the naysayers predicted would doom consumer VR. The worst that the early adopters, critics, and regular people have to say is that there are not enough good games (decreasingly true by the end of 2016), that the headsets and GPUs cost too much (true, but predictably fixed as time passes), that the Oculus Rift lacked motion controls (fixed as of December 2016), and that the resolution is too low / the devices are wired / they require external tracking (likely improved substantially in the second generation, possibly fixed entirely by the third or fourth): nothing fatal or important, in other words. So it looks like VR is here to stay! It’s nice that at least one part of my childhood’s future has finally happened.

Books

Nonfiction:

  1. Don’t Sleep, There Are Snakes: Life and Language in the Amazonian Jungle, 2009 (on the Pirahã)
  2. A Life of Sir Francis Galton
  3. The Sports Gene, Epstein
  4. Fortune’s Formula, Poundstone 2005
  5. , 1975
  6. The Theory of Special Operations, 1992
  7. Titan: The Life of John D. Rockefeller, Sr., Chernow
  8. The Genius Factory, Plotz 2005
  9. The Riddle of the Labyrinth, Fox

Fiction:

  1. Poems of Gerard Manley Hopkins, Gerard Manley Hopkins

TV/movies

Nonfiction movies:

Fiction:

Anime:

  1. (review)
  2. (review)