Annual summary of 2018 gwern.net newsletters, selecting my best writings, the best 2018 links by topic, and the best books/movies/anime I saw in 2018, with some general discussion of the year.
2018-12-08–2020-09-13 finished certainty: log importance: 0
- end of year summary
2018 went well, with much interesting news and several stimulating trips. My 2018 writings included:
- Embryo selection: Overview of major current approaches for complex-trait genetic engineering, FAQ, multi-stage selection, chromosome/gamete selection, optimal search of batches, & robustness to error in utility weights
Overall, 2018 was much like 2017 was, but more so. In all of AI, genetics, VR, Bitcoin, and general culture/politics, the trends of 2017 continued through 2018 or even accelerated.
AI: In 2018, the DL revolution came for NLP. Convolutions, attention, and bigger compute created ever larger NNs which could then kick benchmark ass and take names. Additional seeds were planted for logical/relational/numerical reasoning (wasn’t logic another one of those things deep learning would never be able to do…?).
Elsewhere, reinforcement learning was hot (eg the RL subreddit's traffic stats increased severalfold over 2017, which itself had increased severalfold over 2016), with Go followed by human-level Dota 2. OA5 was an amazing achievement given how complex Dota 2 is, with fog of war, team tactics, and a far larger state space, integrating the full spectrum of strategy from twitch tactics up to long-term planning and pre-selection of units. (Given OA5's progress, I was disappointed to see minimal DM progress on the StarCraft II front in 2018, but it turned out I just needed more patience.) DRL of course enjoyed additional progress elsewhere, notably in robotics: sample-efficient robotic control and learning from observations/imitation are closer than ever.
Thinking a little more broadly about where DL/DRL has seen successes, the rise of DL has been the fall of Moravec’s paradox.
No one is surprised in the least these days when a computer masters some complex symbolic task like chess or Go; we are surprised only by the details, like it happening about 10 years before many would've predicted, or that the Go player can be trained overnight in wallclock time, or that the same architecture can be applied with minimal modification to yield a top chess engine. For all the fuss over AlphaGo, no one paying attention was really surprised. If you went back 10 years and told someone, 'by the way, by 2030, both Go and Arimaa will be played at a human level by an AI', they'd shrug.
People are much more surprised to see Dota 2 agents, or Google Waymo cars driving around entire metropolitan areas, or generation of photorealistic faces or totally realistic voices. The progress in robotics has also been exciting to anyone paying attention to the space: the DRL approaches are getting ever more sample-efficient and better at imitation. I don't know how many blue-collar workers they will put out of work—even if software is solved, the robotic hardware is still expensive! But factories will be salivating over them, I'm sure. (The future of self-driving cars is in considerably more doubt.)
A standard-issue minimum-wage Homo sapiens worker-unit has a lot of advantages. I expect there will be a lot of blue-collar jobs for a long time to come, for those who want them. But they'll be increasingly crummy jobs. This will make a lot of people unhappy. I think of Turchin's 'elite overproduction' concept—how much of today's political strife is simply that we've overeducated so many people, in degrees that were almost entirely signaling-based and of no intrinsic value in the real world, for whom no slots were available, and now their expectations & lack of useful skills are colliding with reality? In political science, they say revolutions happen not when things are going badly, but when things are going not as well as everyone expected.
We’re at an interesting point—as LeCun put it, I think, ‘anything a human can do with <1s of thought, deep learning can do now’, while older symbolic methods can outperform humans in a number of domains where they use >>1s of thought. As NNs get bigger and the training methods and architectures and datasets are refined, the ‘<1s’ will gradually expand. So there’s a pincer movement going on, and sometimes hybrid approaches can crack a human redoubt (eg AlphaGo combined the hoary tree search for long-term >>1s thought with CNNs for the intuitive instantaneous gut-reaction evaluation of a board <1s, and together they could learn to be superhuman). As long as what humans do with <1s of thought was out of reach, as long as the ‘simple’ primitives of vision and movement couldn’t be handled, the symbol grounding and frame problems were hopeless. “How does your design turn a photo of a cat into the symbol CAT which is useful for inference/planning/learning, exactly?” But now we have a way to reliably go from chaotic real-world data to rich semantic numeric encodings like vector embeddings. That’s why people are so excited about the future of DL.
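The point about 'rich semantic numeric encodings' can be made concrete with a toy sketch. The vectors below are made up for illustration (a real system would take them from a trained NN's penultimate layer), but they show the key property: once messy inputs are mapped into an embedding space, downstream inference can be as simple as comparing distances.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings; in a real pipeline these would be
# the outputs of a trained encoder run on raw images.
cat_photo  = np.array([0.9, 0.1, 0.8, 0.0])
cat_sketch = np.array([0.8, 0.2, 0.7, 0.1])
car_photo  = np.array([0.1, 0.9, 0.0, 0.8])

# Semantically similar inputs land near each other, so 'is this a CAT?'
# reduces to a nearest-neighbor comparison rather than symbol grounding.
assert cosine(cat_photo, cat_sketch) > cosine(cat_photo, car_photo)
```

This is the sense in which DL dissolves the symbol-grounding problem: the symbol CAT becomes a region of embedding space reachable by ordinary arithmetic.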
The biggest disappointment, by far, in AI was self-driving cars.
2018 was going to be the year of self-driving cars: Waymo promised all & sundry a full public launch and the start of scaling out, and every report of expensive deals & investments bade fair for a launch, but the launch kept not happening—and then the Uber pedestrian fatality happened. That fatality was the result of a cascade of internal decisions & pressure: putting an unstable, erratic, known-dangerous self-driving car on the road, deliberately disabling its emergency braking, providing no alerts to the safety drivers, and then removing half the safety drivers, producing a fatality under what should have been near-ideal circumstances; indeed, the software detected the pedestrian long in advance and would have braked had it been allowed ("Preliminary NTSB Report: Highway HWY18MH010"). This is particularly egregious given Uber's past incidents (like covering up running a red light). Comparisons to Challenger come to mind.
The incident should not have affected perception of self-driving cars—the fact that a far-below-SOTA system is unsafe when its brakes are deliberately disabled so it cannot avoid a foreseen accident tells us nothing about the safety of the best self-driving cars. That self-driving cars are dangerous when done badly should not come as news to anyone or change any beliefs, but it blackened perceptions of self-driving cars nevertheless. Perhaps because of it, the promised Waymo launch was delayed all the way to December and then was purely a ‘paper launch’, with no discernible difference from its previous small-scale operations.
Which leads me to question the credible buildup beforehand of vehicles & personnel & deals: if a paper launch was what was always intended, why all that? Did the Uber incident trigger an internal review, a major re-evaluation of how capable & safe their system really is, and a resort to a paper launch to save face? What went wrong, not at Uber but at Waymo? As Waymo goes, so goes the sector.
2018 for genetics saw many of the fruits of 2017 begin to mature: the usual large-scale GWASes continued to come out, including both SSGAC3 (Lee et al 2018) and an immediate boost by better analysis in Allegrini et al 2018 (as I predicted last year); uses of PGSes in other studies, such as the forbidden examination of life outcome differences predicted by IQ/EDU PGSes, are increasingly routine. In particular, medical PGSes are now reaching levels of clinical utility that even doctors can see their value.
This trend need not peter out, as the oncoming datasets keep getting more enormous: consumer DTC genotyping, extrapolating from announced sales numbers, has reached staggering levels, potentially into the hundreds of millions, and there are various announcements like the UKBB aiming for 5 million whole-genomes, which would've been bonkers even a few years ago. (Why now? Prices have fallen enough. Perhaps an enterprising journalist could dig into how Illumina kept WGS prices so high for so long…) The promised land is being reached.
The drumbeat of CRISPR successes reached a peak in the case of He Jiankui, who—completely out of the blue—presented the world with the fait accompli of CRISPR babies. The most striking aspect is the tremendous backlash: not just from Westerners (which is to be expected, and is rather hypocritical of many of the geneticists involved, who talked previously of being worried about potential backlash from premature CRISPR use and then, when that happened, did their level best to make the backlash happen by competing for the most hyperbolic condemnation), but also from China. Almost as striking was how quickly commentators settled on a Narrative, interpreting everything as negatively as possible even where that required flatly ignoring reporting (claiming he launched a PR blitz, when the AP scooped him), straining credulity (how can we believe the hospital's face-saving claims that Jiankui 'forged' everything, when they were so effusive before the backlash began? Or any government statements coming out of China, of all places, about an indefinitely imprisoned scientist?), or citing the most dubious possible research (like candidate-gene or animal model research on CCR5).
Regardless, the taboo has been broken. Only time will tell if this will spur more rigorously-conducted CRISPR research to do it right, or will set back the field for decades & be an example of the unilateralist’s curse. I am cautiously optimistic that it will be the former.
Genome synthesis work appears to continue to roll along, although nothing of major note occurred in 2018. Probably the most interesting area in terms of fundamental work was the progress on both mouse & human gametogenesis and stem cell control. This is the key enabling technology for both massive embryo selection (breaking the egg bottleneck by allowing generation of hundreds or thousands of embryos and thus multiple-SD gains from selection) and then IES (Iterated Embryo Selection).
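The 'multiple-SD gains' claim follows from simple order statistics: selecting the best of n embryos by a noisy polygenic score yields an expected true-trait gain of roughly r × E[max of n standard normals], where r is the score's accuracy. A minimal Monte Carlo sketch (the r = 0.3 accuracy is an illustrative assumption, and embryos are treated as independent draws, ignoring the reduced within-family variance among sibling embryos, which shrinks the realistic gain):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n, r=0.3, trials=50_000):
    """Average true-trait gain (in SD units) from selecting, out of n
    embryos, the one with the highest polygenic score, where the score
    correlates r with the true trait value."""
    true = rng.standard_normal((trials, n))                # true genetic values
    score = r * true + np.sqrt(1 - r**2) * rng.standard_normal((trials, n))
    picked = np.argmax(score, axis=1)                      # select on the score...
    return float(true[np.arange(trials), picked].mean())   # ...evaluate the truth

# Gains grow with batch size, though with diminishing returns, which is
# why breaking the egg bottleneck (n in the hundreds or thousands) matters.
for n in (1, 10, 100):
    print(n, round(expected_gain(n), 2))
```

With r ≈ 0.3 and n = 10 this gives roughly +0.46 SD before the sibling-correlation correction; the embryo-selection overview listed above works through the realistic adjustments.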
VR continued steady gradual growth; with no major new hardware releases (Oculus Go doesn't count), there was not much to tell beyond the Steam statistics or Sony announcing PSVR sales >3m. (I did have an opportunity to play the popular Beat Saber with my mother & sister; all of us enjoyed it.) More interesting will be the 2019 launch of Oculus Quest, which comes close to the hypothetical mass-consumer breakthrough VR headset: mobile/no wires, with a resolution boost and full hand/position tracking, in a reasonably priced package, with a promised large library of established VR games ported to it. It lacks foveated rendering or retina resolution, but otherwise seems like a major upgrade in terms of mass appeal; if VR continues to eke out only modest sales even then, that will be consistent with the narrative that VR is on the long slow slog adoption path similar to early PCs or the Internet (instantly appealing & clearly the future to the early adopters who try it, but still taking decades to achieve any mass penetration) rather than post-iPhone smartphones.
Bitcoin: the long slide from the bubble continued, to my considerable schadenfreude (2017 but more so…). The most interesting story of the year for me was the reasonably successful launch of the long-awaited Augur prediction market, which had no ‘DAO moment’ and the overall mechanism appears to be working. Otherwise, not much to remark on.
A short note on politics: I maintain my 2017 comments (but more so…). For all the emotion invested in the 'great awokening' and the continued Girardian scapegoating/backlash, now 3 years in, it is increasingly clear that Donald Trump's presidency has been absurdly overrated in importance. He can do substantial damage, like launching trade wars or distortionary tax cuts, but that is hardly unprecedented, as most presidents do severe economic damage of some form or another; other blunders, like his ineffectual North Korea policy, merely continue a long history of ineffective policy (and were inevitable once the South Korean population chose to elect Moon Jae-in). Every minute you spend obsessing over stuff like the Mueller Report has been wasted: Trump remains what New Yorkers have always known him to be—an incompetent narcissist.
Let’s try to focus more on long-term issues such as global economic growth or genetic engineering.
- McNamara’s Folly: The Use of Low-IQ Troops in the Vietnam War, Gregory 2015 (review; see also Low Aptitude Men in the Military: Who Profits, Who Pays?, Laurence & Ramsberger 1991)
- Bad Blood: Secrets and Lies in a Silicon Valley Startup, Carreyrou 2018 (review)
- The Vaccinators: Smallpox, Medical Knowledge, and the ‘Opening’ of Japan, Jannetta 2007
- Like Engendr’ing Like: Heredity and Animal Breeding in Early Modern England, Russell 1986 (review)
- Cat Sense: How the New Feline Science Can Make You a Better Friend to Your Pet, Bradshaw 2013 (review)
- Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993, Roland & Shiman 2002 (review)
- The Operations Evaluation Group: A History of Naval Operations Analysis, Tidman 1984 (review)
- A Quiet Place (review)
- Shadow of the Vampire (2000)
- Conan the Barbarian (review)
- Ant-Man and the Wasp (review)