created: 30 November 2017; modified: 17 Jan 2018; status: in progress; confidence: log; importance: 0
- The Kelly Coin-Flipping Problem: Exact Solutions via Decision Trees
- "Story Of Your Life" Is Not A Time-Travel Story
- Banner Ads Considered Harmful (Here)
- On the history of the tank/neural-net urban legend
- Efficiently calculating the average maximum sample from a sample of Gaussians
Site traffic (July 2017-January 2018) was up: 326,852 page-views by 155,532 unique users.
AI: as I hoped in 2016, 2017 saw a re-emergence of model-based RL, with various deep approaches to learning reasoning, meta-RL, and environment models. Using relational logics and doing planning over internal models is no longer one of the things deep learning can't do. My selection for the single biggest breakthrough of the year was AlphaGo racking up a second major intellectual victory: AlphaGo Zero demonstrated that a simple expert-iteration algorithm (with MCTS as the expert) not only solves the long-standing problem of NN self-play being wildly unstable, but also allows learning superior to the complicated human-initialized AlphaGos in both wallclock time & end strength. This is deeply humbling - 2,000 years of study and tens of millions of active players, and that's all it takes to surpass the best human Go players ever in the supposedly uniquely human domain of subtle global pattern recognition. Expert iteration is an intriguingly general and underused design pattern which I think may prove quite useful, especially if people can remember that it is not limited to two-player games but is a general method for solving any MDP. The second most notable development would be GAN work: Wasserstein GAN losses considerably ameliorated the instability of GAN training across various architectures, and although WGANs can still diverge or fail to learn, they are not as much of a black art as the original DCGANs tended to be. This probably helped with later GAN work in 2017, such as the invention of the CycleGAN architecture, which accomplishes magical & bizarre kinds of learning - such as learning, from unpaired horse and zebra images, to turn an arbitrary horse image into a zebra & vice-versa, or your face into a car or a bowl of ramen soup.
Who ordered that? I didn't, but it's delicious & hilarious anyway, and it suggests that GANs really will be important in unsupervised learning because they are clearly learning a great deal about their domains. Additional demonstrations, like translating between human languages given only monolingual corpora, merely emphasize that lurking power - I still feel that CycleGAN should not work, but it does. The path to larger-scale photorealistic GANs was discovered by Nvidia's ProGAN paper: essentially, StackGAN's approach of layering several GANs trained incrementally as upscalers does work (as I expected), but you need much more GPU compute to reach 1024x1024px photos, and it helps if each new upscaling GAN is only gradually blended in, to avoid its random initialization destroying everything previously learned (analogous to transfer learning needing low learning rates or frozen layers). Finally, GANs started turning up as useful components in semi-supervised learning, in the GAIL paradigm for deep-RL robotics. I expect GANs are still a while off from being productized or truly critical for anything - they remain a solution in search of a problem, though less so than last year. Indeed, from AlphaGo to GANs, 2017 was the year of deep RL. Papers tumbled out constantly, accompanied by ambitious commercial moves: Jeff Dean laid out a vision for using NNs/deep RL essentially everywhere inside Google's software stack, Google began full self-driving services in Phoenix, and noted researchers like Pieter Abbeel founded robotics startups betting that deep RL has finally cracked imitation & few-shot learning. One thing missing from 2017, for me, was the use of very large NNs employing expert mixtures, synthetic gradients, or other scaling techniques; in retrospect, this may reflect hardware limitations, as non-Googlers increasingly hit the limits of what can be iterated on reasonably quickly using just 1080 Tis or P100s.
So I am intrigued by the increasing availability of Google’s second-generation TPUs (which can do training) and by discussions of upcoming NN accelerators which might break Nvidia’s costly monopoly and offer 100s of teraflops or petaflops at researcher budgets.
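The expert-iteration loop mentioned above can be sketched in a few lines. This is a toy illustration, not AlphaGo Zero: the MDP is a made-up chain of states with a single rewarding goal, a depth-limited greedy lookahead stands in for MCTS as the "expert", and a tabular policy plays the "apprentice" that imitates the expert's improved choices.

```python
# Minimal expert-iteration sketch on a toy chain MDP (states 0..N; moving
# right from state N-1 reaches the rewarding goal state N). A depth-limited
# lookahead stands in for MCTS as the "expert"; a tabular policy is the
# "apprentice", "trained" here by directly copying the expert's choices.

N = 10
ACTIONS = [-1, +1]

def step(s, a):
    """Deterministic transition: move along the chain; reward 1 at the goal."""
    s2 = max(0, min(N, s + a))
    return s2, (1.0 if s2 == N else 0.0)

def expert_action(s, policy, depth=N):
    """Expert improvement step: pick the action whose one-step reward plus a
    depth-limited rollout of the current policy collects the most reward."""
    def rollout(state, d):
        total = 0.0
        for _ in range(d):
            state, r = step(state, policy[state])
            total += r
        return total
    return max(ACTIONS,
               key=lambda a: step(s, a)[1] + rollout(step(s, a)[0], depth))

# The apprentice starts out maximally wrong: always move away from the goal.
policy = {s: -1 for s in range(N + 1)}

for _ in range(20):  # the expert-iteration loop: improve, then imitate
    policy = {s: expert_action(s, policy) for s in range(N + 1)}

assert all(policy[s] == +1 for s in range(N))  # apprentice now heads to goal
```

The same improve-then-imitate structure works for any MDP; the only two-player-specific part of AlphaGo Zero is that its environment happens to be self-play.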
Genetics in 2017 was a straight-line continuation of 2016: the UKBB dataset came online and is fully armed & operational, with whole genomes now following, resulting in the typical flurries of papers on everything which is heritable (ie everything). Genetic engineering had a banner year between CRISPR and the other methods in the pipeline - it seemed like every week there was a new mouse or human trial curing something or other, to the point where I lost track, and the NYT has begun reporting on clinical trials being delayed by lack of virus-manufacturing capacity. (A good problem to have!) Genome synthesis continues to greatly concern me, but nothing newsworthy happened in 2017 other than, presumably, it continuing to get cheaper on schedule. Intelligence research did not deliver any particularly amazing results, as the SSGAC paper has apparently been delayed to 2018, but we saw two critical methodological improvements which I expect to yield fruit in 2018-2019: first, as genetic-correlation researchers have noted for years, genetic correlations should be able to boost power considerably by correcting for measurement error & increasing effective sample size through appropriate combination of polygenic scores, and MTAG demonstrates this works well for intelligence (Hill et al 2017b increases the PGS to 7% of variance); second, Hsu's lasso predictions were proven true by Lello et al 2017, demonstrating the creation of a polygenic score explaining most SNP heritability/predicting 40% of height variance. Using these two simultaneously with the SSGAC & other datasets ought to boost IQ PGSes to at least 10% and possibly much more. Perhaps the most notable single development was the resolution of the long-standing dysgenics question using molecular genetics: has the demographic transition in at least some Western countries led to decreases in the genetic potential for intelligence (mean polygenic score), as suggested by most, but not all, phenotypic analyses of intelligence/education/fertility?
Yes: in Iceland & the USA, dysgenics has indeed done so on a meaningful scale, as shown by straightforward calculations of mean polygenic score by birth decade. More interestingly, the increasing availability of ancient DNA allows preliminary analyses of how polygenic scores have changed over time: over tens of thousands of years, human intelligence & disease traits appear to have been slowly selected against (consistent with most genetic variants being harmful & under purifying selection), but at some relatively recent point that trend rapidly reversed.
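The measurement-error logic behind the first improvement can be shown with a toy simulation. All the numbers here (a 0.3 PGS-trait correlation, a test reliability of 0.6) are made-up assumptions, and this sketch is the classical attenuation/disattenuation argument rather than MTAG itself; it shows how noisy measurement hides part of a polygenic score's predictive power, and how combining correlated measurements recovers it.

```python
# Toy illustration: a polygenic score correlates r=0.3 with the true trait,
# but a noisy test (reliability 0.6) attenuates the observed correlation by
# sqrt(0.6); averaging several noisy measurements raises reliability
# (Spearman-Brown) and recovers most of the lost signal.
import math
import random

random.seed(0)
n = 20_000
r_true = 0.30        # assumed PGS <-> true-trait correlation
reliability = 0.60   # assumed reliability of a single noisy measurement

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

g = [random.gauss(0, 1) for _ in range(n)]  # polygenic scores
trait = [r_true * x + math.sqrt(1 - r_true**2) * random.gauss(0, 1) for x in g]

# each measurement = true trait + error sized to hit the target reliability
noise_sd = math.sqrt(1 / reliability - 1)
def measure():
    return [t + noise_sd * random.gauss(0, 1) for t in trait]

one = measure()
avg4 = [sum(vals) / 4 for vals in zip(measure(), measure(), measure(), measure())]

print(pearson(g, trait))  # ~0.30: the ceiling, with perfect measurement
print(pearson(g, one))    # ~0.30*sqrt(0.6) ~ 0.23: attenuated by test noise
print(pearson(g, avg4))   # ~0.28: 4 measurements -> reliability ~0.86
```

MTAG does something subtler (combining *different* genetically-correlated traits across GWASes), but the payoff is the same shape: less effective measurement error, hence a larger effective sample and a stronger polygenic score.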
For 2016, I noted that the main story of VR was that it hadn't failed & was modestly successful; 2017 saw the continuation of this trend as VR climbs out of its trough of disillusionment - the media hype has popped, and through 2017 VR just kept succeeding, building up an increasingly large library of games & applications while the price continued to drop dramatically (as everyone should have realized it would, but didn't), with the Oculus now in the $300s. Perhaps the major surprise for me was that Sony's quiet and noncommittal approach to its headset (which made me wonder if it would be launched at all) masked a huge success: the PSVR has sold millions of units and is probably the most popular real VR solution, despite its technical drawbacks compared to the Vive/Oculus. There continues to be no killer app, but the many upcoming hardware improvements - 4K displays, wireless headsets, eyetracking + foveated rendering - will keep increasing quality while prices drop and libraries build up; if there is any natural limit to the VR market, I haven't seen any sign of it yet. So for 2018-2019, I wonder: will VR simply continue to grow gradually, with mobile smartphone VR solutions eating the lunch of the full headsets, or will there be a breakout moment where price, quality, library, and a killer app hit a critical combination?
Bitcoin underwent one of its periodic bubbles, complete with the classic accusations: that this time Bitcoin will surely go to zero; that the fee spikes mean Bitcoin will never scale (nobody goes there anymore, it's too popular); that people can't use it to pay for anything; that it's a clear scam, as shown by various people's foolishness like taking out mortgages to gamble on further increases; that Coinbase is run by fools & knaves; that random other altcoins have bubbled too & will doubtless replace Bitcoin soon; that Bitcoin has failed to achieve any libertarian goals and is now a plaything of the rich; that people who were wrong about Bitcoin every time from $1 in 2011 to now will claim to be right morally; that the PoW security is wasteful; etc. One could copy-paste most articles or comments from the last bubble (or the one before that, or the one before that) into this one with no change other than the numbers. As such, while I have benefited a great deal from it, there is little worth saying about it other than to note its existence with bemusement.
A short note on politics: Donald Trump's presidency, and its backlash in the form of the sexual-harassment scandals, have received truly disproportionate coverage and become almost an addiction. They have distracted from important issues, and from important facts: that 2017 was one of the best years in human history, the many scientific & technological breakthroughs like genetic engineering or AI, and global & US economic growth. Objectively, Trump's first year has been largely a non-event: a few things were accomplished, like packing the federal courts and a bizarre tax bill, but overall not much happened, and Trump has not lived up to the apocalyptic predictions & hysteria. If the next 3 years are similar to 2017, one would have to admit that Trump as president turned out better than electing George W. Bush!
- Selected Non-Fictions, Jorge Luis Borges (review)
- Site Reliability Engineering: How Google Runs Production Systems (January review)
- The Playboy Interview: Volume II (review)
- Artificial Life, Levy
- Possible Worlds and Other Essays, J.B.S. Haldane 1927 (review)
- Annual Berkshire Hathaway letters of Warren Buffett (review)
- Tokyo: A Certain Style (review)
- The Grand Strategy of the Roman Empire: From the First Century CE to the Third, Luttwak 2016 (review)
- Moon Dust: In Search of the Men Who Fell to Earth, Smith 2005 (review)
- Unsong, Scott Alexander (review)
- Unforgotten Dreams: Poems by the Zen monk Shōtetsu, Steven D. Carter
- The Anubis Gates, Tim Powers (January review)
- Sunset in a Spiderweb: Sijo Poetry of Ancient Korea, Baron & Kim 1974
- Breaking Bad (May review)
- Blade Runner 2049 (October review)
- L.A. Confidential
- Hackers (1995; January review)
- Once Upon A Time In The West (February review)
- All About Eve (June review)
- Cool Hand Luke
- The Terminator
- 10 Cloverfield Lane (March review)