“Wasserstein GAN”, Arjovsky et al 2017 (a surprisingly small tweak fixes both mode collapse & divergence, yielding stable GANs which reliably fit datasets like anime faces in my experiments with the code; underfitting/expressiveness seems to remain an issue, though)
“The Cat’s-Meat Man” (how cats & dogs were fed before modern processed pet food: regular deliveries of low-grade meat, usually horse-meat from the surplus of the millions of horses slaughtered each year; the trade concentrated in industrial districts where cats were heavily employed)
Site Reliability Engineering: How Google Runs Production Systems (not a book whose principles I ever expect to apply as intended, but it is fascinating to read about the technical problems & capabilities at Google-scale, the clever solutions applying rarefied computer science techniques, and the war stories about how things can go wrong. Inspired a little by it, I added a number of tests to my Gwern.net sync script.)
On a side note: in The Anubis Gates, magic is associated with the moon and normality with the earth, sun, & Christianity; practitioners are pained and weakened by the touch of the earth and by the growth of Christianity, which protects against & destroys magic (so that by the 1800s setting, magic has become almost useless), and they gradually lose weight & float as they gravitate towards the moon. The wizened leader of the magicians is so many millennia old and so steeped in magic that he now falls towards the moon rather than the earth, and must live upside down in a domed building, rotating around the dome as the moon moves, lest he fall into the sky like other magicians before him. Given this Chekhov’s gun, it is not surprising that eventually he does fall out of the dome and into the sky, presumably to his doom. What happens to him? The character sits around and moves normally while upside down, neither bouncing nor struggling to move; this suggests that an equivalent of Earth gravity now pulls him towards the moon (or, if we assume the Earth continues to pull as well, twice Earth gravity; never mind that the moon is much smaller, this is magic). After falling out of the dome, he will accelerate upwards at ~9.8 m/s²; because of air resistance, he will soon hit terminal velocity, which for a human is ~54 m/s, so after ~6 s he will stop accelerating, and then begin accelerating again slowly as the air thins out & offers less resistance, his terminal velocity increasing by ~1% every 160 meters of altitude.
Logically, one would expect him to impact the Moon at some extraordinary velocity, as there is effectively no terminal velocity for his body once out of the earth’s atmosphere, but he will have died long before, of thirst, hypothermia, or hypoxia (in increasing order of speed), and probably well before he passes the Armstrong limit at ~19,000 m, where even the water on one’s tongue boils. In any case, the magician will be moving so fast that 19,000 m is trivial, reached within ~5.8 minutes, leaving no time for starvation/thirst/hypothermia; by this point he will already have lapsed into unconsciousness and be suffering cerebral hypoxia, yielding brain death within 5–10 minutes. So in general, we can safely assume that less than 10 minutes after falling out of the dome, the magician is dead of hypoxia, accelerated by hypothermia. But the body will keep on going. How long does it take to reach the Moon? The moon is on average 384,400,000 m away, and the body keeps accelerating once past the bulk of the earth’s atmosphere; assuming no more terminal velocity after 500 seconds, the body will reach the average lunar distance after ~9,351 s or ~2.6 h, traveling at something like 86,795 m/s or ~86 km/s. (Strictly speaking, he might miss the moon and have to spiral around, or might never impact at all, but I can’t calculate the orbital mechanics there.) Being a wizened and now desiccated/frozen body, it probably doesn’t weigh more than 50 kg, but it will still pack a major punch, with a kinetic energy of 0.5 × 50 × 86,795² joules ≈ 188 GJ (for comparison, 1 ton of TNT is ~4.2 GJ, so roughly 45 tons of TNT).
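These figures can be sanity-checked with a crude simulation (my own toy model, not from the novel, using only the assumptions above: constant 1 g moonward, ~54 m/s sea-level terminal velocity growing ~1% per 160 m of altitude, and no drag after 500 s):

```python
import math

# Toy model of the magician's fall toward the moon under the essay's
# assumptions; all parameters are from the text above.
G = 9.8                       # m/s^2, magical "moonward gravity"
V_TERM0 = 54.0                # m/s, sea-level human terminal velocity
K = math.log(1.01) / 160.0    # fractional terminal-velocity growth per meter
ARMSTRONG = 19_000            # m, Armstrong limit
MOON = 384_400_000            # m, average Earth-moon distance

def simulate(dt=0.1):
    """Return (time to Armstrong limit, time to moon, impact speed)."""
    h, v, t = 0.0, 0.0, 0.0
    t_armstrong = None
    while h < MOON:
        if t < 500:  # within the atmosphere: capped at local terminal velocity
            v = min(v + G * dt, V_TERM0 * math.exp(K * h))
        else:        # out of the atmosphere: free acceleration
            v += G * dt
        h += v * dt
        t += dt
        if t_armstrong is None and h >= ARMSTRONG:
            t_armstrong = t
    return t_armstrong, t, v
```

Running this reproduces the order of magnitude of the hand calculation: a few minutes to the Armstrong limit, a couple of hours to lunar distance, and an impact speed in the tens of km/s.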
Newsletter tag: archive of all issues back to 2013 for the gwern.net newsletter (monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews.)
This page is a changelog for Gwern.net: a monthly reverse chronological list of recent major writings/changes/additions.
Following my writing can be a little difficult because it is often so incremental. So every month, in addition to my regular /r/Gwern subreddit submissions, I write up the reasonably-interesting changes and send them out to the mailing list, along with a compilation of links & reviews (archives).
A subreddit for posting links of interest and also for announcing updates to gwern.net (which can be used as an RSS feed). Submissions are categorized similarly to the monthly newsletter and typically will be collated there.
Haghani & Dewey 2016 experiment with a double-or-nothing coin-flipping game where the player starts with $25, has an edge of 60%, and can play 300 times, choosing how much to bet each time, with winnings capped at a ceiling of $250. Most of their subjects fail to play well, earning an average of $91, compared to the authors’ heuristic benchmark of ~$240 in winnings achievable using a modified Kelly Criterion as the strategy. The Kelly Criterion, however, is not optimal for this problem, as it ignores the ceiling and the limited number of plays.
We solve the problem of the value of optimal play exactly by using decision trees & dynamic programming for calculating the value function, with implementations in R, Haskell, and C. We also provide a closed-form exact value formula in R & Python, several approximations using Monte Carlo/random forests/neural networks, visualizations of the value function, and a Python implementation of the game for the OpenAI Gym collection. We find that optimal play yields $246.61 on average (rather than ~$240), and so the human players actually earned only 36.8% of what was possible, losing $155.6 in potential profit. Comparing decision trees and the Kelly criterion for various horizons (bets left), the relative advantage of the decision tree strategy depends on the horizon: it is highest when the player can make few bets (at b = 23, with a difference of ~$36), and decreases with number of bets as more strategies hit the ceiling.
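The value-function recursion can be sketched as follows (a coarse Python illustration with whole-dollar wealth and a short horizon; the exact $246.61 figure requires the cent-level, 300-bet computation described above):

```python
from functools import lru_cache

P_WIN, CAP = 0.6, 250  # edge and winnings ceiling from the game

@lru_cache(maxsize=None)
def value(wealth, bets_left):
    """Expected final wealth under optimal integer-dollar betting."""
    if bets_left == 0 or wealth == 0 or wealth == CAP:
        return float(wealth)  # busted, capped, or out of bets
    # Try every legal bet; wins are capped at the $250 ceiling.
    return max(
        P_WIN * value(min(wealth + bet, CAP), bets_left - 1)
        + (1 - P_WIN) * value(wealth - bet, bets_left - 1)
        for bet in range(wealth + 1)
    )
```

With one bet left, the expectation-maximizing play is to bet everything (`value(25, 1)` = $30); as the horizon grows, the optimal value climbs toward the $250 ceiling.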
In the Kelly game, the maximum winnings, number of rounds, and edge are fixed; we describe a more difficult generalized version in which the 3 parameters are drawn from Pareto, normal, and beta distributions and are unknown to the player (who can use Bayesian inference to try to estimate them during play). Upper and lower bounds are estimated on the value of this game. In the variant of this game where subjects are not told the exact edge of 60%, a Bayesian decision tree approach shows that performance can closely approach that of the decision tree, with a penalty for 1 plausible prior of only $1. Two deep reinforcement learning agents, DQN & DDPG, are implemented but DQN fails to learn and DDPG doesn’t show acceptable performance, indicating better deep RL methods may be required to solve the generalized Kelly game.
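The Bayesian layer can be sketched minimally: with a Beta prior over the unknown win probability, the posterior after each flip remains Beta, and its mean can be plugged into the Kelly rule. (This is a simplified stand-in for the full Bayesian decision tree, not the implementation used above.)

```python
def posterior_edge(wins, losses, alpha=1.0, beta=1.0):
    """Posterior mean win probability under a Beta(alpha, beta) prior,
    after observing the given numbers of wins and losses."""
    return (alpha + wins) / (alpha + beta + wins + losses)

def kelly_fraction(p):
    """Kelly bet fraction for an even-money wager with win probability p;
    never bet when the estimated edge is non-positive."""
    return max(2 * p - 1, 0.0)
```

For example, after 12 wins and 8 losses under a flat prior, the posterior mean edge is 13/22 ≈ 0.59, giving a Kelly fraction of ~0.18 of current wealth.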
In generating a sample of n datapoints drawn from a normal/Gaussian distribution with a particular mean/SD, the expected size of the largest datapoint depends on how large n is. Knowing this expectation is useful in a number of areas like sports, breeding, or manufacturing, as it defines how bad/good the worst/best datapoint will be (eg the score of the winner in a multi-player game).
Unfortunately, the expected value of the maximum (the mean of the highest order statistic) of n samples drawn from a normal distribution has no exact closed-form expression, and is generally not built into any programming language’s standard libraries.
I implement & compare some of the approaches to estimating this order statistic in the R programming language, for both the maximum and the general order statistic. The overall best approach is to calculate the exact order statistics for the n range of interest using numerical integration via lmomco and cache them in a lookup table, rescaling the mean/SD as necessary for arbitrary normal distributions; next best is a polynomial regression approximation; finally, the Elfving correction to the Blom 1958 approximation is fast, easily implemented, and accurate for reasonably large n such as n > 100.
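The same approaches translate directly out of R; as a standard-library Python sketch (rather than lmomco), the exact value comes from numerically integrating E[max] = ∫ x·n·φ(x)·Φ(x)^(n−1) dx, with the Elfving-corrected Blom approximation as the cheap fallback:

```python
import math
from statistics import NormalDist

def expected_max(n, lo=-8.0, hi=8.0, steps=4000):
    """E[max of n std normals] = integral of x * n*phi(x)*Phi(x)^(n-1),
    computed by the trapezoid rule over a truncated range."""
    phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w * x * n * phi(x) * Phi(x) ** (n - 1)
    return total * h

def elfving(n):
    """Elfving's correction to Blom 1958's approximation of E[max]."""
    return NormalDist().inv_cdf((n - math.pi / 8) / (n - math.pi / 4 + 1))
```

For an arbitrary N(μ, σ²), rescale as μ + σ·expected_max(n); the n = 2 case has the known closed form 1/√π ≈ 0.564, which the integration reproduces, and the Elfving approximation agrees with the exact integral to a few thousandths by n = 100.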
“GWAS meta-analysis reveals novel loci and genetic correlates for general cognitive function: a report from the COGENT consortium”, J. W. Trampush, M. L. Z. Yang, J. Yu, E. Knowles, G. Davies, D. C. Liewald, J. M. Starr, S. Djurovic, I. Melle, K. Sundet, A. Christoforou, I. Reinvang, P. DeRosse, A. J. Lundervold, V. M. Steen, T. Espeseth, K. Räikkönen, E. Widen, A. Palotie, J. G. Eriksson, I. Giegling, B. Konte, P. Roussos, S. Giakoumaki, K. E. Burdick, A. Payton, W. Ollier, M. Horan, O. Chiba-Falek, D. K. Attix, A. C. Need, E. T. Cirulli, A. N. Voineskos, N. C. Stefanis, D. Avramopoulos, A. Hatzimanolis, D. E. Arking, N. Smyrnis, R. M. Bilder, N. A. Freimer, T. D. Cannon, E. London, R. A. Poldrack, F. W. Sabb, E. Congdon, E. D. Conley, M. A. Scult, D. Dickinson, R. E. Straub, G. Donohoe, D. Morris, A. Corvin, M. Gill, A. R. Hariri, D. R. Weinberger, N. Pendleton, P. Bitsios, D. Rujescu, J. Lahti, S. Le Hellard, M. C. Keller, O. A. Andreassen, I. J. Deary, D. C. Glahn, A. K. Malhotra, T. Lencz (2017-01-17):
The complex nature of human cognition has resulted in cognitive genomics lagging behind many other fields in terms of gene discovery using genome-wide association study (GWAS) methods. In an attempt to overcome these barriers, the current study utilized GWAS meta-analysis to examine the association of common genetic variation (~8M single-nucleotide polymorphisms (SNPs) with minor allele frequency ⩾1%) to general cognitive function in a sample of 35,298 healthy individuals of European ancestry across 24 cohorts in the Cognitive Genomics Consortium (COGENT). In addition, we utilized individual SNP lookups and polygenic score analyses to identify genetic overlap with other relevant neurobehavioral phenotypes. Our primary GWAS meta-analysis identified two novel SNP loci (top SNPs: rs76114856 in the CENPO gene on chromosome 2 and rs6669072 near LOC105378853 on chromosome 1) associated with cognitive performance at the genome-wide significance level (p < 5 × 10⁻⁸). Gene-based analysis identified an additional three Bonferroni-corrected significant loci at chromosomes 17q21.31, 17p13.1 and 1p13.3. Altogether, common variation across the genome resulted in a conservatively estimated SNP heritability of 21.5% (s.e. = 0.01%) for general cognitive function. Integration with prior GWAS of cognitive performance and educational attainment yielded several additional significant loci. Finally, we found robust polygenic correlations between cognitive performance and educational attainment, several psychiatric disorders, birth length/weight and smoking behavior, as well as a novel genetic association to the personality trait of openness. These data provide new insight into the genetics of neurocognitive function with relevance to understanding the pathophysiology of neuropsychiatric illness.
“Prevalence and architecture of de novo mutations in developmental disorders”, Jeremy F. McRae, Stephen Clayton, Tomas W. Fitzgerald, Joanna Kaplanis, Elena Prigmore, Diana Rajan, Alejandro Sifrim, Stuart Aitken, Nadia Akawi, Mohsan Alvi, Kirsty Ambridge, Daniel M. Barrett, Tanya Bayzetinova, Philip Jones, Wendy D. Jones, Daniel King, Netravathi Krishnappa, Laura E. Mason, Tarjinder Singh, Adrian R. Tivey, Munaza Ahmed, Uruj Anjum, Hayley Archer, Ruth Armstrong, Jana Awada, Meena Balasubramanian, Siddharth Banka, Diana Baralle, Angela Barnicoat, Paul Batstone, David Baty, Chris Bennett, Jonathan Berg, Birgitta Bernhard, A. Paul Bevan, Maria Bitner-Glindzicz, Edward Blair, Moira Blyth, David Bohanna, Louise Bourdon, David Bourn, Lisa Bradley, Angela Brady, Simon Brent, Carole Brewer, Kate Brunstrom, David J. Bunyan, John Burn, Natalie Canham, Bruce Castle, Kate Chandler, Elena Chatzimichali, Deirdre Cilliers, Angus Clarke, Susan Clasper, Jill Clayton-Smith, Virginia Clowes, Andrea Coates, Trevor Cole, Irina Colgiu, Amanda Collins, Morag N. 
Collinson, Fiona Connell, Nicola Cooper, Helen Cox, Lara Cresswell, Gareth Cross, Yanick Crow, Mariella D’Alessandro, Tabib Dabir, Rosemarie Davidson, Sally Davies, Dylan de Vries, John Dean, Charu Deshpande, Gemma Devlin, Abhijit Dixit, Angus Dobbie, Alan Donaldson, Dian Donnai, Deirdre Donnelly, Carina Donnelly, Angela Douglas, Sofia Douzgou, Alexis Duncan, Jacqueline Eason, Sian Ellard, Ian Ellis, Frances Elmslie, Karenza Evans, Sarah Everest, Tina Fendick, Richard Fisher, Frances Flinter, Nicola Foulds, Andrew Fry, Alan Fryer, Carol Gardiner, Lorraine Gaunt, Neeti Ghali, Richard Gibbons, Harinder Gill, Judith Goodship, David Goudie, Emma Gray, Andrew Green, Philip Greene, Lynn Greenhalgh, Susan Gribble, Rachel Harrison, Lucy Harrison, Victoria Harrison, Rose Hawkins, Liu He, Stephen Hellens, Alex Henderson, Sarah Hewitt, Lucy Hildyard, Emma Hobson, Simon Holden, Muriel Holder, Susan Holder, Georgina Hollingsworth, Tessa Homfray, Mervyn Humphreys, Jane Hurst, Ben Hutton, Stuart Ingram, Melita Irving, Lily Islam, Andrew Jackson, Joanna Jarvis, Lucy Jenkins, Diana Johnson, Elizabeth Jones, Dragana Josifova, Shelagh Joss, Beckie Kaemba, Sandra Kazembe, Rosemary Kelsell, Bronwyn Kerr, Helen Kingston, Usha Kini, Esther Kinning, Gail Kirby, Claire Kirk, Emma Kivuva, Alison Kraus, Dhavendra Kumar, V. K. Ajith Kumar, Katherine Lachlan, Wayne Lam, Anne Lampe, Caroline Langman, Melissa Lees, Derek Lim, Cheryl Longman, Gordon Lowther, Sally A. Lynch, Alex Magee, Eddy Maher, Alison Male, Sahar Mansour, Karen Marks, Katherine Martin, Una Maye, Emma McCann, Vivienne McConnell, Meriel McEntagart, Ruth McGowan, Kirsten McKay, Shane McKee, Dominic J. 
McMullan, Susan McNerlan, Catherine McWilliam, Sarju Mehta, Kay Metcalfe, Anna Middleton, Zosia Miedzybrodzka, Emma Miles, Shehla Mohammed, Tara Montgomery, David Moore, Sian Morgan, Jenny Morton, Hood Mugalaasi, Victoria Murday, Helen Murphy, Swati Naik, Andrea Nemeth, Louise Nevitt, Ruth Newbury-Ecob, Andrew Norman, Rosie O’Shea, Caroline Ogilvie, Kai Ren Ong, Soo-Mi Park, Michael J. Parker, Chirag Patel, Joan Paterson, Stewart Payne, Daniel Perrett, Julie Phipps, Daniela T. Pilz, Martin Pollard, Caroline Pottinger, Joanna Poulton, Norman Pratt, Katrina Prescott, Sue Price, Abigail Pridham, Annie Procter, Hellen Purnell, Oliver Quarrell, Nicola Ragge, Raheleh Rahbari, Josh Randall, Julia Rankin, Lucy Raymond, Debbie Rice, Leema Robert, Eileen Roberts, Jonathan Roberts, Paul Roberts, Gillian Roberts, Alison Ross, Elisabeth Rosser, Anand Saggar, Shalaka Samant, Julian Sampson, Richard Sandford, Ajoy Sarkar, Susann Schweiger, Richard Scott, Ingrid Scurr, Ann Selby, Anneke Seller, Cheryl Sequeira, Nora Shannon, Saba Sharif, Charles Shaw-Smith, Emma Shearing, Debbie Shears, Eamonn Sheridan, Ingrid Simonic, Roldan Singzon, Zara Skitt, Audrey Smith, Kath Smith, Sarah Smithson, Linda Sneddon, Miranda Splitt, Miranda Squires, Fiona Stewart, Helen Stewart, Volker Straub, Mohnish Suri, Vivienne Sutton, Ganesh Jawahar Swaminathan, Elizabeth Sweeney, Kate Tatton-Brown, Cat Taylor, Rohan Taylor, Mark Tein, I. Karen Temple, Jenny Thomson, Marc Tischkowitz, Susan Tomkins, Audrey Torokwa, Becky Treacy, Claire Turner, Peter Turnpenny, Carolyn Tysoe, Anthony Vandersteen, Vinod Varghese, Pradeep Vasudevan, Parthiban Vijayarangakannan, Julie Vogt, Emma Wakeling, Sarah Wallwark, Jonathon Waters, Astrid Weber, Diana Wellesley, Margo Whiteford, Sara Widaa, Sarah Wilcox, Emily Wilkinson, Denise Williams, Nicola Williams, Louise Wilson, Geoff Woods, Christopher Wragg, Michael Wright, Laura Yates, Michael Yau, Chris Nellåker, Michael Parker, Helen V. Firth, Caroline F. Wright, David R. 
FitzPatrick, Jeffrey C. Barrett & Matthew E. Hurles (2017-01-25):
The genomes of individuals with severe, undiagnosed developmental disorders are enriched in damaging de novo mutations (DNMs) in developmentally important genes. Here we have sequenced the exomes of 4,293 families containing individuals with developmental disorders, and meta-analysed these data with data from another 3,287 individuals with similar disorders. We show that the most important factors influencing the diagnostic yield of DNMs are the sex of the affected individual, the relatedness of their parents, whether close relatives are affected and the parental ages. We identified 94 genes enriched in damaging DNMs, including 14 that previously lacked compelling evidence of involvement in developmental disorders. We have also characterized the phenotypic diversity among these disorders. We estimate that 42% of our cohort carry pathogenic DNMs in coding sequences; approximately half of these DNMs disrupt gene function and the remainder result in altered protein function. We estimate that developmental disorders caused by DNMs have an average prevalence of 1 in 213 to 1 in 448 births, depending on parental age. Given current global demographics, this equates to almost 400,000 children born per year.
Online media use has become an increasingly important behavioral domain over the past decade. However, studies into the etiology of individual differences in media use have focused primarily on pathological use. Here, for the first time, we test the genetic influences on online media use in a UK representative sample of 16-year-old twins, who were assessed on time spent on educational (n = 2,585 twin pairs) and entertainment websites (n = 2,614 twin pairs), time spent gaming online (n = 2,635 twin pairs), and Facebook use (n = 4,333 twin pairs). Heritability was substantial for all forms of online media use, ranging from 34% for educational sites to 37% for entertainment sites and 39% for gaming. Furthermore, genetics accounted for 24% of the variance in Facebook use. Our results support an active model of the environment, where young people choose their online engagements in line with their genetic propensities.
Epidemiological studies suggest that educational attainment is affected by genetic variants. Results from recent genetic studies allow us to construct a score from a person’s genotypes that captures a portion of this genetic component. Using data from Iceland that include a substantial fraction of the population we show that individuals with high scores tend to have fewer children, mainly because they have children later in life. Consequently, the average score has been decreasing over time in the population. The rate of decrease is small per generation but marked on an evolutionary timescale. Another important observation is that the association between the score and fertility remains highly significant after adjusting for the educational attainment of the individuals.
Epidemiological and genetic association studies show that genetics play an important role in the attainment of education. Here, we investigate the effect of this genetic component on the reproductive history of 109,120 Icelanders and the consequent impact on the gene pool over time. We show that an educational attainment polygenic score, POLYEDU, constructed from results of a recent study is associated with delayed reproduction (p < 10⁻¹⁰⁰) and fewer children overall. The effect is stronger for women and remains highly significant after adjusting for educational attainment. Based on 129,808 Icelanders born between 1910 and 1990, we find that the average POLYEDU has been declining at a rate of ~0.010 standard units per decade, which is substantial on an evolutionary timescale. Most importantly, because POLYEDU only captures a fraction of the overall underlying genetic component the latter could be declining at a rate that is two to three times faster.
Mortality selection is a general concern in the social and health sciences. Recently, existing health and social science cohorts have begun to collect genomic data. Causes of selection into a genomic dataset can influence results from genomic analyses. Selective non-participation, which is specific to a particular study and its participants, has received attention in the literature. But mortality selection—the very general phenomenon that genomic data collected at a particular age represents selective participation by only the subset of birth cohort members who have survived to the time of data collection—has been largely ignored. Here we test the hypothesis that such mortality selection may significantly alter estimates in polygenetic association studies of both health and non-health traits. We demonstrate mortality selection into genome-wide SNP data collection at older ages using the U.S.-based Health and Retirement Study (HRS). We then model the selection process. Finally, we test whether mortality selection alters estimates from genetic association studies. We find evidence for mortality selection. Healthier and more socioeconomically advantaged individuals are more likely to survive to be eligible to participate in the genetic sample of the HRS. Mortality selection leads to modest drift in estimating time-varying genetic effects, a drift that is enhanced when estimates are produced from data that has additional mortality selection. There is no general solution for correcting for mortality selection in a birth cohort prior to entry into a longitudinal study. We illustrate how genetic association studies using HRS data can adjust for mortality selection from study entry to time of genetic data collection by including probability weights that account for mortality selection. Mortality selection should be investigated more broadly in genetically-informed samples from other cohort studies.
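A toy illustration (my own, not from the paper) shows the logic of the inverse-probability-weighting fix: survivors over-represent the healthy, and weighting each survivor by 1/P(survival) undoes the selection. Here it is computed deterministically, in expectation, with no sampling:

```python
# Toy cohort: x is a health-related trait; survival probability rises with x.
xs = list(range(10))                       # trait values; true mean is 4.5
p_survive = [0.2 + 0.07 * x for x in xs]   # healthier -> likelier to survive

true_mean = sum(xs) / len(xs)

# Naive survivor estimate: each person contributes in proportion to P(survive),
# so the healthy are over-weighted and the mean is biased upward.
naive = sum(p * x for x, p in zip(xs, p_survive)) / sum(p_survive)

# IPW estimate: each survivor is reweighted by 1/P(survive), so the expected
# contribution of every cohort member is equal again, recovering the true mean.
weights = [p * (1 / p) for p in p_survive]  # survivor share x IPW weight
ipw = sum(w * x for x, w in zip(xs, weights)) / sum(weights)
```

The naive survivor mean comes out noticeably above 4.5, while the weighted estimate recovers the true cohort mean exactly.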
We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update.
We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to other distances between distributions.
With the advent of large labelled datasets and high-capacity models, the performance of machine vision systems has been improving rapidly. However, the technology has still major limitations, starting from the fact that different vision problems are still solved by different models, trained from scratch or fine-tuned on the target data. The human visual system, in stark contrast, learns a universal representation for vision in the early life of an individual. This representation works well for an enormous variety of vision problems, with little or no change, with the major advantage of requiring little training data to solve any of them.
Hamiltonian Monte Carlo has proven a remarkable empirical success, but only recently have we begun to develop a rigorous understanding of why it performs so well on difficult problems and how it is best applied in practice. Unfortunately, that understanding is confined within the mathematics of differential geometry, which has limited its dissemination, especially to the applied communities for which it is particularly important. In this review I provide a comprehensive conceptual account of these theoretical foundations, focusing on developing a principled intuition behind the method and its optimal implementations rather than any exhaustive rigor. Whether a practitioner or a statistician, the dedicated reader will acquire a solid grasp of how Hamiltonian Monte Carlo works, when it succeeds, and, perhaps most importantly, when it fails.
[Memoir of an ex-theoretical-physics grad student at the University of Rochester with Sarada Rajeev who gradually became disillusioned with physics research, burned out, and left to work in finance and is now a writer. Henderson was attracted by the life of the mind and the grandeur of uncovering the mysteries of the universe, only to discover that, after the endless triumphs of the 20th century and predicting enormous swathes of empirical experimental data, theoretical physics has drifted and become a branch of abstract mathematics, exploring ever more recondite, simplified, and implausible models in the hopes of obtaining any insight into physics’ intractable problems; one must be brilliant to even understand the questions being asked by the math and incredibly hardworking to make any progress which hasn’t already been tried by even more brilliant physicists of the past (while living in ignominious poverty and terror of not getting a grant or tenure), but one’s entire career may be spent chasing a useless dead end without one having any clue.]
The next thing I knew I was crouched in a chair in Rajeev’s little office, with a notebook on my knee and focused with everything I had on an impromptu lecture he was giving me on an esoteric aspect of some mathematical subject I’d never heard of before. Zeta functions, or elliptic functions, or something like that. I’d barely introduced myself when he’d started banging out equations on his board. Trying to follow was like learning a new game, with strangely shaped pieces and arbitrary rules. It was a challenge, but I was excited to be talking to a real physicist about his real research, even though there was one big question nagging me that I didn’t dare to ask: What does any of this have to do with physics?
…Even a Theory of Everything, I started to realize, might suffer the same fate of multiple interpretations. The Grail could just be a hall of mirrors, with no clear answer to the “What?” or the “How?”—let alone the “Why?” Plus physics had changed since Big Al bestrode it. Mathematical as opposed to physical intuition had become more central, partly because quantum mechanics was such a strange multi-headed beast that it diminished the role that everyday, or even Einstein-level, intuition could play. So much for my dreams of staring out windows and into the secrets of the universe.
…If I did lose my marbles for a while, this is how it started. With cutting my time outside of Bausch and Lomb down to nine hours a day—just enough to pedal my mountain bike back to my bat cave of an apartment each night, sleep, shower, and pedal back in. With filling my file cabinet with boxes and cans of food, and carting in a coffee maker, mini-fridge, and microwave so that I could maximize the time spent at my desk. With feeling guilty after any day that I didn’t make my 15-hour quota. And with exceeding that quota frequently enough that I regularly circumnavigated the clock: staying later and later each night until I was going home in the morning, then in the afternoon, and finally at night again.
…The longer and harder I worked, the more I realized I didn’t know. Papers that took days or weeks to work through cited dozens more that seemed just as essential to digest; the piles on my desk grew rather than shrunk. I discovered the stark difference between classes and research: With no syllabus to guide me I didn’t know how to keep on a path of profitable inquiry. Getting “wonderfully lost” sounded nice, but the reality of being lost, and of re-living, again and again, that first night in the old woman’s house, with all of its doubts and dead-ends and that horrible hissing voice was … something else. At some point, flipping the lights on in the library no longer filled me with excitement but with dread.
…My mental model building was hitting its limits. I’d sit there in Rajeev’s office with him and his other students, or in a seminar given by some visiting luminary, listening and putting each piece in place, and try to fix in memory what I’d built so far. But at some point I’d lose track of how the green stick connected to the red wheel, or whatever, and I’d realize my picture had diverged from reality. Then I’d try toggling between tracing my steps back in memory to repair my mistake and catching all the new pieces still flying in from the talk. Stray pieces would fall to the ground. My model would start falling down. And I would fall hopelessly behind. A year or so of research with Rajeev, and I found myself frustrated and in a fog, sinking deeper into the quicksand but not knowing why. Was it my lack of mathematical background? My grandiose goals? Was I just not intelligent enough?
…I turned 30 during this time and the milestone hit me hard. I was nearly four years into the Ph.D. program, and while my classmates seemed to be systematically marching toward their degrees, collecting data and writing papers, I had no thesis topic and no clear path to graduation. My engineering friends were becoming managers, getting married, buying houses. And there I was entering my fourth decade of life feeling like a pitiful and penniless mole, aimlessly wandering dark empty tunnels at night, coming home to a creepy crypt each morning with nothing to show for it, and checking my bed for bugs before turning out the lights…As I put the final touches on my thesis, I weighed my options. I was broke, burned out, and doubted my ability to go any further in theoretical physics. But mostly, with The Grail now gone and the physics landscape grown so immense, I thought back to Rajeev’s comment about knowing which problems to solve and realized that I still didn’t know what, for me, they were.
John Douglas Arnold is an American billionaire, former hedge fund manager, and former natural gas trader at Enron. His firm, Centaurus Advisors, LLC, was a Houston-based hedge fund, composed mostly of former Enron traders, that specialized in trading energy products. Arnold announced his retirement from running the hedge fund on May 2, 2012. Arnold now focuses on philanthropy through Arnold Ventures LLC, formerly the Laura and John Arnold Foundation.
iPredict was a New Zealand prediction market that offered prediction exchanges on current events, political issues and economic issues. iPredict was jointly owned by the New Zealand Institute for the Study of Competition and Regulation and Victoria University of Wellington. The site launched on 9 September 2008 and closed 1 December 2016.
[130 epigrams on computer science and technology, published in 1982, for ACM’s SIGPLAN journal, by noted computer scientist and programming language researcher Alan Perlis. The epigrams are a series of short, programming-language-neutral, humorous statements about computers and programming, distilling lessons he had learned over his career, which are widely quoted.]
8. A programming language is low level when its programs require attention to the irrelevant.…19. A language that doesn’t affect the way you think about programming, is not worth knowing.…54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
15. Everything should be built top-down, except the first time.…30. In programming, everything we do is a special case of something more general—and often we know it too quickly.…31. Simplicity does not precede complexity, but follows it.…58. Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.…65. Make no mistake about it: Computers process numbers—not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.…56. Software is under a constant tension. Being symbolic it is arbitrarily perfectible; but also it is arbitrarily changeable.
1. One man’s constant is another man’s variable.…34. The string is a stark data structure and everywhere it is passed there is much duplication of process. It is a perfect vehicle for hiding information.
36. The use of a program to prove the 4-color theorem will not change mathematics—it merely demonstrates that the theorem, a challenge for a century, is probably not important to mathematics.
39. Re graphics: A picture is worth 10K words—but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures.
48. The best book on programming for the layman is Alice in Wonderland; but that’s because it’s the best book on anything for the layman.
77. The cybernetic exchange between man, computer and algorithm is like a game of musical chairs: The frantic search for balance always leaves one of the 3 standing ill at ease.…79. A year spent in artificial intelligence is enough to make one believe in God.…84. Motto for a research laboratory: What we work on today, others will first think of tomorrow.
91. The computer reminds one of Lon Chaney—it is the machine of a thousand faces.
7. It is easier to write an incorrect program than understand a correct one.…93. When someone says “I want a programming language in which I need only say what I wish done,” give him a lollipop.…102. One can’t proceed from the informal to the formal by formal means.
100. We will never run out of things to program as long as there is a single program around.
108. Whenever 2 programmers meet to criticize their programs, both are silent.…112. Computer Science is embarrassed by the computer.…115. Most people find the concept of programming obvious, but the doing impossible.…116. You think you know when you can learn, are more sure when you can write, even more when you can teach, but certain when you can program.…117. It goes against the grain of modern education to teach children to program. What fun is there in making plans, acquiring discipline in organizing thoughts, devoting attention to detail and learning to be self-critical?
Oskar Pfungst was a German comparative biologist and psychologist. While working as a volunteer assistant in the laboratory of Carl Stumpf in Berlin, Pfungst was asked to investigate the horse known as Clever Hans, who could apparently solve a wide array of arithmetic problems set to it by its owner. After formal investigation in 1907, Pfungst demonstrated that the horse was not actually performing intellectual tasks, but was watching the reaction of his human observers. Pfungst discovered this artifact in the research methodology, wherein the horse was responding directly to involuntary clues in the body language of the human trainer, who had the faculties to solve each problem. The trainer was entirely unaware that he was providing such clues.
Timothy Thomas Powers is an American science fiction and fantasy author. Powers has won the World Fantasy Award twice for his critically acclaimed novels Last Call and Declare. His 1987 novel On Stranger Tides served as inspiration for the Monkey Island franchise of video games and was optioned for adaptation into the fourth Pirates of the Caribbean film.
A secret history is a revisionist interpretation of either fictional or real history which is claimed to have been deliberately suppressed, forgotten, or ignored by established scholars. "Secret history" is also used to describe an alternative interpretation of documented facts which portrays a drastically different motivation or history from established historical events.
The Moon is Earth's only proper natural satellite. It is the fifth largest satellite in the Solar System, larger than any dwarf planet and the largest natural satellite in the Solar System relative to the size of its planet, at a quarter the diameter of Earth, comparable to the width of Australia. The Moon orbits Earth at an average lunar distance of 384,400 km (238,900 mi), or 1.28 light-seconds. Its gravitational influence produces Earth's tides and slightly lengthens Earth's day. The Moon is a differentiated rocky body; has a surface gravity of 0.1654 g, about one-sixth of Earth's; and lacks a significant atmosphere, hydrosphere or magnetic field. Among Solar System satellites with a known density, this planetary-mass moon has the second-highest surface gravity and density, after Jupiter's moon Io.
Gravity, or gravitation, is a natural phenomenon by which all things with mass or energy—including planets, stars, galaxies, and even light—are brought toward one another. On Earth, gravity gives weight to physical objects, and the Moon's gravity causes the ocean tides. The gravitational attraction of the original gaseous matter present in the Universe caused it to begin coalescing and forming stars and caused the stars to group together into galaxies, so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become increasingly weaker as objects get further away.
Terminal velocity is the maximum velocity attainable by an object as it falls through a fluid. It occurs when the sum of the drag force (Fd) and the buoyancy is equal to the downward force of gravity (FG) acting on the object. Since the net force on the object is zero, the object has zero acceleration.
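The drag balance described above can be sketched numerically: setting the drag force equal to the weight, m·g = ½·ρ·v²·C_d·A, and solving for v gives the terminal speed. The mass, drag coefficient, and frontal area below are illustrative guesses for a belly-down human at sea level, not values from the excerpt:

```python
import math

def terminal_velocity(m, g=9.8, rho=1.225, cd=1.0, area=0.7):
    """Speed at which drag (0.5*rho*v^2*cd*area) balances weight (m*g)."""
    return math.sqrt(2 * m * g / (rho * cd * area))

# A ~75 kg human falling belly-down at sea level:
v = terminal_velocity(75)  # ~41 m/s; a head-down posture (smaller area)
                           # pushes this toward the ~54 m/s figure quoted above
```

Note that as the air thins with altitude, ρ drops, so the terminal speed of a rising (or falling) body slowly increases rather than staying fixed.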
Venturing into the environment of space can have negative effects on the human body. Significant adverse effects of long-term weightlessness include muscle atrophy and deterioration of the skeleton. Other significant effects include a slowing of cardiovascular system functions, decreased production of red blood cells, balance disorders, eyesight disorders and changes in the immune system. Additional symptoms include fluid redistribution, loss of body mass, nasal congestion, sleep disturbance, and excess flatulence.
Hypothermia is defined as a body core temperature below 35.0 °C (95.0 °F) in humans. Symptoms depend on the temperature. In mild hypothermia, there is shivering and mental confusion. In moderate hypothermia, shivering stops and confusion increases. In severe hypothermia, there may be paradoxical undressing, in which a person removes their clothing, as well as an increased risk of the heart stopping.
The Armstrong limit or Armstrong's line is a measure of altitude above which atmospheric pressure is sufficiently low that water boils at the normal temperature of the human body. Exposure to pressure below this limit results in a rapid loss of consciousness, followed by a series of changes to cardiovascular and neurological functions, and eventually death, unless pressure is restored within 60–90 seconds. On Earth, the limit is around 18–19 km above sea level, above which atmospheric air pressure drops below 0.0618 atm. The U.S. Standard Atmosphere model sets the Armstrong pressure at an altitude of 63,000 feet (19,202 m).
Cerebral hypoxia is a form of hypoxia, specifically involving the brain; when the brain is completely deprived of oxygen, it is called cerebral anoxia. There are four categories of cerebral hypoxia; they are, in order of severity: diffuse cerebral hypoxia (DCH), focal cerebral ischemia, cerebral infarction, and global cerebral ischemia. Prolonged hypoxia induces neuronal cell death via apoptosis, resulting in a hypoxic brain injury.
In physics, the kinetic energy (KE) of an object is the energy that it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes. The same amount of work is done by the body when decelerating from its current speed to a state of rest.
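The work-energy definition above reduces to the familiar KE = ½mv². A minimal numeric check, with illustrative figures (a 70 kg body at the ~54 m/s terminal velocity of a falling human, not numbers from the excerpt):

```python
def kinetic_energy(m, v):
    """Work needed to accelerate mass m (kg) from rest to speed v (m/s), in joules."""
    return 0.5 * m * v ** 2

kinetic_energy(70, 54)  # 102060.0 J, the energy that must be dissipated on impact
```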
Randall Jarrell (pronounced jə-REL) was an American poet, literary critic, children's author, essayist, and novelist. He was the 11th Consultant in Poetry to the Library of Congress—a position that now bears the title Poet Laureate of the United States.
Hackers is a 1995 American crime film directed by Iain Softley and starring Jonny Lee Miller, Angelina Jolie, Jesse Bradford, Matthew Lillard, Laurence Mason, Renoly Santiago, Lorraine Bracco, and Fisher Stevens. The film follows a group of high school hackers and their involvement in a corporate extortion conspiracy. Made in the mid-1990s when the Internet was unfamiliar to the general public, it reflects the ideals laid out in the Hacker Manifesto quoted in the film: "This is our world now... the world of the electron and the switch [...] We exist without skin color, without nationality, without religious bias... and you call us criminals. [...] Yes, I am a criminal. My crime is that of curiosity." The film received mixed reviews from critics, and underperformed at the box office upon release, but has gone on to achieve cult classic status.
This Is Spinal Tap is a 1984 American mockumentary film co-written and directed by Rob Reiner in his directorial debut. It stars Christopher Guest, Michael McKean, and Harry Shearer as members of the fictional English heavy metal band Spinal Tap, and Reiner as Martin "Marty" Di Bergi, a documentary filmmaker who follows them on their American tour. The film satirizes the behavior and musical pretensions of rock bands and the hagiographic tendencies of rock documentaries such as The Song Remains the Same (1976) and The Last Waltz (1978), and follows the similar All You Need Is Cash (1978) by The Rutles. Most of its dialogue was improvised and dozens of hours were filmed. It produced the 1984 soundtrack album of the same name.
Kubo and the Two Strings is a 2016 American stop-motion animated action fantasy film directed by Travis Knight and produced by animation studio Laika. It stars the voices of Charlize Theron, Art Parkinson, Ralph Fiennes, George Takei, Cary-Hiroyuki Tagawa, Brenda Vaccaro, Rooney Mara, and Matthew McConaughey.
Haven't You Heard? I'm Sakamoto is a Japanese manga series written and illustrated by Nami Sano. The manga follows a high school student named Sakamoto, who has a reputation for being the "coolest" person among the entire student body. The series has been licensed for an English release by Seven Seas Entertainment. An anime television adaptation produced by Studio Deen aired from April 8 to July 5, 2016.
One-Punch Man is a Japanese superhero franchise created by the artist ONE. It tells the story of Saitama, a superhero who can defeat any opponent with a single punch but seeks to find a worthy foe after growing bored by a lack of challenge due to his overwhelming strength. ONE wrote the original webcomic version in early 2009.
Subscription page for the monthly gwern.net newsletter. There are monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews. You can also browse the archives since December 2013.
In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.
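A minimal sketch of the definition, assuming 1-indexed ranks: sorting the sample and indexing yields the kth-smallest value (for large samples a selection algorithm would avoid the full sort, but the result is the same):

```python
def order_statistic(sample, k):
    """kth order statistic: the kth-smallest value (k = 1 is the minimum)."""
    return sorted(sample)[k - 1]

data = [9, 1, 7, 3, 5]
order_statistic(data, 1)  # 1, the sample minimum
order_statistic(data, 3)  # 5, the sample median of five values
order_statistic(data, 5)  # 9, the sample maximum
```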
Exome sequencing, also known as whole exome sequencing (WES), is a genomic technique for sequencing all of the protein-coding regions of genes in a genome. It consists of two steps: the first step is to select only the subset of DNA that encodes proteins. These regions are known as exons – humans have about 180,000 exons, constituting about 1% of the human genome, or approximately 30 million base pairs. The second step is to sequence the exonic DNA using any high-throughput DNA sequencing technology.
Alan Jay Perlis was an American computer scientist and professor at Purdue University, Carnegie Mellon University and Yale University. He is best known for his pioneering work in programming languages and was the first recipient of the Turing Award.