November 2017 gwern.net newsletter with links on genetics, reinforcement learning, psychology, economics; 2 book reviews, 1 movie review. (newsletter; 2017-10-17–2021-01-04; finished; certainty: log; importance: 0)
“Big Data: Astronomical or Genomical?”, Stephens et al 2015 (Are power estimates suggesting we need n = 1–2m for IQ GWASes or other traits a cause for pessimism? No; 23andMe and Ancestry.com are doing around that much each year now, and genomics in general continues to follow earlier exponential projections. In another 5–10 years, there may be enough sequencing capacity to sequence the entire global population once, and potentially billions of available cumulative raw genomes. Against that, phenotyping a few million will not be a big deal.)
“A Year in Computer Vision”, Duffy & Flynn (a review primarily of DL progress in 2016 on image classification, object detection/tracking, segmentation, upscaling, 3D geometry estimation, and fundamental CNN & dataset R&D)
Since the 1990s, 500,000+ people annually have been operated on to insert stents; the surgery was never tested against placebo controls, and was rolled out on the basis of self-report measurements, rather than hard endpoints like mortality, despite surgeries failing half the time in placebo-controlled studies (Wartolowska et al 2014; see also Ending Medical Reversal): doctors considered the idea of stents not working to be “unbelievable”, and US IRBs would have killed a placebo-controlled study of stents as ‘unethical’. The non-US trial (ORBITA, below) found minimal benefit: a blinded effect size 1/6th that of non-blinded studies. Said study is, nevertheless, expected to affect neither stent surgeries nor IRB standards. As Yudkowsky asks, “Why aren’t they rioting?”
“Industrial Espionage and Productivity”, Glitz & Meyersson 2017 (Why do countries like East Germany or China engage in so much industrial espionage? Because it works & helps compensate for their own inefficiency.)
Newsletter tag: archive of all issues back to 2013 for the gwern.net newsletter (monthly updates, including summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews).
This page is a changelog for Gwern.net: a monthly reverse chronological list of recent major writings/changes/additions.
Following my writing can be a little difficult because it is often so incremental. So every month, in addition to my regular /r/Gwern subreddit submissions, I write up reasonably-interesting changes and send it out to the mailing list in addition to a compilation of links & reviews (archives).
A subreddit for posting links of interest and for announcing updates to gwern.net (which can be used as an RSS feed). Submissions are categorized similarly to the monthly newsletter and are typically collated there.
Design: A decision analysis of revenue vs readers yields a maximum acceptable total traffic loss of ~3%. Power analysis of historical Gwern.net traffic data demonstrates that the high autocorrelation yields low statistical power with standard tests & regressions but acceptable power with ARIMA models. I design a long-term Bayesian ARIMA(4,0,1) time-series model in which an A/B-test running January–October 2017 in randomized paired 2-day blocks of ads/no-ads uses client-local JS to determine whether to load & display ads, with total traffic data collected in Google Analytics & ad exposure data in Google AdSense. The A/B test ran from 2017-01-01 to 2017-10-15, affecting 288 days with collectively 380,140 pageviews in 251,164 sessions.
Correcting for a flaw in the randomization, the final results yield a surprisingly large estimate of an expected traffic loss of −9.7% (driven by the subset of users without adblock), with an implied −14% traffic loss if all traffic were exposed to ads (95% credible interval: −13% to −16%), exceeding my decision threshold for disabling ads & strongly ruling out the possibility of acceptably small losses which might justify further experimentation.
Thus, banner ads on Gwern.net appear to be harmful and AdSense has been removed. If these results generalize to other blogs and personal websites, an important implication is that many websites may be harmed by their use of banner advertising without realizing it.
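The decision analysis above reduces to a break-even computation: ads are worth keeping only while the revenue they bring in exceeds the long-term value of the readers they drive away. A minimal sketch of that threshold calculation, with purely hypothetical revenue and per-pageview value figures (not taken from the actual analysis):

```python
def max_acceptable_traffic_loss(ad_revenue, value_per_pageview, pageviews):
    """Break-even traffic-loss fraction: the point at which the value of
    lost pageviews exactly cancels out the ad revenue (all per month)."""
    return ad_revenue / (value_per_pageview * pageviews)

# Hypothetical figures: $30/month of AdSense revenue, 100,000 pageviews/month,
# each pageview valued at $0.01 toward the site's long-term goals.
threshold = max_acceptable_traffic_loss(30, 0.01, 100_000)
print(threshold)  # 0.03: under these assumed figures, losses above ~3% are unacceptable
```

An observed loss of −9.7% to −14% then exceeds any plausible setting of the threshold by a wide margin, which is why no further experimentation was needed.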
In the three decades since the first predictive genetic tests became available, a great deal of data has accumulated to show how people respond to knowing previously unknowable things. The rise of genetic testing has presented scientists with a 30-year experiment that has yielded some surprising insights into human behavior. The data suggest that the vast majority react in ways that at first seem counterintuitive, or at least flout what experts predicted. But as genetic testing becomes more widespread, the irrational behavior of a frightened few might start to look like the rational behavior of an enlightened majority. Doctors’ repeated failures to anticipate people’s responses to genetic testing are not for want of preparation. Starting in the 1980s, they conducted surveys in which they asked how people might approach the test, were one available. They noted the answers and planned accordingly. The trouble was, when the test became a reality, their respondents didn’t do what they had said they would.
…In those preparatory surveys, roughly 70% of those at risk of Huntington’s said they would take a test if it existed. In fact, only around 15% do—a proportion that has proved stable across countries and decades. A similar pattern emerged when tests became available for other incurable brain diseases…Prenatal genetic testing is widely available, but the uptake by expecting couples in which one partner is a known carrier of an incurable disease is even lower than that of testing among at-risk adults. Most opt to have a child whose risk of developing that disease is the same as theirs was at birth. Why do people act in this seemingly irresponsible way with respect to their offspring?
A unique longitudinal study published in 2016 by Hanane Bouchghoul and colleagues at the Pitié-Salpêtrière Hospital in Paris unpacks that decision-making process. They interviewed 54 women—either Huntington’s carriers or wives of carriers—and found that if a couple received a favorable result in a first prenatal test, the majority had the child and stopped there. Most of those who got an unfavorable result terminated the pregnancy and tried again. If a second prenatal test produced a “good” result, they had the child and stopped. But if it produced a “bad” result and another termination, most changed strategy. Some opted for preimplantation genetic diagnosis, removing the need for termination, since only mutation-free embryos are implanted. Some abandoned the idea of having a child altogether. But nearly half, 45%, conceived naturally again, and this time they did not seek prenatal testing. Summarizing the findings, the geneticist on the team, Alexandra Dürr, says, “The desire to have a child overrides all else.”
…In a study that has yet to be published, Tibben has corroborated the French group’s conclusion. He followed 13 couples who, following counseling but prior to taking a prenatal test, agreed they would terminate in the case of an unfavorable result. None of them did so when they got that result. “That means there are 13 children alive in the Netherlands today, whom we can be 100% sure are [Huntington’s] carriers,” he says.
“The nature of nurture: effects of parental genotypes”, Augustine Kong, Gudmar Thorleifsson, Michael L. Frigge, Bjarni J. Vilhjálmsson, Alexander I. Young, Thorgeir E. Thorgeirsson, Stefania Benonisdottir, Asmundur Oddsson, Bjarni V. Halldórsson, Gísli Masson, Daniel F. Gudbjartsson, Agnar Helgason, Gyda Bjornsdottir, Unnur Thorsteinsdottir, Kari Stefansson (2017-11-14):
Sequence variants in the parental genomes that are not transmitted to a child/proband are often ignored in genetic studies. Here we show that non-transmitted alleles can impact a child through their effects on the parents and other relatives, a phenomenon we call genetic nurture. Using results from a meta-analysis of educational attainment, the polygenic score computed for the non-transmitted alleles of 21,637 probands with at least one parent genotyped has an estimated effect on the educational attainment of the proband that is 29.9% (p = 1.6 × 10⁻¹⁴) of that of the transmitted polygenic score. Genetic nurturing effects of this polygenic score extend to other traits. Paternal and maternal polygenic scores have similar effects on educational attainment, but mothers contribute more than fathers to nutrition/health-related traits.
One Sentence Summary
Nurture has a genetic component, i.e. alleles in the parents affect the parents’ phenotypes and through that influence the outcomes of the child.
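The design can be sketched concretely: each parent carries two alleles per site, exactly one of which is transmitted to the child; summing GWAS effect weights over the transmitted vs the non-transmitted alleles gives two polygenic scores, and regressing the child's phenotype on both separates direct genetic effects from genetic nurture. A toy sketch (the allele coding, weights, and data below are made up for illustration):

```python
def split_scores(parent_alleles, transmitted_idx, weights):
    """Split one parent's two alleles per site into a transmitted and a
    non-transmitted polygenic score, using per-site GWAS effect weights."""
    t = sum(w * pair[i] for pair, i, w in
            zip(parent_alleles, transmitted_idx, weights))
    nt = sum(w * pair[1 - i] for pair, i, w in
             zip(parent_alleles, transmitted_idx, weights))
    return t, nt

# Toy data: 3 sites; alleles coded 0/1 (effect allele absent/present);
# transmitted_idx records which allele of each pair the child inherited.
alleles = [(1, 0), (1, 1), (0, 0)]
transmitted = [0, 1, 0]
weights = [0.5, 0.2, 0.3]
t, nt = split_scores(alleles, transmitted, weights)
# t = 0.7 (transmitted score), nt = 0.2 (non-transmitted score)
```

In the paper, a nonzero coefficient on the non-transmitted score (29.9% of the transmitted one) is the signature of genetic nurture, since those alleles can only act on the child through the parents' behavior.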
“Estimating heritability without environmental bias”, Alexander I. Young, Michael L. Frigge, Daniel F. Gudbjartsson, Gudmar Thorleifsson, Gyda Bjornsdottir, Patrick Sulem, Gisli Masson, Unnur Thorsteinsdottir, Kari Stefansson, Augustine Kong (2017-11-14):
Heritability measures the proportion of trait variation that is due to genetic inheritance. Measurement of heritability is of importance to the nature-versus-nurture debate. However, existing estimates of heritability could be biased by environmental effects. Here we introduce relatedness disequilibrium regression (RDR), a novel method for estimating heritability. RDR removes environmental bias by exploiting variation in relatedness due to random segregation. We use a sample of 54,888 Icelanders with both parents genotyped to estimate the heritability of 14 traits, including height (55.4%, S.E. 4.4%) and educational attainment (17.0%, S.E. 9.4%). Our results suggest that some other estimates of heritability could be inflated by environmental effects.
“Common risk variants identified in autism spectrum disorder”, Jakob Grove, Stephan Ripke, Thomas D. Als, Manuel Mattheisen, Raymond Walters, Hyejung Won, Jonatan Pallesen, Esben Agerbo, Ole A. Andreassen, Richard Anney, Rich Belliveau, Francesco Bettella, Joseph D. Buxbaum, Jonas Bybjerg-Grauholm, Marie Bækved-Hansen, Felecia Cerrato, Kimberly Chambert, Jane H. Christensen, Claire Churchhouse, Karin Dellenvall, Ditte Demontis, Silvia De Rubeis, Bernie Devlin, Srdjan Djurovic, Ashle Dumont, Jacqueline Goldstein, Christine S. Hansen, Mads Engel Hauberg, Mads V. Hollegaard, Sigrun Hope, Daniel P. Howrigan, Hailiang Huang, Christina Hultman, Lambertus Klei, Julian Maller, Joanna Martin, Alicia R. Martin, Jennifer Moran, Mette Nyegaard, Terje Nærland, Duncan S. Palmer, Aarno Palotie, Carsten B. Pedersen, Marianne G. Pedersen, Timothy Poterba, Jesper B. Poulsen, Beate St Pourcain, Per Qvist, Karola Rehnström, Avi Reichenberg, Jennifer Reichert, Elise B. Robinson, Kathryn Roeder, Panos Roussos, Evald Saemundsen, Sven Sandin, F. Kyle Satterstrom, George D. Smith, Hreinn Stefansson, Kari Stefansson, Stacy Steinberg, Christine Stevens, Patrick F. Sullivan, Patrick Turley, G. Bragi Walters, Xinyi Xu, Autism Spectrum Disorders Working Group of The Psychiatric Genomics Consortium, BUPGEN, Major Depressive Disorder Working Group of the Psychiatric Genomics Consortium, 23andMe Research Team, Daniel Geschwind, Merete Nordentoft, David M. Hougaard, Thomas Werge, Ole Mors, Preben Bo Mortensen, Benjamin M. Neale, Mark J. Daly, Anders D. Børglum (2017-11-25):
Autism spectrum disorder (ASD) is a highly heritable and heterogeneous group of neurodevelopmental phenotypes diagnosed in more than 1% of children. Common genetic variants contribute substantially to ASD susceptibility, but to date no individual variants have been robustly associated with ASD. With a marked sample size increase from a unique Danish population resource, we report a genome-wide association meta-analysis of 18,381 ASD cases and 27,969 controls that identifies five genome-wide significant loci. Leveraging GWAS results from three phenotypes with significantly overlapping genetic architectures (schizophrenia, major depression, and educational attainment), seven additional loci shared with other traits are identified at equally strict significance levels. Dissecting the polygenic architecture, we find both quantitative and qualitative polygenic heterogeneity across ASD subtypes, in contrast to what is typically seen in other complex disorders. These results highlight biological insights, particularly relating to neuronal function and corticogenesis, and establish that GWAS performed at scale will be much more productive in the near term in ASD, just as it has been in a broad range of important psychiatric and diverse medical phenotypes.
Background: It is often assumed that selection (including participation and dropout) does not represent an important source of bias in genetic studies. However, there is little evidence to date on the effect of genetic factors on participation.
Methods: Using data on mothers (n = 7,486) and children (n = 7,508) from the Avon Longitudinal Study of Parents and Children, we (1) examined the association of polygenic risk scores for a range of socio-demographic and lifestyle characteristics and health conditions with continued participation, (2) investigated whether associations of polygenic scores with body mass index (BMI; derived from self-reported weight and height) and self-reported smoking differed in the largest sample with genetic data and a sub-sample who participated in a recent follow-up, and (3) determined the proportion of variation in participation explained by common genetic variants using genome-wide data.
Results: We found evidence that polygenic scores for higher education, agreeableness and openness were associated with higher participation, and polygenic scores for smoking initiation, higher BMI, neuroticism, schizophrenia, ADHD and depression were associated with lower participation. Associations between the polygenic score for education and self-reported smoking differed between the largest sample with genetic data (OR for ever smoking per SD increase in polygenic score: 0.85, 95% CI: 0.81–0.89) and the sub-sample (OR: 0.95, 95% CI: 0.88–1.02). In genome-wide analysis, single-nucleotide-polymorphism-based heritability explained 17–31% of variability in participation.
Conclusion: Genetic association studies, including Mendelian randomization, can be biased by selection, including loss to follow-up. Genetic risk for dropout should be considered in all analyses of studies with selective participation.
A previous genome-wide association study (GWAS) of more than 100,000 individuals identified molecular-genetic predictors of educational attainment. We undertook in-depth life-course investigation of the polygenic score derived from this GWAS using the four-decade Dunedin Study (N = 918). There were five main findings. First, polygenic scores predicted adult economic outcomes even after accounting for educational attainments. Second, genes and environments were correlated: Children with higher polygenic scores were born into better-off homes. Third, children’s polygenic scores predicted their adult outcomes even when analyses accounted for their social-class origins; social-mobility analysis showed that children with higher polygenic scores were more upwardly mobile than children with lower scores. Fourth, polygenic scores predicted behavior across the life course, from early acquisition of speech and reading skills through geographic mobility and mate choice and on to financial planning for retirement. Fifth, polygenic-score associations were mediated by psychological characteristics, including intelligence, self-control, and interpersonal skill. Effect sizes were small. Factors connecting GWAS sequence with life outcomes may provide targets for interventions to promote population-wide positive development. [Keywords: genetics, behavior genetics, intelligence, personality, adult development]
The vast majority of human mutations have minor allele frequencies (MAF) under 1%, with the plurality observed only once (i.e., “singletons”). While Mendelian diseases are predominantly caused by rare alleles, their role in complex phenotypes remains largely unknown. We develop and rigorously validate an approach to jointly estimate the contribution of alleles with different frequencies, including singletons, to phenotypic variation. We apply our approach to transcriptional regulation, an intermediate between genetic variation and complex disease. Using whole genome DNA and RNA sequencing data from 360 European individuals, we find that singletons alone contribute ~23% of all cis-heritability across genes (dwarfing the contributions of other frequencies). We then integrate external estimates of global MAF from worldwide samples to improve our inference, and find that average cis-heritability is 15.3%. Strikingly, 50.9% of cis-heritability is contributed by globally rare variants (MAF<0.1%), implicating purifying selection as a pervasive force shaping the regulatory architecture of most human genes.
One Sentence Summary
The vast majority of variants so far discovered in humans are rare, and together they have a substantial impact on gene regulation.
Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet these still require tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.
Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al. 2017) on multiple Atari games. Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.
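The tree backup at the heart of TreeQN can be sketched in isolation: expand each action to a fixed depth with a transition/reward model (learned and operating in an abstract state space in the paper; hand-specified here) and back values up with reward + γ·max. A toy sketch on an integer chain MDP:

```python
GAMMA = 0.9

def tree_backup(state, depth, actions, transition, reward, value):
    """Depth-limited tree backup: estimate Q(state, a) for every action."""
    q = {}
    for a in actions:
        s2 = transition(state, a)
        if depth == 0:
            q[a] = reward(state, a) + GAMMA * value(s2)  # leaf: bootstrap off value fn
        else:
            child_q = tree_backup(s2, depth - 1, actions, transition, reward, value)
            q[a] = reward(state, a) + GAMMA * max(child_q.values())
    return q

# Toy chain MDP: states are integers, actions move +1/-1, the reward is the
# state reached, and leaf values are 0 (standing in for a learned value head).
q = tree_backup(0, 2, [+1, -1],
                transition=lambda s, a: s + a,
                reward=lambda s, a: float(s + a),
                value=lambda s: 0.0)
# Moving right looks much better: q[+1] ≈ 5.23, q[-1] ≈ -0.19
```

In TreeQN every component here (transition, reward, value) is a differentiable network, so the whole backup is trained end-to-end against the usual n-step Q-learning loss.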
Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.
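The ExIt loop itself is simple enough to sketch: the expert (tree search guided by the current network) produces improved moves, and the apprentice (the network) is retrained to imitate them. A skeletal, deliberately degenerate sketch in which the policy is a lookup table and "tree search" is stubbed out by a table of known-best moves:

```python
OPTIMAL = {0: "a", 1: "b", 2: "a"}  # stand-in for what exhaustive search would find

def expert(state, policy):
    # Stand-in for policy-guided tree search; in ExIt the apprentice policy
    # biases the search, which then returns a stronger move than the policy alone.
    return OPTIMAL[state]

def imitate(policy, expert_moves):
    # Supervised apprentice step: the policy copies the expert's choices.
    return {**policy, **expert_moves}

def expert_iteration(policy, states, n_iters=1):
    for _ in range(n_iters):
        expert_moves = {s: expert(s, policy) for s in states}  # self-play + search
        policy = imitate(policy, expert_moves)                 # imitation learning
    return policy

trained = expert_iteration({s: None for s in OPTIMAL}, list(OPTIMAL))
# trained now matches OPTIMAL: the apprentice has absorbed the expert's play
```

The interesting part of the real algorithm is the feedback: because the improved apprentice guides the next round of search, the expert itself keeps getting stronger across iterations.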
“Parallel WaveNet: Fast High-Fidelity Speech Synthesis”, Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, Demis Hassabis (2017-11-28):
The recently-developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples more than 20 times faster than real-time, and is deployed online by Google Assistant, including serving multiple English and Japanese voices.
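The core training signal can be sketched: the parallel student is fitted by minimizing a KL divergence to the autoregressive teacher's output distribution, rather than by maximum likelihood on raw audio. A toy sketch over categorical distributions (the real distillation works on WaveNet's continuous per-sample outputs; this shows only the shape of the loss):

```python
import math

def kl_divergence(student, teacher):
    """KL(student || teacher): the quantity Probability Density Distillation
    minimizes so a fast feed-forward student matches a slow teacher."""
    return sum(ps * math.log(ps / pt)
               for ps, pt in zip(student, teacher) if ps > 0)

teacher = [0.7, 0.2, 0.1]   # toy per-sample distribution from the teacher
perfect = [0.7, 0.2, 0.1]
wrong   = [0.2, 0.1, 0.7]
print(kl_divergence(perfect, teacher))  # 0.0: zero loss at a perfect match
print(kl_divergence(wrong, teacher))    # > 0: loss grows with mismatch
```

Because the loss is computed against the teacher's densities rather than against data, the student can generate all samples in parallel at training time and still inherit the teacher's quality.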
Fine-grained image labels are desirable for many computer vision applications, such as visual search or mobile AI assistants. These applications rely on image classification models that can produce hundreds of thousands (e.g. 100K) of diversified fine-grained image labels on input images. However, training a network at this vocabulary scale is challenging, and suffers from intolerably large model size and slow training speed, which leads to unsatisfying classification performance. A straightforward solution would be training separate expert networks (specialists), with each specialist focusing on learning one specific vertical (e.g. cars, birds...). However, deploying dozens of expert networks in a practical system would significantly increase system complexity and inference latency, and consume large amounts of computational resources. To address these challenges, we propose a Knowledge Concentration method, which effectively transfers the knowledge from dozens of specialists (multiple teacher networks) into one single model (one student network) to classify 100K object categories. There are three salient aspects in our method: (1) a multi-teacher single-student knowledge distillation framework; (2) a self-paced learning mechanism to allow the student to learn from different teachers at various paces; (3) structurally connected layers to expand the student network capacity with limited extra parameters. We validate our method on OpenImage and a newly collected dataset, Entity-Foto-Tree (EFT), with 100K categories, and show that the proposed model performs significantly better than the baseline generalist model.
“AI Safety Gridworlds”, Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, Shane Legg (2017-11-27):
We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.
“Percutaneous coronary intervention in stable angina (ORBITA): a double-blind, randomised controlled trial”, Rasha Al-Lamee, David Thompson, Hakim-Moulay Dehbi, Sayan Sen, Kare Tang, John Davies, Thomas Keeble, Michael Mielewczik, Raffi Kaprielian, Iqbal S. Malik, Sukhjinder S. Nijjer, Ricardo Petraco, Christopher Cook, Yousif Ahmad, James Howard, Christopher Baker, Andrew Sharp, Robert Gerber, Suneel Talwar, Ravi Assomull, Jamil Mayet, Roland Wensel, David Collier, Matthew Shun-Shin, Simon A. Thom, Justin E. Davies, Darrel P. Francis (2017-11-02):
Background: Symptomatic relief is the primary goal of percutaneous coronary intervention (PCI) in stable angina and is commonly observed clinically. However, there is no evidence from blinded, placebo-controlled randomised trials to show its efficacy.
Methods: ORBITA is a blinded, multicentre randomised trial of PCI versus a placebo procedure for angina relief that was done at five study sites in the UK. We enrolled patients with severe (≥70%) single-vessel stenoses. After enrolment, patients received 6 weeks of medication optimisation. Patients then had pre-randomisation assessments with cardiopulmonary exercise testing, symptom questionnaires, and dobutamine stress echocardiography. Patients were randomised 1:1 to undergo PCI or a placebo procedure by use of an automated online randomisation tool. After 6 weeks of follow-up, the assessments done before randomisation were repeated at the final assessment. The primary endpoint was difference in exercise time increment between groups. All analyses were based on the intention-to-treat principle and the study population contained all participants who underwent randomisation. This study is registered with ClinicalTrials.gov, number NCT02062593.
Findings: ORBITA enrolled 230 patients with ischaemic symptoms. After the medication optimisation phase and between Jan 6, 2014, and Aug 11, 2017, 200 patients underwent randomisation, with 105 patients assigned PCI and 95 assigned the placebo procedure. Lesions had mean area stenosis of 84.4% (SD 10.2), fractional flow reserve of 0.69 (0.16), and instantaneous wave-free ratio of 0.76 (0.22). There was no significant difference in the primary endpoint of exercise time increment between groups (PCI minus placebo 16.6 s, 95% CI −8.9 to 42.0, p = 0.200). There were no deaths. Serious adverse events included four pressure-wire related complications in the placebo group, which required PCI, and five major bleeding events, including two in the PCI group and three in the placebo group.
Interpretation: In patients with medically treated angina and severe coronary stenosis, PCI did not increase exercise time by more than the effect of a placebo procedure. The efficacy of invasive procedures can be assessed with a placebo control, as is standard for pharmacotherapy.
Peer effects, in which the behavior of an individual is affected by the behavior of their peers, are posited by multiple theories in the social sciences. Other processes can also produce behaviors that are correlated in networks and groups, thereby generating debate about the credibility of observational (i.e. nonexperimental) studies of peer effects. Randomized field experiments that identify peer effects, however, are often expensive or infeasible. Thus, many studies of peer effects use observational data, and prior evaluations of causal inference methods for adjusting observational data to estimate peer effects have lacked an experimental "gold standard" for comparison. Here we show, in the context of information and media diffusion on Facebook, that high-dimensional adjustment of a nonexperimental control group (677 million observations) using propensity score models produces estimates of peer effects statistically indistinguishable from those from using a large randomized experiment (220 million observations). Naive observational estimators overstate peer effects by 320%, and adjusting for commonly-used covariates (e.g. demographics) offers little bias reduction, but adjusting for a measure of prior behaviors closely related to the focal behavior reduces bias by 91%. High-dimensional models adjusting for over 3,700 past behaviors provide additional bias reduction, such that the full model reduces bias by over 97%. This experimental evaluation demonstrates that detailed records of individuals’ past behavior can improve studies of social influence, information diffusion, and imitation; these results are encouraging for the credibility of some studies but also cautionary for studies of rare or new behaviors. More generally, these results show how large, high-dimensional data sets and statistical learning techniques can be used to improve causal inference in the behavioral sciences.
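The adjustment logic can be sketched with the simplest possible version: stratify on a covariate that predicts exposure (here a single binary "prior behavior" flag, standing in for the paper's propensity model over 3,700 past behaviors) and average the exposed-minus-unexposed outcome difference within strata. The data below are fabricated to show a confounded naive estimate shrinking under adjustment:

```python
from collections import defaultdict

def naive_effect(rows):
    """Exposed-minus-unexposed mean outcome, ignoring all covariates."""
    t = [y for _, e, y in rows if e]
    c = [y for _, e, y in rows if not e]
    return sum(t) / len(t) - sum(c) / len(c)

def stratified_effect(rows):
    """Within-stratum differences, weighted by stratum size: a minimal
    stand-in for propensity-score adjustment."""
    strata = defaultdict(lambda: ([], []))
    for x, e, y in rows:
        strata[x][0 if e else 1].append(y)
    return sum((len(t) + len(c)) / len(rows) *
               (sum(t) / len(t) - sum(c) / len(c))
               for t, c in strata.values())

# Rows are (prior_behavior, exposed, outcome): exposure is much likelier when
# prior_behavior = 1, which also raises the outcome on its own (confounding).
rows = [(0, 1, 0), (0, 0, 0), (0, 0, 0),
        (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 0, 1), (1, 0, 0)]
print(naive_effect(rows))       # 0.5
print(stratified_effect(rows))  # 0.3125: adjustment removes part of the bias
```

The paper's result is that, with rich enough behavioral covariates, this kind of adjustment recovers estimates statistically indistinguishable from the randomized experiment.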
Population-based studies on violent crime and background factors may provide an understanding of the relationships between susceptibility factors and crime. We aimed to determine the distribution of violent crime convictions in the Swedish population 1973-2004 and to identify criminal, academic, parental, and psychiatric risk factors for persistence in violent crime. The nationwide multi-generation register was used with many other linked nationwide registers to select participants. All individuals born in 1958-1980 (2,393,765 individuals) were included. Persistent violent offenders (those with a lifetime history of three or more violent crime convictions) were compared with individuals having one or two such convictions, and to matched non-offenders. Independent variables were gender, age of first conviction for a violent crime, nonviolent crime convictions, and diagnoses for major mental disorders, personality disorders, and substance use disorders. A total of 93,642 individuals (3.9%) had at least one violent conviction. The distribution of convictions was highly skewed; 24,342 persistent violent offenders (1.0% of the total population) accounted for 63.2% of all convictions. Persistence in violence was associated with male sex (OR 2.5), personality disorder (OR 2.3), violent crime conviction before age 19 (OR 2.0), drug-related offenses (OR 1.9), nonviolent criminality (OR 1.9), substance use disorder (OR 1.9), and major mental disorder (OR 1.3). The majority of violent crimes are perpetrated by a small number of persistent violent offenders, typically males, characterized by early onset of violent criminality, substance abuse, personality disorders, and nonviolent criminality.
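The headline skew statistic ("1.0% of the total population accounted for 63.2% of all convictions") is a simple concentration measure; the calculation can be sketched as (toy data, not the Swedish registry):

```python
def top_share(conviction_counts, top_fraction):
    """Share of all convictions accounted for by the most-convicted
    `top_fraction` of individuals in the list."""
    counts = sorted(conviction_counts, reverse=True)
    k = max(1, round(top_fraction * len(counts)))
    return sum(counts[:k]) / sum(counts)

# Toy: 10 convicted individuals; the top 10% (one person) holds 7 of 16 convictions.
share = top_share([7, 1, 1, 1, 1, 1, 1, 1, 1, 1], 0.10)
print(share)  # 0.4375
```

Applied to the registry data, the same computation over the full population (including the non-convicted majority, who contribute zeros to the denominator of people but not of convictions) yields the 1%/63.2% figure.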
Goodnight Moon is an American children's book written by Margaret Wise Brown and illustrated by Clement Hurd. It was published on September 3, 1947, and is a highly acclaimed bedtime story. It features a bunny saying "good night" to everything around: "Goodnight room. Goodnight moon. Goodnight cow jumping over the moon. Goodnight light, and the red balloon ...".
We often identify people using face images. This is true in occupational settings such as passport control as well as in everyday social environments. Mapping between images and identities assumes that facial appearance is stable within certain bounds. For example, a person's apparent age, gender and ethnicity change slowly, if at all. It also assumes that deliberate changes beyond these bounds (i.e., disguises) would be easy to spot. Hyper-realistic face masks overturn these assumptions by allowing the wearer to look like an entirely different person. If unnoticed, these masks break the link between facial appearance and personal identity, with clear implications for applied face recognition. However, to date, no one has assessed the realism of these masks, or specified conditions under which they may be accepted as real faces. Herein, we examined incidental detection of unexpected but attended hyper-realistic masks in both photographic and live presentations. Experiment 1 (UK; n = 60) revealed no evidence for overt detection of hyper-realistic masks among real face photos, and little evidence of covert detection. Experiment 2 (Japan; n = 60) extended these findings to different masks, mask-wearers and participant pools. In Experiment 3 (UK and Japan; n = 407), passers-by failed to notice that a live confederate was wearing a hyper-realistic mask and showed limited evidence of covert detection, even at close viewing distance (5 vs. 20 m). Across all of these studies, viewers accepted hyper-realistic masks as real faces. Specific countermeasures will be required if detection rates are to be improved.
X-Men is a Canadian-American superhero animated television series which debuted on October 31, 1992, in the United States on the Fox Kids Network. X-Men was Marvel Comics' second attempt at an animated X-Men TV series after the pilot, X-Men: Pryde of the X-Men, was not picked up.
Writer-photographer Kyoichi Tsuzuki visited a hundred apartments, condos, and houses, documenting what he saw in more than 400 color photos that show the real Tokyo style—a far cry from the serene gardens, shoji screens, and Zen minimalism usually associated with Japanese dwellings.
In this Tokyo, necessities such as beds, bathrooms, and kitchens vie for space with electronic gadgets, musical instruments, clothes, books, records, and kitschy collectibles. Candid photos vividly capture the dizzying “cockpit effect” of living in a snug space crammed floor to ceiling with stuff. And it’s not just bohemian types and students who must fit their lives and work into tight quarters, but professionals and families with children, too. In descriptive captions, the inhabitants discuss the ingenious ways they’ve adapted their home environments to suit their diverse lifestyles.
The Great Happiness Space: Tale of an Osaka Love Thief is a 2006 documentary film by Jake Clennell, describing a host club in Osaka. The male hosts and their female customers are interviewed, and through the interviews we learn about the nature of host clubs and why the customers come there.
Subscription page for the monthly gwern.net newsletter. There are monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews. You can also browse the archives since December 2013.
The objective of this study was (1) to determine the impact of prenatal diagnosis (PND) for Huntington disease (HD) on subsequent reproductive choices and family structure; and (2) to assess whether children born after PND were informed of their genetic status. Out of 354 presymptomatic carriers of the HD gene mutation, aged 18-45 years, 61 couples requested 101 PNDs. Fifty-four women, 29 female carriers and 25 spouses of male carriers, agreed to be interviewed (0.6-16.3 years after the last PND, median 6.5 years) on their obstetrical history and the information given to children born after PND. Women were willing to undergo two or more PNDs, with a final success rate of 75%. Reproductive decisions differed depending on the outcome of the first PND. If favourable, 62% of couples decided against another pregnancy and 10% chose to have an untested child. If unfavourable, 83% decided on another pregnancy (p < 0.01), and the majority (87%) re-entered the PND procedure. In contrast, after a second PND, only 37% asked for a PND and 30% chose to have an untested child. Thirty-three percent had both tested and untested children. Among children born after PND, 10 years and older, 75% were informed of their genetic status. The decision to prevent transmission of the HD mutation is made anew with each pregnancy. Couples may need more psychological support after PND, and pre-counselling sessions should take into account the effect of the outcome of a first PND on subsequent reproductive choices.
This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.
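The core idea of REINFORCE is that a stochastic unit can be trained by nudging its parameters in the direction of the score function ∂ log π(action)/∂θ, scaled by (reward − baseline), with no explicit gradient of the reward itself ever computed. A minimal illustrative sketch, using a single Bernoulli unit on a hypothetical two-armed bandit (the reward probabilities and hyperparameters here are invented for the example, not taken from the paper):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical two-armed bandit: arm 1 pays off with probability 0.9, arm 0 with 0.1.
def reward(action):
    return 1.0 if random.random() < (0.9 if action == 1 else 0.1) else 0.0

theta = 0.0      # single policy parameter of the Bernoulli unit
alpha = 0.1      # learning rate
baseline = 0.0   # running average reward, used as a variance-reducing baseline

for t in range(2000):
    p1 = sigmoid(theta)                        # probability of choosing arm 1
    action = 1 if random.random() < p1 else 0  # sample from the stochastic unit
    r = reward(action)
    # Score function for a Bernoulli unit with logistic parameterization:
    # d/dtheta log pi(action) = action - p1
    grad_log_pi = action - p1
    # REINFORCE update: step along the (sampled) gradient of expected reward.
    theta += alpha * (r - baseline) * grad_log_pi
    baseline += 0.01 * (r - baseline)          # slowly track the mean reward

# After training, the policy should strongly prefer the better arm.
print(sigmoid(theta))
```

Note that the update uses only the sampled action, its reward, and the local log-probability gradient; this is what the abstract means by making weight adjustments along the gradient of expected reinforcement "without explicitly computing gradient estimates."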
Hex is a two-player abstract strategy board game in which players attempt to connect opposite sides of a hexagonal board. Hex was invented by mathematician and poet Piet Hein in 1942 and independently by John Nash in 1948.