The pathophysiology of antisocial personality disorder (ASPD) remains unclear. Although the most consistent biological finding is reduced grey matter volume in the frontal cortex, about 50% of the total liability to developing ASPD has been attributed to genetic factors. The contributing genes remain largely unknown. Therefore, we sought to study the genetic background of ASPD. We conducted a genome-wide association study (GWAS) and a replication analysis of Finnish criminal offenders fulfilling DSM-IV criteria for ASPD (n = 370, n = 5850 controls for the discovery sample; n = 173, n = 3766 controls for the replication sample). The GWAS resulted in suggestive associations of two clusters of single-nucleotide polymorphisms at 6p21.2 and at 6p21.32 in the human leukocyte antigen (HLA) region. Imputation of HLA alleles revealed an independent association with DRB1*01:01 (odds ratio (OR) = 2.19 (1.53–3.14), p = 1.9 × 10−5). Two polymorphisms at the 6p21.2 LINC00951–LRFN2 gene region were replicated in a separate data set, and rs4714329 reached genome-wide statistical significance (OR = 1.59 (1.37–1.85), p = 1.6 × 10−9) in the meta-analysis. The risk allele also associated with antisocial features in the general population conditioned on severe problems in the childhood family (β = 0.68, p = 0.012). Functional analysis in brain tissue in the open-access GTEx and Braineac databases revealed eQTL associations of rs4714329 with LINC00951 and LRFN2 in the cerebellum. In humans, LINC00951 and LRFN2 are both expressed in the brain, especially in the frontal cortex, which is intriguing considering the role of the frontal cortex in behavior and the neuroanatomical findings of reduced grey matter volume in ASPD. To our knowledge, this is the first study showing genome-wide statistically significant and replicable findings on genetic variants associated with any personality disorder.
Educated people are generally healthier, have fewer comorbidities, and live longer than people with less education. Previous evidence about the effects of education comes from observational studies, many of which are affected by residual confounding. Legal changes to the minimum school-leaving age are a natural experiment which provides a potentially more robust source of evidence about the effects of schooling. Previous studies have exploited this natural experiment using population-level administrative data to investigate mortality, and relatively small surveys to investigate the effect on morbidity.
Here, we add to the evidence using data from a large sample from the UK Biobank. We exploit the raising of the school-leaving age in the UK in September 1972 as a natural experiment, using regression discontinuity and instrumental variable estimators to identify the causal effects of staying on in school. Remaining in school was positively associated with 23 of 25 outcomes. After accounting for multiple hypothesis testing, we found evidence of causal effects on 12 outcomes; however, the associations of schooling with intelligence, smoking, and alcohol consumption may be due to genomic and socioeconomic confounding factors. Education affects some, but not all, health and socioeconomic outcomes.
Differences between educated and less educated people may be partially due to residual genetic and socioeconomic confounding.
Significance Statement: On average, people who choose to stay in education for longer are healthier, wealthier, and live longer. We investigated the causal effects of education on health, income, and well-being later in life. This is the largest study of its kind to date, and it has objective clinical measures of morbidity and aging. We found evidence that people who were forced to remain in school had higher wages and lower mortality. However, there was little evidence of an effect on intelligence later in life. Furthermore, estimates of the effects of education using conventionally adjusted regression analysis are likely to suffer from genomic and socioeconomic confounding. In conclusion, education affects some, but not all, health outcomes later in life.
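The regression-discontinuity logic behind this design can be illustrated with simulated data: compare people born just before and just after the cohort cutoff created by the 1972 reform, and read the causal effect off the jump at the cutoff. Everything below (cohort window, effect size, noise level) is invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated birth cohorts around the September 1957 cutoff (people born after
# it were required to stay in school an extra year by the 1972 reform).
n = 20_000
cohort = rng.uniform(-24, 24, n)           # birth month relative to the cutoff
treated = (cohort >= 0).astype(float)      # affected by the raised leaving age
# Outcome: a smooth trend in cohort plus a true discontinuity of 0.30 at the cutoff.
outcome = 0.01 * cohort + 0.30 * treated + rng.normal(0, 1, n)

# Local linear regression on each side of the cutoff: fit
# outcome ~ 1 + cohort + treated + treated*cohort and read off the jump.
X = np.column_stack([np.ones(n), cohort, treated, treated * cohort])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
effect = beta[2]   # estimated discontinuity at the cutoff
print(round(effect, 2))
```

The identifying assumption is that people born a few months either side of the cutoff differ only in their exposure to the reform, so the jump is attributable to the extra schooling.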
Funding: The Medical Research Council (MRC) and the University of Bristol fund the MRC Integrative Epidemiology Unit [MC_UU_12013/1, MC_UU_12013/9]. NMD is supported by the Economics and Social Research Council (ESRC) via a Future Research Leaders Fellowship [ES/N000757/1]. The research described in this paper was specifically funded by a grant from the Economics and Social Research Council for Transformative Social Science. No funding body has influenced data collection, analysis, or its interpretation. This publication is the work of the authors, who serve as the guarantors for the contents of this paper. This work was carried out using the computational facilities of the Advanced Computing Research Centre and the Research Data Storage Facility of the University of Bristol. This research was conducted using the UK Biobank Resource.
Data access: The statistical code used to produce these results can be accessed here. The final analysis dataset used in this study is archived with UK Biobank, and can be accessed by contacting UK Biobank at firstname.lastname@example.org.
The timing of puberty is a highly polygenic childhood trait that is epidemiologically associated with various adult diseases. Here, we analyse 1000-Genome reference panel imputed genotype data on up to ~370,000 women and identify 389 independent signals (all p < 5×10−8) for age at menarche, a notable milestone in female pubertal development. In Icelandic data from deCODE, these signals explain ~7.4% of the population variance in age at menarche, corresponding to one quarter of the estimated heritability. We implicate over 250 genes via coding variation or associated gene expression, and demonstrate enrichment across genes active in neural tissues. We identify multiple rare variants near the imprinted genes MKRN3 and DLK1 that exhibit large effects on menarche only when paternally inherited. Disproportionate effects of variants on early or late puberty timing are observed: single variant and heritability estimates are larger for early than late puberty timing in females. The opposite pattern is seen in males, with larger estimates for late than early puberty timing. Mendelian randomization analyses indicate causal inverse associations, independent of BMI, between puberty timing and risks for breast and endometrial cancers in women, and prostate cancer in men. In aggregate, our findings reveal new complexity in the genetic regulation of puberty timing and support new causal links with adult cancer risks.
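The Mendelian-randomization logic used here can be reduced to the Wald ratio: a variant's effect on the outcome divided by its effect on the exposure estimates the causal effect of exposure on outcome, assuming the variant affects the outcome only through the exposure. The numbers below are invented for illustration and are not the paper's estimates.

```python
# Hypothetical per-allele effects for a single menarche-associated variant.
beta_variant_on_menarche = 0.20   # years later menarche per allele (exposure)
beta_variant_on_cancer = -0.01    # log-odds of breast cancer per allele (outcome)

# Wald ratio: causal effect of a one-year-later menarche on cancer log-odds,
# under the instrumental-variable assumptions above.
wald_ratio = beta_variant_on_cancer / beta_variant_on_menarche
print(wald_ratio)
```

In practice such ratios are combined across many independent variants (e.g. by inverse-variance weighting), which is what lets the paper test causal direction independent of BMI.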
Susceptibility to obesity in today’s environment has a strong genetic component. Lower socioeconomic position (SEP) is associated with a higher risk of obesity, but it is not known whether it accentuates genetic susceptibility to obesity. We aimed to use up to 120,000 individuals from the UK Biobank study to test the hypothesis that measures of socioeconomic position accentuate genetic susceptibility to obesity.
We used the Townsend deprivation index (TDI) as the main measure of socioeconomic position, and a 69-variant genetic risk score (GRS) as a measure of genetic susceptibility to obesity. We also tested the hypothesis that interactions between genetics and socioeconomic position would result in evidence of interaction with individual measures of the obesogenic environment and behaviours that correlate strongly with socioeconomic position, even if they have no obesogenic role. These measures included self-reported TV watching, diet, and physical activity, and an objective measure of activity derived from accelerometers. We performed several negative control tests, including a simulated environment correlated with BMI but not TDI, and sun protection use. We found evidence of gene-environment interactions with TDI (p-interaction = 3×10−10) such that, within the group of 50% living in the most relatively deprived situations, carrying 10 additional BMI-raising alleles was associated with approximately 3.8 kg extra weight in someone 1.73 m tall. In contrast, within the group of 50% living in the least deprived situations, carrying 10 additional BMI-raising alleles was associated with approximately 2.9 kg extra weight. We also observed evidence of interaction between sun protection use and genetics, suggesting that residual confounding may result in evidence of non-causal interactions [especially given such a weak PGS…].
Our findings provide evidence that relative social deprivation best captures aspects of the obesogenic environment that accentuate the genetic predisposition to obesity in the UK.
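The gene-environment interaction test described above amounts to a regression with a GRS × deprivation product term. A minimal sketch with simulated data, assuming made-up effect sizes (a larger per-allele effect in the more deprived half):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a genetic risk score as the allele count across 69 biallelic
# variants, a binary deprivation indicator, and a BMI-like outcome in which
# each risk allele matters more in the deprived half. All numbers invented.
n = 50_000
grs = rng.binomial(2, 0.4, (n, 69)).sum(axis=1)   # 0-138 risk alleles
deprived = rng.integers(0, 2, n)                  # 1 = more deprived half
bmi = 25 + 0.10 * grs + 0.03 * grs * deprived + rng.normal(0, 3, n)

# Fit BMI ~ GRS + deprived + GRS:deprived by ordinary least squares;
# the interaction coefficient beta[3] is the quantity of interest.
X = np.column_stack([np.ones(n), grs, deprived, grs * deprived])
beta, *_ = np.linalg.lstsq(X, bmi, rcond=None)

# Per-allele effect in each half; 10 extra alleles add ~10x this much BMI.
print("less deprived:", round(beta[1], 3), "more deprived:", round(beta[1] + beta[3], 3))
```

The paper's negative controls (a simulated environment, sun protection use) exist precisely because this product-term test can pick up scale artifacts and residual confounding rather than true effect modification.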
Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to apply to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows it to better generalize. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently.
We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), and (4) is end-to-end trainable and does not need feature engineering, feature matching between frames, or 3D reconstruction of the environment.
The supplementary video can be accessed at the following link: https://youtu.be/SmBxMDiOrvs.
2016-covington.pdf#google: “Deep Neural Networks for YouTube Recommendations”, (2016-09-15):
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model [since upgraded to REINFORCE]. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
[Keywords: recommender system, deep learning, scalability]
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
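The two-part perceptual objective can be sketched numerically. The feature extractor and discriminator output below are stand-ins (the paper computes the content loss on VGG feature maps and trains a real discriminator), and the 10⁻³ adversarial weight is illustrative:

```python
import numpy as np

def perceptual_loss(feat_sr, feat_hr, d_sr, adv_weight=1e-3):
    """Perceptual loss in the spirit of SRGAN: a content loss computed as the
    MSE between feature maps of the super-resolved (SR) and high-resolution
    (HR) images, plus a weighted adversarial term rewarding the generator for
    fooling the discriminator. d_sr is the discriminator's probability that
    the SR image is a real photo; adv_weight is an assumed value."""
    content = np.mean((feat_sr - feat_hr) ** 2)
    adversarial = -np.log(d_sr + 1e-12)   # large when the discriminator is not fooled
    return content + adv_weight * adversarial

# Toy check: identical features plus a fooled discriminator give near-zero loss.
f = np.ones((4, 4))
print(perceptual_loss(f, f, d_sr=0.999))
```

Replacing pixel-space MSE with a feature-space content loss is what lets the network trade peak signal-to-noise ratio for the high-frequency texture the MOS tests reward.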
This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype (the hypernetwork) and a phenotype (the main network). Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as a relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks, including character-level language modelling, handwriting generation, and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
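The core mechanism can be shown in a few lines: a small shared network maps a per-layer embedding to that layer's weight matrix, so the main network's layers are parameterized indirectly. This numpy sketch assumes a linear hypernetwork and made-up dimensions, a simplification of the architectures in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# A linear "hypernetwork" H maps an 8-dim layer embedding z to a 16x16 weight
# matrix for a layer of the main network. Layers share H but differ in z, a
# relaxed form of weight-sharing across layers.
embed_dim, in_dim, out_dim = 8, 16, 16
H = rng.normal(0, 0.1, (in_dim * out_dim, embed_dim))

def layer_weights(z):
    """Generate a main-network weight matrix from a layer embedding z."""
    return (H @ z).reshape(out_dim, in_dim)

def main_network(x, embeddings):
    """Forward pass through layers whose weights are all generated by H."""
    for z in embeddings:
        x = np.tanh(layer_weights(z) @ x)
    return x

embeddings = [rng.normal(0, 1, embed_dim) for _ in range(3)]  # 3 layers
y = main_network(rng.normal(0, 1, in_dim), embeddings)
print(y.shape)
```

In training, gradients flow through the generated weights back into H and the embeddings, so the learnable parameter count is set by H (here 256×8) rather than by the main network's full weight matrices (3×256).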
Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
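The paper's central point reduces to one line of total-probability arithmetic: the expert's figure is really P(catastrophe | argument sound), and the unconditional risk must also count the case where the argument fails. The three probabilities below are invented purely for illustration:

```python
# P(X) = P(X | sound) * P(sound) + P(X | flawed) * P(flawed)
p_x_given_sound = 1e-10   # risk claimed by the expert's calculation
p_flawed = 1e-3           # chance the theory, model, or calculation is wrong
p_x_given_flawed = 1e-4   # residual risk if the calculation cannot be trusted

p_x = p_x_given_sound * (1 - p_flawed) + p_x_given_flawed * p_flawed
print(p_x)   # dominated by the flawed-argument term
```

With these numbers the headline estimate of 10⁻¹⁰ is irrelevant: the overall risk is near 10⁻⁷, set almost entirely by the chance the argument itself is flawed, which is the sense in which very low published estimates of catastrophic risk are suspect.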
Neuroimaging has largely focused on 2 goals: mapping associations between neuroanatomical features and phenotypes and building individual-level prediction models. This paper presents a complementary analytic strategy called morphometricity that aims to measure the neuroanatomical signatures of different phenotypes.
Inspired by prior work on [genetic] heritability, we define morphometricity as the proportion of phenotypic variation that can be explained by brain morphology (e.g., as captured by structural brain MRI). In the dawning era of large-scale datasets comprising traits across a broad phenotypic spectrum, morphometricity will be critical in prioritizing and characterizing behavioral, cognitive, and clinical phenotypes based on their neuroanatomical signatures. Furthermore, the proposed framework will be important in dissecting the functional, morphological, and molecular underpinnings of different traits.
…Complex physiological and behavioral traits, including neurological and psychiatric disorders, often associate with distributed anatomical variation. This paper introduces a global metric, called morphometricity, as a measure of the anatomical signature of different traits. Morphometricity is defined as the proportion of phenotypic variation that can be explained by macroscopic brain morphology.
We estimate morphometricity via a linear mixed-effects model that uses an anatomical similarity matrix computed based on measurements derived from structural brain MRI scans. We examined over 3,800 unique MRI scans from 9 large-scale studies to estimate the morphometricity of a range of phenotypes, including clinical diagnoses such as Alzheimer’s disease, and nonclinical traits such as measures of cognition.
Our results demonstrate that morphometricity can provide novel insights about the neuroanatomical correlates of a diverse set of traits, revealing associations that might not be detectable through traditional statistical techniques.
[Keywords: neuroimaging, brain morphology, statistical association]
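The variance-component idea behind morphometricity can be sketched with a simulation: given an anatomical similarity matrix K between subjects, model the standardized phenotype as y ~ N(0, m²·K + (1−m²)·I) and estimate m², the share of variance explained by morphology. The paper fits a linear mixed-effects model; the grid-search maximum likelihood below is a simplified stand-in with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in MRI-derived measurements and the resulting similarity matrix.
n, p = 400, 100
features = rng.normal(0, 1, (n, p))
K = features @ features.T / p               # anatomical similarity matrix

# Simulate a phenotype with a known morphometricity of 0.6.
true_m2 = 0.6
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))
y = np.sqrt(true_m2) * (L @ rng.normal(0, 1, n)) + np.sqrt(1 - true_m2) * rng.normal(0, 1, n)

def loglik(m2):
    """Gaussian log-likelihood of y under cov = m2*K + (1-m2)*I."""
    cov = m2 * K + (1 - m2) * np.eye(n)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + y @ np.linalg.solve(cov, y))

grid = np.linspace(0.01, 0.99, 99)
m2_hat = grid[np.argmax([loglik(m2) for m2 in grid])]
print(m2_hat)
```

This is the direct analogue of SNP-heritability estimation with a genetic relationship matrix, with anatomical similarity playing the role of genetic similarity.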
2018-bessadok.pdf: “Intact Connectional Morphometricity Learning Using Multi-view Morphological Brain Networks with Application to Autism Spectrum Disorder”, Alaa Bessadok, Islem Rekik
Genomic selection—the prediction of breeding values using DNA polymorphisms—is a disruptive method that has widely been adopted by animal and plant breeders to increase crop, forest and livestock productivity and ultimately secure food and energy supplies. It improves breeding schemes in different ways, depending on the biology of the species and genotyping and phenotyping constraints. However, both genomic selection and classical phenotypic selection remain difficult to implement because of the high genotyping and phenotyping costs that typically occur when selecting large collections of individuals, particularly in early breeding generations. To specifically address these issues, we propose a new conceptual framework called phenomic selection, which consists of a prediction approach based on low-cost and high-throughput phenotypic descriptors rather than DNA polymorphisms. We applied phenomic selection on two species of economic interest (wheat and poplar) using near-infrared spectroscopy on various tissues. We showed that one could reach accurate predictions in independent environments for developmental and productivity traits and tolerance to disease. We also demonstrated that under realistic scenarios, one could expect much higher genetic gains with phenomic selection than with genomic selection. Our work constitutes a proof of concept and is the first attempt at phenomic selection; it clearly provides new perspectives for the breeding community, as this approach is theoretically applicable to any organism and does not require any genotypic information.
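The prediction machinery of phenomic selection is the same as in genomic selection, with spectral bands substituted for DNA markers. A minimal ridge-regression sketch with simulated near-infrared spectra (dimensions, effect sizes, and the penalty value are all invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated NIR spectra (n individuals x 200 spectral bands) and a trait that
# depends linearly on the spectra plus noise. All numbers illustrative.
n_train, n_test, n_bands = 300, 100, 200
spectra = rng.normal(0, 1, (n_train + n_test, n_bands))
true_w = rng.normal(0, 0.2, n_bands)
trait = spectra @ true_w + rng.normal(0, 1.0, n_train + n_test)

Xtr, Xte = spectra[:n_train], spectra[n_train:]
ytr, yte = trait[:n_train], trait[n_train:]

# Ridge regression (the same shrinkage estimator used in genomic-selection
# models such as rrBLUP, with bands in place of SNPs).
lam = 10.0
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_bands), Xtr.T @ ytr)
pred = Xte @ w
accuracy = np.corrcoef(pred, yte)[0, 1]   # prediction accuracy, as in breeding
print(round(accuracy, 2))
```

Because spectra are far cheaper to collect than genotypes, even a modest drop in per-individual accuracy can yield higher genetic gain per unit cost, which is the economic argument the paper makes.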
When syphilis first appeared in Europe in 1495, it was an acute and extremely unpleasant disease. After only a few years it was less severe than it once was, and it changed over the next 50 years into a milder, chronic disease. The severe early symptoms may have been the result of the disease being introduced into a new host population without any resistance mechanisms, but the change in virulence is most likely to have happened because of selection favouring milder strains of the pathogen. The symptoms of the virulent early disease were both debilitating and obvious to potential sexual partners of the infected, and strains that caused less obvious or painful symptoms would have enjoyed a higher transmission rate.
2012-eckerberg.pdf: “untitled”, Berndt Eckerberg, Arne Lowden, Roberta Nagai, Torbjörn Åkerstedt
“Capacity-approaching DNA storage”, (2016-09-09):
Humanity produces data at exponential rates, creating a growing demand for better storage devices. DNA molecules are an attractive medium to store digital information due to their durability and high information density. Recent studies have made large strides in developing DNA storage schemes by exploiting the advent of massive parallel synthesis of DNA oligos and the high throughput of sequencing platforms. However, most of these experiments reported small gaps and errors in the retrieved information. Here, we report a strategy to store and retrieve DNA information that is robust and approaches the theoretical maximum of information that can be stored per nucleotide. The success of our strategy lies in careful adaptation of recent developments in coding theory to the domain-specific constraints of DNA storage. To test our strategy, we stored an entire computer operating system, a movie, a gift card, and other computer files with a total of 2.14×10⁶ bytes in DNA oligos. We were able to fully retrieve the information without a single error even with a sequencing throughput on the scale of a single tile of an Illumina sequencing flow cell. To further stress our strategy, we created a deep copy of the data by PCR amplifying the oligo pool in a total of nine successive reactions, reflecting one complete path of an exponential process to copy the file 218×10¹² times. We perfectly retrieved the original data with only five million reads. Taken together, our approach opens the possibility of highly reliable DNA-based storage that approaches the information capacity of DNA molecules and enables virtually unlimited data retrieval.
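The capacity ceiling the abstract refers to comes from DNA's 4-letter alphabet: at most 2 bits per nucleotide. A toy encoder/decoder for that baseline mapping (the paper's actual scheme adds coding-theory machinery on top, for robustness and to respect synthesis constraints such as avoiding homopolymer runs):

```python
# Baseline 2-bits-per-nucleotide mapping: 4 bases encode 4 two-bit symbols.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Encode bytes as a DNA string, 4 nucleotides per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Invert encode(): recover the original bytes from a DNA string."""
    bits = "".join(TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

oligo = encode(b"hi")
print(oligo, decode(oligo))
```

Real schemes trade away some of this 2-bit ceiling for error tolerance and biochemical constraints; "capacity-approaching" means keeping that overhead close to the theoretical minimum.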
“The Merchant and the Alchemist's Gate”, (2007-09):
This fantasy short story by Ted Chiang follows Fuwaad ibn Abbas, a fabric merchant in the ancient city of Baghdad. It begins when he is searching for a gift to give a business associate and happens to discover a new shop in the marketplace. The shop owner, who makes and sells a variety of very interesting items, invites Fuwaad into the back workshop to see a mysterious black stone arch which serves as a gateway into the future, which the shop owner has made by the use of alchemy. Fuwaad is intrigued, and the shop owner tells him 3 stories of others who have traveled through the gate to meet and converse with their future selves. When Fuwaad learns that the shop owner has another gate in Cairo that will allow people to travel even into the past, he makes the journey there to try to rectify a mistake he made 20 years earlier. [Summary adapted from Wikipedia]