“A Prospective Study of Sudden Cardiac Death among Children and Young Adults”, Bagnall et al 2016 (Rare genetic mutations implicated in >13% of unexplained cardiac deaths. Makes one wonder how much of the long tail of deaths are due to rare variants, and how much of a benefit there would be to erasing the ~80k mutations everyone carries… (Population genetics rule: rare variants are more harmful than common variants; and 1 harmful mutation, 1 reproductive death to purge it from the population.) Cool application of genetics.)
“Okhrana: The Paris Operations of the Russian Imperial Police” (The back and forth secret war of the Okhrana with the myriads of Russian revolutionaries across Europe, documented by the complete archives of the Paris Okhrana office smuggled to America after the Russian Revolution. When you see how easily and thoroughly the Okhrana had infiltrated the Russian revolutionaries, you start to see why the Communist leadership would be extraordinarily paranoid about spies—but also that the revolutionaries were, well before the Revolution, generally highly nasty folks; many of the mentioned revolutionaries would be summarily executed by their comrades.)
“The Unbelievable Tale of Jesus’s Wife”, King’s response; “Bats on the Ceiling: The Gospel of St. Karen” (The Gospel of Jesus’s Wife is a modern forgery. But it gets weirder. And kinkier. Forgeries like this always raise troubling issues about religious scriptures: if this forgery had been kept in private collections for another century before becoming known, in all likelihood, most of the damning evidence would either have disappeared or become inaccessible, and all that would be left is a few worries over the appearance. Most scriptures have even more vexed provenances than does the Gospel of Jesus’s Wife, with blackouts of centuries not uncommon, and known destruction of variants (eg the well-known destruction of all variants of the Koran). Of course, you might think, who would dare counterfeit the Word of God Himself? Yet, humans are strange and inscrutable and can talk themselves into anything—why did Fritz do it? And why did King go along so easily and unrepentantly? Probably even they don’t really know. Who knows how many Fritzes there have been throughout history…)
“Unifying Count-Based Exploration and Intrinsic Motivation”, Bellemare et al 2016 (Video; big exploration improvement for DQN-like agents: where DQN can only get to two rooms in Montezuma’s Revenge, because it takes actions mostly at random and it is unlikely that it will randomly do the 15 or 20 exact moves which will get it a reward, this version, with a way of measuring novelty and exploring novel states until a reward is found, can make it to 15 rooms!)
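The core trick in Bellemare et al 2016 is deriving a "pseudo-count" from any density model: from the model's probability of a state before and after one more observation of it, an effective visit count can be backed out, and rarely-visited states then earn an intrinsic novelty bonus. A minimal sketch in Python, using a simple empirical-frequency model in place of the paper's CTS pixel model (state names and numbers here are illustrative):

```python
# Sketch of the pseudo-count idea (Bellemare et al 2016), using an
# empirical-frequency density model over discrete states for illustration.

def pseudo_count(rho, rho_prime):
    """Recover an effective count from a density model's probability of a
    state before (rho) and after (rho_prime) observing it once more."""
    return rho * (1 - rho_prime) / (rho_prime - rho)

# With an empirical-frequency model, the pseudo-count recovers the true count:
counts = {"room1": 3, "room2": 1}
total = sum(counts.values())
rho = counts["room1"] / total                    # 3/4: density before update
rho_prime = (counts["room1"] + 1) / (total + 1)  # 4/5: density after one more visit
n_hat = pseudo_count(rho, rho_prime)             # ≈ 3, the actual visit count

# The intrinsic exploration bonus scales as (N + 0.01)^-1/2, so novel
# (low-count) states yield large bonuses and familiar ones small bonuses.
bonus = (n_hat + 0.01) ** -0.5
```

For a learned density model over raw pixels the pseudo-counts are no longer exact counts, but they behave analogously, which is what lets count-based exploration generalize to the non-tabular case.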
“Statistically Controlling for Confounding Constructs Is Harder than You Think”, Westfall & Yarkoni 2016 (This is part of why results in sociology/epidemiology/psychology are so unreliable: not only do they usually not control for genetics at all, they don’t even control for the things they think they control for. You have not controlled for SES by throwing in a discretized income variable measured in one year plus a discretized college-degree variable. Variables which correlate with or predict some outcome, such as poverty, may be doing no more than correcting some measurement error (frequently, due to the heavy genetic loading of most outcomes, correcting the omission of genetic information). This is why within-family designs are desirable even without worries about genetics: they hold constant shared-environment factors so you don’t need to measure or model them. Even a structural equation model (SEM) which explicitly incorporates measurement error may still have enough leakage to render ‘controlling’ misleading. See also Stouffer 1936/Thorndike 1942/Kahneman 1965.)
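The residual-confounding problem is easy to demonstrate by simulation. Below, an exposure has no causal effect on the outcome at all; both are driven by a latent construct ("SES"), and the analyst "controls" for a noisy proxy of it. All numbers are assumed purely for illustration:

```python
# Minimal simulation of Westfall & Yarkoni's point: regressing on a noisy
# proxy of a confound removes only part of the confounding. Assumed numbers.
import random
random.seed(0)

n = 20_000
C = [random.gauss(0, 1) for _ in range(n)]      # true (latent) SES construct
X = [c + random.gauss(0, 1) for c in C]         # exposure, correlated with SES
Y = [c + random.gauss(0, 1) for c in C]         # outcome, caused ONLY by SES
M = [c + random.gauss(0, 1) for c in C]         # noisy measured "SES" proxy

def ols2(x, z, y):
    """Coefficient on x in the regression y ~ x + z, via double
    residualization (Frisch-Waugh-Lovell); data are ~zero-mean."""
    def resid(a, b):
        beta = sum(ai * bi for ai, bi in zip(a, b)) / sum(bi * bi for bi in b)
        return [ai - beta * bi for ai, bi in zip(a, b)]
    rx, ry = resid(x, z), resid(y, z)
    return sum(a * b for a, b in zip(rx, ry)) / sum(a * a for a in rx)

# X has zero causal effect, yet after "controlling" for the proxy M its
# coefficient stays near 1/3, because M captures only half the variance of C.
b = ols2(X, M, Y)
print(b)   # ≈ 0.33, not 0
```

With these variances the asymptotic bias works out to exactly 1/3; only measuring the construct with far less error (or using a latent-variable model with enough indicators) drives the spurious coefficient toward zero.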
“Ann Roe’s scientists: original published papers” (One of the few data sets, excluding TIP/SMPY, of extremely intelligent people. I am still reading through them but one impression I get is that the education system in America when most of them were growing up around 1910–1920 was grossly inadequate & unchallenging; many of them seem to only drift into their field when they happen to run into a challenging course in college. Quite a few mention incredibly little access to books and severe poverty—although interestingly, they all come from families of clearly middle/upper-class descent, even if in some cases they are so poor as to be unable to afford shoes! Smart kids are so much better off these days with Internet access to anything at all they want to read. As I’ve noted in reading biographies of American scientists, the academic environment pre- and post-WWII is strikingly different from the pressure-cooker race to the bottom we are familiar with now. Relative underperformance in grades compared to females is also a running theme. With the chemists and physicists, home chemistry kits seem to have been nigh universal—which is something that sure doesn’t happen these days.)
The Neu Jorker (I particularly liked the profile of a woman’s courageous journey towards equestrianism, an investigation into some knotty issues, and a retrospective of the role of capes in NYC’s crime reduction over the past 3 decades.)
This page is a changelog for Gwern.net: a monthly reverse chronological list of recent major writings/changes/additions.
Following my writing can be a little difficult because it is often so incremental. So every month, in addition to my regular /r/Gwern subreddit submissions, I write up reasonably-interesting changes and send it out to the mailing list in addition to a compilation of links & reviews (archives).
A subreddit for posting links of interest and also for announcing updates to gwern.net (which can be used as an RSS feed). Submissions are categorized similarly to the monthly newsletter and typically will be collated there.
Genius Revisited documents the longitudinal results of a high-IQ/gifted-and-talented elementary school, Hunter College Elementary School (HCES); one of the most striking results is the general high education & income levels, but absence of great accomplishment on a national or global scale (eg a Nobel prize). The authors suggest that this may reflect harmful educational practices at their elementary school or the low predictive value of IQ.
I suggest that there is no puzzle to this absence nor anything for HCES to be blamed for, as the absence is fully explainable by their making two statistical errors: base-rate neglect, and regression to the mean.
First, their standards fall prey to a base-rate fallacy: even extreme predictive validity of IQ would not predict 1 or more Nobel prizes, because Nobel prize odds are measured at 1 in millions, and with a small total sample size of a few hundred, it is highly likely that there would simply be no Nobels.
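The arithmetic is stark even under generous assumptions. Granting each HCES graduate Nobel odds of 1 in 10,000 (a figure assumed here for illustration, still orders of magnitude above the general-population rate) and a cohort of roughly 600:

```python
# Base-rate check with assumed, deliberately generous, numbers:
# expected Nobel laureates in a cohort of 600 with 1-in-10,000 lifetime odds.
n, p = 600, 1 / 10_000
expected = n * p          # 0.06 expected laureates
p_none = (1 - p) ** n     # probability the cohort produces zero laureates
print(expected, p_none)   # 0.06, ~0.94
```

So even an implausibly elite cohort would most likely contain zero Nobelists, and the observed absence is uninformative about either the school or IQ.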
Secondly, and more seriously, the lack of accomplishment is inherent and unavoidable as it is driven by the regression to the mean caused by the relatively low correlation of early childhood with adult IQs—which means their sample is far less elite as adults than they believe. Using early-childhood/adult IQ correlations, regression to the mean implies that HCES students will fall from a mean of 157 IQ in kindergarten (when selected) to somewhere around 133 as adults (and possibly lower). Further demonstrating the role of regression to the mean, in contrast, HCES’s associated high-IQ/gifted-and-talented high school, Hunter High, which has access to the adolescents’ more predictive IQ scores, has much higher achievement in proportion to its lesser regression to the mean (despite dilution by Hunter elementary students being grandfathered in).
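The 157 → ~133 figure falls directly out of the standard regression-to-the-mean formula; the exact childhood-adult IQ correlation used below, r = 0.57, is an assumed value chosen to reproduce the quoted result:

```python
# Expected adult IQ = population mean + r × (childhood score − mean),
# assuming a childhood-to-adult IQ test correlation of r = 0.57.
r = 0.57
mean = 100
adult_expected = mean + r * (157 - mean)
print(adult_expected)   # ≈ 132.49, i.e. "somewhere around 133"
```

The same formula shows why Hunter High regresses less: adolescent IQ correlates far more strongly with adult IQ than kindergarten IQ does, so a larger fraction of the selection advantage survives into adulthood.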
This unavoidable statistical fact undermines the main rationale of HCES: extremely high-IQ adults cannot be accurately selected as kindergartners on the basis of a simple test. This greater-regression problem can be lessened by the use of additional variables in admissions, such as parental IQs or high-quality genetic polygenic scores; unfortunately, these are either politically unacceptable or dependent on future scientific advances. This suggests that such elementary schools may not be a good use of resources and HCES students should not be assigned scarce magnet high school slots.
Isaac Newton published few of his works, and only those he considered perfect after long delays. This leaves his system of the world, as described in the Principia and elsewhere, incomplete, and many questions simply unaddressed, like the fate of the Sun or the role of comets. But in 2 conversations with an admirer and his nephew, the elderly Newton sketched out the rest of his cosmogony.
According to Newton, the solar system is not stable and must be adjusted by angels; the Sun does not burn perpetually, but comets regularly fuel the Sun; and the final result is that humanity will be extinguished by a particularly large comet causing the sun to flare up, and requiring intelligent alien beings to arise on other planets or their moons. He further gives an anthropic argument: one reason we know that intelligent races regularly go extinct is that humanity itself arose only recently, as demonstrated by the recent innovations in every field, inconsistent with any belief that human beings have existed for hundreds of thousands or millions of years.
This is all interestingly wrong, particularly the anthropic argument. That Newton found it so absurd to imagine humanity existing for millions of years but only recently undergoing exponential improvements in technology demonstrates how counterintuitive and extraordinary the Industrial & Scientific Revolutions were.
A previous genome-wide association study (GWAS) of more than 100,000 individuals identified molecular-genetic predictors of educational attainment. We undertook in-depth life-course investigation of the polygenic score derived from this GWAS using the four-decade Dunedin Study (N = 918). There were five main findings. First, polygenic scores predicted adult economic outcomes even after accounting for educational attainments. Second, genes and environments were correlated: Children with higher polygenic scores were born into better-off homes. Third, children’s polygenic scores predicted their adult outcomes even when analyses accounted for their social-class origins; social-mobility analysis showed that children with higher polygenic scores were more upwardly mobile than children with lower scores. Fourth, polygenic scores predicted behavior across the life course, from early acquisition of speech and reading skills through geographic mobility and mate choice and on to financial planning for retirement. Fifth, polygenic-score associations were mediated by psychological characteristics, including intelligence, self-control, and interpersonal skill. Effect sizes were small. Factors connecting GWAS sequence with life outcomes may provide targets for interventions to promote population-wide positive development. [Keywords: genetics, behavior genetics, intelligence, personality, adult development]
Research has shown that genes play an important role in educational achievement. A key question is the extent to which the same genes affect different academic subjects before and after controlling for general intelligence. The present study investigated genetic and environmental influences on, and links between, the various subjects of the age-16 UK-wide standardized GCSE (General Certificate of Secondary Education) examination results for 12,632 twins. Using the twin method that compares identical and non-identical twins, we found that all GCSE subjects were substantially heritable, and that various academic subjects correlated substantially both phenotypically and genetically, even after controlling for intelligence. Further evidence for pleiotropy in academic achievement was found using a method based directly on DNA from unrelated individuals. We conclude that performance differences for all subjects are highly heritable at the end of compulsory education and that many of the same genes affect different subjects independent of intelligence.
“A Prospective Study of Sudden Cardiac Death among Children and Young Adults”, Richard D. Bagnall, Robert G. Weintraub, Jodie Ingles, Johan Duflou, Laura Yeates, Lien Lam, Andrew M. Davis, Tina Thompson, Vanessa Connell, Jennie Wallace, Charles Naylor, Jackie Crawford, Donald R. Love, Lavinia Hallam, Jodi White, Christopher Lawrence, Matthew Lynch, Natalie Morgan, Paul James, Desirée du Sart, Rajesh Puranik, Neil Langlois, Jitendra Vohra, Ingrid Winship, John Atherton, Julie McGaughran, Jonathan R. Skinner, Christopher Semsarian (2016-06-23):
Background: Sudden cardiac death among children and young adults is a devastating event. We performed a prospective, population-based, clinical and genetic study of sudden cardiac death among children and young adults.
Methods: We prospectively collected clinical, demographic, and autopsy information on all cases of sudden cardiac death among children and young adults 1 to 35 years of age in Australia and New Zealand from 2010 through 2012. In cases that had no cause identified after a comprehensive autopsy that included toxicologic and histologic studies (unexplained sudden cardiac death), at least 59 cardiac genes were analyzed for a clinically relevant cardiac gene mutation.
Results: A total of 490 cases of sudden cardiac death were identified. The annual incidence was 1.3 cases per 100,000 persons 1 to 35 years of age; 72% of the cases involved boys or young men. Persons 31 to 35 years of age had the highest incidence of sudden cardiac death (3.2 cases per 100,000 persons per year), and persons 16 to 20 years of age had the highest incidence of unexplained sudden cardiac death (0.8 cases per 100,000 persons per year). The most common explained causes of sudden cardiac death were coronary artery disease (24% of cases) and inherited cardiomyopathies (16% of cases). Unexplained sudden cardiac death (40% of cases) was the predominant finding among persons in all age groups, except for those 31 to 35 years of age, for whom coronary artery disease was the most common finding. Younger age and death at night were independently associated with unexplained sudden cardiac death as compared with explained sudden cardiac death. A clinically relevant cardiac gene mutation was identified in 31 of 113 cases (27%) of unexplained sudden cardiac death in which genetic testing was performed. During follow-up, a clinical diagnosis of an inherited cardiovascular disease was identified in 13% of the families in which an unexplained sudden cardiac death occurred.
Conclusions: The addition of genetic testing to autopsy investigation substantially increased the identification of a possible cause of sudden cardiac death among children and young adults.
Individual differences in breast size are a conspicuous feature of variation in human females and have been associated with fecundity and advantage in selection of mates. To identify common variants that are associated with breast size, we conducted a large-scale genotyping association meta-analysis in 7,169 women of European descent across three independent sample collections with digital or screen film mammograms. The samples consisted of the Swedish KARMA, LIBRO-1 and SASBAC studies genotyped on iCOGS, a custom Illumina iSelect genotyping array comprising 211,155 single nucleotide polymorphisms (SNPs) designed for replication and fine mapping of common and rare variants with relevance to breast, ovary and prostate cancer. Breast size of each subject was ascertained by measuring total breast area (mm²) on a mammogram. We confirm genome-wide significant associations at 8p11.23 (rs10086016, p = 1.3×10⁻¹⁴) and report a new locus at 22q13 (rs5995871, p = 3.2×10⁻⁸). The latter region contains the MKL1 gene, which has been shown to impact endogenous oestrogen receptor α transcriptional activity and is recruited on oestradiol sensitive genes. We also replicated previous genome-wide association study findings for breast size at four other loci. A new locus at 22q13 may be associated with female breast size.
We consider an agent’s uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma’s Revenge.
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.
Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of "objectness".
“Progressive Neural Networks”, Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell (2016-06-15):
Learning to solve complex sequences of tasks–while both leveraging transfer and avoiding catastrophic forgetting–remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed "Actor-Mimic", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods.
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short-term and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain’s specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two-stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.
In this paper, we use deep neural networks for inverting face sketches to synthesize photorealistic face images. We first construct a semi-simulated dataset containing a very large number of computer-generated face sketches with different styles and corresponding face images by expanding existing unconstrained face data sets. We then train models achieving state-of-the-art results on both computer-generated sketches and hand-drawn sketches by leveraging recent advances in deep learning such as batch normalization, deep residual learning, perceptual losses and stochastic optimization in combination with our new dataset. We finally demonstrate potential applications of our models in fine arts and forensic arts. In contrast to existing patch-based approaches, our deep-neural-network-based approach can be used for synthesizing photorealistic face images by inverting face sketches in the wild.
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as part of the more general framework of Structural Equation Modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework Latent Network Modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an explorative manner. In the second generalization, the residual variance-covariance structure of indicators is modeled as a network. We term this generalization Residual Network Modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: stepwise search algorithms for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset.
Previous research has indicated that education influences cognitive development, but it is unclear what, precisely, is being improved. Here, we tested whether education is associated with cognitive test score improvements via domain-general effects on general cognitive ability (g), or via domain-specific effects on particular cognitive skills. We conducted structural equation modeling on data from a large (n = 1,091), longitudinal sample, with a measure of intelligence at age 11 years and 10 tests covering a diverse range of cognitive abilities taken at age 70. Results indicated that the association of education with improved cognitive test scores is not mediated by g, but consists of direct effects on specific cognitive skills. These results suggest a decoupling of educational gains from increases in general intellectual capacity.
The use of tobacco products as dentifrices is still prevalent in various parts of India. Tobacco in dentifrices is a terrible scourge: the nicotine it contains motivates continued use despite its harmful effects. Indian legislation prohibits the use of nicotine in dentifrices. Nicotine is primarily injurious because it is responsible for tobacco addiction and is dependence-forming. The present study was motivated by an interest in examining the presence of nicotine in these dentifrices. Our earlier report indicated the presence of nicotine in toothpowders. To further curb the menace of tobacco, our team again analysed the toothpowder brands of previous years, and toothpastes as well. Eight brands of commonly used toothpastes and toothpowders were evaluated by gas chromatography-mass spectroscopy. On the whole, there have been a few successes, but much remains to be done. Our findings indicated the presence of nicotine in two brands of dant manjans and four brands of toothpastes. Further, our findings underscore the need for stringent regulation by the regulatory authorities to prevent the addition of nicotine to these dentifrices. Hence, government policy needs to target effective control of tobacco in dentifrices.
Orthostatic hypotension, also known as postural hypotension, is a medical condition wherein a person's blood pressure drops when standing up or sitting down. The drop in blood pressure may be sudden, occurring within 3 minutes of standing, or gradual. It is defined as a fall in systolic blood pressure of at least 20 mm Hg, or in diastolic blood pressure of at least 10 mm Hg, when a person assumes a standing position. It occurs predominantly through delayed constriction of the lower-body blood vessels, which is normally required to maintain an adequate blood pressure when changing position to standing. As a result, blood pools in the blood vessels of the legs for a longer period and less is returned to the heart, thereby leading to a reduced cardiac output and inadequate blood flow to the brain.
Visual snow, also known as visual static, is a condition in which people see white or black dots across part or all of their visual field. The condition is usually present at all times and can last for years.
Closed-eye hallucinations and closed-eye visualizations (CEV) are a distinct class of hallucination. These types of hallucinations generally only occur when one's eyes are closed or when one is in a darkened room. They can be a form of phosphene. Some people report closed-eye hallucinations under the influence of psychedelics. These are reportedly of a different nature than the "open-eye" hallucinations of the same compounds. Similar hallucinations that occur due to loss of vision are called visual release hallucinations.
A phosphene is the phenomenon of seeing light without light actually entering the eye. The word phosphene comes from the Greek words phos (light) and phainein (to show). Phosphenes that are induced by movement or sound may be associated with optic neuritis.
Approximately 30 satellite launches are insured each year, and insurance coverage is provided for about 200 in-orbit satellites. The total insured exposure for these risks is currently in excess of US$25 billion. Commercial communications satellites in geostationary Earth orbit represent the majority of these, although a larger number of commercial imaging satellites, as well as the second-generation communication constellations, will see the insurance exposure in low Earth orbit start to increase in the years ahead, from its current level of US$1.5 billion. Regulations covering Lloyd’s of London syndicates require that each syndicate reserves funds to cover potential losses and to remain solvent. New regulations under the European Union’s Solvency II directive now require each syndicate to develop models for the classes of insurance provided to determine their own solvency capital requirements. Solvency II is expected to come into force in 2016 to ensure improved consumer protection, modernized supervision, deepened EU market integration, and increased international competitiveness of EU insurers. For each class of business, the inputs to the solvency capital requirements are determined not just on previous results, but also to reflect extreme cases where an unusual event or sequence of events exposes the syndicate to its theoretical worst-case loss. To assist syndicates covering satellites to reserve funds for such extreme space events, a series of realistic disaster scenarios (RDSs) has been developed that all Lloyd’s syndicates insuring space risks must report upon on a quarterly basis. The RDSs are regularly reviewed for their applicability and were recently updated to reflect changes within the space industry to incorporate such factors as consolidation in the supply chain and the greater exploitation of low Earth orbit. The development of these theoretical RDSs will be reviewed, along with the limitations of such scenarios.
Changes in the industry that have warranted the recent update of the RDS, and the impact such changes have had will also be outlined. Finally, a look toward future industry developments that may require further amendments to the RDSs will also be covered by the article.
A louse-feeder was a job in interwar and Nazi-occupied Poland, at the Lviv Institute for Study of Typhus and Virology and the associated Institute in Kraków, Poland. Louse-feeders were human sources of blood for lice infected with typhus, which were then used to research possible vaccines against the disease.
Bridge of Spies is a 2015 historical drama film directed and co-produced by Steven Spielberg, written by Matt Charman and the Coen brothers, and starring Tom Hanks, Mark Rylance, Amy Ryan, and Alan Alda. Set during the Cold War, the film tells the story of lawyer James B. Donovan, who is entrusted with negotiating the release of Francis Gary Powers—a U.S. Air Force pilot whose U-2 spy plane was shot down over the Soviet Union in 1960—in exchange for Rudolf Abel, a convicted Soviet KGB spy held by the United States, whom Donovan represented at trial. The name of the film refers to the Glienicke Bridge, which connects Potsdam with Berlin, where the prisoner exchange took place. The film was an international co-production of the United States and Germany.
Basilisk is a Japanese manga series written and illustrated by Masaki Segawa. It was published in Japan in 2003 and 2004 in Kodansha's Young Magazine Uppers magazine, based on the novel The Kouga Ninja Scrolls by Futaro Yamada published in 1958. The anime, produced in 2005 by Gonzo, closely follows the manga aside from a handful of differences. The manga won the 2004 Kodansha Manga Award for general manga. Segawa continued producing serialized adaptations of Futaro Yamada's novels with The Yagyu Ninja Scrolls in 2005, Yama Fu-Tang in 2010, and Jū: Ninpō Makai Tensei in 2012. Additionally, a two-part novel sequel titled The Ouka Ninja Scrolls: Basilisk New Chapter, penned by Masaki Yamada, was published in 2015 with illustrations by Segawa; a manga adaptation, Basilisk: The Ouka Ninja Scrolls, illustrated by Tatsuya Shihira with character designs by Masaki Segawa, was serialized between 2017 and 2019, and an anime adaptation by Seven Arcs Pictures aired in 2018.
My Neighbor Seki is a Japanese manga series written and illustrated by Takuma Morishige. The series follows a girl named Rumi Yokoi who is constantly distracted by her neighboring classmate, Toshinari Seki, as he indulges in elaborate hobbies and somehow never gets caught in the process. Originally published as a one-shot in 2010, it started serialization in the November 2010 issue of Media Factory's Comic Flapper magazine. Vertical publishes the manga in North America. An original video animation by Shin-Ei Animation was released bundled with the limited edition of the manga's fifth volume on January 4, 2014, and a 21-episode television series adaptation aired in Japan between January and May 2014. A new manga series debuted on July 4, 2020.
Michiko & Hatchin is a Japanese anime television series. The series was produced by studio Manglobe and directed by Sayo Yamamoto, her first directorial work. The two eponymous starring roles are portrayed by noted Japanese film actresses Yōko Maki and Suzuka Ohgo. The character designs were provided by Hiroshi Shimizu, with Shigeto Koyama designing Michiko's bike and Mariko Yamagami and Shōgo Yamazaki in charge of character fashion design. The story takes place in the fictional country of Diamandra which has cultural traces from South American countries, mostly from Brazil. In the first episode, Michiko is introduced as a free-willed "sexy diva" who escapes from a supposedly inescapable prison fortress, while Hatchin is a girl fleeing her abusive foster family. The two join forces on an improbable escape to freedom. The music was composed by the Brazilian musician Alexandre Kassin and produced by Shinichirō Watanabe.
Subscription page for the monthly gwern.net newsletter. There are monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews. You can also browse the archives since December 2013.
Newsletter tag: archive of all issues back to 2013 for the gwern.net newsletter (monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews.)
The base rate fallacy, also called base rate neglect or base rate bias, is a type of cognitive error: when presented with both general base rate information and specific individuating information, people tend to ignore the base rate in favor of the individuating information, rather than correctly integrating the two.
In statistics, regression toward the mean is the phenomenon whereby, if a sample point of a random variable is extreme, a future point is likely to be closer to the mean on further measurement. To avoid making incorrect inferences, regression toward the mean must be considered when designing scientific experiments and interpreting data. Historically, what is now called regression toward the mean was also called reversion to the mean and reversion to mediocrity.
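A minimal simulation of the effect (illustrative numbers, not from the source): each measurement is a stable true ability plus independent noise, so subjects selected for an extreme first score are partly "lucky" and score closer to the mean on retest.

```python
# Sketch of regression toward the mean: score = true ability + noise.
import random

random.seed(0)
n = 100_000
ability = [random.gauss(0, 1) for _ in range(n)]
test1 = [a + random.gauss(0, 1) for a in ability]  # first measurement
test2 = [a + random.gauss(0, 1) for a in ability]  # independent retest

# Select subjects whose first score was extreme (roughly the top 2%).
top = [i for i in range(n) if test1[i] > 2.0]
mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)
print(f"selected group, first test: {mean1:.2f}; retest: {mean2:.2f}")
```

Because the test-retest correlation here is 0.5 (equal ability and noise variance), the selected group's retest mean falls to about half its first-test mean, despite no change in underlying ability.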