Gwern.net: HTTPS now mandatory; HTML sections rewritten using HTML5 semantic markup for better compatibility with screen readers & offline browsers like Instapaper/Pocket; “importance” metadata added to all pages to rank them; “belief” metadata renamed more intuitively as “confidence”; finished adding 301 redirects for all broken links & common typos; renamed all darknet market pages for current terminology; rewrote Patreon profile
“Network Morphism”, Wei et al 2016 (A nice inspiration from category theory. I wish I saw this out in the wild, because it would speed up architecture and hyperparameter optimization immeasurably if we didn’t have to keep starting from scratch with every model.)
The details on NK use of digital censorship are interesting: steady progress in locking down Bluetooth and WiFi by software and then hardware modifications; use of Android security system/DRM to install audit logs + regular screenshots to capture foreign media consumption; a whitelist/signed-media system to block said foreign media from ever being viewed, with auto-deletion of offending files; watermarking (courtesy of an American university’s misguided outreach) of media created on desktops to trace them; network blocking and surveillance; and efforts towards automatic bulk surveillance of text messages for ‘South-Korean-style’ phrases/words. Stallman’s warnings about DRM are quite prophetic in the NK context—the system is secured against the user… For these reasons & poverty, radio (including foreign radios like Voice of America) is—surprisingly to me—the top source of information for North Koreans.
The Birds (Took second place; “One of the greatest of Euripides’s surviving works, The Trojan Women, won only second prize in a contemporary competition. We know nothing about the play that came in first.”)
10 Cloverfield Lane (I was fortunate enough to forget entirely what this movie was about in between downloading & watching, and it kept me in suspense and surprised me, particularly with the ending. I appreciated the genre-savvy and competent female lead. A good psychological suspense/horror movie.)
Newsletter tag: archive of all issues back to 2013 for the gwern.net newsletter (monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews.)
This page is a changelog for Gwern.net: a monthly reverse chronological list of recent major writings/changes/additions.
Following my writing can be a little difficult because it is often so incremental. So every month, in addition to my regular /r/Gwern subreddit submissions, I write up reasonably-interesting changes and send it out to the mailing list in addition to a compilation of links & reviews (archives).
A subreddit for posting links of interest and also for announcing updates to gwern.net (which can be used as an RSS feed). Submissions are categorized similarly to the monthly newsletter and typically will be collated there.
“Psychiatric Genomics: An Update and an Agenda”, Patrick F. Sullivan, Arpana Agrawal, Cynthia M. Bulik, Ole A. Andreassen, Anders D. Børglum, Gerome Breen, Sven Cichon, Howard J. Edenberg, Stephen V. Faraone, Joel Gelernter, Carol A. Mathews, Caroline M. Nievergelt, Jordan Smoller, Michael C. O’Donovan, for the Psychiatric Genomics Consortium (2017-03-10):
The Psychiatric Genomics Consortium (PGC) is the largest consortium in the history of psychiatry. In the past decade, this global effort has delivered a rapidly increasing flow of new knowledge about the fundamental basis of common psychiatric disorders, particularly given its dedication to rapid progress and open science. The PGC has recently commenced a program of research designed to deliver “actionable” findings—genomic results that (a) reveal the fundamental biology, (b) inform clinical practice, and (c) deliver new therapeutic targets. This is the central idea of the PGC: to convert the family history risk factor into biologically, clinically, and therapeutically meaningful insights. The emerging findings suggest that we are entering into a phase of accelerated translation of genetic discoveries to impact psychiatric practice within a precision medicine framework.
PGC Coordinating Committee: Mark Daly, Michael Gill, John Kelsoe, Karestan Koenen, Douglas Levinson, Cathryn Lewis, Ben Neale, Danielle Posthuma, Jonathan Sebat, and Pamela Sklar.
Mood instability is a core clinical feature of affective disorders, particularly major depressive disorder (MDD) and bipolar disorder (BD). It may be a useful construct in line with the Research Domain Criteria (RDoC) approach, which proposes studying dimensional psychopathological traits that cut across diagnostic categories as a more effective strategy for identifying the underlying biology of psychiatric disorders. Here we report a genome-wide association study (GWAS) of mood instability in a very large study of 53,525 cases and 60,443 controls from the UK Biobank cohort, the only such GWAS reported to date. We identified four independent loci (on chromosomes eight, nine, 14 and 18) significantly associated with mood instability, with a common SNP-based heritability estimate for mood instability of approximately 8%. We also found a strong genetic correlation between mood instability and MDD (0.60, SE=0.07, p = 8.95 × 10^−17), a small but statistically significant genetic correlation with schizophrenia (0.11, SE=0.04, p = 0.01), but no genetic correlation with BD. Several candidate genes harbouring variants in linkage disequilibrium with the associated loci may have a role in the pathophysiology of mood disorders, including the DCC netrin 1 receptor (DCC), eukaryotic initiation factor 2B (EIF2B2), placental growth factor (PGF) and protein tyrosine phosphatase, receptor type D (PTPRD) genes. Strengths of this study include the large sample size; however, our measure of mood instability may be limited by the use of a single self-reported question. Overall, this work suggests a polygenic basis for mood instability and opens up the field for the further biological investigation of this important cross-diagnostic psychopathological trait.
Various non-psychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia, but the nature of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by shared genetic risk factors.
Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for participants in two birth cohorts (2,588 children from the Netherlands Twin Register (NTR) and 6,127 from the Avon Longitudinal Study of Parents And Children (ALSPAC)). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at age 7, 10, 12/13 and 15 years in the two cohorts. Results were then meta-analyzed, and age-effects and differences in the associations between disorders and PRS were formally tested in a meta-regression.
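At its simplest, the polygenic risk score described here is a weighted sum over SNPs: each individual’s dosage of the risk allele, weighted by the effect size estimated in the schizophrenia GWAS. A minimal sketch of that computation (the effect sizes and genotypes below are made-up illustrative numbers, not the actual PGC weights):

```python
import numpy as np

# Hypothetical per-SNP effect sizes (e.g. log odds ratios from a GWAS)
effect_sizes = np.array([0.05, -0.02, 0.11, 0.03])

# Genotype dosages for 3 individuals x 4 SNPs
# (0, 1, or 2 copies of the risk allele)
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 1, 0, 2],
])

# PRS per individual = sum over SNPs of (dosage * effect size)
prs = genotypes @ effect_sizes
print(prs)
```

Real PRS pipelines additionally prune or clump SNPs in linkage disequilibrium and apply a p-value threshold to which SNPs are included, but the core score is just this dot product.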
The schizophrenia PRS was associated with childhood and adolescent psychopathology, although the association was weaker for ODD/CD at age 7. The associations increased with age, and this increase was steepest for ADHD and ODD/CD. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology as well as with a stronger shared genetic etiology between schizophrenia and adolescent-onset psychopathology.
A multivariate meta-analysis of multiple and repeated observations enabled optimal use of the longitudinal data across diagnoses, providing knowledge on how childhood disorders develop into severe adult psychiatric disorders.
Woolly mammoths (Mammuthus primigenius) populated Siberia, Beringia, and North America during the Pleistocene and early Holocene. Recent breakthroughs in ancient DNA sequencing have allowed for complete genome sequencing for two specimens of woolly mammoths (Palkopoulou et al 2015). One mammoth specimen is from a mainland population 45,000 years ago when mammoths were plentiful. The second, a 4300 yr old specimen, is derived from an isolated population on Wrangel island where mammoths subsisted with small effective population size more than 43-fold lower than previous populations. These extreme differences in effective population size offer a rare opportunity to test nearly neutral models of genome architecture evolution within a single species. Using these previously published mammoth sequences, we identify deletions, retrogenes, and non-functionalizing point mutations. In the Wrangel island mammoth, we identify a greater number of deletions, a larger proportion of deletions affecting gene sequences, a greater number of candidate retrogenes, and an increased number of premature stop codons. This accumulation of detrimental mutations is consistent with genomic meltdown in response to low effective population sizes in the dwindling mammoth population on Wrangel island. In addition, we observe high rates of loss of olfactory receptors and urinary proteins, either because these loci are non-essential or because they were favored by divergent selective pressures in island environments. Finally, at the locus of FOXQ1 we observe two independent loss-of-function mutations, which would confer a satin coat phenotype in this island woolly mammoth.
Author summary: We observe an excess of detrimental mutations, consistent with genomic meltdown in woolly mammoths on Wrangel Island just prior to extinction. We observe an excess of deletions, an increase in the proportion of deletions affecting gene sequences, and an excess of premature stop codons in response to evolution under low effective population sizes. Large numbers of olfactory receptors appear to have loss of function mutations in the island mammoth. These results offer genetic support within a single species for nearly-neutral theories of genome evolution. We also observe two independent loss of function mutations at the FOXQ1 locus, likely conferring a satin coat in this unusual woolly mammoth.
Genetic load is the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. The average individual taken from a population with a low genetic load will generally, when grown in the same conditions, have more surviving offspring than the average individual from a population with a high genetic load. Genetic load can also be seen as reduced fitness at the population level compared to what the population would have if all individuals had the reference high-fitness genotype. High genetic load may put a population in danger of extinction.
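The textbook definition can be written down directly: taking the best genotype present in the population as the reference, load is the proportional fitness shortfall of the average individual. A small sketch with illustrative fitness values:

```python
def genetic_load(fitnesses):
    """Genetic load L = (w_max - w_mean) / w_max, where w_max is the
    fitness of the reference genotype (here: the best one observed)."""
    w_max = max(fitnesses)
    w_mean = sum(fitnesses) / len(fitnesses)
    return (w_max - w_mean) / w_max

# Illustrative populations: the second carries far more deleterious variation
print(genetic_load([1.0, 0.95, 0.9, 0.85]))  # low load
print(genetic_load([1.0, 0.6, 0.5, 0.3]))    # high load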
Dogs (Canis lupus familiaris) were domesticated from gray wolves between 20-40kya in Eurasia, yet details surrounding the process of domestication remain unclear. The vast array of phenotypes exhibited by dogs mirror numerous other domesticated animal species, a phenomenon known as the Domestication Syndrome. Here, we use signatures persisting in the dog genome to identify genes and pathways altered by the intensive selective pressures of domestication. We identified 37 candidate domestication regions containing 17.5Mb of genome sequence and 172 genes through whole-genome SNP analysis of 43 globally distributed village dogs and 10 wolves. Comparisons with three ancient dog genomes indicate that these regions reflect signatures of domestication rather than breed formation. Analysis of genes within these regions revealed a significant enrichment of gene functions linked to neural crest cell migration, differentiation and development. Genome copy number analysis identified regions of localized sequence and structural diversity, and discovered additional copy number variation at the amylase-2b locus. Overall, these results indicate that primary selection pressures targeted genes in the neural crest as well as components of the minor spliceosome, rather than genes involved in starch metabolism. Smaller jaw sizes, hairlessness, floppy ears, tameness, and diminished craniofacial development distinguish wolves from domesticated dogs, phenotypes of the Domestication Syndrome that can result from decreased neural crest cells at these sites. We propose that initial selection acted on key genes in the neural crest and minor splicing pathways during early dog domestication, giving rise to the phenotypes of modern dogs.
The human auditory system can naturally pick out a voice in a crowded room, but creating a hearing aid that mimics that ability has stumped signal processing specialists, artificial intelligence experts, and audiologists for decades. British cognitive scientist Colin Cherry first dubbed this the “cocktail party problem” in 1953.
More than six decades later, less than 25% of people who need a hearing aid actually use one…The global US $6 billion hearing aid industry is expected to grow at 6% every year through 2020…The greatest frustration among potential users is that a hearing aid cannot distinguish between, for example, a voice and the sound of a passing car if those sounds occur at the same time. The device cranks up the volume on both, creating an incoherent din.
It’s time we solve this problem. To produce a better experience for hearing aid wearers, my lab at Ohio State University, in Columbus, recently applied machine learning based on deep neural networks to the task of segregating sounds. We have tested multiple versions of a digital filter that not only amplifies sound but can also isolate speech from background noise and automatically adjust the volumes of each separately. We believe this approach can ultimately restore a hearing-impaired person’s comprehension to match—or even exceed—that of someone with normal hearing. In fact, one of our early models boosted, from 10 to 90 percent, the ability of some subjects to understand spoken words obscured by noise. Because it’s not necessary for listeners to understand every word in a phrase to gather its meaning, this improvement frequently meant the difference between comprehending a sentence or not…Having demonstrated promising initial results with our early classification algorithms, we decided to take the next logical step—to improve the system so it could function in noisy real-world environments, and without training for specific noises and sentences. This challenge prompted us to try to do something that had never been done before: build a machine-learning program that would run on a neural network and separate speech from noise after undergoing a sophisticated training process. The program would use the ideal binary mask to guide the training of the neural network. And it worked. In a study involving 24 test subjects, we demonstrated that this program could boost the comprehension of hearing-impaired people by about 50 percent.
…People in both groups showed a big improvement in their ability to comprehend sentences amid noise after the sentences were processed through our program. People with hearing impairment could decipher only 29% of words muddled by babble without the program, but they understood 84% after the processing. Several went from understanding only 10% of words in the original sample to comprehending around 90% with the program. There were similar gains for the steady-noise scenario with hearing-impaired subjects—they went from 36% to 82% comprehension. Even people with normal hearing were able to better understand noisy sentences, which means our program could someday help far more people than we originally anticipated. Listeners with normal hearing understood 37% of the words spoken amid steady noise without the program, and 80% with it. For the babble, they improved from 42% of words to 78 percent. One of the most intriguing results of our experiment came when we asked, Could people with hearing impairment who are assisted by our program actually outperform those with normal hearing? Remarkably, the answer is yes. Listeners with hearing impairment who used our program understood nearly 20% more words in the babble and about 15% more words in steady noise than those with normal hearing who relied solely on their own auditory system to separate speech from noise. With these results, our program built from deep neural networks has come the closest to solving the cocktail party problem of any effort to date.
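The “ideal binary mask” used as the training target in this work is simple to state: for each time-frequency cell of the spectrogram, keep the cell if the speech energy exceeds the noise energy (by some local SNR threshold), and zero it otherwise. A toy numpy sketch of the idea (synthetic power grids and a hypothetical 0 dB threshold, not the lab’s actual system):

```python
import numpy as np

def ideal_binary_mask(speech_power, noise_power, snr_threshold_db=0.0):
    """1 where the local SNR (in dB) exceeds the threshold, else 0."""
    snr_db = 10 * np.log10(speech_power / noise_power)
    return (snr_db > snr_threshold_db).astype(float)

# Toy time-frequency power grids (frequency bins x time frames)
speech = np.array([[4.0, 0.1],
                   [0.5, 9.0]])
noise  = np.array([[1.0, 1.0],
                   [1.0, 1.0]])

mask = ideal_binary_mask(speech, noise)
denoised = mask * (speech + noise)  # apply the mask to the noisy mixture
print(mask)
```

The mask itself requires knowing the clean speech, so it can only be computed on training data; the deep network’s job is to predict this mask from the noisy mixture alone at test time.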
“Deep Voice: Real-time Neural Text-to-Speech”, Sercan O. Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi (2017-02-25):
We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise. Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x speedups over existing implementations.
A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway and resnets perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth resulting in gradients that resemble white noise whereas, in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new "looks linear" (LL) initialization that prevents shattering, with preliminary experiments showing the new initialization allows training very deep networks without the addition of skip-connections.
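The “looks linear” trick pairs the concatenated ReLU (CReLU) nonlinearity with mirrored weights, so that at initialization the network computes an exactly linear function—since relu(x) − relu(−x) = x—and gradients cannot shatter until training breaks the symmetry. A minimal numpy sketch of one such layer pair:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))

def crelu(x):
    # Concatenated ReLU: [relu(x); relu(-x)]
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])

# "Looks linear" init: the next layer's weights are mirrored as [V, -V],
# so V @ relu(h) - V @ relu(-h) = V @ h, i.e. the composition is linear.
V = rng.standard_normal((4, 4))
W_next = np.concatenate([V, -V], axis=1)  # shape (4, 8)

x = rng.standard_normal(4)
out = W_next @ crelu(W @ x)
assert np.allclose(out, V @ W @ x)  # exactly linear at initialization
```

After a few gradient steps W_next drifts away from the mirrored form and the network becomes genuinely nonlinear; the point is only that it *starts* from a well-behaved linear map.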
Deep reinforcement learning methods attain super-human performance in a wide range of environments. Such methods are grossly inefficient, often taking orders of magnitudes more data than humans to achieve reasonable performance. We propose Neural Episodic Control: a deep reinforcement learning agent that is able to rapidly assimilate new experiences and act upon them. Our agent uses a semi-tabular representation of the value function: a buffer of past experience containing slowly changing state representations and rapidly updated estimates of the value function. We show across a wide range of environments that our agent learns significantly faster than other state-of-the-art, general purpose deep reinforcement learning agents.
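The “semi-tabular representation” can be sketched as an episodic memory with nearest-neighbor reads: store (state-embedding, Q-value) pairs as they are experienced, and estimate the value of a new state as a kernel-weighted average over stored neighbors. A toy sketch of the lookup (an inverse-distance kernel, as in the paper’s differentiable neural dictionary; this is an illustration, not DeepMind’s implementation):

```python
import numpy as np

class EpisodicMemory:
    """Toy episodic value store: Q(s) is estimated as a
    kernel-weighted average of values stored near s."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(float(value))

    def read(self, query, eps=1e-3):
        keys = np.stack(self.keys)
        # Inverse-distance kernel over squared Euclidean distance
        w = 1.0 / (np.sum((keys - query) ** 2, axis=1) + eps)
        w /= w.sum()
        return float(w @ np.array(self.values))

mem = EpisodicMemory()
mem.write([0.0, 0.0], 1.0)     # a rewarding state
mem.write([10.0, 10.0], -1.0)  # a punishing state

# A query near the first key inherits (mostly) its stored value
q = mem.read(np.array([0.1, 0.0]))
print(q)
```

Because writes take effect immediately, a single rewarding experience can change value estimates on the very next step—the source of the sample-efficiency gain over slowly-updated parametric value networks.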
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup—O(log N) sampling instead of O(N)—enabling the practical generation of 512×512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
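The complexity claim is easy to make concrete: naive PixelCNN sampling needs one network evaluation per pixel, while a multiscale scheme samples only a tiny base image sequentially and then doubles the resolution with a constant number of parallel evaluations per doubling. A simplified accounting sketch (the paper’s actual pixel grouping is more refined than this):

```python
import math

def sequential_evals(n_pixels):
    """Naive PixelCNN: one sequential network evaluation per pixel, O(N)."""
    return n_pixels

def multiscale_evals(side, base_side=4):
    """Parallel multiscale sampling, simplified: sample a base_side x
    base_side image pixel-by-pixel, then one parallel evaluation per
    resolution doubling -- O(log N) after the tiny base."""
    doublings = int(math.log2(side // base_side))
    return base_side * base_side + doublings

print(sequential_evals(512 * 512))  # 262144 sequential steps
print(multiscale_evals(512))        # 16 + 7 = 23 sequential steps
```

Even with this crude count, a 512×512 sample drops from ~260,000 sequential steps to a few dozen, which is what makes sampling at that resolution practical.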
While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available https://github.com/SKTBrain/DiscoGAN
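The key constraint DiscoGAN adds beyond the usual adversarial losses is cycle reconstruction: mapping a sample to the other domain and back, G_BA(G_AB(x)), should recover x. A toy numpy sketch of that reconstruction loss, using linear “generators” in place of trained networks:

```python
import numpy as np

def cycle_loss(G_AB, G_BA, x):
    """Reconstruction loss || G_BA(G_AB(x)) - x ||^2 that pushes the
    two cross-domain mappings to be mutually consistent."""
    return float(np.sum((G_BA(G_AB(x)) - x) ** 2))

# Toy linear generators: a rotation and its inverse are perfectly consistent
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
G_AB = lambda x: R @ x    # domain A -> B
G_BA = lambda x: R.T @ x  # domain B -> A (inverse rotation)

x = np.array([1.0, 2.0])
print(cycle_loss(G_AB, G_BA, x))  # 0.0: the mappings invert each other
```

In training this loss is summed in both directions (A→B→A and B→A→B) alongside the two discriminators’ adversarial losses, which is what lets the relation be discovered without any paired examples.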
“Evolving Deep Neural Networks”, Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, Babak Hodjat (2017-03-01):
The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is a promising approach to constructing deep learning applications in the future.
Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.
We present in this paper a systematic study on how to morph a well-trained neural network to a new one so that its network function can be completely preserved. We define this as network morphism in this research. After morphing a parent network, the child network is expected to inherit the knowledge from its parent network and also has the potential to continue growing into a more powerful one with much shortened training time. The first requirement for this network morphism is its ability to handle diverse morphing types of networks, including changes of depth, width, kernel size, and even subnet. To meet this requirement, we first introduce the network morphism equations, and then develop novel morphing algorithms for all these morphing types for both classic and convolutional neural networks. The second requirement for this network morphism is its ability to deal with non-linearity in a network. We propose a family of parametric-activation functions to facilitate the morphing of any continuous non-linear activation neurons. Experimental results on benchmark datasets and typical neural networks demonstrate the effectiveness of the proposed network morphism scheme.
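The simplest function-preserving morphism is widening a layer: duplicate a hidden neuron’s incoming weights and split its outgoing weight in half, so the child network computes exactly the same function as the parent while gaining capacity to train further. A Net2Net-style sketch of the idea (an illustration of the principle, not the authors’ code):

```python
import numpy as np

def widen(W1, W2, neuron):
    """Duplicate hidden `neuron`: copy its incoming weights, then halve
    and split its outgoing weights. The network's output is unchanged."""
    W1_new = np.vstack([W1, W1[neuron]])        # copy the incoming row
    col = W2[:, neuron:neuron + 1] / 2.0
    W2_new = np.hstack([W2, col])               # new neuron gets half
    W2_new[:, neuron:neuron + 1] = col          # original keeps half
    return W1_new, W2_new

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))  # hidden x input
W2 = rng.standard_normal((2, 3))  # output x hidden
x = rng.standard_normal(4)

relu = lambda h: np.maximum(h, 0)
parent = W2 @ relu(W1 @ x)

W1w, W2w = widen(W1, W2, neuron=1)
child = W2w @ relu(W1w @ x)
assert np.allclose(parent, child)  # the morphed child preserves the parent
```

This works through the ReLU because the duplicated neuron has an identical pre-activation; the paper’s contribution is generalizing such equations to depth, kernel-size, and subnet changes, and to arbitrary continuous nonlinearities via parametric activations.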
In recent years, a number of prominent computer scientists, along with academics in fields such as philosophy and physics, have lent credence to the notion that machines may one day become as large as humans. Many have further argued that machines could even come to exceed human size by a significant margin. However, there are at least seven distinct arguments that preclude this outcome. We show that it is not only implausible that machines will ever exceed human size, but in fact impossible.
Although experimentation involving human volunteers has attracted intense study, the matter of self-experimentation among medical researchers has received much less attention. Many questions have been answered only in part, or have been left unanswered. How common is this practice? Is it more common among certain nationalities? What have been the predominant medical fields in which self-experimentation has occurred? How dangerous an act has this proved to be? What have been the trends over time? What is the future likely to bring? From the available literature, I identified and analyzed 465 documented instances of this practice, performed over the course of the past 2 centuries. Most instances occurred in the United States. The peak of self-experimentation occurred in the first half of the 20th century. Eight deaths were recorded. A number of the investigators enjoyed successful careers, including the receipt of Nobel Prizes. Although self-experimentation by physicians and other biological scientists appears to be in decline, the courage of those involved and the benefits to society cannot be denied.
There are many published theories about the politics of the Harry Potter books by J. K. Rowling, which range from them containing criticism of racism to anti-government sentiments. According to Inside Higher Ed, doctoral theses have been devoted to the Harry Potter books. There are also several university courses centred on analysis of the Potter series, including an upper division political science course.
The Flexner Report is a book-length landmark report of medical education in the United States and Canada, written by Abraham Flexner and published in 1910 under the aegis of the Carnegie Foundation. Many aspects of the present-day American medical profession stem from the Flexner Report and its aftermath.
Moriarty discusses the “Paul is dead” conspiracy theory: that the Beatles’ Paul McCartney has in fact been dead for the past 54 years, replaced by a lookalike to keep the Beatles media empire going.
The theory began as a rumor and spread through early Beatlemania forums among young obsessive students, who began analyzing songs (playing them backwards as necessary) to discover hidden messages, developing an elaborate symbolic mythology which holds that the lookalike & the Beatles themselves covertly allude to the coverup through coded messages (possibly out of guilt): the positioning of stars, garbled lyrics, which hand a cigarette is held in, hands raised as benedictions/wardings, scenes interpreted as funeral processions, and so on. No amount of denials or interviews with Paul McCartney could kill the theory. Most of these ‘clues’ can be debunked, given the wealth of documentation about the most minute details of the production of Beatles albums. A few oddities remain, but Moriarty suggests they are covert messages or allusions to other things, like the ‘walrus’ references, and may even have been the Beatles playing along with the theorists! What is the point of discussing this? See QAnon:
This silly event, which happened way back when I was a kid, made a really big impression on me. It was so eerie, so deliciously creepy. And so… consuming! Clue hunting occupied me and my friends constantly for nearly six weeks! It was all we ever talked about! We spent every school night and entire weekends going over every square millimeter of these five records. We destroyed every copy we had, spinning them backwards on our cheap record players. It drove our parents nuts! “Turn me on, dead man! Turn me on, dead man!” And they hated it even more when they heard it again on the evening news!
I can’t remember the last time I had so much fun.
And, although I didn’t appreciate it at the time, something wonderful happened as we scoured these records, backwards and forwards, line by line. We memorized them. “Who Buried Paul?” is one of the best games I ever played. This ridiculous rumor sucked my entire generation into a massively multiplayer adventure. A morbid treasure hunt in which accomplices were connected by word-of-mouth, college newspapers, the alternative press and underground radio. We can only wonder what would happen if something like this were to happen today, in the age of the World Wide Web. Imagine how such a thing might get started, by accident…
…put something like this in front of people, and all kinds of evocative coincidences become likely. Why is this useful for us as entertainers? Because that moment when you peer into the mirror of chaos and discover yourself is satisfying in a uniquely personal sense. You get a little oomph when you make a connection that way. Those little oomphs are what make good stories and puzzles and movies so compelling. And those little oomphs are what made the Paul-is-dead rumor so much fun…Let your players employ their own imaginative intelligence to fill in the gaps in your worlds you can’t afford to close. Chances are, they’ll paint the chaos in exactly the colors they want to see. What’s more, they’ll enjoy themselves doing it. But the credit will be yours.
"Paul is dead" is an urban legend and conspiracy theory alleging that English musician Paul McCartney, of the Beatles, died on 9 November 1966 and was secretly replaced by a look-alike. The rumour began circulating around 1967, but grew in popularity after being reported on American college campuses in late 1969. Proponents based the theory on perceived clues found in Beatles songs and album covers. Clue-hunting proved infectious, and within a few weeks had become an international phenomenon.
…Early last summer, in the midst of all this research, a chilly sensation began tingling up and down the spines of the experimenters. These extra neutrons that were being erupted—could they not in turn become involuntary bullets, flying from one exploding uranium nucleus into the heart of another, causing another fission which would itself cause still others? Wasn’t there a dangerous possibility that the uranium would at last become explosive? That the samples being bombarded in the laboratories at Columbia University, for example, might blow up the whole of New York City? To make matters more ominous, news of fission research from Germany, plentiful in the early part of 1939, mysteriously and abruptly stopped for some months. Had government censorship been placed on what might be a secret of military importance? The press and populace, getting wind of these possibly lethal goings-on, raised a hue and cry. Nothing daunted, however, the physicists worked on to find out whether or not they would be blown up, and the rest of us along with them. Now, a year after the original discovery, word comes from Paris that we don’t have to worry.
…With typical French—and scientific—caution, they added that this was perhaps true only for the particular conditions of their own experiment, which was carried out on a large mass of uranium under water. But most scientists agreed that it was very likely true in general.
…Readers made insomnious by “newspaper talk” of terrific atomic war weapons held in reserve by dictators may now get sleep.
In 2012, “A Quiet Opening: North Koreans in a Changing Media Environment” described the effects of the steady dissolution of North Korea’s information blockade. Precipitated by the collapse of the state economy during the famine of the 1990s, North Korea’s once strict external and internal controls on the flow of information atrophied as North Korean citizens traded with one another, and goods and people flowed across the border with China. Activities unthinkable in Kim Il Sung’s day became normalized, even if many remained technically illegal. A decade into the 21st century, North Korea was no longer perfectly sealed off from the outside world and its citizens were much more connected to each other. Continued research suggests that many of the trends toward greater information access and sharing detailed in “A Quiet Opening” persist today. Yet, over the last four years, since Kim Jong Un’s emergence as leader, the picture has become more complicated.
It is tempting to view the dynamics surrounding media access and information flow in North Korea as a simple tug-of-war: North Korean citizens gain greater access to a broader range of media and communication devices, and unsanctioned content. The North Korean government, realizing this, responds through crackdowns in an attempt to reconstitute its blockade on foreign information and limit the types of media and communication devices its citizens can access. However, the reality is not so neatly binary. As the North Korean economic situation rebounded after the famine and achieved relative stability, authorities developed strategies to establish new, more modern forms of control within an environment that was fundamentally altered from its pre-famine state.
Among the most significant trends to emerge in the North Korean information environment under Kim Jong Un is the shift toward greater media digitization and the expansion of networked communications. The state has ceded to, and now sanctions, a considerably greater level of interconnectedness between private North Korean citizens. This may be, at least in part, an acknowledgement that the market economy in North Korea is here to stay, and thus that the communications channels which enable the processes of a market economy must be co-opted and supported rather than rolled back. Although the government continues to make efforts to monitor communications and dictate what subjects are off-limits, it is allowing average citizens far greater access to communications technologies. Greater digitization and digital network access are already having profound effects on the basic dynamics and capabilities that define the information space in North Korea.
The expansion and catalyzation of person-to-person communication through mobile phones and other networked digital technologies is in many ways a promising development. However, as this report will document, from both a user and technical perspective, expanding network connectivity to a broad swath of the population is arming the North Korean government with a new array of censorship and surveillance tools that go beyond what is observed even in other authoritarian states or closed media environments. It is clear that the state’s information control strategy, while changing, is not ad hoc or ill-considered. Recent technological innovations and policy changes, on balance, may be giving the North Korean government more control than they are ceding.
…Data Sources: This study primarily draws from:
The 2015 Broadcasting Board of Governors (BBG) Survey of North Korea Refugees, Defectors and Travelers (n = 350)
A qualitative study comprising 34 interviews with recent defectors, recruited and conducted in May and June of 2016 specifically for this report
Technical analyses of available North Korean software and hardware
[The details on NK use of digital censorship are interesting: steady progress in locking down Bluetooth and WiFi by software and then hardware modifications; use of Android security system/DRM to install audit logs + regular screenshots to capture foreign media consumption; a whitelist/signed-media system to block said foreign media from ever being viewed, with auto-deletion of offending files; watermarking (courtesy of an American university’s misguided outreach) of media created on desktops to trace them; network blocking and surveillance; and efforts towards automatic bulk surveillance of text messages for ‘South-Korean-style’ phrases/words. Stallman’s warnings about DRM are quite prophetic in the NK context—the system is secured against the user… For these reasons & poverty, radio (including foreign radios like Voice of America) is—surprisingly to me—the top source of information for North Koreans.]
Objectives: To determine the overall rate of loss of workplace teaspoons and whether attrition and displacement are correlated with the relative value of the teaspoons or type of tearoom. Design: Longitudinal cohort study. Setting: Research institute employing about 140 people. Subjects: 70 discreetly numbered teaspoons placed in tearooms around the institute and observed weekly over five months. Main outcome measures: Incidence of teaspoon loss per 100 teaspoon-years and teaspoon half-life. Results: 56 (80%) of the 70 teaspoons disappeared during the study. The half-life of the teaspoons was 81 days. The half-life of teaspoons in communal tearooms (42 days) was significantly shorter than for those in rooms associated with particular research groups (77 days). The rate of loss was not influenced by the teaspoons' value. The incidence of teaspoon loss over the period of observation was 360.62 per 100 teaspoon-years. At this rate, an estimated 250 teaspoons would need to be purchased annually to maintain a practical institute-wide population of 70 teaspoons. Conclusions: The loss of workplace teaspoons was rapid, showing that their availability, and hence office culture in general, is constantly threatened.
The Birds is a comedy by the Ancient Greek playwright Aristophanes. It was performed in 414 BC at the City Dionysia where it won second place. It has been acclaimed by modern critics as a perfectly realized fantasy remarkable for its mimicry of birds and for the gaiety of its songs. Unlike the author's other early plays, it includes no direct mention of the Peloponnesian War and there are few references to Athenian politics, and yet it was staged not long after the commencement of the Sicilian Expedition, an ambitious military campaign that greatly increased Athenian commitment to the war effort. In spite of that, the play has many indirect references to Athenian political and social life. It is the longest of Aristophanes' surviving plays and yet it is a fairly conventional example of Old Comedy.
The Trojan Women, also translated as The Women of Troy, and also known by its transliterated Greek title Troades, is a tragedy by the Greek playwright Euripides. Produced in 415 BC during the Peloponnesian War, it is often considered a commentary on the capture of the Aegean island of Melos and the subsequent slaughter and subjugation of its populace by the Athenians earlier that year (see History of Milos). 415 BC was also the year of the scandalous desecration of the hermai and the Athenians' second expedition to Sicily, events which may also have influenced the author.
Lotka’s law, named after Alfred J. Lotka, is one of a variety of special applications of Zipf’s law. It describes the frequency of publication by authors in any given field. It states that the number of authors making x contributions in a given period is a fraction of the number making a single contribution, following the formula 1⁄x^a where a nearly always equals 2, i.e., an approximate inverse-square law, where the number of authors publishing a certain number of articles is a fixed ratio to the number of authors publishing a single article. As the number of articles published increases, authors producing that many publications become less frequent. There are 1⁄4 as many authors publishing 2 articles within a specified time period as there are single-publication authors, 1⁄9 as many publishing 3 articles, 1⁄16 as many publishing 4 articles, etc. Though the law itself covers many disciplines, the actual ratios involved are discipline-specific.
The general formula says:
X^n · Y = C
Y = C⁄X^n
where X is the number of publications, Y the relative frequency of authors with X publications, and n and C are constants depending on the specific field (n ≈ 2).
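The inverse-square version of the law is simple enough to sketch directly; here is a minimal illustration (the function name and the figure of 100 single-publication authors are my own, for concreteness):

```python
def lotka_authors(single_authors, x, n=2):
    """Expected number of authors with x publications, given the count of
    single-publication authors, under Lotka's law Y = C / X^n with n = 2."""
    return single_authors / x**n

# With 100 single-publication authors: 1/4 as many publish 2 articles,
# 1/9 as many publish 3, 1/16 as many publish 4.
counts = {x: lotka_authors(100, x) for x in range(1, 5)}
# counts == {1: 100.0, 2: 25.0, 3: 11.11..., 4: 6.25}
```

As the text notes, real fields fit n ≈ 2 only approximately, so in practice n and C would be estimated from publication data rather than fixed.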
Historically, a comparison of the tiger versus the lion has been a popular topic of discussion by hunters, naturalists, artists, and poets, and continues to inspire the popular imagination. In the past, lions and tigers reportedly competed in the wilderness, where their ranges overlapped in Eurasia. The most common reported circumstance of their meeting is in captivity, either deliberately or accidentally.
John Burdon Sanderson Haldane, nicknamed "Jack" or "JBS", was a British-Indian scientist known for his works in physiology, genetics, evolutionary biology, and mathematics. With innovative use of statistics in biology, he was one of the founders of neo-Darwinism. He was a noted geneticist and physiologist.
Warren Edward Buffett is an American investor, business tycoon, philanthropist, and the chairman and CEO of Berkshire Hathaway. He is considered one of the most successful investors in the world and has a net worth of over US$85.6 billion as of December 2020, making him the world's fourth-wealthiest person.
10 Cloverfield Lane is a 2016 American science fiction psychological thriller film directed by Dan Trachtenberg in his directorial debut, produced by J. J. Abrams and Lindsey Weber and written by Josh Campbell, Matthew Stuecken, and Damien Chazelle. The film stars Mary Elizabeth Winstead, John Goodman, and John Gallagher, Jr. It is the second installment in the Cloverfield franchise. The story follows a young woman who, after a car crash, wakes up in an underground bunker with two men who insist that an event has left the surface of Earth uninhabitable.
The Wind Rises is a 2013 Japanese animated historical drama film written and directed by Hayao Miyazaki, animated by Studio Ghibli for the Nippon Television Network, Dentsu, Hakuhodo DY Media Partners, Walt Disney Japan, Mitsubishi, Toho and KDDI and distributed by Toho. It was released in Japan on 20 July 2013, and by Touchstone Pictures in North America on 21 February 2014.
Subscription page for the monthly gwern.net newsletter. There are monthly updates, which will include summaries of projects I’ve worked on that month (the same as the changelog), collations of links or discussions from my subreddit, and book/movie reviews. You can also browse the archives since December 2013.
The woolly mammoth is a species of mammoth that lived during the Pleistocene until its extinction in the Holocene epoch. It was one of the last in a line of mammoth species, beginning with Mammuthus subplanifrons in the early Pliocene. The woolly mammoth began to diverge from the steppe mammoth about 800,000 years ago in East Asia. Its closest extant relative is the Asian elephant. The appearance and behaviour of this species are among the best studied of any prehistoric animal because of the discovery of frozen carcasses in Siberia and Alaska, as well as skeletons, teeth, stomach contents, dung, and depiction from life in prehistoric cave paintings. Mammoth remains had long been known in Asia before they became known to Europeans in the 17th century. The origin of these remains was long a matter of debate, and often explained as being remains of legendary creatures. The mammoth was identified as an extinct species of elephant by Georges Cuvier in 1796.
The processes leading up to species extinctions are typically characterized by prolonged declines in population size and geographic distribution, followed by a phase in which populations are very small and may be subject to intrinsic threats, including loss of genetic diversity and inbreeding. However, whether such genetic factors have had an impact on species prior to their extinction is unclear; examining this would require a detailed reconstruction of a species' demographic history as well as changes in genome-wide diversity leading up to its extinction. Here, we present high-quality complete genome sequences from two woolly mammoths (Mammuthus primigenius). The first mammoth was sequenced at 17.1-fold coverage and dates to ~4,300 years before present, representing one of the last surviving individuals on Wrangel Island. The second mammoth, sequenced at 11.2-fold coverage, was obtained from an ~44,800-year-old specimen from the Late Pleistocene population in northeastern Siberia. The demographic trajectories inferred from the two genomes are qualitatively similar and reveal a population bottleneck during the Middle or Early Pleistocene, and a more recent severe decline in the ancestors of the Wrangel mammoth at the end of the last glaciation. A comparison of the two genomes shows that the Wrangel mammoth has a 20% reduction in heterozygosity as well as a 28-fold increase in the fraction of the genome that comprises runs of homozygosity. We conclude that the population on Wrangel Island, which was the last surviving woolly mammoth population, was subject to reduced genetic diversity shortly before it became extinct.
Wrangel Island is an island in the Arctic Ocean, between the Chukchi Sea and East Siberian Sea. Wrangel Island lies astride the 180° meridian. The International Date Line is displaced eastwards at this latitude to avoid the island as well as the Chukchi Peninsula on the Russian mainland. The closest land to Wrangel Island is the tiny and rocky Herald Island located 60 km (37 mi) to the east. Wrangel Island is the last known place on earth where woolly mammoths survived, until around 4,000 years ago.
The effective population size is the number of individuals that an idealised population would need to have in order for some specified quantity of interest to be the same in the idealised population as in the real population. Idealised populations are based on unrealistic but convenient simplifications such as random mating, simultaneous birth of each new generation, constant population size, and equal numbers of children per parent. In some simple scenarios, the effective population size is the number of breeding individuals in the population. However, for most quantities of interest and most real populations, the census population size N of a real population is usually larger than the effective population size Ne. The same population may have multiple effective population sizes, for different properties of interest, including for different genetic loci.
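One standard idealisation makes the dependence on bottlenecks concrete: when census size fluctuates across generations, the long-term effective size (for inbreeding/drift purposes) is approximately the harmonic mean of the per-generation sizes, which is dominated by the smallest generations — relevant to the Wrangel Island mammoths above. A minimal sketch (the function name and example sizes are mine):

```python
def harmonic_mean_ne(sizes):
    """Approximate long-term effective population size Ne for a population
    whose census size fluctuates across generations: the harmonic mean,
    t / sum(1/N_i), which is dominated by bottleneck generations."""
    return len(sizes) / sum(1.0 / n for n in sizes)

# A single bottleneck generation drags Ne far below the arithmetic mean:
ne = harmonic_mean_ne([10_000, 10_000, 100, 10_000])  # ~388, vs. mean of 7,525
```

This is why a brief crash, like the decline inferred for the Wrangel lineage, can depress genetic diversity long after census numbers recover.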
Brian Moriarty is an American video game developer who authored three of the original Infocom interactive fiction titles, Wishbringer (1985), Trinity (1986), and Beyond Zork (1987), as well as Loom (1990) for LucasArts.
Sir James Paul McCartney is an English singer, songwriter, musician, and record and film producer who gained worldwide fame as co-lead vocalist and bassist for the Beatles. His songwriting partnership with John Lennon remains the most successful in history. After the group disbanded in 1970, he pursued a solo career and formed the band Wings with his first wife, Linda, and Denny Laine.