newsletter/2017/03 (Link Bibliography)

“newsletter/​2017/​03” links:


  2. 02

  3. newsletter

  4. Changelog


  6. ⁠, Patrick F. Sullivan, Arpana Agrawal, Cynthia M. Bulik, Ole A. Andreassen, Anders D. Børglum, Gerome Breen, Sven Cichon, Howard J. Edenberg, Stephen V. Faraone, Joel Gelernter, Carol A. Mathews, Caroline M. Nievergelt, Jordan Smoller, Michael C. O’Donovan, for the Psychiatric Genomics Consortium (2017-03-10):

    The Psychiatric Genomics Consortium (PGC) is the largest consortium in the history of psychiatry. In the past decade, this global effort has delivered a rapidly increasing flow of new knowledge about the fundamental basis of common psychiatric disorders, particularly given its dedication to rapid progress and open science. The PGC has recently commenced a program of research designed to deliver “actionable” findings—genomic results that (a) reveal the fundamental biology, (b) inform clinical practice, and (c) deliver new therapeutic targets. This is the central idea of the PGC: to convert the family history risk factor into biologically, clinically, and therapeutically meaningful insights. The emerging findings suggest that we are entering into a phase of accelerated translation of genetic discoveries to impact psychiatric practice within a precision medicine framework.


    PGC Coordinating Committee: Mark Daly, Michael Gill, John Kelsoe, Karestan Koenen, Douglas Levinson, Cathryn Lewis, Ben Neale, Danielle Posthuma, Jonathan Sebat, and Pamela Sklar.

  7. ⁠, Amelie Baud, Megan K. Mulligan, Francesco Paolo Casale, Jesse F. Ingels, Casey J. Bohl, Jacques Callebert, Jean-Marie Launay, Jon Krohn, Andres Legarra, Robert W. Williams, Oliver Stegle (2016-11-21):

    Assessing the impact of the social environment on health and disease is challenging. As social effects are in part determined by the genetic makeup of social partners, they can be studied from associations between genotypes of one individual and phenotype of another (social genetic effects, SGE, also called indirect genetic effects). For the first time we quantified the contribution of SGE to more than 100 organismal phenotypes and genome-wide gene expression measured in laboratory mice. We find that genetic variation in cage mates (i.e. SGE) contributes to variation in organismal and molecular measures related to anxiety, wound healing, immune function, and body weight. Social genetic effects explained up to 29% of phenotypic variance⁠, and for several traits their contribution exceeded that of direct genetic effects (effects of an individual’s genotypes on its own phenotype). Importantly, we show that ignoring SGE can severely bias estimates of direct genetic effects (heritability). Thus SGE may be an important source of “missing heritability” in studies of complex traits in human populations. In summary, our study uncovers an important contribution of the social environment to phenotypic variation, sets the basis for using SGE to dissect social effects, and identifies an opportunity to improve studies of direct genetic effects.

    Author Summary:

    Daily interactions between individuals can influence their health both in positive and negative ways. Often the mechanisms mediating social effects are unknown, so current approaches to study social effects are limited to a few phenotypes for which the mediating mechanisms are known a priori or suspected. Here we propose to leverage the fact that most traits are genetically controlled to investigate the influence of the social environment. To do so, we study associations between genotypes of one individual and phenotype of another individual (social genetic effects, SGE, also called indirect genetic effects). Importantly, SGE can be studied even when the traits that mediate the influence of the social environment are not known. For the first time we quantified the contribution of SGE to more than 100 organismal phenotypes and genome-wide gene expression measured in laboratory mice. We find that genetic variation in cage mates (i.e. SGE) explains up to 29% of the variation in anxiety, wound healing, immune function, and body weight. Hence our study uncovers an unexpectedly large influence of the social environment. Additionally, we show that ignoring SGE can severely bias estimates of direct genetic effects (effects of an individual’s genotypes on its own phenotype), which has important implications for the study of the genetic basis of complex traits.

  8. ⁠, Joey Ward, Rona J. Strawbridge, Nicholas Graham, Mark E. S. Bailey, Amy Ferguson, Donald M. Lyall, Breda Cullen, Laura M. Pidgeon, Jonathan Cavanagh, Daniel F. Mackay, Jill P. Pell, Michael O’Donovan, Valentina Escott-Price, Daniel J. Smith (2017-03-17):

    Mood instability is a core clinical feature of affective disorders, particularly major depressive disorder (MDD) and bipolar disorder (BD). It may be a useful construct in line with the Research Domain Criteria (RDoC) approach, which proposes studying dimensional psychopathological traits that cut across diagnostic categories as a more effective strategy for identifying the underlying biology of psychiatric disorders.

    Here we report a genome-wide association study (GWAS) of mood instability in a very large study of 53,525 cases and 60,443 controls from the UK Biobank cohort, the only such GWAS reported to date. We identified four independent loci (on chromosomes 8, 9, 14 and 18) statistically-significantly associated with mood instability, with a common SNP-based heritability estimate for mood instability of approximately 8%. We also found a strong genetic correlation between mood instability and MDD (0.60, SE = 0.07, p = 8.95×10⁻¹⁷), a small but statistically-significant genetic correlation with schizophrenia (0.11, SE = 0.04, p = 0.01), but no genetic correlation with BD.

    Several candidate genes harbouring variants in linkage disequilibrium with the associated loci may have a role in the pathophysiology of mood disorders, including the DCC netrin 1 receptor (DCC), eukaryotic initiation factor 2B (EIF2B2), placental growth factor (PGF) and protein tyrosine phosphatase, receptor type D (PTPRD) genes. Strengths of this study include the large sample size; however, our measure of mood instability may be limited by the use of a single self-reported question.

    Overall, this work suggests a polygenic basis for mood instability and opens up the field for the further biological investigation of this important cross-diagnostic psychopathological trait.

  9. ⁠, Michel G. Nivard, Suzanne H. Gage, Jouke J. Hottenga, Catherina E. M. van Beijsterveldt, Abdel Abdellaoui, Bart M. L. Baselmans, Lannie Ligthart, Beate St Pourcain, Dorret I. Boomsma, Marcus M. Munafò, Christel M. Middeldorp (2016-05-11):

    Various non-psychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia⁠, but the nature of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by shared genetic risk factors.

    Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for participants in two birth cohorts (2,588 children from the Netherlands Twin Register (NTR) and 6,127 from the Avon Longitudinal Study of Parents And Children (ALSPAC)). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at ages 7, 10, 12–13 and 15 years in the two cohorts. Results were then meta-analyzed, and age-effects and differences in the associations between disorders and PRS were formally tested in a meta-regression.

    The schizophrenia PRS was associated with childhood and adolescent psychopathology, although the association was weaker for ODD/CD at age 7. The associations increased with age; this increase was steepest for ADHD and ODD/CD. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology, as well as with a stronger shared genetic etiology between schizophrenia and adolescent-onset psychopathology.

    A multivariate analysis of multiple and repeated observations enabled us to make optimal use of the longitudinal data across diagnoses, providing knowledge on how childhood disorders develop into severe adult psychiatric disorders.
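
    The polygenic risk scores used here reduce, at their core, to a weighted sum of risk-allele counts. A minimal sketch with made-up SNP identifiers and effect sizes (not values from the paper):

    ```python
    # Minimal polygenic-risk-score sketch: an individual's score is the sum of
    # risk-allele counts (0, 1, or 2) weighted by per-SNP GWAS effect sizes.
    # SNP ids and effect sizes below are hypothetical, for illustration only.
    gwas_effect_sizes = {"rs1": 0.12, "rs2": -0.05, "rs3": 0.30}

    def polygenic_risk_score(genotype):
        """genotype: dict mapping SNP id -> risk-allele count (0/1/2)."""
        return sum(beta * genotype.get(snp, 0)
                   for snp, beta in gwas_effect_sizes.items())

    child = {"rs1": 2, "rs2": 1, "rs3": 0}
    print(polygenic_risk_score(child))  # 2(0.12) + 1(-0.05) + 0(0.30) ≈ 0.19
    ```

    A real PRS sums over thousands to millions of SNPs, with weights taken from GWAS summary statistics after pruning and thresholding.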

  10. ⁠, Rebekah L. Rogers, Montgomery Slatkin (2017-01-24):

    Woolly mammoths (Mammuthus primigenius) populated Siberia, Beringia, and North America during the Pleistocene and early Holocene. Recent breakthroughs in ancient DNA sequencing have allowed for complete genome sequencing for two specimens of woolly mammoths. One mammoth specimen is from a mainland population 45,000 years ago, when mammoths were plentiful. The second, a 4,300-year-old specimen, is derived from an isolated population on Wrangel Island where mammoths subsisted with a small effective population size more than 43-fold lower than previous populations. These extreme differences in effective population size offer a rare opportunity to test nearly neutral models of genome architecture evolution within a single species. Using these previously published mammoth sequences, we identify deletions, retrogenes, and non-functionalizing point mutations. In the Wrangel Island mammoth, we identify a greater number of deletions, a larger proportion of deletions affecting gene sequences, a greater number of candidate retrogenes, and an increased number of premature stop codons. This accumulation of detrimental mutations is consistent with genomic meltdown in response to low effective population sizes in the dwindling mammoth population on Wrangel Island. In addition, we observe high rates of loss of olfactory receptors and urinary proteins, either because these loci are non-essential or because they were favored by divergent selective pressures in island environments. Finally, at the locus of FOXQ1 we observe two independent loss-of-function mutations, which would confer a satin coat phenotype in this island woolly mammoth.

    Author summary: We observe an excess of detrimental mutations, consistent with genomic meltdown in woolly mammoths on Wrangel Island just prior to extinction. We observe an excess of deletions, an increase in the proportion of deletions affecting gene sequences, and an excess of premature stop codons in response to evolution under low effective population sizes. Large numbers of olfactory receptors appear to have loss of function mutations in the island mammoth. These results offer genetic support within a single species for nearly-neutral theories of genome evolution. We also observe two independent loss of function mutations at the FOXQ1 locus, likely conferring a satin coat in this unusual woolly mammoth.


  12. ⁠, Amanda L. Pendleton, Feichen Shen, Angela M. Taravella, Sarah Emery, Krishna R. Veeramah, Adam R. Boyko, Jeffrey M. Kidd (2017-03-21):

    Dogs (Canis lupus familiaris) were domesticated from gray wolves between 20–40kya in Eurasia, yet details surrounding the process of domestication remain unclear. The vast array of phenotypes exhibited by dogs mirror numerous other domesticated animal species, a phenomenon known as the Domestication Syndrome. Here, we use signatures persisting in the dog genome to identify genes and pathways altered by the intensive selective pressures of domestication. We identified 37 candidate domestication regions containing 17.5Mb of genome sequence and 172 genes through whole-genome SNP analysis of 43 globally distributed village dogs and 10 wolves. Comparisons with three ancient dog genomes indicate that these regions reflect signatures of domestication rather than breed formation. Analysis of genes within these regions revealed a significant enrichment of gene functions linked to neural crest cell migration, differentiation and development. Genome copy number analysis identified regions of localized sequence and structural diversity, and discovered additional copy number variation at the amylase-2b locus. Overall, these results indicate that primary selection pressures targeted genes in the neural crest as well as components of the minor spliceosome, rather than genes involved in starch metabolism. Smaller jaw sizes, hairlessness, floppy ears, tameness, and diminished craniofacial development distinguish wolves from domesticated dogs, phenotypes of the Domestication Syndrome that can result from decreased neural crest cells at these sites. We propose that initial selection acted on key genes in the neural crest and minor splicing pathways during early dog domestication, giving rise to the phenotypes of modern dogs.



  15. 2017-tang.pdf: “CRISPR/Cas9-mediated gene editing in human zygotes using Cas9 protein”⁠, Lichun Tang



  18. ⁠, DeLiang Wang (IEEE Spectrum) (2016-12-06):

    The human auditory system can naturally pick out a voice in a crowded room, but creating a hearing aid that mimics that ability has stumped signal processing specialists, artificial intelligence experts, and audiologists for decades. British cognitive scientist Colin Cherry first dubbed this the “cocktail party problem” in 1953.

    More than six decades later, less than 25% of people who need a hearing aid actually use one…The global US $6 billion hearing aid industry is expected to grow at 6% every year through 2020…The greatest frustration among potential users is that a hearing aid cannot distinguish between, for example, a voice and the sound of a passing car if those sounds occur at the same time. The device cranks up the volume on both, creating an incoherent din.

    It’s time we solve this problem. To produce a better experience for hearing aid wearers, my lab at Ohio State University, in Columbus, recently applied machine learning based on deep neural networks to the task of segregating sounds. We have tested multiple versions of a digital filter that not only amplifies sound but can also isolate speech from background noise and automatically adjust the volumes of each separately. We believe this approach can ultimately restore a hearing-impaired person’s comprehension to match—or even exceed—that of someone with normal hearing. In fact, one of our early models boosted, from 10 to 90 percent, the ability of some subjects to understand spoken words obscured by noise. Because it’s not necessary for listeners to understand every word in a phrase to gather its meaning, this improvement frequently meant the difference between comprehending a sentence or not…Having demonstrated promising initial results with our early classification algorithms, we decided to take the next logical step—to improve the system so it could function in noisy real-world environments, and without training for specific noises and sentences. This challenge prompted us to try to do something that had never been done before: build a machine-learning program that would run on a neural network and separate speech from noise after undergoing a sophisticated training process. The program would use the ideal binary mask to guide the training of the neural network. And it worked. In a study involving 24 test subjects, we demonstrated that this program could boost the comprehension of hearing-impaired people by about 50 percent.

    …People in both groups showed a big improvement in their ability to comprehend sentences amid noise after the sentences were processed through our program. People with hearing impairment could decipher only 29% of words muddled by babble without the program, but they understood 84% after the processing. Several went from understanding only 10% of words in the original sample to comprehending around 90% with the program. There were similar gains for the steady-noise scenario with hearing-impaired subjects—they went from 36% to 82% comprehension. Even people with normal hearing were able to better understand noisy sentences, which means our program could someday help far more people than we originally anticipated. Listeners with normal hearing understood 37% of the words spoken amid steady noise without the program, and 80% with it. For the babble, they improved from 42% of words to 78 percent. One of the most intriguing results of our experiment came when we asked, Could people with hearing impairment who are assisted by our program actually outperform those with normal hearing? Remarkably, the answer is yes. Listeners with hearing impairment who used our program understood nearly 20% more words in the babble and about 15% more words in steady noise than those with normal hearing who relied solely on their own auditory system to separate speech from noise. With these results, our program built from deep neural networks has come the closest to solving the cocktail party problem of any effort to date.
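
    The ideal binary mask mentioned above is itself a simple construct: for each time-frequency unit of a spectrogram, output 1 if speech energy dominates noise energy and 0 otherwise. A toy sketch with hypothetical energy values (a simplification of what the lab’s system learns to approximate):

    ```python
    import math

    # Ideal binary mask (IBM): 1 where speech energy exceeds noise energy in a
    # time-frequency unit (by more than threshold_db decibels), else 0.
    # Applying the mask to a noisy mixture keeps only speech-dominated units.
    def ideal_binary_mask(speech, noise, threshold_db=0.0):
        return [[1 if 10 * math.log10(s / n) > threshold_db else 0
                 for s, n in zip(s_row, n_row)]
                for s_row, n_row in zip(speech, noise)]

    speech = [[4.0, 0.1], [9.0, 0.5]]   # hypothetical spectrogram energies
    noise  = [[1.0, 2.0], [1.0, 4.0]]
    print(ideal_binary_mask(speech, noise))  # [[1, 0], [1, 0]]
    ```

    The trained network never sees the clean speech at test time; it learns to predict this mask from the noisy mixture alone.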

  19. ⁠, Sercan O. Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi (2017-02-25):

    We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise. Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400× speedups over existing implementations.

  20. ⁠, David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, Brian McWilliams (2017-02-28):

    A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway networks and residual networks perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth, resulting in gradients that resemble white noise, whereas, in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new “looks linear” (LL) initialization that prevents shattering, with preliminary experiments showing the new initialization allows training very deep networks without the addition of skip-connections.
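
    The “looks linear” idea can be checked in miniature: under a concatenated ReLU φ(x) = (relu(x), relu(−x)) with mirrored weights (W, −W), a layer computes W·relu(x) − W·relu(−x) = W·x, i.e. it is exactly linear at initialization. A sketch under those assumptions (not the authors’ code):

    ```python
    # "Looks linear" (LL) initialization in miniature: with CReLU activations
    # (relu(x), relu(-x)) and mirrored weights (W, -W), a layer computes
    # W*relu(x) - W*relu(-x) = W*x, so the network is exactly linear at
    # initialization and its gradients cannot shatter until training begins.
    def relu(v):
        return [max(0.0, x) for x in v]

    def ll_layer(W, x):
        pos, neg = relu(x), relu([-xi for xi in x])
        return [sum(w * p for w, p in zip(row, pos)) -
                sum(w * n for w, n in zip(row, neg))
                for row in W]

    def linear(W, x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]

    W = [[0.5, -1.0], [2.0, 0.25]]
    x = [3.0, -4.0]
    print(ll_layer(W, x) == linear(W, x))  # True: exactly linear at init
    ```

    Once training perturbs the two mirrored copies independently, the layer becomes nonlinear, but it starts from a well-conditioned linear map.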

  21. FC

  22. ⁠, Alexander Pritzel, Benigno Uria, Sriram Srinivasan, Adrià Puigdomènech, Oriol Vinyals, ⁠, Daan Wierstra, Charles Blundell (2017-03-06):

    Deep reinforcement learning methods attain super-human performance in a wide range of environments. Such methods are grossly inefficient, often taking orders of magnitude more data than humans to achieve reasonable performance. We propose Neural Episodic Control: a deep reinforcement learning agent that is able to rapidly assimilate new experiences and act upon them. Our agent uses a semi-tabular representation of the value function: a buffer of past experience containing slowly changing state representations and rapidly updated estimates of the value function. We show across a wide range of environments that our agent learns significantly faster than other state-of-the-art, general purpose deep reinforcement learning agents.
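
    The semi-tabular value buffer can be pictured as a k-nearest-neighbor lookup over stored (state-embedding, value) pairs: writing is an append, reading is a distance-weighted average. A toy 1-D sketch (hypothetical values, not DeepMind’s implementation):

    ```python
    # Toy episodic value buffer: store (state_embedding, Q-value) pairs and
    # estimate Q(s) as a distance-weighted average of the k nearest stored
    # states -- fast to update (append) and fast to read (lookup).
    class EpisodicBuffer:
        def __init__(self, k=2):
            self.k = k
            self.memory = []  # list of (embedding, value) pairs

        def write(self, embedding, value):
            self.memory.append((embedding, value))

        def estimate(self, embedding):
            nearest = sorted(self.memory,
                             key=lambda m: abs(m[0] - embedding))[:self.k]
            weights = [1.0 / (abs(e - embedding) + 1e-3) for e, _ in nearest]
            return (sum(w * v for w, (_, v) in zip(weights, nearest))
                    / sum(weights))

    buf = EpisodicBuffer(k=2)
    buf.write(0.0, 1.0)
    buf.write(1.0, 3.0)
    buf.write(10.0, -5.0)
    print(buf.estimate(0.5))  # equidistant from the two nearest: 2.0
    ```

    The real agent uses learned convolutional embeddings and a differentiable kernel-weighted lookup, but the read/write asymmetry is the same.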


  24. ⁠, Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas (2017-03-10):

    PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; 𝑂(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially.

    In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup—𝑂(log N) sampling instead of 𝑂(N)—enabling the practical generation of 512×512 images.

    We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.

  25. ⁠, Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, Jiwon Kim (2017-03-15):

    While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for the official implementation is publicly available at https://github.com/SKTBrain/DiscoGAN


  27. ⁠, Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, Babak Hodjat (2017-03-01):

    The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is a promising approach to constructing deep learning applications in the future.

  28. ⁠, Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin (2017-03-03):

    Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.
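
    The select-and-mutate skeleton of such evolutionary searches can be illustrated without any network training; here “models” are short integer vectors and the fitness function is a stand-in objective (my illustrative choices, not the paper’s operators):

    ```python
    import random

    # Toy evolutionary architecture search: a "model" is a short vector of
    # integer hyperparameters, fitness is a stand-in objective, and each step
    # copies a two-way tournament winner with one random mutation, replacing
    # the current worst individual (select/mutate skeleton only; no training).
    random.seed(0)

    def fitness(model):
        target = [3, 7, 5]                       # hypothetical optimum
        return -sum((a - b) ** 2 for a, b in zip(model, target))

    population = [[random.randint(0, 9) for _ in range(3)] for _ in range(10)]
    initial_best = max(map(fitness, population))

    for _ in range(200):
        a, b = random.sample(population, 2)      # tournament of two
        child = list(a if fitness(a) >= fitness(b) else b)
        child[random.randrange(3)] = random.randint(0, 9)   # mutate one "gene"
        worst = min(range(len(population)),
                    key=lambda i: fitness(population[i]))
        population[worst] = child

    best = max(population, key=fitness)
    print(initial_best, "->", fitness(best))     # best fitness never decreases
    ```

    The paper’s version replaces the integer “genes” with trainable network architectures and the stand-in fitness with validation accuracy after training, at enormous computational cost.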

  29. ⁠, Tao Wei, Changhu Wang, Yong Rui, Chang Wen Chen (2016-03-05):

    We present in this paper a systematic study on how to morph a well-trained neural network to a new one so that its network function can be completely preserved. We define this as network morphism in this research. After morphing a parent network, the child network is expected to inherit the knowledge from its parent network and also has the potential to continue growing into a more powerful one with much shortened training time. The first requirement for this network morphism is its ability to handle diverse morphing types of networks, including changes of depth, width, kernel size, and even subnet. To meet this requirement, we first introduce the network morphism equations, and then develop novel morphing algorithms for all these morphing types for both classic and convolutional neural networks. The second requirement for this network morphism is its ability to deal with non-linearity in a network. We propose a family of parametric-activation functions to facilitate the morphing of any continuous non-linear activation neurons. Experimental results on benchmark datasets and typical neural networks demonstrate the effectiveness of the proposed network morphism scheme.
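
    The defining invariant, that the morphed child computes exactly the parent’s function, can be demonstrated in miniature with a width morphism: duplicate a hidden unit and split its outgoing weight between the copies (a Net2Net-style sketch, not the paper’s algorithm):

    ```python
    # Function-preserving widening: duplicate hidden unit 0 and split its
    # outgoing weight between the two copies; the widened two-layer network
    # computes exactly the same function as the parent for every input.
    def relu(v):
        return [max(0.0, x) for x in v]

    def forward(W1, W2, x):
        h = relu([sum(w * xi for w, xi in zip(row, x)) for row in W1])
        return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

    W1 = [[1.0, -2.0], [0.5, 0.5]]       # parent: 2 hidden units
    W2 = [[2.0, -1.0]]                   # parent: 1 output unit

    # Morph: copy hidden unit 0, halve its outgoing weight across the copies.
    W1_wide = [W1[0]] + W1               # child: 3 hidden units
    W2_wide = [[W2[0][0] / 2, W2[0][0] / 2, W2[0][1]]]

    x = [3.0, 1.0]
    print(forward(W1, W2, x), forward(W1_wide, W2_wide, x))  # [0.0] [0.0]
    ```

    The child inherits the parent’s function exactly and can then be trained further, which is what makes the shortened-training-time claim plausible.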

  30. ⁠, Ben Garfinkel, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, ⁠, Andrew Snyder-Beattie, Max Tegmark (2017-03-31):

    In recent years, a number of prominent computer scientists, along with academics in fields such as philosophy and physics, have lent credence to the notion that machines may one day become as large as humans. Many have further argued that machines could even come to exceed human size by a significant margin. However, there are at least seven distinct arguments that preclude this outcome. We show that it is not only implausible that machines will ever exceed human size, but in fact impossible.


  32. ⁠, Denes Szucs, John P. A. Ioannidis (2017-02-06):

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically-significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.

    Author summary:

    Biomedical science, psychology, and many other fields may be suffering from a serious replication crisis. In order to gain insight into some factors behind this crisis, we have analyzed statistical information extracted from thousands of cognitive neuroscience and psychology research papers. We established that the statistical power to discover existing relationships has not improved during the past half century. A consequence of low statistical power is that research studies are likely to report many false positive findings. Using our large dataset, we estimated the probability that a finding is false (called false report probability). With some reasonable assumptions about how often researchers come up with correct hypotheses, we conclude that more than 50% of published findings deemed to be statistically-significant are likely to be false. We also observed that cognitive neuroscience studies had higher false report probability than psychology studies, due to smaller sample sizes in cognitive neuroscience. In addition, the higher the impact factors of the journals in which the studies were published, the lower was the statistical power. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
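
    The false report probability follows from Bayes’ rule given a prior probability that a tested hypothesis is true, the power, and the significance level α. A quick check using the paper’s median power for medium effects (0.44) and an assumed 1-in-10 prior (the prior is my illustrative choice, within the paper’s “realistic range”):

    ```python
    # False report probability (FRP): among nominally significant results, the
    # expected fraction that are false positives, by Bayes' rule:
    #   P(sig & H0) = alpha * (1 - prior);  P(sig & H1) = power * prior
    def false_report_probability(prior, power, alpha=0.05):
        false_pos = alpha * (1 - prior)
        true_pos = power * prior
        return false_pos / (false_pos + true_pos)

    # Median power 0.44 (the paper's estimate for medium effects), with an
    # assumed 1-in-10 prior that a tested hypothesis is true:
    frp = false_report_probability(prior=0.10, power=0.44)
    print(round(frp, 2))  # 0.51: already over half
    ```

    Lower power or a lower prior pushes the FRP higher, which is why the small samples documented above translate directly into a mostly-false significant literature.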

  33. 2017-fanelli.pdf




  37. ⁠, Weisse, Allen B (2012):

    Although experimentation involving human volunteers has attracted intense study, the matter of self-experimentation among medical researchers has received much less attention. Many questions have been answered only in part, or have been left unanswered. How common is this practice? Is it more common among certain nationalities? What have been the predominant medical fields in which self-experimentation has occurred? How dangerous an act has this proved to be? What have been the trends over time? What is the future likely to bring? From the available literature, I identified and analyzed 465 documented instances of this practice, performed over the course of the past 2 centuries. Most instances occurred in the United States. The peak of self-experimentation occurred in the first half of the 20th century. Eight deaths were recorded. A number of the investigators enjoyed successful careers, including the receipt of Nobel Prizes. Although self-experimentation by physicians and other biological scientists appears to be in decline, the courage of those involved and the benefits to society cannot be denied.

  38. 2011-08-14-williamyu-mentalrngs.html





  43. 1986-dietz.pdf: ⁠, Mary G. Dietz (1986-09-01; history):

    Machiavelli’s most famous political work, The Prince⁠, was a masterful act of political deception. I argue that Machiavelli’s intention was a republican one: to undo Lorenzo de’ Medici by giving him advice that would jeopardize his power, hasten his overthrow, and allow for the resurgence of the Florentine republic.

    This interpretation returns The Prince to its specific historical context. It considers Machiavelli’s advice to Lorenzo on where to reside, how to behave, and whom to arm in light of the political reality of 16th-century Florence. Evidence external to The Prince, including Machiavelli’s other writings and his own political biography, confirms his anti-Medicean sentiments, his republican convictions, and his proclivity for deception.

    Understanding The Prince as an act of political deception continues a tradition of reading Machiavelli as a radical republican. Moreover, it overcomes the difficulties of previous republican interpretations, and provides new insight into the strategic perspective and Renaissance artistry Machiavelli employed as a theoretician.

    [See also “On the Pedagogical Motive for Esoteric Writing [in Western Philosophy]”⁠, Melzer 2007; ⁠.]



  46. 1997-gottfredson.pdf

  47. ⁠, Brian Moriarty (1999-03-17):

    [March 1999 talk (video) by video game designer Brian Moriarty on conspiracy theories, gamification, art, and human psychology, in the same vein as his “The Secret of Psalm 46” talk.

    Moriarty discusses the “Paul is dead” conspiracy theory: that Paul McCartney of the Beatles has in fact been dead for the past 54 years, replaced by a lookalike to keep the Beatles media empire going.

    The theory began as a rumor, and spread through early Beatlemania forums among young obsessive students, who began analyzing songs (playing them backwards as necessary) to discover hidden messages and developing an elaborate symbolic mythology in which the lookalike & the Beatles themselves are held to covertly allude to the coverup through coded messages (possibly out of guilt): the positioning of stars, garbled lyrics, which hand a cigarette is held in, hands held up as benedictions/wardings, scenes interpreted as funeral processions, and so on. No amount of denials or interviews with Paul McCartney could kill the theory. Most of these ‘clues’ can be debunked, given the wealth of documentation about the most minute details of the production of Beatles albums. A few oddities remain, but Moriarty suggests they are covert messages or allusions to other things, like the ‘walrus’ references, and may even have been the Beatles playing along with the theorists! What is the point of discussing this? See QAnon:]

    This silly event, which happened way back when I was a kid, made a really big impression on me. It was so eerie, so deliciously creepy. And so… consuming! Clue hunting occupied me and my friends constantly for nearly six weeks! It was all we ever talked about! We spent every school night and entire weekends going over every square millimeter of these five records. We destroyed every copy we had, spinning them backwards on our cheap record players. It drove our parents nuts! “Turn me on, dead man! Turn me on, dead man!” And they hated it even more when they heard it again on the evening news!

    I can’t remember the last time I had so much fun.

    And, although I didn’t appreciate it at the time, something wonderful happened as we scoured these records, backwards and forwards, line by line. We memorized them. “Who Buried Paul?” is one of the best games I ever played. This ridiculous rumor sucked my entire generation into a massively multiplayer adventure. A morbid treasure hunt in which accomplices were connected by word-of-mouth, college newspapers, the alternative press and underground radio. We can only wonder what would happen if something like this were to happen today, in the age of the World Wide Web. Imagine how such a thing might get started, by accident…

    …put something like this in front of people, and all kinds of evocative coincidences become likely. Why is this useful for us as entertainers? Because that moment when you peer into the mirror of chaos and discover yourself is satisfying in a uniquely personal sense. You get a little oomph when you make a connection that way. Those little oomphs are what make good stories and puzzles and movies so compelling. And those little oomphs are what made the Paul-is-dead rumor so much fun…Let your players employ their own imaginative intelligence to fill in the gaps in your worlds you can’t afford to close. Chances are, they’ll paint the chaos in exactly the colors they want to see. What’s more, they’ll enjoy themselves doing it. But the credit will be yours.







  54. 1940-sciam-harrington-nuclearweapons-dontworryitcanthappen.pdf: {#linkBibliography-american-1940 .docMetadata doi="10.2307/24988773"}, Jean Harrington (Scientific American) (1940-05-01; existential-risk):

    …Early last summer, in the midst of all this research, a chilly sensation began tingling up and down the spines of the experimenters. These extra neutrons that were being erupted—could they not in turn become involuntary bullets, flying from one exploding uranium nucleus into the heart of another, causing another fission which would itself cause still others? Wasn’t there a dangerous possibility that the uranium would at last become explosive? That the samples being bombarded in the laboratories at Columbia University, for example, might blow up the whole of New York City? To make matters more ominous, news of fission research from Germany, plentiful in the early part of 1939, mysteriously and abruptly stopped for some months. Had government censorship been placed on what might be a secret of military importance? The press and populace, getting wind of these possibly lethal goings-on, raised a hue and cry. Nothing daunted, however, the physicists worked on to find out whether or not they would be blown up, and the rest of us along with them. Now, a year after the original discovery, word comes from Paris that we don’t have to worry.

    …With typical French—and scientific—caution, they added that this was perhaps true only for the particular conditions of their own experiment, which was carried out on a large mass of uranium under water. But most scientists agreed that it was very likely true in general.

    …Readers made insomnious by “newspaper talk” of terrific atomic war weapons held in reserve by dictators may now get sleep.

  55. 2017-kretchun.pdf: ⁠, Nat Kretchun, Catherine Lee, Seamus Tuohy (2017-02-01; technology):

    In 2012, “A Quiet Opening: North Koreans in a Changing Media Environment” described the effects of the steady dissolution of North Korea’s information blockade. Precipitated by the collapse of the state economy during the famine of the 1990s, North Korea’s once strict external and internal controls on the flow of information atrophied as North Korean citizens traded with one another, and goods and people flowed across the border with China. Activities unthinkable in Kim Il Sung’s day became normalized, even if many remained technically illegal. A decade into the 21st century, North Korea was no longer perfectly sealed off from the outside world and its citizens were much more connected to each other. Continued research suggests that many of the trends toward greater information access and sharing detailed in “A Quiet Opening” persist today. Yet, over the last four years, since Kim Jong Un’s emergence as leader, the picture has become more complicated.

    It is tempting to view the dynamics surrounding media access and information flow in North Korea as a simple tug-of-war: North Korean citizens gain greater access to a broader range of media and communication devices, and unsanctioned content. The North Korean government, realizing this, responds through crackdowns in an attempt to reconstitute its blockade on foreign information and limit the types of media and communication devices its citizens can access. However, the reality is not so neatly binary. As the North Korean economic situation rebounded after the famine and achieved relative stability, authorities developed strategies to establish new, more modern forms of control within an environment that was fundamentally altered from its pre-famine state.

    Among the most important trends to emerge in the North Korean information environment under Kim Jong Un is the shift toward greater media digitization and the expansion of networked communications. The state has ceded, and now sanctions, a considerably greater level of interconnectedness between private North Korean citizens. This, at least in part, may be an acknowledgement that the market economy in North Korea is here to stay, and thus the communications channels that enable the processes of a market economy must be co-opted and supported rather than rolled back. Although the government continues to make efforts to monitor communications and dictate what subjects are off-limits, it is allowing average citizens far greater access to communications technologies. Greater digitization and digital network access are already having profound effects on the basic dynamics and capabilities that define the information space in North Korea.

    The expansion and catalyzation of person-to-person communication through mobile phones and other networked digital technologies is in many ways a promising development. However, as this report will document, from both a user and technical perspective, expanding network connectivity to a broad swath of the population is arming the North Korean government with a new array of censorship and surveillance tools that go beyond what is observed even in other authoritarian states or closed media environments. It is clear that the state’s information control strategy, while changing, is not ad hoc or ill-considered. Recent technological innovations and policy changes, on balance, may be giving the North Korean government more control than they are ceding.

    Data Sources: This study primarily draws from:

    • The 2015 Broadcasting Board of Governors (BBG) Survey of North Korea Refugees, Defectors and Travelers (n = 350)
    • A qualitative study comprised of 34 interviews with specifically recruited recent defectors conducted in May and June of 2016 specifically for this report
    • Technical analyses of available North Korean software and hardware

    [The details on NK use of digital censorship are interesting: steady progress in locking down Bluetooth and WiFi by software and then hardware modifications; use of the Android security system/DRM to install audit logs + regular screenshots to capture foreign media consumption; a whitelist/signed-media system to block said foreign media from ever being viewed, with auto-deletion of offending files; watermarking (courtesy of an American university’s misguided outreach) of media created on desktops to trace them; network monitoring and surveillance; and efforts towards automatic bulk surveillance of text messages for ‘South-Korean-style’ phrases/​words. Stallman’s warnings about DRM are quite prophetic in the NK context—the system is secured against the user…For these reasons & poverty, radio (including foreign radios like Voice of America) is—surprisingly to me—the top source of information for North Koreans.]
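    The signed-media whitelist the report describes can be illustrated with a minimal sketch: approved media carries a cryptographic signature from the sanctioning authority, and the player refuses anything unsigned. (This is a hypothetical illustration of the general technique only; the key name, HMAC construction, and all function names here are my assumptions, not details of the actual North Korean scheme, which the report describes at the level of file formats and OS integration.)

    ```python
    import hmac, hashlib

    STATE_KEY = b"state-issued-device-key"  # hypothetical per-device secret held by the OS/DRM layer

    def sign(media_bytes: bytes) -> bytes:
        """Signature the sanctioning authority attaches to approved media."""
        return hmac.new(STATE_KEY, media_bytes, hashlib.sha256).digest()

    def is_approved(media_bytes: bytes, signature: bytes) -> bool:
        """Player-side check: only signed (whitelisted) media may play;
        anything failing the check would be blocked or auto-deleted."""
        return hmac.compare_digest(sign(media_bytes), signature)

    approved = b"state broadcast"
    foreign = b"smuggled foreign drama"
    tag = sign(approved)

    print(is_approved(approved, tag))  # True: plays
    print(is_approved(foreign, tag))   # False: blocked
    ```

    The key point is that such a scheme requires no list of banned files: anything not affirmatively signed is rejected by default, which is why the report characterizes it as a whitelist rather than a blacklist.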










  65. ⁠, Lim, Megan S. C.; Hellard, Margaret E.; Aitken, Campbell K. (2005):

    Objectives: To determine the overall rate of loss of workplace teaspoons and whether attrition and displacement are correlated with the relative value of the teaspoons or type of tearoom.

    Design: Longitudinal cohort study.

    Setting: Research institute employing about 140 people.

    Subjects: 70 discreetly numbered teaspoons placed in tearooms around the institute and observed weekly over five months.

    Main Outcome Measures: Incidence of teaspoon loss per 100 teaspoon years and teaspoon half life.

    Results: 56 (80%) of the 70 teaspoons disappeared during the study. The half life of the teaspoons was 81 days. The half life of teaspoons in communal tearooms (42 days) was significantly shorter than for those in rooms associated with particular research groups (77 days). The rate of loss was not influenced by the teaspoons’ value. The incidence of teaspoon loss over the period of observation was 360.62 per 100 teaspoon years. At this rate, an estimated 250 teaspoons would need to be purchased annually to maintain a practical institute-wide population of 70 teaspoons.

    Conclusions: The loss of workplace teaspoons was rapid, showing that their availability, and hence office culture in general, is constantly threatened.
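    The replacement estimate follows directly from the reported incidence; a quick sanity check of the arithmetic (mine, not the paper's):

    ```python
    import math

    # Reported incidence: 360.62 losses per 100 teaspoon-years,
    # applied to a standing population of 70 teaspoons.
    RATE = 360.62 / 100   # losses per teaspoon-year
    POPULATION = 70

    annual_losses = RATE * POPULATION
    print(round(annual_losses))  # 252, consistent with the paper's ~250/year

    # Cross-check against the reported overall half-life of 81 days,
    # treating loss as exponential decay (lambda = ln 2 / half-life):
    lam = math.log(2) / 81 * 365.25   # losses per teaspoon-year
    print(round(lam * 100, 1))        # 312.6 per 100 teaspoon-years, same order
    ```

    The two figures need not agree exactly, since the incidence is a raw rate over the observation period while the half-life comes from fitting a survival curve, but they are of the same order, which is the check that matters.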



  68. Books#possible-worlds-haldane-2001


  70. Books#berkshire-hathaway-letters-to-shareholders-buffett-2013

  71. Anime#the-wind-rises