The conventional theory about the origin of the state is that the adoption of farming increased land productivity, which led to the production of food surplus. This surplus was a prerequisite for the emergence of tax-levying elites and, eventually, states.
We challenge this theory and propose that hierarchy arose as a result of the shift to dependence on appropriable cereal grains.
Our empirical investigation, utilizing multiple data sets spanning several millennia, demonstrates a causal effect of the cultivation of cereals on hierarchy, without finding a similar effect for land productivity.
We further support our claims with several case studies.
Cereal grains can be stored and, because they are harvested seasonally, have to be stored so that they can be drawn on for year-round subsistence. The relative ease of confiscating stored cereals, their high energy density, and their durability enhance their appropriability, thereby facilitating the emergence of tax-levying elites. Roots and tubers, in contrast, are typically perennial and do not have to be reaped in a particular period, and once harvested they are rather perishable.
In the mid-thirteenth century, England used only a single coin, the silver penny. The flow of coins into and out of the government’s treasury was recorded in the rolls of the Exchequer of Receipt. These receipt and issue rolls have been largely ignored, compared to the pipe rolls, which were records of audit. Some more obscure records, the memoranda of issue, help to show how the daily operations of government finance worked, when cash was the only medium available. They indicate something surprising: the receipt and issue rolls do not necessarily record transactions which took place during the periods they nominally cover. They also show that the Exchequer was experimenting with other forms of payment, using tally sticks, several decades earlier than was previously known. The rolls and the tallies indicate that the objectives of the Exchequer were not, as we would now expect, concerned with balancing income and expenditure, drawing up a budget, or even recording cash flows within a particular year. These concepts were as yet unknown. Instead, the Exchequer’s aim was to ensure the accountability of officials, its own and those in other branches of government, by allocating financial responsibility to individuals rather than institutions.
In this paper we study the long-run effects of the 1959–61 Chinese Famine on mental health outcomes. We focus on cohorts that were born during the famine and examine their mental health as adults, when they are roughly 55 years of age.
We find that early-life exposure to the famine has a large, statistically-significant negative impact on women’s mental health, while the effect on men is limited. This gender differential likely arises because male fetuses experience stronger natural selection than female fetuses, which implies that in the longer run surviving females may exhibit larger detrimental effects of early-life famine exposure.
Thus, the observed effects are a composite of 2 well-established factors: survival of the fittest and the Fetal Origins hypothesis.
“The pretexts used by the Spaniards for enslaving the New World were extremely curious”, George notes; “the propagation of the Christian religion was the first reason, the next was the [Indigenous] Americans differing from them in colour, manners and customs, all of which are too absurd to take the trouble of refuting.” As for the European practice of enslaving Africans, he wrote, “the very reasons urged for it will be perhaps sufficient to make us hold such practice in execration.”
George never owned slaves himself, and he gave his assent to the legislation that abolished the slave trade in Britain in 1807. By contrast, no fewer than 41 of the 56 signatories to the Declaration of Independence were slave owners.
It was the Declaration that established the myth that George III was a tyrant. Yet George was the epitome of a constitutional monarch, deeply conscientious about the limits of his power. He never vetoed a single Act of Parliament, nor did he have any hopes or plans to establish anything approaching tyranny over his American colonies, which were among the freest societies in the world at the time of the American Revolution: Newspapers were uncensored, there were rarely troops in the streets and the subjects of the 13 colonies enjoyed greater rights and liberties under the law than those of any comparable European country of the day.
George III’s generosity of spirit came as a surprise to me as I researched in the Royal Archives, which are housed in the Round Tower at Windsor Castle. Even after George Washington defeated George’s armies in the War of Independence, the king referred to Washington in March 1797 as “the greatest character of the age”, and when George met John Adams in London in June 1785, he told him, “I will be very frank with you. I was the last to consent to the separation [between England and the colonies]; but the separation having been made, and having become inevitable, I have always said, and I say now, that I would be the first to meet the friendship of the United States as an independent power.”
Political theorists often turn to 17th-century England and the Levellers as sources of egalitarian insight. Yet by the time the Levellers were active, the claim that human beings were “equal” by nature was commonplace. Why, in Leveller hands, did a long-standing piety consistent with social hierarchy become suddenly effectual?
Inspired by Elizabeth Anderson, this article explores what equality—and the related concept of parity—meant for the Levellers, and what “the point”, as they saw it, was.
I argue that the Levellers’ key achievement was subsuming a highly controversial premise of natural parity within the existing language of natural equality.
This suggests that modern basic equality is the product of 2 potentially contradictory principles. This, in turn, has important normative, as well as historical and conceptual, implications for how theorists understand “the point” of equality for egalitarian movements today.
…Before the 17th century, the concept of equality as applied to human beings expressed primarily a principle of their indifference in God’s eyes and under natural law. The idea that one might enjoy a distinctive status or dignity entitled to respect was conveyed by another concept. Whereas equality applied to relations of quantity or quality, parity operated in the domain of value to describe a relation of equivalence between things that might, despite their differences, be treated “on a par.” In early modern English, parity was primarily a social concept closely associated with the division of society into 2 classes: Peers, who were “accounted” as worthy by birth, and Commoners, who were not.
That the Levellers and their contemporaries had two terms where modern egalitarians have one helps explain why we struggle to make sense of what these “early egalitarians” were up to. I argue that Lilburne and his colleagues, under pressure from critics, subsumed a highly controversial idea of natural (as opposed to social) parity under the altogether less controversial premise of natural equality. They thereby transformed a benignly formal observation of species (eg. “all men are equally human”) into an assertion of shared worthiness (“all men should be treated on a par”). The “point” of equality for the Levellers was thus that it provided a less controversial language with which to claim parity with their erstwhile “betters.”
Still, even as the Leveller premise of natural parity rejected the existence of any natural distinctions of inferiority and superiority between human beings, it nevertheless accepted the existence of natural differences between them—including the difference between the sexes—on the basis of which they justified the differential (ie. unequal) distributions of rights. As critics like Cromwell pointed out, natural equality-as-parity thus tacitly preserved a hierarchical ordering between different kinds of person that continued to make “superior” rank worth having—as in the Levellers’ implicit distinction between those who would be treated as high-status “peers” in their society of pares (born free, English, and male), and those who would remain low-status “equals” (bondsmen, “strangers”, and women).
[An apple scoop carved from a sheep’s tibia, European, 19th century. Science Museum]
These tools may look rough, but in the right hands they could be surprisingly precise. A British country magazine from 1958 contains this account of a man describing how his mother used hers: With a scoop in one hand, and an apple in the other, she would carve away the fruit’s flesh until nothing was left but a hollow skin, which would “crumple in the hand like paper.”
Yes, these were apple scoops, and their purpose was quite practical: In the days before widely accessible dentures, they allowed the elderly and toothless to enjoy fresh apples without straining their remaining teeth.
The scoops date as far back as the 1600s, and were used through the early 1900s…For this reason, dentures were not usually an option for the working class. Apple scoops, in contrast, were crafted from the most accessible of materials: sheep bones. And they could be easily made at home. John Clare, the quintessential poet of English rural life, describes shepherds whittling away at sheep bones while waiting out a storm.
…For rural people, these bone scoops were part of an apple-centric way of life. Henry Bull’s The Herefordshire Pomona, an 1876 encyclopedia of English apples, presents a world in which apples mark each step of the yearly festive calendar, from the blessing of the new apples on St. James Day, to wassailing in the apple orchard on Twelfth Night, with no shortage of stops in between: On St. Simon and St. Jude’s Day, young women tossed apple shavings over their shoulders in hopes that the peels would land in the shape of their future husband’s first initial. Halloween meant snap-apple, a game played by constructing a kind of chandelier with an apple on one end and a lit candle on the other. Once you set it swinging, the objective was to grab a bite without being burned. Perhaps the most appetizing tradition was “lamb’s wool”, a dish made by steaming apples on a string above a pot of hot ale until they melt into a cloud of white froth—a good solution for any apple-lover lacking both teeth and sheep-bone scoops.
Yet in the few decades between Clare’s poem and Bull’s encyclopedia, apple scoops began to vanish. As Bull wistfully recalls, “Some 50 or 60 years ago, apple-scoops were in general use, and were even placed on the dessert table with a dish of apples, as crackers are with nuts… but the fashion has changed, and it is rare now to meet with one of the old bone scoops, and still more rare to see any person scooping an apple in the old-fashioned way.”
Why did the July 1914 crisis—but not crises in 1905, 1908–9, 1911, and 1912–13—escalate to great-power war despite occurring under similar international and domestic conditions? Explanations based on underlying and slowly changing structural, social, or cultural variables cannot answer this question.
Examining 3 Balkan crises of 1912–13 and the July Crisis, we refine realist explanations based on power, alliances, and reputational interests by incorporating the impact of changing power distributions and alliances in the Balkans on the great-power security system. A more complete answer to the why-1914-but-not-before question, however, requires incorporating Franz Ferdinand’s assassination, which was more than a mere pretext for war. It eliminated the most powerful and effective proponent of peace in Vienna and fundamentally changed the nature of the decision-making process in Austria-Hungary.
Counterfactually, we argue that an otherwise similar crisis with Franz Ferdinand still alive would probably have ended differently.
I think whaling is really cool. I can’t help it. It’s one of those things like guns and war and space colonization which hits the adventurous id. The idea that people used to go out in tiny boats into the middle of oceans and try to kill the biggest animals to ever exist on planet earth with glorified spears to extract organic material for fuel is awesome. It’s like something out of a fantasy novel.
So I embarked on this project to understand everything I could about whaling. I wanted to know why burning whale fat in lamps was the best way to light cities for about 50 years. I wanted to know how profitable whaling was, what the hunters were paid, and how many whaleships were lost at sea. I wanted to know why the classical image of whaling was associated with America and what other countries have whaling legacies. I wanted to know if the whaling industry wiped out the whales and if they can recover.
…Fun Fact 1: Right whale testicles make up 1% of their weight, so each testicle weighs around 700 pounds. The average American eats 222 pounds of meat per year (not counting fish), so a single right whale testicle should cover a family of 4 for almost a year.
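The fun-fact arithmetic checks out; a quick back-of-the-envelope script, using only the figures quoted above, shows one testicle covers a 4-person household’s meat consumption for about 9.5 months:

```python
# Back-of-the-envelope check of the testicle-vs-meat-consumption claim,
# using only the figures quoted in the text above.
WHALE_TESTICLE_LB = 700    # one right whale testicle
MEAT_PER_PERSON_LB = 222   # average American, per year, excluding fish
FAMILY_SIZE = 4

family_per_year = MEAT_PER_PERSON_LB * FAMILY_SIZE   # 888 lb of meat/year
months = WHALE_TESTICLE_LB / family_per_year * 12    # ~9.5 months covered
print(round(months, 1))  # → 9.5
```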
[cf. Tetlock, Risi et al 2019] Effective management of global crises relies on expert judgment of their societal effects. How accurate are such judgments?
In the spring of 2020, we asked behavioral scientists (n = 717) and lay Americans (n = 394) to make predictions about COVID-19 pandemic-related societal change across social and psychological domains. Six months later we obtained retrospective assessments for the same domains (n = 270 scientists; n = 411 lay people). Scientists and lay people were equally inaccurate in judging COVID’s impact, both in prospective predictions and retrospective assessments. Across studies and samples, estimates of the magnitude of change were off by more than 20% and less than half of participants accurately predicted the direction of changes. Critically, these insights go against public perceptions of behavioral scientists’ ability to forecast such changes (n = 203): behavioral scientists were considered most likely to accurately predict societal change and most sought after for recommendations across a wide range of professions.
Taken together, we find that behavioral scientists and lay people fared poorly at predicting the societal consequences of the pandemic and misperceived the effects it had already had.
There are two puzzles surrounding the Pleiades, or Seven Sisters. First, why are the mythological stories surrounding them, typically involving 7 young girls being chased by a man associated with the constellation Orion, so similar in vastly separated cultures, such as the Australian Aboriginal cultures and Greek mythology? Second, why do most cultures call them “Seven Sisters” even though most people with good eyesight see only 6 stars? Here we show that both these puzzles may be explained by a combination of the great antiquity of the stories combined with the proper motion of the stars, and that these stories may predate the departure of most modern humans out of Africa around 100,000 BC.
[Keywords: Aboriginal astronomy, ethnoastronomy, history of astronomy]
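The geometry behind the proper-motion argument is simple enough to sketch. The numbers below are purely illustrative placeholders, not the paper’s measured astrometric values: assuming a pair of stars some arcminutes apart today and slowly closing, their separation 100,000 years ago would have been wider, and potentially above the naked eye’s roughly 1–2 arcminute resolution limit:

```python
# Toy illustration of separation changing under relative proper motion.
# All numeric inputs here are hypothetical, chosen only for scale; the
# paper's argument rests on actual measured proper motions of the stars.
MAS_PER_ARCMIN = 60_000.0  # 1 arcminute = 60,000 milliarcseconds

def separation_then(sep_now_arcmin: float, closing_pm_mas_yr: float,
                    years_ago: float) -> float:
    """Angular separation `years_ago` years in the past, in arcminutes,
    assuming a constant relative proper motion bringing the pair closer
    (so the pair was wider in the past)."""
    drift_arcmin = closing_pm_mas_yr * years_ago / MAS_PER_ARCMIN
    return sep_now_arcmin + drift_arcmin

# Hypothetical: 5' apart today, closing at 3 mas/yr, 100,000 years ago:
print(separation_then(5.0, 3.0, 100_000))  # → 10.0 arcminutes
```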
Narratives of ecocide, in which a society fails due to self-inflicted ecological disaster, have been broadly applied to many major archaeological sites based on the expected environmental consequences of known land-use practices of people in the past. Ecocide narratives often become accepted in a discourse despite a lack of direct evidence that the hypothesized environmental consequences of land-use practices occurred.
Cahokia Mounds, located in a floodplain of the central Mississippi River Valley, is one such major archaeological site where untested narratives of ecocide have persisted. The wood-overuse hypothesis suggests that tree clearance in the uplands surrounding Cahokia led to erosion, causing increasingly frequent and unpredictable floods of the local creek drainages in the floodplain where Cahokia Mounds was constructed.
Recent archaeological excavations conducted around a Mississippian Period (AD 1050–1400) earthen mound in the Cahokia Creek floodplain show that the Ab horizon on which the mound was constructed remained stable until industrial development. The presence of a stable ground surface (Ab horizon) from Mississippian occupation to the mid-1800s does not support the expectations of the wood-overuse hypothesis.
Ultimately, this research demonstrates that pre-Columbian ecological change does not inherently cause geomorphic change, and narratives of ecocide related to geomorphic change need to be validated with the stratigraphic record.
The majority of studies on international conflict escalation use a variety of measures of hostility including the use of force, reciprocity, and the number of fatalities. The use of different measures, however, leads to different empirical results and creates difficulties when testing existing theories of interstate conflict. Furthermore, hostility measures currently used in the conflict literature are ill suited to the task of identifying consistent predictors of international conflict escalation. This article presents a new dyadic latent measure of interstate hostility, created using a Bayesian item-response theory model and conflict data from the Militarized Interstate Dispute (MID) and Phoenix political event datasets. This model (1) provides a more granular, conceptually precise, and validated measure of hostility, which incorporates the uncertainty inherent in the latent variable; and (2) solves the problem of temporal variation in event data using a varying-intercept structure and human-coded data as a benchmark against which biases in machine-coded data are corrected. In addition, this measurement model allows for the systematic evaluation of how existing measures relate to the construct of hostility. The presented model will therefore enhance the ability of researchers to understand factors affecting conflict dynamics, including escalation and de-escalation processes.
Psychology has traditionally seen itself as the science of universal human cognition, but it has only recently begun seriously grappling with cross-cultural variation. Here we argue that the roots of cross-cultural variation often lie in the past. Therefore, to understand not only how but also why psychology varies, we need to grapple with cross-temporal variation. The traces of past human cognition accessible through historical texts and artifacts can serve as a valuable, and almost completely unutilized, source of psychological data. These data from dead minds open up an untapped and highly diverse subject pool. We review examples of research that may be classified as historical psychology, introduce sources of historical data and methods for analyzing them, explain the critical role of theory, and discuss how psychologists can add historical depth and nuance to their work. Psychology needs to become a historical science if it wants to be a genuinely universal science of human cognition and behavior.
Intrauterine contraceptive devices, sterilizations, and forced family separations: since a sweeping crackdown starting in late 2016 transformed Xinjiang into a draconian police state (China Brief, September 21, 2017), witness accounts of intrusive state interference into reproductive autonomy have become ubiquitous. While state control over reproduction has long been a common part of the birth control regime in the People’s Republic of China (PRC), the situation in Xinjiang has become especially severe following a policy of mass internment initiated in early 2017 (China Brief, May 15, 2018) by officials of the ruling Chinese Communist Party (CCP).
For the first time, the veracity and scale of these anecdotal accounts can be confirmed through a systematic analysis of government documents. The research findings of this report specifically demonstrate the following:
Natural population growth in Xinjiang has declined dramatically; growth rates fell by 84% in the two largest Uyghur prefectures between 2015 and 2018, and declined further in several minority regions in 2019. For 2020, one Uyghur region set an unprecedented near-zero birth rate target: a mere 1.05 per mille, compared to 19.66 per mille in 2018. This was intended to be achieved through “family planning work.”
Government documents bluntly mandate that birth control violations are punishable by extrajudicial internment in “training” camps. This confirms evidence from the leaked “Karakax List” document, wherein such violations were the most common reason for internment (Journal of Political Risk, February 2020).
Documents from 2019 reveal plans for a campaign of mass female sterilization in rural Uyghur regions, targeting 14% and 34% of all married women of childbearing age in two Uyghur counties that year. This project targeted all of southern Xinjiang, and continued in 2020 with increased funding. This campaign likely aims to sterilize rural minority women with three or more children, as well as some with two children—equivalent to at least 20% of all childbearing-age women. Budget figures indicate that this project had sufficient funding for performing hundreds of thousands of tubal ligation sterilization procedures in 2019 and 2020, with at least one region receiving additional central government funding. In 2018, a Uyghur prefecture openly set a goal of leading its rural populations to accept widespread sterilization surgery.
By 2019, Xinjiang planned to subject at least 80% of women of childbearing age in the rural southern four minority prefectures to intrusive birth prevention surgeries (IUDs or sterilizations), with actual shares likely being much higher. In 2018, 80% of all net added IUD placements in China (calculated as placements minus removals) were performed in Xinjiang, despite the fact that the region only makes up 1.8% of the nation’s population.
Shares of women aged 18 to 49 who were either widowed or in menopause have more than doubled since the onset of the internment campaign in one particular Uyghur region. These are potential proxy indicators for unnatural deaths (possibly of interned husbands), and/or of injections given in internment that can cause temporary or permanent loss of menstrual cycles.
Between 2015 and 2018, about 860,000 ethnic Han residents left Xinjiang, while up to 2 million new residents were added to Xinjiang’s Han majority regions. Also, population growth rates in a Uyghur region where Han constitute the majority were nearly 8× higher than in the surrounding rural Uyghur regions (in 2018). These figures raise concerns that Beijing is doubling down on a policy of Han settler colonialism.
[Examination of Roman historian Tacitus’s accounts of tyranny in his Histories, focusing on the dictators Tacitus lived through, particularly the 15-year reign of Domitian.]
The masters of the Roman world surrounded their throne with darkness, concealed their irresistible strength, and humbly professed themselves the accountable ministers of the senate, whose supreme decrees they dictated and obeyed.
…Even Tacitus, as critical as he was of the cravenness of Rome’s senatorial class and of the tyrannical excesses of different emperors, was resigned to the fact that a return to the halcyon days of the republic appeared, by his time, to be impossible. As contemporary scholarship has shown, illiberal governments spawn self-replicating patterns of corruption and networks of patronage that serve only to entrench undemocratic norms and practices. By the time Tacitus was alive, the authoritarian rot had set in too deep, and the memory of past liberties was too vague. As the emperor Galba wearily tells Piso, his designated successor, in Book I of The Histories, Rome’s populace had been irredeemably altered, being now composed of “men who could endure neither complete slavery nor complete freedom.”..Although Tacitus held various responsibilities under several emperors, Domitian’s 15-year rule of terror (81 to 96 C.E.) seems to have etched the deepest psychological scars…certain passages in Agricola provide some moving indications of the author’s trauma and, as we shall see, of his survivor’s guilt. Indeed, the detailed descriptions that we do have of Domitian—most notably those provided by Suetonius and Dio Cassius—paint a bleak portrait of an increasingly unhinged despot whose behavior fuses the flamboyant eccentricities of President Gurbanguly Berdimuhamedov of Turkmenistan with the raw sadism of the Afghan warlord Rachid Dostum. Executing at least 11 senators of consular rank and exiling many more over the course of his reign, Domitian, according to Suetonius, “took a personal insult to any reference, joking or otherwise, to bald men, being extremely sensitive about his appearance”, even publishing a haircare manual in which he whined about his capillary loss. Suetonius, ever one for colorful anecdotes, recounts how, in his spare time, the disturbed ruler would while away the hours in solitude “catching flies—believe it or not—and stabbing them with a needle-sharp pen.”
Accounts of Domitian’s reign are punctuated with episodes of savagery and degradation, with the tyrant feeding a circus attendee to a pack of ravening hounds for supporting the wrong gladiator or ordering that a 90-year-old Jewish man be publicly stripped to establish whether he had been circumcised…those who emerged, staggering, from the 15-year ordeal of Domitian’s rule were “maimed in spirit, dazed and blunted.” Tacitus gives voice to this sentiment when, in Agricola, he portrays the Domitianic era as a dark, energy-leeching vacuum that drained the statesman and his peers of their youth and intellectual vitality:
During the space of fifteen years, a large portion of human life, how great a number have fallen by casual events, and, as was the fate of the most distinguished, by the cruelty of the prince; whilst we few survivors, not of others alone, but, if I may be allowed the expression, of ourselves, find a void of so many years in our lives, which has silently brought us from youth to maturity, from mature age to the very verge of life!
…as the political theorist Roger Boesche observed, one of the great themes that pervades all of Tacitus’ writings is “the idea that under despotism everyone becomes an actor and all of society wraps itself in insincerity, role-playing and pretense.”…Shame, guilt, a lingering sense of powerlessness, and self-loathing: These are all emotions common to individuals living under tyranny…Dark currents of hatred course deep below the surface of all such brutalized societies, and Tacitus provides terrifyingly vivid descriptions of the ugliness of pent-up rage and mob violence in the event of regime collapse.
…Tacitus, however, did not descend to such levels of cynicism. While he stressed the importance of compromise in order to serve the public good, he was at his most powerful when describing instances of remarkable courage emerging from some of the more unlikely places: “an emancipated slave and a woman”, who died under torture and “set an example which shone the brighter at a time when persons freeborn and male, Roman knights and senators, untouched by torture, were betraying each his nearest and dearest”; or Petronius, Nero’s “arbiter of elegance”, a court dandy whom nobody took seriously but who died laughing and, in one last gesture of theatrical defiance, embarrassed the emperor by publishing a list of his patron’s secret sexual habits and partners. Like many regime insiders-turned-dissidents, Petronius knew that the public unveiling of the tyrant’s squalid personal habits would be far more devastating than any fiery moral condemnation. Nevertheless, this author’s personal favorite would have to be the guard colonel Subrius Flavus, who, upon being condemned to death, openly vented the depth of his hatred and disdain to a rattled Nero’s face. Hauled off to a nearby field for his execution, Flavus witheringly commented on the grave that had been dug for him, which he deemed too narrow and shallow. “More bad discipline”, he let out in one final contemptuous snort before bowing his head for the executioner’s blade.
Empirical evidence on contemporary torture is sparse. The archives of the Spanish Inquisition provide a detailed historical source of quantitative and qualitative information about interrogational torture. The Inquisition tortured brutally and systematically, willing to torment all whom it deemed to be withholding evidence. This torture yielded information that was often reliable: witnesses in the torture chamber and witnesses that were not tortured provided corresponding information about collaborators, locations, events, and practices. Nonetheless, inquisitors treated the results of interrogations in the torture chamber with skepticism. This bureaucratized torture stands in stark contrast to the “ticking bomb” philosophy that has motivated US torture policy in the aftermath of 9/11. Evidence from the archives of the Spanish Inquisition suggests torture affords no middle ground: one cannot improvise quick, amateurish, and half-hearted torture sessions, motivated by anger and fear, and hope to extract reliable intelligence.
Have great wars become less violent over time, and is there something we might identify as the long peace? We investigate statistical versions of such questions, by examining the number of battle-deaths in the Correlates of War dataset, with 95 interstate wars from 1816 to 2007. Previous research has found this series of wars to be stationary, with no apparent change over time. We develop a framework to find and assess a change-point in this battle-deaths series. Our change-point methodology takes into consideration the power law distribution of the data, models the full battle-deaths distribution, as opposed to focusing merely on the extreme tail, and evaluates the uncertainty in the estimation. Using this framework, we find evidence that the series has not been as stationary as past research has indicated. Our statistical sightings of better angels indicate that 1950 represents the most likely change-point in the battle-deaths series—the point in time where the battle-deaths distribution might have changed for the better.
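The paper’s machinery can be gestured at with a toy version: simulate battle-death counts from two power-law regimes and recover the break by maximizing the two-segment power-law log-likelihood. This sketch uses a continuous Pareto and the Hill estimator with invented parameters; the paper’s actual model and uncertainty quantification are more sophisticated and not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_powerlaw(alpha, xmin, size, rng):
    # Inverse-CDF sampling from a continuous power law p(x) ∝ x^-alpha, x >= xmin
    u = rng.uniform(size=size)
    return xmin * u ** (-1.0 / (alpha - 1.0))

def hill_alpha(x, xmin):
    # Maximum-likelihood (Hill) estimate of the power-law exponent
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def loglik(x, xmin):
    # Power-law log-likelihood evaluated at the MLE exponent
    a = hill_alpha(x, xmin)
    return len(x) * np.log((a - 1.0) / xmin) - a * np.sum(np.log(x / xmin))

def best_changepoint(series, xmin, min_seg=10):
    # Try every split point; keep the one maximizing summed segment likelihoods
    best_k, best_ll = None, -np.inf
    for k in range(min_seg, len(series) - min_seg):
        ll = loglik(series[:k], xmin) + loglik(series[k:], xmin)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# Toy series: a heavier tail "before" the break, a thinner tail "after"
xmin = 1000.0
series = np.concatenate([draw_powerlaw(1.7, xmin, 60, rng),
                         draw_powerlaw(2.5, xmin, 35, rng)])
k = best_changepoint(series, xmin)
```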
Popular culture associates the lives of Roman emperors with luxury, cruelty, and debauchery, sometimes rightfully so. One attribute missing from this list, surprisingly, is how dangerous this mighty office was for its holder. Of the 69 rulers of the unified Roman Empire, from Augustus (d. 14 CE) to Theodosius I (d. 395 CE), 62% suffered violent death. This has been known for a while, if not quantitatively at least qualitatively. What is not known, however, and has never been examined, is the time-to-violent-death of Roman emperors.
This work applies the statistical tools of survival data analysis to an unlikely population, Roman emperors, and examines a particular event in their rule, not unlike the focus of reliability engineering—but instead of time-to-failure, their time-to-violent-death. We investigate the temporal signature of this seemingly haphazard stochastic process that is the violent death of a Roman emperor, and we examine whether there is some structure underlying the randomness in this process.
Nonparametric and parametric results show that: (1) emperors faced a statistically-significantly high risk of violent death in the first year of their rule, which is reminiscent of infant mortality in reliability engineering; (2) their risk of violent death further increased after 12 years, which is reminiscent of wear-out period in reliability engineering; (3) their failure rate displayed a bathtub-like curve, similar to that of a host of mechanical engineering items and electronic components. Results also showed that the stochastic process underlying the violent deaths of emperors is remarkably well captured by a (mixture) Weibull distribution.
We discuss the interpretation and possible reasons for this uncanny result, and we propose a number of fruitful avenues for future work to help better understand the deeper etiology of the spectacle of regicide of Roman emperors.
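The bathtub-shaped failure rate the authors report can be reproduced qualitatively with a two-component Weibull mixture: one shape parameter below 1 (early "infant mortality") and one well above 1 (late "wear-out"). The parameters below are illustrative, not the paper's fitted values:

```python
import math

def weibull_mixture_hazard(t, w=0.3, k1=0.5, l1=2.0, k2=4.0, l2=15.0):
    """Hazard h(t) = f(t)/S(t) of a 2-component Weibull mixture: a
    shape < 1 component (early deaths) plus a shape > 1 component
    (late 'wear-out'). Parameters are illustrative, not fitted values."""
    def pdf(k, lam):
        return (k / lam) * (t / lam) ** (k - 1.0) * math.exp(-((t / lam) ** k))
    def sf(k, lam):
        return math.exp(-((t / lam) ** k))
    f = w * pdf(k1, l1) + (1.0 - w) * pdf(k2, l2)
    s = w * sf(k1, l1) + (1.0 - w) * sf(k2, l2)
    return f / s
```

Evaluating the hazard early, in mid-reign, and late shows it falling and then rising again, which is the bathtub shape the paper finds in the emperors' time-to-violent-death.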
Napoleon Bonaparte lived one of the most accomplished, divisive, and consequential lives of any person in history, reshaping the way we think about war, politics, revolution, culture, law, religion, and so much more in a mere 52 years. Any one of those elements could be (and has been) isolated and made into a massive tome on its own.
So I just set out to describe and analyze all of the things I found most interesting about the man. This includes a summary of his entire life, his personality quirks, unusual events, driving beliefs, notable skills, and more. If there is an over-arching theme to be found, it’s my amazement at how an extraordinarily competent and risk-tolerant individual lived his life up to the greatest heights only to come tumbling back down to earth.
This paper aims at highlighting a methodological flaw in current biblical archaeology, which became apparent as a result of recent research in the Aravah’s Iron Age copper production centers. In essence, this flaw, which cuts across all schools of biblical archaeology, is the prevailing, overly simplistic approach applied to the identification and interpretation of nomadic elements in biblical-era societies. These elements have typically been described as representing only one form of social organization, which is simple and almost negligible in historical reconstructions. However, the unique case of the Aravah demonstrates that the role of nomads in shaping the history of the southern Levant has been underestimated and downplayed in the research of the region, and that the total reliance on stone-built archaeological features in the identification of social complexity in the vast majority of recent studies has resulted in skewed historical reconstructions. Recognizing this “architectural bias” and understanding its sources have important implications for core issues in biblical archaeology today, as both “minimalists” and “maximalists” have been using stone-built architectural remains as the key to solving debated issues related to the geneses of Ancient Israel and neighboring polities (eg. “high” vs. “low” Iron Age chronologies), in which—according to both biblical accounts and external sources—nomadic elements played a major role.
States often use demonstrations to improve perceptions of their military power. This topic has received limited attention in the literature, which typically assumes that states disguise or downplay their capabilities, advertise them only to enhance their prestige, or use demonstrations to communicate interests and resolve. Because military strength can be difficult to gauge, however, successful deterrence and assurance can require demonstrations to ensure that capabilities are viewed as credible. This article explains the logic of capability demonstrations, identifies the conditions under which they have the most utility, introduces a typology of demonstration mechanisms, and describes how emerging technology influences demonstrations.
[Keywords: signalling, demonstrations, military power, emerging technology]
Can events be accurately described as historic at the time they are happening? Claims of this sort are in effect predictions about the evaluations of future historians; that is, that they will regard the events in question as important. Here we provide empirical evidence in support of earlier philosophical arguments1 that such claims are likely to be spurious and that, conversely, many events that will one day be viewed as historic attract little attention at the time. We introduce a conceptual and methodological framework for applying machine learning prediction models to large corpora of digitized historical archives. We find that although such models can correctly identify some historically important documents, they tend to over-predict historical importance while also failing to identify many documents that will later be deemed important, where both types of error increase monotonically with the number of documents under consideration. On balance, we conclude that historical importance is extremely difficult to predict, consistent with other recent work on intrinsic limits to predictability in complex social systems2,3. However, the results also indicate the feasibility of developing ‘artificial archivists’ to identify potentially historic documents in very large digital corpora.
The Cultural Revolution was one of the greatest disasters in human history, the result of a self-reinforcing cycle of ideology failing to match reality and unsolved social problems, and the deranged reaction of zealots triggering defection and civil warfare.
Dikötter’s history of the Cultural Revolution (The Cultural Revolution: A People’s History, 1962–1976, Frank Dikötter2016; ★★★★) offers a broad overview of the multiple failures and follies of Maoism, which culminated in some of the most destructive and disastrous events in human history: the Cultural Revolution, the Great Leap Forward/Great Famine, and the Third Front.
The Cultural Revolution was not prompted by any extraordinary famine, or invasion, or genuine threat of invasion, or civil war, or disaster of any kind. How then could it have happened? The Cultural Revolution was sponsored by Mao as a way to purge the middle and upper ranks of the Communist Party of doubters, who might do to him what the Soviets had just done to Stalin: tear down his cult by revealing his monstrous crimes to the world. But Mao did not grasp the forces he was unleashing. Maoism had benefited from taking credit for post-WWII recovery and the defeat of Japan, but the more its policies were implemented and it tightened its grip, the greater the gap between its utopian promises and the grim impoverished Chinese reality became. Because its theories were radically and systematically wrong, any honest attempt to implement them was doomed to fail, and anyone pragmatic would necessarily betray the system. Old systems and ‘inequities’ reasserted themselves, to the frustration of true believers.
The only ideologically-permissible explanations were excuses like saboteurs and spies and corrupt officials. Usually kept in check, when given Mao’s imprimatur and active egging on, mass social resentment and ideological frustration boiled over, leading to a frenzy of virtue-signaling, denunciations, preference falsification spirals, murders, cannibalism, and eventually outright civil war and pandemic. Finally, Mao decided enough purging had happened and his position was secure, and brought it to an end. As strange and awful as it was, the Cultural Revolution offers food for thought on how politics can go viciously wrong, and dangerous aspects of human psychology.
Still, he was as awed as anyone when staff pried open the crates to reveal no fewer than 12 graceful, snarling specimens of Acinonyx jubatus—more commonly known as cheetahs. Each was about 5 feet long, not including the tail, and 2.5 feet tall at the shoulder…The man responsible for the whole affair, playboy adventurer Kenneth Cecil Gandar-Dower, arrived several hours later with the cheetahs’ new trainer, Hooku, in tow…To Sumpter’s bafflement, the legendary animal wrangler—who sometimes went by the Westernized name Raymond Hook—claimed that, once captured, cheetahs could be trained to hunt for sport, or tied up with nothing more than a shoelace and kept as pets.
Gandar-Dower, on the other hand, saw more than utility and companionship in the cheetahs’ spots. He saw opportunity. Like the bongo he’d procured for the London Zoo, exotic animals appreciated in value the farther they traveled from home, and trainable exotic animals even more so. The cheetahs were so receptive to commands, Gandar-Dower declared, that Maharajas in India held formal cheetah races for entertainment—and now, he intended to bring this “most modern of modern sports” to England.
…Many people at the time still believed greyhounds to be the fastest animal in the world, so he also invited a handful of reporters to measure their speed and generate positive publicity. The journalists confirmed for their readers an acceleration from a standstill to 50 miles per hour in just 2 seconds, as well as the generally docile nature of the cats. “Even a full-grown cheetah, properly trained, can be relied upon not to turn savage suddenly”, Gandar-Dower was quoted as saying. “A cheetah trained from a cub becomes as tame and affectionate as a dog.”…If the cheetahs didn’t want to run, they simply didn’t—and even when they did, each tired out after only a few hundred yards. In her first race at Romford, Helen covered 265 yards in 15.86 seconds, easily surpassing the top recorded greyhound speed of 16.01 seconds. But when the track was extended to 355 yards, another cheetah named Luis failed to break the existing greyhound record. The sprints were unquestionably impressive, but their brevity was what had allowed the cheetahs to be captured and brought to England in the first place.
…It’s perhaps worth noting that the British journalists celebrating Gandar-Dower’s audacious enterprise were all men, while the Australians who acknowledged Henderson’s hands-on care were both women. But none disputed the magnificence of the cheetahs, who continued to perform regularly at Romford and make guest appearances at other stadiums throughout the winter of 1937. In some ways, however, they were too good. A close match provides more drama than a blowout, and watching a cheetah beat a greyhound by 40 yards or more was, perversely, a bit of a letdown. Even giving the greyhounds a head start couldn’t fully erase the nagging sense that the cheetahs were rubbing their opponents’ snouts in it. So in April of 1938, Henderson and Stewart came up with a new opponent for the cheetahs to race: motorcycles.
The stunt they envisioned would be a relatively safe one, since speedway motorcycles in the 1930s could reliably travel 90 miles per hour—well above the cheetahs’ maximum of 70. But not everyone found the numbers so convincing, and there was always the chance that a stalled motor could bring its deliciously meaty operator to a halt mid-race. Legendary speed champion “Bluey” Wilkinson (a nickname traditionally given to redheads in Australia) was one of several who received a telegram asking “Will you race a cheetah for £5?”, to which he quipped in return, “No, I’ll let him have it.” Other rejections quickly followed. These men were no strangers to peril—Wilkinson became world champion that year despite wearing a full-shoulder plaster cast over his recently snapped collarbone—but cheetahs were apparently a bridge too far. No professional racers would agree to participate.
…It’s possible, however, that a few cheetahs dodged fate: only five of them appeared in their last wartime race at White City Speedway in May of 1940, and the rest may have been sold to wealthy individuals. American actress Phyllis Gordon famously acquired a pet cheetah in London in late 1939, as did a foreign noblewoman named Countess Elvira de Flogny, and the timing makes it plausible that one or both were former racing cheetahs.
Under the Khmer Rouge, making love was an explicitly political act. Marriage was a political decision. Refusing to sleep with your husband was an act of political rebellion. The first claim of the totalitarian is that everything is political.
In my view, a totalitarian system must meet two minimum requirements:
In this system all human action is considered political action.
The system is ruled by a Party which claims commanding authority to direct all political action—and thus all human action—for its cause.
The great tragedies of 20th century history occurred as the totalitarian leaders attempted to translate their claim of authority over all human action into actual control over the same.
This view of totalitarian society crystallized in my mind some years ago, when I first read Liang Heng’s memoir of his youthful escapades as a Red Guard in the Cultural Revolution. A professor had asked me to review it. In that brief review I noted:
In Mao’s China the personal was always political. And not just the personal—everything anyone did was political. Maoism was a political ideology that asked its members to give everything they were, had, and did to the socialist cause. This intellectual framework implies that everything one does should be layered with political meaning. A child’s prank, a lover’s kiss, and a friend’s embrace were all political acts. The clothes one wore, the way one walked, the letters one wrote, and the words one spoke all had political valence. It was with this in mind Liang Shan warned: “Never give your opinion on anything, even if you’re asked directly” (76).
Such caution is inevitable in a world where there is no distinction between the personal and the political. Politics is the division of power, politicking the contest for it. In a system where the most intimate and private actions have political meaning, these actions will be used by those who seek power. These naked contests for control leave no room for good and evil—good becomes what those with power declare it. “One day you are red, one day you are black, and one day you are red again” (76), Liang Shan instructed, and he was correct. This struggle stretched from factions warring within the walls of Zhongnanhai to the village black class child currying favor.
The problem is not competition: that is an ingrained aspect of human life. The special tragedy of the Maoist system was that it spared nothing from the pursuit of power. There was no aspect of life that could be cordoned off as a refuge from the storm.2
One of the extraordinary things about reading Mao’s speeches from this period is the fluidity of who was considered an ally and who was considered an enemy. Mao framed his campaigns as a struggle between “the people” and “the enemy”, but who fit into each group differed drastically based on the Party’s perceptions of who was a credible threat to The Cause and who was not. As Mao put it:
To understand these two different types of contradictions correctly, we must first be clear on what is meant by “the people” and what is meant by “the enemy”. The concept of “the people” varies in content in different countries and in different periods of history in a given country. Take our own country for example. During the War of Resistance Against Japan, all those classes, strata and social groups opposing Japanese aggression came within the category of the people, while the Japanese imperialists, their Chinese collaborators and the pro-Japanese elements were all enemies of the people. During the War of Liberation, the U.S. imperialists and their running dogs—the bureaucrat-capitalists, the landlords and the Kuomintang reactionaries who represented these two classes—were the enemies of the people, while the other classes, strata and social groups, which opposed them, all came within the category of the people. At the present stage, the period of building socialism, the classes, strata and social groups which favour, support and work for the cause of socialist construction all come within the category of the people, while the social forces and groups which resist the socialist revolution and are hostile to or sabotage socialist construction are all enemies of the people.5
Thus a particular group could at one point be an honored part of “the people”, at another point an ally in a “united front”, and later a despised “enemy” of the regime. How the regime treated you depended very much on how threatening Party leaders believed you might be to the regime and its cause.
Today The Cause has flipped—officially—from socialist revolution to national rejuvenation. The Party works under the same schema but has shifted the “people” that Mao identified with specific economic classes to the nation at large.6 Mass mobilization campaigns have been retired. But struggle and united front campaigns have not. Xi’s great corruption purge, the Uighur labor camps of Xinjiang, the attack on Christians across China—these all follow the same methods for crushing and coercing “enemies” developed by Mao and the Party in the early ’40s. “One Country, Two Systems”, interference campaigns in the Chinese diaspora, the guided, gilded tours given to Musk and his ilk—these all follow the same methods for corrupting and controlling “allies” developed by Mao and the Party that same decade. The tools have never changed. The only thing that has changed is the Party’s assessment of who is an “enemy” and who is part of the “people.”
There is one threat, however, that the Communist legacy has poorly prepared the Party to face. Stalin and Mao conceived of their projects in cultural terms—they were not just attempting to stamp out dangerous people, but dangerous ideas. To that end both Stalin and Mao cut their countries off from the world they had no control over. If your end goal is socialist revolution this might be tenable. But if your end goal is national rejuvenation—that is, a future where China sits at the top of a global order, more wealthy and powerful than any other—then engagement with the outside world must be had. It means foreigners coming to China in great numbers, and Chinese going abroad in numbers no smaller. It means a much more accurate conception of the way the rest of the world works among the minds of the Chinese people. It means contemplating paths for China that do not involve being ruled by a dictatorial party-state.
This tension lies at the root of the Party’s problems with the West. Countries like America threaten the Party with their mere existence. Consider what these countries do: they allow dissidents from authoritarian powers shelter. Their societies spawn (even when official government policy is neutral on the question) movement after movement devoted to spreading Western ideals and ideas to other lands and peoples. They are living proof that a country does not need a one-party state to become powerful and wealthy. These things pose a threat to the Communist Party of China. The Party itself is the first to admit it.7
Regions assigned more quotas acquired more Western knowledge after abolition.
The examination system led to substantial misallocation of talent.
The skill levels of individuals in the modern sector increased following abolition.
This study uses 1899–1908 prefecture-level panel data to assess how the likelihood of passing the civil service examination affected modernization before and after the examination system’s abolition.
Because higher quotas were assigned to prefectures with an agricultural tax of over 150,000 piculs, we use a regression discontinuity design to generate an instrument that addresses potential endogeneity.
We find that following abolition, prefectures with higher quotas of successful candidates tended to establish more modern firms and send more students for overseas study in Japan. A subsequent analysis using an individual dataset further shows that the skill level of these overseas students increased after abolition, especially in regions with higher per capita quotas.
This finding implies that the examination system led to substantial misallocation of talents.
[Keywords: Imperial civil examination, incentive, modern firms, overseas study]
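The quota discontinuity at the 150,000-picul tax threshold lends itself to a sharp regression-discontinuity estimate of the jump at the cutoff. A minimal local-linear sketch (the function name, bandwidth, and synthetic data below are assumptions for illustration, not the paper's specification):

```python
import numpy as np

def sharp_rdd(running, outcome, cutoff, bandwidth):
    """Local-linear sharp RDD: fit a line to each side of the cutoff
    within the bandwidth; return the estimated jump at the cutoff."""
    left = (running >= cutoff - bandwidth) & (running < cutoff)
    right = (running >= cutoff) & (running <= cutoff + bandwidth)
    fit_l = np.polyfit(running[left] - cutoff, outcome[left], 1)
    fit_r = np.polyfit(running[right] - cutoff, outcome[right], 1)
    return np.polyval(fit_r, 0.0) - np.polyval(fit_l, 0.0)
```

Applied to simulated prefectures whose quotas jump by a known amount at 150,000 piculs, the estimator recovers that jump; the paper then uses the discontinuity-predicted quota as an instrument rather than as the outcome of interest.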
…A major empirical challenge in doing so, however, is the abolition’s universality, which engendered no regional variations in policy implementation. Hence, to better understand the abolition’s modernizing effect, we use a simple conceptual framework that incorporates 2 choices open to Chinese elites: learn from the West and pursue modernization activities (ie. study modern science and technology) or invest in preparing for the civil examination (ie. study Confucian classics). In this model, elites with a greater chance of passing the examination are less likely to pursue (Western) modernization activities pre-abolition but more likely to do so post-abolition. Accordingly, the regions with a higher likelihood of passing the examination should be those with a larger increase in post-abolition modernization activities, allowing us to use a difference in differences (DID) method to identify the abolition’s causal impact.
…Evaluated at the sample mean, each one-standard-deviation increase in the logged quotas per capita (0.70) led to another 0.23 newly established modern firms and another 0.66 students traveling to Japan for overseas study per year. These empirical results are robust to controlling for geographic factors, population, level of urbanization, and Western penetration, as well as to the use of different model specifications. By estimating the yearly correlation between the logged quotas per capita and the density of modernization activities from 1899 to 1908, we also show that the pre-abolition correlation remains stable until it suddenly increases following the abolition decision.
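In its simplest two-period form, the difference-in-differences logic reduces to comparing the change in high-quota prefectures against the change in low-quota prefectures, netting out the common time trend. A minimal sketch (the variable names and numbers are hypothetical):

```python
from statistics import mean

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-period difference-in-differences: the change in the treated
    group (e.g. high-quota prefectures, before vs after abolition) minus
    the change in the control group (low-quota prefectures)."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
```

If high-quota prefectures go from founding 2 modern firms per year to 5 while low-quota prefectures go from 3 to 4, the DID estimate attributes 2 of the 3-firm increase to abolition, with the remaining 1 being the shared trend.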
A review of Russell 1986’s Like Engend’ring Like: Heredity and Animal Breeding in Early Modern England, describing development of selective breeding and discussing models of the psychology and sociology of innovation.
Like anything else, the idea of “breeding” had to be invented. That traits are genetically influenced broadly equally by both parents, subject to considerable randomness, and can be selected for over many generations to create large average population-wide increases, had to be discovered the hard way, with many wildly wrong theories discarded along the way. Animal breeding is a case in point, as reviewed by an intellectual history of animal breeding, Like Engend’ring Like, which covers mistaken theories of conception & inheritance from the ancient Greeks to perhaps the first truly successful modern animal breeder, Robert Bakewell (1725–1795).
Why did it take thousands of years to begin developing useful animal breeding techniques, a topic of interest to almost all farmers everywhere, a field which has no prerequisites such as advanced mathematics or special chemicals or mechanical tools, and seemingly requires only close observation and patience? This question can be asked of many innovations early in the Industrial Revolution, such as the flying shuttle.
Some veins in economic history and sociology suggest that at least one ingredient is an improving attitude: a detached outsider’s attitude which asks whether there is any way to optimize something, in defiance of ‘the wisdom of tradition’, and looks for improvements. A relevant English example is the English Royal Society of Arts, founded not too distant in time from Bakewell, specifically to spur competition and imitation and new inventions. Psychological barriers may be as important as anything like per capita wealth or peace in innovation.
The long duration of rulers in the medieval period contributed to the rise of Europe. But what explained premodern ruler duration? While the extant answers focus on formal, political institutions, I examine the role of marriage and inheritance norms in affecting ruler survival. Using a novel dataset of over 1,000 monarchs in China and Europe from 1000 to 1800 CE, I obtain two findings that have been overlooked by the existing literature. First, contrary to the view that European rulers had exceptional stability, I find that Chinese monarchs stayed in power longer than their European counterparts. Second, I find a strong effect of family practices on ruler survival. More liberal marriage and inheritance norms provided Chinese emperors with sustained availability of male heirs, which reduced palace coups. But the Church’s control of royal marriage and inheritance in Europe decreased the number of male heirs, which increased the probability of a deposition.
The 1850s through early 60s was a transformative period for nascent studies of the remote human past in Britain, across many disciplines. Naturalists and scholars with Egyptological knowledge fashioned themselves as authorities to contend with this divisive topic. In a characteristic case of long-distance fieldwork, British geologist Leonard Horner employed Turkish-born, English-educated, Cairo-based engineer Joseph Hekekyan to measure Nile silt deposits around pharaonic monuments in Egypt to address the chronological gap between the earliest historical and latest geological time. Their conclusion in 1858 that humans had existed in Egypt for exactly 13,371 years was the earliest attempt to apply geological stratigraphy to absolute human dates. The geochronology was particularly threatening to biblical orthodoxy, and the work raised private and public concerns about chronological expertise and methodology, scriptural and scientific authority, and the credibility of Egyptian informants. This essay traces these geo-archaeological investigations, including the movement of paper records, Hekekyan’s role as a go-between, and the publication’s reception in Britain. The diverse reactions to the Egyptian research reveal competing ways of knowing the prehistoric past and highlight mid-Victorian attempts to reshape the porous boundaries between scholarly studies of human antiquity.
[Keywords: Ancient Egypt, geology, archaeology, ethnology, fieldwork, prehistory, human antiquity, biblical chronology, Victorian]
Among both elites and the mass public, conservatives and liberals differ in their foreign policy preferences. Relatively little effort, however, has been put toward showing that, beyond the use of force, these differences affect the day-to-day outputs and processes of foreign policy.
This article uses United Nations voting data from 1946 to 2008 of the 5 major Anglophone democracies of the United States, the United Kingdom, Canada, Australia, and New Zealand to show that each of these countries votes more in line with the rest of the world when liberals are in power. This can be explained by ideological differences between conservatives and liberals and the ways in which the socializing power of international institutions interacts with preexisting ideologies.
These results should encourage more research into the ways in which ideological differences among the masses and elites translate into differences in foreign policy goals and practices across governments.
Review of Roland & Shiman 2002 history of a decade of ARPA/DARPA involvement in AI and supercomputing, and the ARPA philosophy of technological acceleration; it yielded mixed results, perhaps due to ultimately insurmountable bottlenecks—the time was not yet ripe for many goals.
Review of DARPA history book, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993, Roland & Shiman 2002, which reviews a large-scale DARPA effort to jumpstart real-world uses of AI in the 1980s by a multi-pronged research effort into more efficient computer chip R&D, supercomputing, robotics/self-driving cars, & expert system software. Roland & Shiman 2002 particularly focus on the various ‘philosophies’ of technological forecasting & development, which guided DARPA’s strategy in different periods, ultimately endorsing a weak technological determinism: the bottlenecks are too large for a small (in comparison to the global economy & global R&D) organization to overcome, so the best a DARPA can hope for is a largely agnostic & reactive strategy in which granters ‘surf’ technological changes, rapidly exploiting new technology while investing their limited funds into targeted research patching up any gaps or lags that accidentally open up and block broader applications. (For broader discussion of progress, see “Lessons from the Media Lab” & Bakewell.)
A list of unheralded improvements to ordinary quality-of-life since the 1990s going beyond computers.
It can be hard to see the gradual improvement of most goods over time, but I think one way to get a handle on them is to look at their downstream effects: all the small ordinary everyday things which nevertheless depend on obscure innovations and improving cost-performance ratios and gradually dropping costs and new material and… etc. All of these gradually drop the cost, drop the price, improve the quality at the same price, remove irritations or limits not explicitly noticed, or so on.
It all adds up.
So here is a personal list of small ways in which my ordinary everyday daily life has been getting better since the late ’80s/early ’90s (as far back as I can clearly remember these things—I am sure the list of someone growing up in the 1940s would include many hassles I’ve never known at all).
A classic pattern in technology economics, identified by Joel Spolsky, is layers of the stack attempting to become monopolies while turning other layers into perfectly-competitive markets which are commoditized, in order to harvest most of the consumer surplus; discussion and examples.
Joel Spolsky in 2002 identified a major pattern in technology business & economics: the pattern of “commoditizing your complement”, an alternative to vertical integration, where companies seek to secure a chokepoint or quasi-monopoly in products composed of many necessary & sufficient layers by dominating one layer while fostering so much competition in another layer above or below its layer that no competing monopolist can emerge, prices are driven down to marginal costs elsewhere in the stack, total price drops & increases demand, and the majority of the consumer surplus of the final product can be diverted to the quasi-monopolist. No matter how valuable the original may be and how much one could charge for it, it can be more valuable to make it free if it increases profits elsewhere. A classic example is the commodification of PC hardware by the Microsoft OS monopoly, to the detriment of IBM & benefit of MS.
This pattern explains many otherwise odd or apparently self-sabotaging ventures by large tech companies into apparently irrelevant fields, such as the high rate of releasing open-source contributions by many Internet companies or the intrusion of advertising companies into smartphone manufacturing & web browser development & statistical software & fiber-optic networks & municipal WiFi & radio spectrum auctions & DNS (Google): they are pre-emptive attempts to commodify another company elsewhere in the stack, or defenses against it being done to them.
The story is told by a prisoner of war from a totalitarian society based on Maoist China, which has gone past Orwell’s Newspeak to speak only in quotations from propaganda texts. The prisoner is nevertheless able to flexibly order & reuse quotes to tell a story about the struggle of a good man oppressed by unjust officials, criticizing the government and his society’s failure to uphold its ideals.
This story demonstrates the hope that control of thought by control of language is necessarily weak, because a new language can be constructed out of the old one to communicate forbidden thoughts.
A geochemical approach using Fe:Co:Ni analyses permits differentiation of terrestrial from extraterrestrial irons.
Meteoritic irons, Bronze Age iron artifacts, ancient terrestrial irons and lateritic ores enable validation of this approach.
Modern irons and iron ores are shown to exhibit a different relationship in a Fe:Co:Ni array.
Irons from the Bronze Age are meteoritic, invalidating speculations about precocious smelting during the Bronze Age.
Bronze Age iron artifacts could be derived from either meteoritic (extraterrestrial) or smelted (terrestrial) iron. This unresolved question is the subject of a controversy: are some, all or none made of smelted iron?
In the present paper we propose a geochemical approach, which permits us to differentiate terrestrial from extraterrestrial irons. Instead of evaluating the Ni abundance alone (or the Ni to Fe ratio) we consider the relationship between Fe, Co and Ni abundances and their ratios. The study of meteoritic irons, Bronze Age iron artifacts and ancient terrestrial irons permits us to validate this chemical approach. The major interest is that non-invasive p-XRF analyses provide reliable Fe:Co:Ni abundances, without the need to remove a sample; they can be performed in situ, in the museums where the artifacts are preserved.
The few iron objects from the Bronze Age sensu stricto that could be analyzed are definitely made of meteoritic iron, suggesting that speculations about precocious smelting during the Bronze Age should be revised. In an Fe:Co:Ni array the trend exhibited by meteoritic irons departs unambiguously from that of modern irons and iron ores.
The trend of Ni/Fe vs Ni/Co at different analysis points of a single object, corroded to variable extents, provides a robust criterion for identifying the presence of meteoritic iron. It opens the possibility of tracking when and where the first smelting operations happened, the threshold of a new era. It emphasizes the importance of analytical methods for properly studying the evolution of the use of metals and metal working technologies in our past cultures.
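As an illustration only (not the authors’ actual procedure), the ratio criterion above can be sketched in a few lines: compute Ni/Fe and Ni/Co at several analysis points of one object, and ask whether Ni/Co stays roughly constant while Ni/Fe drifts with corrosion (corrosion preferentially leaches Fe, so Ni/Fe rises at corroded spots while Ni/Co remains near the metal’s intrinsic value). All abundances and the stability threshold below are hypothetical.

```python
# A minimal sketch of the Ni/Fe vs Ni/Co ratio check; numbers are made up.

def ratio_trend(points):
    """points: list of (Fe, Co, Ni) abundances (e.g. wt%) per analysis spot.
    Returns the (Ni/Fe, Ni/Co) pairs and the spread of Ni/Co across spots."""
    ratios = [(ni / fe, ni / co) for fe, co, ni in points]
    ni_co = [r[1] for r in ratios]
    spread = max(ni_co) - min(ni_co)
    return ratios, spread

# Hypothetical spots on one corroded object: Fe loss drives Ni/Fe upward,
# while Ni/Co remains comparatively stable, as expected for meteoritic iron.
spots = [(90.0, 0.45, 8.5), (85.0, 0.43, 8.2), (70.0, 0.40, 7.6)]
ratios, spread = ratio_trend(spots)
for ni_fe, ni_co in ratios:
    print(f"Ni/Fe = {ni_fe:.3f}, Ni/Co = {ni_co:.1f}")
```

In this toy example the Ni/Fe values climb across the increasingly corroded spots while Ni/Co barely moves, which is the qualitative signature the paper uses to flag meteoritic metal; real classification would compare the measured trend against reference arrays of meteoritic irons, ores, and smelted irons.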
[Keywords: iron, Bronze Age, Iron Age, meteorite, iron ore]
He and his wife live in an apartment not far from mine that was originally occupied by his grandfather, who was the Soviet Union’s chief literary censor under Stalin. The most striking thing about the building was, and is, its history. In the nineteen-thirties, during Stalin’s purges, the House of Government earned the ghoulish reputation of having the highest per-capita number of arrests and executions of any apartment building in Moscow. No other address in the city offers such a compelling portal into the world of Soviet-era bureaucratic privilege, and the horror and murder to which this privilege often led…“Why does this house have such a heavy, difficult aura?” he said. “This is why: on the one hand, its residents lived like a new class of nobility, and on the other they knew that at any second they could get their guts ripped out.”
…This is the opening argument of a magisterial new book by Yuri Slezkine, a Soviet-born historian who immigrated to the United States in 1983, and has been a professor at the University of California, Berkeley, for many years. His book, The House of Government, is a 1200-page epic that recounts the multigenerational story of the famed building and its inhabitants—and, at least as interesting, the rise and fall of Bolshevist faith. In Slezkine’s telling, the Bolsheviks were essentially a millenarian cult, a small tribe radically opposed to a corrupt world. With Lenin’s urging, they sought to bring about the promised revolution, or revelation, which would give rise to a more noble and just era. Of course, that didn’t happen. Slezkine’s book is a tale of “failed prophecy”, and the building itself—my home for the past several years—is “a place where revolutionaries came home and the revolution went to die.”…The Soviet Union had experienced two revolutions, Lenin’s and Stalin’s, and yet, in the lofty imagery of Slezkine, the “world does not end, the blue bird does not return, love does not reveal itself in all of its profound tenderness and charity, and death and mourning and crying and pain do not disappear.” What to do then? The answer was human sacrifice, “one of history’s oldest locomotives”, Slezkine writes. The “more intense the expectation, the more implacable the enemies; the more implacable the enemies, the greater the need for internal cohesion; the greater the need for internal cohesion, the more urgent the search for scapegoats.” Soon, in Stalin’s Soviet Union, the purges began.
…N.K.V.D. agents would sometimes use the garbage chutes that ran like large tubes through many apartments, popping out inside a suspect’s home without having to knock on the door. After a perfunctory trial, which could last all of three to five minutes, prisoners were taken to the left or to the right: imprisonment or execution. “Most House of Government leaseholders were taken to the right”, Slezkine writes…eight hundred residents of the House of Government were arrested or evicted during the purges, 30% of the building’s population. Three hundred and forty-four were shot…Before long, the arrests spread from the tenants to their nannies, guards, laundresses, and stairwell cleaners. The commandant of the house was arrested as an enemy of the people, and so was the head of the Communist Party’s housekeeping department…“He felt a premonition”, she said. “He was always waiting, never sleeping at night.” One evening, Malyshev heard footsteps coming up the corridor—and dropped dead of a heart attack. In a way, his death saved the family: there was no arrest, and thus no reason to kick his relatives out of the apartment.
…One of Volin’s brothers was…called back, arrested, and shot. One of Volin’s sisters was married to an N.K.V.D. officer, and they lived in the House of Government, in a nearby apartment. When the husband’s colleagues came to arrest him, he jumped out of the apartment window to his death. Volin, I learned, kept a suitcase packed with warm clothes behind the couch, ready in case of arrest and sentence to the Gulag…They gave their daughter, Tolya’s mother, a peculiar set of instructions. Every day after school, she was to take the elevator to the ninth floor—not the eighth, where the family lived—and look down the stairwell. If she saw an N.K.V.D. agent outside the apartment, she was supposed to get back on the elevator, go downstairs, and run to a friend’s house.
One cold Friday in 1660, Samuel Pepys encountered two unpleasant surprises. “At home found all well”, he wrote in his diary, “but the monkey loose, which did anger me, and so I did strike her.” Later that night, a candlemaker named Will Joyce (the good-for-nothing husband of one of Pepys’s cousins) stumbled in on Pepys and his aunt while “drunk, and in a talking vapouring humour of his state, and I know not what, which did vex me cruelly.” Presumably, Pepys didn’t resort to blows this time around.
The two objects of Pepys’s scorn that day, his disobedient pet monkey and his drunken cousin-in-law, were not as distant as one might think. Monkeys stood in for intoxicated humans on a surprisingly frequent basis in 17th-century culture. In early modern paintings, tippling primates can frequently be seen in human clothing, smoking tobacco, playing cards, rolling dice, and just plain getting wasted.
…So what is going on with these images showing drunken and drug-selling monkeys? I think that what we’re missing when we simply see these as a form of social satire is that these are also paintings about addiction. Desire is a dominant theme in these works: monkeys are shown jealously squabbling over piles of tobacco, or even, in the example below, hoarding tulip flowers during the height of the Dutch tulipmania (they appear to be using the profits to get drunk, in the upper left)…But there’s an alternative narrative running through these paintings as well. It epitomizes the ambivalence that has long surrounded intoxicating substances, in many cultures and in many times: These monkeys seem to be having fun.
The Plataeans and the Mytilenians both heard a case arguing for their death, as well as one arguing for their continued survival. In the Mytilenian case, both the defendant and the prosecution were represented by Athenians. In the case of Plataea, the Plataeans were forced to speak in their own defense, with the Thebans arguing for their death. The parallel is clear. It is to the arguments we turn to find the contrast between the two hegemonic powers.
…What is this but to make greater enemies than you have already, and to force others to become so who would otherwise have never thought of it?
The Athenians were once a people of honor. “For glory then and honor now” was the rallying cry Pericles raised to lead his people to war (2.64.6). The Athenians began this entire drama chasing it. No longer. Athenian honor died long before the war’s close. Athenian honor could not survive the plague. Then the beastly truth was revealed: honor meant nothing but scarred skin and blistered visage. Nobility brought no recompense but rotting flesh. Eat now, drink now, be merry now, for tomorrow men will die! And die, and die, and die. Justice, integrity, honor—mere words. Where could those words be found? Buried deep in burning heaps of flesh! Abandoned in lonely, forgotten corners where none would see them croak away! Beneath blood, phlegm, pustule, and vomit! What has honor to do with Athens? Nothing. What is more, they knew it…Thucydides relates the speeches of two men in the debate over Mytilene, one Cleon, son of Cleanetus, the ‘most violent man in Athens.’ The other Diodotus, son of Eucrates, a more measured sort who does not appear elsewhere in this history. Cleon argues for the Mytilenians’ extinction. Diodotus, for their salvation. They disagreed on almost every point. What sticks out, however, is what they did agree on. Both wanted everyone to know that their arguments had nothing whatsoever to do with justice, honor, or mercy.
…However, if, right or wrong, you determine to rule, you must carry out your principle and punish the Mytilenians as your interest requires; or else you must give up your empire and cultivate honesty without danger (3.37; 3.40).
In reply, Diodotus:
…However, I have not come forward either to oppose or to accuse in the matter of Mytilene; indeed, the question before us as sensible men is not their guilt, but our interests. Though I prove them ever so guilty, I shall not, therefore, advise their death, unless it be expedient; nor though they should have claims to indulgence, shall I recommend it, unless it be clearly for the good of the country
Behold the men of Athens! Dead to honor, to principle, to humanity. This was a people whose hearts had hardened. Nothing was left to Athens but the pursuit of power—and its cousin, profit. The only language they spoke was the language of naked interest. That language saved the Mytilenians. They were lucky. Interest is a fickle master. The men of Melos discovered just how twisted a master it can be. In time, so would the Athenians.
This is a list of commonly found Pāḷi numbers which occur in the literature. Besides giving integers, I have also given fractions where I have noticed they occur, and have added the different forms that are found.
For the first 10 numbers I have also included their ordinal form in parentheses, after 10 they continue simply by adding -ma as the suffix, as in examples 7–10 given below. The numbers go sequentially up to 105, and other, theoretical, numbers can be inferred from the given examples. After that I have given only forms that I have found used in the Pāḷi books.
Physical beauty & attractiveness of the general population of men/women seems to have increased greatly in the past few centuries, judging by surviving art/photos, contemporary judgments, and objective criteria like missing teeth, likely due to economic/technological/medical/nutritional improvements, but less from cosmetic tricks. Beauty may, however, be in decline very recently as some of those trends reverse (eg. now too much food, not too little).
Is physical beauty, masculine or feminine, a negative-sum, zero-sum (positional) or positive good? And has beauty increased or decreased over time? Thinking over various anecdotes and examples and changes in public health and environmental factors like nutrition and infectious disease and dentistry, I speculate that physical attractiveness of men & women in the West is not purely positional & relative, but has increased in an absolute sense over the past few centuries (albeit possibly decreasing recently as a consequence of trends like obesity).
In high school, a promising young student at the Virginia Military Institute named George C. Marshall petitioned the president for a military commission. Which President did the creator of the Marshall plan petition? William McKinley (just months before the man’s life was cut short by an assassin’s bullet.) And most unbelievably, what of the fact that Robert Todd Lincoln was present as his father died following his assassination, was at the train station when President James Garfield was assassinated, and was in attendance at the event in which McKinley was assassinated? 3 assassinations, spread out over 40 years. Robert Todd Lincoln himself lived to be 82, dying in 1926. He could have read stories published by F. Scott Fitzgerald. He drove in a car. He talked on the telephone. He would have heard jazz music.
And these are just the events of so-called ‘modern history’.
We forget that woolly mammoths walked the earth while the pyramids were being built. We don’t realize that Cleopatra lived closer to our time than she did to the construction of those famous pyramids that marked her kingdom. We forget that Ovid and Jesus were alive at the same time. When British workers excavated the land in Trafalgar Square to build Nelson’s Column and its famous bronze lions, in the ground they found the bones of actual lions, who’d roamed that exact spot just a few thousand years before.
Isaac Newton’s cosmology apparently involved regular apocalypses caused by comets overstoking the furnace of the Sun and the repopulation of the Solar System by new intelligent species. He supports this speculation with an interestingly-incorrect anthropic argument.
Isaac Newton published few of his works, and those only after long delays, once he considered them perfect. This leaves his system of the world, as described in the Principia and elsewhere, incomplete, and many questions simply unaddressed, like the fate of the Sun or the role of comets. But in 2 conversations with an admirer and his nephew, the elderly Newton sketched out the rest of his cosmogony.
According to Newton, the solar system is not stable and must be adjusted by angels; the Sun does not burn perpetually, but comets regularly fuel the Sun; and the final result is that humanity will be extinguished by a particularly large comet causing the sun to flare up, and requiring intelligent alien beings to arise on other planets or their moons. He further gives an anthropic argument: one reason we know that intelligent races regularly go extinct is that humanity itself arose only recently, as demonstrated by the recent innovations in every field, inconsistent with any belief that human beings have existed for hundreds of thousands or millions of years.
This is all interestingly wrong, particularly the anthropic argument. That Newton found it so absurd to imagine humanity existing for millions of years but only recently undergoing exponential improvements in technology demonstrates how counterintuitive and extraordinary the Industrial & Scientific Revolutions were.
The rise of agriculture during the Neolithic period has paradoxically been associated with worldwide population growth despite increases in disease and mortality. We examine the effects of sedentarization and cultivation on disease load, mortality, and fertility among Agta foragers. We report increased disease and mortality rates associated with sedentarization alongside an even larger increase in fertility associated with both participation in cultivation and sedentarization. Thus, mothers who transition to agriculture have higher reproductive fitness. We provide the first empirical evidence, to our knowledge, of an adaptive mechanism behind the expansion of agriculture, explaining how we can reconcile the Neolithic increase in morbidity and mortality with the observed demographic expansion.
The Neolithic demographic transition remains a paradox, because it is associated with both higher rates of population growth [As a result, although exact estimates vary, it has been argued that average population growth rates rose from <0.001% to ~0.04% per year during the early Neolithic] and increased morbidity and mortality rates. Here we reconcile the conflicting evidence by proposing that the spread of agriculture involved a life history quality-quantity trade-off whereby mothers traded offspring survival for increased fertility, achieving greater reproductive success despite deteriorating health.
We test this hypothesis by investigating fertility, mortality, health, and overall reproductive success in Agta hunter-gatherers whose camps exhibit variable levels of sedentarization, mobility, and involvement in agricultural activities.
We conducted blood composition tests in 345 Agta and found that viral and helminthic infections as well as child mortality rates were statistically-significantly increased with sedentarization. Nonetheless, both age-controlled fertility and overall reproductive success were positively affected by sedentarization and participation in cultivation.
Thus, we provide the first empirical evidence, to our knowledge, of an adaptive mechanism in foragers that reconciles the decline in health and child survival with the observed demographic expansion during the Neolithic.
I have argued before that any potential American foreign policy or ‘grand strategy’ that requires statesmen with a nuanced understanding of a foreign region’s cultures, politics, and languages to implement it is doomed to fail. Regional acumen is a rare trait, and one I greatly admire. But it is rare for a reason. Regional acumen just does not scale—or at least, Americans do not know how to scale it. I have said this before. But it was reinforced tonight when I stumbled—quite by accident—across this old New York Times Magazine personal essay by Lydia Kiesling. In it she describes her experience learning Uzbek with a FLAS grant from the Department of Education.
…This article gets to the heart of why America will always lack the kind of language and area expertise needed to succeed in the kinds of things the American people (or American leaders) often demand the United States government do. Uzbek is an obscure language. But it is an obscure language at the center of the national security concerns that have bedeviled the United States over the last decade and a half. To give a brief picture:
There are about three million Uzbeks who live in Afghanistan. Uzbeks were an essential part of the Northern Alliance’s resistance against the Taliban, and Uzbek leaders became an important part of the government established by NATO forces once the Taliban was driven from power. This is still true. Afghanistan’s current vice-president, Abdul Rashid Dostum, is an Uzbek.
Uzbekistan is the central hub of central Asia. One of the greatest defeats of our Afghan campaign happened not on the battlefield, but at the diplomats’ table. Uzbekistan’s decision to withdraw American basing and supply rights was nothing short of a disaster, forcing the United States to be even more dependent on Pakistan (our true enemy in the region) for logistic support.
Uzbek and Uighur are a hair’s breadth away from mutually intelligible. Xinjiang’s low intensity Uighur insurgency is the single greatest security concern of China, America’s greatest rival.
This is a language that matters. What happens to the woman who spent a year of her life studying it? She was rejected from the CIA (or wherever) on background technicalities, and has not used her language since. Or to be more precise, she has used it twice. Twice in four years. Twice.
This gets to the heart of America’s problem with regional acumen. Area expertise simply doesn’t pay. You may count the number of private sector jobs currently on the market that demand Uzbek fluency on two hands. And even if there were a multitude of jobs that required proficiency in Uzbek and English, there are undoubtedly several hundred—perhaps several thousand—Uzbekistanis who speak English better than Ms. Kiesling speaks Uzbek, and who will work for less pay to boot.
[Unsong is a finished (2015–2017) online web serial fantasy “kabbalah-punk” novel written by Scott Alexander (SSC). GoodReads summary:
Aaron Smith-Teller works in a kabbalistic sweatshop in Silicon Valley, where he and hundreds of other minimum-wage workers try to brute-force the Holy Names of God. All around him, vast forces have been moving their pieces into place for the final confrontation. An overworked archangel tries to debug the laws of physics. Henry Kissinger transforms the ancient conflict between Heaven and Hell into a US-Soviet proxy war. A Mexican hedge wizard with no actual magic wreaks havoc using the dark art of placebomancy. The Messiah reads a book by Peter Singer and starts wondering exactly what it would mean to do as much good as possible…
Aaron doesn’t care about any of this. He and his not-quite-girlfriend Ana are engaged in something far more important—griping about magical intellectual property law. But when a chance discovery brings them into conflict with mysterious international magic-intellectual-property watchdog UNSONG, they find themselves caught in a web of plots, crusades, and prophecies leading inexorably to the end of the world.
Jimmy Carter himself embodied both of these impulses: he embraced government action to protect the environment and public health, and he also sought to make regulation less burdensome and costly. Both causes, in fact, were personal passions. Carter had spent childhood days roaming the woods and fields in rural Georgia. “Everyone who knows me”, he said while signing the Superfund bill, “understands that one of my greatest pleasures has been to strengthen the protection of our environment.” But government efficiency also animated the president. With a background in the Navy’s nuclear submarine program, Carter was used to calculating and balancing risks and benefits for strategic purposes. As governor of Georgia, Carter also had worked to rationalize state government, abolishing and consolidating hundreds of state agencies. Now in the closing days of his presidency, Carter spoke fondly of the utterly bureaucratic cause of information management and regulatory reform. One of the “high points of my presidency”, Carter recalled, was a day in 1978 when more than 900 minor and outdated safety and health regulations “were stricken from the books.” Carter characterized the Paperwork Reduction Act as a defining legacy. The law, Carter said at the signing ceremony, was “embedding my own philosophy . . . into the laws of our Nation.” At his very first presidential cabinet meeting, Carter had directed his cabinet officers and agency heads to cut down the “extraordinary and unnecessary burden of paperwork” on the American people. Carter now announced with pride and a little uncertainty as he signed the paperwork law, “We’ve addressed the bureaucrats, and we’ve won, right?” The White House audience laughed.
This article uses the records of Carter’s domestic policy and economic advisers and his budget office to examine a crucial lead-up to that December signing ceremony: the Carter administration’s efforts to manage the costs and burdens of federal regulation. Why did Carter and his advisers believe that improving federal regulation was so important? How did the administration’s approach to regulatory reform evolve over the course of Carter’s presidency? More narrowly, why did the Carter administration initially oppose strong Office of Management and Budget oversight of regulation and then later advocate legislation to strengthen OMB’s role? This is a story of tension and conflict as the Carter administration sought to balance regulation and reform, as well as trade-offs between agency independence and White House control. Carter’s integrated approach was, in some ways, less politically successful than Reagan’s single-minded tack. Carter’s compromises inevitably disappointed some of his own constituents, the environmental and health advocates calling for tougher regulation. Yet he also did not go far enough to win over conservatives and business advocates. Few interest groups rallied to support compromise and moderation. Yet if Carter had continued his reform efforts in a second term, perhaps his effort to strike a balance might have set the country on a more mature regulatory path instead of an extended political stalemate.
The White House’s relationship to federal agencies lay at the heart of conflicts over regulatory reform. Carter was trying to figure out how to effectively oversee the executive branch. His advisers quickly grew skeptical about designating OMB to serve as the federal government’s regulatory enforcer. They instead spread regulatory oversight across several executive offices and policy groups. The White House sought to partner with the regulatory agencies to help them improve government performance with new rule-writing processes. The focus on systems and processes and the diffusion of oversight power were hallmarks of the Carter administration’s regulatory reform efforts. The strategy of partnering with the agencies made the administration’s accomplishments politically feasible, but it also ultimately frustrated White House policymakers and made them hunger for more effective oversight. Regulatory agencies and labor and environmental advocates in the Democratic coalition resisted and slowed the administration’s progress.
By the end of Carter’s term in office, the Carter administration had forcefully asserted the president’s power to review, and even to overturn, agency regulatory decisions. Carter’s senior staff also settled on OMB as the only viable agency to oversee regulatory reform. In its closing months, the Carter administration created the institutional framework that Reagan’s OMB would use for its regulatory review efforts. The Carter administration’s initial move away from OMB power and his administration’s subsequent efforts to strengthen OMB’s role are thus critical to understanding the rationale and origins of OMB’s controversial regulatory review authority. The hostile anti-regulatory rhetoric that characterized the early Reagan years differed sharply from the Carter administration’s emphasis on balanced and efficient regulation. But the central substantive thrust of Reagan’s regulatory program in the early 1980s continued efforts initiated by the Carter administration in the late 1970s.
Although commonly remembered as a liberal regulator, in part for his creation of the Department of Energy and his push for national energy conservation and planning, Carter more accurately should be seen as a leading deregulator of the 20th century. Scholars have long documented how the Carter administration enthusiastically deregulated many long-controlled industries, including air travel, trucking, finance, and railroad shipping. The administration also laid the groundwork for the decontrol of oil and natural gas prices. Carter considered his record on industry deregulation “one of the best success stories” of his presidency, and his domestic policy staff described it as “one of the President’s great domestic legacies.”
This article studies the causes of China’s Great Famine, during which 16.5 to 45 million individuals perished in rural areas.
We document that average rural food retention during the famine was too high to generate a severe famine without rural inequality in food availability; that there was substantial variance in famine mortality rates across rural regions; and that rural mortality rates were positively correlated with per capita food production, a surprising pattern that is unique to the famine years. We provide evidence that an inflexible and progressive government procurement policy (where procurement could not adjust to contemporaneous production and larger shares of expected production were procured from more productive regions) was necessary for generating this pattern and that this policy was a quantitatively important contributor to overall famine mortality.
…A back-of-the-envelope calculation shows that the inflexible and progressive procurement mechanism explains 32–43% of total famine mortality. Hence, our proposed mechanism is quantitatively important, and at the same time leaves room for other factors, such as GLF policies and the complex political environment of the time, to contribute to famine mortality.
[Keywords: famines, modern Chinese history, institutions, central planning]
…Our study proceeds in several steps. The first step is to document that after procurement, rural regions as a whole retained enough food to avert mass starvation during the famine. Since the entire rural population relied on rural food stores, we compare the food retained by rural regions after procurement to the food required by rural regions to prevent famine mortality. Using historical data on aggregate food production, government procurement and population (adjusted for the demographic composition), we find that average rural food availability for the entire rural population was almost 3× as much as the level necessary to prevent high famine mortality. We reach these conclusions after constructing the estimates to bias against finding sufficient rural food availability. Our findings are consistent with Li & Yang 2005’s estimates of high rural food availability for rural workers and imply that the high level of famine mortality was accompanied by substantial variation in famine severity within the rural population…Another study that examines the determinants of regional procurement levels is Kung & Chen 2011. They find that political radicalism increased regional procurement during the famine and explains ~16% of total famine mortality. As such, our mechanism complements theirs in explaining total famine mortality
From the fall of the Roman Empire until the late Middle Ages, elephants virtually disappeared from Western Europe. Since there was no real knowledge of how these animals actually looked, illustrators had to rely on oral, pictorial and written transmissions to morphologically reconstruct an elephant, thus reinventing the image of an actually existing creature. This led, in most cases, to illustrations in which the most characteristic features of elephants—such as trunk and tusks—are still visible, but that otherwise completely deviate from the real appearance and physique of these animals. In this process, zoological knowledge about elephants was overwritten by their cultural importance.
Based on a collection of these images I have reconstructed the evolution of the ‘Elephas anthropogenus’, the man-made elephant.
This article investigates the noble academy, known as the Musaeum Minervae, established by Sir Francis Kynaston in Covent Garden in 1635–1636. Drawing on a newly discovered manifesto in which Kynaston set out the case for his academy—a transcript of which is provided as an appendix—it analyses the aims behind the project, in the context of earlier English academy schemes, the nature and scope of its activities and the reasons for its collapse. Throughout the academy’s existence, Charles I provided substantial support and took a close interest in its fortunes, treating it as part of a wider project to strengthen the English aristocracy and make them fit servants of his monarchy.
If the ‘peace marriage’ (heqin) system Luttwak describes did not do the Xiongnu in, what did?
…The logistics machine the Han created to defeat the Xiongnu is one of the marvels of the ancient world3. Each of the Han’s campaigns was a feat worthy of Alexander the Great. But Alexander only pushed to India once. The Han launched these campaigns year after year for decades4. The sheer expanse of the conflict is staggering; Han armies ranged from Fergana to Manchuria, theaters 3,000 miles apart. Each campaign required the mobilization of tens of thousands of men and double the number of animals. Chang Chun-shu has tallied the numbers:
“In the many campaigns in the Western regions (Hexi, Qiang, and Xiyu) and the Xiongnu land, the Han sent a total force of over 1.2 million cavalrymen, 800,000 foot soldiers, and 10.5 million men in support and logistic roles. The total area of land seized in Hexi alone was 426,700 square kilometers. In developing this region the Han spent 100 billion in cash per year, compared to the regular annual government revenue of 12 billion. In the process the Han government moved from the interior over 1 million people to populate and develop the Hexi river. Thus the Han conquest of the land west of the Yellow River was the greatest expansion in Chinese history.”5
The demands of the war forced the Han to restructure not only the Chinese state, but all of Chinese society.6 The Han’s willingness to radically restructure their society to meet the immense financial and logistic demands of an eighty-year conflict is one of the central reasons they emerged victorious from it.
…The Han followed the same basic strategy. The aim of generals like Wei Qing and Huo Qubing was to kill every single man, woman and child they came across and by doing so instill such terror in their enemies that tribes would surrender en masse upon their arrival. By trapping the Xiongnu into one bloody slugging match after another the Han forced them into a grinding war of attrition that favored the side with the larger population reserves. The Xiongnu were unprepared for such carnage in their own lands; within the first decade of the conflict the Han’s sudden attacks forced the Xiongnu to retreat from their homeland in the Ordos to the steppes of northern Mongolia. Then came a sustained—and successful—effort to apply the same sort of pressure on the Xiongnu’s allies and vassals in Turkestan and Fergana. By sacking oasis towns and massacring tribes to the east, the Han were able to terrorize the peoples of Turkestan into switching their allegiance to China or declaring their independence from the Xiongnu.
The Xiongnu were left isolated north of the Orkhon. Under constant military pressure and cut off from the goods they had always extorted from agrarian peoples in China and Turkestan, the Xiongnu political elite began to fracture. A series of succession crises and weak leaders ensued; by 58 BC the Xiongnu’s domain had fallen into open civil war. It was one of the aspiring claimants to the title of Chanyu that this conflict produced who traveled to Chang’an, accepted the Han’s suzerainty, and ended eighty years of war between the Han and the Xiongnu.8
How did the Chinese transform an enemy whose realm stretched thousands of miles across Inner Asia into a mere tributary vassal? They did it through flame and blood and terror. Any narrative of Han-Xiongnu relations that passes over these eighty years of grueling warfare is a distorted depiction of the times.
A few weeks ago a friend passed along one of the least correct essays I have ever had the misfortune to read. It was written by Edward Luttwak…In it Luttwak suggests contemporary Chinese foreign policy follows a pattern first seen in the foreign relations of the Han Dynasty two millennia ago.
Formidable mounted archers and capable of sustained campaigning (a primary objective of the Steppe State), the Xiongnú ravaged and savaged and extorted tribute from the perpetually less martial, and certainly cavalry-poor Han until the latter finally felt able to resist again. Even then, 147 years of intermittent warfare ensued until Huhanye (呼韓邪), the paramount Chanyu (Qagan, Khan) of the Xiongnú, personally and formally submitted to the emperor Han Xuandi in 51 BCE, undertaking to pay homage, to leave a son at court as a hostage, and to deliver tribute, as befitted a vassal. That was a very great downfall from the familial status of earlier Chanyus of the epoch of Xiongnú predominance, who were themselves recognized as emperors, whose sons and heirs could have imperial daughters in marriage, and who from 200 BCE had received tribute from the Han, instead of the other way around. It is this successful transformation of a once superior power first into an equal (signified by imperial marriages) and then into a subservient client-state that seems to have left an indelible residue in China’s tradition of statecraft.
…if Edward Luttwak wants to talk about how the echoes of the Han-Xiongnu war can be heard in modern China’s foreign policy, I am all ears. Long-term readers of The Stage know that there are few conversation starters I would find more thrilling to hear. Too many contemporary controversies cannot be understood until we step back and look at world affairs from the long view of history. But there is a catch in all this: the history has to be correct. It must accord with the facts. If one uses the past to interpret the present then your reading must be based on events that actually happened. This cannot be said for Mr. Luttwak’s essay. The story he tells simply did not happen.
Luttwak’s description of the heqin policy’s aims is basically correct. It was designed to corrupt the Xiongnu and slowly ‘Sinicize’ them. It was designed, through the power of Confucian family norms, to subordinate the Xiongnu ruler to the Han Emperor.
What Luttwak neglects to mention is that the policy was a complete and utter failure.
For Mao Zedong and the Chinese Communist Party, the socialist transformation after 1949 was not only a political and administrative construction, but also a process of transforming the consciousness of the people and rewriting history. To fight lukewarm attitudes and “backward thoughts” among the peasants, as well as their resistance to rural socialist transformation and collectivization of production and their private lives, Mao decided that politicizing the memory of the laboring class and reenacting class struggle would play a substantial role in ideological indoctrination and perpetuating revolution.
Beginning in the 1950s, the Party made use of grassroots historical writing, oral articulation, and exhibition to tease out the experiences and memories of individuals, families, and communities, with the purpose of legitimizing the rule of the CCP. The cultural movement of recalling the past combined grassroots histories, semi-fictional family sagas, and public oral presentations, as well as political rituals such as eating “recalling-bitterness meals” to educate the masses, particularly the young. Eventually, Mao’s emphasis on class struggle became the sole guiding principle of historical writings, which were largely fictionalized, and recalling bitterness and contrasting the past with the present became a solid part of PRC political culture, shaping the people’s political imagination of the old society and their way of narrating personal experience.
This article also demonstrates people’s suspicion of and resistance to the state’s manipulation of memory and ritualization of historical education, as well as the ongoing contestation between forgetting, remembering, and representation in China today.
[Keywords: historiography, Mao Zedong, socialist education, memory, recalling bitterness]
…Different from “speaking bitterness (诉苦 suku)” in the Land Reform movement of the late 1940s and early 1950s, which was mainly implemented as a technique of mobilization, the “recalling bitterness (忆苦 yiku)” campaign in the 1960s aimed at reenacting class struggle and reinforcing class awareness by invoking collective memory.6 During this process, which was largely interactive and involved different levels of the Chinese state apparatus, history became personalized and also gradually fictionalized, and the oral presentation of memory became ritualistic and volatile to suit the needs of different political agendas. This project of ideology-driven and class-based historical writing and oral articulation was interestingly conducted mainly by writers of fictional works or manipulated by cultural officials of the state, and there was a gradual blurring of the boundary between history and fiction. Many family history stories appeared in literary magazines rather than journals of historical research. Finally, past bitterness not only became the articulation of individual and collective memories, but also involved rituals and performance, and thus was successfully incorporated into the larger institution of propaganda and Chinese popular culture.7 As a result, all depictions of the old society in the recalling-bitterness movement were dissociated from “objective realities” and became “representational realities.”8
…Lin Biao emphasized that this campaign was a “living education” that could effectively overcome the mentality of pacifism and enhance the soldiers’ will to fight. “If the past bitterness is not understood, the present sweetness will be unknown. [Some] might regard today’s sweetness as bitterness”, Lin Biao said.20
…Collective memory can be defined as “recollections of a shared past ‘that are retained by members of a group, large or small, that experienced it,’” and this “socially constructed, historically rooted collective memory functions to create social solidarity in the present.”50 During the process of socialist education, the party-state attempted to build a class identity grounded in a shared memory of past suffering, but did so by gradually compromising historical authenticity. “Pure memory” was reworked to take on “quasi-hallucinatory forms” when it was put into images to configure tragedy and trauma.51 Emphasizing class confrontation, hatred, and bitter memory, the narrative schemas of semi-fictional family histories show several common characteristics.
First, many family histories during the socialist education movement appeared in multiple literary magazines at national and provincial levels or were published in volumes dedicated to reportage literature (报告文学 baogao wenxue), emphasizing “vividness (生动 shengdong)” and “literary character (文学性 wenxuexing)” in addition to “educational meaning.”52 The famous myth about a female tenant-farmer named Leng Yueying (冷月英 1911–1984) being locked in landlord Liu Wencai’s (刘文彩 1887–1949) “water dungeon (水牢 shui lao)” was published as fact-based “reportage literature” in 1963.53 Many works in this genre were written by authors of fiction and essays. The short story writer Ai Wu wrote an article entitled “Miserable Childhood (苦难的童年 Ku’nan de tongnian)” to tell the stories of 2 peasants in the Beijing suburbs. The stories were published by the leading literary magazine People’s Literature (人民文学 Renmin wenxue) in February 1964. The same issue also contained another family history written by the famous essayist Yang Shuo (杨朔 1913–1968).
Second, landlords and capitalists were portrayed as extremely brutal and inhumane, particularly to women and children. Ralph A. Thaxton, Jr., points out that the post-famine recalling-bitterness propaganda was aimed at altering the villagers’ memory of the Great Famine (the “bitterness” produced by the CCP) and replacing it with the “bitterness” from before 1949.54 Yet, if the memory of the Great Leap Forward and the famine was more about bodily pain and hunger, the bitterness in pre-1949 China presumably had a much broader spectrum, ranging from physical pains and emotional frustrations to sociopolitical inequality, and emphasized the sense of humiliation and de-humanization in the old society. One such story recounted the experience of a boy named Xiaotieliang (小铁粱), who said that he was a helper in the house of landlord Kang and was beaten all day long. He would be beaten if he got up late, if he moved slowly, if the landlord’s little son cried, or if the pig got sick or a chicken died. If a landlord was a local philanthropist, then the story was meant to reveal his true face as a sham who hoodwinked laboring people.55 In Guizhou, the provincial literary magazine published a story entitled “The Suffering of Two Generations (两代人的苦难 liangdairen de ku’nan)”, in which a female narrator told about how the landlord’s wife pinched her breasts, causing her milk to spray several inches. This story was written by the Writing Group of the 4 Histories.56 A reader whose letter was published in the October 1964 issue of Shandong Literature (山东文学 Shandong wenxue) was deeply moved by the 3 family histories that had appeared in the magazine earlier that year. The reader said that the stories were all true and very educational, and offered his own examples of bitter experiences. He knew a 13-year-old girl, Xu Ronghua (徐荣华), who had worked as a servant and had had to carry the landlord’s daughter on her back to school. 
Grandma Zhang, another servant, was forced to drink her employer’s urine. Of the Zhangs’ 12 children, 3 were tortured to death by capitalists, 6 were starved to death, and the remaining 3 were sold. However, the author of the letter said that the family stories also provided evidence of how sweet the new society was. Xu Ronghua survived and became a Party member, and the sold children were returned to Grandma Zhang with the aid of the communist government. The details cited in the letter repeated the sadistic plots of the bitter story: as a wet nurse, Grandma Zhang’s breasts were pinched by her employer, Landlord Chen Number Three, with wood splints to produce more milk until her breasts became red and swollen. To prevent Zhang from breastfeeding her own child, Chen was said to have used iron rings to encase Zhang’s nipples when she went out and to have them checked when she returned.57
…Fifth, while fictitious stories were often told in the name of “reportage”, sometimes they featured a real person as the main character. The famous soldier-writer Gao Yubao (高玉宝 1927–?), an orphan who had labored for a landlord, published his autobiographical account titled Gao Yubao in 1951, which was reprinted in 1972. Gao explained how his experiences were written and revised as semi-fiction:
With the help and cultivation of the Party and the leaders, I finally completed the first draft of the xiaoshuo (小说).64 Later, the Party Committee of the army dispatched experts to help me revise. Based on the draft, we cut, concentrated, and summarized the characters and the plots, and thus finished this novel.65
Here Gao does not deny that his work is a fictional xiaoshuo based on personal experiences, and that it had been reworked by the author and professional writers to meet the needs of political propaganda. Gao further discussed how his understanding of how to write xiaoshuo was deepened:
When I started to write Gao Yubao […] I did not have time to study some political theories and lacked profound understanding of the great Mao Zedong Thought […] Particularly I did not know what xiaoshuo means, nor did I know that the personas and plots can be created. As a result, what I wrote was nothing but an autobiography […] When revising it, I reasonably highlighted the spirit of rebellion of Yubao and the masses, and enhanced the class feelings among the laboring people in their consolidated struggle. I also deepened my exposure of the reactionary nature of the exploitative class. In addition, I added […] the Party’s influence on Yubao.66
For the reader, an autobiographical account whose title is identical to the author’s name is easily accepted as truth, but Gao did not mind blending real experiences with imagination and editing based on political need.
In addition to writing, the visualization of class education became another form of preserving and reinforcing the collective memory of victimization. The theme was soon boiled down to 2 key words: bitterness (苦 ku) and hatred (仇 chou). The documentary “Never Forget Class Bitterness, Forever Remember the Hatred in the Sea of Blood (不忘阶级苦，永记血海仇 Buwang jieji ku, yongji xiehai chou)”, made in 1965, was based on an exhibition promoting class education in Shandong Province. The film showed the objects on display, including a leather whip, club, and walnuts filled with lead that capitalists allegedly used to beat workers. These items were interpreted in the voiceover narrative as part of the “so-called bourgeois civilization.” The documentary showed a photo of an unemployed worker selling his daughter. Most other images were painted pictures with motifs such as child laborers in agony burying the dead body of a little colleague while headmen watched them with whips in their hands; a child worker with a fever who fainted into a wok filled with boiling water; and a sick child buried alive in a wooden box, striking it from inside with his fists. The plight of the peasants was another main theme of the exhibition and of the documentary, both of which displayed a quilt that a poor peasant family had allegedly used for 3 generations, a wooden pillow that was said to have been used for 4 generations, and the one pair of pants that a poor couple had shared for many years. The forced separation of families by poverty was a recurring theme of the exhibition and of recalling-bitterness literature. Parents were forced to sell their children; a wife was sold to a human trafficker to pay her husband’s debts to the landlord. The documentary ended with the liberation of the people and the founding of the People’s Republic. The voiceover stated,
In the socialist society, class struggle still exists. All these that have passed, we can never forget! The blood debt owed by imperialism, the crimes committed by landlords and capitalists, and all the suffering inflicted on us—can we forget them?
Afterward, the documentary showed a village history tablet that bore an inscription of 4 characters, Yong Bu Wang Ji (永不忘记): “never forget.” The voiceover concluded: “No, we cannot. This hatred is as deep as the sea and the animosity is as heavy as a mountain, and let them be inscribed on the rock and let our offspring never forget.”67
…After being chosen, the speakers were trained further to ensure they were eloquent, emotional, and able to cry easily.71 One speaker, Master Hao, showed good skills in sobbing, talking, eating a steamed bun, and wiping off tears—almost at the same time.72
… The deep sense of victimization was effectively used to justify the violence and physical abuse of Red Guards. One former Red Guard recalled that when he and his peers were reluctant to beat students with bad class backgrounds, one radical student stepped forward to do “thought work.” He talked about the bitterness and hatred of the laboring people, the slaughter of revolutionary masses by the Nationalist Party, and the death of his uncle in the Civil War. Through tears, the student asked: “Back then, who sympathized with us? Who pitied us? Today, can we have mercy on these people? Can we pity them?” Upon hearing this, some students’ eyes turned red, and they shouted: “No, we can’t!” Some turned back and slapped the face of the student who had been beaten, though doing so half-heartedly.87 Other former students, however, recalled their experiences with skepticism. One former sent-down youth working in Inner Mongolia wrote that recalling bitterness meetings became the “privilege” of a chosen few in his village. However, the content was never consistent, he reported. The orator first said that he became a shepherd for the landlord at 12, but then he would say that was when he was 10. The village chief would go so far as to speak about the bitterness he suffered during the Great Famine in 1961 and 1962.88
In Yunnan Province in 1969, the Provincial Revolutionary Committee issued a directive requiring ideological education for sent-down youth. In one village, there was a famous female orator who had been an adopted daughter-in-law. With innocent eyes, a tanned face, and big, rough hands, the old woman convinced listeners of her past suffering. When her talk reached its climax, she burst into tears and cried out loud. Her crying, which was in itself an accusation, automatically triggered the crying of the audience, and was followed by slogan shouting. The sent-down youth who provided the reminiscence, however, said that he was later told that the old woman’s 4 brothers starved to death during the famine of 1960, and the bitterness under communism, which she was forbidden to mention, might have been the real cause of her crying.89 Very often, an invited bitterness speaker confused pre-Liberation bitterness and post-Liberation suffering, as recounted by a low-level government official, Party Secretary Ye. According to Ye, the local government usually invited a person whose “living conditions improved substantially after the Liberation” to address the youngsters. Once, however, an old man described the “difficult time he experienced after the failure of the Great Leap Forward: how much hunger he had suffered during that period, and how many people he had seen die.” The host of the event wanted to stop him, but found that the young audience listened with amusement, that is, until the host himself began to feel like laughing.90 For the sent-down youth Zhu Xueqin (朱学勤 b. 1962), who later became a famous historian, an old peasant’s anachronistic accusations of collectivization under communism, of the starvation of the villagers, and of the deprivation of the right to beg for food were much more enlightening than entertaining, because they destroyed his youthful dream of revolution in toto.91
[Why is American politics so increasingly dysfunctional: less and less legislation passes, on increasingly partisan grounds, leading to gridlock; ever more matters are decided by judicial fiat; the imperial presidency expands to fill the vacuum; and every presidential election is more cutthroat and extreme than the one before it, as control of the presidency & Supreme Court nominations is seen as nothing short of a matter of existential survival.
Fukuyama diagnoses a major falloff in American state capacity, caused by its original design of checks-and-balances: a system which was perhaps reasonable centuries ago has been pushed to its limits as the USA has grown orders of magnitude larger in population, geographic size, and societal complexity, while the old system of amendments etc. has fallen apart. Major legal changes, like gay marriage, which should have happened by constitutional amendment, are instead imposed by the courts (in striking contrast to more functional parliamentary democracies like France/Germany/UK—it is no accident that so few new democracies choose to emulate the USA’s Constitution). In response, empowered by ‘elastic clauses’, a hidden constitution of bureaucracies, administrative law, and courts has replaced it.
This replacement, however, has never been made explicit: obsolete old institutions persist, new missions and constraints are larded onto institutions, more and more interest groups and classes of favored insiders protect the status quo creating a “vetocracy”, and the lack of legitimacy and explicit authority means that decisions are never final, and anyone can use the fickle slow courts at any time to launch a new attack on what ought to have been decided already (or at least obstruct it). The responses to these pathologies, however, are themselves pathological, adding ever more restrictive and inconsistent rules. This further undermines public trust and participation.
Because of this, agreements are never final, political bargains cannot be enforced under winner-take-all conditions, and capture of the judiciary and executive branch become the supreme priority. Precisely because of the vetocracy and failed formal institutions, reform within the system become nearly impossible. The vested interests benefit too much and are not motivated to reform it.]
The depressing bottom line is that given how self-reinforcing the country’s political malaise is, and how unlikely the prospects for constructive incremental reform are, the decay of American politics will probably continue until some external shock comes along to catalyze a true reform coalition and galvanize it into action.
When she inserts a key in the padlock, the door swings open to reveal thousands of books, paintings, engravings, photographs and films—all, in one way or another, connected to sex. It was the kinkiest secret in the Soviet Union: across from the Kremlin, the country’s main library held a pornographic treasure trove. Founded by the Bolsheviks as a repository for aristocrats’ erotica, the collection eventually grew to house 12,000 items from around the world, ranging from 18th-century Japanese engravings to Nixon-era romance novels. Off limits to the general public, the collection was always open to top party brass—some of whom are said to have enjoyed visiting. Today, the collection is still something of a secret: there is no complete compendium of its contents and many of them are still not listed in the catalogue.
…One of the most stunning items seized from an unknown owner is The Seven Deadly Sins, an oversized book of engravings self-published in 1918 by Vasily Masyutin, who also illustrated classics by Pushkin and Chekhov. Among its depictions of gluttony is a large woman masturbating with a ghoulish smile. Before the revolution, it was fashionable among the upper classes to assemble so-called knigi dlya dam (Ladies’ Books)—a kind of bawdy scrapbook. An ostentatious leather-bound album with Kniga Dlya Dam embossed in gold on the cover opens to reveal a Chinese silk drawing of an entwined couple. Further on, dozens of engravings show aristocratic duos fornicating in sumptuously upholstered settings…Among Skorodumov’s treasures was a portfolio of drawings and watercolours by the avant-garde titan Mikhail Larionov. Made in the 1910s, they are no less scandalous in today’s Russia. One pencil sketch features a happily panting dog standing in front of a human, who is engaged in much more than petting. A watercolor depicts two soldiers having an intimate encounter on a bench.
…How did Skorodumov amass such a collection when owning a foreign title could result in a Gulag sentence?…There is also a second theory. Stalin’s secret police chief Genrikh Yagoda, a pornography aficionado whose apartment reportedly held a dildo collection, is said to have enjoyed viewing Skorodumov’s holdings. Librarians believe that he personally ensured the latter’s safety…Safely ensconced in the spetskhran, the erotica collection became available for viewing by top Stalinist henchmen. According to legend, they included the mustachioed cavalry officer and civil war hero Semyon Budyonny and grandfatherly Mikhail Kalinin, the longtime figurehead of the Soviet state. “They were supposedly interested in the visual stuff—postcards, photos”, Chestnykh said. A Politburo member did not need a pass: “No one could refuse them.”
Americans, and particularly American conservatives, are sometimes accused of failing to confront their country’s past honestly. Ye Fu’s challenge—and in many respects all of China’s—was not honestly facing his past, but simply finding it. Ye Fu was born the great-grandson of a ranking Nationalist commander, the grandson of a landlord, and the son of two parents who zealously joined the revolution only to be discarded by later ‘struggles of the Proletariat’. Ye Fu was only dimly aware of this heritage growing up. It was not until his father’s funeral, when he first set foot on his ancestral lands, that he had either the chance or a reason to find the truth of his family’s past. This became a quest that drove and consumed him and is a recurring motif that unites his most poignant essays.
…Thus the true details of his father’s life and heritage were revealed: a grandfather who had climbed from the peasantdom of his birth to the hallowed class of landlord only a few years before the revolution overtook the village (he earned the title by being the only one in the village rich enough to employ a single field hand); a son who zealously hunted down landlords for the Party, unaware that his own family 50 miles to the east suffered the same persecution he so earnestly delivered; the suicide of his father and the destruction of the clan’s eldest generation in its entirety, both brothers and wives, within a single night.
“Hundreds of millions of lives were shoveled into the trenches of the 20th century”, Ye Fu reflects.4 Historians estimate that the death toll of these land reform campaigns is in the range of two to three million.5 But for Ye Fu those ditches are not those of the nameless millions. These were ditches dug by his father and filled by his grandfather. The tragedies of the 20th century are his tragedies. He was born from the ditches—though he would not discover this gruesome truth until he was a grown man.
He who reads Ye Fu’s meditations on these mournful roots leaves with the strong—but unexpected—impression that the true tragedy of modern Chinese history is not found in its colossal death toll. For Ye Fu the real tragedy is what all these dead represented. The first to die were those most committed to the old order. They were the upholders of traditional propriety, keepers of the ancestral shrine, and symbols of basic human decency. These men and women often lived far below their ideals, profiting from a system rightly seen as exploitative, but as long as they lived so did the ideal. Their deaths meant the destruction of their entire society. With them passed old structures of power and control, but also the old values and traditions these social arrangements had embodied and enshrined. The life defined by decorum, trust, filial piety, and kindness lost its place as the ideal of Chinese civilization, replaced by a new model that honored cruelty, deception, and revolutionary ardor.
The encounters between Soviet citizens and African students studying in the Soviet Union in the 1960s inevitably generated problems of acclimation, social and political conflict, and racial strife.
The article illuminates the ways the cultural clash affirmed Russians’ and Africans’ sense of cultural superiority. The African presence in Russia confirmed Soviet altruism in rearing Africans into cultured and scientifically endowed people. Similarly, African encounters with Soviet daily life reaffirmed their identity as culturally superior to Russians by emphasizing aspects of the individual that directly conflicted with Soviet notions of collectivism.
The conflict over culturedness had direct ramifications on the Cold War as it strengthened Africans’ pragmatic stance toward Soviet patronage and their reluctance to embrace Soviet ideology and values.
…The number of African countries with students in Russia rapidly increased from 10 in 1958 to 46 in 1968. The 1959–60 school year had a mere 72 students from sub-Saharan Africa, increasing to 500 in 1961, and then to 4,000 by the end of the decade. Of the 17,400 foreign students in the Soviet Union in 1970, 20% originated from Africa.9
Soviet officials articulated their policy toward the Third World in paternalist language that essentialized all African nations to an identical stage of backwardness. As Nikita Khrushchev reiterated in a speech to the Council of Ministers in November 1960: “[Lenin] saw the historical mission of our country to help the hundreds of millions of people of downtrodden countries …to liquidate economic and cultural backwardness.” The Soviet Union’s own historical trajectory furnished the template. Having had to industrialize quickly in the thirties, the Soviet Union, Khrushchev emphasized, “was familiar [with] and understood” the needs of postcolonial states. Therefore, Khrushchev insisted that the Soviet leadership designed the People’s Friendship University “only for one thing: to help other countries to prepare highly qualified personnel.” After all, the Soviet people, he said, were “like brothers” to foreigners and endeavored to help them “learn better.”10 The idea that Soviet citizens were “like brothers” to Africans was a staple of Soviet ideological propaganda, which often portrayed whites as “class enemies and oppressors” or simply “bourgeois” and regarded dark-skinned people, and Africans in particular, as “our foreigners.”11
To entice youth from developing countries, the Soviet government offered free transportation from their home countries, education, healthcare, and a monthly stipend. The stipend was 4× higher than that of Soviet students and included a one-time lump sum of 300–400 rubles for winter clothing and other supplies.12 Prospective students applied for scholarships through Soviet embassies or Soviet-friendly organizations. Students from countries without student exchange agreements could apply directly to a Soviet university… Soviet administrators followed national quotas to balance out national representation and prioritized students with worker and peasant backgrounds. In the first years, the “overwhelming majority” came from the poor, working class, and lower bureaucratic layers of African society. Of the incoming students for the 1961–62 year, for example, 25% had not completed secondary education and over half were from “poverty stricken families.”15 But ultimately class played little role in admissions, as most applicants were rejected simply for lack of space. UND pro-Rector P. D. Erzin reported that by the middle of 1960 the university had received 16,200 applications, or 30 for each available place.16 The class nature of foreign students began to change later in the decade, however, as wealthier Africans started applying. This influx of “landowning and merchant classes” prompted B. S. Nikoforov, the head of Moscow State University’s international office, to complain that many students had been “corrupted by bourgeois morals.” These included individualism, concern with personal aesthetics and consumerism, and affinity toward Western liberalism. Moreover, many had first studied in Western Europe and the United States and still maintained contact with their embassies. Nikoforov considered them possible “enemy agents” and “class aliens” in black skin.17
…Shortly after their arrival, students took a mandatory exam assessing their general educational level. Consistent with their paternalism and class-based affirmative action, Soviet officials purposely relegated placement exams to “simple questions”, expecting students to have little preparatory education. At a UND council meeting in 1960, V. S. Bondarenko, the dean of the preparatory department, reported that foreign students’ knowledge level on average was equivalent to the Soviet 7th grade, particularly in math. One student, Bondarenko noted, exclaimed “Praise Allah!” after discovering his major did not require math courses. Many students only possessed religious education and knew a bit of their country’s history but had little knowledge of math, physics, or geography.22
Unaware of Soviet affirmative action, students expressed offense and considered the exams patronizing. Anti-Taylor was “appalled” when he was only asked to locate his native Ghana on a map, name the colonial power that formally dominated it, and solve “2 simple algebra problems.”23 William Appleton, an engineering student from Liberia, recalled with dismay: “During my 2 days’ wait I have been screwing myself up for a stiff exam, especially since I had no [secondary school] certificate. And then one man asks me a few elementary questions any child could answer!”24
Communist indoctrination was another widespread source of complaint, especially among students hostile to Marxist ideology. Courses in Marxist ideology, political economy, or dialectical materialism were not required; still, students expected Soviet higher education to be devoid of all Marxist ideology, and much to the consternation of unsympathetic students, it inevitably bled into many courses. William Appleton, too, complained that his compulsory history course “was nothing less than the indoctrination in Marxist ideology. So in order to get your training as a doctor, an engineer or a scientist …you have to submit to indoctrination in their political attitudes.”25 Indeed, a Komsomol report on foreign students noted, “students from capitalist countries” were open to classes on the domestic and foreign policy of the Soviet state but “refuse to take courses on the history of the KPSS, philosophy and political economy.”26
…The Manchus, before the founding of the Qing, also rarely encountered smallpox, but they knew of its danger. Mongols and Manchus who had not been exposed to the disease were exempted from coming to Beijing to receive titles of succession. The main response of the Mongols and Manchus to those who did fall ill was quarantine. Li Xinheng commented that if anyone in a tribe caught smallpox, his relatives abandoned him in a cave or distant grassland. 70 to 80% of those infected died. The German traveler Peter Simon Pallas, who visited the Mongols three times from 1768 to 1771, commented that smallpox was the only disease they greatly feared. It occurred very seldom, but spread rapidly when it struck: “If someone catches it, they abandon him in his tent; they only approach from the windward side to provide food. Children who catch it are sold to the Russians very cheaply.” The Mongols whom Pallas visited lived far from the Chinese border, but they knew well that smallpox was highly contagious and almost always fatal.
The Chinese discovery of variolation—a method of inoculation—was of great aid in reducing the severity of attacks. The Kangxi emperor himself was selected as heir in part because he had survived the disease in childhood; his father had died of it. In 1687 he inaugurated regular inoculation of the royal family, and his successor extended mandatory inoculation to all Manchu children. The Manchus adopted this Chinese medical practice in order to protect themselves against the virulent strains that were absent from the steppe. Only Manchus who had survived the disease were allowed to be sent to the Mongolian steppe. Mongols close to the Manchu and Chinese border gradually grew immune, but those farther away suffered great losses in the 19th century when Chinese penetration increased.1
…For several millennia historians have tried to explain the generally superior strength and endurance of steppe warriors, often focusing on the demands of life in the saddle or the nomads’ protein-rich diets as the explanation for their vitality. A more powerful explanation may be the absence of the debilitating and deadly diseases of settled life among the peoples of the steppe.
A compilation of reviews of books I have read since ~1997.
This is a compilation of my book reviews. Reviews are sorted by star rating, and by length of review within each star level, under the assumption that longer reviews are of more interest to readers.
E-book edition of the 2002 Carter Scholz novel of post-Cold War science/technology, extensively annotated with references and related texts.
Radiance: A Novel is SF author Carter Scholz’s second literary novel. It is a roman à clef of the 1990s set at the Lawrence Livermore National Laboratory, centering on two nuclear physicists entangled in corruption, mid-life crises, institutional incentives, technological inevitability, the end of the Cold War & start of the Dotcom Bubble, nuclear bombs & Star Wars missile defense program, existential risks, accelerationism, and the great scientific project of mankind. (For relevant historical background, see the excerpts in the appendices.)
I provide a HTML transcript prepared from the novel, with extensive annotations of all references and allusions, along with extracts from related works, and a comparison with the novella version.
Note: to hide apparatus like the links, you can use reader-mode.
Discussion of Cordwainer Smith SF story, arguing that the pain-of-space is based on forgotten psychological issues in air travel, and concerns about worse ones in space travel, which were partially vindicated by the existence of interesting psychological changes in astronauts.
Cordwainer Smith’s classic SF short story “Scanners Live in Vain” is remembered in part for its use of the space-madness trope, “the Great Pain of Space”, usually interpreted symbolically/psychologically by critics. I discuss the state of aerospace medicine in 1945 and subsequent research on “the breakaway effect”, “the overview effect”, and other unusual psychological states induced by air & space travel, and suggest that Smith’s “pain of space” is more founded on SF-style speculation & extrapolation of contemporary science/technology and anxieties than is appreciated, owing to the obscurity of those effects and the relative benignity of the best-documented later ones.
There were at least 4 waves of bow and arrow use in northern North America. These occurred at 12,000, 4,500, 2,400, and after about 1,300 years ago.
But to understand the role of the bow and arrow in the north, one must begin in the 18th century, when the Russians first arrived in the Aleutian Islands. At that time, the Aleut were using both the atlatl and dart and the bow and arrow (Figure 1). This is important for 2 reasons: first, there are few historic cases in which both technologies were used concurrently; second, the bow and arrow in the Aleutian Islands were used almost exclusively in warfare.
The atlatl was a critical technology because the bow and arrow are useless for hunting sea mammals. One cannot launch an arrow from a kayak because it is too unstable and requires that both hands remain on a paddle. To use an atlatl, it is necessary only to stabilize the kayak with a paddle on one side and launch the atlatl dart with the opposite hand. The Aleut on the Alaska Peninsula did indeed use the bow and arrow to hunt caribou there. However, in the 1,400 km of the Aleutian Islands, there are no terrestrial mammals except humans and the bow was reserved almost exclusively for conflicts among them.
The most important event in the history of the bow and arrow is not its early introduction, but rather the Asian War Complex 1300 years ago, when the recurve and backed bows first entered the region, altering regional and hemispheric political dynamics forever.
The precise quantitative nature of the Environment of Evolutionary Adaptedness (EEA) is difficult to reconstruct. The EEA represents a multitude of different geographic and temporal environments, of which a large number often need to be surveyed in order to draw sound conclusions.
We examine a large number of both hunter-gatherer (n = 20) and historical (n = 43) infant and child mortality rates to generate a reliable quantitative estimate of their levels in the EEA. Using data drawn from a wide range of geographic locations, cultures, and times, we estimate that ~27% of infants failed to survive their first year of life, while ~47.5% of children failed to survive to puberty in the EEA. These rates represent a serious selective pressure faced by humanity that may be underappreciated by many evolutionary psychologists. Additionally, a cross-species comparison found that human child mortality rates are roughly equivalent to those of Old World monkeys, higher than orangutan or bonobo rates, and potentially higher than those of chimpanzees and gorillas.
These findings are briefly discussed in relation to life history theory and evolved adaptations designed to lower high childhood mortality.
[Keywords: environment of evolutionary adaptedness, human evolution, infant mortality, child mortality]
After the conquests of Alexander the Great and during the reigns of his numerous successors, the tradition of combat sports games became institutionalized by the elite of a Hellenized warlike aristocracy in Asia. The heroic cult of the Greeks was perpetuated as far as Central Asia, enhancing local traditions through the building of a gymnasium in every new city of the colonies. The various technical aspects of ancient Greek combat sports were transmitted as well, in order to improve effectiveness in close-combat fighting.
To trace these technical features, detailed descriptions of wrestling, boxing, and pankration as developed in ancient Greece are compared with their East-Asian counterparts.
…Eurydamas from Cyrene is said to have lost his teeth during his fight and swallowed them so as not to give satisfaction to his adversary, according to the Roman author Aelian.52 The boxers used head protection and leather bands, called imantes or sphaira, around their fists in the place of gloves.53 In Roman times, boxers also wore iron rings called caestus54 on their fists, for the amusement of the Roman spectators during gladiatorial contests. Philostratus, a Greek living in the Roman Empire in the third century A.D., describes clearly how the bands of leather were tightened around the boxers’ fists and why pigskin was prohibited in boxing competitions.55 Unlike modern boxing, pygmachia also used various open-hand strikes, as indicated by various sources. In Homer’s verses, Apollo came down to earth to kill Patroclus with an “open-palm strike to his back”56 and Damoxenos pierced the internal organs of Kreugas with a finger strike (plate 2).57 Vase paintings also depicted ancient boxing practices, as in the case of the pseudo-Panathenaic amphora from Exarchos in Locrid by the painter Eucharides (~500 B.C.), which shows a palm strike and a forearm block (plate 3).
The painting of Eucharides also shows the unusual “distended” abdomen of the athletes, as if filled with air, a characteristic that is seen today in China among the adepts of traditional combat sports. The use of the principles of pneuma together with other concepts from Greek medicine led to training in various breathing techniques that were later lost in the West because of the mind/body split introduced by the Catholic Church. Indeed there is no trace of this practice in the Western world today. The explanation of Pausanias concerning the fight of Damoxenos, that “with the sharpness of his nails and the force of blow he drove his hand into his adversary, caught his bowels, and tore them out”,58 is incomplete in my opinion.
Pausanias, a second-century A.D. traveler and geographer, must have had a superficial understanding of what he heard, since he had no practical knowledge of ancient pygmachia training. To pierce the human body with one’s bare hands requires strengthening of the fingers together with explosive power developed through breathing exercises, allowing one to apply the muscular strength of one’s whole body instantaneously when striking (plate 4). Standing without changing position, and breathing techniques such as those used by Melankomas or those described by Oreibasius,59 were an integral part of a boxer’s training to fill his body with pneuma. Today in China, the best traditional boxers60 are those who apply the notion of an inner vital breath or energy. Oreibasius called this type of exercise “side therapy” or apotherapia, techniques which developed the athlete’s strength through inner breathing exercises or massage to activate the pneuma within their bodies. He advised combat-sports athletes to breathe from the lower abdomen, and to push the pneuma down using other types of breathing exercises, and also to speak with a deep voice, in order to open and fill the “empty spaces of the body.”
…Pythagoras himself is said to have been crowned in boxing, according to Eusebius of Caesarea (A.D. 265–339). During the 48th Olympiad (588 B.C.), Glycon of Croton won the stadion race. Pythagoras of Samos was excluded from boxing in the junior category because of his effeminate appearance, but he was still able to participate in the adult contest and beat all his adversaries.70 Diogenes Laertius also writes that, having been expelled from the junior category, Pythagoras went on to participate in the adult contest and beat all his adversaries.71
Some of the boxers had such excellent technique that they were never hit by their opponents. They were called “the untouchables” (atravmatisti), and included famous boxers such as Kleoxenos of Alexandria (240 B.C.; one-hundred thirty-fifth Olympiad), Melankomas of Caria,72 and Hippomachos. Hippomachos, son of Moschion, sustained no blows or injuries from his 3 successive opponents in the games.73 Julius Africanus (A.D. ~200) wrote that Kleoxenos had never been injured in any of his fights, and that he won all the Panhellenic games without being hurt. Melankomas was particularly well versed in standing positions, which are practiced today in China,74 but have been lost to the Western world.75 He could remain standing for 2 days with his 2 hands raised,76 a practice far removed from modern boxing. Being so skilful at his art, he was never beaten by his opponents and neither did he hurt them. He just let them exhaust themselves. Dio Chrysostom (A.D. 30–117) wrote that he had perfect control over his mind and body:
The most fantastic thing is that he was not only undefeated by his adversaries, but also by hard training in the heat, avoiding hunger, and sexual desires. The men who wish to be superior to their adversaries should not be defeated by these things. If Melankomas did not have control of himself (enkrateo),77 I doubt that he would be superior in strength, even if he was naturally strong.78
Whether China and the United States are destined to compete for domination in international politics is one of the major questions facing DoD. In a competition with the People’s Republic of China, the United States must explore all of its advantages and all of the weaknesses of China that may provide an asymmetry for the United States. This study examines one such asymmetry, the strategic consequences of Chinese racism. Having examined the literature on China extensively, this author is not aware of a single study that addresses this important topic. This study explores the causes of Chinese racism, the strategic consequences of Chinese racism, and how the United States may use this situation to advance its interests in international politics.
the study finds that xenophobia, racism, and ethnocentrism are caused by human evolution. These behaviors are not unique to the Chinese. However, they are made worse by Chinese history and culture.
considers the Chinese conception of race in Chinese history and culture. It finds that Chinese religious-cultural and historical conceptions of race reinforce Chinese racism. In Chinese history and contemporary culture, the Chinese are seen to be unique and superior to the rest of the world. Other peoples and groups are seen to be inferior, with a sliding scale of inferiority. The major Chinese distinction is between degrees of barbarians: the “black devils”, or savage inferiors, beyond any hope of interaction, and the “white devils”, or tame barbarians with whom the Chinese can interact. These beliefs are widespread in Chinese society, and have been throughout its history…
evaluates the 9 strategic consequences of Chinese racism.
virulent racism and eugenics heavily inform Chinese perceptions of the world…
racism informs their view of the United States…
racism informs their view of international politics in three ways.
states are stable, and thus good for the Chinese, to the degree that they are unicultural.
Chinese ethnocentrism and racism drive their outlook to the rest of the world. Their expectation is of a tribute system where barbarians know that the Chinese are superior.
there is a strong, implicit, racialist view of international politics that is alien and anathema to Western policy-makers and analysts. The Chinese are comfortable using race to explain events and appealing to racist stereotypes to advance their interests. Most insidious is the Chinese belief that Africans in particular need Chinese leadership.
the Chinese will make appeals to Third World states based on “racial solidarity”,…
Chinese racism retards their relations with the Third World…
Chinese racism, and the degree to which the Chinese permit their view of the United States to be informed by racism, has the potential to hinder China in its competition with the United States because it contributes to their overconfidence…
as lamentable as it is, Chinese racism helps to make the Chinese a formidable adversary…
the Chinese are never going to go through a civil rights movement like the United States…
China’s treatment of Christians and ethnic minorities is poor…
considers the 5 major implications for United States decision-makers and asymmetries that may result from Chinese racism.
Chinese racism provides empirical evidence of how the Chinese will treat other international actors if China becomes dominant…
it allows the United States to undermine China in the Third World…
it permits a positive image of the United States to be advanced in contrast to China…
calling attention to Chinese racism allows political and ideological alliances of the United States to be strengthened…
United States defense decision-makers must recognize that racism is a cohesive force for the Chinese…
…The study’s fundamental conclusion is that endemic Chinese racism offers the United States a major asymmetry it may exploit with major countries, regions like Africa, as well as with important opinion makers in international politics. The United States is on the right side of the struggle against racism and China is not. The United States should call attention to this to aid its position in international politics.
Problems with social experiments and evaluating them, loopholes, causes, and suggestions; non-experimental methods systematically deliver false results, as most interventions fail or have small effects.
“The Iron Law Of Evaluation And Other Metallic Rules” is a classic review paper by American “sociologist Peter Rossi, a dedicated progressive and the nation’s leading expert on social program evaluation from the 1960s through the 1980s”; it discusses the difficulties of creating a useful social program, and proposes some aphoristic summary rules, including most famously:
The Iron Law: “The expected value of any net impact assessment of any large scale social program is zero”
The Stainless Steel Law: “the better designed the impact assessment of a social program, the more likely is the resulting estimate of net impact to be zero.”
Contemporary race and immigration scholars often rely on historical analogies to help them analyze America’s current and future color lines. If European immigrants became white, they claim, perhaps today’s immigrants can as well. But too often these scholars ignore ongoing debates in the historical literature about America’s past racial boundaries. Meanwhile, the historical literature is itself needlessly muddled.
In order to address these problems, the authors borrow concepts from the social science literature on boundaries to systematically compare the experiences of blacks, Mexicans, and southern and eastern Europeans (SEEs) in the first half of the 20th century. Their findings challenge whiteness historiography; caution against making broad claims about the reinvention, blurring, or shifting of America’s color lines; and suggest that the Mexican story might have more to teach us about these current and future lines than the SEE one.
Technological developments can be foreseen but the knowledge is largely useless because startups are inherently risky and require optimal timing. A more practical approach is to embrace uncertainty, taking a reinforcement learning perspective.
How do you time your startup? Technological forecasts are often surprisingly prescient in terms of predicting that something was possible & desirable and what they predict eventually happens; but they are far less successful at predicting the timing, and almost always fail, with the success (and riches) going to another.
Why is their knowledge so useless? Why are success and failure so intertwined in the tech industry? The right moment cannot be known exactly in advance, so attempts to forecast will typically be off by years or worse. For many claims, there is no way to invest in an idea except by going all in and launching a company, resulting in extreme variance in outcomes, even when the idea is good and the forecasts correct about the (eventual) outcome.
Progress can happen and can be foreseen long before, but the details and exact timing due to bottlenecks are too difficult to get right. Launching too early means failure, but being conservative & launching later is just as bad because regardless of forecasting, a good idea will draw overly-optimistic researchers or entrepreneurs to it like moths to a flame: all get immolated but the one with the dumb luck to kiss the flame at the perfect instant, who then wins everything, at which point everyone can see that the optimal time is past. All major success stories overshadow their long list of predecessors who did the same thing, but got unlucky. The lesson of history is that for every lesson, there is an equal and opposite lesson. So, ideas can be divided into the overly-optimistic & likely doomed, or the fait accompli. On an individual level, ideas are worthless because so many others have them too—‘multiple invention’ is the rule, and not the exception. Progress, then, depends on the ‘unreasonable man’.
This overall problem falls under the reinforcement learning paradigm, and successful approaches are analogous to Thompson sampling/posterior sampling: even an informed strategy can’t reliably beat random exploration which gradually shifts towards successful areas while continuing to take occasional long shots. Since people tend to systematically over-exploit, how is this implemented? Apparently by individuals acting suboptimally at the personal level, but optimally at the societal level, by serving as random exploration.
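The Thompson-sampling analogy can be made concrete with a minimal sketch (the arms, success rates, and round count below are all hypothetical, chosen only for illustration): each competing idea is a bandit arm with an unknown success rate; a Beta posterior is kept per arm, one plausible rate is sampled from each posterior every round, and the currently most promising idea is tried. Early play is close to uniform random exploration, then concentrates on the best idea as evidence accumulates, while long shots still get occasional tries.

```python
import random

def thompson_bandit(true_rates, rounds=10000, seed=0):
    """Beta-Bernoulli Thompson sampling over a set of 'ideas' (arms)."""
    rng = random.Random(seed)
    # Beta(wins, losses) posterior per arm, starting from a uniform Beta(1,1) prior.
    wins = [1] * len(true_rates)
    losses = [1] * len(true_rates)
    pulls = [0] * len(true_rates)
    for _ in range(rounds):
        # Draw one sample from each arm's posterior and try the best-looking arm.
        samples = [rng.betavariate(w, l) for w, l in zip(wins, losses)]
        arm = samples.index(max(samples))
        pulls[arm] += 1
        # Observe a success/failure and update that arm's posterior.
        if rng.random() < true_rates[arm]:
            wins[arm] += 1
        else:
            losses[arm] += 1
    return pulls

# Three hypothetical ideas with true success rates 2%, 5%, and 10%:
pulls = thompson_bandit([0.02, 0.05, 0.10])
```

Over enough rounds, the 10% arm absorbs most of the pulls, yet the inferior arms keep receiving a trickle of attempts: the individually wasteful long shots are exactly what guarantees the best idea is eventually found.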
A major benefit of R&D, then, is in ideas lying fallow until the ‘ripe time’ when they can be immediately exploited in previously-unpredictable ways; applied R&D or VC strategies should focus on maintaining diversity of investments, while continuing to flexibly revisit previous failures which forecasts indicate may have reached ‘ripe time’. This balances overall exploitation & exploration to progress as fast as possible, showing the usefulness of technological forecasting on a global level despite its uselessness to individuals.
One man’s modus ponens is another man’s modus tollens is a saying in Western philosophy encapsulating a common response to a logical proof which generalizes the reductio ad absurdum and consists of rejecting a premise based on an implied conclusion. I explain it in more detail, provide examples, and a Bayesian gloss.
A logically-valid argument which takes the form of a modus ponens may be interpreted in several ways; a major one is to interpret it as a kind of reductio ad absurdum, where by ‘proving’ a conclusion believed to be false, one might instead take it as a modus tollens which proves that one of the premises is false. This “Moorean shift” is aphorized as the snowclone, “One man’s modus ponens is another man’s modus tollens”.
The Moorean shift is a powerful counter-argument which has been deployed against many skeptical & metaphysical claims in philosophy, where often the conclusion is extremely unlikely and little evidence can be provided for the premises used in the proofs; and it is relevant to many other debates, particularly methodological ones.
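The Bayesian gloss can be given a one-line sketch. For a deductively valid argument, every world in which the conjunction of premises $P$ holds is a world in which the conclusion $C$ holds, so by monotonicity of probability:

$$P \vdash C \quad\Longrightarrow\quad \Pr(P) \le \Pr(C).$$

Hence a valid ‘proof’ does not force assent: anyone who assigns $\Pr(C) = 0.01$ is thereby committed to $\Pr(P) \le 0.01$, and may read the argument as transmitting improbability backwards from conclusion to premises rather than probability forwards, which is the Moorean shift in probabilistic dress.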
This article demonstrates historically and statistically that conversionary Protestants (CPs) heavily influenced the rise and spread of stable democracy around the world. It argues that CPs were a crucial catalyst initiating the development and spread of religious liberty, mass education, mass printing, newspapers, voluntary organizations, and colonial reforms, thereby creating the conditions that made stable democracy more likely. Statistically, the historic prevalence of Protestant missionaries explains about half the variation in democracy in Africa, Asia, Latin America and Oceania and removes the impact of most variables that dominate current statistical research about democracy. The association between Protestant missions and democracy is consistent in different continents and subsamples, and it is robust to more than 50 controls and to instrumental variable analyses.
AI folklore tells a story about a neural network trained to detect tanks which instead learned to detect the time of day; on investigation, this probably never happened.
A cautionary tale in artificial intelligence tells of researchers training a neural network (NN) to detect tanks in photographs, succeeding, only to realize that the photographs had been collected under specific conditions for tanks/non-tanks and the NN had learned something useless like the time of day. This story is often told to warn about the limits of algorithms and the importance of data collection to avoid “dataset bias”/“data leakage”, where the collected data can be solved using algorithms that do not generalize to the true data distribution; but the tank story is usually never sourced.
I collate many extant versions dating back a quarter of a century to 1992, along with two NN-related anecdotes from the 1960s; their contradictions & details indicate a classic “urban legend”, with a probable origin in a speculative question asked in the 1960s by Edward Fredkin at an AI conference about some early NN research, which was then classified & never followed up on.
I suggest that dataset bias is real but exaggerated by the tank story, giving a misleading indication of risks from deep learning and that it would be better to not repeat it but use real examples of dataset bias and focus on larger-scale risks like AI systems optimizing for wrong utility functions.
The collapse of empires is exceedingly difficult to understand.
The author examined the distribution of imperial lifetimes using a data set that spans more than 3 millennia and found that it conforms to a memoryless exponential distribution in which the rate of collapse of an empire is independent of its age.
Comparing this distribution to similar lifetime distributions of other complex systems—specifically, biological species and corporate firms—the author explores the reasons behind their lifetime distributions and how this approach can yield insights into empires.
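The memorylessness finding can be illustrated with a short simulation (the 200-year mean lifetime below is an arbitrary assumption for illustration, not the paper’s estimate): for exponentially distributed lifetimes, the probability of surviving another century is the same for a newly founded empire as for one that has already endured 300 years.

```python
import math
import random

def survival_given_age(lifetimes, age, horizon):
    """Estimate P(T > age + horizon | T > age) from sampled lifetimes."""
    survivors = [t for t in lifetimes if t > age]
    return sum(t > age + horizon for t in survivors) / len(survivors)

rng = random.Random(42)
rate = 1 / 200  # assumed mean imperial lifetime of 200 years
lifetimes = [rng.expovariate(rate) for _ in range(200_000)]

# Memorylessness: a new empire and a 300-year-old empire face the same
# chance of lasting another 100 years, both close to exp(-100/200) ~ 0.607.
p_new = survival_given_age(lifetimes, 0, 100)
p_old = survival_given_age(lifetimes, 300, 100)
```

The two estimates agree to within sampling error, which is the operational meaning of “the rate of collapse of an empire is independent of its age”; by contrast, any aging process (e.g. a Weibull lifetime with shape > 1) would make `p_old` visibly smaller than `p_new`.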
This is the second half of a 2-part article by Sonia Melnikova-Raich on the relationship forged in the late 1920s and early 1930s between American industrialists and the Soviet government, which sought the help of Americans to move the Soviet Union from a peasant society to an industrial one.
The first part, published in the previous issue of Industrial Archaeology (volume 36, no. 2), described the state of the Soviet tractor and tank industries at the onset of the First Five-Year Plan in 1928 and provided a detailed account of the work in Soviet Russia of the firm of Albert Kahn, which designed some of the most important Soviet industrial giants, built to manufacture domestic tractors and converted to tank production by the beginning of WWII.
Soviet industrialization was a complex economic and political undertaking about which much remains unclear. Rather than examine the process as a whole, this essay focuses on 2 fairly unknown players in the history of Soviet-American relations—one American firm and one Soviet negotiator—and their contribution to the amazingly rapid Soviet industrialization of the early 1930s, emphasizing some human and business factors behind Stalin’s Five-Year Plan.
Saul G. Bron, during his tenure as chairman of Amtorg Trading Corporation in 1927–1930, contracted with leading American companies to help build Soviet industrial infrastructure and commissioned the firm of the foremost American industrial architect from Detroit, Albert Kahn, as consulting architects to the Soviet Government.
The work of both played a major role in laying the foundation of the Soviet automotive, tractor, and tank industry and led to the development of Soviet defense capabilities, which in turn played an important role in the Allies’ defeat of Nazi Germany in World War II.
Drawing on Russian and English-language sources, this essay is based on comprehensive research including previously unknown archival documents, contemporaneous and current materials, and private archives.
This essay suggests that the Renaissance revolution in historical thought was encouraged by contemporary debates over the Aristotelian-Averroistic doctrine of the eternity of the world. In the early Renaissance eternalism came to be understood as a proposition with controversial consequences not only for the creation of matter ex nihilo but also for the record of historical time. Modern scholarship, following Momigliano, believes that understandings of time had little effect on the practice of ancient historians. But that was not the view of Orosius, the most widely read historian during the Middle Ages, who condemned the pagan historians for their eternalism. Nor was it the view of the Italian humanists who, after reading the Greek historians, abandoned the providentialism of Orosius and revived ancient ways of writing history.
On 21 October 1967, Allen Ginsberg, Abbie Hoffman, and Ed Sanders of the band The Fugs, and others, organized an “exorcism” of the Pentagon in which several thousand demonstrators participated. Most historians have regarded this event as “a put on” or at best as “performance art.” This article takes seriously the nominal status of the ritual as a “sacred” or “magical” event. It argues that the organizers were utilizing innovative strategies of social action to alter the terms of debate regarding the Vietnam War. Inasmuch as these strategies drew on “secret” insights into the nature of social reality, they were seen as “magical” and in continuity with pre-modern esoteric traditions. Finally, it is argued that the New Left turned to such tactics out of a deep frustration with traditional forms of democratic political engagement. [The organizers asked the GSA for a permit to lift the Pentagon 300 feet in the air; the GSA held the line and authorized only 3 feet.]
[Scott’s Antarctic expedition in 1911 was plagued by the disease scurvy, despite its having been “conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease.” How it all went wrong would make a case study for a philosophy of science class.
The British Admiralty switched their scurvy cure from lemon juice to lime juice in 1860. The new cure was much less effective, but by that time advances in technology meant that most sea voyages were so short that there was little or no danger of scurvy anyway. So poor Scott’s expedition, as well as applying ‘state-of-the-art’ (i.e. wrong) cures, was falling back on a ‘tried-and-true’ remedy that had in fact been largely ineffective for 50 years… without anyone noticing.]
An unfortunate series of accidents conspired with advances in technology to discredit the cure for scurvy. What had been a simple dietary deficiency became a subtle and unpredictable disease that could strike without warning. Over the course of fifty years, scurvy would return to torment not just Polar explorers, but thousands of infants born into wealthy European and American homes. And it would only be through blind luck that the actual cause of scurvy would be rediscovered, and vitamin C finally isolated, in 1932.
…So when the Admiralty began to replace lemon juice with an ineffective substitute in 1860, it took a long time for anyone to notice. In that year, naval authorities switched procurement from Mediterranean lemons to West Indian limes. The motives for this were mainly colonial—it was better to buy from British plantations than to continue importing lemons from Europe. Confusion in naming didn’t help matters. Both “lemon” and “lime” were in use as a collective term for citrus, and though European lemons and sour limes are quite different fruits, their Latin names (citrus medica, var. limonica and citrus medica, var. acida) suggested that they were as closely related as green and red apples. Moreover, as there was a widespread belief that the antiscorbutic properties of lemons were due to their acidity, it made sense that the more acidic Caribbean limes would be even better at fighting the disease.
In this, the Navy was deceived. Tests on animals would later show that fresh lime juice has a quarter of the scurvy-fighting power of fresh lemon juice. And the lime juice being served to sailors was not fresh, but had spent long periods of time in settling tanks open to the air, and had been pumped through copper tubing. A 1918 animal experiment using representative samples of lime juice from the navy and merchant marine showed that the ‘preventative’ often lacked any antiscorbutic power at all.
By the 1870s, therefore, most British ships were sailing without protection against scurvy. Only speed and improved nutrition on land were preventing sailors from getting sick.
…In the course of writing this essay, I was tempted many times to pick a villain. Maybe the perfectly named Almroth Wright, who threw his considerable medical reputation behind the ptomaine theory and so delayed the proper re-understanding of scurvy for many years. Or the nameless Admiralty flunky who helped his career by championing the switch to West Indian limes. Or even poor Scott himself, sermonizing about the virtues of scientific progress while never conducting a proper experiment, taking dreadful risks, and showing a most unscientific reliance on pure grit to get his men out of any difficulty.
But the villain here is just good old human ignorance, that master of disguise. We tend to think that knowledge, once acquired, is something permanent. Instead, even holding on to it requires constant, careful effort.
[cf. Glitz & Meyersson 2017] This is the first half of a 2-part article on the relationship forged between American industrialists, especially Albert Kahn, the renowned factory architect, and the Soviet government, which in the late 1920s and early 1930s sought the help of Americans to move the Soviet Union from a peasant society to an industrial one.
This first part focuses on that phase of Soviet-American interaction from the perspective of Kahn’s architectural firm.
The second part, which will be published in the next issue of Industrial Archaeology (volume 37, nos. 1–2), will focus on the Soviet-American commercial relationship from the perspective of Saul G. Bron, who headed the American Trading Corporation (Amtorg), the Soviet-controlled agency responsible for contracting with the American private sector.
Soviet industrialization was a complex economic and political undertaking about which much remains unclear.
Rather than examine the process as a whole, this essay focuses on 2 little-known players in the history of Soviet-American relations—one American firm and one Soviet negotiator—and their contribution to the amazingly rapid Soviet industrialization of the early 1930s, emphasizing some human and business factors behind Stalin’s Five-Year Plan.
Saul G. Bron, during his tenure as chairman of Amtorg Trading Corporation in 1927–1930, contracted with leading American companies to help build Soviet industrial infrastructure and commissioned the firm of the foremost American industrial architect from Detroit, Albert Kahn, as consulting architects to the Soviet Government. The work of both played a major role in laying the foundation of the Soviet automotive, tractor, and tank industry and led to the development of Soviet defense capabilities, which in turn played an important role in the Allies’ defeat of Nazi Germany in World War II.
Drawing on Russian and English-language sources, this essay is based on comprehensive research including previously unknown archival documents, contemporaneous and current materials, and private archives.
Smallpox produced the death of up to 30% of those infected, so Jenner’s preventive method spread quickly. The Spanish government designed and supported a ten-year effort to carry smallpox vaccine to its American and Asian territories in a chain of arm-to-arm vaccination of children. An expedition directed by Doctor Francisco Xavier de Balmis sailed from Corunna in November 1803, stopping in the Canary Islands, Puerto Rico, and Venezuela. Balmis led a subexpedition to Cuba, Mexico, and the Philippines; his assistants returned to Mexico in 1807, while Balmis took vaccine to China and returned to Spain (and again to Mexico, 1810–1813). Vice-director José Salvany and his staff took vaccine to present-day Colombia, Ecuador, Peru, Bolivia, and Chilean Patagonia. The Spanish Royal Philanthropic Vaccine Expedition shows the first attempts to solve questions still important for the introduction of new immunizations—professionalization in public health, technology transfer, protection of research subjects, and evaluation of vaccine efficacy, safety, and cost.
This article discusses several universal features of fortifications and distinguishes those features that are unequivocally military in function. The evidence adduced includes the features of known historic fortifications, relevant prescriptions by ancient military authors, and geometry. The archaeologically visible features that are universally used in military defenses are V-sectioned ditches, “defended” (especially baffled) gates, and bastions. It is also noted that ritual, ceremonial, or any other peaceful activities conducted within an enclosure having these architectural features do not preclude its obvious military function.
[Keywords: ancient fortifications, warfare, prehistoric enclosures, pre-gunpowder weapons, symbolism, noble savage myth, prehistoric war, Crow Creek massacre]
The paper uses a range of sources—parish registers, family histories, bills of mortality, local censuses, marriage licences, apprenticeship indentures, and wills—to document the history of mortality in London in the period 1538–1850. The main conclusions of the research are as follows:
Infant and child mortality more than doubled between the 16th century and the middle of the 18th century in both wealthy and non-wealthy families.
Mortality peaked in the middle of the 18th century at a very high level, with nearly 2⁄3 of all children—rich and poor—dying by their fifth birthday.
Mortality under the age of 2 fell sharply after the middle of the 18th century, and older child mortality decreased mainly during the late 18th and early 19th century. By the second quarter of the 19th century about 30% of all children had died within the first 5 years. This latter fall in mortality appears to have occurred equally amongst both the wealthy and the non-wealthy population.
There was little or no change in paternal mortality from 1600 to 1750, after which date there was a steady reduction until the middle of the 19th century. The scale of the fall in adult mortality was probably less than the reduction in infant and child mortality. The latter more than halved between the middle of the 18th and 19th centuries, whereas paternal mortality fell by about a third in the same period.
There appears to have been a minimal social class gradient in infant, child and adult mortality in London during the period 1550–1850. This is an unexpected finding, raising fundamental questions about the role of poverty and social class in shaping mortality in this period.
Although migration played a leading role in fostering the population increase in London in the 16th and early 17th centuries, relatively low infant and child mortality made a major contribution to population growth during this period.
Lampson: “It really makes you wonder when there’s going to be some substantial advance. The only substantial advance since the days of PARC that I know about is the Web. Which really is qualitatively different. It’s an interesting question why it took so long to happen, which I have a theory about. My theory is that it’s entirely a matter of scale. It couldn’t happen until the Internet got big enough, because until then it wasn’t worth the hassle of organizing your stuff so that it would be accessible to other people. But things got above a certain scale. Then you could find a big enough user community that you actually cared about enough to be willing to do that work. Because from a technical point of view it could have happened 10 years earlier, I think. It’s just that it wouldn’t have paid.”
Kay: “But I wish that you had been at CERN on a sabbatical when that…”
Lampson: “I probably would have been a disaster.”
Kay: “I don’t know. But I think you would have made a slightly better…”
Lampson: “No. No. No. No. No. No. What Tim Berners-Lee did was perfect. My view about the web is that it’s the great failure of computer systems research. Why did computer systems researchers not invent the web? And I can tell you the answer. It’s because it’s too simple.”
Kay: “It is too simple.”
Lampson: “If I had been there I would have mucked it up. I swear to God. The idea that you’re going to make a new TCP connection for every mouse click on a link? Madness! The idea that you’re going to have this crusty universal data type called HTML with all those stupid angle brackets? We never would have done that! But those were the things that allowed it to succeed.”
Kay: “Yeah, to some extent.”
Lampson: “Absolutely. Not ‘to some extent’. Absolutely. There’s some bad consequences…but that’s too bad. You’ve got to go with the flow, otherwise it would… No, it would have been a disaster. Never would have worked.”
In conclusion, fencing tempo is a vital element of swordsmanship, but clearly for the duelist hitting before being hit is not at all the same thing as hitting without being hit. Exsanguination is the principal mechanism of death caused by stabbing and incising wounds and death by this means is seldom instantaneous. Although stab wounds to the heart are generally imagined to be instantly incapacitating, numerous modern medical case histories indicate that while victims of such wounds may immediately collapse upon being wounded, rapid disability from this type of wound is by no means certain. Many present-day victims of penetrating wounds involving the lungs and the great vessels of the thorax have also demonstrated a remarkable ability to remain physically active minutes to hours after their wounds were inflicted. These cases are consistent with reports of duelists who, subsequent to having been grievously or even mortally wounded through the chest, neck, or abdomen, nevertheless remained actively engaged upon the terrain and fully able to continue long enough to dispatch those who had wounded them.
…Early American motion pictures have frequently misrepresented virtually every aspect of authentic swordplay. This seems to have been especially true of the industry’s depiction of the manner in which swordsmen fell before the blades of their opponents. While anecdotes of duels may have been biased by politics or personal vanity, modern forensic medicine provides ample evidence to support historical accounts of gravely wounded duelists continuing in combats for surprising lengths of time, sometimes killing those who had killed them.
In the first installment of this essay modern forensic evidence indicated that exsanguination is the principal mechanism of death caused by stabbing and incising wounds, but that death by this means is seldom instantaneous, with victims frequently capable of continued physical activity, even after being stabbed in the heart. Similarly, victims of sharp force injuries to the lungs are not infrequently able to carry on for protracted periods of time. Wounds which result in the introduction of blood into the upper airway, on the other hand, are likely to incapacitate and kill an adversary quite rapidly.
Duels featuring penetrating wounds to the muscles of the sword arm appear in some cases to have left duelists fully capable of manipulating their weapons. Thrusts to the thigh and leg may have been even less efficacious. Strokes with the cutting edges of swords to the limbs may result in more serious wounds to the musculature than the penetrating variety, but historical accounts of duels demonstrate that immediate incapacitation of an adversary stricken with such wounds was by no means guaranteed. Incising wounds which sever tendons, however, can be expected to immediately incapacitate the muscles from which they arise. Recent medical reports of sharp force injuries to the brain suggest that even a sword-thrust penetrating the skull ought not to have been expected always to disable an opponent instantaneously. While severe pain is usually incapacitating, the stress of combat may mask the pain of gravely serious wounds, enabling the determined duelist to remain on the ground for a considerable length of time.
The immediate consequences to a duelist of wounds inflicted by thrusts or cuts from the rapier, dueling sabre or smallsword were unpredictable. While historical anecdotes of affairs of honor and 20th century medical reports show that many stabbing victims collapsed immediately upon being wounded, others did not. While a swordsman certainly gained no advantage for having been wounded, it cannot be said that an unscathed adversary, after having delivered a fatal thrust or cut, had no further concern for his safety. Duelists receiving serious and even mortal wounds were sometimes able to continue effectively in the combat long enough to take the lives of those who had taken theirs…For the duelist, however, another form of tempo had to be considered. In the early history of affairs of honor, this “dueling tempo” spanned the period extending from the moment that a wound was inflicted until the instant that the adversary was no longer able to continue effectively. This span of time was unpredictable in length and could be expressed in terms ranging from a fraction of a second to minutes. Considering the number and severity of wounds that were sustained by combatants in the early days of the duel, it would not be surprising to find that many duelists of latter days secretly breathed a sigh of relief when interrupted by seconds rushing in to terminate affairs of honor immediately upon the delivery of a well placed cut or thrust.
For at least two hundred and fifty years, many men in the Roman province of Egypt married their full sisters and raised families with them. During the same era, Roman law firmly banned close-kin marriages and denounced them both as nefas, or sacrilegious, and against the ius gentium, the laws shared by all civilized peoples. In Egypt, however, Roman officials deliberately chose not to enforce the relevant marriage laws among the Greek metic, hybrid, and native Egyptian populations; the bureaucracy also created loopholes within new laws which tolerated the practice. This policy created a gap between the absolute theoretical ban in Roman law and the reality of common incestuous unions in Egypt. Since Roman Egypt was both an important and a dangerous province, Rome needed both to pacify its people and to weaken Egypt’s status with its neighbors. By permitting incestuous marriages among non-Romans in Egypt, the Roman governors simultaneously pleased the local population while causing Jews and North Africans to hold their neighbor in contempt.
[Pesic discusses Peterson’s theory of Galileo’s focus on scaling laws in Two New Sciences as reflecting belated publication of a theory developed to analyze the physical possibility of Hell in Dante’s Inferno. Peterson suggests Galileo was embarrassed at having refuted his own arguments and shown it impossible, and simply delayed publishing to avoid attack.
Pesic suggests an additional consideration: religious Catholic orthodoxy of the sort Galileo would later run afoul of. By refuting even just Dante’s Hell, Galileo would cast some doubt on the official Catholic & Ptolemaic cosmologies, treading close to heresy.]
Though the exact location of hell was not a matter of faith, its existence was a tenet of Catholic belief and its negation thus heretical. Thus, in 1620 Giuseppe Rosaccio confidently described hell as being within the earth, noting that an enormous space was needed in view of the ever increasing number of the damned, who had no right to expect as much room as the blessed souls in heaven.
Galileo’s realization that nature is not scale invariant motivated his subsequent discovery of scaling laws. His thinking is traced to two lectures he gave on the geography of Dante’s Inferno…Looked at this way, Galileo’s lifelong reluctance to publish seems even more inexplicable, but perhaps this pattern began with the experience of the Inferno lectures. He seems to have done his best to make people forget the lectures, and he kept the scaling theory to himself. What he made public, at least in this case, was a source of trouble, while what he kept secret was a source of confidence. The unpleasantness of being vulnerable to attack is a lesson that he might have taken to heart then, and it is a view he expresses feelingly later on, on the basis of real experience (although without admitting vulnerability!), in the opening lines of The Assayer. Galileo frequently claims to have wonderful results that he has not yet revealed, things he has not yet chosen to disclose. We know that this was true through much of his career, and apparently it was true right from the start. Finally, it is an irony that the first success of Galileo’s mathematical physics, which is close to being the first success of mathematical physics at all, was a response to a problem that was not physical, but rather the collapse of an imaginary structure in a work of literature.
[Galileo’s Two New Sciences puzzlingly spends much of its material on the question of how large a ship or a beam of wood or a column of rock can become before collapsing, correctly arguing that the naive belief of scale-invariance (that a ship can be any size as long as it maintains the same geometric proportions) is wrong and that large ships or beams are impossible as they will collapse under their own weight. Why did Galileo, who hardly ever published, spend so much time on this rather than astronomy—especially when he appears to have conducted the scaling law research almost 30 years before?
Peterson digs up neglected lectures by a young and ambitious Galileo, at the court of the Medici, on the topic of Dante’s Inferno where he weighed in on a contemporary dispute between a fellow Florentine & a rival Italian about the size & geography of Hell (then still considered a real place located within the Earth). Galileo, assuming scale-invariance, defended & mathematically improved his fellow’s approach.
The scaling research, then, grew out of his doubts about his naive extrapolations, and he eventually refuted himself. In Renaissance Italy, where science was a patronage/prestige-based endeavour heavily driven by entertainment value, Galileo was incentivized to keep this research secret lest he embarrass himself, holding it in reserve as a weapon in the controversy. However, the dispute appears to have died out and he never had to reveal it; decades later, he included it in Two New Sciences while sanitizing it of its embarrassing origins.]
I recollect the organization of the Landau school and describe the early history of the ITEP Theory Department, as well as the creation of the famous Landau, Abrikosov, and Khalatnikov papers, and of Landau’s papers on P-parity violation and CP-conservation. The recollections carry an imprint of an epoch long gone…
The Soviet communist regime had devastating consequences for the state of Russian 20th-century science. The country’s Communist leaders promoted Trofim Lysenko—an agronomist and keen supporter of the inheritance of acquired characters—and the Soviet government imposed a complete ban on the practice and teaching of genetics, which it condemned as a “bourgeois perversion”. Russian science, which had previously flourished, rapidly declined, and many valuable scientific discoveries made by leading Russian geneticists were forgotten.
…Totalitarian political pressure: The Soviet communist regime eliminated many of its best scientists, crushed societal morals and brought irreparable harm to the country (for a discussion see REFS 8,10). During 1919–1922, Lenin exiled thousands of philosophers, sociologists, historians and economists whose ideas contradicted his views. Stalin and the Communist Party Politburo took the next step: they decided that certain scientific fields must be forbidden as “bourgeois perversion”. It is possible to argue that science is intrinsically political, and many scientists might be seen as excellent politicians when it comes to seeking financial support for their work, but, in my opinion, this behaviour cannot be compared with the hysterical appeals to the country’s leaders to ban certain disciplines and calls for the arrests of ‘anti-Soviet’ scientists that took place in the USSR.
The intervention of the Communist leaders into science in the USSR was a particular phenomenon in the history of science in the 20th century, comparable only with the events that took place in Nazi Germany. It is qualitatively different from the sort of everyday ‘politics’ in which all scientists, everywhere, engage. The most tragic consequence of totalitarian rule was the persecution of those scientists who were unable to unconditionally agree with the Party’s decrees or tried to dispute its decisions. These personal tragedies of many outstanding scientists in the USSR led to much deeper and wider effects. The progress of science was slowed or stopped, and millions of university and high school students received a distorted education. A comparable example of the devastating influence of politicization of society was the Nazis’ destruction of science in fascist Germany after 1933. Thousands of scientists, especially those of Jewish origins, were forced to leave Germany. Nevertheless, the mass arrests of scientists in the Soviet Union had much worse consequences for science. In my opinion, it was the most tragic event in the history of science. It demonstrated the terrible effects of a political dictatorship, and showed that science should develop in free and open competition between scientists, without political intervention.
Apocalyptic envisionings of the historical process, whether philosophical, pseudo-scientific, or incarnate as chiliastic movements, have always been, and in all likelihood will continue to be, an integral dimension in the unfolding of the Euroamerican cultural chreod. This paper begins with some general observations on the genesis and character of apocalyptic movements, then proceeds to trace the psychological roots of Euroamerican apocalyptic thought as expressed in the Trinitarian-dualist formulations of Christian dogma, showing how the writings of the medieval Calabrian mystic Joachim of Fiore (c.1135–1202) created a synthesis of dynamic Trinitarianism and existential dualism within a framework of historical immanence. The resulting Joachimite ‘program’ later underwent further dissemination and distortion within the context of psychospeciation and finally led to the great totalitarian systems of the 20th century, thereby indirectly exercising an influence on the development of psychohistory itself as an independent discipline.
Islamization, along with an area’s inclusion in the 8th-century Arab-Islamic Khalifate (and its persistence within the Islamic world) is a strong and statistically-significant predictor of parallel-cousin (FBD) marriage. While there is a clear functional connection between Islam and FBD marriage, the prescription to marry a FBD does not appear to be sufficient to persuade people to actually marry thus, even if the marriage brings with it economic advantages. A systematic acceptance of parallel-cousin marriage took place when Islamization occurred together with Arabization.
Our perspectives on ancient history can sometimes be significantly affected by contributions from scholars of other disciplines. An obvious example from the military field is Edward Luttwak’s 1976 book on The Grand Strategy of the Roman Empire. Luttwak is a respected and insightful commentator on modern strategic issues, and his distinctive contribution was to analyse Roman military affairs in terms of modern concepts such as ‘armed suasion’ and the distinction between ‘power’ and ‘force’. His book has prompted considerable debate among specialist ancient historians, and although much of this has been critical of his ideas (largely due to the alleged anachronism of applying them in the Roman context), there is no doubt that the injection of this new dimension has helped to influence subsequent thinking on Roman imperial defence.
For decades, the ABA has administered the system as, in economic effect, a cartel of law school faculty members. The ABA has exerted monopoly power not only over the market for legal training, but also over 3 related markets: the market for the hiring of law faculty, the market for legal services, and each university’s internal market for funding.
Despite the selfless service of many in the system, the system has created large harms, but few benefits. Existing law faculty have gained at the expense of their students, of their universities, and of other potential faculty members. By suppressing new schools that would offer cheaper, more-efficient legal education, the system has excluded many from the legal profession, particularly the poor and minorities. The system has both raised the cost of legal services and denied legal services to whole segments of our society.
The system is illegal under the antitrust laws.
The Article enlarges the literature in 5 specific ways. It shows that many law schools are organized, in effect, as partnerships of professors. It explores the system’s impacts on 4 related markets, rather than just one. It appraises the ABA system’s main harms and possible benefits. It documents the antitrust violation extensively. And it suggests important policy choices, including abolishing the accreditation controls and markedly changing the role of the bar examination.
[Description of a visit to an unusual science museum: the LA Museum of Jurassic Technology. Unlike most science museums, only some of the exhibits are genuine. The others are fakes, many made by the museum’s curator. The visitor is challenged to discern the fabulous from the fraudulent.]
[Keywords: 20th century, California, Curiosities and wonders, David Hildebrand Wilson, Los Angeles, Museum of Jurassic Technology, Science museums, hoax, performance art, critical thinking]
The Great Toronto Stork Derby was a bizarre incident in Canadian history sparked by the death of a wealthy Toronto lawyer, Charles Vance Millar. In his will, Millar outlined the terms of a contest in which the woman in Toronto bearing the most children in the ten years following his death was to receive the bulk of his fortune. Millar died on October 31, 1926 and so began a competition that captivated the attention of the public in Canada for twelve years. In this competition poor, working class families participated in a high stakes gamble for Millar’s $500,000 (1926) estate, roughly $6,345,000 in today’s dollars.
“Bearing the Burden” attempts to dispel the popular perception of the event as humorous. It will demonstrate how the Derby became a crucible for many social and moral concerns of the day. The Derby will be used as a vehicle to explore attitudes towards reproduction, class, race and gender in Depression era Canada.
The introduction will provide an overview of the story as well as the structure of the paper. Chapter One sets the theoretical and temporal boundaries for the discussion and suggests why the Derby became the subject of a “moral panic”. Chapter Two explores the Ontario government’s failed escheat attempt in 1932. Chapter Three looks at the theme of newspaper voyeurism and the general circus-like atmosphere that developed around the event. Chapters Four and Five focus on the court hearings of 1936 through 1938. These hearings focused on the validity of the will and on what type of children could be included in the count. Much debate surrounded the possible inclusion of stillborn or illegitimate children. The conclusion shows how the Derby reflected contemporary social concerns and also that class was one of the most important factors in determining the outcome of the competition.
More than two dozen Soviet astronomers were arrested between March 1936 and July 1937. Few astronomers or historians are aware of the extent to which Soviet astronomy was devastated. This article investigates the situation in astronomy during these two years. It begins with a brief discussion of Soviet astronomy between 1917 and 1935 and continues with a detailed examination of the events that served as the catalyst for the purge, the arrests themselves, and a discussion of what is known about the fates of the victims.
In the mid-1930s the Soviet Union had about two hundred professional astronomers and sixteen astronomical observatories, most of which were associated with universities and had staffs of only two or three people. The most important and best equipped astronomical institution was the Central Astronomical Observatory of the USSR at Pulkovo, just outside Leningrad, with its branch observatories at Nikolaev and Simeis in the Ukraine. In 1935 thirty-three astronomers worked at Pulkovo.
[Gonzo-style account of hanging out with teenage hackers and phreakers in NYC, Phiber Optik and Acid Phreak, similar to Hackers]
“Sometimes”, says Kool, “it’s so simple. I used to have contests with my friends to see how few words we could use to get a password. Once I called up and said, ‘Hi, I’m from the social-engineering center and I need your password’, and they gave it to me! I swear, sometimes I think I could call up and say, ‘Hi, I’m in a diner, eating a banana split. Give me your password.’” Like its mechanical counterpart, social engineering is half business and half pleasure. It is a social game that allows the accomplished hacker to show off his knowledge of systems, his mastery of jargon, and especially his ability to manipulate people. It not only allows the hacker to get information; it also has the comic attractions of the old-fashioned prank phone call—fooling an adult, improvisation, cruelty. In the months we spent with the hackers, the best performance in a social-engineering role was by a hacker named Oddjob. With him and three other guys we pulled a hacking all-nighter in the financial district, visiting pay phones in the hallway of the World Trade Center, outside the bathrooms of the Vista Hotel, and in the lobby of the international headquarters of American Express.
…Where we see only a machine’s function, they see its potential. This is, of course, the noble and essential trait of the inventor. But hackers warp it with teenage anarchic creativity: Edison with attitude. Consider the fax machine. We look at it; we see a document-delivery device. One hacker we met, Kaos, looked at the same machine and immediately saw the Black Loop of Death. Here’s how it works: Photocopy your middle finger displaying the international sign of obscene derision. Make two more copies. Tape these three pages together. Choose a target fax machine. Wait until nighttime, when you know it will be unattended, and dial it up. Begin to feed your long document into your fax machine. When the first page begins to emerge below, tape it to the end of the last page. Ecce. This three-page loop will continuously feed your image all night long. In the morning, your victim will find an empty fax machine, surrounded by two thousand copies of your finger, flipping the bird.
…From a distance, a computer network looks like a fortress—impregnable, heavily guarded. As you get closer, though, the walls of the fortress look a little flimsy. You notice that the fortress has a thousand doors; that some are unguarded, the rest watched by unwary civilians. All the hacker has to do to get in is find an unguarded door, or borrow a key, or punch a hole in the wall. The question of whether he’s allowed in is made moot by the fact that it’s unbelievably simple to enter. Breaking into computer systems will always remain easy because the systems have to accommodate dolts like you and me. If computers were used only by brilliant programmers, no doubt they could maintain a nearly impenetrable security system. But computers aren’t built that way; they are “dumbed down” to allow those who must use them to do their jobs. So hackers will always be able to find a trusting soul to reveal a dialup, an account, and a password. And they will always get in.
Reconsideration of documentary evidence indicates that the Subarctic Algonquian windigo complex was of probable prehistoric inception, that a correlative psychiatric disorder entailing cannibalistic ideation and behavior is historically demonstrable, and that existing ecological explanations of the complex fail to elucidate its origin, persistence, characteristics, and distribution. Examination of the windigo complex from structural, pragmatic, and ideological perspectives suggests that instances of the psychiatric disorder were conditioned by Algonquian theories of dreaming and predestination.
…Hitler knew this. He perceived early on that the weakest link in his plans for blitzkrieg using his panzer divisions was fuel supply. He ordered his staff to design a fuel container that would minimize gasoline losses under combat conditions. As a result the German army had thousands of jerrycans, as they came to be called, stored and ready when hostilities began in 1939.
The jerrycan had been developed under the strictest secrecy, and its unique features were many. It was flat-sided and rectangular in shape, consisting of two halves welded together as in a typical automobile gasoline tank. It had three handles, enabling one man to carry two cans and pass one to another man in bucket-brigade fashion. Its capacity was about five U.S. gallons; its weight filled, 45 pounds. Thanks to an air chamber at the top, it would float on water if dropped overboard or from a plane. Its short spout was secured with a snap closure that could be propped open for pouring, making unnecessary any funnel or opener. A gasket made the mouth leakproof. An air-breathing tube from the spout to the air space kept the pouring smooth. And most important, the can’s inside was lined with an impervious plastic material developed for the insides of steel beer barrels. This enabled the jerrycan to be used alternately for gasoline and water.
Early in the summer of 1939, this secret weapon began a roundabout odyssey into American hands…Back in the United States, Pleiss told military officials about the container, but without a sample can he could stir no interest, even though the war was now well under way…Pleiss immediately sent one of the cans to Washington. The War Department looked at it but unwisely decided that an updated version of their World War I container would be good enough. That was a cylindrical ten-gallon can with two screw closures. It required a wrench and a funnel for pouring. That one jerrycan in the Army’s possession was later sent to Camp Holabird, in Maryland. There it was poorly redesigned; the only features retained were the size, shape, and handles. The welded circumferential joint was replaced with rolled seams around the bottom and one side. Both a wrench and a funnel were required for its use. And it now had no lining. As any petroleum engineer knows, it is unsafe to store gasoline in a container with rolled seams. This ersatz can did not win wide acceptance.
The British first encountered the jerrycan during the German invasion of Norway, in 1940, and gave it its English name (the Germans were, of course, the “Jerries”). Later that year Pleiss was in London and was asked by British officers if he knew anything about the can’s design and manufacture. He ordered the second of his three jerrycans flown to London. Steps were taken to manufacture exact duplicates of it. Two years later the United States was still oblivious of the can.
…The British historian Desmond Young later confirmed the great importance of oil cans in the early African part of the war. “No one who did not serve in the desert”, he wrote, “can realise to what extent the difference between complete and partial success rested on the simplest item of our equipment—and the worst. Whoever sent our troops into desert warfare with the [five-gallon] petrol tin has much to answer for. General Auchinleck estimates that this ‘flimsy and ill-constructed container’ led to the loss of 30% of petrol between base and consumer. … The overall loss was almost incalculable. To calculate the tanks destroyed, the number of men who were killed or went into captivity because of shortage of petrol at some crucial moment, the ships and merchant seamen lost in carrying it, would be quite impossible.”
After my colleague and I made our report, a new five-gallon container under consideration in Washington was canceled. Meanwhile the British were finally gearing up for mass production. Two million British jerrycans were sent to North Africa in early 1943, and by early 1944 they were being manufactured in the Middle East. Since the British had such a head start, the Allies agreed to let them produce all the cans needed for the invasion of Europe. Millions were ready by D-day. By V-E day some 21 million Allied jerrycans had been scattered all over Europe. President Roosevelt observed in November 1944, “Without these cans it would have been impossible for our armies to cut their way across France at a lightning pace which exceeded the German Blitz of 1940.”
Machiavelli’s most famous political work, The Prince, was a masterful act of political deception. I argue that Machiavelli’s intention was a republican one: to undo Lorenzo de Medici by giving him advice that would jeopardize his power, hasten his overthrow, and allow for the resurgence of the Florentine republic.
This interpretation returns The Prince to its specific historical context. It considers Machiavelli’s advice to Lorenzo on where to reside, how to behave, and whom to arm in light of the political reality of 16th-century Florence. Evidence external to The Prince, including Machiavelli’s other writings and his own political biography, confirms his anti-Medicean sentiments, his republican convictions, and his proclivity for deception.
Understanding The Prince as an act of political deception continues a tradition of reading Machiavelli as a radical republican. Moreover, it overcomes the difficulties of previous republican interpretations, and provides new insight into the strategic perspective and Renaissance artistry Machiavelli employed as a theoretician.
…It seems that this sowing of the ruins of Carthage with salt, apparently as a symbol of its total destruction and perhaps as a means of ensuring the soil’s infertility, is a tradition in Roman history well known to most students. When, however, one comes to seek the source, it seems elusive.
…Since the ancient sources for the salt story are lacking, its origin must be sought in modern works…Who, then, has told the story of the salt? The earliest version I have found appears in a highly notable place: the Cambridge Ancient History. In 1930, B. Hallward wrote:
Buildings and walls were razed to the ground; the plough passed over the site, and salt was sown in the furrows made.
From here the story can be traced step by step. Following Hallward come H. Scullard, G. Walter, G. Picard, B. Warmington, S. Raven, G. Herm, S. Tlatli. As the story is handed down, details are added or changed: the spreading of salt was meant to consecrate the site eternally as cursed (Walter) or “to signify that it was to remain uninhabited and barren forever” (Warmington), or “to make the soil unfruitful” (Herm). The spreading or “sowing” of salt (Scullard, Picard, Warmington) even becomes finally a more genteel “sprinkling” (Raven). The modern origin of the story seems, then, to have been the influential Cambridge Ancient History, a chapter written by a young historian who wrote hardly anything else. So few words have rarely had such an influence!
This still does not reveal the ultimate source of the story. That is another paradox. It must be Judges 9:45, a famous biblical crux…Here we have a clutch of Jewish, Hittite, and Assyrian texts ranging over nearly one and a half millennia which describe the scattering of a variety of minerals and plants over the site of a destroyed city or land, in one case salt alone (Shechem), in another salt and some form of plant (Elam). The common link joining all these instances is the desire to render the site uninhabitable. The best-known case, of course, is that of Shechem, since it occurs in the Old Testament.
Here, then, must be the origin of the idea that Carthage also was sown with salt.
Now, more than 50 years after its first appearance in Roman histories, it is time to excise it—along with the ploughing up of the whole site—from the tradition.
There is a bizarre recent note on the consecratio of Carthage. In 1966 there was published what purports to be an old inscription concerning this act, restored ad formam tituli et litterarum by a procurator Augusti, Classicius: see CRAI (1966): 61–76. As soon as the inscription was presented to the Academy, it was pronounced a forgery by L. Robert, J. Carcopino, and others, because of aberrant grammar, letter-forms, forms of proper names, and, not least, the suggestive name of the restorer: ‘Classicius’!
The experiences of white persons held in captivity by Indians have fascinated readers for almost three centuries. Hundreds of redeemed captives have written or related accounts of their adventures, and many of them acknowledged that they had enjoyed the lifestyle of their captors. Other former captives charged, however, that they had been brutalized by the Indians to the point of preferring death to a life of captivity. Many captives retained almost no recollection of white civilization, having lost the use of their native languages and even forgotten their own names. They had become proficient in the skills required for survival in the wilderness and, except for the color of their skins, they could scarcely be distinguished from their captors.
This study analyzes narratives of captivity in order to identify and evaluate factors which facilitated or retarded assimilation. A number of anthropologists and historians have suggested the need for a study, based upon a large number of cases, which would help to determine why some captives became “white Indians” while others completely rejected native American culture. Scholars have speculated that both white and Indian children, when exposed to both civilizations, invariably preferred the Indian way of life. The experiences of Indian children reared by whites were analyzed, therefore, to ascertain whether assimilation occurred along similar lines among both races.
The first section of this study examines Indian-white relationships as a contest of civilizations. While the Indian perceived that the white man held superior technological knowledge which could make his life easier, he rejected many aspects of European culture, and he did not consider his own civilization to be inferior. Many whites, on the other hand, regarded Indians as savages who must be forced to abandon their way of life for the benefit of both races. The experiences of young captives who were adopted by Indian families show that these whites were treated as natural-born Indians, and that they accepted and enjoyed the way of life of their captors.
The next section looks at factors which have been suggested as determinants of the assimilation of white captives. It was concluded that the original cultural milieu of the captive was of no importance as a determinant. Persons of all races and cultural backgrounds reacted to captivity in much the same way. The cultural characteristics of the captors, also, had little influence on assimilation. While some tribes treated captives more brutally than others, abuse delayed but did not prevent Indianization. A lengthy captivity resulted in greater assimilation than a brief one, but many captives became substantially Indianized in a matter of months. It was concluded that the most important factor in determining assimilation was age at the time of captivity. Boys and girls captured below the age of puberty almost always became assimilated while persons taken prisoner above that age usually retained the desire to return to white civilization.
The final section compares the assimilation of Indian children reared by whites during frontier times with that of white children who were captured by Indians. It was concluded that an Indian child reared and cherished in a white family became assimilated in much the same manner as a white child adopted by an Indian family. The determining factor was age at the time of removal from natural parents for Indian children as well as for whites. Indian children educated at boarding schools became less assimilated than those reared in white families because teachers regarded them as persons of inferior culture and because associations with other Indian students reinforced tribal ties and cultural predilections.
This paper elaborates the argument of a previous paper ([“Bridal pregnancy in rural England in earlier centuries”, Hair 1966] Population Studies, 20, 1966, pp. 233–43).
The results of an investigation of the experience of 2,340 brides are broadly similar to those reported earlier: in particular, they confirm that bridal pregnancy was more common in the 18th than in the 17th century. Evidence is presented to suggest that the 16th-century experience was similar to that of the 17th, while the 19th-century experience was similar to that of the 18th.
It is argued that bridal pregnancy was the product of a courting convention, rather than of ‘betrothal-licence’, and that it was not especially common among widows or teenagers. It is incidentally shown that the interval between birth and baptism was very brief in the 16th century, but lengthened in later centuries; and that the forbidden seasons for marriage were gradually eroded.
Finally, it is suggested that the application of Church discipline in relation to bridal pregnancy could be assessed in the Church Court records.
Revolutions are most likely to occur when a prolonged period of objective economic and social development is followed by a short period of sharp reversal. People then subjectively fear that ground gained with great effort will be quite lost; their mood becomes revolutionary. The evidence from Dorr’s Rebellion, the Russian Revolution, and the Egyptian Revolution supports this notion; tentatively, so do data on other civil disturbances. Various statistics—as on rural uprisings, industrial strikes, unemployment, and cost of living—may serve as crude indexes of popular mood. More useful, though less easy to obtain, are direct questions in cross-sectional interviews. The goal of predicting revolution is conceived but not yet born or mature.
Someone wrote to Wright Field recently, saying he understood this country had got together quite a collection of enemy war secrets, that many were now on public sale, and could he, please, be sent everything on German jet engines. The Air Documents Division of the Army Air Forces answered: “Sorry—but that would be fifty tons”. Moreover, that fifty tons was just a small portion of what is today undoubtedly the biggest collection of captured enemy war secrets ever assembled. …It is estimated that over a million separate items must be handled, and that they include, very likely, practically all the scientific, industrial and military secrets of Nazi Germany. One Washington official has called it “the greatest single source of this type of material in the world, the first orderly exploitation of an entire country’s brain-power.”
What did we find? You’d like some outstanding examples from the war secrets collection?
…the tiniest vacuum tube I had ever seen. It was about half thumb-size. Notice it is heavy porcelain—not glass—and thus virtually indestructible. It is a thousand-watt tube—one-tenth the size of similar American tubes…“That’s Magnetophone tape”, he said. “It’s plastic, metallized on one side with iron oxide. In Germany that supplanted phonograph recordings. A day’s radio program can be magnetized on one reel. You can demagnetize it, wipe it off and put a new program on at any time. No needle; so absolutely no noise or record wear. An hour-long reel costs fifty cents.”…He showed me then what had been two of the most closely guarded technical secrets of the war: the infra-red device which the Germans invented for seeing at night, and the remarkable diminutive generator which operated it. German cars could drive at any speed in a total blackout, seeing objects clear as day two hundred meters ahead. Tanks with this device could spot targets two miles away. As a sniper scope it enabled German riflemen to pick off a man in total blackness…We got, in addition, among these prize secrets, the technique and the machine for making the world’s most remarkable electric condenser…The Kaiser Wilhelm Institute for Silicate Research had discovered how to make it and—something which had always eluded scientists—in large sheets. We know now, thanks to FIAT teams, that ingredients of natural mica were melted in crucibles of carbon capable of taking 2,350 degrees of heat, and then—this was the real secret—cooled in a special way…“This is done on a press in one operation. It is called the ‘cold extrusion’ process. We do it with some soft, splattery metals. But by this process the Germans do it with cold steel! Thousands of parts now made as castings or drop forgings or from malleable iron can be made this way. 
The production speed increase is a little matter of one thousand percent.” This one war secret alone, many American steel men believe, will revolutionize dozens of our metal fabrication industries.
…In textiles the war secrets collection has produced so many revelations that American textile men are a little dizzy. But of all the industrial secrets, perhaps, the biggest windfall came from the laboratories and plants of the great German cartel, I. G. Farbenindustrie. Never before, it is claimed, was there such a store-house of secret information. It covers liquid and solid fuels, metallurgy, synthetic rubber, textiles, chemicals, plastics, drugs, dyes. One American dye authority declares: “It includes the production know-how and the secret formulas for over fifty thousand dyes. Many of them are faster and better than ours. Many are colors we were never able to make. The American dye industry will be advanced at least ten years.”
…Milk pasteurization by ultra-violet light…how to enrich the milk with vitamin D…cheese was being made—“good quality Hollander and Tilsiter”—by a new method at unheard-of speed…a continuous butter making machine…The finished product served as both animal and human food. Its caloric value is four times that of lean meat, and it contains twice as much protein. The Germans also had developed new methods of preserving food by plastics and new, advanced refrigeration techniques…German medical researchers had discovered a way to produce synthetic blood plasma.
…When the war ended, we now know, they had 138 types of guided missiles in various stages of production or development, using every known kind of remote control and fuse: radio, radar, wire, continuous wave, acoustics, infra-red, light beams, and magnetics, to name some; and for power, all methods of jet propulsion for either subsonic or supersonic speeds. Jet propulsion had even been applied to helicopter flight…Army Air Force experts declare publicly that in rocket power and guided missiles the Nazis were ahead of us by at least ten years.
In this essay on the method to be used in the comparative study of early poetries the view is set forth that the essential feature of such poetry is its oral form, and not such cultural likenesses as have been called “popular”, “primitive”, “natural”, or “heroic.” As an example of method those numerous cases are considered where we find both in Homer and in Southslavic heroic song a verse which expresses the same idea. The explanation is as follows. Oral poetry is largely composed out of fixed verses. Especially will ideas which recur with any frequency be expressed by a fixed verse. Thus where the two poetries express the same frequent idea they both tend to do it in just the length of a verse. Knowing this common feature in the oral form of the two poetries we can conclude that the extraordinary hold which heroic poetry has on the thought and conduct of the Southern Slavs provides us with an example of what heroic poetry must have been for the early Greeks.