Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently.
This paper contributes such an environment based on ideas inspired by elementary Microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer.
We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in Microeconomics. We also demonstrate settings where the agents’ emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices—a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior.
This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.
I study how people understand and reason about trade and which factors shape their views on trade policy.
I design and run large-scale surveys and experiments in the U.S. to elicit respondents’ knowledge and understanding of trade. I also ask about their perceived economic gains and distributional impacts from trade and their views on trade restrictions and compensatory redistribution for those hurt by trade.
People’s understanding of the price, wage, and welfare effects of trade is mixed:
In general, respondents are optimistic about the efficiency gains from trade, but also understand that there may be adverse distributional consequences from it.
Respondents’ own exposure to trade through their sector, occupation, skill, and local labor market shapes their perceptions of the impacts of trade on themselves, others, and on the broader U.S. economy.
I also find patterns consistent with the idea of “diffuse gains and concentrated losses”: respondents’ perceived benefits as consumers are non-salient and unclear to them, while those in at-risk jobs starkly perceive the threats from trade.
Beyond material self-interest, people have broader social and economic concerns that strongly influence their views on trade policy. The belief that is most predictive of support for open trade is that trade generates efficiency gains.
Furthermore, people who believe that those hurt by trade can be helped using other tools (compensatory redistribution) do not oppose free trade, even if they think that it will entail adverse distributional consequences.
The results highlight the importance of compensatory redistribution as an indissoluble part of trade policy in people’s minds.
Cost of illness research has established that mental disorders lead to large social burden and massive financial costs. A substantial gap exists for the economic burden of many personality disorders, including psychopathic personality disorder (PPD).
In the current study, we used a top-down, prevalence-based cost of illness approach to produce bounded estimates of the crime costs of PPD in the United States and Canada. 3 key model parameters (PPD prevalence, relative offending rate of individuals with PPD, and national costs of crime for each country) were informed by existing literature. Sensitivity analyses and Monte Carlo simulations were conducted to provide bounded and central-tendency estimates of crime costs, respectively.
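A minimal sketch of what such a prevalence-based Monte Carlo cost model could look like (all parameter ranges, variable names, and distributions below are illustrative placeholders, not the paper's calibrated inputs):

```python
# Hypothetical Monte Carlo sketch of a prevalence-based cost-of-illness model.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

# Draw the 3 key model parameters from assumed uncertainty ranges:
prev = rng.uniform(0.01, 0.05, n_sims)        # PPD prevalence in the population
rel_rate = rng.uniform(2.0, 10.0, n_sims)     # relative offending rate, PPD vs. non-PPD
total_cost = rng.uniform(2e12, 4e12, n_sims)  # national annual cost of crime (USD)

# Share of crime attributable to the PPD group, assuming offending scales
# linearly with the relative rate:
share = prev * rel_rate / (prev * rel_rate + (1 - prev))
ppd_cost = share * total_cost

print(f"simulated mean: ${ppd_cost.mean() / 1e9:,.1f}B")
print(f"90% interval: ${np.percentile(ppd_cost, 5) / 1e9:,.1f}B to "
      f"${np.percentile(ppd_cost, 95) / 1e9:,.1f}B")
```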
The estimated PPD-related costs of crime ranged from $245.50 billion to $1,591.57 billion (simulated means = $512.83 to $964.23 billion) in the United States and $12.14 billion to $53.00 billion (simulated means = $25.33 to $32.10 billion) in Canada. These results suggest that PPD may be associated with a substantial economic burden as a result of crime in North America.
Recommendations are discussed regarding the burden-treatment discrepancy for PPD, as the development of future effective treatment for the disorder may decrease its costly burden on health and justice systems.
[Keywords: psychopathic personality disorder, cost of illness, crime costs, violence, social burden]
This article provides recent estimates of earnings and mental health for sexual and gender minority young adults in the United States.
Using data from a nationally representative sample of bachelor’s degree recipients:
I find a statistically-significant earnings and mental health gap between self-identified LGBTQ+ and comparable heterosexual cisgender graduates. On average, sexual and gender minorities experience 22% lower earnings 10 years after graduation. About half of this gap can be attributed to LGBTQ+ graduates being less likely to complete a high-paying major and work in a high-paying occupation (eg. STEM and business). In addition, LGBTQ+ graduates are more than twice as likely to report having a mental illness.
I then analyze the role of sexual orientation concealment and find a more pronounced earnings and mental health gap for closeted graduates.
We investigate the wage return to studying economics by leveraging a policy that prevented students with low introductory grades from declaring a major. Students who barely met the grade point average threshold to major in economics earned $22,000 (46 percent) higher annual early-career wages than they would have with their second-choice majors. Access to the economics major shifts students’ preferences toward business/finance careers, and about half of the wage return is explained by economics majors working in higher-paying industries. The causal return to majoring in economics is very similar to observational earnings differences in nationally representative data.
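The threshold design lends itself to a standard local-linear regression discontinuity estimate. A toy sketch on simulated data (variable names, cutoff, and bandwidth are hypothetical, and the $22,000 jump is baked into the fake outcome purely for illustration):

```python
# Local-linear regression discontinuity sketch around a GPA cutoff.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"gpa": rng.uniform(1.5, 3.5, 5_000)})
cutoff, bandwidth = 2.8, 0.5                     # illustrative threshold/bandwidth
df["above"] = (df["gpa"] >= cutoff).astype(int)  # eligible to declare economics
df["run"] = df["gpa"] - cutoff                   # centered running variable
df["wage"] = (48_000 + 22_000 * df["above"] + 5_000 * df["run"]
              + rng.normal(0, 8_000, len(df)))   # fake outcome with a jump

local = df[df["run"].abs() <= bandwidth]
# Separate slopes on each side; the `above` coefficient is the RD jump:
fit = smf.ols("wage ~ above + run + above:run", data=local).fit(cov_type="HC1")
print(fit.params["above"], fit.bse["above"])
```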
We utilize the staggered arrival of Uber and Lyft—large sources of on-demand, platform-enabled gig opportunities—in U.S. cities to examine the effect of the arrival of flexible gig work opportunities on new business formation.
The introduction of gig opportunities is associated with an increase of ~5% in the number of new business registrations in the local area, and a correspondingly-sized increase in small business lending to newly registered businesses. Internet searches for entrepreneurship-related keywords increase ~7%. These effects are strongest in locations where proxies for ex ante economic uncertainty regarding the viability of new businesses are larger.
Our findings suggest that the introduction of the gig economy creates fallback opportunities for would-be entrepreneurs that reduce risk and encourage new business formation.
[Keywords: gig economy, entrepreneurship, new business formation, rideshare, entrepreneurial risk]
[Twitter] Sibling similarity in income is a measure of the omnibus effect of family and community background on income.
We estimate sibling similarity in income accumulated over the life course (ages 18 to 60) to demonstrate that previous research has underestimated sibling similarity in income [due to measurement error]. Using high-quality Swedish register data, we find sibling similarity in accumulated, lifetime income to be much higher than sibling similarity in income measured over a small number of years.
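The measurement-error point can be made concrete with a toy simulation (all variances illustrative): when permanent and transitory income variance are equal, the single-year sibling correlation is only about half the correlation in accumulated income.

```python
# Attenuation of sibling income correlations under short measurement windows.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_years, rho = 50_000, 30, 0.4   # rho: true sibling correlation

shared = rng.normal(size=n_pairs)         # family/community component
perm1 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=n_pairs)
perm2 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=n_pairs)

# Observed income = permanent component + i.i.d. yearly transitory shocks:
y1 = perm1[:, None] + rng.normal(size=(n_pairs, n_years))
y2 = perm2[:, None] + rng.normal(size=(n_pairs, n_years))

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print("single year:", corr(y1[:, 0], y2[:, 0]))              # ~rho/2, attenuated
print("30-year accumulated:", corr(y1.mean(1), y2.mean(1)))  # ~rho
```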
In addition, we test theories predicting variation in sibling similarity over the life course. We find that, contrary to predictions derived from the model of cumulative advantage, sibling similarity in accumulated income is largely stable over the life course. Sister correlations in income are lower than brother correlations but differences diminish across cohorts. We also find largely the same amount of sibling similarity in accumulated income in socioeconomically advantaged and disadvantaged families.
We conclude by discussing the importance of using accumulated income for understanding trends and mechanisms underlying the omnibus effect of family and community background on income.
[Keywords: family background, income, intergenerational mobility, intergenerational transmission, siblings]
Markets for art, coins and other collectibles, culinary delicacies and eco-tourism suggest that consumers value the rarity of many goods. While empirical evidence supports higher prices for rare goods, isolating the value of rarity has proven difficult.
Goods that are designated as rare trade at higher prices than functionally equivalent substitutes. Importantly, I use novel features of this market to account for scarcity and for observed and unobserved product characteristics, and to separately identify rarity effects.
These results have important implications for markets ranging from luxury goods to conservation of endangered species.
…In this market, the manufacturer labels goods according to 4 different rarity categories that approximate relative rarity. However, changes in product design combined with manufacturing technology constraints affect the market supply within and across rarity categories over time. Using these changes, I calculate the odds of obtaining a particular card in a retail pack, a proxy for quantity. Then, using 2 different empirical strategies, I non-parametrically estimate the effect of odds on prices and separately identify the effect of rarity. To do this, I collect secondary market prices on thousands of unique goods (cards) from a popular online marketplace [TCGplayer]. I combine these data with detailed product-level information where I observe every characteristic appearing on each card. By comparing functionally equivalent and, in some cases otherwise identical cards, I isolate the effect of rarity designation from other factors such as scarcity and unobserved quality.
The 2 empirical approaches form upper and lower bounds on the rarity values.
The first strategy leverages variation in prices and odds across different cards in each of the rarity categories. I collect data on ~3,600 recently-printed cards over a 6-week period in 2019. I employ a cross-sectional hedonic framework using fixed-effects for observed product characteristics to flexibly model functional differences across cards. I show prices are inversely related to the odds of obtaining a particular card in a retail pack. However, conditional on these odds and product characteristics, prices are substantially higher for cards with rare designations. On average, prices for cards in the highest rarity category are between 70 and 90× higher than cards in the common category, all else equal. I present several robustness checks investigating the salience of scarcity and the possibility of unobserved (to the econometrician) product differences across rarity categories. To the extent that remaining unobserved quality differences are not captured by the model, these estimates are an upper bound on the true rarity values.
The second strategy uses variation in rarity designation within individual cards that are reprinted, often more than once, at different rarity categories. I collect prices for ~600 cards that experienced these ‘rarity shifts.’ I account for observable and unobservable card characteristics with individual card fixed-effects. Since the rarity-shifted cards are identical other than the change in rarity designation, I attribute observed price differences to rarity value. I find prices are substantially higher for cards printed with rare designations relative to the same cards with common designations. For reasons discussed below, rarity values measured by these rarity shifts are likely biased downwards and therefore represent a lower bound on the true rarity values.
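The two strategies map onto two simple regressions. A sketch on toy data (all column names, categories, and magnitudes are hypothetical; the paper's actual specifications will differ):

```python
# Sketch of the two identification strategies for rarity value.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3_000
cards = pd.DataFrame({
    "rarity": rng.choice(["common", "rare", "mythic"], n, p=[0.7, 0.2, 0.1]),
    "card_type": rng.choice(["creature", "spell", "land"], n),
    "odds": rng.uniform(0.001, 0.5, n),  # chance of pulling the card from a pack
})
bump = cards["rarity"].map({"common": 0.0, "rare": 0.45, "mythic": 0.73})
cards["price"] = np.exp(bump - 0.3 * np.log(cards["odds"])
                        + rng.normal(0, 0.5, n))

# (1) Upper bound: cross-sectional hedonic regression of log price on log pack
#     odds, rarity dummies, and dummies for observed card characteristics.
hedonic = smf.ols("np.log(price) ~ np.log(odds) + C(rarity) + C(card_type)",
                  data=cards).fit(cov_type="HC1")
print(hedonic.params.filter(like="rarity"))

# (2) Lower bound: within-card 'rarity shifts'; card fixed effects absorb all
#     observed and unobserved card characteristics, so identification comes
#     only from reprints. With real data, roughly:
#     smf.ols("np.log(price) ~ C(rarity) + C(card_id)", data=reprinted).fit()
```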
In both empirical approaches, I can easily rule out cost-based explanations for the observed price differences because manufacturing costs are equivalent across rarity categories. The observed price effects are also independent of scarcity value, as captured by the odds of obtaining a particular card in a retail pack, and do not seem to be driven by functional differences across cards. Since both empirical approaches yield large positive rarity values, these results are perhaps the best evidence to date in support of a demand for rarity.
…Moving from common to rare increases log price by 0.445 or about 56 per cent. Moving from common to mythic rare increases log price by 0.726 or about 107 per cent. The effect for foil cards is similar in magnitude. These results, namely that variation in rarity designation within-card yields large price effects, are quite remarkable and provide further evidence of rarity effects.
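For reference, the conversion from the reported log-price coefficients to the quoted percentage effects:

```latex
\[
  e^{0.445} - 1 \approx 0.56 \approx 56\%, \qquad
  e^{0.726} - 1 \approx 1.07 \approx 107\%.
\]
```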
Female workers earn $0.89 for each male-worker dollar even in a unionized workplace, where tasks, wages, and promotion schedules are identical for men and women by design.
Using administrative time-card data on bus and train operators [from the Massachusetts Bay Transportation Authority], we show that this earnings gap can be explained by female operators taking fewer hours of overtime and more hours of unpaid time off than male operators…Mechanically, the earnings gap in our setting can be explained by the fact that male operators take 1.3 fewer unpaid hours off work (49%) and work 1.5 more overtime hours (83%) per week than their female counterparts…Female operators, especially those with dependents, pursue schedule conventionality, predictability, and controllability more than male operators.
While reducing schedule controllability can limit the earnings gap, it can also hurt female workers and their productivity.
…When overtime is scheduled the day before or the day of the necessary shift, male operators work almost twice as many of those hours as female operators. In contrast, when overtime hours are scheduled 3 months in advance, male operators sign up for only 7% more of them than female operators. Given that the MBTA’s operators are a select group who agreed to the MBTA’s job requirement of 24/7 availability, these differences in their flexibility and in their value of time could be lower bounds for the general population.
…Second, female operators prioritize conventional and predictable schedules. As operators move up the seniority ladder and consequently have a greater pool of schedules to pick from, female operators move away from working weekends and holidays and split shifts more than do male operators.
…Female operators value time outside work and schedule predictability more than do male operators, especially when they have dependents. Female operators with dependents are considerably less likely than male operators with dependents to accept a short-notice overtime opportunity. When it comes to overtime hours worked, unmarried female operators with dependents work only 6% fewer of them than men when they are preplanned 3 months in advance but about 60% fewer of them when they are offered on short notice. Unmarried women with dependents also take the largest amount of unpaid time off with FMLA, making them the lowest earners in our setting.
Court-related fines & fees are widely levied on criminal defendants who are frequently poor and have little capacity to pay. Such financial obligations may produce a criminalization of poverty, where later court involvement results not from crime but from an inability to meet the financial burdens of the legal process.
We test this hypothesis using a randomized controlled trial of court-related fee relief for misdemeanor defendants in Oklahoma County, Oklahoma.
We find that relief from fees does not affect new criminal charges, convictions, or jail bookings after 12 months.
However, control respondents were subject, at statistically-significantly higher rates, to debt collection efforts involving new warrants, additional court debt, tax refund garnishment, and referral to a private debt collector. Despite substantial efforts at debt collection among those in the control group, payments to the court totaled less than 5% of outstanding debt.
The evidence indicates that court debt charged to indigent defendants neither caused nor deterred new crime, and the government obtained little financial benefit. Yet, fines and fees contributed to a criminalization of low-income defendants, placing them at risk of ongoing court involvement through new warrants and debt collection.
[Keywords: criminalization, poverty, misdemeanors, fines and fees, randomized experiment]
When venturing into unfamiliar areas of technology, inventors face ex ante technological uncertainty: that is, many possible alternative technological paths going forward, and limited guidance from existing technological knowledge for predicting the likelihood that a given path will successfully result in an invention.
I theorize, however, that this ex ante technological uncertainty becomes less apparent when evaluating inventions in hindsight. When one knows that a given technological path turned out to be successful ex post, it may be difficult to appreciate the ex ante plausibility of reasons to prefer alternative paths. As a result, inventions may seem more obvious to those evaluating inventions with the benefit of hindsight. My theory yields a counterintuitive implication: when inventors venture into less familiar areas of technology, there is a greater risk of evaluators overestimating obviousness due to hindsight bias.
Empirical evidence comes from novel data on accepted and rejected patent applications, including hand-collected data from the text of applicant objections to obviousness rejections and examiners’ subsequent reversals of rejections in response to applicant objections.
…I contend, moreover, that there are still other regularities in the field of computing that could also be formulated in a fashion similar to that of Moore’s and Metcalfe’s relationships. I would like to propose 4 such laws.
I named this law after George Udny Yule (1912), the statistician who proposed the seminal equation for explaining the relationship between 2 attributes. I formulate this law as follows:
If 2 attributes or products are complements, the value/demand of one of the complements will be inversely related to the price of the other complement.
In other words, if the price of one complement is reduced, the demand for the other will increase. There are a few historical examples of this law. One of the famous ones is the marketing of razor blades. The legendary King Camp Gillette gained market domination by applying this rule. He reduced the price of the razors, and the demand for razor blades increased. The history of IT contains numerous examples of this phenomenon, too.
The case of the Atari 2600 is one notable example. Atari video games consisted of the console system hardware and the read-only memory cartridges that contained a game’s software. When the product was released, Atari Inc. marketed 3 products, namely the Atari Video Computer System (VCS) hardware and the 2 games that it had created: the arcade shooter Jet Fighter, and Tank, a heavy-artillery combat title involving, not surprisingly, tanks.
Crucially, Atari engineers decided that they would use an off-the-shelf microprocessor for the VCS instead of a custom chip. They also made sure that any programmer hoping to create a new game for the VCS would be able to access and use all the inner workings of the system’s hardware. And that was exactly what happened. In other words, the designers reduced the barriers and the cost necessary for other players to develop VCS game cartridges. More than 200 such games have since been developed for the VCS—helping to spawn the sprawling US $170 billion global video game industry today.
A similar law of complementarity exists with computer printers. The more affordable the price of a printer is kept, the higher the demand for that printer’s ink cartridges. Managing complementary components well was also crucial to Apple’s winning the MP3 player wars of the early 2000s, with its now-iconic iPod.
From a strategic point of view, technology firms ultimately need to know which complementary element of their product to sell at a low price—and which complement to sell at a higher price. And, as the economist Bharat Anand points out in his celebrated 2016 book The Content Trap, proprietary complements tend to be more profitable than non-proprietary ones.
[I have been unable to find where George Udny Yule wrote about complementary goods or where the Yule-Simon distribution has been applied to demonstrate ‘commoditize your complement’ dynamics as described by Adenekan Dedeke above.]
We run a series of experiments, involving over 4,000 online participants and over 10,000 school-aged youth.
When individuals are asked to subjectively describe their performance on a male-typed task relating to math and science, we find a large gender gap in self-evaluations. This gap arises both when self-evaluations are provided to potential employers, and thus measure self-promotion, and when self-evaluations are not driven by incentives to promote. The gender gap in self-evaluations proves persistent and arises as early as the 6th grade. No gender gap arises, however, if individuals are instead asked about their performance on a more female-typed task.
The niche-diversity hypothesis proposes that personality structure arises from the affordances of unique trait combinations within a society. It predicts that personality traits will be both more variable and differentiated in populations with more distinct social and ecological niches.
Prior tests of this hypothesis in 55 nations suffered from potential confounds associated with differences in the measurement properties of personality scales across groups. Using psychometric methods for the approximation of cross-national measurement invariance, we tested the niche-diversity hypothesis in a sample of 115 nations (n = 685,089). We found that an index of niche diversity was robustly associated with lower inter-trait covariance and greater personality dimensionality across nations but was not consistently related to trait variances.
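One plausible way to operationalize the two structural outcomes, lower inter-trait covariance and greater dimensionality, for a single nation's data (an illustrative reconstruction, not the authors' code; `personality_structure` and the participation-ratio definition of dimensionality are my assumptions):

```python
# Inter-trait covariation and effective dimensionality of a trait matrix.
import numpy as np

def personality_structure(X):
    """X: (n_people x n_traits) matrix of trait scores for one nation."""
    R = np.corrcoef(X, rowvar=False)            # trait x trait correlations
    off = R[~np.eye(len(R), dtype=bool)]
    mean_intertrait = np.abs(off).mean()        # lower => more differentiated
    lam = np.linalg.eigvalsh(R)
    n_eff = lam.sum() ** 2 / (lam ** 2).sum()   # participation ratio in [1, n_traits]
    return mean_intertrait, n_eff

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 5))                 # 5 uncorrelated 'Big Five' scores
print(personality_structure(X))                 # ~(0.0, 5.0): maximal dimensionality
```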
These findings generally bolster the core of the niche-diversity hypothesis, demonstrating the contingency of human personality structure on socioecological contexts.
People’s beliefs about why the rich are richer than the poor have the potential to affect both policy attitudes and economic development. We provide global evidence showing that where the fortunes of the rich are perceived to be the result of selfish behavior, inequality is viewed as unfair, and there is stronger support for income redistribution. However, we also observe that belief in selfish rich inequality is highly polarized in many countries and thus a source of political disagreement that might be detrimental to economic development. We find systematic country differences in the extent to which people believe that selfishness is a source of inequality, which sheds light on international differences in public morality, civic virtues, and redistributive policies.
We report on a study of whether people believe that the rich are richer than the poor because they have been more selfish in life, using data from more than 26,000 individuals in 60 countries.
The findings show a strong belief in the selfish rich inequality hypothesis at the global level; in the majority of countries, the mode is to strongly agree with it. However, we also identify important between-country and within-country variation. We find that the belief in selfish rich inequality is much stronger in countries with extensive corruption and weak institutions and less strong among people who are higher in the income distribution in their society. Finally, we show that the belief in selfish rich inequality is predictive of people’s policy views on inequality and redistribution: It is statistically-significantly positively associated with agreeing that inequality in their country is unfair, and it is statistically-significantly positively associated with agreeing that the government should aim to reduce inequality. These relationships are highly statistically-significant both across and within countries and robust to including country-level or individual-level controls and using lasso-selected regressors.
Thus, the data provide compelling evidence of people believing that the rich are richer because they have been more selfish in life and perceiving selfish behavior as creating unfair inequality and justifying equalizing policies.
We analyse the money-financed fiscal stimulus implemented in Venice during the famine and plague of 1629–31, which was equivalent to a ‘net-worth helicopter money’ strategy—a monetary expansion generating losses to the issuer. We argue that the strategy aimed at reconciling the need to subsidize inhabitants suffering from containment policies with the desire to prevent an increase in long-term government debt, but it generated much monetary instability and had to be quickly reversed. This episode highlights the redistributive implications of the design of macroeconomic policies and the role of political economy factors in determining such designs.
There is substantial evidence that women tend to support different policies and political candidates than men. Many studies also document gender differences in a variety of important preference dimensions, such as risk-taking, competition and pro-sociality. However, the degree to which differential voting by men and women is related to these gaps in more basic preferences remains poorly understood.
We conduct an experiment in which individuals in small laboratory “societies” repeatedly vote for redistribution policies and engage in production.
We find that women vote for more egalitarian redistribution and that this difference persists with experience and in environments with varying degrees of risk. This gender voting gap is accounted for partly by gender gaps in preferences and partly by expectations regarding economic circumstances. However, including both these controls in a regression analysis indicates that the latter is the primary driving force. We also observe policy differences between male-controlled and female-controlled groups, though these are substantially smaller than the mean individual differences—a natural consequence of the aggregation of individual preferences into collective outcomes.
…Our results demonstrate that while part of the persistent and substantial gender gap in voting for redistribution can be connected to underlying gender preference gaps—primarily for less competition and more equality—the gender gap in relative performance beliefs is the most important underlying factor. Our work thus indicates that gender gaps in preferences may have some influence on behavior and policy outcomes as women’s participation in policymaking grows. However, our findings also suggest that this impact is secondary to that of beliefs about relative economic outcomes, which may change as women attain greater economic equality.
Do elites capture foreign aid? This paper documents that aid disbursements to highly aid-dependent countries coincide with sharp increases in bank deposits in offshore financial centers known for bank secrecy and private wealth management but not in other financial centers. The estimates are not confounded by contemporaneous shocks—such as civil conflicts, natural disasters, and financial crises—and are robust to instrumenting using predetermined aid commitments. The implied leakage rate is around 7.5% at the sample mean and tends to increase with the ratio of aid to GDP. The findings are consistent with aid capture in the most aid-dependent countries.
…A concern often voiced by skeptics is that aid may be captured by economic and political elites. The fact that many of the countries that receive foreign aid have high levels of corruption (Alesina & Weder 2002) invokes fears that aid flows end up in the pockets of the ruling politicians and their cronies. This would be consistent with economic theories of rent-seeking in the presence of aid (Svensson 2000) and resonate with colorful anecdotal evidence about failed development projects and self-interested elites (Klitgaard 1990). Yet there is little systematic evidence on aid capture.
In this paper, we study aid diversion by combining quarterly information on aid disbursements from the World Bank and foreign deposits from the Bank for International Settlements (BIS). The former data set covers all disbursements made by the World Bank to finance development projects and provide general budget support in its client countries. The latter data set covers foreign-owned deposits in all important financial centers—both havens, such as Switzerland, Luxembourg, Cayman Islands, and Singapore, whose legal framework emphasizes secrecy and asset protection, and nonhavens, such as Germany, France, and Sweden.
Equipped with this data set, we study whether aid disbursements trigger money flows to foreign bank accounts. In our main sample comprising the 22 most aid-dependent countries in the world (in terms of World Bank aid), we document that disbursements of aid coincide (in the same quarter) with increases in the value of bank deposits in havens. Specifically, aid disbursements equivalent to 1% of GDP are associated with a statistically-significant increase in deposits in havens of 3.4%. By contrast, there is no increase in deposits held in nonhavens.
…The leakage rate implied by our baseline estimates is around 7.5%. The 22 countries in the sample are highly aid dependent, with annual disbursements from the World Bank exceeding 2% of GDP, but account for a modest share of all disbursements. By varying the sample, we document that the leakage rate exhibits a strong gradient in aid dependence. On the one hand, lowering the threshold to 1% of GDP (46 countries), the leakage rate is around 4% and we cannot reject the null hypothesis of no leakage. On the other hand, raising the threshold to 3% of GDP (7 countries), we find a substantially higher leakage rate of around 15%. This pattern suggests that the average leakage rate across all aid-receiving countries is much smaller than in the main sample. Moreover, it is consistent with existing findings that the countries receiving the most aid are not only among the least developed but also among the worst governed (Alesina & Weder 2002) and that very high levels of aid might foster corruption and institutional erosion (Knack 2000; Djankov, Montalvo, and Reynal-Querol 2008).
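A back-of-envelope reconciliation of the two headline numbers, assuming the leakage rate is simply the dollar increase in haven deposits divided by the dollar aid inflow: with haven deposits $D$ and GDP $Y$, aid of 1% of GDP moves $0.034\,D$ offshore, so

```latex
\[
  \ell = \frac{0.034\,D}{0.01\,Y} \approx 0.075
  \quad\Longrightarrow\quad
  \frac{D}{Y} \approx \frac{0.075 \times 0.01}{0.034} \approx 2.2\%,
\]
```

ie. the semi-elasticity and the leakage rate are mutually consistent if haven deposits average roughly 2% of GDP in these countries.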
Real economies can be seen as a sequential imperfect-information game with many heterogeneous, interacting strategic agents of various agent types, such as consumers, firms, and governments. Dynamic general equilibrium models are common economic tools to model the economic activity, interactions, and outcomes in such systems. However, existing analytical and computational methods struggle to find explicit equilibria when all agents are strategic and interact, while joint learning is unstable and challenging. Amongst others, a key reason is that the actions of one economic agent may change the reward function of another agent, eg. a consumer’s disposable income changes when firms change prices or governments change taxes.
We show that multi-agent deep reinforcement learning (RL) can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types, in economic simulations with many agents, through the use of structured learning curricula and efficient GPU-only simulation and training. Conceptually, our approach is more flexible and does not need unrealistic assumptions, eg. market clearing, that are commonly used for analytical tractability. Our GPU implementation enables training and analyzing economies with a large number of agents within reasonable time frames, eg. training completes within a day. We demonstrate our approach in real-business-cycle models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government who taxes and redistributes. We validate the learned meta-game epsilon-Nash equilibria through approximate best-response analyses, show that RL policies align with economic intuitions, and that our approach is constructive, eg. by explicitly learning a spectrum of meta-game epsilon-Nash equilibria in open RBC models.
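A minimal sketch of the approximate best-response (epsilon-gap) check used for validation, assuming a payoff oracle `payoff(agent_type, policies)` as a stand-in for rolling out the simulated economy (the toy two-type game and all numbers are mine, not the paper's):

```python
# Epsilon-Nash check: how much can any agent type gain by deviating unilaterally?
import numpy as np

def epsilon_gap(payoff, policies, candidates):
    """Return the largest unilateral-deviation gain across agent types."""
    gaps = {}
    for agent_type in policies:
        base = payoff(agent_type, policies)
        best_dev = max(payoff(agent_type, {**policies, agent_type: alt})
                       for alt in candidates[agent_type])
        gaps[agent_type] = best_dev - base
    return max(gaps.values()), gaps      # small max gap => approximate Nash

# Toy 2-type economy: a firm picks a price, a consumer picks a quantity.
def payoff(agent_type, p):
    price, qty = p["firm"], p["consumer"]
    if agent_type == "firm":
        return (price - 1.0) * qty               # profit over unit cost
    return (4.0 - price) * qty - qty ** 2        # consumer surplus, convex cost

pols = {"firm": 2.5, "consumer": 0.75}
cands = {"firm": np.linspace(1, 4, 31), "consumer": np.linspace(0, 2, 41)}
print(epsilon_gap(payoff, pols, cands))          # firm still has a large gap here
```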
Given the well-documented importance of counterproductive workplace behavior and organizational citizenship behavior (together nontask performance), it is important to clarify the degree to which these behaviors are attributable to organizational climate versus preexisting individual differences. Such clarification informs where these behaviors stem from, and consequently has practical implications for organizations (eg. guiding prioritization of selection criteria).
We investigated familial resemblance for nontask performance among twins, nontwin and adoptive siblings, parents and offspring, and midlife and late-life couples drawn from 2 large-scale studies: the Minnesota Twin Family Study and the Sibling Interaction Behavior Study. Similarity in family members’ (eg. parent-offspring, sibling) engagement in nontask performance was assessed to estimate the degree to which preexisting individual differences (ie. genetic variability) and the environment (ie. environmentality) accounted for variation in counterproductive and citizenship behavior.
We found that degree of familial resemblance for nontask performance increased with increasing genetic relationship. Nonetheless, genetically identical individuals correlated only moderately in their workplace behavior (r = 0.29–0.40), highlighting the importance of environmental differences. Notably, family members were more similar in their counterproductive than citizenship behavior, suggesting citizenship behavior is comparatively more environmentally influenced. Spouse/partner similarity for nontask behavior was modest and did not vary between midlife and late-life couples, suggesting spousal influence on nontask performance is limited.
These findings offer insight to organizations regarding the degree of nature (individual differences) and nurture (including organizational factors) influences on nontask performance, which has implications for the selection of interventions (eg. relative value of applicant selection or incumbent interventions).
[Keywords: counterproductive work behavior, organizational citizenship behavior, familial resemblance, heritability]
We study labor market returns to vocational versus general secondary education using a regression discontinuity design created by the centralized admissions process in Finland. Admission to the vocational track increases initial annual income; this benefit persists at least through the mid-thirties, and present-value calculations suggest that lifecycle returns are unlikely to turn negative before retirement. Moreover, admission to the vocational track does not increase the likelihood of working in jobs at risk of replacement by automation or offshoring. Consistent with comparative advantage, we observe larger returns for people who express a preference for vocational education.
We use data from Airbnb to identify the mechanisms underlying discrimination against ethnic minority hosts. Within the same neighborhood, hosts from minority groups charge 3.2% less for comparable listings. Since ratings provide guests with increasingly rich information about a listing’s quality, we can measure the contribution of statistical discrimination, building upon Altonji and Pierret (2001). We find that statistical discrimination can account for the whole ethnic price gap: ethnic gaps would disappear if all unobservables were revealed. Also, three-quarters (2.5 points) of the initial ethnic gap can be attributed to inaccurate beliefs of potential guests about hosts’ average group quality.
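The information-revelation logic can be sketched in the spirit of Altonji & Pierret (2001): if the gap is statistical, it should shrink as public information accumulates. A toy version with hypothetical columns (`minority`, `n_reviews`) and a gap that decays with information by construction:

```python
# Does the ethnic price gap shrink as review information accumulates?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({"minority": rng.integers(0, 2, n),
                   "n_reviews": rng.poisson(20, n)})
quality = rng.normal(size=n)            # unobserved by guests at zero reviews
df["log_price"] = (0.05 * quality
                   - 0.032 * df["minority"] * np.exp(-df["n_reviews"] / 10)
                   + rng.normal(0, 0.1, n))

fit = smf.ols("log_price ~ minority * np.log1p(n_reviews)",
              data=df).fit(cov_type="HC1")
# A gap that closes with information is the statistical-discrimination
# signature; a stable gap would instead point to taste-based discrimination.
print(fit.params[["minority", "minority:np.log1p(n_reviews)"]])
```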
This article investigates probabilistic assumptions about the value of negative primals (eg. seeing the world as dangerous keeps me safe). We first show such assumptions are common. For example, among 185 parents, 53% preferred dangerous world beliefs for their children. We then searched for evidence consistent with these intuitions in 3 national samples and 3 local samples of undergraduates, immigrants (African and Korean), and professionals (car salespeople, lawyers, and cops), examining correlations between primals and life outcomes within 48 occupations (total n = 4,535).
As predicted, regardless of occupation, more negative primals were almost never associated with better outcomes. Instead, they predicted less success, less job and life satisfaction, worse health, dramatically less flourishing, more negative emotion, more depression, and increased suicide attempts.
We discuss why assumptions about the value of negative primals are nevertheless widespread and implications for future research.
[Keywords: Primal world beliefs, success, job satisfaction, health, negative emotions, depression, suicide, life satisfaction, wellbeing]
ICOs allow ventures to collect funding from investors using blockchain technology. We leverage this novel funding context, in which information on the ventures and their future prospects is scarce, to empirically investigate whether the founder CEOs’ physical attractiveness is associated with increased funding (ie. amount raised) and post-funding performance (ie. buy-and-hold returns). We find that ventures with more attractive founder CEOs outperform ventures with less attractive CEOs in both dimensions. For ICO investors, this suggests that ICOs of firms with more attractive founder CEOs are more appealing investment targets. Our findings are also interesting for startups seeking external finance in uncertain contexts, such as ICOs. If startups can appoint attractive leaders, they may have better access to growth capital.
We apply insights from research in social psychology and labor economics to the domain of entrepreneurial finance and investigate how founder chief executive officers’ (founder CEOs’) facial attractiveness influences firm valuation.
Leveraging the novel context of initial coin offerings (ICOs), we document a pronounced founder CEO beauty premium, with a positive relationship between founder CEO attractiveness and firm valuation.
We find only very limited evidence of stereotype-based evaluations, through the association of founder CEO attractiveness with latent traits such as competence, intelligence, likeability, or trustworthiness. Rather, attractiveness seems to bear economic value per se, especially in a context in which investors base their decisions on a limited information set. Indeed, attractiveness has a sustained effect on post-ICO performance.
In 1942 more than 110,000 persons of Japanese origin living on the U.S. West Coast were forcibly sent away to ten internment camps for one to three years. This paper studies how internees’ careers were affected in the long run. Combining Census data, camp records, and survey data, I develop a predictor of a person’s internment status based on Census observables. Using a difference-in-differences framework, I find that internment had long-run positive effects on earnings. The evidence is consistent with mechanisms related to increased mobility due to re-optimization of occupation and location choices, possibly facilitated by camps’ high economic diversity.
In the mid-thirteenth century, England used only a single coin, the silver penny. The flow of coins into and out of the government’s treasury was recorded in the rolls of the Exchequer of Receipt. These receipt and issue rolls have been largely ignored, compared to the pipe rolls, which were records of audit. Some more obscure records, the memoranda of issue, help to show how the daily operations of government finance worked, when cash was the only medium available. They indicate something surprising: the receipt and issue rolls do not necessarily record transactions which took place during the periods they nominally cover. They also show that the Exchequer was experimenting with other forms of payment, using tally sticks, several decades earlier than was previously known. The rolls and the tallies indicate that the objectives of the Exchequer were not, as we would now expect, concerned with balancing income and expenditure, drawing up a budget, or even recording cash flows within a particular year. These concepts were as yet unknown. Instead, the Exchequer’s aim was to ensure the accountability of officials, its own and those in other branches of government, by allocating financial responsibility to individuals rather than institutions.
The frequency of lovemaking minus the frequency of quarrels is claimed to predict marital stability. Here, we set up a family economics model using insights from evolutionary psychology to ground this ad hoc formula.
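In symbols, the ad hoc formula being grounded is simply the index

```latex
\[
  I \;=\; f_{\text{lovemaking}} \;-\; f_{\text{quarrels}},
  \qquad I > 0 \ \text{predicting marital stability}.
\]
```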
As we get closer to the release date of the Steam Deck, now scheduled for February 2022 (for those who were able to pre-order a unit as part of the first wave), Valve is sharing more information on the PC handheld device and on their future plans for it.
A newly updated FAQ page dedicated to developers confirms that there won’t be any exclusive games for the Steam Deck, for example. Valve doesn’t believe that would make sense, as this is a PC after all and it should just play PC games…SteamOS will eventually be released as a separate operating system; non-Steam apps and games can be easily installed on the Steam Deck; and Valve is working closely with Unity, Epic, and even Godot to improve these engines’ support of the platform.
Q. Is Steam working closely with leading game engine developers, like Epic Games and Unity on Steam Deck?
A. Yes, we’re working with both Unity and Epic on making sure Unreal and Unity engines have integrations that make the development experience for Deck as smooth as possible. And going forward we expect there will be improvements rolling into those engines over time to further integrate with our development tools and to make those engines a great target for Steam Deck. Already there’s a pretty good experience for Unity and Unreal developers from the start.
Q. You mentioned that you’re talking with Unity and Epic, are you also talking to Godot?
A. Yes, we’re talking to Godot as well and are actively supporting them and want their engine to work well with Steam Deck.
The susceptibility of cryptocurrencies to criminal activity is a vigorously debated issue of high policy relevance. Not only is the share of cryptocurrency turnover linked to crime unknown; the question of which of several cryptocurrencies are prevalent on the darknet, and hence should be prioritized in building analytical capability for law enforcement, also calls for empirical research.
Using the event study methodology, we estimate the market reaction on cryptocurrency exchanges to news about successful law enforcement actions of systemic relevance for the cybercriminal ecosystem. The events studied include seizures of darknet marketplaces and shutdowns of cybercriminal data centers and mixers.
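A generic market-model event-study sketch on synthetic data (the paper's actual estimation windows, market index, and test statistics may differ):

```python
# Cumulative abnormal return (CAR) around an enforcement-news event.
import numpy as np

rng = np.random.default_rng(0)
T, event = 300, 250
est_win, ev_win = slice(0, 200), slice(248, 254)    # estimation/event windows

mkt = rng.normal(0.0, 0.02, T)                      # crypto market index returns
coin = 0.001 + 1.2 * mkt + rng.normal(0, 0.03, T)   # one coin's daily returns
coin[event:event + 3] -= 0.04                       # fake reaction to the news

beta, alpha = np.polyfit(mkt[est_win], coin[est_win], 1)  # market-model fit
ar = coin - (alpha + beta * mkt)                    # abnormal returns
car = ar[ev_win].sum()                              # cumulate over event window
t_stat = car / (ar[est_win].std() * np.sqrt(ev_win.stop - ev_win.start))
print(f"CAR = {car:.3f}, t = {t_stat:.2f}")
```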
Although the number of relevant events is still small, we observe statistically-significant cumulative abnormal returns to such news over recent years.
We cautiously interpret the obtained results by cryptocurrency and direction of the effect, and derive implications for future research and policy.
[Keywords: cryptocurrency, darknet market, event study, law enforcement]
Occupational characteristics moderate relations of personality and performance in major occupational groups.
Personality-occupational performance relations differ considerably across 9 major occupational groups.
Traits show higher criterion-related validities when experts rate them as more relevant to occupational requirements.
Moderate occupational complexity may be a “Goldilocks range” for using personality to predict occupational performance.
Occupational characteristics are important, if overlooked, contextual variables.
Personality predicts performance, but the moderating influence of occupational characteristics on personality-performance relations remains under-examined. Accordingly, we conduct second-order meta-analyses of the Big Five traits and occupational performance (ie. supervisory ratings of overall job performance or objective performance outcomes).
We identify 15 meta-analyses reporting 47 effects for 9 major occupational groups (clerical, customer service, healthcare, law enforcement, management, military, professional, sales, and skilled/semiskilled), which represent n = 89,639 workers across k = 539 studies. We also integrate data from the Occupational Information Network (O*NET) concerning 2 occupational characteristics: (1) expert ratings of Big Five trait relevance to each group’s occupational requirements; and (2) each group’s level of occupational complexity.
We report 3 major findings:
First, relations differ considerably across major occupational groups.
When groups are ranked by complexity, multiple correlations generally follow an inverse-U shaped pattern, which suggests that moderate complexity levels may be a “Goldilocks range” for personality prediction.
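The inverse-U claim is easy to check mechanically: regress each group's multiple correlation on its complexity rank and the rank squared, and look for a negative quadratic term (ranks and values below are illustrative, not the meta-analytic estimates):

```python
# Quadratic fit of predictive validity on occupational complexity rank.
import numpy as np

complexity_rank = np.arange(1, 10)   # 9 occupational groups, low -> high complexity
multiple_R = np.array([.20, .27, .33, .37, .38, .36, .31, .25, .18])  # illustrative

b2, b1, b0 = np.polyfit(complexity_rank, multiple_R, 2)
peak = -b1 / (2 * b2)
print(f"quadratic term {b2:.4f} (negative => inverse-U); peak near rank {peak:.1f}")
```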
Altogether, results demonstrate that occupational characteristics are important, if often overlooked, contextual variables. We close by discussing implications of findings for research, practice, and policy.
In recent decades, various factors appear to have changed societies’ cultural background; although this change is mainly incremental, in some cases it can be sudden. A question therefore arises: does the way the cultural background has evolved during recent decades affect the growth rate of economies?
We use an unbalanced panel dataset comprising 34 OECD countries from 1981 to 2019 and a Least Squares Dummy Variable Correction (LSDVC) estimator, together with a series of robustness tests, including different methods of analysis, additional control variables, and breaking the overall period into subperiods.
We conclude that the cultural background over the overall period under consideration is characterized as post-materialistic and harms economic growth. Moreover, we highlight, both theoretically and empirically, the cultural backlash hypothesis, since the cultural background of the countries under analysis shifts from traditional/materialistic values (1981 to 1998) to post-materialist values (1999 to 2019). We accordingly find a positive effect of cultural background on economic growth when traditional/materialistic values prevail and a negative effect when post-materialistic values prevail.
These results highlight culture as a crucial factor in economic growth and indicate that policymakers should take it into account when designing economic policy and when explaining the effectiveness of the economic policies implemented.
[Keywords: economic growth, cultural background, post-materialism, cultural backlash, OECD]
This paper studies the long-run effects of a “big-push” program providing a large asset transfer to the poorest Indian households. In a randomized controlled trial that follows these households over ten years, we find positive effects on consumption (0.6 SD), food security (0.1 SD), income (0.3 SD), and health (0.2 SD). These effects grow for the first seven years following the transfer and persist until year ten. One main channel for persistence is that treated households take better advantage of opportunities to diversify into more lucrative wage employment, especially through migration.
We investigate the impact of historic slave trade on contemporary educational outcomes in Africa by replicating the empirical approach in Nunn 2008 and Nunn & Wantchekon 2011. We show that slavery’s long-term legacy for literacy depends on how spatial effects are accounted for.
In cross-country regressions, exposure to historic slave trade negatively predicts contemporary literacy. However, within countries, individuals whose ethnic ancestors were historically more exposed to slave exports have higher education levels today compared to individuals from ethnicities less exposed to slave trade in the past.
We argue that these somewhat puzzling findings resonate with emerging critiques of persistence studies that link historical variables with long-run development outcomes.
[Keywords: slave trade, literacy, life expectancy, persistence]
…Contrary to the cross-country evidence, our subnational estimations that utilize data on individual survey respondents from 2 contemporary waves of Afrobarometer survey (2005 and 2008) reveal a different empirical pattern. We find that, controlling for country fixed effects, the measure of slave exports has a positive and statistically-significant effect on contemporary educational attainment. This effectively implies that, within countries, individuals whose ethnic ancestors were more intensely exposed to slave trade in the past have systematically higher levels of education today (relative to individuals whose ancestors had less exposure to slavery). Splitting the sample into subregions within Africa and re-estimating the main specification, we demonstrate that the positive impact of slavery is mainly driven by coastal countries. Recognizing the possibility that historic exposure to slave trade could determine the location of Christian missions and, thereby, education (Cagé & Rueda 2016; Gallego & Woodberry 2010; Okoye & Pongou 2021), our individual-level regressions consistently include an ethnic group’s exposure to Christian missions and the disease environment through malaria prevalence. Following Nunn and Wantchekon, we also include a battery of individual, ethnicity, and colonial-level controls. The result also survives after directly controlling in the model for the distance of an individual’s ethnic group to the coast during slave trade, to the Saharan trade routes, and to historical reliance on fishing.
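The sign flip between the cross-country and within-country estimates is the classic fixed-effects pattern, and can be reproduced on toy data where a country-level confounder drives the pooled coefficient (everything below is a hypothetical illustration, not the replication data):

```python
# Country fixed effects flipping the sign of a pooled coefficient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in range(40):                               # 40 'countries'
    exposure_c = rng.uniform(0, 10)               # national slave-trade exposure
    develop_c = -0.5 * exposure_c + rng.normal()  # national development hurt by it
    for _ in range(200):                          # ethnic groups within the country
        expo = exposure_c + rng.normal()
        # Within a country, more-exposed groups end up slightly MORE educated:
        edu = develop_c + 0.1 * (expo - exposure_c) + rng.normal()
        rows.append((c, expo, edu))
df = pd.DataFrame(rows, columns=["country", "exposure", "education"])

pooled = smf.ols("education ~ exposure", data=df).fit()
within = smf.ols("education ~ exposure + C(country)", data=df).fit()
print(pooled.params["exposure"], within.params["exposure"])  # negative vs. positive
```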
While these findings might appear as puzzling or counter-intuitive, they resonate with emerging concerns on persistence studies that the recent literature in historical political economy has highlighted (Abad & Maurer 2021; Kelly 2019). An important concern underlined by these critiques relates to spatial or geography-related factors. Effects of historical variables are relative to where the comparison units are located, how they are defined, and the pattern of spatial dependence (Kelly 2020). Taking a cue from this, Abad & Maurer 2021 re-estimate the main specification in some prominent persistence papers and show that the inclusion of World Bank’s regional classifications as additional controls weakens the results. This helps to demonstrate that persistence papers can be sensitive to spatial dependence, manifested in this instance through “variation due to being in the same part of the world” (p. 58). Another important concern with persistence studies is the “compression of history” that emanates from regressing outcomes measured several centuries later on a historical variable (Austin 2008). This is especially evident in the case of slavery, which predated the colonial period. Considerable time has elapsed between the initial exposure to slave trade and modern day outcomes. It is important to determine what might have happened in the intervening periods. There are at least 3 major time spans that are important for assessing slavery’s impact on education: the pre-colonial exposure to slave trade, colonialism, and the creation of national borders for modern African states. Our results are consistent with the suggestion that what happens after independence is important for appreciating slavery’s long-run impact on education. Overall, our empirical replications of 2 influential works on slavery by Nathan Nunn in the context of education carry some nontrivial implications for the persistence literature, and highlight the importance of both spatial and temporal factors.
We investigate if organized crime groups (OCG) are able to hire good accountants.
We use data about criminal records to identify Italian accountants with connections to OCG. While the work accountants do for the OCG ecosystem is not observable, we can determine if OCG hire “good” accountants by assessing the overall quality of their work as external monitors of legal businesses.
We find that firms serviced by accountants with OCG connections have higher quality audited financial statements compared to a control group of firms serviced by accountants with no OCG connections. The findings provide evidence that OCG are able to hire good accountants, despite the downside risk of OCG associations. Results are robust to controls for self-selection, for other determinants of auditor expertise, direct connections of directors and shareholders to OCG, and corporate governance mechanisms that might influence auditor choice and audit quality.
We investigate the trends and drivers of racial diversity on U.S. corporate boards.
We document that U.S. boards are persistently racially homogeneous, but that this is changing. About 10% of directors on the average board during our sample period are non-white; however, new director appointments of racial minorities increased from 12% in 2013 to over 40% in 2021. Smaller, value firms are less likely to appoint minority directors, and through 2019 firms with racially homogeneous boards were also less likely to do so. In 2020, this trend sharply reverses, such that by 2021 firms with racially homogeneous boards actually seek out minority directors. This reversal coincides with the commencement of the racial justice movement as well as diversity initiatives implemented by the NYSE, Nasdaq, and the state of California.
Our analysis of these initiatives reveals that the racial justice movement was the primary cause of the changes in minority director appointment behavior. Conservative estimates imply that it led to a 120% increase in the number of black directors appointed to boards, but did little to help other minority groups. In contrast, the California diversity mandate has thus far primarily benefited racial groups that are not traditionally underrepresented and has suppressed appointments of black directors. Newly appointed minority directors have similar qualifications to those appointed before the racial justice movement.
Markets did not systematically react to any of the events that we investigate.
Our analysis suggests that search frictions and racial bias are important contributors to the persistent lack of board racial diversity that we document.
Conventional wisdom suggests that labor market distress drives workers into temporary self-employment, lowering entrepreneurial quality.
Analyzing employment histories for 640,000 U.S. workers, we document that graduating from college during a period of high unemployment does increase entry into entrepreneurship. However, firms founded by these forced entrepreneurs are more likely than those founded by voluntary entrepreneurs to survive, innovate, and receive venture backing. Explaining these results, we confirm that labor shocks disproportionately impact high earners, and that these workers start more successful firms.
Overall, we document untapped entrepreneurial potential across the top of the income distribution and the role of recessions in reversing this missing entrepreneurship.
Over 100 million research participants around the world have had research array-based genotyping (GT) or genome sequencing (GS), but only a small fraction of these have been offered return of actionable genomic findings (gRoR).
Between 2017 and 2021, we analyzed genomic results from 36,417 participants in the Mass General Brigham Biobank and offered to confirm and return pathogenic and likely pathogenic variants (PLPVs) in 59 genes.
Variant verification prior to participant recontact revealed that GT falsely identified PLPVs in 44.9% of samples, and GT failed to identify 72.0% of PLPVs detected in a subset of samples that were also sequenced. GT and GS detected verified PLPVs in 1% and 2.5% of the cohort, respectively. Of 256 participants who were alerted that they carried actionable PLPVs, 37.5% actively or passively declined further disclosure. 76.3% of those carrying PLPVs were unaware that they were carrying the variant, and over half of those met published professional criteria for genetic testing but had never been tested.
This gRoR protocol cost ~$129,000 USD per year in laboratory testing and research staff support, representing $14 per participant whose DNA was analyzed or $3,224 per participant in whom a PLPV was confirmed and disclosed.
These data provide logistical details around gRoR that could help other investigators planning to return genomic results.
Despite the growth of specialised law enforcement and commercial art crime databases registering luxury products, luxury goods remain an often-overlooked category in art crime research.
This chapter analyses the market for luxury products, focusing specifically on watches, jewellery, and designer clothing, on the defunct anonymous marketplace Evolution, which was active between January 2014 and March 2015. We argue that this marketplace works as a way to buy exclusivity through the purchase of both original and counterfeited luxury goods, here called ‘conspicuous goods’. The goods we focus on in our analysis carry cultural value, and their possession allows consumers to display a higher level of distinction. However, rather than looking at consumers who desire to differentiate themselves by purchasing these objects, we were more interested in how the market is structured to best sell these products.
Therefore, we implemented a series of statistical analyses of the market’s supply, focusing on the types of objects traded, their brands, and average prices in Bitcoin, finding that a brand effect on price is at work in both counterfeited and original conspicuous goods.
This signals that the market is aware of the dynamics of conspicuous goods and its sellers behave accordingly.
[Keywords: darknet market, conspicuous goods, Evolution, art crime, branding]
Mate-choice copying occurs when people rely on the mate choices of others (social information) to inform their own mate decisions. The present study investigated women’s strategic trade-off between such social learning and using the personal information of a potential mate.
We conducted 2 experiments to investigate how mate-choice copying was affected by the personal information (eg. trait/financial information, negative/positive valence of this information, and attractiveness) of a potential male mate in short-term/long-term mate selection.
The results demonstrated that when women had no trait/financial information other than photos of potential mates, they showed mate-choice copying; but when women obtained personality-trait or financial-situation information (whether negative or positive) about a potential mate, their mate-choice copying disappeared. This effect was only observed for low-attractiveness and long-term potential partners.
These results demonstrated human social learning strategies in mate selection through a trade-off between social information and personal information.
[Typical Bayesian reasoning: freeriding off priors/stereotypes (mate-copying), but updating as individuating information is available.]
A criminal record can be a serious impediment to securing stable employment, with negative implications for the economic stability of individuals and their families. State policies intended to address this issue have had mixed results, however. Using panel data from the Fragile Families study merged with longitudinal data on state-level policies, this study investigates the association between criminal-record-based employment discrimination policies and the employment of men both with and without criminal records. These state policies broadly regulate what kinds of records can be legally used for hiring and licensing decisions, but have received little attention in prior research. Findings indicate that men with criminal records were less likely to be working if they lived in states with more policies in place to regulate the legal use of those records. Consistent with research linking policies regulating access to records to racial discrimination, black men living in protective states reported this employment penalty even if they did not have criminal records themselves. Thus, these policies, at best, may fail to disrupt entrenched employment disparities and, at worst, may exacerbate racial discrimination.
While female property ownership is associated with positive outcomes for women, their right to inherit property in patrilineal societies may also result in more constraining marriage norms.
I test the following hypothesis: Where a woman inherits property, her male relatives are more likely to arrange her marriage to a cousin in order to keep her share of property within the male lineage. The increase in unearned income due to female inheritance also reduces women’s economic participation, especially in blue-collar jobs where women’s work is subject to social stigmas.
Using a difference-in-differences design that exploits exogenous variation induced by a reform of inheritance laws in India in 2005, the study finds that women exposed to the female inheritance law are more likely to marry their paternal cousins and less likely to work, especially in agriculture.
The paper also discusses possible implications for the evolution of marriage and gender norms in Islamic societies, where female inheritance is mandated by Islamic law.
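[To make the identification strategy concrete, here is a minimal difference-in-differences sketch in Python; the dataset, column names, and clustering choice are illustrative assumptions, not the paper’s actual specification:]

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("marriage_panel.csv")  # hypothetical individual-level data

# The coefficient on exposed:post_2005 estimates the reform's effect on the
# probability of marrying a paternal cousin, net of state and cohort effects.
model = smf.ols(
    "cousin_marriage ~ exposed * post_2005 + C(state) + C(birth_cohort)",
    data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(result.params["exposed:post_2005"])
```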
Estimates of crime’s burden inform public and private decisions about crime-prevention measures. More than counts of criminal offenses, the aggregate cost of crime conveys the scale of problems from crime and the value of deterrence.
This article offers an estimate of the total annual cost of crime in the United States, including the direct costs of law enforcement, criminal justice, and victims’ losses and the indirect costs of private deterrence, fear and agony, and time lost to avoidance and recovery. The findings update crime-cost estimates of past decades while expanding the scope of coverage to include categories missing from past studies…New elements that have not appeared in previous comprehensive studies of the cost of crime include the costs of premature deaths and suicides caused by incarceration, the rapes and sexual assaults taking place in prison, and the decreased post-incarceration earnings of convicted criminals.
The estimated annual cost of crime is $4.71–$5.76 trillion including transfers from victims to criminals and $2.86–$3.92 trillion net of transfers.
…Crime exacts a toll on society far greater than its direct repercussions. An environment of crime and concomitant distrust prompts expenditures on prevention, recovery, justice, and corrections. Beyond asset transfers from victim to criminal, losses to crime comprise lives, health, fear, work, human capital, and time…These costs are comparable to the $3.83 trillion spent on health care (Centers for Medicare and Medicaid Services 2020) and the $2.71 trillion spent on food and shelter (US Department of Labor 2020a) annually in the United States.
… The enormity of crime’s cost adds relevance to the distribution of crime’s burdens. Morgan & Truman 2020 provide a breakdown of crime rates by demographic characteristics. Rates of violent-crime victimization per 1,000 persons are the highest among the group that includes Pacific Islanders, American Indians, Alaska Natives, and persons with 2 or more races (66.3) and lowest for Asian Americans (7.5). Households with incomes less than $25,000 per year experience 37.8 violent crimes per 1,000 people, while every other income level has a rate between 16.2 and 19.7. Women and men have similar rates of violent-crime victimization, 20.8 and 21.2, respectively, although women experience 88.6% of all reported rapes and sexual assaults. Rates of serious-crime victimization are highest for 18–24-year-olds (37.2) and lowest for those 65 and older (6.0). As the broader cost implications of crime come to light, added protection or assistance for groups with inordinate burdens may be justified.
The findings also indicate the portion of crime’s burden borne by crime victims, taxpayers via the government’s crime-related expenditures, criminals and their families, and citizens trying to avoid crime. Crime victims bear 58.3% of the cost of crime in the form of psychic costs, transfers to criminals, and the costs of recovery. Government expenditures, such as those on policing and corrections, amount to 19.9% of the total cost of crime. Criminals and their families internalize 13.0% of the cost of crime, largely because of the expenses of drug use, prenatal exposure to drugs, and losses associated with incarceration. Consumers shoulder the remaining 8.8% of the cost of crime by purchasing preventative goods and services and through the time lost to preventative measures.
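[The reported burden shares sum exactly to 100%; a quick arithmetic check, applying them to the gross cost range (an illustrative calculation, not the authors’ own table):]

```python
# Who bears the $4.71-$5.76 trillion gross cost of crime, by reported share.
shares = {"victims": 0.583, "government": 0.199,
          "criminals and their families": 0.130, "consumers/avoidance": 0.088}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares sum to 100%

for total in (4.71, 5.76):  # trillions of dollars, gross of transfers
    print(f"total = ${total} trillion:")
    for who, s in shares.items():
        print(f"  {who}: ${total * s:.2f} trillion")
```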
In 7 studies (including a large, nationally representative sample of more than 90,000 respondents from 60 countries), we explore how personal relative deprivation influences zero-sum thinking—the belief that one person’s gains can only be obtained at other people’s expense.
We find that personal relative deprivation fosters a belief that economic success is zero-sum, and that this is true regardless of participants’ household income, political ideology, or subjective social class. Moreover, in a large and preregistered study, we find that the effect of personal relative deprivation on zero-sum thinking is mediated by lay perceptions of society. The more people see themselves as having been unfairly disadvantaged relative to others, the more they view the world as unjust and economic success as determined by external forces beyond one’s control. In turn, these cynical views of society lead people to believe that economic success is zero-sum.
We discuss the implications of these findings for research on social comparisons, the distribution of resources, and the psychological consequences of feeling personally deprived.
[Keywords: personal relative deprivation, zero-sum beliefs, economic success, social comparison]
This study exploits a natural experiment that occurred on the Patreon platform.
Patreon creators must decide whether to make their earnings visible to the public.
We find evidence [using Graphtreon] that removing earnings visibility increases subscribers [although total earnings are then invisible].
The provision of social information does not lead to an increase in subscribers.
In January 2017, the subscription-based crowdfunding platform Patreon gave its users (creators) the ability to hide their earnings from existing and potential subscribers. Prior to this, all monthly earnings were visible.
We investigate what effect this policy change had on creators’ subscriber numbers over the following 6 months. Using double-robust and endogenous treatment estimation techniques, we find evidence that creators who removed the visibility of their earnings had more subscribers as a result. This suggests that the provision of social information does not lead to an increase in subscribers.
Several countries have mandated sex quotas on corporate boards of directors.
We systematically reviewed empirical studies that compared company profitability and financial performance before and after introducing legislated quotas. The search yielded 348 unique hits and 9 studies were retained, including 20 effects.
Four were null, 11 were negative, and 5 were positive, all of the latter for Italian and French companies.
We conclude that quotas for women on corporate boards have mainly decreased company performance and that several moderating factors must be taken into account when assessing causal effects of quotas on company performance.
This paper examines the relation between CEOs’ individualistic cultural backgrounds and corporate innovation.
Using hand-collected data on birthplaces of US-born CEOs, we provide robust evidence that CEOs born in frontier counties with a higher level of individualistic culture promote innovation performance.
Firms led by such CEOs increase both quantity and quality of innovation outputs, measured by the number of patents, citation-weighted patents and the market value of patents. Besides innovation performance, we further show that CEO’s individualistic background causes a change in the innovation style, leading the firm to focus more on breakthrough innovation.
Our extended analysis suggests that CEOs’ individualistic background promotes corporate innovation through building an innovation-orientated corporate culture and accumulating human capital by increasing the inflow of inventors.
Objective: The connection between personality traits and performance has fascinated scholars in a variety of disciplines for over a century. The present research synthesizes results from 54 meta-analyses (k = 2,028, n = 554,778) to examine the association of Big Five traits with overall performance.
Method: Quantitative aggregation procedures were used to assess the association of Big Five traits with performance, both overall and in specific performance categories.
Results: Whereas Conscientiousness yielded the strongest effect (ρ = 0.19), the remaining Big Five traits yielded comparable effects (ρ = 0.10, 0.10, −0.12, and 0.13 for Extraversion, Agreeableness, Neuroticism, and Openness). These associations varied dramatically by performance category. Whereas Conscientiousness was more strongly associated with academic than job performance (0.28 vs 0.20), Extraversion (−0.01 vs 0.14) and Neuroticism (−0.03 vs −0.15) were less strongly associated with academic performance. Finally, associations of personality with specific performance outcomes largely replicated across independent meta-analyses.
Conclusions: Our comprehensive synthesis demonstrates that Big Five traits have robust associations with performance and documents how these associations fluctuate across personality and performance dimensions.
In neural autopilot theory, habits save cognitive effort by repeating reliably-rewarding choices.
Strong habits are marked by insensitivity to reward change, but large-scale field data do not show this effect.
Habits are predictable from context variables, and can thus be identified in everyday behavior, using machine learning.
Identifying contextual cues, and using information about reward reliability, could personalize and improve our ability to change behavior.
This paper is about the background of 2 new ideas from neuroeconomics for understanding habits. The main idea is a 2-process ‘neural autopilot’ model. This model hypothesizes that contextually cued habits occur when the reward from the habitual behavior is numerically reliable (as in related models with an ‘arbitrator’). This computational model is lightly parameterized, has the essential ingredients established in animal learning and cognitive neuroscience, and is simple enough to make nonobvious predictions. An interesting set of predictions is about how consumers react to different kinds of changes in prices and qualities of goods (‘elasticities’). Elasticity analysis extends the classic habit marker, insensitivity to reward devaluation, to other types of sensitivity. The second idea is to use machine learning to discover which contextual variables seem to cue habits in field data.
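[A minimal sketch of such a 2-process ‘neural autopilot’, assuming a softmax deliberate controller and a variance-based reliability trigger; all parameter values are my illustrative assumptions, not the paper’s calibration:]

```python
import math, random

alpha = 0.2                   # learning rate (assumed)
tau = 1.0                     # softmax temperature of the deliberate system
reliability_threshold = 0.05  # reward variance below which habit takes over

q   = {"A": 0.0, "B": 0.0}    # learned action values
var = {"A": 1.0, "B": 1.0}    # running reward-variance estimates
last_action = None

def reward(action):
    # Stylized environment: "A" pays a reliable 1.0; "B" a noisy 0 or 2.
    return 1.0 if action == "A" else random.choice([0.0, 2.0])

for t in range(500):
    if last_action and var[last_action] < reliability_threshold:
        a = last_action       # autopilot: cheaply repeat the reliable choice
    else:                     # deliberate: softmax over learned values
        w = {k: math.exp(v / tau) for k, v in q.items()}
        a = random.choices(list(w), weights=list(w.values()))[0]
    err = reward(a) - q[a]
    q[a]   += alpha * err                  # value update
    var[a] += alpha * (err**2 - var[a])    # reliability update
    last_action = a

print(q, var)  # "A" typically ends with low variance, ie. habitized
```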
[See also Grow & Bavel 2020 for how assortative mating can drive the ‘gender cliff’ without any bias.] Bertrand et al 2015 document that in the United States there is a discontinuity to the right of 0.5 in the distribution of households according to the female share of total earnings, which they attribute to the existence of a gender identity norm. We provide an alternative explanation for this discontinuity.
Using linked employer-employee data from Finland, we show that the discontinuity emerges as a result of equalization and convergence of earnings in coworking couples, and it is associated with an increase in the relative earnings of women, rather than a decrease as predicted by the norm.
…the existence of a discontinuity to the right of 0.5 in the relative earnings distribution has been widely cited both in the media and in academia as evidence for the relevance of the gender identity norm. Some authors have also pointed out that a substantial part of the discontinuity is due to the existence of a point mass of couples exactly at 0.5 (Eriksson & Stenberg 2015, Binder & Lam 2020). As shown in panel A of Figure 1, the discontinuity to the right of 0.5 estimated by Bertrand et al 2015 becomes smaller if spouses with equal earnings are excluded, with the McCrary 2008 estimate dropping from 12.3% to 7.4%.
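[The logic of the McCrary test can be sketched as a density comparison around the cutoff; the actual estimator fits local linear regressions to a finer binning, so this is only the idea, with a hypothetical data file:]

```python
import numpy as np

share = np.load("female_share.npy")        # hypothetical array in (0, 1)
share = share[np.abs(share - 0.5) > 1e-6]  # exclude exactly-equal earners

# Compare (log) bin counts just left and right of the 0.5 cutoff.
counts, edges = np.histogram(share, bins=np.linspace(0.4, 0.6, 41))
mid = len(counts) // 2                     # edges[mid] == 0.5
left, right = counts[mid - 3:mid].mean(), counts[mid:mid + 3].mean()
print("log-density drop at 0.5:", np.log(right) - np.log(left))
```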
In this paper, we provide evidence contradicting the social norm interpretation of the discontinuity (and the point mass) at 0.5 and we propose an alternative explanation. We use linked employer-employee data from Finland that has detailed information on the individual employment and earnings history of the entire population of Finnish individuals for the period between 1988 and 2014.
…First, we examine the distribution of relative earnings at the beginning of cohabitation, which provides a better proxy of the time of union formation than marriage. We find no statistically-significant discontinuity at this stage of the relationship, suggesting that the gender norm does not affect the formation of couples in a discontinuous way.
Second, the norm does not seem to play a role for separations either. Separation rates do not exhibit any discontinuity around the 0.5 threshold of relative earnings. Instead, the relationship between the probability of separation and the relative earnings distribution exhibits a U-shape, with higher separation rates among couples with large earnings differentials either in favor of the husband or in favor of the wife. Third, the discontinuity in the distribution only arises in couples where both spouses are self-employed (around 6% of all employed couples) or work together in the same firm (around 9%). Hereafter, we refer to these 2 groups as coworking couples. For the rest of the population, there is no evidence of any unusual phenomena in the vicinity of the 0.5 point. The pattern looks different for these 2 groups of coworking couples. In the case of self-employed couples, the discontinuity to the right of 0.5 is mainly due to a substantial fraction of couples bunching exactly at 0.5, while among spouses working for the same employer, the distribution exhibits a cliff at 0.5 with only a small fraction of couples having identical earnings.
Fourth, the observed dynamics rule out a more specific formulation of the gender identity norm theory, according to which the norm is activated only when spouses are jointly self-employed or work in the same firm. Theoretically, this may occur if coworking makes the comparison between spouses more salient or if adjustments in accordance with the norm are feasible only in self-employed couples. We find that the discontinuity does not arise as a result of a reduction in the share of couples where women slightly outearn their husbands, as the gender identity norm would predict. Instead, when couples on both sides of the distribution become self-employed, they tend to equalize earnings leading to an excess mass at 0.5. Similarly, when couples start working together in the same firm, there is a compression of earnings toward 0.5. Since initially there are more couples where women earn less than men, this earnings compression creates a larger mass of couples just to the left of 0.5 than to the right of this point, which statistical tests identify as a discontinuity. Moreover, we also observe that coworking leads to an increase in female earnings above the earnings of similar women in non-coworking couples.
Overall, our results contradict the idea that the gender identity norm exhibits a discontinuity at the point of equal earnings. Some couples may prefer that the husband earns more than his wife, but small variations around the 0.5 point do not seem to make that much of a difference.
…However, in the United States, unlike in Finland, there are no legal defaults for income sharing in partnerships, and households can jointly file their income tax declarations.
For couples coworking in the same firm, the impact of earnings compression is likely to have a similar effect as in Finland. To assess the relevance of income convergence in coworking couples, we use the SIPP/SSA/IRS dataset and we proxy whether spouses work together using available information on industry and occupation. It seems reasonable to expect that the share of coworking couples is substantially higher among couples working in the same industry and occupation. Instead, couples working in different industries are unlikely to work in the same firm, although some self-employed couples may be included in this group.
We observe that around 20% of all couples work in the same industry and occupation, while 60% of couples work in different industries. Figure 9 shows the distribution of relative earnings separately for these 2 groups of couples. The drop in the distribution at 0.5 is statistically-significantly larger among couples working in the same industry and occupation. According to the McCrary test, the estimate of the drop is 14%, which is about twice as large as the drop observed in the overall population. This evidence suggests that factors leading to earnings convergence in coworking couples are also likely to play an important role in explaining the existence of a discontinuity in the United States.
Personality traits have been associated with differences in residential mobility, but details are lacking on the types of residential moves associated with personality differences.
The present study pooled data from 4 prospective cohort studies from the United Kingdom (UK Household Longitudinal Survey, and British Household Panel Survey), Germany (Socioeconomic Panel Study), and Australia (Household, Income, and Labour Dynamics in Australia) to assess whether personality traits of the Five Factor Model are differently related to residential moves motivated by different reasons to move: employment, education, family, housing, and neighborhood (total n = 86,073).
Openness To Experience was associated with all moves but particularly with moves due to employment and education. Extraversion was associated with higher overall mobility, except for moves motivated by employment and education. Lower Emotional Stability predicted higher probability of moving due to neighborhood, housing, and family, while higher Agreeableness was associated with lower probability of moving due to neighborhood and education. Adjusting for education, household income, marital status, employment status, number of children in the household, and housing tenure did not substantially change the associations.
These results suggest that different personality traits may motivate different types of residential moves.
The sport of surfing is best enjoyed with one rider on one wave, but crowding makes that optimal assignment increasingly hard to attain. This study examines the phenomenon of surf localism, whereby competitors are excluded from waves by intimidation and the threat of violence. An alternative way to accommodate crowds is contained in the surfer’s code, which sets informal rules and self-enforced regulations to avoid conflict in the water. Both regimes establish property rights over common pool resources with no state intervention, creating a setting wherein users face the question of cooperation or conflict. The disposition to cooperate and follow norms has been shown to vary substantially across different cultures, though.
Employing data from over 700 surf spots on the European Atlantic coast, this study reports evidence that certain informal cultural norms statistically-significantly reduce the probability of violent exclusion, while formal state institutions mostly are irrelevant. The results also indicate that informal norms become more important with greater resource quality and, possibly, with increasing scarcity.
Using data from the first Census data set that includes complete measures of male biological fertility for a large-scale probability sample of the U.S. population (the 2014 wave of the Study of Income and Program Participation; N = 55,281), this study shows that:
high income men are more likely to marry, are less likely to divorce, if divorced are more likely to remarry, and are less likely to be childless than low income men. Men who remarry marry relatively younger women than other men, on average, although this does not vary by personal income. For men who divorce and have children, high income is not associated with an increased probability of having children with new partners. Income is not associated with the probability of marriage for women and is positively associated with the probability of divorce.
High income women are less likely to remarry after divorce and more likely to be childless than low income women. For women who divorce and have children, high income is associated with a lower chance of having children with new partners, although the relationship is curvilinear.
These results are behavioral evidence that women are more likely than men to prioritize earning capabilities in a long-term mate and suggest that high income men have high value as long-term mates in the U.S.
[Keywords: evolutionary psychology, fertility, marriage, childlessness, divorce, sex differences]
We assess the impact of exogenous variation in oral contraceptive prices—a year-long decline followed by a sharp increase due to a documented collusion case—on fertility decisions and newborns’ outcomes. Our empirical strategy follows an interrupted time-series design, implemented using multiple sources of administrative information. As prices skyrocketed (45% within a few weeks), consumption of the Pill plunged, and weekly conceptions increased (3.2% after a few months).
We show large effects on the number of children born to unmarried mothers, to mothers in their early twenties, and to primiparae women. The incidence of low birth weight and fetal/infant deaths increased (declined) as the cost of birth control pills rose (fell). In addition, we document a disproportional increase in the weekly miscarriage and stillbirth rates. As children reached school age, we find lower school enrollment rates and higher participation in special education programs.
Our evidence suggests these “extra” conceptions were more likely to face adverse conditions during critical periods of development.
…This paper quantifies the Pill’s role in fertility and child outcomes using a sequence of events in which unexpected shocks affected the access to oral contraceptives. In particular, we exploit a well-established case of anticompetitive behavior in the pharmaceutical market, which—after a year-long price war between the 3 largest pharmaceutical retailers in Chile—triggered a sharp and unexpected increase in the prices of birth control pills.
The price war took place during 2007, and it effectively reduced the prices of medicines across the board. In particular, prices of oral contraceptives fell by 24% during that year. By the end of 2007, the 3 largest pharmacies agreed to end the price war and engaged in a collusion scheme in which they strategically increased the prices of 222 medicines. Oral contraceptives were included in this group, experiencing price increases ranging from 30 to 100% in just a few weeks (45% on average in the first 3 weeks). We use daily information on prices and quantities sold in the country by the 3 companies from almost 40 million transactions to determine the date when the price changes for birth control pills took place. Using these data, we implement an interrupted time-series analysis (Bloom, 2003; Cauley & Iksoon 1988), which takes into account the seasonality of births, the general trends of fertility, as well as dynamics that arise because it takes time for the menstrual cycle to be fully regulated after discontinuing the Pill’s intake. We complement the pharmacies’ transaction data with administrative information from birth and death certificates collected between 2005 and 2008 and administrative records on school enrollment from 2013 to 2016. Our empirical strategy considers 2 different treatments: one stemming from a sustained and steady decline in prices (2007) and another one from a massive and sudden increase (first weeks of 2008).
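[A minimal interrupted-time-series sketch of this kind of specification; the column names, seasonality control, and shock date are illustrative assumptions, not the authors’ exact model:]

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_conceptions.csv")  # hypothetical weekly series
SHOCK_WEEK = 157                            # assumed index of the price jump

df["t"] = range(len(df))
df["post"] = (df["t"] >= SHOCK_WEEK).astype(int)
df["t_post"] = df["t"] * df["post"]

# `post` captures the level shift after the collusion price increase,
# `t_post` the change in trend; week-of-year dummies absorb seasonality.
its = smf.ols("conceptions ~ t + post + t_post + C(week_of_year)",
              data=df).fit()
print(its.params[["post", "t_post"]])
```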
The assurance contract mechanism is often used to crowdfund public goods. This mechanism has weak implementation properties that can lead to mis-coordination and failure to produce socially valuable projects. To encourage early contributions, we extend the assurance contract mechanism with refund bonuses awarded only to early contributors in the event of fundraising failure.
The experimental results show that our proposed solution is very effective in inducing early cooperation and increasing fundraising success. Limiting refund bonuses to early contributors works as well as offering refund bonuses to all potential contributors, while also reducing the amount of bonuses paid. We find that refund bonuses can increase the rate of campaign success by 50% or more. Moreover, we find that even taking into account campaign failures, refund bonuses can be financially self-sustainable, suggesting the real-world value of extending assurance contracts with refund bonuses.
[Keywords: public goods, donations, assurance contract, free riding, conditional cooperation, early contributions, refund bonuses, experiment, laboratory]
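[The incentive logic can be seen in a toy payoff function: refunds make contributing weakly safe, and the early-bird bonus makes early pledging strictly better than waiting whenever failure is possible. All numbers are illustrative:]

```python
def payoff(contribution, value, funded, early, bonus_rate=0.2):
    """Contributor payoff in the refund-bonus assurance contract."""
    if funded:
        return value - contribution           # public good is produced
    bonus = bonus_rate * contribution if early else 0.0
    return bonus                              # full refund, plus early bonus

print(payoff(10, 25, funded=True,  early=True))   # 15: project succeeds
print(payoff(10, 25, funded=False, early=True))   # 2.0: refund + bonus
print(payoff(10, 25, funded=False, early=False))  # 0.0: refund only
```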
Music streaming platforms can affect artists’ success through playlist ranking decisions.
Dominant platforms may exercise their power in a biased fashion.
We test for bias in Spotify’s New Music playlist rankings using outcome-based tests.
We find that Spotify’s New Music rankings favor indie-label music and music by women.
Despite challenges faced by women and indie artists, Spotify’s New Music curation appears to favor them.
Platforms are growing increasingly powerful, raising questions about whether their power might be exercised with bias.
While bias is inherently difficult to measure, we identify a context within the music industry that is amenable to bias testing. Our approach requires ex ante platform assessments of commercial promise—such as the rank order in which products are presented—along with information on eventual product success. A platform is biased against a product type if the type attains greater success, conditional on ex ante assessment. Theoretical considerations and voiced industry concerns suggest the possibility of platform biases in favor of major record labels, and industry participants also point to bias against women.
Using data on Spotify curators’ rank of songs on New Music Friday playlists in 2017, we find that Spotify’s New Music Friday rankings favor independent-label music, along with some evidence of bias in favor of music by women.
Despite challenges that independent-label artists and women face in the music industry, Spotify’s New Music curation appears to favor them.
[Keywords: online platforms, platform power, platform bias, music streaming, playlists]
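[The outcome-based test reduces to a regression of eventual success on the platform’s ex ante rank plus type indicators; conditional on rank, a positive type coefficient means that type outperforms the curator’s assessment, ie. the ranking was biased against it. Column names here are assumed for illustration:]

```python
import pandas as pd
import statsmodels.formula.api as smf

songs = pd.read_csv("new_music_friday.csv")  # hypothetical playlist data
bias = smf.ols(
    "log_streams ~ rank + indie_label + female_artist + C(week)",
    data=songs).fit()
# Negative coefficients would indicate ranks *favoring* these types.
print(bias.params[["indie_label", "female_artist"]])
```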
A large literature establishes that cognitive and non-cognitive skills are strongly correlated with educational attainment and professional achievement. Isolating the causal effects of these traits on career outcomes is complicated by reverse causality and selection issues.
We suggest a new approach: using within-family differences in the genetic tendency to exhibit the relevant traits as a source of exogenous variation. Genes are fixed over the life cycle and genetic differences between full siblings are random, making it possible to establish the causal effects of within-family variation in genetic tendencies.
We link genetic data from individuals in the Swedish Twin Registry to government registry data and find evidence for causal effects of the genetic predispositions towards cognitive skills, personality traits, and economic preferences on professional achievement and educational attainment. Our results also demonstrate that education and labor market outcomes are partially the result of a genetic lottery.
…We find strong evidence for a causal effect of the predisposition toward stronger cognitive skills on income, occupational status, and educational outcomes. We also find evidence for statistically-significant effects of the predispositions toward several non-cognitive traits: individuals who tend to be more risk seeking, mentally stable, and open tend to work in more prestigious occupations. The opposite is true for individuals with a tendency towards narcissism or discounting the future. A tendency towards being open and forward-looking also increases educational attainment (EA). Finally, we document large causal effects of the general genetic tendency towards higher EA on all the outcomes we study. This illustrates that success in education and professional careers is in part down to “genetic luck”. We also investigate heterogeneity in these effects by gender and socioeconomic status (SES) of the parents. We find some evidence of a stronger effect of the predisposition toward cognitive skills for high-SES individuals, in particular on educational outcomes. We also find that the effects of the genetic tendencies on income tend to be stronger for women, implying that gender differences in labor market outcomes are generally larger for less skilled individuals. The exception is the link between genetic tendencies and management positions: our results suggest that cognitive and non-cognitive skills strongly increase the likelihood for men to work in a management position but that effects are much weaker for women.
…The polygenic indices we use stem from the work of the Social Science Genetic Association Consortium (SSGAC) (Becker et al 2021).
…2.4 Sample: For the full-sample analyses looking at educational outcomes, we will limit the dataset to genotyped individuals born between 1934 and 1995 (that is, individuals who have likely completed their education) whom we can link to their parents’ records for the construction of the socioeconomic controls. This subsample contains 29,393 individuals. For the analyses looking at labor market outcomes, we will limit the dataset to individuals born between 1934 and 1990 (that is, individuals who have likely completed their education and worked for a few years). This subsample contains 25,515 individuals. For our causal analyses using within-family variation, we will limit the sample to complete sets of genotyped dizygotic twins. This sample contains 11,344 individuals (5,672 twin pairs) for the education analyses and 9,594 individuals (4,797 twin pairs) for the income analyses.
…The scaled estimates in Figure 2 show that the magnitudes of the effects are economically meaningful. A one-standard-deviation difference in the cognitive performance PGI is associated with a roughly 10 percentage points increase in the likelihood of having graduated from university. The effect of math skills is roughly 5 percentage points. These 2 effects are estimated simultaneously, meaning that an individual with one-standard-deviation higher cognitive performance and math skills is around 15 percentage points more likely to graduate from university. The effects of the statistically-significant non-cognitive traits (openness, narcissism, and time discounting as proxied by smoking) are similarly large. Finally, a one-standard-deviation increase in the educational attainment PGI is associated with 0.4 to 0.6 additional years of education.
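[The within-family design boils down to a first-difference regression across dizygotic twin pairs, so that shared family background and parental genotype drop out; a sketch with assumed column names:]

```python
import pandas as pd
import statsmodels.formula.api as smf

twins = pd.read_csv("dz_twin_pairs.csv")  # hypothetical, one row per pair
twins["d_edu"] = twins["edu_years_1"] - twins["edu_years_2"]
twins["d_pgi"] = twins["cognitive_pgi_1"] - twins["cognitive_pgi_2"]

# Within-pair PGI differences are random (Mendelian segregation), so the
# slope has a causal interpretation under the design's assumptions.
wf = smf.ols("d_edu ~ d_pgi", data=twins).fit()
print(wf.params["d_pgi"])
```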
[Given the large sample size, it’d be better to skip the PGSes—which still capture so little of the genetics—and use sibling IBD or RDR to establish estimates of total causal effects.]
Following domestication in the lower Yangtze River valley 9,400 years ago, rice farming spread throughout China and changed lifestyle patterns among Neolithic populations. Here, we report evidence that the advent of rice domestication and cultivation may have shaped humans not only culturally but also genetically.
Leveraging recent findings from molecular genetics, we construct a number of polygenic scores (PGSs) of behavioural traits and examine their associations with rice cultivation based on a sample of 4,101 individuals recently collected from mainland China. A total of 9 polygenic traits and genotypes are investigated in this study, including PGSs of height, body mass index, depression, time discounting, reproduction, educational attainment, risk preference, ADH1B rs1229984 and ALDH2 rs671.
Two-stage least-squares estimates of the effect of the county-level percentage of cultivated land devoted to paddy rice on the PGS of age at first birth (b = −0.029, p = 0.021) and on ALDH2 rs671 (b = 0.182, p < 0.001) are both statistically-significant and robust to a wide range of potential confounds and alternative explanations.
These findings imply that rice farming may influence human evolution in relatively recent human history.
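[A two-stage least-squares sketch of this kind of specification, using the `linearmodels` package; the excluded instrument shown here (agro-climatic rice suitability) and all column names are my assumptions for illustration, since the abstract does not name them:]

```python
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("rice_pgs.csv")  # hypothetical individual-level data
df["const"] = 1.0

res = IV2SLS(dependent=df["pgs_age_first_birth"],
             exog=df[["const", "age", "sex"]],
             endog=df["rice_share"],        # county % of land in paddy rice
             instruments=df["rice_suitability"]).fit()
print(res.params["rice_share"])
```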
AI is undergoing a paradigm shift with the rise of models (eg. BERT, DALL·E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (eg. language, vision, robotics, reasoning, human interaction) and technical principles (eg. model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (eg. law, healthcare, education) and societal impact (eg. inequity, misuse, economic and environmental impact, legal and ethical considerations).
Though foundation models are based on conventional deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
The history of AI is one of increasing emergence and homogenization. With the introduction of machine learning, we moved from a large proliferation of specialized algorithms that specified how to compute answers to a small number of general algorithms that learned how to compute answers (ie. the algorithm for computing answers emerged from the learning algorithm). With the introduction of deep learning, we moved from a large proliferation of hand-engineered features for learning algorithms to a small number of architectures that could be pointed at a new domain and discover good features for that domain. Recently, the trend has continued: we have moved from a large proliferation of trained models for different tasks to a few large “foundation models” which learn general algorithms useful for solving specific tasks. BERT and GPT-3 are central examples of foundation models in language; many NLP tasks that previously required different models are now solved using finetuned or prompted versions of BERT and/or GPT-3.
Note that, while language is the main example of a domain with foundation models today, we should expect foundation models to be developed in an increasing number of domains over time. The authors call these “foundation” models to emphasize that (1) they form a fundamental building block for applications and (2) they are not themselves ready for deployment; they are simply a foundation on which applications can be built. Foundation models have been enabled only recently because they depend on having large scale in order to make use of large unlabeled datasets using self-supervised learning to enable effective transfer to new tasks. It is particularly challenging to understand and predict the capabilities exhibited by foundation models because their multitask nature emerges from the large-scale training rather than being designed in from the start, making the capabilities hard to anticipate. This is particularly unsettling because foundation models also lead to substantially increased homogenization, where everyone is using the same few models, and so any new emergent capability (or risk) is quickly distributed to everyone.
The authors argue that academia is uniquely suited to study and understand the risks of foundation models. Foundation models are going to interact with society, both in terms of the data used to create them and the effects on people who use applications built upon them. Thus, analysis of them will need to be interdisciplinary; this is best achieved in academia due to the concentration of people working in the various relevant areas. In addition, market-driven incentives need not align well with societal benefit, whereas the research mission of universities is the production and dissemination of knowledge and creation of global public goods, allowing academia to study directions that would have large societal benefit that might not be prioritized by industry.
All of this is just a summary of parts of the introduction to the report. The full report is over 150 pages and goes into detail on capabilities, applications, technologies (including technical risks), and societal implications. I’m not going to summarize it here, because it is long and a lot of it isn’t that relevant to alignment; I’ll instead note down particular points that I found interesting.
(pg. 26) Some studies have suggested that foundation models in language don’t learn linguistic constructions robustly; even if a model uses a construction well once, it may not do so again, especially under distribution shift. In contrast, humans can easily “slot in” new knowledge into existing linguistic constructions.
(pg. 34) This isn’t surprising but is worth repeating: many of the capabilities highlighted in the robotics section are very similar to the ones that we focus on in alignment (task specification, robustness, safety, sample efficiency).
(pg. 42) For tasks involving reasoning (eg. mathematical proofs, program synthesis, drug discovery, computer-aided design), neural nets can be used to guide a search through a large space of possibilities. Foundation models could be helpful because (1) since they are very good at generating sequences, you can encode arbitrary actions (eg. in theorem proving, they can use arbitrary instructions in the proof assistant language rather than being restricted to an existing database of theorems), (2) the heuristics for effective search learned in one domain could transfer well to other domains where data is scarce, and (3) they could accept multimodal input: for example, in theorem proving for geometry, a multimodal foundation model could also incorporate information from geometric diagrams.
(Section 3) A substantial portion of the report is spent discussing potential applications of foundation models. This is the most in-depth version of this I have seen; anyone aiming to forecast the impacts of AI on the real world in the next 5–10 years should likely read this section. It’s notable to me how nearly all of the applications have an emphasis on robustness and reliability, particularly in truth-telling and logical reasoning.
(Section 4.3) We’ve seen a few (AN #152) ways (AN #155) in which foundation models can be adapted. This section provides a good overview of the various methods that have been proposed in the literature. Note that adaptation is useful not just for specializing to a particular task like summarization, but also for enforcing constraints, handling distributional shifts, and more.
(pg. 92) Foundation models are commonly evaluated by their performance on downstream tasks. One limitation of this evaluation paradigm is that it makes it hard to distinguish between the benefits provided by better training, data, adaptation techniques, architectures, etc. (The authors propose a bunch of other evaluation methodologies we could use.)
(Section 4.9) There is a review of AI safety and AI alignment as it relates to foundation models, if you’re interested. (I suspect there won’t be much new for readers of this newsletter.)
(Section 4.10) The section on theory emphasizes studying the pretraining-adaptation interface, which seems quite good to me. I especially liked the emphasis on the fact that pretraining and adaptation work on different distributions, and so it will be important to make good modeling assumptions about how these distributions are related.
[cf. Davidai & Ongis 2021] A core proposition in economics is that voluntary exchanges benefit both parties. We show that people often deny the mutually beneficial nature of exchange, instead espousing the belief that one or both parties fail to benefit from the exchange. Across 4 studies (and 8 further studies in the online supplementary materials), participants read about simple exchanges of goods and services, judging whether each party to the transaction was better off or worse off afterward. These studies revealed that win-win denial is pervasive, with buyers consistently seen as less likely to benefit from transactions than sellers. Several potential psychological mechanisms underlying win-win denial are considered, with the most important influences being mercantilist theories of value (confusing wealth for money) and theory of mind limits (failing to observe that people do not arbitrarily enter exchanges). We argue that these results have widespread implications for politics and society.
[Keywords: folk economics, zero-sum thinking, intuitive theories, theory of mind, decision making]
…Even though economists have been long convinced by Smith’s arguments, battles against mercantilism and trade-protectionism must be fought anew each generation, as Ricardo (1817/2004), Bastiat (1845/2011), Marshall (1879/1949), Friedman (1962); and Krugman (1996) have done in turn. This need to relearn basic economics anew each generation encourages the hypothesis that zero-sum thinking is psychologically natural—a hypothesis endorsed explicitly by economists including Bastiat (1845/2011) and Sowell (2008).
The denial of transactions as win-win can help explain zero-sum thinking—the belief that one party’s gain is another party’s loss. Zero-sum thinking is usually mistaken in economics precisely because individual trades do not make individual parties worse off. Yet it appears to be endemic in people’s thinking about economic matters. Laypeople tend to believe that more profitable companies are less socially responsible (Bhattacharjee et al 2017), when the true correlation is just the opposite. Negotiators often perceive themselves as carving up a “fixed pie”, decreasing the chances of a successful outcome (Bazerman & Neale, 1983; de Dreu et al 2000). People believe that the government cannot benefit one group without harming another (Bazerman et al 2001) and are particularly inclined to think in zero-sum ways about international trade (Baron & Kemp, 2004; Johnson et al 2019) and immigration (Esses et al 2001; Louis et al 2013). But zero-sum thinking also seems to be psychologically natural, occurring across many countries (Rózycka-Tran et al 2015) and political orientations, though manifesting differently among liberals and conservatives (Davidai & Ongis 2019). Zero-sum thinking has been noted in numerous settings (albeit not always fallaciously), including students’ thinking about grades (Meegan, 2010), reasoners thinking about evidence (Pilditch et al 2019), consumers’ thinking about product features (Chernev, 2007; Newman et al 2014), and even couples’ thinking about love (Burleigh et al 2017).
…Overview of Experiments: 4 studies tested win-win denial and its moderators. The general method of these experiments was to ask participants about ordinary exchanges of goods or services—for example, Sally purchasing a shirt from Tony’s store, Eric purchasing a haircut from Paul’s barber shop, or Mark trading his soy sauce for Fred’s vinegar. For each transaction, participants were asked whether or not each party was better off after the transaction. From the standpoint of neoclassical economics, all parties were better off after all exchanges, since people do not voluntarily enter into transactions at a loss, and we sought to avoid conditions under which behavioral amendments to economics would be likely to produce major exceptions. Nonetheless, if people engage in win-win denial, we would expect to see a widespread belief that some parties to these exchanges do not benefit.
The particular pattern of non-benefit can help to test the potential mechanisms for win-win denial. If mercantilism is the culprit, we would expect to see buyers (but not sellers) perceived as worse off and barters as pointlessly failing to benefit either party. On the other hand, the evolutionary mismatch account suggests that people may be better at recognizing positive-sum transactions among like-kind barters rather than monetary transactions, where people might even believe that sellers are made worse off since they give up valuable goods in exchange for intrinsically valueless currency. These hypotheses were tested in Study 1.
Study 2 tested a further implication of mercantilism—that exchanges described in terms of time (labor) rather than money would be seen as more beneficial. Study 3 tested the theory of mind account by attempting to induce participants to take the perspective of the buyer by giving reasons for the buyer’s purchase. Finally, Study 4 varied the prices of monetary exchanges to test the heuristic-substitution account, since very inexpensive products should then be seen as benefiting the consumers at the expense of the seller.
In the online supplementary materials, we report several additional replication studies (Part B), including studies that varied the framing of the transactions or wording of the dependent variable (Studies S1, S4, and S5) and between-subjects replications of key results (Studies S2 and S3). We also pool data across studies to test individual differences in win-win denial (Part C), particularly educational and political predictors.
…Win-win denial seems to be exacerbated by issues in our theory of mind. Specifically, people are naïve realists, making a perspective-taking error in which they interpret their own preferences as ground truth, neglecting that others have different preferences and reasons for their actions. Merely reminding people that the buyers and traders had reasons for their choices (even empty reasons such as “Mary wanted the chocolate bar”) reduced the incidence of win-win denial (Study 3; see also Study S3 in the online supplementary materials). Other results reported in the online supplementary materials were also consistent with this idea. Making the preference of buyers and traders more salient reduced win-win denial (Study S4), as did asking participants to rate the parties’ perceived gain or loss (Study S5). Together, these results suggest that people do not spontaneously reflect on the fact that parties to exchanges have reasons for their behavior, leading them to discount potential gains from trade.
…Perhaps surprisingly, we find in a separate project (Johnson et al 2021 [“Zero-sum thinking in self-perceptions of consumer welfare”]) that consumers often claim that their own past transactions make them either worse off or no better off, and even make similar claims about planned future transactions. Thus, there appears to be a striking attitude-behavior gap here: Whereas people’s lay theories of exchange seem to produce strong intuitions that consumers are often made worse off by their purchases, these attitudes do not seem to manifest (in most cases, fortunately) in their actions. Perhaps this gap is driven by differences in what is considered relevant when evaluating exchanges more abstractly from a distance versus more concretely from a nearby temporal perspective (Trope & Liberman, 2010), with the latter conditions prompting more thoughts about the consumption experience itself (see Future Directions above). In any case, we think this is a genuine puzzle deserving of further research.
Stablecoins are cryptocurrencies designed to trade at par with a reference asset, typically the U.S. Dollar. While they all share the same fundamental objective of maintaining stability against their reference assets, stablecoins differ substantially in terms of their economic design, quality of backing, stability assumptions and legal protections for coin holders.
We surface 2 critical dimensions that underpin the economic design of every stablecoin:
the volatility of the reserve assets against the reference asset, which defines the risk profile of the stablecoin for coin holders; and
the degree to which the stablecoin is exposed to the risk of a death spiral.
To address these risks, fiat-backed stablecoins must rely on reserves of high-quality, liquid assets and be subject to a framework that protects coin holders from credit risk, market risk, operational risk, as well as the insolvency or bankruptcy of the issuer.
Although decentralized stablecoin designs eliminate the need to trust an intermediary, they are either exposed to death spirals, or highly capital inefficient, as they must be highly over-collateralized to account for the lack of an intermediary. While these trade-offs might be acceptable for narrow use cases within the cryptocurrency space, without a breakthrough in decentralized stablecoin design, they are likely to limit the usefulness of these coins for mainstream adoption.
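[The capital-inefficiency point follows from simple arithmetic: to keep a dollar of decentralized stablecoin safe through a given collateral drawdown, the issuer must lock up several dollars of volatile collateral. Numbers below are illustrative:]

```python
def min_collateral(debt, max_drawdown, liquidation_ratio=1.5):
    # Collateral must still cover debt * liquidation_ratio after the crash.
    return debt * liquidation_ratio / (1.0 - max_drawdown)

for drawdown in (0.3, 0.5, 0.7):
    c = min_collateral(100, drawdown)
    print(f"survive a {drawdown:.0%} crash -> ${c:.0f} collateral "
          f"({c / 100:.1f}x over-collateralized)")
```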
A limited number of published studies have presented evidence indicating that restaurant customers discriminate against Black servers by tipping them less than their White coworkers. However, the cross-sectional, localized, and small samples that were analyzed in these extant studies do not support any unqualified claim that consumer racial discrimination in tipping practices is a widespread phenomenon. Thus, in an effort to further clarify the relationship between restaurant servers’ race and customers’ tipping practices, we present results from 3 survey experiments designed to assess the causal effect of servers’ race on customers’ tipping intentions. In 3 independent, demographically diverse, and relatively large samples of U.S. consumers, we found no evidence to conclude that, all else being equal, consumers discriminate against Black restaurant servers by tipping them less than comparable White servers. Furthermore, the null effects of servers’ race on customers’ tipping practices were not found to be sensitive to variation in service quality, dining satisfaction, servers’ sex, customers’ sex, or customers’ race. Our results challenge the generalizability of the previously observed server race effects on customers’ tipping practices and point toward the need for future research that aims to advance our understanding of the conditions under which customers’ tipping practices are sensitive to the perceived race of their server. The implications of our results for restaurant operations and directions for future research are also discussed.
We use the first French experiment with playing card money in its colony of Quebec between 1685 and 1719 to illustrate the link between legal tender restrictions and the price level. Initially, the quantity of playing card money and the government’s poor fiscal condition appears to have had little effect on prices. After 1705, however, the playing card money became inflationary. We argue that this was caused by the government’s increased enforcement of the legal tender laws and the adoption of a redemption plan intended to remove the notes from circulation.
This is a very strange monetary experiment in economic history. The governor of the colony printed money on the back of playing cards to finance expenditures when he ran out of coins. The “notes” were backed by incoming coin shipments.
Yet, and this might interest John H. Cochrane because of the fiscal theory of the price level implications, there was no inflation in spite of massive over-issues.
We argue, following points made by Lawrence H. White and George Selgin, that the weakness of legal tender enforcement explains the absence of inflation. Under weak (or even absent) legal tender enforcement, bad money is driven out of circulation (falling velocity).
Thus, the low credibility of the government promises on the back of playing cards simply translated into falling velocity for cards but no effect on prices as other mediums kept circulating.
However, when the government announced a redemption plan in 1714, it imposed a time limit to redeem the 1.8 million pounds of notes (a per capita amount equal to nominal GDP per capita). If not redeemed before then, they became worthless.
This was a de facto enforcement (because of the wealth tax that was embedded in the features of the redemption plan) of the legal tender. To avoid losing whatever share of wealth they held in the form of notes, households were aggressively trying to get rid of them.
At that point, good money was crowded out and notes dominated circulation until their redemption. It is also in that period that there is rapid inflation (a doubling of the price level in one year).
Our story is one where institutions (as per Brennan and Buchanan’s The Power to Tax) matter for understanding monetary developments.
With weak/ineffective legal tender, fiscal theory (backing theory) of the price level is quite effective. With strong legal tender, the quantity theory is stronger.
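To restate the mechanism in quantity-equation terms (my gloss, not the authors’):

$$MV = PY$$

Under weak enforcement, the over-issue raised the card money stock $M$, but the cards’ velocity $V$ collapsed as they dropped out of circulation, leaving the price level $P$ unchanged; the 1714 redemption deadline forced the cards back into use, $V$ rebounded, and $P$ doubled within a year.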
We estimate the impact of the Green Revolution in the developing world by exploiting exogenous heterogeneity in the timing and extent of the benefits derived from high-yielding crop varieties (HYVs).
We find that HYVs increased yields by 44% between 1965 and 2010, with further gains coming through reallocation of inputs. Higher yields increased income and reduced population growth. A 10-year delay of the Green Revolution would in 2010 have cost 17% of GDP (gross domestic product) per capita and added 223 million people to the developing-world population. The cumulative GDP loss over 45 years would have been US$83.0 trillion in 2010 dollars (≈US$115.0 trillion in current dollars), corresponding to ~1 year of current global GDP.
…The IARCs targeted developing countries, so all European countries, all former Soviet republics, Australia, Canada, Israel, Japan, New Zealand, and the United States are excluded from the sample…Our shift-share variable indicates that HYVs increased yields of food crops by 44% between 1965 and 2010. The total effect on yields is even higher because of substitution toward crops for which HYVs were available and because of reallocation of land and labor. Beyond agriculture, our baseline estimates show strong, positive, and robust impacts of the Green Revolution on different measures of economic development. Most striking is the impact on GDP (gross domestic product) per capita. Our estimates imply that delaying the Green Revolution for 10 years would have reduced GDP per capita in 2010 by US$1,273 in 2010 dollars (≈US$1,764 today; adjusted for PPP [purchasing power parity]), or 17%, across our full sample of countries. The dollar amount is large, in part because some of the countries grew relatively rich during the period we study: the comparable loss in today’s least developed countries is US$392 in 2010 dollars (≈US$543 today). By 2010, the cumulative global loss of GDP of delaying the Green Revolution 10 years would have been about US$83.0 trillion in 2010 dollars (≈US$115.0 trillion today)—roughly a year of present-day global GDP. Needless to say, this surpasses the amount of resources that went into developing HYVs by several orders of magnitude. The income loss would have been much greater had the Green Revolution never happened, perhaps reducing GDP per capita in the developing world to 50% of its current level, if our estimates are taken at face value—although we stress that this number is subject to considerable uncertainty and depends on a somewhat implausible counterfactual. Despite these reservations, the results of this paper clearly place the Green Revolution among the most important economic events in the 20th century.
We find no evidence that the gains from increased agricultural productivity were offset by any Malthusian effects; the increased availability of food does not appear to have been eroded by population increases. Instead, we find a negative effect of the Green Revolution on fertility. Our estimates suggest that the world would have contained more than 200 million additional people in 2010 if the onset of the Green Revolution had been delayed for 10 years. Lower population growth increased the relative size of the working-age population, leading to a demographic dividend that accounts for roughly one-fifth of our estimated effect on GDP per capita. Our paper also sheds light on a concern, often expressed in the literature, that agricultural productivity improvements would pull additional land into agriculture at the expense of forests and other environmentally valuable land uses. We find evidence to the contrary: in keeping with the “Borlaug hypothesis”, the Green Revolution tended to reduce the amount of land devoted to agriculture.
…The start of the Green Revolution can be dated quite precisely. As noted above, the first high-yielding rice varieties were crossed in 1962 at IRRI, and after several generations of selection, they were initially released in 1965 to national research programs in rice-growing countries around the world. For wheat, it is similarly possible to identify a zero date for the Green Revolution: the first successful crosses from the Rockefeller wheat program took place in the 1950s, but they were not released to farmers in other developing countries until 1965. Maize followed soon after. For each crop, we can identify with reasonable precision the date at which the research institution first released a variety based on breeding work that took place within the institution.
The Mexican case is unique in the sense that the first HYVs were developed in a research program that did not yet have standing as an international institution. As a result, the diffusion of the wheat semi-dwarf varieties took place within Mexico slightly before the varieties became available in other countries. For all our other crops, HYVs developed at the international research centers became available to all countries at effectively the same moment—either upon a formal initial release from the international center or through the inclusion of the material in “nurseries” of promising experimental material that were shared with researchers across the developing world.
…Converting our estimates from logarithms to levels, we find that relative yields are on average 9% higher 10 years after a HYV release (β_10 = 0.09) and 75% higher after 40 years (β_40 = 0.56). The gradual increase in yields happens both because adoption is gradual, along an extensive margin, and because successive vintages of HYVs of a crop increase yields beyond what the first HYV could achieve. Our estimated magnitudes are consistent with the micro-level literature, surveyed in Evenson & Gollin 2003b, which shows that HYVs typically have at least 50% higher yields than traditional varieties for a given set of inputs. Inputs are not fixed, however. Many HYVs respond better to fertilizer and other inputs than traditional varieties, raising yields still further; gains of the magnitude observed in figure 2 are not unexpected in cases when HYV adoption is widespread.
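As a quick check of the log-to-level conversion (my arithmetic, not the paper’s code):

```python
import math

# Convert log-point event-study coefficients to percentage yield gains:
for years, beta in [(10, 0.09), (40, 0.56)]:
    print(f"{years} years after release: exp({beta}) - 1 = {math.exp(beta) - 1:.0%}")
# 10 years: ~9%; 40 years: ~75% — matching the figures reported above.
```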
…The event study for GDP per capita in figure 4A shows that 10 years after the onset of the Green Revolution in 1965, countries specialized in wheat, rice, and maize begin to have faster income growth than other countries.
…To put our estimated effect sizes into perspective, the effect of delaying the Green Revolution by 10 years is of a magnitude comparable (with opposite sign) to the income effect of democratizing, which Acemoglu et al 2019 estimate to be about 20% after 25 years, and to the effect of railroad access in 19th-century India, which Donaldson (2018) puts at 16%. The population effect we find is substantially smaller than the effect of medical innovations, which, according to Acemoglu et al 2020, has increased the population by 45% between 1940 and 1980 in their sample of countries and by even more in low-income and middle-income countries.
…The Green Revolution is often associated with the 1960s and 1970s, but rather than slowing down, the rate of adoption and the number of new HYVs increased in the 1980s, 1990s, and 2000s. Scattered evidence from sub-Saharan Africa suggests that the HYV adoption rate has increased by as much in the 2000s as in the 4 preceding decades. One reason is that, compared to that in other parts of the world, especially Southeast Asia, African agriculture is specialized in cassava, sorghum, millet, and other crops for which HYVs became available relatively late. Our results consequently shed light on the divergence between Southeast Asia and Africa during the second half of the 20th century.
We estimate the distribution of television advertising elasticities and the distribution of the advertising return on investment (ROI) for a large number of products in many categories…We construct a data set by merging market (DMA) level TV advertising data with retail sales and price data at the brand level…Our identification strategy is based on the institutions of the ad buying process.
Our results reveal substantially smaller advertising elasticities compared to the results documented in the literature, as well as a sizable percentage of statistically insignificant or negative estimates. The results are robust to functional form assumptions and are not driven by insufficient statistical power or measurement error.
The ROI analysis shows negative ROIs at the margin for more than 80% of brands, implying over-investment in advertising by most firms. Further, the overall ROI of the observed advertising schedule is positive for only one-third of all brands.
[Keywords: advertising, return on investment, empirical generalizations, agency issues, consumer packaged goods, media markets]
…We find that the mean and median of the distribution of estimated long-run own-advertising elasticities are 0.023 and 0.014, respectively, and 2/3 of the elasticity estimates are not statistically different from zero. These magnitudes are considerably smaller than the results in the extant literature. The results are robust to controls for own and competitor prices and feature and display advertising, and the advertising effect distributions are similar whether a carryover parameter is assumed or estimated. The estimates are also robust if we allow for a flexible functional form for the advertising effect, and they do not appear to be driven by measurement error. As we are not able to include all sensitivity checks in the paper, we created an interactive web application that allows the reader to explore all model specifications. The web application is available.
…First, the advertising elasticity estimates in the baseline specification are small. The median elasticity is 0.0140, and the mean is 0.0233. These averages are substantially smaller than the average elasticities reported in extant meta-analyses of published case studies (Assmus, Farley & Lehmann 1984b; Sethuraman, Tellis & Briesch 2011). Second, 2/3 of the estimates are not statistically distinguishable from zero. We show in Figure 2 that the most precise estimates are those closest to the mean and the least precise estimates are in the extremes.
…6.1 Average ROI of Advertising in a Given Week:
In the first policy experiment, we measure the ROI of the observed advertising levels (in all DMAs) in a given week t relative to not advertising in week t. For each brand, we compute the corresponding ROI for all weeks with positive advertising, and then average the ROIs across all weeks to compute the average ROI of weekly advertising. This metric reveals whether, on the margin, firms choose the (approximately) correct advertising level or could increase profits by either increasing or decreasing advertising.
We provide key summary statistics in the top panel of Table III, and we show the distribution of the predicted ROIs in Figure 3(a). The average ROI of weekly advertising is negative for most brands over the whole range of assumed manufacturer margins. At a 30% margin, the median ROI is −88.15%, and only 12% of brands have positive ROI. Further, for only 3% of brands the ROI is positive and statistically different from zero, whereas for 68% of brands the ROI is negative and statistically different from zero.
These results provide strong evidence for over-investment in advertising at the margin. [In Appendix C.3, we assess how much larger the TV advertising effects would need to be for the observed level of weekly advertising to be profitable. For the median brand with a positive estimated ad elasticity, the advertising effect would have to be 5.33× larger for the observed level of weekly advertising to yield a positive ROI (assuming a 30% margin).]
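A minimal sketch of the weekly-ROI metric as described above (my reconstruction, not the authors’ code; all names and the margin default are illustrative):

```python
def weekly_roi(incremental_sales, price, ad_cost, margin=0.30):
    """ROI of one week's observed advertising vs. a counterfactual of not
    advertising that week: incremental profit net of ad cost, as a
    fraction of ad cost."""
    incremental_profit = incremental_sales * price * margin
    return (incremental_profit - ad_cost) / ad_cost

def average_weekly_roi(brand_weeks, margin=0.30):
    """Average the weekly ROIs over all weeks with positive advertising."""
    rois = [weekly_roi(w["d_sales"], w["price"], w["ad_cost"], margin)
            for w in brand_weeks if w["ad_cost"] > 0]
    return sum(rois) / len(rois)
```

On this metric, a negative value means the brand would have earned more profit by simply not advertising that week.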
6.2 Overall ROI of the Observed Advertising Schedule: In the second policy experiment, we investigate whether firms are better off advertising at the observed levels versus not advertising at all. Hence, we calculate the ROI of the observed advertising schedule relative to a counterfactual baseline with zero advertising in all periods.
We present the results in the bottom panel of Table III and in Figure 3(b). At a 30% margin, the median ROI is −57.34%, and 34% of brands have a positive return from the observed advertising schedule versus not advertising at all. Whereas 12% of brands have only positive and 30% only negative values in their confidence intervals, there is more uncertainty about the sign of the ROI for the remaining 58% of brands. This evidence leaves open the possibility that advertising may be valuable for a substantial number of brands, especially if they reduce advertising on the margin.
…Our results have important positive and normative implications. Why do firms spend billions of dollars on TV advertising each year if the return is negative? There are several possible explanations. First, agency issues, in particular career concerns, may lead managers (or consultants) to overstate the effectiveness of advertising if they expect to lose their jobs if their advertising campaigns are revealed to be unprofitable. Second, an incorrect prior (ie. conventional wisdom that advertising is typically effective) may lead a decision maker to rationally shrink the estimated advertising effect from their data toward an incorrect, inflated prior mean. These proposed explanations are not mutually exclusive. In particular, agency issues may be exacerbated if the general effectiveness of advertising or a specific advertising effect estimate is overstated. [Another explanation is that many brands have objectives for advertising other than stimulating sales. This is a nonstandard objective in economic analysis, but nonetheless, we cannot rule it out.] While we cannot conclusively point to these explanations as the source of the documented over-investment in advertising, our discussions with managers and industry insiders suggest that these may be contributing factors.
Why are high-income and low-income earners not substantially polarized in their support for progressive income taxation? This article posits that the affluent fail to recognize that they belong to the high-income group, and this misperception affects their preferences over progressive taxation.
To explain this mechanism theoretically, I introduce a formal model of subjective income-group identification through self-comparison to an endogenous reference group. In making decisions about optimal tax rates, individuals then use these subjective evaluations of their own income group and earnings of other groups.
Relying on ISSP data, I find strong evidence for the model’s empirical implications: most high-income earners support progressive taxation when they identify themselves with a lower group. Additionally, individuals who overestimate the earnings of the rich are more likely to support progressive taxation.
[Keywords: taxation, preferences, inequality, public opinion, subjective income class, social comparison]
…More specifically, I demonstrate that most people, even the affluent, support progressive tax rates when they believe it would be someone richer than them who would disproportionately bear the extra tax burden. This belief is mostly driven by the difficulty of precisely identifying high-income individuals and their income. For most citizens, being affluent is a fuzzy concept that is hard to define. Everyone—high-income and low-income individuals alike—is confident that Bill Gates and Mark Zuckerberg are among those with high income. Nobody would oppose the notion that an individual who lives on the Upper East Side in Manhattan, drives a Ferrari, and takes vacations on an exotic island would be considered rich. However, how do people classify the owners of the most beautiful house on their block or the person in their neighborhood who has a nice car? Who do they think the high-income earners are? More importantly, how do people assess their own affluence?
…The following analysis relies on the 2009 Social Inequality International Social Survey Programme (ISSP 2009), which asks a variety of questions about perceptions of economic inequality, self-placement, and preferences on redistributive policies. The analysis was restricted to countries where information on income allowed the generation of 10 deciles and where the respondents were asked to report gross household income before taxes and other deductions. The sample covers 22 countries and around 8,000 respondents. It thus provides rich individual-level data on perceptions and preferences over welfare policies, as well as all the important control variables. First, I examine the determinants of subjective self-placement. Then I proceed to explore how subjective self-placement and assessments of the high-income group’s affluence levels affect preferences over progressive taxation.
…Before presenting the empirical results, it is interesting to look at Self-Placement and Income Distance to a CEO descriptively. The aim is to establish whether most high-income individuals place themselves in the middle, as well as to investigate the nature of perceptions pertaining to the income distance to a CEO. Figure 2 shows the distribution of Self-Placement and Income Distance to a CEO by objective income deciles of the respondents. Although the analytical scope of Figure 2 is limited, it is immediately clear that when asked to place themselves on a 10-point scale, most respondents place themselves between the 4th and the 6th groups. Although self-placement increases with the objective income decile, the magnitude of this increase is not very substantial. The median Self-Placement of respondents below median household earnings is around 5, whereas it is 6 for those above the median.
Figure 2 also reveals valuable insights about respondents’ subjective perceptions of the income distance from a CEO. Perceived distances range from −5 to +20. The horizontal line shows the logarithmic transformation of the highest CEO-to-average-worker pay ratio of a company in the United States, the country with the highest overall proportion in the sample. The logarithmic transformation of the highest CEO-to-worker compensation ratio in the United States is 4.06 (Melin et al 2019), whereas the logarithmic transformation of the average CEO-to-worker compensation ratio is only 2.42 (Duarte 2019). Looking at the distribution of the perceived distances and the actual numbers, it is clear that many people overestimate the distance by a considerable margin. This figure thus shows that some respondents’ best guess about the yearly earnings of a CEO is substantially larger than the highest earner’s salary in their country. These numbers reveal that some people think about prototypes that do not exist when they are prompted to think about the income levels of the rich…Looking at the right side of the figure, respondents who belong to the top decile place themselves in the higher groups, around 7, only when they think they earn more than a typical CEO in their country. As their impression of the income of an average CEO increases, they start underestimating their position substantially.
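Undoing the logarithmic transformation (assuming base-10 logs, which is consistent with commonly cited CEO-to-worker pay ratios; my arithmetic, not the article’s):

```python
# Base-10 back-transformation of the reported log ratios:
print(10 ** 4.06)  # ≈ 11,500:1 — highest CEO-to-worker compensation ratio
print(10 ** 2.42)  # ≈ 263:1   — average CEO-to-worker compensation ratio
```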
[‘“Bloomberg spent $500 million on ads. The U.S. population is 327 million”, Rivas wrote. “He could have given each American $1 million and still have money left over. I feel like a $1 million check would be life-changing for most people. Yet he wasted it all on ads and STILL LOST.”’ (The arithmetic is off by a factor of a million: $500 million ÷ 327 million ≈ $1.53 per person, an illustration of how badly people misjudge large sums.)]
…The most striking result perhaps relates to individuals who place themselves in high groups but still believe they earn substantially less than a CEO. In line with the prediction of Proposition 3.3, which posits that an individual who identifies with the higher-income group still prefers a progressive tax rate if she believes that the other members of the high-income group are substantially richer than her, this figure shows that the predicted probability of supporting progressive taxation for an individual who places herself in the top income group is quite high (0.95) when that individual unrealistically overestimates a typical CEO’s earnings.
…Perhaps one of the most interesting findings of this paper is that the well-known “middle-income bias” found in public opinion surveys can be systematically explained. When individuals compare themselves either to the superrich or the superpoor, they tend to infer that they are situated around the middle of the income ladder. This, of course, has severe effects on their political preferences.
The government of Greenland has decided to suspend all oil exploration off the world’s largest island, calling it “a natural step” because the Arctic government “takes the climate crisis seriously.”
No oil has been found yet around Greenland, but officials there had seen potentially vast reserves as a way to help Greenlanders realize their long-held dream of independence from Denmark by replacing the annual subsidy of about US$543 million that the territory receives from the Danish government.
Global warming means that retreating ice could uncover potential oil and mineral resources which, if successfully tapped, could dramatically change the fortunes of the semi-autonomous territory of 57,000 people. “The future does not lie in oil. The future belongs to renewable energy, and in that respect we have much more to gain”, the Greenland government said in a statement. The government said it “wants to take co-responsibility for combating the global climate crisis.”
The decision was made June 24 but made public Thursday.
…When the current government took office, led by the Inuit Ataqatigiit party since April’s parliamentary election, it immediately began to deliver on election promises and stopped plans for uranium mining in southern Greenland. Greenland still has 4 active hydrocarbon exploration licences, which it is obliged to maintain as long as the licensees are actively exploring. They are held by 2 small companies.
Every year, nearly 5,000 patients die while waiting for kidney transplants, and yet an estimated 3,500 procured kidneys are discarded. Such a polarized coexistence of dire scarcity and massive wastefulness has been mainly driven by insufficient pooling of cadaveric kidneys across geographic regions.
Although numerous policy initiatives are aimed at broadening organ pooling, they rarely account for a key friction—efficient airline transportation, ideally direct flights, is necessary for long-distance sharing, because of the time-sensitive nature of kidney transplantation. Conceivably, transplant centers may be reluctant to accept kidney offers from far-off locations without direct flights.
In this paper, we estimate the effect of the introduction of new airline routes on broader kidney sharing. By merging U.S. airline transportation and kidney transplantation datasets, we create a unique sample tracking (1) the evolution of airline routes connecting all U.S. airports and (2) kidney transplants between donors and recipients connected by these airports. We estimate that the introduction of a new airline route increases the number of shared kidneys by 7.3%. We also find a net increase in the total number of kidney transplants and a decrease in the organ discard rate with the introduction of new routes. Notably, the post-transplant survival rate remains largely unchanged, although average travel distance increases after the introduction of new airline routes.
Our results are robust to alternative empirical specifications and have important implications for improving access to the U.S. organ transplantation system.
[Keywords: organ transplantation, airline transportation, pooling, flexibility, causal inference]
Mutualisms are commonly observed ecological interactions, often involving the exchange of resources across species. Such exchanges can be thought of as biological markets. Biologists modeling these markets often employ an informal mix of economics and game-theoretic concepts. A fundamental question is whether exchange in biological markets is consistent with general economic equilibrium theory (GET), the main paradigm used to study exchange in economics. This paper uses data from biological experiments to demonstrate that the trading behavior of mycorrhizal fungi is consistent with the predictions of GET. The large volume of knowledge in GET might result in new insights about biological exchange. In turn, experimental findings in biology can lead to a new field of application for GET.
The interaction between land plants and mycorrhizal fungi (MF) forms perhaps the world’s most prevalent biological market. Most plants participate in such markets, in which MF collect nutrients from the soil and trade them with host plants in exchange for carbon.
In a recent study, Whiteside et al 2019 conducted experiments that allowed them to quantify the behavior of arbuscular MF when trading phosphorus with their host roots. Their experimental techniques enabled the researchers to infer the quantities traded under multiple scenarios involving different amounts of phosphorus resources initially held by different MF patches.
Physical attractiveness is an important axis of social stratification associated with educational attainment, marital patterns, earnings, and more. Still, relative to ethno-racial and gender stratification, physical attractiveness is relatively understudied. In particular, little is known about whether returns to physical attractiveness vary by race, or by race and gender combined.
In this study, we use nationally representative data to examine whether (1) socially perceived physical attractiveness is unequally distributed across race/ethnicity and gender subgroups and (2) returns to physical attractiveness vary substantially across race/ethnicity and gender subgroups. Notably, the magnitude of the earnings disparities along the perceived attractiveness continuum, net of controls, rivals or exceeds that of the black-white race gap and, among African-Americans, the gender gap in earnings.
The implications of these findings for current and future research on the labor market and social inequality are discussed.
Prominent economists have supposed that the private production of full-bodied gold or silver coins is inefficient: due to information asymmetry, private coins will be chronically low-quality or underweight.
An examination of private mints during gold rushes in the US in the years 1830–63, drawing on contemporary accounts and numismatic literature, finds otherwise. While some private gold mints produced underweight coins, from incompetence or fraudulent intent, such mints did not last long. Informed by newspapers about the findings of assays, money-users systematically abandoned substandard coins in favour of full-weight coins. Only competent and honest mints survived.
Sketch of a decentralized mineralization-based carbon-capture market: suppliers stake on reported deposits of mineral dust in publicly-auditable locations.
Blockchains or tokens for carbon dioxide removal have sometimes been proposed, but provide little advantage.
I review the principles of cryptoeconomics for designing mechanisms, and the proposal of “mineralization”—rock dust naturally reacting with atmospheric CO2 to lock it into minerals—for carbon capture to fight global warming.
Cryptoeconomics often relies on auditability & challenges to create desired behavior, and mineralization provides an objective, checkable form of carbon credits. Thus, one can set up a simple economic game where miners claim tokens for doing mineralization to sell as carbon offsets, and challengers audit their supposed mineralization deposits hunting for fraud; the equilibrium is honest reporting of mineralization quantities, yielding a truly decentralized, reliable, fraud-resistant “CO2 Coin”.
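A minimal sketch of the stake-and-challenge game (illustrative only; all names, parameters, and the slashing split are my assumptions, not a specification):

```python
from dataclasses import dataclass

@dataclass
class Deposit:
    location: str          # publicly-auditable site of the mineral dust
    claimed_tonnes: float  # CO2 capture reported by the staking supplier
    stake: float           # collateral forfeited if a challenge succeeds

def challenge(deposit: Deposit, audited_tonnes: float, tolerance: float = 0.05):
    """A challenger physically audits the deposit and measures actual capture.
    Overstating capture beyond the tolerance slashes the supplier's stake,
    with half paid to the challenger as a bounty (and half burned)."""
    if audited_tonnes < deposit.claimed_tonnes * (1 - tolerance):
        return ("fraud detected: stake slashed", deposit.stake / 2)
    return ("claim upheld", 0.0)
```

If stakes are large relative to the profit from overstating, and audits are cheap enough to be profitable for challengers, honest reporting is the equilibrium; this toy game is the “CO2 Coin” mechanism in miniature.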
The decline in late 19th century agricultural prices, by reducing the incomes of aristocratic landed estates and of non-aristocratic landed families, led to richly dowried American heiress brides being substituted for brides from landed families in British aristocratic marriages. This reflected a wider 19th century phenomenon of aristocratic substitution of foreign brides for landed brides and the substitution of daughters of British businessmen for daughters of landed families when agricultural prices declined.
The results are consistent with positive assortative matching with lump-sum transfers (dowries), where landowning family dowries are cash constrained in periods of agricultural downturn.
I think whaling is really cool. I can’t help it. It’s one of those things like guns and war and space colonization which hits the adventurous id. The idea that people used to go out in tiny boats into the middle of oceans and try to kill the biggest animals to ever exist on planet earth with glorified spears to extract organic material for fuel is awesome. It’s like something out of a fantasy novel.
So I embarked on this project to understand everything I could about whaling. I wanted to know why burning whale fat in lamps was the best way to light cities for about 50 years. I wanted to know how profitable whaling was, what the hunters were paid, and how many whaleships were lost at sea. I wanted to know why the classical image of whaling was associated with America and what other countries have whaling legacies. I wanted to know if the whaling industry wiped out the whales and if they can recover.
…Fun Fact 1: Right whale testicles make up 1% of their weight, so each testicle weighs around 700 pounds. The average American eats 222 pounds of meat per year (not counting fish), so a single right whale testicle should cover a family of 4 for almost a year.
A key challenge for interpreting published empirical research is the fact that published findings might be selected by researchers or by journals. Selection might be based on criteria such as significance, consistency with theory, or the surprisingness of findings or their plausibility. Selection leads to biased estimates, reduced coverage of confidence intervals, and distorted posterior beliefs. I review methods for detecting and quantifying selection based on the distribution of p-values, systematic replication studies, and meta-studies. I then discuss the conflicting recommendations regarding selection resulting from alternative objectives, in particular, the validity of inference versus the relevance of findings for decision-makers. Based on this discussion, I consider various reform proposals, such as de-emphasizing significance, pre-analysis plans, journals for null results and replication studies, and a functionally differentiated publication system. In conclusion, I argue that we need alternative foundations of statistics that go beyond the single-agent model of decision theory.
This paper investigates whether the impact of children on the labor market outcomes of women relative to men—child penalties—can be explained by the biological links between mother and child.
We estimate child penalties in biological and adoptive families using event studies around the arrival of children and almost 40 years of adoption data from Denmark. Short-run child penalties are slightly larger for biological mothers than for adoptive mothers, but their long-run child penalties are virtually identical and precisely estimated.
This suggests that biology is not a key driver of child-related gender gaps.
We study the local effects of new market-rate housing in low-income areas using microdata on large apartment buildings, rents, and migration. New buildings decrease rents in nearby units by about 6% relative to units slightly farther away or near sites developed later, and they increase in-migration from low-income areas. We show that new buildings absorb many high-income households and increase the local housing stock substantially. If buildings improve nearby amenities, the effect is not large enough to increase rents. Amenity improvements could be limited because most buildings go into already-changing neighborhoods, or buildings could create disamenities such as congestion.
Why do some people blame the political system for the problems in their lives? We explore the origins of these grievances and how people assign responsibility and blame for the challenges they face. We propose that individual differences in the personality traits of locus of control and self-esteem help explain why some blame the political system for their personal problems. Using responses from a module of the 2016 Cooperative Congressional Election Study, we show that those with low self-esteem and a weaker sense of control over their fates are more likely to blame the political system for the challenges they face in their lives. We also demonstrate that this assignment of blame is politically consequential: those who intertwine the personal and the political are more likely to evaluate elected officials based on pocketbook economic conditions rather than sociotropic considerations.
Chemosensory anxiety signals act independently of odor concentration.
It is well documented that chemosensory anxiety signals affect the perceiver’s physiology; however, much less is known about effects on overt social behavior. The aim of the present study was to investigate the effects of chemosensory anxiety signals on trust and risk behavior in men and women.
Axillary sweat samples were collected from 22 men during the experience of social anxiety, and during a sport control condition. In a series of 5 studies, the chemosensory stimuli were presented via an olfactometer to 214 participants acting as investors in a bargaining task either in interaction with a fictitious human co-player (trust condition) or with a computer program (risk condition).
Chemosensory anxiety signals reduced trust and risk behavior in women; in men, no effects were observed.
We discuss the possibility that chemosensory anxiety is transmitted contagiously, preferentially among women.
Investigates the long-term causal effects of bombings on later economic development.
Focus on Laos, one of the most intensely bombed countries per capita in history.
Use granular grid data (nightlights and population) as proxies for economic development.
No robust effects of bombings in southern Laos, but some effects in northern Laos.
No within-country conditional economic convergence, which could be Lao-specific.
This study investigates the long-term causal effects of U.S. bombing missions during the Vietnam War on later economic development in Laos. Following an instrumental variables approach, we use the distance between the centroid of village-level administrative boundaries and heavily bombed targets, namely, the Ho Chi Minh Trail in southern Laos and Xieng Khouang Province in northern Laos, as an instrument for the intensity of U.S. bombing missions. We use three datasets of mean nighttime light intensity (1992, 2005, and 2013) and two datasets of population density (1990 and 2005) as outcome variables. The estimation results show no robust long-term effects of U.S. bombing missions on economic development in southern Laos but show negative effects in northern Laos, even 40 years after the war. We also found that the results do not necessarily support the conditional convergence hypothesis within a given country, although this result could be unique to Laos.
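A sketch of the 2SLS specification described above, using the linearmodels package (file, column names, and the control variable are hypothetical placeholders, not the authors’ data):

```python
import pandas as pd
from linearmodels.iv import IV2SLS

grid = pd.read_csv("laos_grid_cells.csv")  # hypothetical grid-cell dataset

# Bombing intensity is instrumented by distance from each village centroid
# to the heavily bombed targets (Ho Chi Minh Trail / Xieng Khouang Province).
model = IV2SLS.from_formula(
    "log_nightlights_2013 ~ 1 + elevation + [bombing_intensity ~ distance_to_target]",
    data=grid,
).fit()
print(model.params["bombing_intensity"])
```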
A randomized tenure security intervention in Zambia statistically-significantly reduced farmers’ fear of losing their land.
But it had no impact on land fallowing, agroforestry, or other investments.
We cross-randomize tenure with an agroforestry extension that relaxes financial and technical constraints to investment.
The impact of land tenure is still zero even when these other constraints are relaxed.
There is broad agreement among the most prominent observational studies that tenure insecurity deters investment. We present new experimental evidence testing this proposition: a land certification program randomized across villages in Zambia. Our results contradict the consensus.
Though the intervention improved perceptions of tenure security, it had no impact on investment in the following season. The impact is still zero even after a cross-randomized agroforestry extension relaxes financial and technical constraints to agroforestry investment. Though relaxing these constraints has a direct effect, it is not enhanced by granting land tenure, implying tenure insecurity had not been a barrier to investment.
…This paper sidesteps such challenges by using a randomized experiment. We evaluate the short-run effects of an intervention in Zambia that cross-randomized an agroforestry extension with a program that strengthened customary land tenure through field demarcation and certification. We test for whether tenure security affects a host of outcomes drawn from prior observational studies. Our experimental results do not corroborate these studies. We estimate with reasonable precision that tenure security has zero effect…Finally, we discard our experimental variation and apply several observational research designs similar to those used in prior studies. We show that had we used such a design we would have spuriously concluded that tenure security has positive and statistically-significant effects. This exercise does not necessarily imply the estimates of the observational studies were flawed. But it does show that the key moments used by these studies for identification also appear in our Zambian sample. That implies the context is not entirely different and that it is possible to find these moments even in a sample where granting tenure security has no effect.
Fonts are durable, highly-reusable, compact, & high-quality software products which do not ‘bitrot’. Nevertheless, hundreds or thousands of new ones come out every year despite enormous duplication; why? I speculate that designer boredom is the answer: they crave novelty.
Fonts are a rare highlight in software design—stable, with well-defined uses, highly-compatible software stacks, and long-lived. Unsurprisingly, there is a back-catalogue of tens or hundreds of thousands of digital fonts out there, many nigh-indistinguishable from the next in both form and function.
Why, then, do they all cost so much? Who is paying for them all, and even going around commissioning more fonts?
The casualness of the highly marked-up prices & the language around commissioned fonts strongly points to designers spending client money, largely for the sake of novelty & boredom, functioning as a cross-subsidy from large corporations to the art of typography. The surplus of fonts then benefits everyone else—as long as they can sort through all the choices!
From the end of the Civil War to the onset of the Great War, the United States experienced an unprecedented increase in commitment rates for mental asylums. Historians and sociologists often explain this increase by noting that public sentiment called for widespread involuntary institutionalization to avoid the supposed threat of insanity to social well-being. However, that explanation neglects expanding rent seeking within psychiatry and the broader medical field over the same period. In this paper, we argue that stronger political influence from mental healthcare providers contributed substantially to the rise in institutionalization. We test our claim empirically with reference to the catalog of medical regulations from 1870 to 1910, as well as primary sources documenting rates of insanity at the state level. Our findings provide an alternative explanation for the historical rise in US institutionalizations.
[Keywords: rent-seeking, public health, American economic history, mental health, insanity]
…Between 1870 and 1910, institutionalization rates (per 100,000 persons) rose nearly 3× (see Figure 1).
…In this paper, we utilize a public choice framework to offer a complementary explanation for the rise in institutionalizations, which argues that the expansion of public asylums benefited asylum-based physicians. Although we emphasize political exchange rather than public interest, the 2 explanations are not necessarily antagonistic.3 They can be complements (Leeson 2019, pp. 39–40). To illustrate such complementarity, consider the “bootleggers and Baptists” theory of regulation (Yandle 1983; Horpedahl 2020). The “Baptists”, by means of public-interest justifications, propose a policy that offers laudable public benefits. The “bootleggers”, rent seekers who expect to profit, will support the policy. In the case of the asylum’s expansion, we will argue that rent seeking was in play. Progressive social reformers and voters (ie. the “Baptists”) saw the state asylum’s expansion as being in the public interest. Physicians and asylum superintendents (ie. the “bootleggers”), when well-organized, joined with the progressive social reformers and voters out of self-interest. In other words, public and private interest forces were not at odds with one another—they complemented each other in ways that caused asylums to expand.
…To assess whether asylum physicians were able to secure rents, we rely on state-level institutionalization rates from 1870 to 1910 (provided by US Census Bureau documents) in conjunction with state-level legislation affecting entry into the medical profession (Baker 1984; Hamowy 1979). The ability of the medical community of a given state to procure barriers to entry into the profession becomes a proxy for the effectiveness of physicians in the field of mental care in securing rents. Our assumption is that in states where physicians were politically weak, asylum physicians must have been weak as well (and thus unable to secure additional rents). While numerous laws were adopted to restrict entry, the most important one was the examining board. Those boards were enforcement entities that could set the conditions of entry and also amplify the effectiveness of most of the other laws. If the medical profession was too weak to get an examining board, it was too weak to capture most other potential rent sources.
…Our analysis finds that many entry-restriction laws (examining boards in particular) explain the rise of asylum populations from 1870 to 1910. For example, the introduction of an examining board increased institutionalization rates by ~10–20%. The results control for state and year effects. They are robust to changes in how the institutionalized population is measured. Thus, a rent-seeking process was at play. This process dovetails well with public interest explanations of asylum expansion (Sutton 1991).
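A sketch of the two-way fixed-effects specification implied above (my reconstruction; the file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_panel_1870_1910.csv")  # hypothetical state-year panel

# Log institutionalization rate on an examining-board indicator,
# with state and year fixed effects; errors clustered by state.
model = smf.ols(
    "log_institutionalization_rate ~ examining_board + C(state) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

# A coefficient of ~0.10-0.20 on examining_board corresponds to the
# reported ~10-20% increase in institutionalization rates.
print(model.params["examining_board"])
```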
How harmful can government regulations and protectionism be? We provide evidence of a sizable negative impact of government interventions on population health.
In 2012, the Russian government implemented a strategy to increase the affordability of pharmaceutical drugs and develop domestic generics for the majority of medications. It set price limits and implemented protectionist regulations that favor local producers of generics and biosimilars in several large groups of medicines.
We show that the mortality rate for conditions affected by the public programs reversed a previously declining trend and increased by 40% after the interventions, relative both to overall mortality and to an unaffected (control) group of diseases. For some affected diseases, mortality more than doubled. Additionally, the growth is more notable among the elderly, in rural compared to urban areas, and in areas with a shortage of medical facilities.
How resilient are high-skilled, white-collar workers? We exploit a uniquely comprehensive dataset of individual-level resumes of bank employees and the setting of the Lehman Brothers bankruptcy to estimate the effect of an unanticipated shock on the career paths of mobile and high-skilled labor.
We find evidence of short-term effects that largely dissipate over the course of the decade and that touch only the senior-most employees. We match each employee of Lehman Brothers in January 2008 to the most similar employees at Goldman Sachs, Morgan Stanley, Deutsche Bank, and UBS based on job positions, skills, education, and demographics. By 2019, the former Lehman Brothers employees are 2% more likely to have experienced a break of at least 6 months from reported employment and 3% more likely to have left the financial services industry. However, these effects concentrate among senior individuals such as vice presidents and managing directors and are absent for junior employees such as analysts and associates.
Furthermore, in terms of subsequent career growth, junior employees of Lehman Brothers fare no worse than their counterparts at the other banks. Analysts and associates employed at Lehman Brothers in January 2008 have equal or greater likelihoods of achieving senior roles such as managing director in existing enterprises by January 2019 and are more likely to found their own businesses.
[Keywords: career disruptions, bankruptcy, human capital, skilled labor, inequality]
…Our last result suggests that former employees of Lehman Brothers were prone to use the disruption event as a platform to start new ventures, consistent with the evidence by Babina 2020 and Hacamo & Kleiner 2020. We identify entrepreneurial activity as individuals who are listed as (co-)founders, presidents, or C-level executives of firms that did not exist prior to the bankruptcy event. The unconditional likelihood of entrepreneurship among the employees of the control banks is 2.16%. This likelihood is much higher among former employees of Lehman Brothers, at 3.29%, with the difference statistically-significant at the 1% level. Across hierarchical levels, baseline entrepreneurship is higher for more senior employees (eg. 3.7% for managing directors and 4.1% for senior management), but the Lehman Brothers bankruptcy increases this rate for all positions. In fact, the starkest relative increase is observed for employees who held associate-level titles in January 2008, with ex-Lehman associates showing a 4.5% likelihood of subsequently founding their own ventures, compared to only 1.8% for associates at Goldman Sachs, Morgan Stanley, Deutsche Bank, and UBS.
E-commerce and online advertising are growing trends.
The overall impact of ad blockers is unclear.
Using survey data, the effect of ad blocker use on online purchases is quantified.
The analysis reveals a positive effect of ad blocker use on e-commerce.
In light of the results, stakeholders should consider whether present online ad formats are the most suitable.
The use of ad-blocking software has risen sharply alongside online advertising and is recognized as challenging the survival of the ad-supported web. However, the effects of ad blocking on consumer behavior have scarcely been studied.
This paper uses propensity score matching techniques on a longitudinal survey of 4,411 Internet users in Spain to show that ad blocking has a positive causal effect on their number of online purchases. This could be attributed to the positive effects of ad blocking, such as safer and enhanced navigation.
This striking result adds to the controversial debate over whether current online ads are too bothersome for consumers.
[Keywords: Ad blockers, advertising avoidance, e-commerce, propensity score matching]
…This study employs a rich dataset coming from a longitudinal survey. The source of the data is a survey conducted by the Spanish Markets and Competition Authority on the same sample of interviewees in the 4th quarter of 2017 and in the 2nd quarter of 2018 ([dataset]CNMCData, 2019). The sample was designed to be representative of the population living in private households in Spain. The information was provided by 4,411 Internet users ≥16 years old. At the baseline time point (4th quarter of 2017), these individuals were asked if they regularly used ad-blocking tools when navigating the web. Additionally, the survey collected information on their socio-demographic characteristics (age, gender, education level, and employment status) and on how they used the Internet (frequency of use of online services such as GPS navigation, instant messaging, mobile gaming, social networks, e-mail, and watching videos on the phone). 6 months later (2nd quarter of 2018), the same individuals were asked how many online purchases they had made during the previous 6 months (including goods and services purchases, irrespective of the form of payment). Thus, the outcome variable (number of online purchases) was measured later than the ad-blocking information and the rest of the variables (our X covariates).
Table 2: Estimated average treatment effects of ad blockers on online shopping (number of purchases in 6 months), reported for estimators including stratification on PS quintiles, stratification on PS deciles, and PSM—NN after CEM pruning (1) and (2). [ATT: average treatment effect on the treated. PSM: propensity score matching. NN: nearest neighbor. KM: kernel matching. PS: propensity scores. CEM: coarsened exact matching. LCI: lower confidence interval. UCI: upper confidence interval. (1) CEM pruning by using use of Internet apps covariates. (2) CEM pruning by using socio-demographic covariates.]
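A sketch of the propensity-score-matching estimate of the ATT described above (assumes scikit-learn; the file and column names are hypothetical, not the CNMC survey’s):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("cnmc_panel.csv")  # hypothetical file
X = df[["age", "gender", "education", "employment", "app_use_index"]]
t = (df["uses_ad_blocker"] == 1).to_numpy()

# 1. Propensity scores from baseline (Q4 2017) covariates.
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# 2. Match each ad-blocker user to the nearest non-user on the score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~t].reshape(-1, 1))
_, idx = nn.kneighbors(ps[t].reshape(-1, 1))

# 3. ATT: mean difference in later (Q2 2018) online purchases.
y = df["online_purchases_6m"].to_numpy()
att = y[t].mean() - y[~t][idx.ravel()].mean()
print(f"ATT: {att:.2f} additional purchases over 6 months")
```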
What are the long-term consequences of compensation changes? Using data from an inbound sales call center, we study employee responses to a compensation change that ultimately reduced take-home pay by 7% for the average affected worker. The change caused a statistically-significant increase in the turnover rate of the firm’s most productive employees, but the response was relatively muted for less productive workers. On-the-job performance changes were minimal among workers who remained at the firm. We quantify the cost of losing highly productive employees and find that their heightened sensitivity to changes in compensation limits managers’ ability to adjust incentives. Our results speak to a driver of compensation rigidity and the difficulty managers face when setting compensation.
I study the impact of transportation network companies (TNCs) on traffic delays using a natural experiment created by the abrupt departure of Uber and Lyft from Austin, Texas.
Applying difference-in-differences and regression discontinuity specifications to high-frequency traffic data, I estimate that Uber and Lyft together decreased daytime traffic speeds in Austin by roughly 2.3%. Using Austin-specific measures of the value of travel time, I translate these slowdowns into estimates of citywide congestion costs that range from $33 to $52 million annually. Back-of-the-envelope calculations imply that these costs are similar in magnitude to the consumer surplus provided by TNCs in Austin.
Together these results suggest that while TNCs may impose modest travel time externalities, restricting or taxing TNC activity is unlikely to generate large net welfare gains through reduced congestion.
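The back-of-the-envelope logic, sketched with placeholder inputs (these are not the paper’s Austin-specific values):

```python
def annual_congestion_cost(slowdown, annual_vehicle_hours, value_of_time):
    """Cost of a proportional speed reduction: a slowdown of s
    increases travel time by s / (1 - s), valued at the VOT."""
    extra_time_share = slowdown / (1 - slowdown)
    return annual_vehicle_hours * extra_time_share * value_of_time

# Example with hypothetical inputs: a 2.3% slowdown, 100M vehicle-hours/year,
# and a $15/hour value of travel time gives ~$35M/year.
print(f"${annual_congestion_cost(0.023, 100e6, 15) / 1e6:.0f}M per year")
```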
In the standard herding model, privately informed individuals sequentially see prior actions and then act. An identical action herd eventually starts and public beliefs tend to “cascade sets” where social learning stops. What behaviour is socially efficient when actions ignore informational externalities?
We characterize the outcome that maximizes the discounted sum of utilities. Our 4 key findings are:

1. Cascade sets shrink but do not vanish, and herding should occur, but less readily, as greater weight is attached to posterity.
2. An optimal mechanism rewards individuals mimicked by their successor.
3. Cascades cannot start after period one under a signal log-concavity condition.
4. Given this condition, efficient behaviour is contrarian, leaning against the myopically more popular actions in every period.
We make 2 technical contributions: as value functions with learning are not smooth, we use monotone comparative statics under uncertainty to deduce optimal dynamic behaviour. We also adapt dynamic pivot mechanisms to Bayesian learning.
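For intuition, a minimal simulation of the standard binary herding model the paper builds on (my sketch of the textbook model, not the authors’ welfare analysis):

```python
import random

def herding(n=25, p=0.7, true_state=1, seed=0):
    """Each agent receives a private binary signal matching the true state
    with probability p > 1/2, observes all predecessors' actions, and takes
    the action with the higher posterior (following her own signal on ties).
    Once the net count of inferred signals reaches ±2, a cascade starts:
    actions stop revealing private information and the herd is locked in."""
    rng = random.Random(seed)
    imbalance, actions = 0, []
    for _ in range(n):
        signal = true_state if rng.random() < p else 1 - true_state
        if imbalance >= 2:
            action = 1                     # up-cascade: ignore own signal
        elif imbalance <= -2:
            action = 0                     # down-cascade: ignore own signal
        else:
            action = signal                # informative: action reveals signal
            imbalance += 1 if signal else -1
        actions.append(action)
    return actions

print(herding())  # typically ends in a long identical-action herd
```

The paper’s point is that a planner who weighs posterity would shrink these cascade regions and lean against the myopically popular action, so that more private information reaches later agents.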
Objective: This paper provides the first comprehensive assessment of the outcome of Paul Ehrlich’s and Stephen Schneider’s counteroffer (1995) to economist Julian Simon following Ehrlich’s loss in the famous Ehrlich-Simon wager on economic growth and the price of natural resources (1980–1990). Our main conclusion in a previous article is that, for indicators that can be measured satisfactorily or can be inferred from proxies, the outcome favors Ehrlich-Schneider in the first decade following their offer. This second article extends the timeline towards the present time period to examine the long-term trends of each indicator and proxy, and assesses the reasons invoked by Simon to refuse the bet.
Methods: Literature review, data gathering, and critical assessment of the indicators and proxies suggested or implied by Ehrlich and Schneider. Critical assessment of Simon’s reasons for rejecting the bet. Data gathering for his alternative indicators.
Results: For indicators that can be measured directly, the balance of the outcomes favors the Ehrlich-Schneider claims for the initial ten-year period. Extending the timeline and accounting for the measurement limitations or dubious relevance of many of their indicators, however, shifts the balance of the evidence towards Simon’s perspective.
Conclusion: The fact that Ehrlich and Schneider’s own choice of indicators yielded mixed results in the long run, coupled with the fact that Simon’s preferred indicators of direct human welfare yielded largely favorable outcomes is, in our opinion, sufficient to claim that Simon’s optimistic perspective was largely validated.
Using millions of father-son pairs spanning more than 100 years of US history [using US census data], we find that children of immigrants from nearly every sending country have higher rates of upward mobility than children of the US-born. Immigrants’ advantage is similar historically and today despite dramatic shifts in sending countries and US immigration policy. Immigrants achieve this advantage in part by choosing to settle in locations that offer better prospects for their children.
Folklore is the collection of traditional beliefs, customs, and stories of a community passed through the generations by word of mouth.
We introduce to economics a unique catalog of oral traditions by Yuri Berezkin spanning approximately 1,000 societies. After validating the catalog’s content by showing that the groups’ motifs reflect known geographic and social attributes, we present 2 sets of applications.
First, we illustrate how to fill in the gaps and expand upon a group’s ethnographic record, focusing on political complexity, high gods, and trade. Second, we discuss how machine learning and human classification methods can help shed light on cultural traits, using gender roles, attitudes toward risk, and trust as examples. Societies with tales portraying men as dominant and women as submissive tend to relegate their women to subordinate positions in their communities, both historically and today. More risk-averse and less entrepreneurial people grew up listening to stories wherein competitions and challenges are more likely to be harmful than beneficial. Communities with low tolerance toward antisocial behavior, captured by the prevalence of tricksters being punished, are more trusting and prosperous today. These patterns hold across groups, countries, and second-generation immigrants.
Overall, the results highlight the importance of folklore in cultural economics, calling for additional applications.
A fundamental feature of sacred values like environmental protection, patriotism, and diversity is individuals’ resistance to trading off these values in exchange for material benefit. Yet, for-profit organizations increasingly associate themselves with sacred values to increase profits and enhance their reputations.
In the current research, we investigate a potentially perverse consequence of this tendency: that observing values used instrumentally (ie. in the service of self-interest) subsequently decreases the sacredness of those values. Seven studies (n = 2,785) demonstrate support for this value corruption hypothesis. Following exposure to the instrumental use of a sacred value, observers held that value as less sacred (Studies 1–6), were less willing to donate to value-relevant causes (Studies 3 and 4), and demonstrated reduced tradeoff resistance (Study 7). We reconcile the current effect with previously documented value protection effects by suggesting that instrumental use decreases value sacredness by shifting descriptive norms regarding value use (Study 3), and by failing to elicit the same level of outrage as taboo tradeoffs, thus inhibiting value protective responses (Studies 4 and 5).
These results have important implications: People and organizations that use values instrumentally may ultimately undermine the very values from which they intend to benefit.
This scoping paper addresses the role of financial institutions in empowering the British Industrial Revolution. Prominent economic historians have argued that investment was largely funded out of savings or profits, or by borrowing from family or friends: hence financial institutions played a minor role. But this claim sits uneasily with later evidence from other countries that effective financial institutions have mattered a great deal for economic development. How can this mismatch be explained?
Despite numerous technological innovations, from 1760 to 1820 industrial growth was surprisingly low. Could the underdevelopment of financial institutions have held back growth? There is relatively little data to help evaluate this hypothesis. More research is required on the historical development of institutions that enabled finance to be raised. This would include the use of property as collateral.
This paper sketches the evolution of British financial institutions before 1820 and makes suggestions for further empirical research. Research in this direction should enhance our understanding of the British Industrial Revolution and of the preconditions of economic development in other countries.
The questions of whether high-income individuals are more prosocial than low-income individuals and whether income inequality moderates this effect have received extensive attention.
We shed new light on this topic by analyzing a large-scale dataset with a representative sample of respondents from 133 countries (N = 948,837). We conduct a multiverse analysis with 30 statistical models: 15 models predicting the likelihood of donating money to charity and 15 models predicting the likelihood of volunteering time to an organization.
Across all model specifications, high-income individuals were more likely to donate their money and volunteer their time than low-income individuals. High-income individuals were more likely to engage in prosocial behavior under high (vs. low) income inequality.
Avenues for future research and potential mechanisms are discussed.
[Keywords: income inequality, prosocial behavior, income, volunteering, donating]
How much do a manager’s interpersonal skills with subordinates, which we call people management skills, affect employee outcomes? Are managers rewarded for having such skills?
Using personnel data from a large high-tech firm, we show that survey-measured people management skills have a strong negative relation to employee turnover. A causal interpretation is reinforced by several research designs, including those exploiting new workers joining the firm and workers switching managers.
However, people management skills do not consistently improve most observed non-attrition outcomes. Better people managers themselves receive higher subjective performance ratings, higher promotion rates, and larger salary increases.
…Replacing a manager at the 10th percentile of people management skills with one at the 90th percentile reduces the total subordinate labor costs by 5% solely from lower hiring costs due to less attrition. [in human terms, this same shift (from a terrible manager to an excellent one) translates to a 60% reduction in turnover, with a bias towards “regretted” quits—employees the firm would have preferred to retain]
This article examines the role that residential mobility may play in shaping cultural values. We discuss how residential mobility may foster an ethos built on dynamism, optimism, and the belief that hard work leads to success; we examine the relationship between shifting levels of mobility and feelings of optimism, well-being, trust, and individualism; and we speculate about how American culture, one specifically formed by mobility, may continue to change as more and more residents find themselves stuck in place.
We discuss the cultural power of changes in nation-level residential mobility.
Using a theoretically informed analysis of mobility trends across the developed world, we argue that a shift from a culture full of people moving their residence to a culture full of people staying in place is associated with decreases, among its residents, in individualism, happiness, trust, optimism, and endorsement of the notion that hard work leads to success. We use the United States as a case study:
Although the United States has historically been a highly residentially mobile nation, yearly moves in the United States are now half what they were in the 1970s and a quarter of late-19th-century rates. In the past 4 decades, the proportion of Americans who are stuck in neighborhoods they no longer wish to live in has risen nearly 50%. We discuss how high rates of mobility may have originally shaped American culture and how recent declines in residential mobility may relate to current feelings of cultural stagnation.
Finally, we speculate on future trends in American mobility and the consequences of a society where citizens increasingly find themselves stuck in place.
[Keywords: cultural change, relational mobility, residential mobility]
This report analyzes the current supply chain for semiconductors. It particularly focuses on which portions of the supply chain are controlled by the US and its allies versus China. Some key insights:
The US semiconductor industry is estimated to contribute 39% of the total value of the global semiconductor supply chain.
The semiconductor supply chain is incredibly complicated. Producing a single chip requires more than 1,000 steps, with the chip-in-progress crossing international borders more than 70 times during production.
AMD is currently the only company with expertise in designing both high-end GPUs and high-end CPUs.
TSMC controls 54% of the logic foundry market, with a larger share for leading edge production, eg. state-of-the-art 5 nm node chips.
Revenue per wafer for TSMC is rapidly increasing, while other foundries are seeing declines.
The Netherlands has a monopoly on extreme ultraviolet (EUV) scanners, equipment needed to make the most advanced chips.
The Netherlands and Japan have a monopoly on argon fluoride (ArF) immersion scanners, needed to make the second most advanced chips.
The US has a monopoly on full-spectrum electronic design automation (EDA) software needed to design semiconductors.
Japan, Taiwan, Germany and South Korea manufacture the state-of-the-art 300 mm wafers used for 99.7% of the world’s chip manufacturing. This manufacturing process requires large amounts of tacit know-how.
China controls the largest share of manufacturing for most natural materials. The US and its allies have a sizable share in all materials except for low-grade gallium, tungsten and magnesium.
China controls ~2⁄3rds of the world’s silicon production, but the US and allies have reserves.
[The report also analyzes US competitiveness at detailed levels of the supply chain, which I didn’t read that carefully.]
General purpose technologies (GPTs) like AI enable and require substantial complementary investments. These investments are often intangible and poorly measured in national accounts.
We develop a model that shows how this can lead to underestimation of productivity growth in a new GPT’s early years and, later, when the benefits of intangible investments are harvested, to overestimation of productivity growth. We call this phenomenon the Productivity J-curve.
We apply our method to US data and find that adjusting for intangibles related to computer hardware and software yields a TFP level that is 15.9% higher than official measures by the end of 2017.
[Censored from bioRxiv; author discussion: 1, 2, 3.] Natural selection has been documented in contemporary humans, but little is known about the mechanisms behind it. We test for natural selection through the association between 33 polygenic scores and fertility, across two generations, using data from UK Biobank (n = 409,629 British subjects with European ancestry).
Consistently over time, polygenic scores associated with lower (higher) earnings, education and health are selected for (against). Selection effects are concentrated among lower SES groups, younger parents, people with more lifetime sexual partners, and people not living with a partner. The direction of natural selection is reversed among older parents (22+), or after controlling for age at first live birth. These patterns are in line with economic theories of fertility, in which higher earnings may either increase or decrease fertility via income and substitution effects in the labour market.
Studying natural selection can help us understand the genetic architecture of health outcomes: we find evidence in modern day Great Britain for multiple natural selection pressures that vary between subgroups in the direction and strength of their effects, that are strongly related to the socio-economic system, and that may contribute to health inequalities across income groups.
Although zero-sum relationships are, from a strictly theoretical perspective, symmetrical, we find evidence for asymmetrical zero-sum beliefs: The belief that others gain at one’s own expense, but not vice versa. Across various contexts (international relations, interpersonal negotiations, political partisanship, organizational hierarchies) and research designs (within/between-participant), we find that people are more prone to believe that others’ success comes at their own expense than they are to believe that their own success comes at others’ expense. Moreover, we find that people exhibit asymmetric zero-sum beliefs only when thinking about how their own party relates to other parties but not when thinking about how other parties relate to each other. Finally, we find that this effect is moderated by how threatened people feel by others’ success and that reassuring people about their party’s strengths eliminates asymmetric zero-sum beliefs.
We discuss the theoretical contributions of our findings to research on interpersonal and intergroup zero-sum beliefs and their implications for understanding when and why people view life as zero-sum.
…In 7 studies (including 2 preregistered experiments), we examine the psychology of asymmetric zero-sum beliefs. Studies 1 and 2 examine whether people believe that other countries (Study 1) and people (Study 2) gain at their expense, but not vice versa. Study 3 examines whether asymmetric zero-sum beliefs are unique to contexts that directly involve one’s own party, but not to contexts that involve other parties’ relations to one another. We show that people exhibit asymmetric zero-sum beliefs when considering how their own country’s outcomes relate to another country’s outcomes (ie. U.S.-China relations), but not when thinking about 2 separate countries (ie. Germany-China relations). Study 4 replicates and extends this effect in the domain of political parties and examines the role of threat in asymmetric zero-sum beliefs. We examine whether the degree to which political partisans feel threatened by an opposing party predicts how much they see that party as gaining at their own party’s expense. Finally, Studies 5, 6A, and 6B examine the causal role of threat on asymmetric zero-sum beliefs in both interpersonal and intergroup contexts by manipulating how threatened people feel by an opposing party. We find that people exhibit asymmetric zero-sum beliefs when feeling threatened by others’ success, but not when feeling reassured about their own success.
In 3 experiments (n = 1,599), which included a pre-registered study on a nationally representative sample (Norway), we find causal evidence for racial discrimination against minority Airbnb hosts. When an identical Airbnb apartment was presented with a racial outgroup (vs. in-group) host, people reported more negative attitudes toward the apartment, lower intentions to rent it, and were 25% less likely to choose the apartment over a standard hotel room in a real choice.
The rise of peer-to-peer platforms has represented one of the major economic and societal developments observed in the last decade. We investigated whether people engage in racial discrimination in the sharing economy, and how such discrimination might be explained and mitigated.
Using a set of carefully controlled experiments (n = 1,599), including a pre-registered study on a nationally representative sample, we find causal evidence for racial discrimination. When an identical apartment is presented with a racial out-group (vs. in-group) host, people report more negative attitudes toward the apartment, lower intentions to rent it, and are 25% less likely to choose the apartment over a standard hotel room in an incentivized choice. Reduced self-congruence with apartments owned by out-group hosts mediates these effects. Left-leaning liberals rated the out-group host as more trustworthy than the in-group host in non-committing judgments and hypothetical choice, but showed the same in-group preference as right-leaning conservatives when making a real choice.
Thus, people may overstate their moral and political aspirations when doing so is cost-free. However, even in incentivized choice, racial discrimination disappeared when the apartment was presented with an explicit trust cue, as a visible top-rating by other consumers (5⁄5 stars).
[Interview] Recent research suggests that the share of US households living on less than $2/person/day is high and rising.
We reexamine such extreme poverty by linking SIPP and CPS data to administrative tax and program data.
We find that more than 90% of those reported to be in extreme poverty are not, once we include in-kind transfers, replace survey reports of earnings and transfer receipt with administrative records, and account for ownership of substantial assets. More than half of all misclassified households have incomes from the administrative data above the poverty line, and many have middle-class measures of material well-being.
Levantine ~1200–950 BCE silver hoards were subjected to chemical and isotopic analysis.
Silver was alloyed with copper, reflecting a shortage after the Bronze Age collapse.
This debasement was often concealed by adding arsenic.
A mixing model distinguishes between isotopic contributions of alloyed metals.
Results suggest that silver shortage in the Levant probably lasted until ~950 BCE.
The study of silver, which was an important means of currency in the Southern Levant during the Bronze and Iron Age periods (~1950–586 BCE), revealed an unusual phenomenon. Silver hoards from a specific, yet rather long timespan, ~1200–950 BCE, contained mostly silver alloyed with copper. This alloying phenomenon is considered here for the first time, also with respect to previous attempts to provenance the silver using lead isotopes. Eight hoards were studied, from which 86 items were subjected to chemical and isotopic analysis. This is, by far, the largest dataset of sampled silver from this timespan in the Near East. Results show the alloys, despite their silvery sheen, contained high percentages of Cu, reaching up to 80% of the alloy. The Ag-Cu alloys retained a silvery tint using two methods: either by using an enriched silver surface to conceal a copper core, or by adding arsenic and antimony to the alloy. For the question of provenance, we applied a mixing model which simulates the contribution of up to three end members to the isotopic composition of the studied samples. The model demonstrates that for most samples, the more likely combination is that they are alloys of silver from Aegean-Anatolian ores, Pb-poor copper, and Pb-rich copper from local copper mines in the Arabah valley (Timna and Faynan). Another previously suggested possibility, namely that a substantial part of the silver originated from the West Mediterranean, cannot be validated analytically. Contextualizing these results, we suggest that the Bronze Age collapse around the Mediterranean led to the termination of silver supply from the Aegean to the Levant in the beginning of the 12th century BCE, causing a shortage of silver. The local administrations initiated sophisticated devaluation methods to compensate for the lack of silver—a suspected forgery. It is further suggested that following the Egyptian withdrawal from Canaan around the mid-12th century BCE, Cu-Ag alloying continued, with the use of copper from Faynan instead of Timna. The revival of long-distance silver trade is evident only in the Iron Age IIA (starting ~950 BCE), when silver was no longer alloyed with copper, and was imported from Anatolia and the West Mediterranean.
[Keywords: silver hoards, alloys, lead isotopic analysis, debasement, arsenic, Bronze age collapse, Mediterranean trade]
In this paper, we review the literature on declining business dynamism and its implications in the United States and propose a unifying theory to analyze the symptoms and the potential causes of this decline. We first highlight 10 pronounced stylized facts related to declining business dynamism documented in the literature and discuss some of the existing attempts to explain them. We then describe a theoretical framework of endogenous markups, innovation, and competition that can potentially speak to all of these facts jointly. We next explore some theoretical predictions of this framework, which are shaped by two interacting forces: a composition effect that determines the market concentration and an incentive effect that determines how firms respond to a given concentration in the economy. The results highlight that a decline in knowledge diffusion between frontier and laggard firms could be an important driver of empirical trends observed in the data. This study emphasizes the potential of growth theory for the analysis of factors behind declining business dynamism and the need for further investigation in this direction.
Provides evidence for a decline in research productivity in both countries.
Using firm-level R&D panel data for public and private firms spanning three decades.
Strong decline in R&D productivity in China due to end of catch-up growth.
Conclusion: ideas are getting harder to find not only in the U.S.
In a recent paper, Bloom et al 2020 find evidence for a substantial decline in research productivity in the U.S. economy during the last 40 years. In this paper, we replicate their findings for China and Germany, using detailed firm-level data spanning three decades. Our results indicate that diminishing returns in idea production are a global phenomenon, not just confined to the U.S.
San Francisco is gentrifying rapidly as an influx of high-income newcomers drives up housing prices and displaces lower-income incumbent residents. In theory, increasing the supply of housing should mitigate increases in rents. However, new construction could also increase demand for nearby housing by improving neighborhood quality. The net impact on nearby rents depends on the relative sizes of these supply and demand effects.
This paper identifies the causal impact of new construction on nearby rents, displacement, and gentrification by exploiting random variation in the location of new construction induced by serious building fires. I combine parcel-level data on fires and new construction with an original dataset of historic Craigslist rents and panel data on individual migration histories to test the impact of proximity to new construction. I find that rents fall by 2% for parcels within 100m of new construction. Renters’ risk of being displaced to a lower-income neighborhood falls by 17%. Both effects decay linearly to zero within 1.5km. Next, I show evidence of a hyperlocal demand effect, with building renovations and business turnovers spiking and then returning to zero after 100m. Gentrification follows the pattern of this demand effect: parcels within 100m of new construction are 2.5 percentage points (29.5%) more likely to experience a net increase in richer residents.
Affordable housing and endogenously located construction do not affect displacement or gentrification. These findings suggest that increasing the supply of market rate housing has beneficial spillover effects for incumbent residents, reducing rents and displacement pressures while improving neighborhood quality.
A crew of pirates all keep their gold in one very secure chest, with labeled sections for each pirate. Unfortunately, one day a storm hits the ship, tossing everything about. After the storm clears, the gold in the chest is all mixed up. The pirates each know how much gold they had—indeed, they’re rather obsessive about it—but they don’t trust each other to give honest numbers. How can they figure out how much gold each pirate had in the chest?
Here’s the trick: the captain has each crew member write down how much gold they had, in secret. Then, the captain adds it all up. If the final amount matches the amount of gold in the chest, then we’re done. But if the final amount does not match the amount of gold in the chest, then the captain throws the whole chest overboard, and nobody gets any of the gold.
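A minimal sketch of the captain’s mechanism (hypothetical names; the post describes it only in prose):

```python
def settle(chest_total: int, claims: dict[str, int]) -> dict[str, int] | None:
    """Pay out each pirate's secret claim iff the claims sum to the chest total.

    Returns the payouts, or None if the captain throws the chest overboard.
    """
    if sum(claims.values()) == chest_total:
        return dict(claims)  # everyone receives exactly what they claimed
    return None  # mismatch: destroy the wealth; nobody gets any gold

# Honest claims add up, so everyone is paid:
print(settle(100, {"Anne": 60, "Bart": 30, "Cora": 10}))
# Bart overclaims by 5; the whole chest goes overboard:
print(settle(100, {"Anne": 60, "Bart": 35, "Cora": 10}))
```

Truth-telling is the only profile under which anyone is paid: if the other pirates report honestly, any overclaim turns your true share into nothing, so lying is strictly worse.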
I want to emphasize two key features of this problem. First, depending on what happens, we may never know how much gold each pirate had in the chest or who lied, even in hindsight. Hindsight isn’t 20/20. Second, the solution to the problem requires outright destruction of wealth.
The point of this post is that these two features go hand-in-hand. There’s a wide range of real-life problems where we can’t tell what happened, even in hindsight; we’ll talk about three classes of examples. In these situations, it’s hard to design good incentives/mechanisms, because we don’t know where to allocate credit and blame. Outright wealth destruction provides a fairly general-purpose tool for such problems. It allows us to align incentives in otherwise-intractable problems, though often at considerable cost.
…Alice wants to sell her old car, and Bob is in the market for a decent quality used vehicle…Alternatively, we could try to align incentives without figuring out what happened in hindsight, using a trick similar to our pirate captain throwing the chest overboard. The trick is: if there’s a mechanical problem after the sale, then both Alice and Bob pay for it. I do not mean they split the bill; I mean they both pay the entire cost of the bill. One of them pays the mechanic, and the other takes the same amount of money in cash and burns it. (Or donates to a third party they don’t especially like, or …) This aligns both their incentives: Alice is no longer incentivized to hide mechanical problems when showing off the car, and Bob is no longer incentivized to ignore maintenance or frequent the racetrack.
However, this solution also illustrates the downside of the technique: it’s expensive.
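A toy payoff calculation (all numbers invented for illustration) of why the both-pay-in-full rule aligns Alice’s incentives:

```python
# Under the both-pay rule each party bears the FULL repair bill (one pays the
# mechanic, the other burns the same amount), so anything that raises the
# chance of a breakdown hurts the actor directly.
REPAIR = 1_000  # cost of fixing a latent mechanical problem

def alice_payoff(price: float, hides_problem: bool, p_break: float) -> float:
    bonus = 200 if hides_problem else 0  # hiding a defect fetches a higher price
    return price + bonus - p_break * REPAIR  # but Alice eats the full expected bill

print(alice_payoff(5_000, hides_problem=False, p_break=0.05))  # 4950.0
print(alice_payoff(5_000, hides_problem=True,  p_break=0.40))  # 4800.0
```

Hiding the defect is a losing move for Alice, and by symmetry the same full-cost exposure deters Bob from skipping maintenance after the sale; the price of this alignment is that one repair bill’s worth of wealth is burned whenever a problem does occur.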
[See also the exploding Nash equilibrium. This parallels Monte Carlo/evolutionary solutions to RL blackbox optimization: by setting up a large penalty for any divergence from the golden path, it creates an unbiased, but high variance estimator of credit assignment. When ‘pirates’ participate in enough rollouts with enough different assortments of pirates, they receive their approximate ‘honesty’-weighted (usefulness in causing high-value actions) return. You can try to pry open the blackbox and reduce variance by taking into account ‘pirate’ baselines etc, but at the risk of losing unbiasedness if you do it wrong.]
For many fundamental scene understanding tasks, it is difficult or impossible to obtain per-pixel ground truth labels from real images. We address this challenge by introducing Hypersim, a photorealistic synthetic dataset for holistic indoor scene understanding. To create our dataset, we leverage a large repository of synthetic scenes created by professional artists, and we generate 77,400 images of 461 indoor scenes with detailed per-pixel labels and corresponding ground truth geometry. Our dataset: (1) relies exclusively on publicly available 3D assets; (2) includes complete scene geometry, material information, and lighting information for every scene; (3) includes dense per-pixel semantic instance segmentations and complete camera information for every image; and (4) factors every image into diffuse reflectance, diffuse illumination, and a non-diffuse residual term that captures view-dependent lighting effects.
We analyze our dataset at the level of scenes, objects, and pixels, and we analyze costs in terms of money, computation time, and annotation effort. Remarkably, we find that it is possible to generate our entire dataset from scratch, for roughly half the cost of training a popular open-source natural language processing model. We also evaluate sim-to-real transfer performance on two real-world scene understanding tasks—semantic segmentation and 3D shape prediction—where we find that pre-training on our dataset significantly improves performance on both tasks, and achieves state-of-the-art performance on the most challenging Pix3D test set. All of our rendered image data, as well as all the code we used to generate our dataset and perform our experiments, is available online.
We surveyed 113 participants of the 1967–1973 Bing pre-school studies on delay of gratification when they were in their late 40s. They reported 11 mid-life capital formation outcomes, including net worth, permanent income, absence of high-interest debt, forward-looking behaviors, and educational attainment. To address multiple hypothesis testing and our small sample, we pre-registered an analysis plan of well-powered tests.
As predicted, a newly constructed and pre-registered measure derived from preschool delay of gratification does not predict the 11 capital formation variables (ie. the sign-adjusted average correlation was 0.02). A pre-registered composite self-regulation index, combining preschool delay of gratification with survey measures of self-regulation collected at ages 17, 27, and 37, does predict 10 of the 11 capital formation variables in the expected direction, with an average correlation of 0.19. The inclusion of the preschool delay of gratification measure in this composite index does not affect the index’s predictive power.
We tested several hypothesized reasons that preschool delay of gratification does not have predictive power for our mid-life capital formation variables.
[Keywords: self-regulation, delay of gratification, mid-life capital formation]
In 2010, Steve Jobs announced that [“Thoughts On Flash”, TOF] Apple would no longer support Adobe Flash—a popular set of tools for creating Internet applications. After the announcement, the use of Flash declined precipitously.
We show there was no reduction in Flash hourly wages because of a rapid supply response: Flash specialists (especially those who were younger, less specialized, or had good “fall back” skills) quickly transitioned away from Flash; new market entrants also avoided Flash, leaving the effective supply per Flash job opening unchanged.
As such, there was no factor market reason for firms to stay with Flash longer.
…Perhaps the best contemporary indicators of software developer interest in a given technology are the questions being asked on Stack Overflow, an enormously popular programming Q&A site. The left facet of Figure 1 shows the volume of questions per month for Flash and, for comparison, a basket of popular IT skills, all normalized to 1 in the TOF month. The y-axis is on a log scale. We can see from this comparison that the numbers of questions about Flash and our chosen basket of skills are growing more or less in lockstep pre-TOF, reflecting growth in the Q&A platform and the wider IT industry, but that after TOF, Flash shows a clear absolute decline. There is some delay in this drop, likely reflecting the diffusion of the news of Apple’s plans as well as the completion of already-planned projects.
To study how the decline in Flash affected workers specializing in Flash, we use data from a large online labor market (Horton 2010). The decline in Flash is also readily apparent in the longitudinal data from this market: the right facet of Figure 1 plots the number of job openings posted per month for jobs requiring Flash skills and for those requiring PHP (one of the “basket skills” from the left facet of Figure 1). Both Flash and PHP are normalized to 1 for the TOF-month. For comparison, we truncate the data to the first year of the Stack Overflow data (in 2008), even though the online labor market is considerably older. As we saw with the Stack Overflow data, both Flash and PHP move closely together pre-TOF and then diverge. Following TOF, the number of Flash job openings began to decline relative to PHP, falling by more than 80% between 2010 and 2015.
As we will show, despite a large decline in the number of Flash openings posted, very little else about the market for Flash changed. There is no evidence employers were inundated with applications from out-of-work Flash programmers—the number of applicants per opening remained roughly constant. There was no increase in the likelihood that Flash openings were filled, nor was there a reduction in the wages paid to hired Flash programmers. In short, despite a roughly 80% reduction in posted Flash jobs, we observe a reduction in the quantity of hours-worked, but no reduction in the price.
…There is heterogeneity in the choices made by individual workers and their outcomes. Although there was no decline in wages on average, workers who were older seemed to have experienced wage declines, whereas younger workers experienced modest wage increases. Older workers also experienced declines in hours-worked that younger workers did not. We also observe that younger workers increasingly demanded a premium to work with Flash post-TOF, whereas older workers did not.
In this study, we empirically assess the contributions of inventors and firms to innovation using a 37-year panel of U.S. patenting activity. We estimate that inventors’ human capital is 5–10× more important than firm capabilities for explaining the variance in inventor output. We then examine matching between inventors and firms and find highly talented inventors are attracted to firms that (1) have weak firm-specific invention capabilities and (2) employ other talented inventors. A theoretical model that incorporates worker preferences for inventive output rationalizes our empirical findings of negative assortative matching between inventors and firms and positive assortative matching among inventors.
Inducing a scientist to change their direction by a small amount—to work on marginally different topics—requires a substantial amount of funding in expectation. The switching costs of science are large. The productivity of grants is also estimated, and it appears the additional costs of targeted research may be more than offset by more productive scientists pursuing these grants.
“One approach [to building a new field] is to just pay people to work on the topic. Capitalism!
The trouble is, this kind of approach can be expensive. To estimate just how expensive, Myers 2020 looks at the cost of inducing life scientists to apply for grants they would not normally apply for. His research context is the NIH, the US’ biggest funder of biomedical science. Normally, scientists seek NIH funding by proposing their own research ideas. But sometimes the NIH wants researchers to work on some kind of specific project, and in those cases it uses a “request for applications” grant. Myers wants to see how big those grants need to be to induce people to change their research topics to fit the NIH’s preferences.
Myers has data on all NIH “request for applications” (RFA) grant applications from 2002 to 2009, as well as the publication history of every applicant. RFA grants are ones where NIH solicits proposals related to a prespecified kind of research, instead of letting investigators propose their own topics (which is the bulk of what NIH does). Myers tries to measure how much of a stretch it is for a scientist to do research related to the RFA by measuring the similarity of the text between the RFA description and the abstract of each scientist’s most similar previously published article (more similar texts contain more of the same uncommon words). When we line up scientists left to right from least to most similar to a given RFA, we can see the probability they apply for the grant is higher the more similar they are.
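Myers’s actual text-similarity measure is more elaborate, but the “shared uncommon words” idea can be sketched with off-the-shelf TF-IDF cosine similarity (toy texts invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rfa = "Proposals sought on zoonotic coronavirus transmission in bat populations"
abstracts = [
    "Coronavirus spillover and transmission dynamics in wild bat populations",
    "CRISPR screens for tumor suppressor genes in murine cancer models",
]
# TF-IDF down-weights common words, so overlap in rare words dominates the score.
tfidf = TfidfVectorizer().fit_transform([rfa] + abstracts)
print(cosine_similarity(tfidf[0], tfidf[1:]))  # the bat-virus abstract scores far higher
```

A scientist’s similarity to the RFA would then be the maximum such score over their previously published abstracts.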
…The interesting thing Myers does is combine all this information to estimate a tradeoff. How much do you need to increase the size of the grant in order to get someone with less similarity to apply for the grant at the same rate as someone with higher similarity? In other words, how much does it cost to get someone to change their research focus?
This is a tricky problem for a couple reasons. First, you have to think about where these RFAs come from in the first place. For example, if some new disease attracts a lot of attention from both NIH administrators and scientists, maybe the scientists would have been eager to work on the topic anyway. That would overstate the willingness of scientists to change their research for grant funding, since they might not be willing to change absent this new and interesting disease. Another important nuance is that bigger funds attract more applicants, which lowers the probability any one of them wins. That would tend to understate the willingness of scientists to change their research for more funding. For instance, if the value of a grant increases 10×, but the number of applicants increases 5×, then the effective increase in the expected value of the grant has only doubled (I win only a fifth as often, but when I do I get 10× as much). Myers provides some evidence that the first concern is not really an issue and explicitly models the second one.
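The applicant-pool adjustment from the example above, in two lines of Python (the prize and pool sizes are hypothetical, chosen only to reproduce the 10×/5× arithmetic):

```python
def expected_value(prize: float, n_applicants: int) -> float:
    return prize / n_applicants  # symmetric applicants: win probability is 1/n

base = expected_value(100_000, 10)      # $10,000 expected
bigger = expected_value(1_000_000, 50)  # 10x the prize draws 5x the applicants
print(bigger / base)                    # 2.0: the expected value has only doubled
```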
The upshot of all this work is that it’s quite expensive to get researchers to change their research focus. In general, Myers estimates getting one more scientist to apply (ie. getting one whose research is typically more dissimilar than any of the current applicants, but more similar than those who didn’t apply) requires increasing the size of the grant by 40% or nearly half a million dollars over the life of a grant!”
I test the assumptions of the Malthusian model at the individual, cross-sectional level for France, 1650–1820. Using husband’s occupation from the parish records of 41 French rural villages, I assign three different measures of status. There is no evidence for the existence of the positive check; infant deaths are unrelated to status. However, the preventive check operates strongly, acting through female age at first marriage. The wives of rich men are younger brides than those of poorer men. This drives a positive net-fertility gradient in living standards. However, the strength of this gradient is substantially weaker than it is in pre-industrial England.
Why do individuals become entrepreneurs? Why do some succeed?
We propose 2 theories in which information frictions play a central role in answering these questions. Empirical analysis of longitudinal samples from the United States and the United Kingdom reveals the following patterns:
entrepreneurs have higher cognitive ability than employees with comparable education,
employees have better education than equally able entrepreneurs, and
entrepreneurs’ earnings are higher and exhibit greater variance than those of employees with similar education.
These and other empirical tests support our asymmetric information theory of entrepreneurship: when information frictions cause firms to undervalue workers lacking traditional credentials, workers’ quest to maximize their private returns drives the most able into successful entrepreneurship.
Managerial Summary: Steve Jobs, Bill Gates, Mark Zuckerberg, Rachael Ray, and Oprah Winfrey are all entrepreneurs whose educational qualifications belie their extraordinary success. Are they outliers or do their examples reveal a link between education and success in entrepreneurship?
We argue that employers assess potential workers based on their educational qualifications, especially early in their careers when there is little direct information on work accomplishments and productivity. This leads those who correctly believe that they are better than their résumés show to become successful entrepreneurs.
Evidence from 2 nationally representative samples of workers (from the United States and the United Kingdom) supports our theory, which applies equally to the immigrant food vendor lacking a high school diploma and to the PhD founder of a science-based startup.
We exploit a volcanic “experiment” to study the costs and benefits of geographic mobility. In our experiment, a third of the houses in a town were covered by lava. People living in these houses were much more likely to move away permanently. For the dependents in a household (children), we estimate that being induced to move by the “lava shock” dramatically raised lifetime earnings and education. Yet, the benefits of moving were very unequally distributed across generations: the household heads (parents) were made slightly worse off by the shock. These results suggest large barriers to moving for the children, which imply that labor does not flow to locations where it earns the highest returns. The large gains from moving for the young are surprising in light of the fact that the town affected by our volcanic experiment was (and is) a relatively high income town. We interpret our findings as evidence of the importance of comparative advantage: the gains to moving may be very large for those badly matched to the location they happened to be born in, even if differences in average income are small.
This research documents a perfection premium in evaluative judgments wherein individuals disproportionately reward perfection on an attribute compared to near-perfect values on the same attribute.
For example, individuals consider a student who earns a perfect score of 36 on the American College Test to be more intelligent than a student who earns a near-perfect 35, and this difference in perceived intelligence is substantially greater than the difference between students whose scores are 35 versus 34. The authors also show that the perfection premium occurs because people spontaneously place perfect items into a separate mental category from other items. As a result of this categorization process, the perceived evaluative distance between perfect and near-perfect items is exaggerated. Four experiments provide evidence in favor of the perfection premium and support for the proposed underlying mechanism in both social cognition and decision-making contexts.
[Keywords: perfection, categorization, numerical cognition, social cognition]
…In four experiments, we find that even when the objective numerical gap between two values is equal, people perceive the difference between individuals and items to be greater if one has a perfect attribute value or rating. For example, the perceived difference in intelligence of two students scoring 100% versus 99% on an exam exceeds the perceived gap between students scoring 99% versus 98%, even though the scores differ by 1% in both cases.
[Part of this is just a ceiling effect: if one hits the ceiling on a test by scoring a perfect score rather than falling short slightly, that represents a lower bound—the person scores at least that high, and so likely scores higher, and if the test is not an extremely good one, then potentially arbitrarily much higher.
For example, if someone scores 128 on an IQ test with a ceiling of 130 (+2SD), another 129, and another scores the max of 130, then the expected scores are 128/129/136, and the expected differences are not 1/1/1 but 1/1/7. (You can calculate the truncated normal expectation using truncNormMean(2) in my dog cloning page).]
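`truncNormMean` is defined on gwern.net; an equivalent sketch via the standard inverse-Mills-ratio formula for a left-truncated normal reproduces the 136 figure:

```python
from scipy.stats import norm

def trunc_norm_mean(a: float) -> float:
    """E[Z | Z >= a] for standard normal Z: phi(a) / (1 - Phi(a))."""
    return norm.pdf(a) / norm.sf(a)

mean, sd, ceiling = 100, 15, 130        # IQ test with a +2SD ceiling
z = (ceiling - mean) / sd
print(mean + sd * trunc_norm_mean(z))   # ~135.6: a maxed score implies ~136 expected
```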
Countries that had sustained reform were 16% richer 10 years later.
Traditional policy reforms of the type embodied in the Washington Consensus have been out of academic fashion for decades. However, we are not aware of a paper that convincingly rejects the efficacy of these reforms. In this paper, we define generalized reform as a discrete, sustained jump in an index of economic freedom, whose components map well onto the points of the old consensus.
We identify 49 cases of generalized reform in our dataset that spans 141 countries from 1970 to 2015.
The average treatment effect associated with these reforms is positive, sizeable, and statistically-significant over 5-year and 10-year windows. The result is robust to different thresholds for defining reform and different estimation methods.
We argue that the policy reform baby was prematurely thrown out with the neoliberal bathwater.
[Keywords: reform, Washington Consensus, rule of law, property rights, economic development]
Most online content publishers have moved to subscription-based business models regulated by digital paywalls. But the managerial implications of such freemium content offerings are not well understood. We therefore utilized microlevel user activity data from the New York Times to conduct a large-scale study of the implications of digital paywall design for publishers. Specifically, we use a quasi-experiment that varied the (1) quantity (the number of free articles) and (2) exclusivity (the number of available sections) of free content available through the paywall to investigate the effects of paywall design on content demand, subscriptions, and total revenue.
The paywall policy changes we studied suppressed total content demand by about 9.9%, reducing total advertising revenue. However, this decrease was more than offset by increased subscription revenue as the policy change led to a 31% increase in total subscriptions during our seven-month study, yielding net positive revenues of over $230,000 (2013 dollars; ~$303,000 inflation-adjusted). The results confirm an economically-significant impact of the newspaper’s paywall design on content demand, subscriptions, and net revenue. Our findings can help structure the scientific discussion about digital paywall design and help managers optimize digital paywalls to maximize readership, revenue, and profit.
What impact on local development do immigrants and their descendants have in the short and long term? The answer depends on the attributes they bring with them, what they pass on to their children, and how they interact with other groups. We develop the first measures of the country-of-ancestry composition and of GDP per worker for US counties from 1850 to 2010. We show that changes in ancestry composition are associated with changes in local economic development. We use the long panel and several instrumental variables strategies in an effort to assess different ancestry groups’ effect on county GDP per worker. Groups from countries with higher economic development, with cultural traits that favor cooperation, and with a long history of a centralized state have a greater positive impact on county GDP per worker. Ancestry diversity is positively related to county GDP per worker, while diversity in origin-country economic development or culture is negatively related.
We show that molecular variation in DNA related to cognition, personality, health, and body shape predicts an individual’s equity market participation and risk aversion.
Moreover, the molecular genetic endowments predict individuals’ return perceptions, most of which we find to be strikingly biased.
The genetic endowments also strongly associate with many of the investor characteristics (eg. trust, sociability, wealth) shown to explain heterogeneity in equity market participation.
Our analysis helps elucidate why financial choices are heritable and how genetic endowments can help explain the links between financial choices, risk aversion, beliefs, and other variables known to explain stock market participation.
…Using a large panel data set from the Health and Retirement Study that includes financial, psychosocial, demographic, and genetic data for 5,130 individuals across time, we examine the role of specific genetic endowments in financial decisions. We focus on 8 genetic endowments related to cognition (Educational Attainment and General Cognition), personality (Neuroticism and Depressive Symptoms), health (Myocardial Infarctions and Coronary Artery Disease) and body type (Height and BMI) and examine how these endowments help shape observed heterogeneity in financial decisions.
…Consistent with our hypotheses, individuals with higher genetic endowments associated with Educational Attainment, General Cognition, and Height are more likely to invest in equity markets (and in addition invest a larger fraction of their wealth in risky assets) while individuals with higher genetic scores associated with Neuroticism, Depressive Symptoms, Myocardial Infarction, Coronary Artery Disease, and BMI exhibit lower equity market participation. Moreover, the effect sizes are substantial—a one standard deviation higher genetic endowment for Neuroticism predicts a 3.7% lower probability of holding any equity…we find that most of the 8 genetic endowments continue to predict equity market participation choices on their own. For example, after controlling for risk aversion and beliefs, a person with a one standard deviation larger genetic endowment for Neuroticism is still 2.8% less likely to hold any equity (as compared to 3.7% before controlling for risk aversion and beliefs).
Economics of wind and solar face 2 opposing drivers: learning and revenue decline.
Reduction in revenue from market forces may offset or even outpace learning.
Abatement cost may rise from $46 to $66 (solar) and −$7 to $53 (wind) per tonne of CO2.
Subsidy requirement to ensure profitability could increase over time.
Integration of substantial amounts of wind or solar necessitates new grid technologies.
The economics of wind and solar generation face 2 opposing drivers. Technological progress leads to lower costs, and both wind and solar have shown dramatic price reductions in recent decades. At the same time, adding wind and solar lowers market electricity prices, and thus revenue, during periods when they produce energy. In this work, we analyze these 2 opposing effects of renewable integration, learning and diminishing marginal revenue, using a model that assumes the status quo with regard to generation technology mix and demand. Our modeling results suggest that reduction in revenue from market forces may offset or even outpace technological progress. If deployed on current grids without changes to demand response, storage, or other integrating technologies, the cost of mitigating CO2 with wind will increase, and solar will be no cheaper in the future than it is today. This study highlights the need to deploy grid technologies such as storage and new transmission in order to integrate wind and solar in an economically sustainable manner.
[Keywords: renewable energy, energy modeling, Marginal abatement cost curve (MACC), energy subsidy]
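A deliberately toy sketch of the two opposing forces in the abstract above, Wright’s-law learning versus value deflation; every parameter below is invented for illustration and none comes from the paper:

```python
import numpy as np

learning_rate = 0.20        # assumed: 20% cost drop per doubling of cumulative capacity
value_decline = 0.012       # assumed: fraction of market value lost per GW deployed
cost0, value0 = 50.0, 60.0  # assumed starting cost and market value, $/MWh

capacity = np.arange(1, 101)  # cumulative GW deployed
cost = cost0 * capacity ** np.log2(1 - learning_rate)           # learning curve
value = value0 * np.maximum(0.0, 1 - value_decline * capacity)  # price cannibalization
profitable = value > cost
print(capacity[profitable][-1])  # deployment level where falling revenue overtakes learning
```

Under these made-up numbers, learning wins early, but the linear value decline eventually overtakes the power-law cost decline, which is the qualitative race the paper analyzes.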
Industry concentration has been rising in the United States since 1980. Does this signal declining competition and the need for a new antitrust policy? Or are other factors causing concentration to increase? This paper explores the role of proprietary information technology (IT), which could increase the productivity of top firms relative to others and raise their market share. Instrumental variable estimates find a strong link between proprietary IT and rising industry concentration, accounting for most of its growth. Moreover, the top four firms in each industry benefit disproportionately. Large investments in proprietary software—$250 billion per year—appear to substantially impact industry structure.
A scan of the history of gross world product (GWP) over millennia raises fundamental questions about the human past and prospect. What is the distribution of shocks ranging from recession to pandemic? Were the agricultural and industrial revolutions one-offs or did they manifest ongoing dynamics? Is growth exponential, if with occasional step changes in the rate, or is it superexponential? If the latter, how do we interpret the implication that output will become infinite in finite time? This paper introduces the first coherent statistical model of GWP history. It casts a GWP series as a sample path in a stochastic diffusion, one whose specification is novel yet rooted in neoclassical growth theory. After maximum likelihood fitting to GWP back to 10,000 BCE, most observations fall between the 40th and 60th percentiles of predicted distributions. The fit implies that GWP explosion is all but inevitable, in a median year of 2047. This projection cuts against the steadiness of growth in income per person seen in the last two centuries in countries at the economic frontier. And it essentially contradicts the laws of physics. But neither tension justifies immediate dismissal of the explosive projection. Accelerating economic growth is better explained by theory than constant growth. And if physical limits are articulated in a neoclassical-type model by endogenizing natural resources, explosion leads to implosion, formally avoiding infinities. The quality of the superexponential fit to the past suggests not so much that growth is destined to ascend as that the human system is unstable.
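The “infinite output in finite time” implication is a standard property of superexponential dynamics; here is a minimal sketch of the deterministic core (not Roodman’s full stochastic diffusion), with illustrative parameters:

```python
# Superexponential growth: dY/dt = a * Y**(1 + eps) with eps > 0.
# Separating variables gives Y(t) = Y0 * (1 - a*eps*Y0**eps * t)**(-1/eps),
# which diverges at the finite time t* = 1 / (a * eps * Y0**eps).
a, eps, Y0 = 0.03, 0.1, 1.0  # illustrative values, not fitted to GWP

t_star = 1 / (a * eps * Y0 ** eps)
print(f"singularity at t* = {t_star:.1f}")  # 333.3 time units

def Y(t: float) -> float:
    return Y0 * (1 - a * eps * Y0 ** eps * t) ** (-1 / eps)

for t in (0, 200, 300, 330):
    print(t, Y(t))  # growth accelerates without bound as t approaches t*
```

With eps = 0 the same equation reduces to ordinary exponential growth, which never explodes; the fitted sign of eps is what separates the two regimes.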
Last week, 2K made waves by becoming the first publisher to set a $70 asking price for a big-budget game on the next generation of consoles. NBA2K21 will cost the now-standard $60 on the Xbox One and PlayStation 4, but 2K will ask $10 more for the upcoming Xbox Series X and PlayStation 5 versions of the game (a $100 “Mamba Forever Edition” gives players access to current-generation and next-generation versions in a single bundle).
It remains to be seen if other publishers will follow 2K’s lead and make $70 a new de facto standard for big-budget console game pricing. But while $70 would match the high-water mark for nominal game pricing, it wouldn’t be a historically high asking price in terms of actual value. Thanks to inflation and changes in game distribution, in fact, the current ceiling for game prices has never been lower.
…To measure how the actual asking price for console games has changed over time, we relied primarily on scanned catalogs and retail advertising fliers we found online. While this information was easier to find for some years than others, we were still able to gather data for 20 distinct years across the last four decades. We then adjusted those nominal prices to constant 2020 dollars using the Bureau of Labor Statistics’ CPI inflation calculator.
…While nominal cartridge game prices in the early ’80s topped out at $30 to $40, inflation makes that the equivalent of $80 to $100 per game these days. $34.99 for Centipede on the Atari 2600 might sound cheap, but that 1983 price is the equivalent of roughly $90 today…As the industry transitioned into 16-bit cartridges in the ’90s, though, nominal prices for top-end games rose quickly past $60 in nominal dollars and $110 in 2020 dollars. That’s in large part because of the expensive ROM storage and co-processors often included in games of the day. By 1997, late-era SNES and early-era N64 games were routinely selling for $69.99 at many retailers, the highest nominal prices the industry has generally seen and still the equivalent of over $110 in today’s dollars.
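The adjustment itself is a single CPI ratio; a sketch using approximate annual-average CPI-U values (figures assumed here from BLS tables, not quoted from the article):

```python
# real_2020 = nominal * (CPI_2020 / CPI_year); approximate CPI-U annual averages.
CPI = {1983: 99.6, 1997: 160.5, 2020: 258.8}

def to_2020_dollars(nominal: float, year: int) -> float:
    return nominal * CPI[2020] / CPI[year]

print(round(to_2020_dollars(34.99, 1983)))  # ~91: Centipede's 1983 price today
print(round(to_2020_dollars(69.99, 1997)))  # ~113: late-SNES/N64 cartridge prices
```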
…Disc prices settled down to a more reasonable $49.99 soon after that, setting a functional nominal price ceiling that would remain until the mid ’00s. It wasn’t until the Xbox 360 and PlayStation 3 hit the scene that top asking prices started increasing to $59.99. And that’s the de facto ceiling that has remained in place to this day, even as digital downloads and the explosion of indie games have meant many titles now launch at well below this price.
Adjusting for inflation, we can see the actual (2020 dollar) value of top-end disc-based games plateaued right around $70 for almost a decade in the ’00s and early ’10s. Inflation has slowly eroded that value in the last decade, though, to the point where a $10 increase like the one for NBA2K21 merely gets games to the same actual price point as they enjoyed earlier in the century…a bump to $70 would not be a historically unprecedented increase in console gaming’s price ceiling. Accounting for inflation, in fact, it would merely bring those prices back in line with the recent historical average—something to keep in mind as you prepare for a new, seemingly costlier generation of console hardware.
This paper provides evidence that graduates of elite public institutions in India have an earnings advantage in the labor market even though attending these colleges has no discernible effect on academic outcomes.
Admission to the elite public colleges is based on the scores obtained in the Senior Secondary School Examinations. I exploit this feature in a regression discontinuity design.
Using administrative data on admission and college test scores and an in-depth survey, I find that the salaries of elite public college graduates are higher at the admission cutoff although the exit test scores are no different.
Flamewars over platforms & upgrades are so bitter not because people are jerks but because the choice will influence entire ecosystems, benefiting one platform through network effects & avoiding ‘bitrot’ while subtly sabotaging the rest through ‘bitcreep’.
The enduring phenomenon of ‘holy wars’ in computing, such as the bitterness around the prolonged Python 2 to Python 3 migration, is not due to mere pettiness or love of conflict, but because they are a coordination problem: dominant platforms enjoy strong network effects, such as reduced ‘bitrot’ as it is regularly used & maintained by many users, and can inflict a mirror-image ‘bitcreep’ on other platforms which gradually are neglected and begin to bitrot because of the dominant platform.
The outright negative effect of bitcreep means that holdouts do not just cost early adopters the possible network effects; they also greatly reduce the value of a given thing, and may cause the early adopters to be actually worse off and more miserable on a daily basis. Given the extent to which holdouts have benefited from the community, holdout behavior is perceived as parasitic and immoral by adopters, while holdouts in turn deny any moral obligation and resent the methods that adopters use to increase adoption (such as, in the absence of formal controls, informal ones like bullying).
This desperate need for there to be a victor, and the large technical benefits/costs to those who choose the winning/losing side, explain the (only apparently) disproportionate energy, venom, and intractability of holy wars.
Perhaps if we explicitly understand holy wars as coordination problems, we can avoid the worst excesses and tap into knowledge about the topic to better manage things like language migrations.
Our analyses show that the blockade impaired the ability of inventors in China to search distantly in technological and cognitive spaces, compared to those in the control group who were presumably unaffected by the event. The impact was less severe for inventors with larger collaboration networks but became more pronounced in technological fields proximate to science…we measured invention economic value with the valuation dataset provided by Bureau van Dijk (BvD), a data analytics company owned by Moody’s, which estimates a patent’s dollar value from technical, market, and legal dimensions based on multiple triangulated datasets such as patent litigations and company information. Using this valuation, we find that the coefficient of China × blockade is negative (β = −0.081, p < 0.05)…Our analyses reveal that the economic value of inventions from China dropped by around 8% or $57,000 after the event compared to those from nearby unaffected regions.
Our findings contribute to innovative search literature and highlight the theoretical and practical importance of Internet technologies in developing valuable inventions.
Inventors nowadays depend heavily on Internet search to access information and knowledge. They therefore become vulnerable to barriers imposed on their online search. In this study, we find that China’s unexpected blockade of Google and its affiliated services altered the searching behavior of inventors in China such that they became less able to seek distant knowledge. This impact was further contingent on the availability of offline knowledge channels and the reliance of each technological field on science. We also find that the economic value of their inventions decreased due to the blockade. Our findings reveal a neglected but consequential aspect of Internet censorship beyond the commonly found media effect and offer important implications to practitioners and policymakers.
[Keywords: Google, innovation, recombinant search, distant search, Internet censorship]
The world spends a remarkable $250 billion a year on lottery tickets. Yet, perplexingly, it has proved difficult for social scientists to show that lottery windfalls actually make people happier. This is the famous and still unresolved paradox due initially to Brickman et al 1978.
Here we describe an underlying weakness that has affected the research area, and explain the concept of lottery-ticket bias (LT bias), which stems from unobservable lottery spending [making players much poorer]. We then collect new data—in the world’s most intense lottery-playing nation, Singapore—on the amount that people spend on lottery tickets (n = 5,626).
We demonstrate that, once we correct for LT bias, a lottery windfall is predictive of a substantial improvement in happiness and well-being.
…Nevertheless, a key problem is the following. If a scientific investigator observed someone who won 1,000 dollars, the scientist would not expect the person to be happier if that individual had already had to spend 1,000 dollars in order to get the lottery tickets. Empirical studies of lotteries have usually been forced to ignore this point, because investigators traditionally have not had data on ticket purchases. Yet, by definition, the way to get lottery wins is to buy tickets, and the greater is the number of tickets, the larger is the expected size of a person’s win. Hence, in situations where information on ticket purchases is not available to the researcher, there will be an innate downward bias in estimates of the happiness from lottery wins. We term this lottery-tickets (LT) bias. The current study attempts to correct for this form of bias (the potential existence of which has been pointed out before in the literature, such as in Apouey & Clark 2015, although the authors were only able to control for fixed effects as a partial fix for the problem).
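A toy simulation of LT bias, assuming well-being responds to the net windfall (wins minus ticket spending) while the researcher observes only gross wins; all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

spend = rng.exponential(1_000, n)               # unobserved ticket spending
wins  = spend * rng.exponential(0.5, n)         # more tickets -> bigger expected win
happy = 0.001 * (wins - spend) + rng.normal(0, 1, n)  # responds to the NET change

# A researcher without spending data regresses happiness on gross wins:
b_gross = np.cov(happy, wins)[0, 1] / np.var(wins)
# With spending observed, regress on the net windfall instead:
b_net = np.cov(happy, wins - spend)[0, 1] / np.var(wins - spend)
print(f"gross-wins coefficient: {b_gross:.5f}  (attenuated, ~1/3 of truth here)")
print(f"net-windfall coefficient: {b_net:.5f}  (recovers the true 0.001)")
```

Because ticket spending is positively correlated with winnings, omitting it biases the estimated effect of a win downward, exactly the mechanism the authors describe.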
…Panel B of Table 1 gives further descriptive statistics on the lottery-related variables in the SLP data set. The numbers reveal the extensive use of the lottery in the country of Singapore. ~52% of respondents purchased a lottery ticket at least once in the previous 12 months. Of the respondents who purchased lottery tickets at least once, 45% of them purchased tickets every week. Average annual spending on lottery tickets per player was S$1,687 (US $1,221 or £994 UK sterling). Of those who purchased a lottery ticket at least once in the last 12 months, 43% of them won at least once, with average winnings of ~S$353. As would be expected, the data indicate the extent of negative net returns to customers.
In our data set, the minimum and maximum of lottery ticket spending are S$1 and S$72,800. The minimum and maximum of (individual) lottery prizes are S$0 and S$30,000.
Singaporeans are known to be some of the world’s most avid lottery players. According to an annual survey on worldwide lottery sales, Singapore has the world’s largest lottery spending per-capita (Singapore National Council on Problem Gambling 2015). An annual lottery ticket spending of S$1,687 in our data set is a strikingly large amount by the standards of most nations…the top 10% of lottery players spend a remarkable S$9,000+ a year.
Little is known about whether people make good choices when facing important decisions. This article reports on a large-scale randomized field experiment in which research subjects having difficulty making a decision flipped a coin to help determine their choice. For important decisions (eg. quitting a job or ending a relationship), individuals who are told by the coin toss to make a change are more likely to make a change, more satisfied with their decisions, and happier six months later than those whose coin toss instructed maintaining the status quo. This finding suggests that people may be excessively cautious when facing life-changing choices.
SoftBank Group Corporation said its Vision Fund business lost 1.9 trillion yen ($17.7 billion) last fiscal year after writing down the value of investments, including WeWork and Uber Technologies Inc.
The company posted an overall operating loss of 1.36 trillion yen in the 12 months ended March and a net loss of 961.6 billion yen, according to a statement on Monday. The Tokyo-based conglomerate released figures in two preliminary earnings statements last month. The losses are the worst ever in the company’s 39-year history. SoftBank founder Masayoshi Son’s $100 billion Vision Fund went from the group’s main contributor to profit a year ago to its biggest drag on earnings. Uber’s disappointing public debut last May was followed by the implosion of WeWork in September and its subsequent rescue by SoftBank. Now Son is struggling with the impact of the coronavirus on the portfolio of startups weighted heavily toward the sharing economy.
“The situation is exceedingly difficult”, Son said at a briefing discussing the results on Monday. “Our unicorns have fallen into this sudden coronavirus ravine. But some of them will use this crisis to grow wings.”
The drop in Uber’s share price was responsible for about $5.2 billion of Vision Fund’s losses in the period, while WeWork contributed $4.6 billion and another $7.5 billion came from the rest of the portfolio, SoftBank said. The $75 billion the Vision Fund has spent to invest in 88 companies as of March 31 is now worth $69.6 billion. SoftBank also recorded losses from its own investments, including WeWork and satellite operator OneWeb, which filed for bankruptcy in March.
We study optimal smart contract design for monitoring an exchange of an item performed offline.
There are 2 parties, a seller and a buyer. Exchange happens off-chain, but the status update takes place on-chain. The exchange can be verified but with a cost. To guarantee self-enforcement of the smart contract, both parties make a deposit, and the deposits must cover payments made in all possible final states. Both parties have an (opportunity) cost of making deposits.
We discuss 2 classes of contract: In the first, the mechanism only interacts with the seller, while in the second, the mechanism can also interact with the buyer. In both cases, we derive optimal contracts specifying optimal deposits and verification policies.
The gains from trade under the first contract are dominated by those under the second over the whole parameter domain. However, the first type of contract has the advantage of requiring less communication and, therefore, offers more flexibility.
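A toy sketch of the self-enforcement constraint, assuming the deposits must cover each party's payment in every possible final state; the states, amounts, and names below are invented for illustration and are not the paper's formal mechanism:

```python
# Toy self-enforcement check for an off-chain exchange contract.
# Each final state maps to (payment_from_seller, payment_from_buyer);
# deposits are self-enforcing iff they cover every state's payment,
# so neither party can profit by walking away.
STATES = {
    "delivered":     (0.0, 100.0),   # buyer pays the price
    "not_delivered": (100.0, 0.0),   # seller refunds/compensates
    "disputed":      (20.0, 20.0),   # both pay the (costly) verification
}

def min_deposits(states):
    """Smallest deposits that cover payments in all possible final states."""
    seller_dep = max(p_s for p_s, _ in states.values())
    buyer_dep  = max(p_b for _, p_b in states.values())
    return seller_dep, buyer_dep

s_dep, b_dep = min_deposits(STATES)
print(f"seller deposit >= {s_dep}, buyer deposit >= {b_dep}")
# Since deposits carry an opportunity cost, the optimal contract keeps
# these state-contingent payments (and hence the deposits) as low as possible.
```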
[examples] How do consumers hold sellers accountable and enforce market norms? This Article contributes to our understanding of consumer markets in 3 ways. First, the Article identifies the role of a small subset of consumers—the titular “nudniks”—as engines of market discipline. Nudniks are those who call to complain, speak with managers, post online reviews, and file lawsuits. Typified by an idiosyncratic utility function and certain unique personality traits, nudniks pursue action where most consumers remain passive. Although nudniks are derided in courtrooms and the court of public opinion, we show that they can solve consumer collective action problems, leading to broad market improvements.
Second, the Article spotlights a disconcerting development: sellers’ growing usage of big data and predictive analytics allows them to identify specific consumers as potential nudniks and then disarm or avoid selling to them before they can draw attention to sellers’ misconduct. The Article therefore captures an understudied problem with big data tools: sellers can use these tools to shield themselves from market accountability.
Finally, the Article evaluates a menu of legal strategies that would preserve the benefits of nudnik-based activism in light of these technological developments. In the process, we revisit the conventional wisdom on the desirability of form contracts, mandatory arbitration clauses, defamation law, and standing doctrines.
[“Admirably lucid revisiting of Enron’s metamorphosis from a pipeline company into a derivatives trading-house that booked billions in paper profits before collapsing.” “The Enron story displays the potentially distortionary impact of high intelligence on moral decision-making. It lends evidence to the notion that extremely intelligent people can be subtly incentivised to be (systematically) dishonest because their intelligence lowers the cost and raises the potential benefits of circumventing rules.” —The Browser summary
What, in a nutshell, was the Enron fraud? Like a tech startup, Enron had a vision of creating many new markets by upfront investments; to achieve this, which was in fact often a viable business strategy and had worked before, it needed debt-financing and to look like a logistics company with stable lucrative locked-in long-term profits, though its profits increasingly actually came from volatile unreliable financial trading. From this pressure and the need to keep up appearances to avoid switching horses in mid-stream before projects could pay off, a house of cards built up, deviance was normalized, and it slowly slid into an enormous financial fraud with few people realizing until the end.]
Long-run growth in many models is the product of two terms: the effective number of researchers and their research productivity. We present evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply. A good example is Moore’s Law. The number of researchers required today to achieve the famous doubling of computer chip density is more than 18× larger than the number required in the early 1970s. More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find.
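The underlying accounting identity is growth = (number of researchers) × (research productivity), so constant exponential growth with 18× the researchers implies an ~18-fold productivity decline. A back-of-the-envelope check; the ~35%/year growth rate implied by Moore's Law doubling every ~2 years is my approximation, not a figure from the paper:

```python
# The paper's accounting identity: growth = researchers x productivity.
# Moore's Law doubling every ~2 years means chip density grows ~35%/year,
# roughly constant since the early 1970s (my approximation).
growth = 0.35
researchers_1970s = 1.0      # normalize early-1970s research effort to 1
researchers_today = 18.0     # paper: >18x more researchers needed today

prod_1970s = growth / researchers_1970s
prod_today = growth / researchers_today
print(f"productivity today is {prod_today / prod_1970s:.3f}x its 1970s level")  # ~1/18
```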
We revisited Indian weaving firms nine years after a randomized experiment that changed their management practices. While about half of the practices adopted in the original experimental plants had been dropped, there was still a large and significant gap in practices between the treatment and control plants, suggesting lasting impacts of effective management interventions. Few practices had spread across the firms in the study, but many had spread within firms. Managerial turnover and the lack of director time were two of the most cited reasons for the drop in management practices, highlighting the importance of key employees.
We show that genetic endowments linked to educational attainment strongly and robustly predict wealth at retirement. The estimated relationship is not fully explained by flexibly controlling for education and labor income. We therefore investigate a host of additional mechanisms that could account for the gene-wealth gradient, including inheritances, mortality, risk preferences, portfolio decisions, beliefs about the probabilities of macroeconomic events, and planning horizons. We provide evidence that genetic endowments related to human capital accumulation are associated with wealth not only through educational attainment and labor income but also through a facility with complex financial decision-making.
In 2001, Norwegian tax records became easily accessible online, allowing everyone in the country to observe the incomes of everyone else. According to the income comparisons model, this change in transparency can widen the gap in well-being between richer and poorer individuals. Using survey data from 1985–2013 and multiple identification strategies, we show that the higher transparency increased the gap in happiness between richer and poorer individuals by 29%, and it increased the life satisfaction gap by 21%. We provide back-of-the-envelope estimates of the importance of income comparisons, and discuss implications for the ongoing debate on transparency policies.
…Even if designers don’t contribute improvements to a font directly, companies can benefit from making their work open source. For example, Adobe Type senior manager Daniel Rhatigan says releasing its Source super-family of fonts [monospace, sans, serif] as open source has enabled the company to test new typography technologies like “variable fonts”, which make it easy for a designer to adjust the weight of a typeface, before rolling those technologies into other products.
In other cases, open source fonts help support other aspects of a company’s business. For example, Google Fonts program manager Dave Crossland says many of the fonts Google has funded most recently are designed for under-supported languages in developing countries. These efforts buttress Google’s “Next Billion Users” initiative, which aims to bring more people in developing countries online. Better support for more languages means more users, and ultimately, more money for Google.
The incentives to create open source fonts weren’t always obvious. In early 2009, a graphic designer and programmer named Micah Rich came across a forum post by a student who was interested in knowing more about how fonts worked. The student asked whether there was a professional quality open source font that they could learn from. The replies weren’t kind. “There were like 20 pages of professional type designers saying ‘This is our livelihood, how dare you ask us to work for free?’” Rich says.
Radio remains popular, delivering an audience reach of over 90%, but radio ratings may overestimate real advertising exposure. Little is known about audience and media factors affecting radio-advertising avoidance. Many advertisers have believed that as much as one-third of the audience switches stations during radio-advertising breaks.
In the current study, the authors used Canadian portable people-meter ratings to measure loss of audience during advertising. They discovered a new benchmark of 3% (across conditions) for mechanical (or actual physical) avoidance of radio advertising, such as switching stations or turning off the radio.
This rate is about one-tenth of current estimates, but was higher for music versus talk stations, out-of-home versus in-home listening, and early versus late dayparts.
These formulas have turned an obscure idea that Galanis and his college buddies had a few years ago about making more money for second-rate celebs into a thriving two-sided marketplace that has caught the attention of VCs, Hollywood, and professional sports. In June, Cameo raised $50 million in Series B funding, led by Kleiner Perkins (which recently began funding more early stage startups) to boost marketing, expand into international markets, and staff up to meet the growing demand. In the past 15 months, Cameo has gone from 20 to 125 employees, and moved from an 825-square-foot home base in the 1871 technology incubator into its current 6,000-square-foot digs in Chicago’s popping West Loop. Cameo customers have purchased more than 560,000 videos from some 20,000 celebs and counting, including ’80s star Steve Guttenberg and sports legend Kareem Abdul-Jabbar. And now, when the masses find themselves in quarantined isolation—looking for levity, distractions, and any semblance of the human touch—sending each other personalized videograms from the semi-famous has never seemed like a more pitch-perfect offering.
The product itself is as simple as it is improbable. For a price the celeb sets—anywhere from $5 to $2,500—famous people record video shout-outs, aka “Cameos”, that run for a couple of minutes, and then are delivered via text or email. Most Cameo videos are booked as private birthday or anniversary gifts, but a few have gone viral on social media. Even if you don’t know Cameo by name, there’s a good chance you caught Bam Margera of MTV’s Jackass delivering an “I quit” message on behalf of a disgruntled employee, or Sugar Ray’s Mark McGrath dumping some poor dude on behalf of the guy’s girlfriend. (Don’t feel too bad for the dumpee, the whole thing was a joke.)
…Back at the whiteboard, Galanis takes a marker and sketches out a graph of how fame works on his platform. “Imagine the grid represents all the celebrity talent in the world”, he says, “which by our definition, we peg at 5 million people.” The X-axis is willingness; the Y-axis is fame. “Say LeBron is at the top of the X-axis, and I’m at the bottom”, he says. On the willingness side, Galanis puts notoriously media-averse Seattle Seahawks running back Marshawn Lynch on the far left end. At the opposite end, he slots chatty celebrity blogger-turned-Cameo-workhorse Perez Hilton, of whom Galanis says, “I promise if you booked him right now, the video would be done before we leave this room.”
…“The contrarian bet we made was that it would be way better for us to have people with small, loyal followings, often unknown to the general population, but who were willing to charge $5 to $10”, Galanis says. Cameo would employ a revenue-sharing model, getting a 25% cut of each video, while the rest went to the celeb. They wanted people like Galanis’ co-founder (and former Duke classmate) Devon Townsend, who had built a small following making silly Vine videos of his travels with pal Cody Ko, a popular YouTuber. “Devon isn’t Justin Bieber, but he had 25,000 Instagram followers from his days as a goofy Vine star”, explains Galanis. “He originally charged a couple bucks, and the people who love him responded, ‘Best money I ever spent!’”
…After a customer books a Cameo, the celeb films the video via the startup’s app within four to seven days. Most videos typically come in at under a minute, though some talent indulges in extensive riffs. (Inexplicably, “plant-based activist and health coach” Courtney Anne Feldman, wife of Corey, once went on for more than 20 minutes in a video for a customer.) Cameo handles the setup, technical infrastructure, marketing, and support, with white-glove service for the biggest earners with “whatever they need”—details like help pronouncing a customer’s name or just making sure they aren’t getting burned-out doing so many video shout-outs.
…For famous people of any caliber—the washed-up, the obscure micro-celebrity, the actual rock star—becoming part of the supply side of the Cameo marketplace is as low a barrier as it gets. Set a price and go. The videos are short—Instagram comedian Evan Breen has been known to knock out more than 100 at $25 a pop in a single sitting—and they don’t typically require any special preparation. Hair, makeup, wardrobe, or even handlers aren’t necessary. In fact, part of the oddball authenticity of Cameo videos is that they have a take-me-as-I-am familiarity—filmed at breakfast tables, lying in bed, on the golf course, running errands, at a stoplight, wherever it fits into the schedule.
I’m excited to announce that GitHub has signed an agreement to acquire npm.
For the millions of developers who use the public npm registry every day, npm will always be available and always be free. Our focus after the deal closes will be to:
Invest in the registry infrastructure and platform.
Improve the core experience.
Engage with the community.
New entrants in established markets face competing recommendations over whether it is better to establish their legitimacy by conforming to type or to differentiate themselves from incumbents by proposing novel contributions. This dilemma is particularly acute in cultural markets in which demand for novelty and attention to legitimacy are both high. We draw upon research in organizational theory and entrepreneurship to hypothesize the effects of pursuing narrow or broad appeals on the performance of new entrants in the music industry. We propose that the sales of novel products vary with the distance perceived between the classes being combined and that this happens, in part, because combinations that appear to span great distances encourage consumers to adopt superordinate rather than subordinate classes (eg. to classify and evaluate something as a “song” rather than a “country song”). Using a sample of 144 artists introduced to the public via the U.S. television program The Voice, we find evidence of a U-shaped relationship between category distance and consumer response. Specifically, consumers reward new entrants who pursue either familiarity (ie. nonspanning) or distinctive combinations (ie. combine distant genres) but reject efforts that try to balance both goals. An experimental test validates that manipulating the perceived distance an artist spans influences individual evaluations of product quality and the hierarchy of categorization. Together these results provide initial evidence that distant combinations are more likely to be classified using a superordinate category, mitigating the potential confusion and legitimacy-based penalties that affect middle-distance combinations.
We examine the record of cross-country growth over the past fifty years and ask if developing countries have made progress on closing the income gap between their per capita incomes and those in the advanced economies. We conclude that, as a group, they have not, and then survey the literature on absolute convergence with particular emphasis on that from the last decade or so. That literature supports our conclusion of a lack of progress in closing the income gap between countries. We close with a brief examination of the recent literature on cross-individual distribution of income, which finds that despite the lack of progress on cross-country convergence, global inequality has tended to fall since 2000.
There is growing evidence that human biology and behavior are influenced by infectious microorganisms. One such microorganism is the protozoan Toxoplasma gondii (TG). Using longitudinal data covering the female population of Denmark, we extend research on the relationship between TG infection and entrepreneurial activity and outcomes. Results indicate that TG infection is associated with a subsequent increase in the probability of becoming an entrepreneur, and is linked to other outcomes including venture performance. With parasite behavioral manipulation antithetical to rational judgment, we join a growing conversation on biology and alternative drivers of business venturing.
Health and social scientists have documented the hospital revolving-door problem, the concentration of crime, and long-term welfare dependence. Have these distinct fields identified the same citizens? Using administrative databases linked to 1.7 million New Zealanders, we quantified and monetized inequality in distributions of health and social problems and tested whether they aggregate within individuals. Marked inequality was observed: Gini coefficients equalled 0.96 for criminal convictions, 0.91 for public-hospital nights, 0.86 for welfare benefits, 0.74 for prescription-drug fills and 0.54 for injury-insurance claims. Marked aggregation was uncovered: a small population segment accounted for a disproportionate share of use-events and costs across multiple sectors. These findings were replicated in 2.3 million Danes. We then integrated the New Zealand databases with the four-decade-long Dunedin Study. The high-need/high-cost population segment experienced early-life factors that reduce workforce readiness, including low education and poor mental health. In midlife they reported low life satisfaction. Investing in young people’s education and training potential could reduce health and social inequalities and enhance population wellbeing.
In Western countries, the distribution of relative incomes within marriages tends to be skewed in a remarkable way. Not only do husbands usually earn more than their female partners, but there is also a striking discontinuity in their relative contributions to the household income at the 50:50 point: many wives contribute just a bit less than or as much as their husbands, but few contribute more. This ‘cliff’ has been interpreted as evidence that men and women avoid situations where a wife would earn more than her husband, since this would go against traditional gender norms.
In this paper, we use a simulation approach to model marriage markets and demonstrate that a cliff in the relative income distribution can also emerge without such avoidance. We feed our simulations with income data from 27 European countries.
Results show that a cliff can emerge from inequalities in men’s and women’s average incomes, even if they do not attach special meaning to a situation in which a wife earns more than her husband.
…The observed discontinuity in the distribution of relative incomes within households would be consistent with a norm that favours male superiority in income, if such a norm existed. However, in this paper, we argue that such a norm is not necessary to generate a discontinuity. Instead, we suggest that a cliff may emerge even if both men and women prefer partners with high income over partners with low income, if we consider that even in the most gender egalitarian societies women’s average income is lower than men’s.
Our argument is based on the following intuition. If people strive for high-income partners, men who rank high in the male income distribution will be in the best position to compete for women who rank high in the female income distribution, and vice versa. Some men may therefore form unions with similar-income partners, but because women’s average income is lower, many men will face a shortage of partners with similar or even higher income. Unless they are willing to remain single, these men will have to form unions with women who earn less than they do. Women, by contrast, will less often have to ‘settle’ for a lower-income partner. These differences in men’s and women’s marriage market opportunities are likely not only to create a right skew in the distribution of women’s contribution to household income, but also a discontinuity close to the 50:50 point. This occurs even if people are no more averse to a situation in which the wife out-earns her husband than to a situation in which he out-earns her.
We demonstrate the logical consistency and empirical plausibility of our argument with a simulation study in which we compare the outcomes of a simple marriage market model with the observed distributions of relative income in the 27 countries shown in Figure 1. The model assumes that men and women strive for a high joint income in the unions that they form, while using their own income as a point of reference for determining the minimum income they expect in a partner. However, they do not evaluate a situation in which a wife out-earns her husband any differently from a situation in which he out-earns her. Our results show that partner choice based on this preference tends to generate a right skew in the distribution of relative incomes within households and, most importantly, a discontinuity at the 50:50 point.
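A toy version of this mechanism, not the authors' model: draw male and female incomes from same-shaped distributions with different means, match couples income-assortatively with idiosyncratic noise, and inspect the wife's share of household income. The right skew and the falling density across the 50:50 point emerge without any aversion to wives out-earning husbands; the authors' fuller model, with reservation incomes, sharpens this into a discontinuity. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Same-shaped income distributions with a lower mean for women
# (parameters are illustrative, not the paper's country data).
men   = rng.lognormal(mean=10.3, sigma=0.5, size=n)
women = rng.lognormal(mean=10.0, sigma=0.5, size=n)

# Imperfect income-assortative matching: both sexes prefer high-income
# partners, with idiosyncratic noise standing in for everything else
# people match on. No penalty anywhere for wives out-earning husbands.
m_rank = np.argsort(np.log(men)   + rng.normal(0, 0.4, n))
w_rank = np.argsort(np.log(women) + rng.normal(0, 0.4, n))
men, women = men[m_rank], women[w_rank]   # pair equal ranks

share = women / (men + women)    # wife's share of household income

# Density of the wife's share just below vs. just above the 50:50 point.
for lo, hi in [(0.40, 0.45), (0.45, 0.50), (0.50, 0.55), (0.55, 0.60)]:
    frac = np.mean((share >= lo) & (share < hi))
    print(f"share in [{lo:.2f}, {hi:.2f}): {frac:.3f}")
```

The printed bins show substantial mass just below 0.5 and noticeably less just above it, purely because women's incomes are lower on average at every rank.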
[In competitive labor markets, workers choose compensation as a package of luxury/hobby consumption, like being a fashion designer or musician or video game programmer, and financial pay; all of this is extremely well-known, so anyone who chooses to go into those is demonstrating strong revealed preferences… “Just world” has nothing to do with it.] The pursuit of passion in one’s work is touted in contemporary discourse. Although passion may indeed be beneficial in many ways, we suggest that the modern cultural emphasis may also serve to facilitate the legitimization of unfair and demeaning management practices—a phenomenon we term the legitimization of passion exploitation.
Across 7 studies and a meta-analysis, we show that people do in fact deem poor worker treatment (eg. asking employees to do demeaning tasks that are irrelevant to their job description, asking employees to work extra hours without pay) as more legitimate when workers are presumed to be “passionate” about their work. Of importance, we demonstrate 2 mediating mechanisms by which this process of legitimization occurs: (1) assumptions that passionate workers would have volunteered for this work if given the chance (Studies 1, 3, 5, 6, and 8), and (2) beliefs that, for passionate workers, work itself is its own reward (Studies 3, 4, 5, 6, and 8). We also find support for the reverse direction of the legitimization process, in which people attribute passion to an exploited (vs. non-exploited) worker (Study 7). Finally, and consistent with the notion that this process is connected to justice motives, a test of moderated mediation shows this is most pronounced for participants high in belief in a just world (Study 8).
Taken together, these studies suggest that although passion may seem like a positive attribute to assume in others, it can also license poor and exploitative worker treatment.
[Keywords: social justice, motivated cognition, self-help ideology, passion]
Underground marketplaces have emerged as a common channel for criminals to offer their products and services. A portion of these products comprises the illegal trading of consumer products such as vouchers, coupons, and loyalty program accounts that are later used to commit business fraud. Despite its well-known existence, the impact of this type of business fraud has not been analyzed in depth before.
By leveraging longitudinal data from 8 major underground markets from 2011–2017 [Agora, Alphabay, BlackMarket Reloaded, Evolution, Hydra, Pandora, Silk Road 1, Silk Road 2], we identify, classify, and quantify different types of business fraud to then analyze the characteristics of the companies who suffered from them. Moreover, we investigate factors that influence the impact of business fraud on these companies.
Our models show that cybercriminals prefer selling products of well-established companies, while smaller companies appear to suffer higher revenue losses. Stolen accounts are the most transacted items, while pirated software together with loyalty programs create the heaviest revenue losses. The estimated criminal revenues are relatively low, at under $600,000 in total for the whole period; but the total estimated revenue losses are up to $7.5 million.
This study examines the revealed preference of informed traders to infer the extent to which earnings announcements are informative of subsequent stock price responses.
From 2011 to 2015, a cartel of sophisticated traders illegally obtained early access to firm press releases prior to publication and traded over 1,000 earnings announcements. I study their constrained profit maximization: which earnings announcements they chose to trade [9.25%] vs. which ones they forwent trading.
Consistent with theory, these traders targeted more liquid earnings announcements with larger subsequent stock price movement. Despite earning large profits overall, the informed traders enjoyed only mixed success in identifying the biggest profit opportunities. Controlling for liquidity differences, only 31% of their trades were in the most extreme announcement period return deciles. I model the informed traders’ tradeoff between liquidity and expected returns. From this model, I recover an average signal-to-noise ratio of 0.4.
I further explore 2 potential economic sources of this noise: (1) ambiguous market expectations of earnings announcements and (2) heterogeneous interpretations of earnings information by the marginal investor. Empirically, I document that the informed traders avoided noisier earnings announcements as measured by both sources of noise.
…Empirically, I test whether the informed traders behaved in a manner consistent with market microstructure theory. First, on the extensive margin, the informed traders chose more liquid earnings announcements. Compared to the unconditional mean probability of informed trade, a one-standard-deviation increase in liquidity increases the probability of trade by 50%. Liquidity is especially important in this setting because of detection risk. Large price impact prior to public disclosures bears the risk of discovery. Second, the informed traders chose earnings announcements with larger ex-post returns. A one-standard-deviation increase in the magnitude of realized stock returns increases the probability of trade by 19%. This finding confirms the joint hypothesis that informed traders could identify, and preferred to trade on, earnings with larger returns. Furthermore, on the intensive margin, the informed traders more aggressively traded earnings announcements with higher returns. Conditional on a stock that is informed-traded, a one-percentage-point increase in realized stock returns increases the informed traders’ price impact by 8.5 bps.
…To estimate signal noise from performance, I formulate a model of informed trade. In my model, an investor receives an array of noisy private signals about announcement period returns. The investor seeks to maximize profit by choosing to trade earnings announcements that are liquid and have large returns. The investor’s ability to do so depends on the precision of his return signals (ie. the earnings announcements). I estimate my model using simulated method of moments (SMM), where my moments are average returns, liquidity, and their interaction. Using these moments, I recover parameter estimates that imply informed traders were willing to forgo 1% of expected return in exchange for 0.65 standard deviations of liquidity. Their performance implies a low signal-to-noise ratio (SNR) of 0.4 on average. Within the context of this natural experiment, this is a causal estimate: signal quality determines performance. For comparison, I consider a simple benchmark trading strategy based on earnings surprise. This benchmark yields a comparable SNR estimate of 0.42. I infer from these low signal-to-noise ratios that earnings announcement press releases are poor signals of subsequent stock price responses.
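A toy simulation of what a low SNR implies for hit rates, under one loose operationalization: SNR taken as the variance of the true return component relative to the noise, and "trading" the 10% most extreme signals. Trades then land in the extreme true-return decile only a minority of the time, in the spirit of the paper's 31% figure; every detail here is my assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# True announcement-period returns (standardized) and a noisy private
# signal. SNR here = var(true component) / var(noise), an assumption.
snr = 0.4
r = rng.normal(0, 1, n)
s = r + rng.normal(0, np.sqrt(1 / snr), n)

# Trade the announcements whose signal looks most extreme, then ask how
# many landed in the extreme true-return decile.
traded  = np.abs(s) >= np.quantile(np.abs(s), 0.9)
extreme = np.abs(r) >= np.quantile(np.abs(r), 0.9)
print(f"hit rate: {extreme[traded].mean():.2f}")
# Well above the 0.10 chance rate, but far from perfect foresight.
```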
…This unique natural experiment reveals a general fact that earnings announcements are noisy signals of subsequent market reactions. The informed traders had “perfect foresight” from stolen earnings announcement press releases, but they were only able to enjoy mixed success in predicting next-day stock returns. Their poor performance implies that capital market participants have difficulty mapping earnings information to stock price reactions. The contributions of this paper are to empirically quantify the limited informativeness of quarterly earnings announcements to individual investors, provide evidence on the likely sources of signal noise, and shed light on how this noise affects the behaviour of capital market participants.
Life’s major purchases, such as buying a home or going to college, often involve taking on considerable debt. What are the downstream emotional consequences? Does carrying debt influence consumers’ general sense of satisfaction in life?
7 studies examine the relationship between consumers’ debt holdings and life satisfaction, showing that the effect depends on the type of debt. Though mortgages tend to comprise consumers’ largest debts, and though credit card balances tend to have the highest interest rates, we found among a diverse sample of American adults (n = 5,808) that the type of debt most strongly associated with lower levels of life satisfaction is student loans. We further found that the extent to which consumers mentally label a given debt type as “debt” drives the emotional consequences of those debt holdings, and compared to the other debt types, student loans are perceived more as “debt.”
Together the findings suggest that carrying debt can spill over to undermine people’s overall subjective well-being, especially when their debt is perceived as such.
A defining feature of modern economic growth is the systematic application of science to advance technology. However, despite sustained progress in scientific knowledge, recent productivity growth in the United States has been disappointing. We review major changes in the American innovation ecosystem over the past century.
The past three decades have been marked by a growing division of labor between universities focusing on research and large corporations focusing on development. Knowledge produced by universities is not often in a form that can be readily digested and turned into new goods and services. Small firms and university technology transfer offices cannot fully substitute for corporate research, which had previously integrated multiple disciplines at the scale required to solve substantial technical problems. Therefore, whereas the division of innovative labor may have raised the volume of science by universities, it has also slowed, at least for a period of time, the transformation of that knowledge into novel products and processes.
Employment is thought to be more enjoyable and beneficial to individuals and society when there is alignment between the person and the occupation, but a key question is how to best match people with the right profession. The information that people broadcast online through social media provides insights into who they are, which we show can be used to match people and occupations. Findings have implications for career guidance for new graduates, disengaged employees, career changers, and the unemployed.
Work is thought to be more enjoyable and beneficial to individuals and society when there is congruence between one’s personality and one’s occupation. We provide large-scale evidence that occupations have distinctive psychological profiles, which can successfully be predicted from linguistic information unobtrusively collected through social media. Based on 128,279 Twitter users representing 3,513 occupations, we automatically assess user personalities and visually map the personality profiles of different professions. Similar occupations cluster together, pointing to specific sets of jobs that one might be well suited for. Observations that contradict existing classifications may point to emerging occupations relevant to the 21st century workplace. Findings illustrate how social media can be used to match people to their ideal occupation.
[Keywords: personality, employment, linguistic analysis, social media, 21st century workplace]
Evidence from different countries suggests that non-cognitive skills play an important role in wage determination and overall social outcomes, but studies for Canada are scarce. We contribute to filling this gap by estimating wage regressions with the Big Five traits using the Longitudinal and International Study of Adults. Our results indicate that conscientiousness is positively associated with wages, while agreeableness, extraversion, and neuroticism are associated with negative returns, with higher magnitudes on agreeableness and conscientiousness for females. Cognitive ability has the highest estimated wage return, so, while substantial, non-cognitive skills do not seem to be the most important wage determinant.
[Keywords: management, labour market, returns to skills, non-cognitive skill, cognitive skill, wage regressions, personality traits, Five-Factor Model]
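A schematic of the kind of wage regression estimated above, run on synthetic data; the coefficients are invented to mirror the signs reported in the abstract, and the variable names are stand-ins for the LISA survey measures:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000

# Synthetic stand-in for the survey: standardized Big Five scores,
# cognitive ability, and log hourly wages (all coefficients invented,
# chosen only to match the abstract's reported signs).
df = pd.DataFrame(rng.normal(size=(n, 6)),
                  columns=["openness", "conscientiousness", "extraversion",
                           "agreeableness", "neuroticism", "cognitive"])
df["log_wage"] = (2.8 + 0.06 * df.conscientiousness - 0.04 * df.agreeableness
                  - 0.02 * df.extraversion - 0.02 * df.neuroticism
                  + 0.10 * df.cognitive + rng.normal(0, 0.4, n))

fit = smf.ols("log_wage ~ openness + conscientiousness + extraversion"
              " + agreeableness + neuroticism + cognitive", data=df).fit()
print(fit.params.round(3))
```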
The difference in angel investing between Silicon Valley and everywhere else isn’t just a difference in perceived risk/reward or a difference in FOMO. It’s that angel investing fulfils a completely different purpose in Silicon Valley than it does elsewhere. It’s not just a financial activity; it’s a social status exercise.
Angel Investors in the Bay Area aren’t just in it for the financial returns; they’re also in it for the social returns.
The Bay Area tech ecosystem has been so successful that startup-related news has become the principal determinant of social status in San Francisco. In other cities, you acquire and flex social status by joining exclusive neighbourhoods or country clubs, or through philanthropic gestures, or even something as simple as what car you drive. In San Francisco, it’s angel investing. Other than founding a successful startup yourself, there’s not much higher-status in the Bay Area than backing founders that go on to build Uber or Stripe…The end result is that the Bay Area has a critical density of people who are willing to offer founders a term sheet for enough investment, and at attractive enough valuations, that it makes sense for the founder to actually accept them. I honestly believe that without this social “subsidy”, a lot of angel investing stops working. If investors were being purely rational, they could only offer something like a $2 million valuation for founders’ first cheques. And if entrepreneurs are smart, they know they can’t accept it; it makes them un-fundable from that day forward.
The social rewards of angel investing solve an important chicken-and-egg problem in early stage fundraising that financial rewards do not.
One of the biggest frustrations you face as a founder out fundraising is the refrain: “This sounds really interesting. I love it. Let me know when there are a bunch of other people investing, and then I’ll invest too.” From far away, it’s easy to label this behaviour as cowardly investing. But it happens for a reason…The social returns to angel investing resolve our chicken/egg problem: they turn angel investing into a kind of “race to be first” that is much more aligned with the founder, and more conducive to breaking inertia and completing deals. The founder wants you to move first, and so do you.
The social returns to angel investing have a strong geographical network effect, because they require a threshold density in order to kick in.
…If you can assemble enough early stage investors together, it should conceptually become self-sustaining. Once you have that sufficient density of people who care about the social return to angel investing, and you establish a genuine “early stage capital market” that is subsidized in part by the social and emotional job that it’s doing for its angel members, you create something really special. You get the rare conditions where capital is available for founders at high enough valuations, with no strings attached, and by investors who are evaluating them “the right way”, that you actually sustain a scene that produces startups in sufficient numbers to generate those few unlikely mega-winners that replenish angels’ bank accounts and keep the cycle going.
Dark markets are commercial websites that use Bitcoin to sell or broker transactions involving drugs, weapons, and other illicit goods. Being illegal, they do not offer any user protection, and several police raids and scams have caused large losses to both customers and vendors over the past years. However, this uncertainty has not prevented a steady growth of the dark market phenomenon and a proliferation of new markets. The origins of this resilience have remained unclear so far, also due to the difficulty of identifying relevant Bitcoin transaction data. Here, we investigate how the dark market ecosystem re-organizes following the disappearance of a market, due to factors including raids and scams. To do so, we analyse 24 episodes of unexpected market closure through a novel dataset of 133 million Bitcoin transactions involving 31 dark markets and their users, totaling 4 billion USD. We show that coordinated user migration from the closed market to coexisting markets guarantees overall systemic resilience beyond the intrinsic fragility of individual markets. The migration is swift, efficient, and common to all market closures. We find that migrants are on average more active users in comparison to non-migrants and move preferentially towards the coexisting market with the highest trading volume. Our findings shed light on the resilience of the dark market ecosystem, and we anticipate that they may inform future research on the self-organisation of emerging online markets.
Significance: Conscientiousness (C) is the most potent noncognitive predictor of occupational performance. However, questions remain about how C relates to a plethora of occupational variables, what its defining characteristics and functions are in occupational settings, and whether its performance relation differs across occupations. To answer these questions, we quantitatively review 92 meta-analyses reporting relations to 175 occupational variables. Across variables, results reveal a substantial mean effect of ρM = 0.20.
We then use the results to synthesize 10 themes that characterize C in occupational settings. Finally, we discover that the performance effects of C are weaker in high-complexity than in low-to-moderate-complexity occupations. Thus, for optimal occupational performance, we encourage decision makers to match C’s goal-directed motivation and behavioral restraint to more predictable environments.
Evidence from more than 100 y of research indicates that conscientiousness (C) is the most potent noncognitive construct for occupational performance. However, questions remain about the magnitudes of its effect sizes across occupational variables, its defining characteristics and functions in occupational settings, and potential moderators of its performance relation. Drawing on 92 unique meta-analyses reporting effects for 175 distinct variables, which represent n > 1.1 million participants across k > 2,500 studies, we present the most comprehensive, quantitative review and synthesis of the occupational effects of C available in the literature. Results show C has effects in a desirable direction for 98% of variables and a grand mean of ρM = 0.20 (SD = 0.13), indicative of a potent, pervasive influence across occupational variables. Using the top 33% of effect sizes (ρ≥0.24), we synthesize 10 characteristic themes of C’s occupational functioning: (1) motivation for goal-directed performance, (2) preference for more predictable environments, (3) interpersonal responsibility for shared goals, (4) commitment, (5) perseverance, (6) self-regulatory restraint to avoid counterproductivity, and (7) proficient performance—especially for (8) conventional goals, (9) requiring persistence. Finally, we examine C’s relation to performance across 8 occupations. Results indicate that occupational complexity moderates this relation. That is, (10) high occupational complexity versus low-to-moderate occupational complexity attenuates the performance effect of C. Altogether, results suggest that goal-directed performance is fundamental to C and that motivational engagement, behavioral restraint, and environmental predictability influence its optimal occupational expression. We conclude by discussing applied and policy implications of our findings.
Native advertising is a type of online advertising that matches the form and function of the platform on which it appears. In practice, the choice between display and in-feed native advertising presents brand advertisers and online news publishers with conflicting objectives. Advertisers face a trade-off between ad clicks and brand recognition, whereas publishers need to strike a balance between ad clicks and the platform’s trustworthiness. For policy makers, concerns that native advertising confuses customers prompted the U.S. Federal Trade Commission to issue guidelines for disclosing native ads. This research aims to understand how consumers respond to native ads versus display ads and to different styles of native ad disclosures, using randomized online and field experiments combining behavioral clickstream, eye movement, and survey response data. The results show that when the position of an ad on a news page is controlled for, a native ad generates a higher click-through rate because it better resembles the surrounding editorial content. However, a display ad leads to more visual attention, brand recognition, and trustworthiness for the website than a native ad.
[Keywords: native advertising, public policy, eye-tracking, field experiments, advertising disclosure]
We explore the effectiveness of experimentation as a learning mechanism through a historical exploration of the early automobile industry. We focus on a particular subset of experiments, called strategic pivots, that requires irreversible firm commitments. Our analysis suggests that strategic pivoting was associated with success. We identify lessons that could only plausibly be learned through strategic pivoting and document that those firms that were able to learn from the strategic pivots were most likely to succeed. Even though firms may use lean techniques, market solutions may only be discovered through strategic pivots whose outcomes are unknowable ex-ante. Therefore, successful strategies reflect an element of luck.
We explore the effectiveness of economic experimentation as a learning mechanism through a historical exploration of the early automobile industry. We focus on a particular subset of economic experiments, called strategic pivots, that requires irreversible firm commitments.
Our quantitative analysis suggests that strategic pivoting was associated with success. We then use historical methods to understand whether this association is reasonably interpreted as a causal link. We identify lessons that could only plausibly have been learned through strategic pivoting and document that those firms that were able to learn from the strategic pivots were most likely to succeed.
We discuss the generalizability of our findings to build the hypothesis that strategic pivots and economic experiments originate firm strategy.
…In this sense, new model introductions are best understood as Rosenbergian Economic Experiments. Rosenberg (1994, p. 88) argued that economic experiments are necessary when both the market solution and an understanding of interdependencies are difficult to deduce from “first principles”. We infer that indeed in this context entrepreneurs found it difficult to know the best way forward, because the historical record reveals that even firms that proved, ex post, to be on the right track were, ex ante, unsure that they were making the right choices. The interdependencies associated with producing and selling new models implied substantial irreversible commitments. In this sense, automobile entrepreneurs were subject to the “paradox of entrepreneurship” (Gans et al 2016). That is, the outcome of each experiment was unknowable, and the choice to conduct certain experiments foreclosed future options.
We compare the absolute and relative performance of three approaches to predicting outcomes for entrants in a business plan competition in Nigeria: Business plan scores from judges, simple ad-hoc prediction models used by researchers, and machine learning approaches. We find that (1) business plan scores from judges are uncorrelated with business survival, employment, sales, or profits three years later; (2) a few key characteristics of entrepreneurs such as gender, age, ability, and business sector do have some predictive power for future outcomes; (3) modern machine learning methods do not offer noticeable improvements; (4) the overall predictive power of all approaches is very low, highlighting the fundamental difficulty of picking competition winners.
Digital platforms, such as Facebook, Uber, and AirBnB, create value by connecting users, creators, and contractors of different types. Their rapid growth, untraditional business model, and disruptive nature present challenges for managers and asset pricers. These features also, arguably, make them natural monopolies, leading to increasing calls for special regulations and taxes.
We construct and illustrate an approach for modeling digital platforms. The model allows for heterogeneity in elasticity of demand and heterogeneous network effects across different users. We parameterize our model using a survey of over 40,000 US internet users on their demand for Facebook. Facebook creates about 11.2 billion dollars in consumer surplus a month for US users age 25 or over, in line with previous estimates. We find Facebook has too low a level of advertising relative to its revenue-maximizing strategy, suggesting that it also values maintaining a large user base.
We simulate six proposed government policies for digital platforms, taking Facebook’s optimal response into account. Taxes only slightly change consumer surplus. Three more radical proposals, including ‘data as labor’ and nationalization, have the potential to raise consumer surplus by up to 42%. But a botched regulation that left the US with two smaller, non-competitive social media monopolies would decrease consumer surplus by 44%.
This paper provides a large-scale, empirical evaluation of unintended effects from invoking the precautionary principle after the Fukushima Daiichi nuclear accident. After the accident, all nuclear power stations ceased operation and nuclear power was replaced by fossil fuels, causing an exogenous increase in electricity prices. This increase led to a reduction in energy consumption, which caused an increase in mortality during very cold temperatures. We estimate that the increase in mortality from higher electricity prices outnumbers the mortality from the accident itself: the decision to cease nuclear production has contributed to more deaths than the accident did.
A Genome-wide association study (GWAS) estimates the size and statistical significance of the effect of common genetic variants on a phenotype of interest. A Polygenic Score (PGS) is a score, computed for each individual, summarizing the expected value of a phenotype on the basis of the individual’s genotype. The PGS is computed as a weighted sum of the values of the individual’s genetic variants, using as weights the GWAS-estimated coefficients from a training sample. Thus, the PGS carries information on the genotype, and only on the genotype, of an individual. In our case the phenotypes of interest are measures of educational achievement, such as having a college degree or years of education, in a sample of ~2,700 adult twins and their parents.
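The PGS construction in code form, a minimal sketch with made-up dimensions and effect sizes; dosage coding in {0, 1, 2} is standard practice, but nothing here is the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1_000, 5_000

# Genotype matrix: allele dosages in {0, 1, 2} per person per variant.
G = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)
# Per-variant effect sizes as estimated by a GWAS on a training sample
# (here simply random placeholders).
beta = rng.normal(0, 0.01, n_snps)

# Polygenic score: a weighted sum of each individual's variants.
pgs = G @ beta
# Common practice is to standardize before using it as a regressor.
pgs = (pgs - pgs.mean()) / pgs.std()
print(pgs[:5].round(2))
```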
We set up the analysis in a standard model of optimal parental investment and intergenerational mobility, extended to include a fully specified genetic analysis of skill transmission, and show that the model’s predictions on mobility differ substantially from those of the standard model. For instance, the coefficient of intergenerational income elasticity may be larger, and may differ across countries because the distribution of the genotype is different, completely independently of any difference in institutions, technology, or preferences.
We then study how much of the educational achievement is explained by the PGS for education, thus estimating how much of the variance of education can be explained by genetic factors alone. We find a substantial effect of PGS on performance in school, years of education and college.
Finally we study the channels between the PGS and educational achievement, distinguishing how much is due to cognitive skills and how much to personality traits. We show that the effect of the PGS is substantially stronger on Intelligence than on other traits, like Constraint, which seem natural explanatory factors of educational success. For educational achievement, both cognitive and non-cognitive skills are important, although the larger fraction of success is channeled through Intelligence.
Smith challenged the longstanding assumption that inferior development outcomes reflected inferior groups, and that superior groups should coerce inferior groups to make development happen. Smith made clear that the positive-sum benefits of markets required respecting the right to consent of all individuals, from whatever group. These ideas led Smith to be a fierce critic of European conquest, enslavement, and colonialism of non-Europeans.
The loss of Smith’s insights led to a split in later intellectual history of pro-market and anti-colonial ideas. The importance of the right to consent is still insufficiently appreciated in economic development debates today.
Does advertising revenue increase or diminish content differentiation in media markets? This paper shows that an increase in the technically feasible number of ad breaks per video leads to an increase in content differentiation between several thousand YouTube channels. I exploit two institutional features of YouTube’s monetization policy to identify the causal effect of advertising on the YouTubers’ content choice. The analysis of around one million YouTube videos shows that advertising leads to a twenty percentage point reduction in the YouTubers’ probability of duplicating popular content, ie. content in high demand by the audience. I also provide evidence of the economic mechanism behind the result: popular content is covered by many competing YouTubers; hence, viewers who perceive advertising as a nuisance could easily switch to a competitor if a YouTuber increased her number of ad breaks per video. This is less likely, however, when the YouTuber differentiates her content from her competitors.
[Keywords: advertising, content differentiation, economics of digitization, horizontal product differentiation, long tail, media diversity, user-generated content, YouTube]
…The analysis of around one million YouTube videos shows that an increase in the feasible number of ad breaks per video leads to a twenty percentage point reduction in the YouTubers’ probability of duplicating popular content. The effect size is considerable: it corresponds to around 40% of a standard deviation in the dependent variable and to around 50% of its baseline value.
The large sample size allows me to conduct several sub-group analyses to study effect heterogeneity. I find that the positive effect of advertising on content differentiation is driven by the YouTubers who have at least 1,000 subscribers, ie. the YouTubers whose additional ad revenue is likely to exceed the costs of adapting their videos’ content. In addition, I find heterogeneity along video categories: some categories are more flexible in terms of their typical video duration than others; hence, exploiting the ten-minute trick is easier in some than in others (eg. a music clip is typically between three and five minutes long and cannot be easily extended). A battery of robustness checks confirms these results.
…Moreover, I show that ad revenue does not necessarily improve the YouTubers’ video quality. Although the number of views goes up when a video has more ad breaks, the relative number of likes decreases…Table 5 shows the results. The size of the estimates for δ′′ (columns 1 to 3), though statistically-significant at the 1%-level, is negligible: a one-second increase in video duration corresponds to a 0.0001 percentage point increase in the fraction of likes. The estimates for δ′′′ in columns 4 to 6, though, are relatively large and statistically-significant at the 1%-level, too. According to these estimates, one further second of video duration leads on average to about 1.5% more views. These estimates may reflect the algorithmic drift discussed in Section 9.2. YouTube wants to keep its viewers on the platform as long as possible to show them as many ads as possible. As a result, longer videos get higher rankings and are watched more often.
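A plausible reading of the two specifications behind Table 5 (notation adapted from the excerpt; the paper’s exact controls are not reproduced here):

```latex
\text{LikeShare}_v = \delta''\,\text{Duration}_v + X_v'\gamma + \epsilon_v,
\qquad
\log(\text{Views}_v) = \delta'''\,\text{Duration}_v + X_v'\lambda + \eta_v,
```

where $\text{Duration}_v$ is video $v$’s length in seconds and $X_v$ collects controls; the quoted magnitudes would then correspond to $\delta'' \approx 0.0001$ percentage points and $\delta''' \approx 0.015$ log points per extra second.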
Artificial intelligence (AI) is surpassing human performance in a growing number of domains. However, there is limited evidence of its economic effects. Using data from a digital platform, we study a key application of AI: machine translation.
We find that the introduction of a new machine translation system has substantially increased international trade on this platform, increasing exports by 10.9%. Furthermore, heterogeneous treatment effects are consistent with a substantial reduction in translation costs.
Our results provide causal evidence that language barriers substantially hinder trade and that AI has already begun to improve economic efficiency in at least one domain.
…As is often the case, the truth lies somewhere in between these extremes.
Trump’s offer to buy Greenland is not a wild-eyed fluke. Instead, it reflects a steadily increasing American interest in Greenland that is spurred by fear of Chinese and Russian encroachments. At the same time, however, a quest to purchase Greenland is not the optimal way to achieve American security interests, as it is unlikely to succeed, and even if it did, it would be far more expensive than other, more sensible approaches.
Instead, the United States should engage with Denmark and Greenland to find common ground on shared concerns…Instead of offering to buy Greenland, the United States should pursue an engagement strategy that combines targeted concessions with clever diplomacy to get the Danes and Greenlanders to cooperate. Luckily, if approached correctly, both nations are very interested in supporting U.S. security interests, as they are broadly shared—especially in Copenhagen. The key will be to see this not as a zero-sum game, but as a win-win-win situation.
A classical approach to collecting and processing information for entrepreneurial decisions combines search heuristics, such as trial and error, effectuation, and confirmatory search. This paper develops a framework for exploring the implications of a more scientific approach to entrepreneurial decision making.
The panel sample of our randomized control trial includes 116 Italian startups and 16 data points over a period of about one year. Both the treatment and control groups receive 10 sessions of general training on how to obtain feedback from the market and gauge the feasibility of their idea. We teach the treated startups to develop frameworks for predicting the performance of their idea and to conduct rigorous tests of their hypotheses, much as scientists do in their research. We let the firms in the control group instead follow their intuitions about how to assess their idea, which typically produces fairly standard search heuristics.
We find that entrepreneurs who behave like scientists perform better, are more likely to pivot to a different idea, and are not more likely to drop out than the control group in the early stages of the startup.
These results are consistent with the main prediction of our theory: a scientific approach improves precision—it reduces the odds of pursuing projects with false positive returns and increases the odds of pursuing projects with false negative returns.
[Keywords: entrepreneurship, decision making, scientific method, startup, randomized control trial]
A standard Lloyd’s contract defined disgrace in vague terms—as “any criminal act, or any offence against public taste or decency…which degrades or brings that person into disrepute or provokes insult or shock to the community.” Most effective policies rely on precise terms and evidence that both sides can agree on—the Richter scale, a hospital bill. Subjective wording leads to disputes. Insurance “has to involve no litigation”, says Bill Hubbard, CEO of the entertainment insurer HCC Specialty Group. “You know the Supreme Court justice who said, ‘I know pornography when I see it’? You can’t settle claims that way.”
The contracts were much clearer on the definition of what didn’t merit a payout: Many of them exempted non-felonious offenses and acts committed prior to the policy’s start date. Even if the All the Money producers had bought a policy, Spacey’s past transgressions might have been excluded, treated as preexisting conditions.
While these limitations kept the industry small, the foibles of the rich and famous only increased demand for a better product. Tiger Woods’s 2009 car crash, followed by revelations of his infidelities, cost him $30.1 million ($22.0 million in 2009 dollars) in contracts with brands like AT&T and Gatorade—which was nothing compared to what they cost the companies. A UC Davis study put the brands’ shareholder losses somewhere between $6.8 billion ($5.0 billion in 2009) and $16.4 billion ($12.0 billion in 2009).
But it wasn’t Woods who made disgrace insurance look viable; it was reality television. A few months before the golfer’s car crash came what one underwriter refers to only as “that Viacom loss.” Ryan Jenkins, then a contestant on the VH1 reality show Megan Wants a Millionaire and the star of an upcoming season of I Love Money, became the lead suspect in his wife’s murder and killed himself a few days later. Megan was canceled after three episodes and the Money season shelved entirely, costing Viacom seven figures in losses. That’s when the company started buying disgrace insurance.
Thousands of reality shows have been insured in the ensuing decade, many of them via two insurance brokers, Gallagher Entertainment and HUB International. HUB’s managing director, Bob Jellen, can recall about half a dozen claims paying out since the Jenkins murder. He wouldn’t offer specifics, but others have given two examples: P.I. Moms, which was canceled in 2011 following fraud and drug charges, and Spike TV’s Bar Rescue, after an owner killed a country singer in his own rescued bar. “It’s something we don’t advertise”, says Jellen of disgrace insurance. “You don’t have to sell people on disgrace.”
Our task is simple: we will consider whether the rate of scientific progress has slowed down, and more generally what we know about the rate of scientific progress, based on these literatures and other metrics we have been investigating. This investigation will take the form of a conceptual survey of the available data. We will consider which measures are out there, what they show, and how we should best interpret them, to attempt to create the most comprehensive and wide-ranging survey of metrics for the progress of science. In particular, we integrate a number of strands in the productivity growth literature, the “science of science” literature, and various historical literatures on the nature of human progress.
…To sum up the basic conclusions of this paper, there is good and also wide-ranging evidence that the rate of scientific progress has indeed slowed down. In the disparate and partially independent areas of productivity growth, total factor productivity, GDP growth, patent measures, researcher productivity, crop yields, life expectancy, and Moore’s Law, we have found support for this claim.
One implication here is we should not be especially optimistic about the productivity slowdown, as that notion is commonly understood, ending any time soon. There is some lag between scientific progress and practical outputs, and with science at less than its maximum dynamic state, one might not expect future productivity to fare so well either. Under one more specific interpretation of the data, a new General Purpose Technology might be required to kickstart economic growth once again.
Dark Net Markets (DNMs) are websites found on the Dark Net that facilitate the anonymous trade of illegal items such as drugs and weapons. Despite repeated law enforcement interventions on DNMs, the ecosystem has continued to grow since the first DNM, Silk Road, in 2011. This research project investigates the resilience of the ecosystem and tries to understand which characteristics allow it to evade law enforcement.
This thesis comprises three studies. The first uses a dataset containing publicly available, scraped data from 34 DNMs to quantitatively measure the impact of a large-scale law enforcement operation, Operation Onymous, on the vendor population. This impact is compared to the impact of the closure of the DNM Evolution in an exit scam. For both events, the impact on different vendor populations (for example, those who are directly affected and those who aren’t) is compared and the characteristics that make vendors resilient to each event are investigated.
In the second study, a dataset acquired [by a UK law enforcement agency] from the server of the DNM Silk Road 2.0 is used to better understand the relationships between buyers and vendors. Network analysis and statistical techniques are used to explore when buyers trade and with whom. This dataset is also used to measure the impact of a hack of Silk Road 2.0 on its population.
In the final study, discussions from the forum site Reddit were used to qualitatively assess user perceptions of two law enforcement interventions. These interventions were distinct in nature: one, Operation Hyperion, involved warning users and arresting individuals, while the second, Operation Bayonet, actively closed a DNM. Grounded Theory was used to identify topics of conversation and directly compare the opinions held by users on each intervention.
These studies were used to evaluate hypotheses incorporated into two models of resilience. One model focuses on individual users and one on the ecosystem as a whole. The models were then used to discuss current law enforcement approaches to combating DNMs and how they might be improved.
In the first study of this thesis, several methodologies for data preparation and validation within the study of DNMs were developed. In particular, this work presents a new technique for validating a publicly available dataset that has been used in multiple studies in this field. This is the first attempt to formally validate the dataset and determine what can reasonably be used for research. The discussion of the dataset has implications for research already using the dataset and for future research on datasets collected using the same methodology.
In order to conduct the second study in this thesis, a dataset was acquired from a law enforcement agency. This dataset gives new insight into how buyers behave on DNMs. Buyers are an unstudied group because their activities are often hidden, so analysis of this dataset reveals new insights into the behaviour of these users. The results of this study have been used to comment on existing work using less complete datasets and to contribute new findings.
The third study in this thesis presents a qualitative analysis of two law enforcement interventions. This is the first work to assess the impact of either intervention and so provides new insights into how they were received by the DNM ecosystem. It uses qualitative techniques which are rare within this discipline and so provides a different perspective, for example by revealing how individuals perceive the harms of law enforcement interventions on DNMs. The value of this work has been recognised through its acceptance at a workshop at the IEEE European Symposium on Security and Privacy, 2019.
Part of this research has been conducted in consultation with a [UK] law enforcement agency which provided data for this research. The results of this research are framed specifically for this agency and other law enforcement groups currently investigating DNMs. Several suggestions are made on how to improve the efficacy of law enforcement interventions on DNMs.
…A response to the criticisms of Dolliver 2015a has been presented in Dolliver 2015b, which attempts to provide further evidence that other studies overestimated the number of listings advertised on Silk Road 2.0 by including the results of a manual inspection of the site (Dolliver 2015b). The response also calls into question the use of the Branwen dataset, which was collected by an independent researcher and has not been peer-reviewed. Dolliver 2015b claims that the “manually crawling approach” adopted by Van Buskirk et al 2015 is also problematic, as it will miss listings that are uploaded and removed during the time it takes to crawl the site. Finally, other, unpublished datasets cited in Dolliver 2015b also point to Silk Road 2.0 being especially volatile in nature before it was closed down and show that the number of listings varied by thousands from week to week. This volatility could potentially explain the contradictory depictions of Silk Road 2.0 given by Dolliver 2015a and Munksgaard et al 2016 and allow for both studies to have accurately described the site. However, empirical evidence in the form of police reports describing the size of Silk Road 2.0 after its closure shows that the data collected by Dolliver 2015a is an underestimate. Indeed, new data presented in this body of work also demonstrates that Silk Road 2.0 was bigger than Dolliver 2015a claims, even at the beginning of its lifetime.
We discuss the potential role of universal basic incomes (UBIs) in advanced countries. A feature of advanced economies that distinguishes them from developing countries is the existence of well-developed, if often incomplete, safety nets. We develop a framework for describing transfer programs that is flexible enough to encompass most existing programs as well as UBIs, and we use this framework to compare various UBIs to the existing constellation of programs in the United States. A UBI would direct much larger shares of transfers to childless, nonelderly, nondisabled households than existing programs, and much more to middle-income rather than poor households. A UBI large enough to increase transfers to low-income families would be enormously expensive. We review the labor supply literature for evidence on the likely impacts of a UBI. We argue that the ongoing UBI pilot studies will do little to resolve the major outstanding questions.
[Keywords: safety net, income transfer, universal basic income, labor supply]
In those early days, the company, just like almost everybody else in Washington, primarily produced Red Delicious apples, plus a few Goldens and Grannies—familiar workhorse varieties that anybody was allowed to grow. Back then, the state apple commission advertised its wares with a poster of a stoplight: one apple each in red, green, and yellow. Today, across more than 4,000 acres of McDougall apple trees, you won’t find a single Red; every year, you’ll also find fewer acres of the apples that McDougall calls “core varieties”, the more modern open-access standards such as Gala and Fuji. Instead, McDougall is betting on what he calls “value-added apples”: Ambrosias, whose rights he licensed from a Canadian company; Envy, Jazz, and Pacific Rose, whose intellectual properties are owned by the New Zealand giant Enzafruit; and a brand-new variety, commercially available for the first time this year and available only to Washington-state growers: the Cosmic Crisp.
…The Cosmic Crisp is debuting in grocery stores after this fall’s harvest, and in the nervous lead-up to the launch, everyone from nursery operators to marketers wanted me to understand the crazy scope of the thing: the scale of the plantings, the speed with which mountains of commercially untested fruit would be arriving on the market, the size of the capital risk. People kept saying things like “unprecedented”, “on steroids”, “off the friggin’ charts”, and “the largest launch of a single produce item in American history.”
McDougall took me to the highest part of his orchard, where we could look down at all its hundreds of very expensively trellised and irrigated acres (he estimated the costs to plant each individual acre at $60,000 to $65,000, plus another $12,000 in operating costs each year), their neat, thin lines of trees like the stitching over so many quilt squares. “If you’re a farmer, you’re a riverboat gambler anyway”, McDougall said. “But Cosmic Crisp—woo!” I thought of the warning of one former fruit-industry journalist that, with so much on the line, the enormous launch would have to go flawlessly: “It’s gotta be like the new iPhone.”
…Though Washington State University owns the WA 38 patent, the breeding program has received funding from the apple industry, so it was agreed, over some objections by people who worried that quality would be diluted, that the variety should be universally and exclusively available to Washington growers. (Growers of Cosmic Crisp pay royalties both on every tree they buy and on every box they sell, money that will fund future breeding projects as well as the shared marketing campaign.) The apple tested so well that WSU, in collaboration with commercial nurseries, began producing apple saplings as fast as possible; the plan was to start with 300,000 trees, but growers requested 4 million, leading to a lottery for divvying up the first available trees. Within three years, the industry had sunk 13 million of them, plus more than half a billion dollars, into the ground. Proprietary Variety Management expects that the number of Cosmic Crisp apples on the market will grow by millions of boxes every year, outpacing Pink Lady and Honeycrisp within about 5 years of its launch.
We study the judge-clerk match, a market plagued by ‘unraveling’. Evidence from a unique dataset on match and production shows that (1) agents on either side have similar preferences over those on the other side, (2) the matching game for judges is close to zero-sum, and (3) this fierce competition among judges explains the unraveling in this market.
We develop a theoretical model investigating how homogeneity of preferences (and competition) affects unraveling in matching markets. We show that a static mechanism, as proposed in many previous reforms, cannot solve the problem of unraveling in a market with a high degree of homogeneity. By contrast, a dynamic mechanism that takes advantage of judges’ repeated participation in the market over time is shown to be promising.
Based on our findings, we propose a new market design for the judge-clerk match.
…A law clerk assists judges with a range of tasks including researching issues, drafting opinions, and making legal determinations. Most law clerks are recent graduates who performed near the top of their class in law school. The positions are highly sought after as they can lead to professional opportunities. Some federal judges receive thousands of applications for a single position and even the least sought-after clerkship will receive over 150 applications. Each judge presently hires 4 clerks for a year, which leads us to a many-to-one matching problem: there are roughly 167 judges (similar to firms), each of whom is hiring 4 law students on a one-year contract from a much larger pool of candidates. The matching can be considered a non-transferable utility problem because each clerk receives a fixed salary.
While the National Federal Judges Law Clerk Hiring Plan recommends when judges may receive applications and when they may contact, interview, and hire clerks, many judges do not follow this schedule and hire law students quite early, in some periods as early as right after the first year of law school. Due to extreme competition, by judges to get the best candidates and by candidates to get the best judges, judges can sometimes require that a candidate answer the question, “Will you accept an offer?” prior to scheduling an interview. It goes without saying that job offers are expected to be accepted on the spot. To defer would be a sign of disrespect that can stigmatize the year-long relationship.
Several failed reforms attempted to regulate the earliest date at which law students could be hired. The market promptly unraveled under each of these prior reforms, in 1983, 1986, 1990, and 2005. While the reforms varied in their specific implementation, they generally had a deadline like “no job offers, tentative or final, shall be made to law clerk applicants before May 1st of the applicant’s second year” or “judges should not consider applications before September 15 of the students’ third year of law school.” These failures have sparked an active theoretical and experimental literature (for example, Avery et al 2001, Avery et al 2007; Fréchette et al 2007). This literature notes that some Circuits (Fifth, Seventh, and Eleventh) “cheated” in the reform years.
This paper examines how employees become simultaneously empowered and alienated by detailed, holistic knowledge of the actual operations of their organization, drawing on an inductive analysis of the experiences of employees working on organizational change teams. As employees build and scrutinize process maps of their organization, they develop a new comprehension of the structure and operation of their organization. What they had perceived as purposively designed, relatively stable, and largely external is revealed to be continuously produced through social interaction. I trace how this altered comprehension of the organization’s functioning and logic changes employees’ orientation to and place within the organization. Their central roles are revealed as less efficacious than imagined and, in fact, as reproducing the organization’s inefficiencies. Alienated from their central operational roles, they voluntarily move to peripheral change roles from which they feel empowered to pursue organization-wide change. The paper offers two contributions. First, it identifies a new means through which central actors may become disembedded, that is, detailed comprehensive knowledge of the logic and operations of the surrounding social system. Second, the paper problematizes established insights about the relationship between social position and challenges to the status quo. Rather than a peripheral social location creating a desire to challenge the status quo, a desire to challenge the status quo may encourage central actors to choose a peripheral social location.
…Some held out hope that one or two people at the top knew of these design and operation issues; however, they were often disabused of this optimism. For example, a manager walked the CEO through the map, presenting him with a view he had never seen before and illustrating for him the lack of design and the disconnect between strategy and operations. The CEO, after being walked through the map, sat down, put his head on the table, and said, “This is even more fucked up than I imagined.” The CEO revealed that not only was the operation of his organization out of his control but that his grasp on it was imaginary.
But as the projects ended and the teams disbanded, a puzzle emerged. Some team members returned, as intended by senior management, to their prior roles and careers in the organization. Some, however, chose to leave these careers entirely, abandoning what had been to that point successful and satisfying work to take on organizational change roles elsewhere. Many took new jobs with responsibility for organizational development, Six Sigma, total quality management (TQM), business process re-engineering (BPR), or lean projects. Others assumed temporary contract roles to manage BPR project teams within their own or other organizations.
…Despite being experienced managers, what they learned was eye-opening. One explained that “it was like the sun rose for the first time….I saw the bigger picture.” They had never seen the pieces—the jobs, technologies, tools, and routines—connected in one place, and they realized that their prior view was narrow and fractured. A team member acknowledged, “I only thought of things in the context of my span of control.”…The maps of the organization generated by the project teams also showed that their organizations often lacked a purposeful, integrated design that was centrally monitored and managed. There may originally have been such a design, but as the organization grew, adapted to changing markets, brought on new leadership, added or subtracted divisions, and so on, this animating vision was lost. The original design had been eroded, patched, and overgrown with alternative plans. A manager explained, “Everything I see around here was developed because of specific issues that popped up, and it was all done ad hoc and added onto each other. It certainly wasn’t engineered.” Another manager described how local, off-the-cuff action had contributed to the problems observed at the organizational level:
“They see problems, and the general approach, the human approach, is to try and fix them….Functions have tried to put band-aids on every issue that comes up. It sounds good, but when they are layered one on top of the other they start to choke the organization. But they don’t see that because they are only seeing their own thing.”
Finally, analyzing a particular work process, another manager explained that she had been “assuming that somebody did this [the process] on purpose. And it wasn’t done on purpose. It was just a series of random events that somehow came together.”
[Profile of Judy Sheindlin, star of the long-running (>23 years) daytime television show Judge Judy where, as an arbitrator, she berates litigants and resolves an endless line of small-claims cases. This article covers her biography as she evolved from an ambitious young Jew in NYC who entered corporate law but left to become a stay-at-home mom and ultimately a reality show star running, after >5,000 episodes, a finely-tuned machine for dragnetting cases from across the country, making a fortune from royalties and renewals: her net worth is anywhere up to $400 million.]
We provide generalizable and robust results on the causal sales effect of TV advertising based on the distribution of advertising elasticities for a large number of products (brands) in many categories. Such generalizable results provide a prior distribution that can improve the advertising decisions made by firms and the analysis and recommendations of antitrust and public policy makers. A single case study cannot provide generalizable results, and hence the marketing literature provides several meta-analyses based on published case studies of advertising effects. However, publication bias results if the research or review process systematically rejects estimates of small, statistically insignificant, or “unexpected” advertising elasticities. Consequently, if there is publication bias, the results of a meta-analysis will not reflect the true population distribution of advertising effects.
To provide generalizable results, we base our analysis on a large number of products and clearly lay out the research protocol used to select the products. We characterize the distribution of all estimates, irrespective of sign, size, or statistical-significance. To ensure generalizability we document the robustness of the estimates. First, we examine the sensitivity of the results to the approach and assumptions made when constructing the data used in estimation from the raw sources. Second, as we aim to provide causal estimates, we document if the estimated effects are sensitive to the identification strategies that we use to claim causality based on observational data. Our results reveal substantially smaller effects of own-advertising compared to the results documented in the extant literature, as well as a sizable percentage of statistically insignificant or negative estimates. If we only select products with statistically-significant and positive estimates, the mean or median of the advertising effect distribution increases by a factor of about five.
The results are robust to various identifying assumptions, and are consistent with both publication bias and bias due to non-robust identification strategies to obtain causal estimates in the literature.
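As a toy illustration of how such selection mechanically inflates meta-analytic averages (all numbers invented for the sketch, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# True advertising elasticities: mostly tiny, as the paper's estimates suggest.
true_effects = rng.normal(loc=0.01, scale=0.05, size=100_000)
# Each "study" measures its elasticity with sampling noise.
se = 0.03
estimates = true_effects + rng.normal(scale=se, size=true_effects.size)

# Publication filter: keep only positive estimates significant at the 5% level.
published = estimates[estimates / se > 1.96]

print(f"mean of all estimates:       {estimates.mean():+.4f}")
print(f"mean of published estimates: {published.mean():+.4f}")
# The published mean is several times the population mean, mirroring the
# paper's point that conditioning on statistically-significant, positive
# estimates sharply inflates the apparent advertising effect.
```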
Does growth training help entrepreneurs to scale up new ventures?
Our field experiment answering this question uses a sample of 181 startup founders from the population of Singapore-based entrepreneurs in 2017.
The treatment consisted of classroom sessions conducted in workshop and lecture formats that provided content on 3 growth-catalyst tools: effective business model design, building effective venture management teams, and leveraging personal networks, all of which help in entrepreneurial resource mobilization. Participants also received individualized business coaching addressing their venture’s issues and challenges in these domains.
Our results show that entrepreneurs who received training in the 3 growth-catalyst tools achieved higher sales and employee growth for their ventures. In addition, entrepreneurs with higher educational attainment, more prior work experience, and higher growth goals benefited much more from the training intervention.
[Keywords: entrepreneur training, founder effects, field experiment]
We study the causes of “nutritional inequality”: why the wealthy eat more healthfully than the poor in the United States. Exploiting supermarket entry and household moves to healthier neighborhoods, we reject that neighborhood environments contribute meaningfully to nutritional inequality. We then estimate a structural model of grocery demand, using a new instrument exploiting the combination of grocery retail chains’ differing presence across geographic markets with their differing comparative advantages across product groups. Counterfactual simulations show that exposing low-income households to the same products and prices available to high-income households reduces nutritional inequality by only about 10%, while the remaining 90% is driven by differences in demand. These findings counter the argument that policies to increase the supply of healthy groceries could play an important role in reducing nutritional inequality.
OpenStreetMap (OSM), the largest Volunteered Geographic Information project in the world, is characterized both by its map as well as the active community of the millions of mappers who produce it. The discourse about participation in the OSM community largely focuses on the motivations for why members contribute map data and the resulting data quality. Recently, large corporations including Apple, Microsoft, and Facebook have been hiring editors to contribute to the OSM database.
In this article, we explore the influence these corporate editors are having on the map by first considering the history of corporate involvement in the community and then analyzing historical quarterly-snapshot OSM-QA-Tiles to show where and what these corporate editors are mapping. Cumulatively, millions of corporate edits have a global footprint, but corporations vary in geographic reach, edit types, and quantity. While corporations currently have a major impact on road networks, non-corporate mappers edit more buildings and points-of-interest, and their edits represent the majority of all edits on average.
Since corporate editing represents the latest stage in the evolution of corporate involvement, we raise questions about how the OSM community—and researchers—might proceed as corporate editing grows and evolves as a mechanism for expanding the map for multiple uses.
[Keywords: OpenStreetMap; corporations; geospatial data; open data; Volunteered Geographic Information]
We assess evidence from randomized controlled trials (RCTs) on long-run economic productivity and living standards in poor countries. We first document that several studies estimate large positive long-run impacts, but that relatively few existing RCTs have been evaluated over the long run. We next present evidence from a systematic survey of existing RCTs, with a focus on cash transfer and child health programs, and show that a meaningful subset can realistically be evaluated for long-run effects. We discuss ways to bridge the gap between the burgeoning number of development RCTs and the limited number that have been followed up to date, including through new panel (longitudinal) data; improved participant tracking methods; alternative research designs; and access to administrative, remote sensing, and cell phone data. We conclude that the rise of development economics RCTs since roughly 2000 provides a novel opportunity to generate high-quality evidence on the long-run drivers of living standards.
The always excellent Stella Zhang directed me to a newish paper by political scientists Lee Jones and Zeng Jinhan on the domestic politics of China’s Belt and Road. Long term readers will remember that I am bearish on Xi’s grand dream. Here is how I described the central problems with the scheme for Foreign Policy:
There is also a gap between how BRI projects are supposed to be chosen and how they actually have been selected. Xi and other party leaders have characterized BRI investment in Eurasia as following along defined “economic corridors” that would directly connect China to markets and peoples in other parts of the continent. By these means the party hopes to channel capital into areas where it will have the largest long-term benefit and will make cumulative infrastructure improvements possible.
This has not happened: one analysis of 173 BRI projects concluded that with the exception of the China-Pakistan Economic Corridor (CPEC) “there appears to be no statistically-significant relationship between corridor participation and project activity…[suggesting that] interest groups within and outside China are skewing President Xi’s signature foreign policy vision.”
This skew is an inevitable result of China’s internal political system. BRI projects are not centrally directed. Instead, lower state bodies like provincial and regional governments have been tasked with developing their own BRI projects. The officials in charge of these projects have no incentive to approve financially sound investments: by the time any given project materializes, they will have been transferred elsewhere. BRI projects are shaped first and foremost by the political incentives their planners face in China: There is no better way to signal one’s loyalty to Xi than by laboring for his favored foreign-policy initiative. From this perspective, the most important criterion for a project is how easily the BRI label can be slapped on to it…
The problems China has had with the BRI stem from contradictions inherent in the ends party leaders envision for the initiative and the means they have supplied to reach them. BRI projects are chosen through a decentralized project-management system and then funded through concessional loans offered primarily by PRC policy banks. This is a recipe for cost escalation and corruption. In countries like Cambodia, a one-party state ruled by autocrats, this state of affairs is viable, for there is little chance that leaders will be held accountable for lining their pockets (or, more rarely, the coffers of their local communities) at the entire nation’s expense. But most BRI countries are not Cambodia. In democracies this way of doing things is simply not sustainable, and in most BRI countries it is only so long before an angry opposition eager to pin their opponents with malfeasance comes to power, armed with the evidence of misplaced or exploitative projects.
The key points to take away from my account are that the failures of the BRI trace back to a few central causes: first, project selection is mostly driven by the priorities of folks working in SOEs, provincial governments, and a plethora of different policy banks, and the central government in Beijing has difficulty directing their efforts. Second, these people do not have a good understanding of the countries in which they are investing, and face little incentive to gain this understanding. This leads to the sort of corruption and ‘predatory’ funding that has given BRI its poisonous reputation in countries long exposed to it.
Jones and Zeng agree with this general picture, but provide a far more detailed account of what is happening ‘behind the scenes’ when BRI projects are chosen and funded. The process they describe is not unique to the Belt and Road. It starts as Communist high leadership paints bold words in the sky:
Foreign-policy steering happens through several important mechanisms. The first is top leaders’ major speeches, which are usually kept vague to accommodate diverse interests and agendas. Rather than ‘carefully-worked out grand strategies’, they are typically ‘platitudes, slogans, catchphrases, and generalities’, offering ‘atmospheric guidance’ that others must then interpret and implement. Examples include: Deng’s tao guang yang hui, whose meaning is ‘debatable’; Hu’s ‘harmonious world’—‘more of a narrative than a grand strategy’; and Xi’s ‘new type of great power relations.’ As discussed below, Xi’s vague 2013 remarks on the ‘silk road economic belt’ (SREB) and ‘maritime silk road’ (MSR) exemplify this tendency.
But bold words are not policy. The Party often has difficulty transforming grand visions into detailed policy proposals. This is sometimes quite intentional—in a closed system like the People’s Republic, it may be better to have politicos arguing over how to make the Core’s vision possible, instead of whether the Core’s vision is worth making possible in the first place.
We live in an age of paradox. Systems using artificial intelligence match or surpass human-level performance in more and more domains, leveraging rapid advances in other technologies and driving soaring stock prices. Yet measured productivity growth has declined by half over the past decade, and real income has stagnated since the late 1990s for a majority of Americans. We describe four potential explanations for this clash of expectations and statistics: false hopes, mismeasurement, redistribution and implementation lags. While a case can be made for each explanation, we argue that lags have likely been the biggest contributor to the paradox. The most impressive capabilities of AI, particularly those based on machine learning, have not yet diffused widely. More importantly, like other general purpose technologies, their full effects won’t be realized until waves of complementary innovations are developed and implemented. The adjustment costs, organizational changes, and new skills needed for successful AI can be modeled as a kind of intangible capital. A portion of the value of this intangible capital is already reflected in the market value of firms. However, going forward, national statistics could fail to measure the full benefits of the new technologies and some may even have the wrong sign.
…The discussion around the recent patterns in aggregate productivity growth highlights a seeming contradiction. On the one hand, there are astonishing examples of potentially transformative new technologies that could greatly increase productivity and economic welfare (see Brynjolfsson and McAfee 2014 [The Second Machine Age]). There are some early concrete signs of these technologies’ promise, recent leaps in artificial intelligence (AI) performance being the most prominent example. At the same time, however, measured productivity growth over the past decade has slowed substantially. This deceleration is large, cutting productivity growth by half or more relative to the decade preceding the slowdown. It is also widespread, having occurred throughout the Organisation for Economic Co-operation and Development (OECD) and, more recently, among many large emerging economies as well (Syverson 2017).
We thus appear to be facing a redux of the Solow (1987) paradox: we see transformative new technologies everywhere but in the productivity statistics.
In this chapter, we review the evidence and explanations for the modern productivity paradox and propose a resolution. Namely, there is no inherent inconsistency between forward-looking technological optimism and backward-looking disappointment. Both can simultaneously exist. Indeed, there are good conceptual reasons to expect them to simultaneously exist when the economy undergoes the kind of restructuring associated with transformative technologies. In essence, the forecasters of future company wealth and the measurers of historical economic performance show the greatest disagreement during times of technological change. In this chapter, we argue and present some evidence that the economy is in such a period now.
The Cultural Revolution was one of the greatest disasters in human history, the result of a self-reinforcing cycle of ideology failing to match reality and unsolved social problems, and the deranged reaction of zealots triggering defection and civil warfare.
Dikötter’s history of the Cultural Revolution (The Cultural Revolution: A People’s History, 1962–1976, Frank Dikötter, 2016; ★★★★) offers a broad overview of the multiple failures and follies of Maoism, which culminated in some of the most destructive and disastrous events in human history: the Cultural Revolution, the Great Leap Forward/Great Famine, and the Third Front.
The Cultural Revolution was not prompted by any extraordinary famine, or invasion, or genuine threat of invasion, or civil war, or disaster of any kind. How then could it have happened? The Cultural Revolution was sponsored by Mao as a way to purge the middle and upper ranks of the Communist Party of doubters, who might do to him what the Soviets had just done to Stalin: tear down his cult by revealing his monstrous crimes to the world. But Mao didn’t realize the forces he unleashed. Maoism had benefited from taking credit for post-WWII recovery and the defeat of Japan, but the more its policies were implemented and it tightened its grip, the greater the gap between its utopian promises and the grim impoverished Chinese reality became. Because its theories were radically and systematically wrong, any honest attempt to implement them was doomed to fail, and anyone pragmatic would necessarily betray the system. Old systems and ‘inequities’ reasserted themselves, to the frustration of true believers.
The only ideologically-permissible explanations were excuses like saboteurs and spies and corrupt officials. Usually kept in check, when given Mao’s imprimatur and active egging on, mass social resentment and ideological frustration boiled over, leading to a frenzy of virtue-signaling, denunciations, preference falsification spirals, murders, cannibalism, and eventually outright civil war and pandemic. Finally, Mao decided enough purging had happened and his position was secure, and brought it to an end. As strange and awful as it was, the Cultural Revolution offers food for thought on how politics can go viciously wrong, and dangerous aspects of human psychology.
Information about a person’s income can be useful in several business-related contexts, such as personalized advertising or salary negotiations. However, many people consider this information private and are reluctant to share it. In this paper, we show that income is predictable from the digital footprints people leave on Facebook. Applying an established machine learning method to an income-representative sample of 2,623 U.S. Americans, we found that (1) Facebook Likes and Status Updates alone predicted a person’s income with an accuracy of up to r = 0.43, and (2) Facebook Likes and Status Updates added incremental predictive power above and beyond a range of socio-demographic variables (ΔR2 = 6–16%, with a correlation of up to r = 0.49). Our findings highlight both opportunities for businesses and legitimate privacy concerns that such prediction models pose to individuals and society when applied without individual consent.
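The paper’s exact pipeline isn’t reproduced here; a minimal stand-in with the same shape (binary Like features, penalized linear regression, accuracy reported as Pearson’s r on held-out folds) might look like this, on synthetic data:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

# Toy stand-in: X is a binary user-by-Like matrix, y is (synthetic) log income.
rng = np.random.default_rng(1)
n_users, n_likes = 2623, 2000
X = rng.binomial(1, 0.02, size=(n_users, n_likes)).astype(float)
beta = rng.normal(scale=0.05, size=n_likes)       # hypothetical Like effects
y = X @ beta + rng.normal(scale=1.0, size=n_users)

# Cross-validated ridge regression; accuracy reported as Pearson's r,
# the metric quoted in the abstract.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
preds = cross_val_predict(model, X, y, cv=10)
r = np.corrcoef(preds, y)[0, 1]
print(f"out-of-sample r = {r:.2f}")
```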
This study examines the use of “algorithms in everyday labor” to explore the labor conditions of three Chinese food delivery platforms: Baidu Deliveries, Eleme, and Meituan. In particular, it examines how delivery workers make sense of these algorithms through the parameters of temporality, affect, and gamification. The study also demonstrates that in working for food delivery platforms, couriers are not simply passive entities that are subjected to a digital “panopticon.” Instead, they create their own “organic algorithms” to manage and, in some cases, even subvert the system. The results of the approach used in this study demonstrate that digital labor has become both more accessible and more precarious in contemporary China. Based on these results, the notion of “algorithmic making and remaking” is suggested as a topic in future research on technology and digital labor.
Cities are epicenters for invention. Scaling analyses have verified the productivity of cities and demonstrate a superlinear relationship between cities’ population size and invention performance. However, little is known about what kinds of inventions correlate with city size. Is the productivity of cities only limited to invention quantity?
I shift the focus to the quality of idea creation by investigating how cities influence the art of knowledge combinations. Atypical combinations introduce novel and unexpected linkages between knowledge domains. They express creativity in inventions and are particularly important for technological breakthroughs. My study of 174 years of invention history in metropolitan areas in the US reveals a superlinear scaling of atypical combinations with population size. The observed scaling grows over time, indicating a geographic shift toward cities since the early 20th century.
The productivity of large cities is thus not only restricted to quantity but also includes quality in invention processes.
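The scaling claim has the standard urban-scaling form (conventional notation, not necessarily the paper’s):

```latex
Y(N) \;=\; Y_0 \, N^{\beta}, \qquad \beta > 1,
```

where $Y$ is the count of atypical combinations and $N$ the metropolitan population; superlinearity ($\beta > 1$) means that doubling a city’s population more than doubles its atypical combinations, and the paper finds this exponent growing over time.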
…I attribute the growing importance to the opportunities offered in large cities. In particular, knowledge diversity in large cities provides opportunities for knowledge combinations not found in smaller and less diverse towns. Beyond diversity, larger cities also concentrate the skills needed to exploit that diversity. Inventors in large cities realize a disproportionate number of distinct knowledge combinations, which also affects the exploration of new combinations. Given the cumulative nature of knowledge, wealth, innovation, and human skill, my results suggest a self-reinforcing process that favors metropolitan centers for knowledge creation. Thus, knowledge creation plays a major role in creating and maintaining spatial inequalities.
Increasing spatial inequalities have profound implications for regional development and policy making. Inequalities unfold in the form of invention activities, one crucial economic activity that transforms our economy and society. The benefits of knowledge creation in large cities are not shared by all regions, reinforcing a widening divergence between large cities—as centers of knowledge exploration—and smaller towns. Given the importance of geography for knowledge generation, it is unlikely that the spatial concentration of invention activities will stop. Earlier research, moreover, observes a decreasing productivity of R&D and highlights that more resources and capabilities are necessary to yield useful R&D outcomes (Lanjouw and Schankerman 2004; Wuchty, Jones, and Uzzi 2007; Jones, Wuchty, and Uzzi 2008). Large cities provide the required resources and capabilities in close geographic proximity. Smaller towns lack the requirements to compete, get disconnected, and fall behind. It should be, furthermore, in the interest of policy makers that all places benefit from urban externalities. That is, policy has to consider how to distribute the novelty created in the centers down the urban hierarchy to smaller towns and lagging regions.
However, much research remains to be done. Why did it take longer for atypical combinations to scale that strongly with city size? Has this process stopped, or will it continue? Moreover, atypical knowledge combinations do not automatically imply a high technological impact or economic value. Thus, it remains unclear precisely how (a)typical combinations relate to the economic performance of cities and how they explain local stories of success and failure.
To show how fast Internet affects employment in Africa, we exploit the gradual arrival of submarine Internet cables on the coast and maps of the terrestrial cable network. Robust difference-in-differences estimates from 3 datasets, covering 12 countries, show large positive effects on employment rates—also for less-educated worker groups—with little or no job displacement across space. The sample-wide impact is driven by increased employment in higher-skill occupations, though less-educated workers’ employment gains are smaller. Firm-level data available for some countries indicate that increased firm entry, productivity, and exporting contribute to higher net job creation. Average incomes rise.
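A minimal sketch of the difference-in-differences specification implied here (notation mine; the paper’s controls and fixed effects may differ):

```latex
\text{Employed}_{ilt} \;=\; \beta \,\big(\text{OnNetwork}_l \times \text{CableArrived}_t\big) + \gamma_l + \delta_t + \varepsilon_{ilt},
```

where $\text{OnNetwork}_l$ indicates that location $l$ is connected to the terrestrial backbone, $\text{CableArrived}_t$ that a submarine cable has landed in the country by time $t$, and $\gamma_l$, $\delta_t$ are location and time fixed effects; $\beta$ is the employment effect of fast Internet.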
This article examines the extent to which Victorian investors were short-sale constrained. While previous research suggests that there were relatively few limits on arbitrage, this article argues that short-sales of stocks outside the Official List were indirectly constrained by the risk of being cornered. Evidence for this hypothesis comes from three corners in cycle company shares [during the 1890s bicycle mania] which occurred in 1896–1897, two of which resulted in substantial losses for short-sellers. Legal efforts to retrieve funds lost in a corner were unsuccessful, and the court proceedings reveal a widespread contempt for short-sellers, or ‘bears’, among the general public. Consistent with the hypothesis that these episodes affected the market, this study’s findings show that cycle companies for which cornering risk was greater experienced disproportionately lower returns during a subsequent crash in the market for cycle shares. This evidence suggests that, under certain circumstances, short-selling shares in Britain prior to 1900 could have been much riskier than previously thought.
…Cycle share prices are found to have risen by over 200% in the early months of 1896, and remained at a relatively high level until March 1897. This boom was accompanied by the promotion of many new cycle firms, with 363 established in 1896 and another 238 during the first half of 1897. This was followed by a crash, with cycle shares losing 76% of their peak value by the end of 1898. The financial press appears to have been aware that a crash was imminent, repeatedly advising investors to sell cycle shares during the first half of 1897. Interestingly, however, these articles never explicitly recommended short-selling cycle shares…Between 1890 and 1896, a succession of major technological innovations substantially increased the demand for British bicycles. Bicycle production increased in response, with the number of British cycle companies in existence quadrupling between 1889 and 1897. Cycle firms, most of which were based in and around Birmingham, took advantage of the boom of 1896 by going public, resulting in the successful promotion of £17.3 million worth of cycle firms in 1896 and a further £7.4 million in 1897. By 1897 there was an oversupply problem in the trade, which was worsened by an exponential increase in the number of bicycles imported from the US. The bicycle industry entered recession, and the number of Birmingham-based cycle firms fell by 54% between 1896 and 1900.
…The total paid for the 200 shares [by the short-trader Hamlyn] was £2,550, against the £231.25 at which he had contracted to deliver them, for a loss of £2,318.75. To put this loss in context, Hamlyn’s barrister noted that, had he succeeded in obtaining the shares at allotment, the profit would have been only £26.
We analysed a large health insurance dataset to assess the genetic and environmental contributions to 560 disease-related phenotypes in 56,396 twin pairs and 724,513 sibling pairs out of 44,859,462 individuals living in the United States. We estimated the contribution of environmental risk factors (socioeconomic status (SES), air pollution, and climate) to each phenotype. Mean heritability (h2 = 0.311) and shared environmental variance (c2 = 0.088) were higher than the variance attributed to specific environmental factors such as zip-code-level SES (varSES = 0.002), daily air quality (varAQI = 0.0004), and average temperature (vartemp = 0.001), both overall and for individual phenotypes. We found statistically-significant heritability and shared environment for a number of comorbidities (h2 = 0.433, c2 = 0.241) and for average monthly cost (h2 = 0.290, c2 = 0.302). All results are available using our Claims Analysis of Twin Correlation and Heritability (CaTCH) web application.
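For intuition, the classical twin-based decomposition (Falconer’s formulas; the paper’s estimator may be a fuller ACE model over twins and siblings, so this is only a sketch) recovers these quantities from MZ and DZ twin correlations:

```latex
h^2 = 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 = 2\,r_{DZ} - r_{MZ}, \qquad
e^2 = 1 - r_{MZ}.
```

Under this reading, the reported means h2 = 0.311 and c2 = 0.088 would correspond to twin correlations of roughly $r_{MZ} \approx 0.40$ and $r_{DZ} \approx 0.24$.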
Digital platform-based marketplaces often have a wide variety of amateurs working alongside professional enterprises and entrepreneurs. Can a platform owner alter the number and mix of market participants?
I develop a theoretical framework to show that amateurs emerge as a distinct type of market participant, subject to different market selection conditions, and differing from professionals in quality, willingness to persist on the platform, and mix of motivations. I clarify how targeted combinations of tweaks to platform design can cause the “bottom to fall out” of a market, opening it to large numbers of amateurs.
In data on mobile app developers, I find that shifts in minimum development costs and non-pecuniary motivations are associated with discontinuous changes in the numbers and types of developers, precisely as predicted by the theory. The resulting flood of low-quality amateurs is, in this context, associated with equally substantial increases in the number of high-quality products.
[Keywords: amateurs, industrial organization, labor, digitization, long-tail, platforms and marketplaces, complementors, entry and exit, selection and retention, entrepreneurship, minimum viable products, non-pecuniary motivations]
A growing body of empirical work shows that social recognition of individuals’ behavior can meaningfully influence individuals’ choices. This paper studies whether social recognition is a socially-efficient lever for influencing individuals’ choices. Because social recognition generates utility from esteem to some but disutility from shame to others, it can be positive-sum, zero-sum, or negative-sum, depending on whether the social recognition utility function is convex, linear, or concave, respectively.
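The logic is Jensen’s inequality. Writing $u$ for the social-recognition utility function and $s_i$ for the esteem-or-shame signal accruing to individual $i$ (notation mine, not the paper’s), the total recognition payoff is

```latex
W \;=\; \sum_i u(s_i).
```

Publicizing behavior roughly spreads the $s_i$ around their mean: a mean-preserving spread raises $W$ when $u$ is convex, leaves it unchanged when $u$ is linear, and lowers it when $u$ is concave, which is the deadweight-loss case the authors estimate below.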
We find that social recognition increases YMCA attendance by 17–23% over a one-month period in our experiment, and our estimated structural models predict that it would increase attendance by 19–23% if it were applied to the whole YMCA of the Triangle Area population. However, we find that the social recognition utility function is substantially concave and thus generates deadweight loss.
If our social recognition intervention were applied to the whole YMCA of the Triangle Area population, we estimate that it would generate deadweight loss of $1.23–$2.15 per dollar of behaviorally-equivalent financial incentives.