Patreon creators must decide whether to make their earnings visible to the public.
We find evidence [using Graphtreon] that removing earnings visibility increases subscribers [although total earnings are then invisible].
The provision of social information does not lead to an increase in subscribers.
In January 2017, the subscription-based crowdfunding platform Patreon gave its users (creators) the ability to hide their earnings from existing and potential subscribers. Prior to this, all monthly earnings were visible.
We investigate what effect this policy change had on creators’ subscriber numbers over the following 6 months. Using double-robust and endogenous treatment estimation techniques, we find evidence that creators who removed the visibility of their earnings had more subscribers as a result. This suggests that the provision of social information does not lead to an increase in subscribers.
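As a rough illustration of the kind of doubly-robust estimator the abstract invokes, here is a minimal sketch of an augmented inverse-probability-weighting (AIPW) estimate of the effect of hiding earnings on later subscriber counts; the file and column names (`graphtreon_creators.csv`, `hid_earnings`, `subscribers_6mo`, the covariates) are hypothetical placeholders, not the authors' actual data or specification.

```python
# Minimal AIPW (doubly-robust) sketch, assuming a hypothetical creator-level extract.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression

df = pd.read_csv("graphtreon_creators.csv")               # hypothetical file
X = df[["subs_before", "earnings_before", "category_code"]].to_numpy()
t = df["hid_earnings"].to_numpy()                         # 1 = hid earnings in Jan 2017
y = df["subscribers_6mo"].to_numpy()                      # subscribers 6 months later

# Propensity model: probability of hiding earnings given pre-treatment covariates.
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
ps = np.clip(ps, 0.01, 0.99)                              # trim extreme weights

# Outcome models fit separately on treated and control creators.
mu1 = LinearRegression().fit(X[t == 1], y[t == 1]).predict(X)
mu0 = LinearRegression().fit(X[t == 0], y[t == 0]).predict(X)

# AIPW estimate of the average treatment effect: consistent if either the
# propensity model or the outcome models are correctly specified.
ate = np.mean(t * (y - mu1) / ps + mu1) - np.mean((1 - t) * (y - mu0) / (1 - ps) + mu0)
print(f"Doubly-robust ATE on subscribers: {ate:.1f}")
```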
Using data from the first Census data set that includes complete measures of male biological fertility for a large-scale probability sample of the U.S. population (the 2014 wave of the Survey of Income and Program Participation; N = 55,281), this study shows that:
high income men are more likely to marry, are less likely to divorce, if divorced are more likely to remarry, and are less likely to be childless than low income men. Men who remarry marry relatively younger women than other men, on average, although this does not vary by personal income. For divorced men with children, high income is not associated with an increased probability of having children with new partners. Income is not associated with the probability of marriage for women and is positively associated with the probability of divorce.
High income women are less likely to remarry after divorce and more likely to be childless than low income women. For divorced women with children, high income is associated with a lower chance of having children with new partners, although the relationship is curvilinear.
These results are behavioral evidence that women are more likely than men to prioritize earning capabilities in a long-term mate and suggest that high income men have high value as long-term mates in the U.S.
[Keywords: evolutionary psychology, fertility, marriage, childlessness, divorce, sex differences]
We assess the impact of exogenous variation in oral contraceptive prices—a year-long decline followed by a sharp increase due to a documented collusion case—on fertility decisions and newborns’ outcomes. Our empirical strategy follows an interrupted time-series design, implemented using multiple sources of administrative information. As prices skyrocketed (45% within a few weeks), consumption of the Pill plunged and weekly conceptions increased (3.2% after a few months).
We show large effects on the number of children born to unmarried mothers, to mothers in their early twenties, and to primiparae women. The incidence of low birth weight and fetal/infant deaths increased (declined) as the cost of birth control pills rose (fell). In addition, we document a disproportional increase in the weekly miscarriage and stillbirth rates. As children reached school age, we find lower school enrollment rates and higher participation in special education programs.
Our evidence suggests these “extra” conceptions were more likely to face adverse conditions during critical periods of development.
…This paper quantifies the Pill’s role in fertility and child outcomes using a sequence of events in which unexpected shocks affected access to oral contraceptives. In particular, we exploit a well-established case of anticompetitive behavior in the pharmaceutical market, which—after a year-long price war between the 3 largest pharmaceutical retailers in Chile—triggered a sharp and unexpected increase in the prices of birth control pills.
The price war took place during 2007, and it effectively reduced the prices of medicines across the board. In particular, prices of oral contraceptives fell by 24% during that year. By the end of 2007, the 3 largest pharmacies agreed to end the price war and engaged in a collusion scheme in which they strategically increased the prices of 222 medicines. Oral contraceptives were included in this group, experiencing price increases ranging from 30 to 100% in just a few weeks (45% on average in the first 3 weeks). We use daily information on prices and quantities sold in the country by the 3 companies from almost 40 million transactions to determine the date when the price changes for birth control pills took place. Using these data, we implement an interrupted time-series analysis (Bloom, 2003; Cauley & Iksoon 1988), which takes into account the seasonality of births, the general trends of fertility, as well as the dynamics that arise because it takes time for the menstrual cycle to become fully regulated after discontinuing the Pill. We complement the pharmacies’ transaction data with administrative information from birth and death certificates collected between 2005 and 2008 and administrative records on school enrollment from 2013 to 2016. Our empirical strategy considers 2 different treatments: one stemming from a sustained and steady decline in prices (2007) and another from a massive and sudden increase (first weeks of 2008).
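A minimal sketch of the interrupted time-series regression described here, assuming a hypothetical weekly series of conceptions; the file, variable names, break date, and lag choices are illustrative only and omit the post-discontinuation cycle dynamics the authors also model.

```python
# Interrupted time-series sketch: trend, post-collusion level shift and slope change,
# and week-of-year dummies for the seasonality of conceptions.
import pandas as pd
import statsmodels.formula.api as smf

weekly = pd.read_csv("weekly_conceptions.csv", parse_dates=["week"])   # hypothetical
weekly["t"] = range(len(weekly))                                       # linear trend
weekly["post"] = (weekly["week"] >= "2008-01-01").astype(int)          # price jump
weekly["t_post"] = weekly["t"] * weekly["post"]                        # slope change
weekly["woy"] = weekly["week"].dt.isocalendar().week.astype(int)       # seasonality

its = smf.ols("conceptions ~ t + post + t_post + C(woy)", data=weekly).fit(
    cov_type="HAC", cov_kwds={"maxlags": 8})   # serial-correlation-robust SEs
print(its.params[["post", "t_post"]])
```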
The assurance contract mechanism is often used to crowdfund public goods. This mechanism has weak implementation properties that can lead to mis-coordination and failure to produce socially valuable projects. To encourage early contributions, we extend the assurance contract mechanism with refund bonuses paid only to early contributors in the event of fundraising failure.
The experimental results show that our proposed solution is very effective in inducing early cooperation and increasing fundraising success. Limiting refund bonuses to early contributors works as well as offering refund bonuses to all potential contributors, while also reducing the amount of bonuses paid. We find that refund bonuses can increase the rate of campaign success by 50% or more. Moreover, we find that even taking into account campaign failures, refund bonuses can be financially self-sustainable, suggesting the real-world value of extending assurance contracts with refund bonuses.
[Keywords: public goods, donations, assurance contract, free riding, conditional cooperation, early contributions, refund bonuses, experiment, laboratory]
A large literature establishes that cognitive and non-cognitive skills are strongly correlated with educational attainment and professional achievement. Isolating the causal effects of these traits on career outcomes is complicated by reverse causality and selection issues.
We suggest a new approach: using within-family differences in the genetic tendency to exhibit the relevant traits as a source of exogenous variation. Genes are fixed over the life cycle and genetic differences between full siblings are random, making it possible to establish the causal effects of within-family variation in genetic tendencies.
We link genetic data from individuals in the Swedish Twin Registry to government registry data and find evidence for causal effects of the genetic predispositions towards cognitive skills, personality traits, and economic preferences on professional achievement and educational attainment. Our results also demonstrate that education and labor market outcomes are partially the result of a genetic lottery.
…We find strong evidence for a causal effect of the predisposition toward stronger cognitive skills on income, occupational status, and educational outcomes. We also find evidence for statistically-significant effects of the predispositions toward several non-cognitive traits: individuals who tend to be more risk seeking, mentally stable, and open tend to work in more prestigious occupations. The opposite is true for individuals with a tendency towards narcissism or discounting the future. A tendency towards being open and forward-looking also increases educational attainment (EA). Finally, we document large causal effects of the general genetic tendency towards higher EA on all the outcomes we study. This illustrates that success in education and professional careers is in part down to “genetic luck”. We also investigate heterogeneity in these effects by gender and socioeconomic status (SES) of the parents. We find some evidence of a stronger effect of the predisposition toward cognitive skills for high-SES individuals, in particular on educational outcomes. We also find that the effects of the genetic tendencies on income tend to be stronger for women, implying that gender differences in labor market outcomes are generally larger for less skilled individuals. The exception is the link between genetic tendencies and management positions: our results suggest that cognitive and non-cognitive skills strongly increase the likelihood for men to work in a management position but that effects are much weaker for women.
…The polygenic indices we use stem from the work of the Social Science Genetic Association Consortium (SSGAC) (Becker et al 2021).
…2.4 Sample: For the full-sample analyses looking at educational outcomes, we will limit the dataset to genotyped individuals born between 1934 and 1995 (that is, individuals who have likely completed their education) whom we can link to their parents’ records for the construction of the socioeconomic controls. This subsample contains 29,393 individuals. For the analyses looking at labor market outcomes, we will limit the dataset to individuals born between 1934 and 1990 (that is, individuals who have likely completed their education and worked for a few years). This subsample contains 25,515 individuals. For our causal analyses using within-family variation, we will limit the sample to complete sets of genotyped dizygotic twins. This sample contains 11,344 individuals (5,672 twin pairs) for the education analyses and 9,594 individuals (4,797 twin pairs) for the income analyses.
…The scaled estimates in Figure 2 show that the magnitudes of the effects are economically meaningful. A one-standard-deviation difference in the cognitive performance PGI is associated with an increase of roughly 10 percentage points in the likelihood of having graduated from university. The effect of math skills is roughly 5 percentage points. These 2 effects are estimated simultaneously, meaning that an individual with one-standard-deviation higher cognitive performance and math skills is around 15 percentage points more likely to graduate from university. The effects of the statistically-significant non-cognitive traits (openness, narcissism, and time discounting as proxied by smoking) are similarly large. Finally, a one-standard-deviation increase in the educational attainment PGI is associated with 0.4 to 0.6 additional years of education.
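A minimal sketch of the within-family design, assuming a hypothetical registry extract with one row per genotyped dizygotic twin; the PGI and outcome column names are placeholders rather than the study's actual variables.

```python
# Twin-pair fixed effects: only within-pair (random) genetic differences identify
# the PGI coefficients; everything shared by a pair (parental SES, schools, birth
# year) is absorbed by the pair dummies.
import pandas as pd
import statsmodels.formula.api as smf

twins = pd.read_csv("dz_twin_pairs.csv")   # hypothetical: one row per DZ twin

fe = smf.ols(
    "years_edu ~ pgi_cognitive + pgi_ea + pgi_openness + sex + C(pair_id)",
    data=twins,
).fit(cov_type="cluster", cov_kwds={"groups": twins["pair_id"]})

print(fe.params[["pgi_cognitive", "pgi_ea", "pgi_openness"]])
```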
[Given the large sample size, it’d be better to skip the PGSes—which still capture so little of the genetics—and use sibling IBD or RDR to establish estimates of total causal effects.]
“On the Opportunities and Risks of Foundation Models”, Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang (2021-08-16; ai / scaling, biology; backlinks):
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL·E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations).
Though foundation models are based on conventional deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
The history of AI is one of increasing emergence and homogenization. With the introduction of machine learning, we moved from a large proliferation of specialized algorithms that specified how to compute answers to a small number of general algorithms that learned how to compute answers (i.e. the algorithm for computing answers emerged from the learning algorithm). With the introduction of deep learning, we moved from a large proliferation of hand-engineered features for learning algorithms to a small number of architectures that could be pointed at a new domain and discover good features for that domain. Recently, the trend has continued: we have moved from a large proliferation of trained models for different tasks to a few large “foundation models” which learn general algorithms useful for solving specific tasks. BERT and GPT-3 are central examples of foundation models in language; many NLP tasks that previously required different models are now solved using finetuned or prompted versions of BERT and/or GPT-3.
Note that, while language is the main example of a domain with foundation models today, we should expect foundation models to be developed in an increasing number of domains over time. The authors call these “foundation” models to emphasize that (1) they form a fundamental building block for applications and (2) they are not themselves ready for deployment; they are simply a foundation on which applications can be built. Foundation models have been enabled only recently because they depend on having large scale in order to make use of large unlabeled datasets using self-supervised learning to enable effective transfer to new tasks. It is particularly challenging to understand and predict the capabilities exhibited by foundation models because their multitask nature emerges from the large-scale training rather than being designed in from the start, making the capabilities hard to anticipate. This is particularly unsettling because foundation models also lead to substantially increased homogenization, where everyone is using the same few models, and so any new emergent capability (or risk) is quickly distributed to everyone.
The authors argue that academia is uniquely suited to study and understand the risks of foundation models. Foundation models are going to interact with society, both in terms of the data used to create them and the effects on people who use applications built upon them. Thus, analysis of them will need to be interdisciplinary; this is best achieved in academia due to the concentration of people working in the various relevant areas. In addition, market-driven incentives need not align well with societal benefit, whereas the research mission of universities is the production and dissemination of knowledge and creation of global public goods, allowing academia to study directions that would have large societal benefit that might not be prioritized by industry.
All of this is just a summary of parts of the introduction to the report. The full report is over 150 pages and goes into detail on capabilities, applications, technologies (including technical risks), and societal implications. I’m not going to summarize it here, because it is long and a lot of it isn’t that relevant to alignment; I’ll instead note down particular points that I found interesting.
(pg. 26) Some studies have suggested that foundation models in language don’t learn linguistic constructions robustly; even if a model uses a construction well once, it may not do so again, especially under distribution shift. In contrast, humans can easily “slot in” new knowledge into existing linguistic constructions.
(pg. 34) This isn’t surprising but is worth repeating: many of the capabilities highlighted in the robotics section are very similar to the ones that we focus on in alignment (task specification, robustness, safety, sample efficiency).
(pg. 42) For tasks involving reasoning (e.g. mathematical proofs, program synthesis, drug discovery, computer-aided design), neural nets can be used to guide a search through a large space of possibilities. Foundation models could be helpful because (1) since they are very good at generating sequences, you can encode arbitrary actions (e.g. in theorem proving, they can use arbitrary instructions in the proof assistant language rather than being restricted to an existing database of theorems), (2) the heuristics for effective search learned in one domain could transfer well to other domains where data is scarce, and (3) they could accept multimodal input: for example, in theorem proving for geometry, a multimodal foundation model could also incorporate information from geometric diagrams.
(Section 3) A substantial portion of the report is spent discussing potential applications of foundation models. This is the most in-depth version of this I have seen; anyone aiming to forecast the impacts of AI on the real world in the next 5–10 years should likely read this section. It’s notable to me how nearly all of the applications have an emphasis on robustness and reliability, particularly in truth-telling and logical reasoning.
(Section 4.3) We’ve seen a few (AN #152) ways (AN #155) in which foundation models can be adapted. This section provides a good overview of the various methods that have been proposed in the literature. Note that adaptation is useful not just for specializing to a particular task like summarization, but also for enforcing constraints, handling distributional shifts, and more.
(pg. 92) Foundation models are commonly evaluated by their performance on downstream tasks. One limitation of this evaluation paradigm is that it makes it hard to distinguish between the benefits provided by better training, data, adaptation techniques, architectures, etc. (The authors propose a bunch of other evaluation methodologies we could use.)
(Section 4.9) There is a review of AI safety and AI alignment as it relates to foundation models, if you’re interested. (I suspect there won’t be much new for readers of this newsletter.)
(Section 4.10) The section on theory emphasizes studying the pretraining-adaptation interface, which seems quite good to me. I especially liked the emphasis on the fact that pretraining and adaptation work on different distributions, and so it will be important to make good modeling assumptions about how these distributions are related.
A core proposition in economics is that voluntary exchanges benefit both parties. We show that people often deny the mutually beneficial nature of exchange, instead espousing the belief that one or both parties fail to benefit from the exchange. Across 4 studies (and 8 further studies in the online supplementary materials), participants read about simple exchanges of goods and services, judging whether each party to the transaction was better off or worse off afterward. These studies revealed that win-win denial is pervasive, with buyers consistently seen as less likely to benefit from transactions than sellers. Several potential psychological mechanisms underlying win-win denial are considered, with the most important influences being mercantilist theories of value (equating wealth with money) and theory-of-mind limits (failing to observe that people do not arbitrarily enter exchanges). We argue that these results have widespread implications for politics and society.
[Keywords: folk economics, zero-sum thinking, intuitive theories, theory of mind, decision making]
…Even though economists have long been convinced by Smith’s arguments, battles against mercantilism and trade-protectionism must be fought anew each generation, as Ricardo (1817/2004), Bastiat (1845/2011), Marshall (1879/1949), Friedman (1962), and Krugman (1996) have done in turn. This need to relearn basic economics anew each generation encourages the hypothesis that zero-sum thinking is psychologically natural—a hypothesis endorsed explicitly by economists including Bastiat (1845/2011) and Sowell (2008).
The denial of transactions as win-win fits a broader pattern of zero-sum thinking—the belief that one party’s gain is another party’s loss. Zero-sum thinking is usually mistaken in economics precisely because individual trades do not make individual parties worse off. Yet it appears to be endemic in people’s thinking about economic matters. Laypeople tend to believe that more profitable companies are less socially responsible (Bhattacharjee et al 2017), when the true correlation is just the opposite. Negotiators often perceive themselves as carving up a “fixed pie”, decreasing the chances of a successful outcome (Bazerman & Neale, 1983; de Dreu et al 2000). People believe that the government cannot benefit one group without harming another (Bazerman et al 2001) and are particularly inclined to think in zero-sum ways about international trade (Baron & Kemp, 2004; Johnson et al 2019) and immigration (Esses et al 2001; Louis et al 2013). But zero-sum thinking also seems to be psychologically natural, occurring across many countries (Rózycka-Tran et al 2015) and political orientations, though manifesting differently among liberals and conservatives (Davidai & Ongis, 2019). Zero-sum thinking has been noted in numerous settings (albeit not always fallaciously), including students’ thinking about grades (Meegan, 2010), reasoners thinking about evidence (Pilditch et al 2019), consumers’ thinking about product features (Chernev, 2007; Newman et al 2014), and even couples’ thinking about love (Burleigh et al 2017).
…Overview of Experiments: 4 studies tested win-win denial and its moderators. The general method of these experiments was to ask participants about ordinary exchanges of goods or services—for example, Sally purchasing a shirt from Tony’s store, Eric purchasing a haircut from Paul’s barber shop, or Mark trading his soy sauce for Fred’s vinegar. For each transaction, participants were asked whether or not each party was better off after the transaction. From the standpoint of neoclassical economics, all parties were better off after all exchanges, since people do not voluntarily enter into transactions at a loss, and we sought to avoid conditions under which behavioral amendments to economics would be likely to produce major exceptions. Nonetheless, if people engage in win-win denial, we would expect to see a widespread belief that some parties to these exchanges do not benefit.
The particular pattern of non-benefit can help to test the potential mechanisms for win-win denial. If mercantilism is the culprit, we would expect to see buyers (but not sellers) perceived as worse off and barters as pointlessly failing to benefit either party. On the other hand, the evolutionary mismatch account suggests that people may be better at recognizing positive-sum transactions among like-kind barters rather than monetary transactions, where people might even believe that sellers are made worse off since they give up valuable goods in exchange for intrinsically valueless currency. These hypotheses were tested in Study 1.
Study 2 tested a further implication of mercantilism—that exchanges described in terms of time (labor) rather than money would be seen as more beneficial. Study 3 tested the theory of mind account by attempting to induce participants to take the perspective of the buyer by giving reasons for the buyer’s purchase. Finally, Study 4 varied the prices of monetary exchanges to test the heuristic-substitution account, since very inexpensive products should then be seen as benefiting the consumers at the expense of the seller.
In the online supplementary materials, we report several additional replication studies (Part B), including studies that varied the framing of the transactions or wording of the dependent variable (Studies S1, S4, and S5) and between-subjects replications of key results (Studies S2 and S3). We also pool data across studies to test individual differences in win-win denial (Part C), particularly educational and political predictors.
…Win-win denial seems to be exacerbated by issues in our theory of mind. Specifically, people are naïve realists, making a perspective-taking error in which they interpret their own preferences as ground truth, neglecting that others have different preferences and reasons for their actions. Merely reminding people that the buyers and traders had reasons for their choices (even empty reasons such as “Mary wanted the chocolate bar”) reduced the incidence of win-win denial (Study 3; see also Study S3 in the online supplementary materials). Other results reported in the online supplementary materials were also consistent with this idea. Making the preference of buyers and traders more salient reduced win-win denial (Study S4), as did asking participants to rate the parties’ perceived gain or loss (Study S5). Together, these results suggest that people do not spontaneously reflect on the fact that parties to exchanges have reasons for their behavior, leading them to discount potential gains from trade.
…Perhaps surprisingly, we find in a separate project (Johnson et al 2021 [“Zero-sum thinking in self-perceptions of consumer welfare”]) that consumers often claim that their own past transactions make them either worse off or no better off, and even make similar claims about planned future transactions. Thus, there appears to be a striking attitude-behavior gap here: Whereas people’s lay theories of exchange seem to produce strong intuitions that consumers are often made worse off by their purchases, these attitudes do not seem to manifest (in most cases, fortunately) in their actions. Perhaps this gap is driven by differences in what is considered relevant when evaluating exchanges more abstractly from a distance versus more concretely from a nearby temporal perspective (Trope & Liberman, 2010), with the latter conditions prompting more thoughts about the consumption experience itself (see Future Directions above). In any case, we think this is a genuine puzzle deserving of further research.
We estimate the impact of the Green Revolution in the developing world by exploiting exogenous heterogeneity in the timing and extent of the benefits derived from high-yielding crop varieties (HYVs).
We find that HYVs increased yields by 44% between 1965 and 2010, with further gains coming through reallocation of inputs. Higher yields increased income and reduced population growth. A 10-year delay of the Green Revolution would in 2010 have cost 17% of GDP (gross domestic product) per capita and added 223 million people to the developing-world population. The cumulative GDP loss over 45 years would have been US$112 trillion (US$83 trillion in 2010 dollars), corresponding to approximately one year of current global GDP.
…The IARCs targeted developing countries, so all European countries, all former Soviet republics, Australia, Canada, Israel, Japan, New Zealand, and the United States are excluded from the sample…Our shift-share variable indicates that HYVs increased yields of food crops by 44% between 1965 and 2010. The total effect on yields is even higher because of substitution toward crops for which HYVs were available and because of reallocation of land and labor. Beyond agriculture, our baseline estimates show strong, positive, and robust impacts of the Green Revolution on different measures of economic development. Most striking is the impact on GDP (gross domestic product) per capita. Our estimates imply that delaying the Green Revolution for 10 years would have reduced GDP per capita in 2010 by US$1,722 (US$1,273 in 2010 dollars; adjusted for PPP [purchasing power parity]), or 17%, across our full sample of countries. The dollar amount is large, in part because some of the countries grew relatively rich during the period we study: the comparable loss in today’s least developed countries is US$530 (US$392 in 2010 dollars). By 2010, the cumulative global loss of GDP from delaying the Green Revolution 10 years would have been about US$112 trillion (US$83 trillion in 2010 dollars)—roughly a year of present-day global GDP. Needless to say, this surpasses the amount of resources that went into developing HYVs by several orders of magnitude. The income loss would have been much greater had the Green Revolution never happened, perhaps reducing GDP per capita in the developing world to 50% of its current level, if our estimates are taken at face value—although we stress that this number is subject to considerable uncertainty and depends on a somewhat implausible counterfactual. Despite these reservations, the results of this paper clearly place the Green Revolution among the most important economic events in the twentieth century.
We find no evidence that the gains from increased agricultural productivity were offset by any Malthusian effects; the increased availability of food does not appear to have been eroded by population increases. Instead, we find a negative effect of the Green Revolution on fertility. Our estimates suggest that the world would have contained more than 200 million additional people in 2010 if the onset of the Green Revolution had been delayed for 10 years. Lower population growth increased the relative size of the working-age population, leading to a demographic dividend that accounts for roughly one-fifth of our estimated effect on GDP per capita. Our paper also sheds light on a concern, often expressed in the literature, that agricultural productivity improvements would pull additional land into agriculture at the expense of forests and other environmentally valuable land uses. We find evidence to the contrary: in keeping with the “Borlaug hypothesis”, the Green Revolution tended to reduce the amount of land devoted to agriculture.
…The start of the Green Revolution can be dated quite precisely. As noted above, the first high-yielding rice varieties were crossed in 1962 at IRRI, and after several generations of selection, they were initially released in 1965 to national research programs in rice-growing countries around the world. For wheat, it is similarly possible to identify a zero date for the Green Revolution: the first successful crosses from the Rockefeller wheat program took place in the 1950s, but they were not released to farmers in other developing countries until 1965. Maize followed soon after. For each crop, we can identify with reasonable precision the date at which the research institution first released a variety based on breeding work that took place within the institution.
The Mexican case is unique in the sense that the first HYVs were developed in a research program that did not yet have standing as an international institution. As a result, the diffusion of the wheat semi-dwarf varieties took place within Mexico slightly before the varieties became available in other countries. For all our other crops, HYVs developed at the international research centers became available to all countries at effectively the same moment—either upon a formal initial release from the international center or through the inclusion of the material in “nurseries” of promising experimental material that were shared with researchers across the developing world.
…Converting our estimates from logarithms to levels, we find that relative yields are on average 9% higher 10 years after a HYV release (β10 = 0.09) and 75% higher after 40 years (β40 = 0.56). The gradual increase in yields happens both because adoption is gradual, along an extensive margin, and because successive vintages of HYVs of a crop increase yields beyond what the first HYV could achieve. Our estimated magnitudes are consistent with the micro-level literature, surveyed in Evenson & Gollin 2003b, which shows that HYVs typically have at least 50% higher yields than traditional varieties for a given set of inputs. Inputs are not fixed, however. Many HYVs respond better to fertilizer and other inputs than traditional varieties, raising yields still further; gains of the magnitude observed in figure 2 are not unexpected in cases where HYV adoption is widespread.
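The quoted percentages are just the usual conversion of log-point coefficients into level effects:

$$100\,(e^{\beta}-1)\%:\qquad e^{0.09}-1\approx 0.094\ (\approx 9\%),\qquad e^{0.56}-1\approx 0.75\ (=75\%).$$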
…The event study for GDP per capita in figure 4A shows that 10 years after the onset of the Green Revolution in 1965, countries specialized in wheat, rice, and maize begin to have faster income growth than other countries.
…To put our estimated effect sizes into perspective, the effect of delaying the Green Revolution by 10 years is of a magnitude comparable (with opposite sign) to the income effect of democratizing, which Acemoglu et al 2019 estimate to be about 20% after 25 years, and to the effect of railroad access in 19th-century India, which Donaldson (2018) puts at 16%. The population effect we find is substantially smaller than the effect of medical innovations, which, according to Acemoglu et al 2020, has increased the population by 45% between 1940 and 1980 in their sample of countries and by even more in low-income and middle-income countries.
…The Green Revolution is often associated with the 1960s and 1970s, but rather than slowing down, the rate of adoption and the number of new HYVs increased in the 1980s, 1990s, and 2000s. Scattered evidence from sub-Saharan Africa suggests that the HYV adoption rate increased by as much in the 2000s as in the 4 preceding decades. One reason is that, compared to other parts of the world, especially Southeast Asia, African agriculture is specialized in cassava, sorghum, millet, and other crops for which HYVs became available relatively late. Our results consequently shed light on the divergence between Southeast Asia and Africa during the second half of the 20th century.
We estimate the distribution of television advertising elasticities and the distribution of the advertising return on investment (ROI) for a large number of products in many categories…We construct a data set by merging market (DMA) level TV advertising data with retail sales and price data at the brand level…Our identification strategy is based on the institutions of the ad buying process.
Our results reveal substantially smaller advertising elasticities compared to the results documented in the literature, as well as a sizable percentage of statistically insignificant or negative estimates. The results are robust to functional form assumptions and are not driven by insufficient statistical power or measurement error.
The ROI analysis shows negative ROIs at the margin for more than 80% of brands, implying over-investment in advertising by most firms. Further, the overall ROI of the observed advertising schedule is positive for only one-third of all brands.
[Keywords: advertising, return on investment, empirical generalizations, agency issues, consumer packaged goods, media markets]
…We find that the mean and median of the distribution of estimated long-run own-advertising elasticities are 0.023 and 0.014, respectively, and 2 thirds of the elasticity estimates are not statistically different from zero. These magnitudes are considerably smaller than the results in the extant literature. The results are robust to controls for own and competitor prices and feature and display advertising, and the advertising effect distributions are similar whether a carryover parameter is assumed or estimated. The estimates are also robust if we allow for a flexible functional form for the advertising effect, and they do not appear to be driven by measurement error. As we are not able to include all sensitivity checks in the paper, we created an interactive web application that allows the reader to explore all model specifications. The web application is available.
…First, the advertising elasticity estimates in the baseline specification are small. The median elasticity is 0.0140, and the mean is 0.0233. These averages are substantially smaller than the average elasticities reported in extant meta-analyses of published case studies (Assmus, Farley, and Lehmann (1984b), Sethuraman, Tellis, and Briesch (2011)). Second, 2 thirds of the estimates are not statistically distinguishable from zero. We show in Figure 2 that the most precise estimates are those closest to the mean and the least precise estimates are in the extremes.
…6.1 Average ROI of Advertising in a Given Week:
In the first policy experiment, we measure the ROI of the observed advertising levels (in all DMAs) in a given week t relative to not advertising in week t. For each brand, we compute the corresponding ROI for all weeks with positive advertising, and then average the ROIs across all weeks to compute the average ROI of weekly advertising. This metric reveals whether, on the margin, firms choose the (approximately) correct advertising level or could increase profits by either increasing or decreasing advertising.
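As a rough illustration of this metric, the sketch below computes one brand's average weekly ROI from an assumed ad elasticity and margin under a crude constant-elasticity approximation; the numbers, file, and column names are placeholders, not the paper's estimates or data.

```python
# Weekly ROI sketch: incremental gross profit attributable to a week's advertising,
# relative to that week's ad cost, averaged over weeks with positive advertising.
import pandas as pd

brand = pd.read_csv("brand_weekly.csv")   # hypothetical weekly revenue and ad spend
margin = 0.30                             # assumed manufacturer margin
elasticity = 0.014                        # illustrative ad elasticity

adv = brand[brand["ad_spend"] > 0].copy()
# Crude approximation: removing the week's ads (a ~100% reduction) scales revenue
# by roughly the elasticity, so incremental revenue ~= elasticity * revenue.
adv["incremental_revenue"] = elasticity * adv["revenue"]
adv["roi"] = (margin * adv["incremental_revenue"] - adv["ad_spend"]) / adv["ad_spend"]

print(f"Average weekly ROI: {adv['roi'].mean():.1%}")
```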
We provide key summary statistics in the top panel of Table III, and we show the distribution of the predicted ROIs in Figure 3(a). The average ROI of weekly advertising is negative for most brands over the whole range of assumed manufacturer margins. At a 30% margin, the median ROI is −88.15%, and only 12% of brands have positive ROI. Further, for only 3% of brands the ROI is positive and statistically different from zero, whereas for 68% of brands the ROI is negative and statistically different from zero.
These results provide strong evidence for over-investment in advertising at the margin. [In Appendix C.3, we assess how much larger the TV advertising effects would need to be for the observed level of weekly advertising to be profitable. For the median brand with a positive estimated ad elasticity, the advertising effect would have to be 5.33× larger for the observed level of weekly advertising to yield a positive ROI (assuming a 30% margin).]
6.2 Overall ROI of the Observed Advertising Schedule: In the second policy experiment, we investigate if firms are better off when advertising at the observed levels versus not advertising at all. Hence, we calculate the ROI of the observed advertising schedule relative to a counterfactual baseline with zero advertising in all periods.
We present the results in the bottom panel of Table III and in Figure 3(b). At a 30% margin, the median ROI is −57.34%, and 34% of brands have a positive return from the observed advertising schedule versus not advertising at all. Whereas 12% of brands have only positive values and 30% have only negative values in their confidence intervals, there is more uncertainty about the sign of the ROI for the remaining 58% of brands. This evidence leaves open the possibility that advertising may be valuable for a substantial number of brands, especially if they reduce advertising on the margin.
…Our results have important positive and normative implications. Why do firms spend billions of dollars on TV advertising each year if the return is negative? There are several possible explanations. First, agency issues, in particular career concerns, may lead managers (or consultants) to overstate the effectiveness of advertising if they expect to lose their jobs if their advertising campaigns are revealed to be unprofitable. Second, an incorrect prior (i.e., conventional wisdom that advertising is typically effective) may lead a decision maker to rationally shrink the estimated advertising effect from their data to an incorrect, inflated prior mean. These proposed explanations are not mutually exclusive. In particular, agency issues may be exacerbated if the general effectiveness of advertising or a specific advertising effect estimate is overstated. [Another explanation is that many brands have objectives for advertising other than stimulating sales. This is a nonstandard objective in economic analysis, but nonetheless, we cannot rule it out.] While we cannot conclusively point to these explanations as the source of the documented over-investment in advertising, our discussions with managers and industry insiders suggest that these may be contributing factors.
Why are high-income and low-income earners not substantially polarized in their support for progressive income taxation? This article posits that the affluent fail to recognize that they belong to the high-income group, and this misperception affects their preferences over progressive taxation.
To explain this mechanism theoretically, I introduce a formal model of subjective income-group identification through self-comparison to an endogenous reference group. In making decisions about optimal tax rates, individuals then use these subjective evaluations of their own income group and earnings of other groups.
Relying on ISSP data, I find strong evidence for the model’s empirical implications: most high-income earners support progressive taxation when they identify themselves with a lower group. Additionally, individuals who overestimate the earnings of the rich are more likely to support progressive taxation.
[Keywords: taxation, preferences, inequality, public opinion, subjective income class, social comparison]
…More specifically, I demonstrate that most people, even the affluent, support progressive tax rates when they believe it would be someone richer than them who would disproportionately bear the extra tax burden. This belief is mostly driven by the difficulty in precisely identifying high-income individuals and their income. For most citizens being affluent is a fuzzy concept that is hard to define. Everyone—high-income and low-income individuals alike—is confident that Bill Gates or Mark Zuckerberg are among those with high income. Nobody would oppose the notion that an individual who lives on the Upper East Side in Manhattan, drives a Ferrari, and takes vacations on an exotic island would be considered rich. However, how do people classify the owners of the most beautiful house on their block or the person in their neighborhood who has a nice car? Who do they think are the high-income earners? More importantly, how do people assess their affluence?
…The following analysis relies on the 2009 Social Inequality International Social Survey Programme (ISSP 2009), which asks a variety of questions about perceptions of economic inequality, self-placement, and preferences on redistributive policies. The analysis was restricted to countries where information on income allowed the generation of 10 deciles and where the respondents were asked to report gross household income before taxes and other deductions. The sample covers 22 countries and around 8,000 respondents. It thus provides rich individual-level data on perceptions and preferences over welfare policies, as well as all the important control variables. First, I examine the determinants of subjective self-placement. Then I proceed to explore how subjective self-placement and assessments of the high-income group’s affluence levels affect preferences over progressive taxation.
…Before presenting the empirical results, it is interesting to look at Self-Placement and Income Distance to a CEO descriptively. The aim is to establish whether most high-income individuals place themselves in the middle, as well as to investigate the nature of perceptions pertaining to the income distance to a CEO. Figure 2 shows the distribution of Self-Placement and Income Distance to a CEO by objective income deciles of the respondents. Although the analytical scope of Figure 2 is limited, it is immediately clear that when asked to place themselves on a 10-point scale, most respondents place themselves between the 4th and the 6th groups. Although self-placement increases with the objective income decile, the magnitude of this increase is not very substantial. The median value of Self-Placement of the respondents below the median household earnings averages around 5, whereas it is 6 for those above the median.
Figure 2 also reveals valuable insights about the subjective perceptions of respondents on the income distance from a CEO. The perceived distance ranges from −5 to +20. The horizontal line shows the logarithmic transformation of the highest CEO to average worker pay ratio of a company in the United States, the country with the highest overall proportion in the sample. The logarithmic transformation of the highest CEO-to-worker compensation ratio in the United States is 4.06 (Melin et al 2019), whereas the logarithmic transformation of the average CEO-to-worker compensation ratio is only 2.42 (Duarte 2019). Looking at the distribution of the perceived distances and the actual numbers, it is clear that many people overestimate the distance by a considerable margin. This figure thus shows that some respondents’ best guess about the yearly earnings of a CEO is substantially larger than the highest earner’s salary in their country. These numbers reveal that some people think about prototypes that do not exist when they are prompted to think about the income levels of the rich…Looking at the right side of the figure, respondents who belong to the top decile place themselves in the higher groups, around 7, only when they think they earn more than a typical CEO in their country. As their impression of the income of an average CEO increases, they start underestimating their position substantially.
[‘“Bloomberg spent $500 million on ads. The U.S. population is 327 million”, Rivas wrote. “He could have given each American $1 million and still have money left over. I feel like a $1 million check would be life-changing for most people. Yet he wasted it all on ads and STILL LOST.”’]
…The most striking result perhaps relates to the individuals who place themselves in high groups but still believe they earn substantially less than a CEO. In line with the prediction of Proposition 3.3, which posits that an individual who identifies with the higher-income group still prefers a progressive tax rate if she believes that the other members of the high-income group are substantially richer than her, this figure shows that the predicted probability of supporting progressive taxation of an individual who places herself in the top income group is very high (0.95) when that individual unrealistically overestimates a typical CEO’s earnings.
…Perhaps one of the most interesting findings of this paper is that the well-known “middle-income bias” found in public opinion surveys can be systematically explained. When individuals compare themselves either to the superrich or the superpoor, they tend to infer that they are situated around the middle of the income ladder. This, of course, has severe effects on their political preferences.
The government of Greenland has decided to suspend all oil exploration off the world’s largest island, calling it “a natural step” because the Arctic government “takes the climate crisis seriously.”
No oil has been found yet around Greenland, but officials there had seen potentially vast reserves as a way to help Greenlanders realize their long-held dream of independence from Denmark by cutting the annual subsidy of about $543 million USD that the Danish territory receives from the Danish government.
Global warming means that retreating ice could uncover potential oil and mineral resources which, if successfully tapped, could dramatically change the fortunes of the semi-autonomous territory of 57,000 people. “The future does not lie in oil. The future belongs to renewable energy, and in that respect we have much more to gain”, the Greenland government said in a statement. The government said it “wants to take co-responsibility for combating the global climate crisis.”
The decision was made June 24 but made public Thursday.
…When the current government took office, led by the Inuit Ataqatigiit party since April’s parliamentary election, it immediately began to deliver on election promises and stopped plans for uranium mining in southern Greenland. Greenland still has 4 active hydrocarbon exploration licences, which it is obliged to maintain as long as the licensees are actively exploring. They are held by 2 small companies.
Every year, nearly 5,000 patients die while waiting for kidney transplants, and yet an estimated 3,500 procured kidneys are discarded. Such a polarized coexistence of dire scarcity and massive wastefulness has been mainly driven by insufficient pooling of cadaveric kidneys across geographic regions.
Although numerous policy initiatives are aimed at broadening organ pooling, they rarely account for a key friction—efficient airline transportation, ideally direct flights, is necessary for long-distance sharing, because of the time-sensitive nature of kidney transplantation. Conceivably, transplant centers may be reluctant to accept kidney offers from far-off locations without direct flights.
In this paper, we estimate the effect of the introduction of new airline routes on broader kidney sharing. By merging the U.S. airline transportation and kidney transplantation data sets, we create a unique sample tracking (1) the evolution of airline routes connecting all the U.S. airports and (2) kidney transplants between donors and recipients connected by these airports. We estimate that the introduction of a new airline route increases the number of shared kidneys by 7.3%. We also find a net increase in the total number of kidney transplants and a decrease in the organ discard rate with the introduction of new routes. Notably, the post-transplant survival rate remains largely unchanged, although average travel distance increases after the introduction of new airline routes.
Our results are robust to alternative empirical specifications and have important implications for improving access to the U.S. organ transplantation system.
[Keywords: organ transplantation, airline transportation, pooling, flexibility, causal inference]
Physical attractiveness is an important axis of social stratification associated with educational attainment, marital patterns, earnings, and more. Still, relative to ethno-racial and gender stratification, physical attractiveness is relatively understudied. In particular, little is known about whether returns to physical attractiveness vary by race, or by race and gender combined.
In this study, we use nationally representative data to examine whether (1) socially perceived physical attractiveness is unequally distributed across race/ethnicity and gender subgroups and (2) returns to physical attractiveness vary substantially across race/ethnicity and gender subgroups. Notably, the magnitude of the earnings disparities along the perceived attractiveness continuum, net of controls, rivals and/or exceeds in magnitude the black-white race gap and, among African-Americans, the black-white race gap and the gender gap in earnings.
The implications of these findings for current and future research on the labor market and social inequality are discussed.
Prominent economists have supposed that the private production of full-bodied gold or silver coins is inefficient: due to information asymmetry, private coins will be chronically low-quality or underweight.
An examination of private mints during gold rushes in the US in the years 1830–63, drawing on contemporary accounts and numismatic literature, finds otherwise. While some private gold mints produced underweight coins, from incompetence or fraudulent intent, such mints did not last long. Informed by newspapers about the findings of assays, money-users systematically abandoned substandard coins in favour of full-weight coins. Only competent and honest mints survived.
The decline in late 19th century agricultural prices, by reducing the incomes of aristocratic landed estates and of non-aristocratic landed families, led to richly dowried American heiress brides being substituted for brides from landed families in British aristocratic marriages. This reflected a wider 19th century phenomenon of aristocratic substitution of foreign brides for landed brides and the substitution of daughters of British businessmen for daughters of landed families when agricultural prices declined.
The results are consistent with positive assortative matching with lump-sum transfers (dowries), where landowning family dowries are cash constrained in periods of agricultural downturn.
I think whaling is really cool. I can’t help it. It’s one of those things like guns and war and space colonization which hits the adventurous id. The idea that people used to go out in tiny boats into the middle of oceans and try to kill the biggest animals to ever exist on planet earth with glorified spears to extract organic material for fuel is awesome. It’s like something out of a fantasy novel.
So I embarked on this project to understand everything I could about whaling. I wanted to know why burning whale fat in lamps was the best way to light cities for about 50 years. I wanted to know how profitable whaling was, what the hunters were paid, and how many whaleships were lost at sea. I wanted to know why the classical image of whaling was associated with America and what other countries have whaling legacies. I wanted to know if the whaling industry wiped out the whales and if they can recover.
…Fun Fact 1: Right whale testicles make up 1% of their weight, so each testicle weighs around 700 pounds. The average American eats 222 pounds of meat per year (not counting fish), so a single right whale testicle should cover a family of 4 for almost a year.
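The arithmetic behind that last claim: $700 / (4 \times 222) \approx 0.79$ years of a 4-person family’s meat consumption.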
This paper investigates whether the impact of children on the labor market outcomes of women relative to men—child penalties—can be explained by the biological links between mother and child. We estimate child penalties in biological and adoptive families using event studies around the arrival of children and almost 40 years of adoption data from Denmark. Short-run child penalties are slightly larger for biological mothers than for adoptive mothers, but their long-run child penalties are virtually identical and precisely estimated. This suggests that biology is not a key driver of child-related gender gaps.
Technological advancements bring changes to our life, altering our behaviors as well as our role in the economy. In this paper, we examine the potential effect of the rise of robotic technology on health.
Using the variation in the initial distribution of industrial employment in US cities and the difference in robot adoption across industries over time to predict robot exposure in the local labor market, we find evidence that higher penetration of industrial robots in the local economy is positively related to the health of the low-skilled population.
A 10% increase in robots per 1000 workers is associated with an approximately 10% reduction in the share of low-skilled individuals reporting poor health. Further analysis suggests that the reallocation of tasks partly explains this finding. A 10% increase in robots per 1000 workers is associated with an approximately 1.5% reduction in physical tasks supplied by low-skilled workers.
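A minimal sketch of how a Bartik-style robot-exposure measure of this kind can be constructed: baseline industry employment shares in each city interacted with national industry-level changes in robots per thousand workers. The files and column names are hypothetical.

```python
# Shift-share ("Bartik") robot exposure: sum over industries of the city's base-year
# employment share times the national change in robots per 1,000 workers.
import pandas as pd

shares = pd.read_csv("city_industry_shares.csv")   # city, industry, emp_share (base year)
robots = pd.read_csv("industry_robots.csv")        # industry, d_robots_per_1000

merged = shares.merge(robots, on="industry")
merged["contribution"] = merged["emp_share"] * merged["d_robots_per_1000"]
exposure = (merged.groupby("city", as_index=False)["contribution"].sum()
                  .rename(columns={"contribution": "robot_exposure"}))

# robot_exposure can then be related to changes in the share of low-skilled
# residents reporting poor health, with city-level controls.
print(exposure.head())
```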
Why do some people blame the political system for the problems in their lives? We explore the origins of these grievances and how people assign responsibility and blame for the challenges they face. We propose that individual differences in the personality traits of locus of control and self-esteem help explain why some blame the political system for their personal problems. Using responses from a module of the 2016 Cooperative Congressional Election Study, we show that those with low self-esteem and a weaker sense of control over their fates are more likely to blame the political system for the challenges they face in their lives. We also demonstrate that this assignment of blame is politically consequential, where those who intertwine the personal and the political are more likely to evaluate elected officials based on pocketbook economic conditions rather than sociotropic considerations.
Chemosensory anxiety signals act independent of odor concentration.
It is well documented that chemosensory anxiety signals affect the perceiver’s physiology; however, much less is known about their effects on overt social behavior. The aim of the present study was to investigate the effects of chemosensory anxiety signals on trust and risk behavior in men and women.
Axillary sweat samples were collected from 22 men during the experience of social anxiety, and during a sport control condition. In a series of 5 studies, the chemosensory stimuli were presented via an olfactometer to 214 participants acting as investors in a bargaining task either in interaction with a fictitious human co-player (trust condition) or with a computer program (risk condition).
Chemosensory anxiety signals reduced trust and risk behavior in women; in men, no effects were observed.
These findings suggest that chemosensory anxiety may be transmitted contagiously, preferentially in women.
Investigates the long-term causal effects of bombings on later economic development.
Focus on Laos, one of the most intensely bombed countries per capita in history.
Use granular grid data (nightlights and population) as proxies for economic development.
No robust effects of bombings in southern Laos, but some effects in northern Laos.
No within-country conditional economic convergence, which could be Laos-specific.
This study investigates the long-term causal effects of U.S. bombing missions during the Vietnam War on later economic development in Laos. Following an instrumental variables approach, we use the distance between the centroid of village-level administrative boundaries and heavily bombed targets, namely, the Ho Chi Minh Trail in southern Laos and Xieng Khouang Province in northern Laos, as an instrument for the intensity of U.S. bombing missions. We use three datasets of mean nighttime light intensity (1992, 2005, and 2013) and two datasets of population density (1990 and 2005) as outcome variables. The estimation results show no robust long-term effects of U.S. bombing missions on economic development in southern Laos but show negative effects in northern Laos, even 40 years after the war. We also found that the results do not necessarily support the conditional convergence hypothesis within a given country, although this result could be unique to Laos.
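[To make the identification strategy concrete, here is a generic two-stage least squares sketch on simulated data; the variable names (`dist_to_trail`, `nightlights`) and all coefficients are illustrative assumptions, not the authors’ data or specification.]

```python
# Minimal 2SLS sketch of the identification strategy: distance to heavily bombed
# targets instruments for bombing intensity; nightlights proxy development.
# Simulated data; not the paper's actual dataset or specification.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
dist_to_trail = rng.uniform(0, 100, n)             # instrument: km to Ho Chi Minh Trail
unobserved = rng.normal(size=n)                    # confounder (e.g., terrain, remoteness)
bombing = 50 - 0.4 * dist_to_trail + unobserved + rng.normal(size=n)   # endogenous regressor
nightlights = 2.0 - 0.05 * bombing + 0.5 * unobserved + rng.normal(size=n)

# Stage 1: regress bombing intensity on the instrument.
X1 = np.column_stack([np.ones(n), dist_to_trail])
beta1, *_ = np.linalg.lstsq(X1, bombing, rcond=None)
bombing_hat = X1 @ beta1

# Stage 2: regress the outcome on fitted bombing intensity.
X2 = np.column_stack([np.ones(n), bombing_hat])
beta2, *_ = np.linalg.lstsq(X2, nightlights, rcond=None)
print(f"2SLS estimate of bombing effect: {beta2[1]:.3f} (true value -0.05)")
```

The instrument only affects the outcome through bombing intensity in this toy data-generating process, which is the exclusion restriction the authors must defend with their distance instrument.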
A randomized tenure security intervention in Zambia statistically-significantly reduced farmers’ fear of losing their land.
But it had no impact on land fallowing, agroforestry, or other investments.
We cross-randomize tenure with an agroforestry extension that relaxes financial and technical constraints to investment.
The impact of land tenure is still zero even when these other constraints are relaxed.
There is broad agreement among the most prominent observational studies that tenure insecurity deters investment. We present new experimental evidence testing this proposition: a land certification program randomized across villages in Zambia. Our results contradict the consensus.
Though the intervention improved perceptions of tenure security, it had no impact on investment in the following season. The impact is still zero even after a cross-randomized agroforestry extension relaxes financial and technical constraints to agroforestry investment. Though relaxing these constraints has a direct effect, it is not enhanced by granting land tenure, implying tenure insecurity had not been a barrier to investment.
…This paper sidesteps such challenges by using a randomized experiment. We evaluate the short-run effects of an intervention in Zambia that cross-randomized an agroforestry extension with a program that strengthened customary land tenure through field demarcation and certification. We test for whether tenure security affects a host of outcomes drawn from prior observational studies. Our experimental results do not corroborate these studies. We estimate with reasonable precision that tenure security has zero effect…Finally, we discard our experimental variation and apply several observational research designs similar to those used in prior studies. We show that had we used such a design we would have spuriously concluded that tenure security has positive and statistically-significant effects. This exercise does not necessarily imply the estimates of the observational studies were flawed. But it does show that the key moments used by these studies for identification also appear in our Zambian sample. That implies the context is not entirely different and that it is possible to find these moments even in a sample where granting tenure security has no effect.
From the end of the Civil War to the onset of the Great War, the United States experienced an unprecedented increase in commitment rates for mental asylums. Historians and sociologists often explain this increase by noting that public sentiment called for widespread involuntary institutionalization to avoid the supposed threat of insanity to social well-being. However, that explanation neglects expanding rent seeking within psychiatry and the broader medical field over the same period. In this paper, we argue that stronger political influence from mental healthcare providers contributed substantially to the rise in institutionalization. We test our claim empirically with reference to the catalog of medical regulations from 1870 to 1910, as well as primary sources documenting rates of insanity at the state level. Our findings provide an alternative explanation for the historical rise in US institutionalizations.
[Keywords: rent-seeking, public health, American economic history, mental health, insanity]
…Between 1870 and 1910, institutionalization rates (per 100,000 persons) rose nearly 3× (see Figure 1).
…In this paper, we utilize a public choice framework to offer a complementary explanation for the rise in institutionalizations, which argues that the expansion of public asylums benefited asylum-based physicians. Although we emphasize political exchange rather than public interest, the 2 explanations are not necessarily antagonistic.3 They can be complements (Leeson 2019, pp. 39–40). To illustrate such complementarity, consider the “bootleggers and Baptists” theory of regulation (Yandle 1983; Horpedahl 2020). The “Baptists”, by means of public-interest justifications, propose a policy that offers laudable public benefits. The “bootleggers”, rent seekers who expect to profit, will support the policy. In the case of the asylum’s expansion, we will argue that rent seeking was in play. Progressive social reformers and voters (i.e., the “Baptists”) saw the state asylum’s expansion as being in the public interest. Physicians and asylum superintendents (i.e., the “bootleggers”), when well-organized, joined with the progressive social reformers and voters out of self-interest. In other words, public and private interest forces were not at odds with one another—they complemented each other in ways that caused asylums to expand.
…To assess whether asylum physicians were able to secure rents, we rely on state-level institutionalization rates from 1870 to 1910 (provided by US Census Bureau documents) in conjunction with state-level legislation affecting entry into the medical profession (Baker 1984; Hamowy 1979). The ability of the medical community of a given state to procure barriers to entry into the profession becomes a proxy for the effectiveness of physicians in the field of mental care in securing rents. Our assumption is that in states where physicians were politically weak, asylum physicians must have been weak as well (and thus unable to secure additional rents). While numerous laws were adopted to restrict entry, the most important one was the examining board.4 Those boards were enforcement entities that could set the conditions of entry and also amplify the effectiveness of most of the other laws. If the medical profession was too weak to get an examining board, it was too weak to capture most other potential rent sources.
…Our analysis finds that many entry-restriction laws (examining boards in particular) explain the rise of asylum populations from 1870 to 1910. For example, the introduction of an examining board increased institutionalization rates by approximately 10–20%. The results control for state and year effects. They are robust to changes in how the institutionalized population is measured. Thus, a rent-seeking process was at play. This process dovetails well with public interest explanations of asylum expansion (Sutton 1991).
How resilient are high-skilled, white collar workers? We exploit a uniquely comprehensive dataset of individual-level resumes of bank employees and the setting of the Lehman Brothers bankruptcy to estimate the effect of an unanticipated shock on the career paths of mobile and high skilled labor.
We find evidence of short-term effects that largely dissipate over the course of the decade and that touch only the senior-most employees. We match each employee of Lehman Brothers in January 2008 to the most similar employees at Goldman Sachs, Morgan Stanley, Deutsche Bank, and UBS based on job positions, skills, education, and demographics. By 2019, the former Lehman Brothers employees are 2% more likely to have experienced at least a 6-month-long break from reported employment and 3% more likely to have left the financial services industry. However, these effects concentrate among senior individuals such as vice presidents and managing directors and are absent for junior employees such as analysts and associates.
Furthermore, in terms of subsequent career growth, junior employees of Lehman Brothers fare no worse than their counterparts at the other banks. Analysts and associates employed at Lehman Brothers in January 2008 have equal or greater likelihoods of achieving senior roles such as managing director in existing enterprises by January 2019 and are more likely to found their own businesses.
[Keywords: career disruptions, bankruptcy, human capital, skilled labor, inequality]
…Our last result suggests that former employees of Lehman Brothers were prone to use the disruption event as a platform to start new ventures, consistent with the evidence by Babina 2020 and Hacamo & Kleiner 2020. We identify entrepreneurial activity as individuals who are listed as (co-)founders, presidents, or C-level executives of firms that did not exist prior to the bankruptcy event. The unconditional likelihood of entrepreneurship among the employees of the control banks is 2.16%. This likelihood is much higher among former employees of Lehman Brothers, at 3.29%, with the difference statistically-significant at the 1% level. Across hierarchical levels, baseline entrepreneurship is higher for more senior employees (eg., 3.7% for managing directors and 4.1% for senior management), but the Lehman Brothers bankruptcy increases this rate for all positions. In fact, the starkest relative increase is observed for employees who held associate-level titles in January 2008, with ex-Lehman associates showing a 4.5% likelihood of subsequently founding their own ventures, compared to only 1.8% for associates at Goldman Sachs, Morgan Stanley, Deutsche Bank, and UBS.
E-commerce and online advertisement are growing trends.
The overall impact of ad blockers is unclear.
Using survey data, the effect of ad blocker use on online purchases is quantified.
The analysis reveals a positive effect of ad blocker use on e-commerce.
In light of the results, stakeholders should consider whether present online ad formats are the most suitable.
The use of ad-blocking software has risen sharply alongside online advertising and is widely seen as threatening the survival of the ad-supported web. However, the effects of ad blocking on consumer behavior have scarcely been studied.
This paper uses propensity score matching techniques on a longitudinal survey of 4411 Internet users in Spain to show that ad blocking has a positive causal effect on their number of online purchases. This could be attributed to benefits of ad blocking such as safer and enhanced navigation.
This striking result sharpens the ongoing debate over whether current online ads are too bothersome for consumers.
[Keywords: Ad blockers, advertising avoidance, e-commerce, propensity score matching]
…This study employs a rich dataset coming from a longitudinal survey. The source of the data is a survey conducted by the Spanish Markets and Competition Authority on the same sample of interviewees in the fourth quarter of 2017 and in the second quarter of 2018 ([dataset] CNMCData, 2019). The sample was designed to be representative of the population living in private households in Spain. The information was provided by 4411 Internet users ≥16 years old. At the baseline time point (fourth quarter of 2017), these individuals were asked whether they regularly used ad-blocking tools when navigating the web. Additionally, the survey collected information on their socio-demographic characteristics (age, gender, education level, and employment status) and on how they used the Internet (frequency of use of online services such as GPS navigation, instant messaging, mobile gaming, social networks, e-mail, and watching videos on the phone). Six months later (second quarter of 2018), the same individuals were asked how many online purchases they had made during the previous 6 months (including goods and services, irrespective of the form of payment). Thus, the outcome variable (number of online purchases) was measured later than the ad-blocking information and the rest of the variables (our X covariates).
Table 2: Estimated average treatment effects of ad blockers on online shopping (number of purchases in 6 months); rows report the ATT with lower and upper confidence intervals for each estimator: stratification on PS quintiles, stratification on PS deciles, and PSM—NN after CEM pruning (1) and (2). [ATT: average treatment effect on the treated. PSM: propensity score matching. NN: nearest neighbor. KM: kernel matching. PS: propensity scores. CEM: coarsened exact matching. LCI: lower confidence interval. UCI: upper confidence interval. (1) CEM pruning using use-of-Internet-apps covariates. (2) CEM pruning using socio-demographic covariates.]
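[For readers unfamiliar with the estimator behind Table 2, a minimal nearest-neighbor propensity-score-matching sketch on simulated data; the variables and effect sizes are invented for illustration and are not the CNMC survey.]

```python
# Minimal nearest-neighbor propensity-score matching sketch (simulated data,
# not the CNMC survey) illustrating the ATT estimand reported in Table 2.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4_000
age = rng.uniform(16, 75, n)
heavy_user = rng.binomial(1, 0.4, n)               # frequent use of online services
# Treatment: ad-blocker use, more likely among young heavy users.
p_treat = 1 / (1 + np.exp(-(-1.0 - 0.03 * (age - 40) + 1.2 * heavy_user)))
adblock = rng.binomial(1, p_treat)
# Outcome: online purchases in 6 months, with a true treatment effect of +1.
purchases = 2 + 0.5 * heavy_user - 0.02 * (age - 40) + 1.0 * adblock + rng.poisson(1, n)

# Estimate propensity scores, then match each treated unit to the control
# unit with the closest score (1-nearest-neighbor, with replacement).
X = np.column_stack([age, heavy_user])
ps = LogisticRegression().fit(X, adblock).predict_proba(X)[:, 1]
treated, controls = np.where(adblock == 1)[0], np.where(adblock == 0)[0]
matches = controls[np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)]
att = (purchases[treated] - purchases[matches]).mean()
print(f"ATT estimate: {att:.2f} (true effect 1.0)")
```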
I study the impact of transportation network companies (TNC) on traffic delays using a natural experiment created by the abrupt departure of Uber and Lyft from Austin, Texas.
Applying difference in differences and regression discontinuity specifications to high-frequency traffic data, I estimate that Uber and Lyft together decreased daytime traffic speeds in Austin by roughly 2.3%. Using Austin-specific measures of the value of travel time, I translate these slowdowns to estimates of citywide congestion costs that range from $33 to $52 million annually. Back of the envelope calculations imply that these costs are similar in magnitude to the consumer surplus provided by TNCs in Austin.
Together these results suggest that while TNCs may impose modest travel time externalities, restricting or taxing TNC activity is unlikely to generate large net welfare gains through reduced congestion.
Objective: This paper provides the first comprehensive assessment of the outcome of Paul Ehrlich’s and Stephen Schneider’s counteroffer (1995) to economist Julian Simon following Ehrlich’s loss in the famous Ehrlich-Simon wager on economic growth and the price of natural resources (1980–1990). Our main conclusion in a previous article is that, for indicators that can be measured satisfactorily or can be inferred from proxies, the outcome favors Ehrlich-Schneider in the first decade following their offer. This second article extends the timeline towards the present time period to examine the long-term trends of each indicator and proxy, and assesses the reasons invoked by Simon to refuse the bet.
Methods: Literature review, data gathering, and critical assessment of the indicators and proxies suggested or implied by Ehrlich and Schneider. Critical assessment of Simon’s reasons for rejecting the bet. Data gathering for his alternative indicators.
Results: For indicators that can be measured directly, the balance of the outcomes favors the Ehrlich-Schneider claims for the initial ten-year period. Extending the timeline and accounting for the measurement limitations or dubious relevance of many of their indicators, however, shifts the balance of the evidence towards Simon’s perspective.
Conclusion: The fact that Ehrlich and Schneider’s own choice of indicators yielded mixed results in the long run, coupled with the fact that Simon’s preferred indicators of direct human welfare yielded largely favorable outcomes is, in our opinion, sufficient to claim that Simon’s optimistic perspective was largely validated.
Using millions of father-son pairs spanning more than 100 years of US history [using US census data], we find that children of immigrants from nearly every sending country have higher rates of upward mobility than children of the US-born. Immigrants’ advantage is similar historically and today despite dramatic shifts in sending countries and US immigration policy. Immigrants achieve this advantage in part by choosing to settle in locations that offer better prospects for their children.
A fundamental feature of sacred values like environmental protection, patriotism, and diversity is individuals’ resistance to trading off these values in exchange for material benefit. Yet, for-profit organizations increasingly associate themselves with sacred values to increase profits and enhance their reputations.
In the current research, we investigate a potentially perverse consequence of this tendency: that observing values used instrumentally (ie., in the service of self-interest) subsequently decreases the sacredness of those values. Seven studies (n = 2,785) demonstrate support for this value corruption hypothesis. Following exposure to the instrumental use of a sacred value, observers held that value as less sacred (Studies 1–6), were less willing to donate to value-relevant causes (Studies 3 and 4), and demonstrated reduced tradeoff resistance (Study 7). We reconcile the current effect with previously documented value protection effects by suggesting that instrumental use decreases value sacredness by shifting descriptive norms regarding value use (Study 3), and by failing to elicit the same level of outrage as taboo tradeoffs, thus inhibiting value protective responses (Studies 4 and 5).
These results have important implications: People and organizations that use values instrumentally may ultimately undermine the very values from which they intend to benefit.
This scoping paper addresses the role of financial institutions in empowering the British Industrial Revolution. Prominent economic historians have argued that investment was largely funded out of savings or profits, or by borrowing from family or friends: hence financial institutions played a minor role. But this claim sits uneasily with later evidence from other countries that effective financial institutions have mattered a great deal for economic development. How can this mismatch be explained? Despite numerous technological innovations, from 1760 to 1820 industrial growth was surprisingly low. Could the underdevelopment of financial institutions have held back growth? There is relatively little data to help evaluate this hypothesis. More research is required on the historical development of institutions that enabled finance to be raised. This would include the use of property as collateral. This paper sketches the evolution of British financial institutions before 1820 and makes suggestions for further empirical research. Research in this direction should enhance our understanding of the British Industrial Revolution and of the preconditions of economic development in other countries.
This report analyzes the current supply chain for semiconductors. It particularly focuses on which portions of the supply chain are controlled by the US and its allies and which by China. Some key insights:
The US semiconductor industry is estimated to contribute 39% of the total value of the global semiconductor supply chain.
The semiconductor supply chain is incredibly complicated. The production of a single chip requires more than 1,000 steps and passes through borders more than 70 times throughout production.
AMD is currently the only company with expertise in designing both high-end GPUs and high-end CPUs.
TSMC controls 54% of the logic foundry market, with a larger share for leading edge production, e.g., state-of-the-art 5 nm node chips.
Revenue per wafer for TSMC is rapidly increasing, while other foundries are seeing declines.
The Netherlands has a monopoly on extreme ultraviolet (EUV) scanners, equipment needed to make the most advanced chips.
The Netherlands and Japan have a monopoly on argon fluoride (ArF) immersion scanners, needed to make the second most advanced chips.
The US has a monopoly on full-spectrum electronic design automation (EDA) software needed to design semiconductors.
Japan, Taiwan, Germany and South Korea manufacture the state-of-the-art 300 mm wafers used for 99.7% of the world’s chip manufacturing. This manufacturing process requires large amounts of tacit know-how.
China controls the largest share of manufacturing for most natural materials. The US and its allies have a sizable share in all materials except for low-grade gallium, tungsten and magnesium.
China controls ~2⁄3rds of the world’s silicon production, but the US and allies have reserves.
The report also analyzes US competitiveness at detailed levels of the supply chain, which I didn’t read that carefully.]
General purpose technologies (GPTs) like AI enable and require substantial complementary investments. These investments are often intangible and poorly measured in national accounts.
We develop a model that shows how this can lead to an underestimation of productivity growth in a new GPT’s early years and, later, when the benefits of the intangible investments are harvested, to an overestimation of productivity growth. We call this phenomenon the Productivity J-curve.
We apply our method to US data and find that adjusting for intangibles related to computer hardware and software yields a TFP level that is 15.9% higher than official measures by the end of 2017.
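[A toy accounting illustration of the J-curve logic, not the authors’ model: a share of labor is diverted to building intangible capital that national accounts record neither as output nor as an input, so measured TFP first dips and later overshoots. All parameters are arbitrary assumptions.]

```python
# Toy illustration (not the paper's model) of the Productivity J-curve:
# intangible investment is real output that national accounts miss, and the
# intangible capital it builds is an input they also miss, so measured TFP
# first understates and later overstates true productivity.
import numpy as np

T = 40
A, L, theta, deprec = 1.0, 1.0, 0.3, 0.05
# Share of labor diverted to building intangible capital after the "GPT"
# arrives in year 5: heavy investment for a decade, then back to zero.
s = np.zeros(T)
s[5:15] = 0.2

K = 0.0
measured_tfp = []
for t in range(T):
    final_goods = A * (1 - s[t]) * L * (1 + theta * K)   # measured output
    intangible_inv = A * s[t] * L                        # unmeasured output (capital formation)
    measured_tfp.append(final_goods / L)                 # K and its investment are both invisible
    K = (1 - deprec) * K + intangible_inv

print("measured TFP, years 0-4  :", np.round(measured_tfp[:5], 2))    # flat baseline
print("measured TFP, years 5-14 :", np.round(measured_tfp[5:15], 2))  # dips, then recovers as K builds
print("measured TFP, years 15-20:", np.round(measured_tfp[15:21], 2)) # overshoots once investment stops
```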
Natural selection has been documented in contemporary humans, but little is known about the mechanisms behind it. We test for natural selection through the association between 33 polygenic scores and fertility, across two generations, using data from UK Biobank (n = 409,629 British subjects with European ancestry).
Consistently over time, polygenic scores associated with lower (higher) earnings, education and health are selected for (against). Selection effects are concentrated among lower SES groups, younger parents, people with more lifetime sexual partners, and people not living with a partner. The direction of natural selection is reversed among older parents (22+), or after controlling for age at first live birth. These patterns are in line with economic theories of fertility, in which higher earnings may either increase or decrease fertility via income and substitution effects in the labour market.
Studying natural selection can help us understand the genetic architecture of health outcomes: we find evidence in modern day Great Britain for multiple natural selection pressures that vary between subgroups in the direction and strength of their effects, that are strongly related to the socio-economic system, and that may contribute to health inequalities across income groups.
In 3 experiments (n = 1,599), which included a pre-registered study on a nationally representative sample (Norway), we find causal evidence for racial discrimination against minority Airbnb hosts. When an identical Airbnb apartment was presented with a racial outgroup (vs. in-group) host, people reported more negative attitudes toward the apartment, lower intentions to rent it, and were 25% less likely to choose the apartment over a standard hotel room in a real choice.
The rise of peer-to-peer platforms has represented one of the major economic and societal developments observed in the last decade. We investigated whether people engage in racial discrimination in the sharing economy, and how such discrimination might be explained and mitigated.
Using a set of carefully controlled experiments (n = 1,599), including a pre-registered study on a nationally representative sample, we find causal evidence for racial discrimination. When an identical apartment is presented with a racial out-group (vs. in-group) host, people report more negative attitudes toward the apartment, lower intentions to rent it, and are 25% less likely to choose the apartment over a standard hotel room in an incentivized choice. Reduced self-congruence with apartments owned by out-group hosts mediates these effects. Left-leaning liberals rated the out-group host as more trustworthy than the in-group host in non-committing judgments and hypothetical choice, but showed the same in-group preference as right-leaning conservatives when making a real choice.
Thus, people may overstate their moral and political aspirations when doing so is cost-free. However, even in incentivized choice, racial discrimination disappeared when the apartment was presented with an explicit trust cue, such as a visible top rating by other consumers (5⁄5 stars).
[Interview] Recent research suggests that the share of US households living on less than $2/person/day is high and rising.
We reexamine such extreme poverty by linking SIPP and CPS data to administrative tax and program data.
We find that more than 90% of those reported to be in extreme poverty are not, once we include in-kind transfers, replace survey reports of earnings and transfer receipt with administrative records, and account for ownership of substantial assets. More than half of all misclassified households have incomes from the administrative data above the poverty line, and many have middle-class measures of material well-being.
Levantine ~1200–950 BCE silver hoards were subjected to chemical and isotopic analysis.
Silver was alloyed with copper, reflecting a shortage after the Bronze Age collapse.
This debasement was often concealed by adding arsenic.
A mixing model distinguishes between isotopic contributions of alloyed metals.
Results suggest that silver shortage in the Levant probably lasted until ~950 BCE.
The study of silver, which was an important means of currency in the Southern Levant during the Bronze and Iron Age periods (~1950–586 BCE), revealed an unusual phenomenon. Silver hoards from a specific, yet rather long timespan, ~1200–950 BCE, contained mostly silver alloyed with copper. This alloying phenomenon is considered here for the first time, also with respect to previous attempts to provenance the silver using lead isotopes. Eight hoards were studied, from which 86 items were subjected to chemical and isotopic analysis. This is, by far, the largest dataset of sampled silver from this timespan in the Near East. Results show the alloys, despite their silvery sheen, contained high percentages of Cu, reaching up to 80% of the alloy. The Ag-Cu alloys retained a silvery tint through two methods: either an enriched silver surface concealing a copper core, or the addition of arsenic and antimony to the alloy. For the question of provenance, we applied a mixing model which simulates the contribution of up to three end members to the isotopic composition of the studied samples. The model demonstrates that for most samples, the more likely combination is that they are alloys of silver from Aegean-Anatolian ores, Pb-poor copper, and Pb-rich copper from local copper mines in the Arabah valley (Timna and Faynan). Another, previously suggested possibility, namely that a substantial part of the silver originated from the West Mediterranean, cannot be validated analytically. Contextualizing these results, we suggest that the Bronze Age collapse around the Mediterranean led to the termination of silver supply from the Aegean to the Levant in the beginning of the 12th century BCE, causing a shortage of silver. The local administrations initiated sophisticated devaluation methods to compensate for the lack of silver—a suspected forgery. It is further suggested that following the Egyptian withdrawal from Canaan around the mid-12th century BCE, Cu-Ag alloying continued, with the use of copper from Faynan instead of Timna. The revival of long-distance silver trade is evident only in the Iron Age IIA (starting ~950 BCE), when silver was no longer alloyed with copper, and was imported from Anatolia and the West Mediterranean.
[Keywords: silver hoards, alloys, lead isotopic analysis, debasement, arsenic, Bronze age collapse, Mediterranean trade]
In this paper, we review the literature on declining business dynamism and its implications in the United States and propose a unifying theory to analyze the symptoms and the potential causes of this decline. We first highlight 10 pronounced stylized facts related to declining business dynamism documented in the literature and discuss some of the existing attempts to explain them. We then describe a theoretical framework of endogenous markups, innovation, and competition that can potentially speak to all of these facts jointly. We next explore some theoretical predictions of this framework, which are shaped by two interacting forces: a composition effect that determines the market concentration and an incentive effect that determines how firms respond to a given concentration in the economy. The results highlight that a decline in knowledge diffusion between frontier and laggard firms could be an important driver of empirical trends observed in the data. This study emphasizes the potential of growth theory for the analysis of factors behind declining business dynamism and the need for further investigation in this direction.
Provides evidence for a decline in research productivity in both China and Germany.
Using firm-level R&D panel data for public and private firms spanning three decades.
Strong decline in R&D productivity in China due to end of catch-up growth.
Conclusion: ideas are getting harder to find not only in the U.S.
In a recent paper, Bloom et al 2020 find evidence for a substantial decline in research productivity in the U.S. economy during the last 40 years. In this paper, we replicate their findings for China and Germany, using detailed firm-level data spanning three decades. Our results indicate that diminishing returns in idea production are a global phenomenon, not just confined to the U.S.
San Francisco is gentrifying rapidly as an influx of high-income newcomers drives up housing prices and displaces lower-income incumbent residents. In theory, increasing the supply of housing should mitigate increases in rents. However, new construction could also increase demand for nearby housing by improving neighborhood quality. The net impact on nearby rents depends on the relative sizes of these supply and demand effects.
This paper identifies the causal impact of new construction on nearby rents, displacement, and gentrification by exploiting random variation in the location of new construction induced by serious building fires. I combine parcel-level data on fires and new construction with an original dataset of historic Craigslist rents and panel data on individual migration histories to test the impact of proximity to new construction. I find that rents fall by 2% for parcels within 100m of new construction. Renters’ risk of being displaced to a lower-income neighborhood falls by 17%. Both effects decay linearly to zero within 1.5km. Next, I show evidence of a hyperlocal demand effect, with building renovations and business turnovers spiking and then returning to zero after 100m. Gentrification follows the pattern of this demand effect: parcels within 100m of new construction are 2.5 percentage points (29.5%) more likely to experience a net increase in richer residents.
Affordable housing and endogenously located construction do not affect displacement or gentrification. These findings suggest that increasing the supply of market rate housing has beneficial spillover effects for incumbent residents, reducing rents and displacement pressures while improving neighborhood quality.
A crew of pirates all keep their gold in one very secure chest, with labeled sections for each pirate. Unfortunately, one day a storm hits the ship, tossing everything about. After the storm clears, the gold in the chest is all mixed up. The pirates each know how much gold they had—indeed, they’re rather obsessive about it—but they don’t trust each other to give honest numbers. How can they figure out how much gold each pirate had in the chest?
Here’s the trick: the captain has each crew member write down how much gold they had, in secret. Then, the captain adds it all up. If the final amount matches the amount of gold in the chest, then we’re done. But if the final amount does not match the amount of gold in the chest, then the captain throws the whole chest overboard, and nobody gets any of the gold.
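[A minimal sketch of the captain’s mechanism in code, with invented pirate names and amounts:]

```python
# Minimal sketch of the captain's mechanism: collect secret claims, pay out only
# if they exactly account for the chest; otherwise destroy everything.
def divide_chest(chest_total: int, claims: dict[str, int]) -> dict[str, int]:
    """Return each pirate's payout, or nothing for anyone if claims don't add up."""
    if sum(claims.values()) == chest_total:
        return dict(claims)                  # claims consistent: pay as claimed
    return {pirate: 0 for pirate in claims}  # inconsistent: chest goes overboard

honest = {"Anne": 30, "Bart": 50, "Calico": 20}
print(divide_chest(100, honest))                    # everyone paid their true share
print(divide_chest(100, {**honest, "Bart": 60}))    # one inflated claim: all get 0
```

Truthful reporting is an equilibrium here: unilaterally over-claiming triggers destruction and yields nothing, while under-claiming simply forfeits gold.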
I want to emphasize two key features of this problem. First, depending on what happens, we may never know how much gold each pirate had in the chest or who lied, even in hindsight. Hindsight isn’t 20/20. Second, the solution to the problem requires outright destruction of wealth.
The point of this post is that these two features go hand-in-hand. There’s a wide range of real-life problems where we can’t tell what happened, even in hindsight; we’ll talk about three classes of examples. In these situations, it’s hard to design good incentives/mechanisms, because we don’t know where to allocate credit and blame. Outright wealth destruction provides a fairly general-purpose tool for such problems. It allows us to align incentives in otherwise-intractable problems, though often at considerable cost.
…Alice wants to sell her old car, and Bob is in the market for a decent quality used vehicle…Alternatively, we could try to align incentives without figuring out what happened in hindsight, using a trick similar to our pirate captain throwing the chest overboard. The trick is: if there’s a mechanical problem after the sale, then both Alice and Bob pay for it. I do not mean they split the bill; I mean they both pay the entire cost of the bill. One of them pays the mechanic, and the other takes the same amount of money in cash and burns it. (Or donates to a third party they don’t especially like, or …) This aligns both their incentives: Alice is no longer incentivized to hide mechanical problems when showing off the car, and Bob is no longer incentivized to ignore maintenance or frequent the racetrack.
However, this solution also illustrates the downside of the technique: it’s expensive.
[See also the exploding Nash equilibrium. This parallels Monte Carlo/evolutionary solutions to RL blackbox optimization: by setting up a large penalty for any divergence from the golden path, it creates an unbiased, but high variance estimator of credit assignment. When ‘pirates’ participate in enough rollouts with enough different assortments of pirates, they receive their approximate ‘honesty’-weighted (usefulness in causing high-value actions) return. You can try to pry open the blackbox and reduce variance by taking into account ‘pirate’ baselines etc, but at the risk of losing unbiasedness if you do it wrong.]
In this study, we empirically assess the contributions of inventors and firms for innovation using a 37-year panel of U.S. patenting activity. We estimate that inventors’ human capital is 5–10× more important than firm capabilities for explaining the variance in inventor output. We then examine matching between inventors and firms and find highly talented inventors are attracted to firms that (1) have weak firm-specific invention capabilities and (2) employ other talented inventors. A theoretical model that incorporates worker preferences for inventive output rationalizes our empirical findings of negative assortative matching between inventors and firms and positive assortative matching among inventors.
I test the assumptions of the Malthusian model at the individual, cross-sectional level for France, 1650–1820. Using husband’s occupation from the parish records of 41 French rural villages, I assign three different measures of status. There is no evidence for the existence of the positive check; infant deaths are unrelated to status. However, the preventive check operates strongly, acting through female age at first marriage. The wives of rich men are younger brides than those of poorer men. This drives a positive net-fertility gradient in living standards. However, the strength of this gradient is substantially weaker than it is in pre-industrial England.
Why do individuals become entrepreneurs? Why do some succeed?
We propose 2 theories in which information frictions play a central role in answering these questions. Empirical analysis of longitudinal samples from the United States and the United Kingdom reveals the following patterns:
entrepreneurs have higher cognitive ability than employees with comparable education,
employees have better education than equally able entrepreneurs, and
entrepreneurs’ earnings are higher and exhibit greater variance than employees with similar education.
These and other empirical tests support our asymmetric-information theory of entrepreneurship: when information frictions cause firms to undervalue workers lacking traditional credentials, workers’ quest to maximize their private returns drives the most able into successful entrepreneurship.
Managerial Summary: Steve Jobs, Bill Gates, Mark Zuckerberg, Rachael Ray, and Oprah Winfrey are all entrepreneurs whose educational qualifications belie their extraordinary success. Are they outliers, or do their examples reveal a link between education and success in entrepreneurship? We argue that employers assess potential workers based on their educational qualifications, especially early in their careers when there is little direct information on work accomplishments and productivity. This leads those who correctly believe that they are better than their résumés show to become successful entrepreneurs. Evidence from 2 nationally representative samples of workers (from the United States and the United Kingdom) supports our theory, which applies equally to the immigrant food vendor lacking a high school diploma and to the PhD founder of a science-based startup.
We exploit a volcanic “experiment” to study the costs and benefits of geographic mobility. In our experiment, a third of the houses in a town were covered by lava. People living in these houses were much more likely to move away permanently. For the dependents in a household (children), we estimate that being induced to move by the “lava shock” dramatically raised lifetime earnings and education. Yet, the benefits of moving were very unequally distributed across generations: the household heads (parents) were made slightly worse off by the shock. These results suggest large barriers to moving for the children, which imply that labor does not flow to locations where it earns the highest returns. The large gains from moving for the young are surprising in light of the fact that the town affected by our volcanic experiment was (and is) a relatively high income town. We interpret our findings as evidence of the importance of comparative advantage: the gains to moving may be very large for those badly matched to the location they happened to be born in, even if differences in average income are small.
Sustained economic reform statistically-significantly raises real GDP per capita over a 5- to 10-year horizon.
Despite the unpopularity of the Washington Consensus, its policies reliably raise average incomes.
Countries that had sustained reform were 16% richer 10 years later.
Traditional policy reforms of the type embodied in the Washington Consensus have been out of academic fashion for decades. However, we are not aware of a paper that convincingly rejects the efficacy of these reforms. In this paper, we define generalized reform as a discrete, sustained jump in an index of economic freedom, whose components map well onto the points of the old consensus. We identify 49 cases of generalized reform in our dataset that spans 141 countries from 1970 to 2015. The average treatment effect associated with these reforms is positive, sizeable, and statistically-significant over 5- and 10-year windows. The result is robust to different thresholds for defining reform and different estimation methods. We argue that the policy reform baby was prematurely thrown out with the neoliberal bathwater.
[Keywords: reform, Washington Consensus, rule of law, property rights, economic development]
Most online content publishers have moved to subscription-based business models regulated by digital paywalls. But the managerial implications of such freemium content offerings are not well understood. We, therefore, utilized microlevel user activity data from the New York Times to conduct a large-scale study of the implications of digital paywall design for publishers. Specifically, we use a quasi-experiment that varied the (1) quantity (the number of free articles) and (2) exclusivity (the number of available sections) of free content available through the paywall to investigate the effects of paywall design on content demand, subscriptions, and total revenue.
The paywall policy changes we studied suppressed total content demand by about 9.9%, reducing total advertising revenue. However, this decrease was more than offset by increased subscription revenue as the policy change led to a 31% increase in total subscriptions during our seven-month study, yielding net positive revenues of over $295,717 (inflation-adjusted from $230,000 in 2013). The results confirm an economically-significant impact of the newspaper’s paywall design on content demand, subscriptions, and net revenue. Our findings can help structure the scientific discussion about digital paywall design and help managers optimize digital paywalls to maximize readership, revenue, and profit.
What impact on local development do immigrants and their descendants have in the short and long term? The answer depends on the attributes they bring with them, what they pass on to their children, and how they interact with other groups. We develop the first measures of the country-of-ancestry composition and of GDP per worker for US counties from 1850 to 2010. We show that changes in ancestry composition are associated with changes in local economic development. We use the long panel and several instrumental variables strategies in an effort to assess different ancestry groups’ effect on county GDP per worker. Groups from countries with higher economic development, with cultural traits that favor cooperation, and with a long history of a centralized state have a greater positive impact on county GDP per worker. Ancestry diversity is positively related to county GDP per worker, while diversity in origin-country economic development or culture is negatively related.
Economics of wind and solar face 2 opposing drivers: learning and revenue decline.
Reduction in revenue from market forces may offset or even outpace learning.
Abatement cost may rise from $46 to $66 (solar) and −$7 to $53 (wind) per tonne of CO2.
Subsidy requirement to ensure profitability could increase over time.
Integration of substantial amounts of wind or solar necessitates new grid technologies.
The economics of wind and solar generation face 2 opposing drivers. Technological progress leads to lower costs, and both wind and solar have shown dramatic price reductions in recent decades. At the same time, adding wind and solar lowers market electricity prices, and thus revenue, during periods when they produce energy. In this work, we analyze these 2 opposing effects of renewable integration, learning and diminishing marginal revenue, using a model that assumes the status quo with regard to generation technology mix and demand. Our modeling results suggest that reduction in revenue from market forces may offset or even outpace technological progress. If deployed on current grids without changes to demand response, storage, or other integrating technologies, the cost of mitigating CO2 with wind will increase, and for solar it will be no cheaper in the future than it is today. This study highlights the need to deploy grid technologies such as storage and new transmission in order to integrate wind and solar in an economically sustainable manner.
[Keywords: renewable energy, energy modeling, Marginal abatement cost curve (MACC), energy subsidy]
Industry concentration has been rising in the United States since 1980. Does this signal declining competition and the need for a new antitrust policy? Or are other factors causing concentration to increase? This paper explores the role of proprietary information technology (IT), which could increase the productivity of top firms relative to others and raise their market share. Instrumental variable estimates find a strong link between proprietary IT and rising industry concentration, accounting for most of its growth. Moreover, the top four firms in each industry benefit disproportionately. Large investments in proprietary software—$250 billion per year—appear to substantially impact industry structure.
A scan of the history of gross world product (GWP) over millennia raises fundamental questions about the human past and prospect. What is the distribution of shocks ranging from recession to pandemic? Were the agricultural and industrial revolutions one-offs or did they manifest ongoing dynamics? Is growth exponential, if with occasional step changes in the rate, or is it superexponential? If the latter, how do we interpret the implication that output will become infinite in finite time? This paper introduces the first coherent statistical model of GWP history. It casts a GWP series as a sample path in a stochastic diffusion, one whose specification is novel yet rooted in neoclassical growth theory. After maximum likelihood fitting to GWP back to 10,000 BCE, most observations fall between the 40th and 60th percentiles of predicted distributions. The fit implies that GWP explosion is all but inevitable, in a median year of 2047. This projection cuts against the steadiness of growth in income per person seen in the last two centuries in countries at the economic frontier. And it essentially contradicts the laws of physics. But neither tension justifies immediate dismissal of the explosive projection. Accelerating economic growth is better explained by theory than constant growth. And if physical limits are articulated in a neoclassical-type model by endogenizing natural resources, explosion leads to implosion, formally avoiding infinities. The quality of the superexponential fit to the past suggests not so much that growth is destined to ascend as that the human system is unstable.
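[The finite-time-infinity point can be illustrated without the paper’s stochastic machinery: in the deterministic toy below, the growth rate rises with the level, and output hits a singularity at a finite, analytically known date. The constants are arbitrary assumptions.]

```python
# Deterministic illustration (not the paper's stochastic model): if the growth
# rate rises with the level, dy/dt = k * y**(1 + eps), output diverges at the
# finite time t* = y0**(-eps) / (k * eps) rather than growing forever.
y0, k, eps = 1.0, 0.02, 0.5
t_star = y0 ** (-eps) / (k * eps)   # analytic blow-up time (here: 100.0)
print(f"blow-up time: {t_star:.1f}")

# A simple Euler simulation shows the acceleration toward the singularity.
y, dt = y0, 0.1
for step in range(int(t_star / dt) - 5):
    y += dt * k * y ** (1 + eps)
print(f"just before t*: y ≈ {y:.3g}")
```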
Last week, 2K made waves by becoming the first publisher to set a $70 asking price for a big-budget game on the next generation of consoles. NBA2K21 will cost the now-standard $60 on the Xbox One and PlayStation 4, but 2K will ask $10 more for the upcoming Xbox Series X and PlayStation 5 versions of the game (a $100 “Mamba Forever Edition” gives players access to current-generation and next-generation versions in a single bundle).
It remains to be seen if other publishers will follow 2K’s lead and make $70 a new de facto standard for big-budget console game pricing. But while $70 would match the high-water mark for nominal game pricing, it wouldn’t be a historically high asking price in terms of actual value. Thanks to inflation and changes in game distribution, in fact, the current ceiling for game prices has never been lower.
…To measure how the actual asking price for console games has changed over time, we relied primarily on scanned catalogs and retail advertising fliers we found online. While this information was easier to find for some years than others, we were still able to gather data for 20 distinct years across the last four decades. We then adjusted those nominal prices to constant 2020 dollars using the Bureau of Labor Statistics’ CPI inflation calculator.
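[The conversion is just a CPI ratio. A minimal sketch; the CPI-U annual averages below are approximate values supplied for illustration, not figures taken from the article.]

```python
# Constant-2020-dollar conversion via CPI ratio. The CPI-U annual averages here
# are approximate (illustrative assumptions), not the article's exact inputs.
cpi = {1983: 99.6, 1997: 160.5, 2020: 258.8}

def to_2020_dollars(nominal: float, year: int) -> float:
    return nominal * cpi[2020] / cpi[year]

print(f"${34.99} in 1983 ≈ ${to_2020_dollars(34.99, 1983):.0f} in 2020")   # ≈ $91
print(f"${69.99} in 1997 ≈ ${to_2020_dollars(69.99, 1997):.0f} in 2020")   # ≈ $113
```

These outputs line up with the “roughly $90” and “over $110” figures quoted below.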
…While nominal cartridge game prices in the early ’80s topped out at $30 to $40, inflation makes that the equivalent of $80 to $100 per game these days. $34.99 for Centipede on the Atari 2600 might sound cheap, but that 1983 price is the equivalent of roughly $90 today…As the industry transitioned into 16-bit cartridges in the ’90s, though, nominal prices for top-end games rose quickly past $60 in nominal dollars and $110 in 2020 dollars. That’s in large part because of the expensive ROM storage and co-processors often included in games of the day. By 1997, late-era SNES and early-era N64 games were routinely selling for $69.99 at many retailers, the highest nominal prices the industry has generally seen and still the equivalent of over $110 in today’s dollars.
…Disc prices settled down to a more reasonable $49.99 soon after that, setting a functional nominal price ceiling that would remain until the mid ’00s. It wasn’t until the Xbox 360 and PlayStation 3 hit the scene that top asking prices started increasing to $59.99. And that’s the de facto ceiling that has remained in place to this day, even as digital downloads and the explosion of indie games have meant many titles now launch at well below this price.
Adjusting for inflation, we can see the actual (2020 dollar) value of top-end disc-based games plateaued right around $70 for almost a decade through the ’00s and early ’10s. Inflation has slowly eroded that value in the last decade, though, to the point where a $10 increase like the one for NBA2K21 merely gets games to the same actual price point as they enjoyed earlier in the century…a bump to $70 would not be a historically unprecedented increase in console gaming’s price ceiling. Accounting for inflation, in fact, it would merely bring those prices back in line with the recent historical average—something to keep in mind as you prepare for a new, seemingly costlier generation of console hardware.
This paper provides evidence that graduates of elite public institutions in India have an earnings advantage in the labor market even though attending these colleges has no discernible effect on academic outcomes.
Admission to the elite public colleges is based on the scores obtained in the Senior Secondary School Examinations. I exploit this feature in a regression discontinuity design.
Using administrative data on admission and college test scores and an in-depth survey, I find that the salaries of elite public college graduates are higher at the admission cutoff although the exit test scores are no different.
Little is known about whether people make good choices when facing important decisions. This article reports on a large-scale randomized field experiment in which research subjects having difficulty making a decision flipped a coin to help determine their choice. For important decisions (eg. quitting a job or ending a relationship), individuals who are told by the coin toss to make a change are more likely to make a change, more satisfied with their decisions, and happier six months later than those whose coin toss instructed maintaining the status quo. This finding suggests that people may be excessively cautious when facing life-changing choices.
We study optimal smart contract design for monitoring an exchange of an item performed offline.
There are 2 parties, a seller and a buyer. Exchange happens off-chain, but the status update takes place on-chain. The exchange can be verified but with a cost. To guarantee self-enforcement of the smart contract, both parties make a deposit, and the deposits must cover payments made in all possible final states. Both parties have an (opportunity) cost of making deposits.
We discuss 2 classes of contract: In the first, the mechanism only interacts with the seller, while in the second, the mechanism can also interact with the buyer. In both cases, we derive optimal contracts specifying optimal deposits and verification policies.
The gains from trade of the first contract are dominated by the second contract, on the whole domain of parameters. However, the first type of contract has the advantage of less communication and, therefore, more flexibility.
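[A toy state machine for the setting described above, not the paper’s optimal contract: both parties post deposits, the exchange itself happens off-chain, and costly verification is invoked only when reports conflict. All names and numbers are illustrative assumptions.]

```python
# Toy state machine for the setting described above (not the paper's optimal
# contract): both parties post deposits, exchange happens off-chain, and the
# contract pays out either on agreement or after costly verification.
from dataclasses import dataclass

@dataclass
class Exchange:
    price: float
    seller_deposit: float
    buyer_deposit: float
    verify_cost: float

    def settle(self, seller_claims_delivered: bool, buyer_agrees: bool,
               verified_delivered: bool | None = None) -> dict[str, float]:
        """Return the net amount paid out to each party from the contract."""
        if seller_claims_delivered == buyer_agrees:
            # Reports agree: pay (or refund) without invoking verification.
            paid = self.price if seller_claims_delivered else 0.0
            return {"seller": self.seller_deposit + paid,
                    "buyer": self.buyer_deposit - paid}
        # Reports conflict: verify at a cost, charged to the misreporting party.
        assert verified_delivered is not None, "dispute requires verification"
        liar = "buyer" if verified_delivered == seller_claims_delivered else "seller"
        paid = self.price if verified_delivered else 0.0
        out = {"seller": self.seller_deposit + paid,
               "buyer": self.buyer_deposit - paid}
        out[liar] -= self.verify_cost
        return out

deal = Exchange(price=10, seller_deposit=5, buyer_deposit=15, verify_cost=2)
print(deal.settle(True, True))                             # smooth sale
print(deal.settle(True, False, verified_delivered=True))   # buyer disputed falsely
```

The buyer’s deposit here covers the price plus the verification cost, echoing the paper’s requirement that deposits cover payments in all possible final states.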
[“Admirably lucid revisiting of Enron’s metamorphosis from a pipeline company into a derivatives trading-house that booked billions in paper profits before collapsing.” The Enron story displays the potentially distortionary impact of high intelligence on moral decision-making. It lends evidence to the notion that extremely intelligent people can be subtly incentivised to be (systematically) dishonest because their intelligence lowers the cost and raises the potential benefits of circumventing rules. —The Browser summary
What, in a nutshell, was the Enron fraud? Like a tech startup, Enron had a vision of creating many new markets by upfront investments; to achieve this, which was in fact often a viable business strategy and had worked before, it needed debt-financing and to look like a logistics company with stable lucrative locked-in long-term profits, though its profits increasingly actually came from volatile unreliable financial trading. From this pressure and the need to keep up appearances to avoid switching horses in mid-stream before projects could pay off, a house of cards built up, deviance was normalized, and it slowly slid into an enormous financial fraud with few people realizing until the end.]
Long-run growth in many models is the product of two terms: the effective number of researchers and their research productivity. We present evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply. A good example is Moore’s Law. The number of researchers required today to achieve the famous doubling of computer chip density is more than 18× larger than the number required in the early 1970s. More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find.
We show that genetic endowments linked to educational attainment strongly and robustly predict wealth at retirement. The estimated relationship is not fully explained by flexibly controlling for education and labor income. We therefore investigate a host of additional mechanisms that could account for the gene-wealth gradient, including inheritances, mortality, risk preferences, portfolio decisions, beliefs about the probabilities of macroeconomic events, and planning horizons. We provide evidence that genetic endowments related to human capital accumulation are associated with wealth not only through educational attainment and labor income but also through a facility with complex financial decision-making.
In 2001, Norwegian tax records became easily accessible online, allowing everyone in the country to observe the incomes of everyone else. According to the income comparisons model, this change in transparency can widen the gap in well-being between richer and poorer individuals. Using survey data from 1985–2013 and multiple identification strategies, we show that the higher transparency increased the gap in happiness between richer and poorer individuals by 29%, and it increased the life satisfaction gap by 21%. We provide back-of-the-envelope estimates of the importance of income comparisons, and discuss implications for the ongoing debate on transparency policies.
Even if designers don’t contribute improvements to a font directly, companies can benefit from making their work open source. For example, Adobe Type senior manager Dan Rhatigan says releasing its Source super-family of fonts as open source has enabled the company to test new typography technologies like “variable fonts”, which make it easy for a designer to adjust the weight of a typeface, before rolling those technologies into other products.
In other cases, open source fonts help support other aspects of a company’s business. For example, Google Fonts program manager Dave Crossland says many of the fonts Google has funded most recently are designed for under-supported languages in developing countries. These efforts buttress Google’s “Next Billion Users” initiative, which aims to bring more people in developing countries online. Better support for more languages means more users, and ultimately, more money for Google.
The incentives to create open source fonts weren’t always obvious. In early 2009, a graphic designer and programmer named Micah Rich came across a forum post by a student who was interested in knowing more about how fonts worked. The student asked whether there was a professional quality open source font that they could learn from. The replies weren’t kind. “There were like 20 pages of professional type designers saying ‘This is our livelihood, how dare you ask us to work for free?’” Rich says.
Radio remains popular, delivering an audience reach of over 90 percent, but radio ratings may overestimate real advertising exposure. Little is known about audience and media factors affecting radio-advertising avoidance. Many advertisers have believed as much as one-third of the audience switches stations during radio-advertising breaks. In the current study, the authors used Canadian portable people-meter ratings data to measure loss of audience during advertising. They discovered a new benchmark of 3% (across conditions) for mechanical (or actual physical) avoidance of radio advertising, such as switching stations or turning off the radio. This rate is about one-tenth of current estimates, but was higher for music versus talk stations, out-of-home versus in-home listening, and early versus late dayparts.
These formulas have turned an obscure idea that Galanis and his college buddies had a few years ago about making more money for second-rate celebs into a thriving two-sided marketplace that has caught the attention of VCs, Hollywood, and professional sports. In June, Cameo raised $50 million in Series B funding, led by Kleiner Perkins (which recently began funding more early-stage startups) to boost marketing, expand into international markets, and staff up to meet the growing demand. In the past 15 months, Cameo has gone from 20 to 125 employees, and moved from an 825-square-foot home base in the 1871 technology incubator into its current 6,000-square-foot digs in Chicago’s popping West Loop. Cameo customers have purchased more than 560,000 videos from some 20,000 celebs and counting, including ’80s star Steve Guttenberg and sports legend Kareem Abdul-Jabbar. And now, when the masses find themselves in quarantined isolation—looking for levity, distractions, and any semblance of the human touch—sending each other personalized videograms from the semi-famous has never seemed like a more pitch-perfect offering.
The product itself is as simple as it is improbable. For a price the celeb sets—anywhere from $5 to $2,500—famous people record video shout-outs, aka “Cameos”, that run for a couple of minutes, and then are delivered via text or email. Most Cameo videos are booked as private birthday or anniversary gifts, but a few have gone viral on social media. Even if you don’t know Cameo by name, there’s a good chance you caught Bam Margera of MTV’s Jackass delivering an “I quit” message on behalf of a disgruntled employee, or Sugar Ray’s Mark McGrath dumping some poor dude on behalf of the guy’s girlfriend. (Don’t feel too bad for the dumpee, the whole thing was a joke.)
…Back at the whiteboard, Galanis takes a marker and sketches out a graph of how fame works on his platform. “Imagine the grid represents all the celebrity talent in the world”, he says, “which by our definition, we peg at 5 million people.” The X-axis is willingness; the Y-axis is fame. “Say LeBron is at the top of the X-axis, and I’m at the bottom”, he says. On the willingness side, Galanis puts notoriously media-averse Seattle Seahawks running back Marshawn Lynch on the far left end. At the opposite end, he slots chatty celebrity blogger-turned-Cameo-workhorse Perez Hilton, of whom Galanis says, “I promise if you booked him right now, the video would be done before we leave this room.”
…“The contrarian bet we made was that it would be way better for us to have people with small, loyal followings, often unknown to the general population, but who were willing to charge $5 to $10”, Galanis says. Cameo would employ a revenue-sharing model, getting a 25% cut of each video, while the rest went to the celeb. They wanted people like Galanis’ co-founder (and former Duke classmate) Devon Townsend, who had built a small following making silly Vine videos of his travels with pal Cody Ko, a popular YouTuber. “Devon isn’t Justin Bieber, but he had 25,000 Instagram followers from his days as a goofy Vine star”, explains Galanis. “He originally charged a couple bucks, and the people who love him responded, ‘Best money I ever spent!’”
…After a customer books a Cameo, the celeb films the video via the startup’s app within four to seven days. Most videos typically come in at under a minute, though some talent indulges in extensive riffs. (Inexplicably, “plant-based activist and health coach” Courtney Anne Feldman, wife of Corey, once went on for more than 20 minutes in a video for a customer.) Cameo handles the setup, technical infrastructure, marketing, and support, with white-glove service for the biggest earners with “whatever they need”—details like help pronouncing a customer’s name or just making sure they aren’t getting burned-out doing so many video shout-outs.
…For famous people of any caliber—the washed-up, the obscure micro-celebrity, the actual rock star—becoming part of the supply side of the Cameo marketplace is as low a barrier as it gets. Set a price and go. The videos are short—Instagram comedian Evan Breen has been known to knock out more than 100 at $25 a pop in a single sitting—and they don’t typically require any special preparation. Hair, makeup, wardrobe, or even handlers aren’t necessary. In fact, part of the oddball authenticity of Cameo videos is that they have a take-me-as-I-am familiarity—filmed at breakfast tables, lying in bed, on the golf course, running errands, at a stoplight, wherever it fits into the schedule.
I’m excited to announce that GitHub has signed an agreement to acquire npm.
For the millions of developers who use the public npm registry every day, npm will always be available and always be free. Our focus after the deal closes will be to:
Invest in the registry infrastructure and platform.
Improve the core experience.
Engage with the community.
We examine the record of cross-country growth over the past fifty years and ask if developing countries have made progress on closing the income gap between their per capita incomes and those in the advanced economies. We conclude that, as a group, they have not and then survey the literature on absolute convergence with particular emphasis on that from the last decade or so. That literature supports our conclusion of a lack of progress in closing the income gap between countries. We close with a brief examination of the recent literature on cross-individual distribution of income, which finds that despite the lack of progress on cross-country convergence, global inequality has tended to fall since 2000. (JEL E01, E13, O11, O47, F41, F62)
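The workhorse test behind the absolute-convergence literature surveyed here is a simple cross-country growth regression; a sketch in conventional notation (mine, not necessarily the authors’):

$$
\frac{1}{T}\left(\ln y_{i,T} - \ln y_{i,0}\right) \;=\; \alpha + \beta \,\ln y_{i,0} + \varepsilon_i ,
$$

where $y_{i,t}$ is country $i$’s per-capita income. A statistically-significant $\beta < 0$ means initially poorer countries grow faster (catch-up); estimates of $\beta$ near zero or positive over most of the postwar sample are what “a lack of progress in closing the income gap” corresponds to in this framework.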
Life’s major purchases, such as buying a home or going to college, often involve taking on considerable debt. What are the downstream emotional consequences? Does carrying debt influence consumers’ general sense of satisfaction in life?
7 studies examine the relationship between consumers’ debt holdings and life satisfaction, showing that the effect depends on the type of debt. Though mortgages tend to comprise consumers’ largest debts, and though credit card balances tend to have the highest interest rates, we found among a diverse sample of American adults (n = 5,808) that the type of debt most strongly associated with lower levels of life satisfaction is student loans. We further found that the extent to which consumers mentally label a given debt type as “debt” drives the emotional consequences of those debt holdings, and compared to the other debt types, student loans are perceived more as “debt.”
Together the findings suggest that carrying debt can spill over to undermine people’s overall subjective well-being, especially when their debt is perceived as such.
A defining feature of modern economic growth is the systematic application of science to advance technology. However, despite sustained progress in scientific knowledge, recent productivity growth in the United States has been disappointing. We review major changes in the American innovation ecosystem over the past century. The past three decades have been marked by a growing division of labor between universities focusing on research and large corporations focusing on development. Knowledge produced by universities is not often in a form that can be readily digested and turned into new goods and services. Small firms and university technology transfer offices cannot fully substitute for corporate research, which had previously integrated multiple disciplines at the scale required to solve substantial technical problems. Therefore, whereas the division of innovative labor may have raised the volume of science by universities, it has also slowed, at least for a period of time, the transformation of that knowledge into novel products and processes.
This article explores the foundations of religious influence in politics and society. We show that an important Islamic institution fostered the entrenchment of Islamism at a critical juncture in Indonesia, the world’s largest Muslim country. In the early 1960s, rural elites transferred large amounts of land into waqf—inalienable charitable trusts in Islamic law—to avoid expropriation by the state. Regions facing a greater threat of expropriation exhibit more prevalent waqf land and Islamic institutions endowed as such, including mosques and religious schools. These endowments provided conservative forces with the capital needed to promote Islamist ideology and mobilize against the secular state. We identify lasting effects of the transfers on the size of the religious sector, electoral support for Islamist parties, and the adoption of local sharia laws. These effects are shaped by greater demand for religion in government but not by greater piety among the electorate. Waqf assets also impose costs on the local economy, particularly in agriculture, where these endowments are associated with lower productivity. Overall, our findings shed new light on the origins and consequences of Islamism.
Native advertising is a type of online advertising that matches the form and function of the platform on which it appears. In practice, the choice between display and in-feed native advertising presents brand advertisers and online news publishers with conflicting objectives. Advertisers face a trade-off between ad clicks and brand recognition, whereas publishers need to strike a balance between ad clicks and the platform’s trustworthiness. For policy makers, concerns that native advertising confuses customers prompted the U.S. Federal Trade Commission to issue guidelines for disclosing native ads. This research aims to understand how consumers respond to native ads versus display ads and to different styles of native ad disclosures, using randomized online and field experiments combining behavioral clickstream, eye movement, and survey response data. The results show that when the position of an ad on a news page is controlled for, a native ad generates a higher click-through rate because it better resembles the surrounding editorial content. However, a display ad leads to more visual attention, brand recognition, and trustworthiness for the website than a native ad.
[Keywords: native advertising, public policy, eye-tracking, field experiments, advertising disclosure]
We explore the effectiveness of experimentation as a learning mechanism through a historical exploration of the early automobile industry. We focus on a particular subset of experiments, called strategic pivots, that requires irreversible firm commitments. Our analysis suggests that strategic pivoting was associated with success. We identify lessons that could only plausibly be learned through strategic pivoting and document that those firms that were able to learn from the strategic pivots were most likely to succeed. Even though firms may use lean techniques, market solutions may only be discovered through strategic pivots whose outcomes are unknowable ex-ante. Therefore, successful strategies reflect an element of luck.
We explore the effectiveness of economic experimentation as a learning mechanism through a historical exploration of the early automobile industry. We focus on a particular subset of economic experiments, called strategic pivots, that requires irreversible firm commitments.
Our quantitative analysis suggests that strategic pivoting was associated with success. We then use historical methods to understand whether this association is reasonably interpreted as a causal link. We identify lessons that could only plausibly have been learned through strategic pivoting and document that those firms that were able to learn from the strategic pivots were most likely to succeed.
We discuss the generalizability of our findings to build the hypothesis that strategic pivots and economic experiments originate firm strategy.
…In this sense, new model introductions are best understood as Rosenbergian Economic Experiments. Rosenberg (1994, p. 88) argued that economic experiments are necessary when both the market solution and an understanding of interdependencies are difficult to deduce from “first principles”. We infer that entrepreneurs in this context indeed found it difficult to know the best way forward, because the historical record reveals that even firms that proved, ex post, to be on the right track were, ex ante, unsure that they were making the right choices. The interdependencies associated with producing and selling new models implied substantial irreversible commitments. In this sense, automobile entrepreneurs were subject to the “paradox of entrepreneurship” (Gans et al 2016). That is, the outcome of each experiment was unknowable, and the choice to conduct certain experiments foreclosed future options.
We compare the absolute and relative performance of three approaches to predicting outcomes for entrants in a business plan competition in Nigeria: Business plan scores from judges, simple ad-hoc prediction models used by researchers, and machine learning approaches. We find that (1) business plan scores from judges are uncorrelated with business survival, employment, sales, or profits three years later; (2) a few key characteristics of entrepreneurs such as gender, age, ability, and business sector do have some predictive power for future outcomes; (3) modern machine learning methods do not offer noticeable improvements; (4) the overall predictive power of all approaches is very low, highlighting the fundamental difficulty of picking competition winners.
Digital platforms, such as Facebook, Uber, and AirBnB, create value by connecting users, creators, and contractors of different types. Their rapid growth, untraditional business model, and disruptive nature present challenges for managers and asset pricers. These features also, arguably, make them natural monopolies, leading to increasing calls for special regulations and taxes. We construct and illustrate an approach for modeling digital platforms. The model allows for heterogeneity in elasticity of demand and heterogeneous network effects across different users. We parameterize our model using a survey of over 40,000 US internet users on their demand for Facebook. Facebook creates about 11.2 billion dollars in consumer surplus a month for US users age 25 or over, in line with previous estimates. We find Facebook has too low a level of advertising relative to their revenue-maximizing strategy, suggesting that they also value maintaining a large user base. We simulate six proposed government policies for digital platforms, taking Facebook’s optimal response into account. Taxes only slightly change consumer surplus. Three more radical proposals, including ‘data as labor’ and nationalization, have the potential to raise consumer surplus by up to 42%. But a botched regulation that left the US with two smaller, non-competitive social media monopolies would decrease consumer surplus by 44%.
This paper provides a large-scale, empirical evaluation of unintended effects from invoking the precautionary principle after the Fukushima Daiichi nuclear accident. After the accident, all nuclear power stations ceased operation and nuclear power was replaced by fossil fuels, causing an exogenous increase in electricity prices. This increase led to a reduction in energy consumption, which caused an increase in mortality during very cold temperatures. We estimate that the increase in mortality from higher electricity prices outnumbers the mortality from the accident itself, suggesting that the decision to cease nuclear production caused more deaths than the accident.
Smith challenged the longstanding assumption that inferior development outcomes reflected inferior groups, and that superior groups should coerce inferior groups to make development happen. Smith made clear that the positive-sum benefits of markets required respecting the right to consent of all individuals, from whatever group. These ideas led Smith to be a fierce critic of European conquest, enslavement, and colonialism of non-Europeans.
The loss of Smith’s insights led to a split in later intellectual history of pro-market and anti-colonial ideas. The importance of the right to consent is still insufficiently appreciated in economic development debates today.
Does advertising revenue increase or diminish content differentiation in media markets? This paper shows that an increase in the technically feasible number of ad breaks per video leads to an increase in content differentiation between several thousand YouTube channels. I exploit two institutional features of YouTube’s monetization policy to identify the causal effect of advertising on the YouTubers’ content choice. The analysis of around one million YouTube videos shows that advertising leads to a twenty percentage point reduction in the YouTubers’ probability of duplicating popular content, ie., content in high demand by the audience. I also provide evidence of the economic mechanism behind the result: popular content is covered by many competing YouTubers; hence, viewers who perceive advertising as a nuisance could easily switch to a competitor if a YouTuber increased her number of ad-breaks per video. This is less likely, however, when the YouTuber differentiates her content from her competitors.
[Keywords: advertising, content differentiation, economics of digitization, horizontal product differentiation, long tail, media diversity, user-generated content, YouTube]
…The analysis of around one million YouTube videos shows that an increase in the feasible number of ad breaks per video leads to a twenty percentage point reduction in the YouTubers’ probability of duplicating popular content. The effect size is considerable: it corresponds to around 40% of a standard deviation in the dependent variable and to around 50% of its baseline value.
The large sample size allows me to conduct several sub-group analyses to study effect heterogeneity. I find that the positive effect of advertising on content differentiation is driven by the YouTubers who have at least 1,000 subscribers, ie., the YouTubers whose additional ad revenue is likely to exceed the costs from adapting their videos’ content. In addition, I find heterogeneity along video categories: some categories are more flexible in terms of their typical video duration than others; hence, exploiting the ten-minute trick is easier in some than in others (eg., a music clip is typically between three and five minutes long and cannot be easily extended). A battery of robustness checks confirms these results.
…Moreover, I show that ad revenue does not necessarily improve the YouTubers’ video quality. Although the number of views goes up when a video has more ad breaks, the relative number of likes decreases…Table 5 shows the results. The size of the estimates for δ′′ (columns 1 to 3), though statistically-significant at the 1%-level, is negligible: a one second increase in video duration corresponds to a 0.0001 percentage point increase in the fraction of likes. The estimates for δ′′′ in columns 4 to 6, though, are relatively large and statistically-significant at the 1%-level, too. According to these estimates, one further second in video duration leads on average to about 1.5 percent more views. These estimates may reflect the algorithmic drift discussed in Section 9.2. YouTube wants to keep its viewers as long as possible on the platform to show as many ads as possible to them. As a result, longer videos get higher rankings and are watched more often.
As is often the case, the truth lies somewhere in between these extremes. Trump’s offer to buy Greenland is not a wild-eyed fluke. Instead, it reflects a steadily increasing American interest in Greenland that is spurred by fear of Chinese and Russian encroachments. At the same time, however, a quest to purchase Greenland is not the optimal way to achieve American security interests, as it is unlikely to succeed, and even if it did, it would be far more expensive than other, more sensible approaches. Instead, the United States should engage with Denmark and Greenland to find common ground on shared concerns…Instead of offering to buy Greenland, the United States should pursue an engagement strategy that combines targeted concessions with clever diplomacy to get the Danes and Greenlanders to cooperate. Luckily, if approached correctly, both nations are very interested in supporting U.S. security interests, as they are broadly shared—especially in Copenhagen. The key will be to see this not as a zero-sum game, but as a win-win-win situation.
Our task is simple: we will consider whether the rate of scientific progress has slowed down, and more generally what we know about the rate of scientific progress, based on these literatures and other metrics we have been investigating. This investigation will take the form of a conceptual survey of the available data. We will consider which measures are out there, what they show, and how we should best interpret them, to attempt to create the most comprehensive and wide-ranging survey of metrics for the progress of science. In particular, we integrate a number of strands in the productivity growth literature, the “science of science” literature, and various historical literatures on the nature of human progress.
…To sum up the basic conclusions of this paper, there is good and also wide-ranging evidence that the rate of scientific progress has indeed slowed down. In the disparate and partially independent areas of productivity growth, total factor productivity, GDP growth, patent measures, researcher productivity, crop yields, life expectancy, and Moore’s Law we have found support for this claim.
One implication here is that we should not expect the productivity slowdown, as that notion is commonly understood, to end any time soon. There is some lag between scientific progress and practical outputs, and with science at less than its maximum dynamic state, one might not expect future productivity to fare so well either. Under one more specific interpretation of the data, a new General Purpose Technology might be required to kickstart economic growth once again.
Dark Net Markets (DNMs) are websites found on the Dark Net that facilitate the anonymous trade of illegal items such as drugs and weapons. Despite repeated law enforcement interventions on DNMs, the ecosystem has continued to grow since the first DNM, Silk Road, in 2011. This research project investigates the resilience of the ecosystem and tries to understand which characteristics allow it to evade law enforcement.
This thesis comprises three studies. The first uses a dataset containing publicly available, scraped data from 34 DNMs to quantitatively measure the impact of a large-scale law enforcement operation, Operation Onymous, on the vendor population. This impact is compared to the impact of the closure of the DNM Evolution in an exit scam. For both events, the impact on different vendor populations (for example those who are directly affected and those who aren’t) is compared and the characteristics that make vendors resilient to each event are investigated.
In the second study, a dataset acquired from the server of the DNM Silk Road 2.0 [by UK LEA] is used to better understand the relationships between buyers and vendors. Network analysis and statistical techniques are used to explore when buyers trade and with whom. This dataset is also used to measure the impact of a hack on Silk Road 2.0 on its population.
In the final study, discussions from the forum site Reddit were used to qualitatively assess user perceptions of two law enforcement interventions. These interventions were distinct in nature—one, Operation Hyperion, involved warning users and arresting individuals and the second, Operation Bayonet, actively closed a DNM. Grounded Theory was used to identify topics of conversation and directly compare the opinions held by users on each intervention.
These studies were used to evaluate hypotheses incorporated into two models of resilience. One model focuses on individual users and one on the ecosystem as a whole. The models were then used to discuss current law enforcement approaches on combating DNMs and how they might be improved.
In the first study of this thesis, several methodologies for data preparation and validation within the study of DNMs were developed. In particular, this work presents a new technique for validating a publicly available dataset that has been used in multiple studies in this field. This is the first attempt to formally validate the dataset and determine what can reasonably be used for research. The discussion of the dataset has implications for research already using the dataset and future research on datasets collected using the same methodology.
In order to conduct the second study in this thesis, a dataset was acquired from a law enforcement agency. This dataset gives new insight into how buyers behave on DNMs. Buyers are a largely unstudied group because their activities are often hidden, so analysis of this dataset reveals new insights into the behaviour of these users. The results of this study have been used to comment on existing work using less complete datasets and contribute new findings.
The third study in this thesis presents a qualitative analysis of two law enforcement interventions. This is the first work to assess the impact of either intervention and so provides new insights into how they were received by the DNM ecosystem. It uses qualitative techniques which are rare within this discipline and so provides a different perspective, for example by revealing how individuals perceive the harms of law enforcement interventions on DNMs. The value of this work has been recognised through its acceptance at a workshop at the IEEE European Symposium on Security and Privacy, 2019.
Part of this research has been conducted in consultation with a [UK] law enforcement agency which provided data for this research. The results of this research are framed specifically for this agency and other law enforcement groups currently investigating DNMs. Several suggestions are made on how to improve the efficacy of law enforcement interventions on DNMs.
…A response to the criticisms of Dolliver (2015a) has been presented in Dolliver (2015b). Here, Dolliver (2015b) attempts to provide further evidence that Silk Road 2.0 overestimated the number of listings advertised by including the results of a manual inspection of the site (Dolliver 2015b). The response also calls into question the use of the Branwen dataset, which was collected by an independent researcher and has not been peer-reviewed. Dolliver (2015b) claims that the “manually crawling approach” adopted by Van Buskirk et al. (2015) is also problematic, as it will miss listings that are uploaded and removed during the time it takes to crawl the site. Finally, other, unpublished datasets cited in Dolliver (2015b) also point to Silk Road 2.0 being especially volatile in nature before it was closed down and show that the number of listings varied by thousands from week to week. This volatility could potentially explain the contradicting depictions of Silk Road 2.0 given by Dolliver (2015a) and Munksgaard et al. (2016) and allow for both studies to have accurately described the site. However, empirical evidence in the form of police reports that describe the size of Silk Road 2.0 after its closure shows that the data collected by Dolliver (2015a) is an underestimate. Indeed, new data presented in this body of work also demonstrates that Silk Road 2.0 was bigger than Dolliver (2015a) claims, even at the beginning of its lifetime.
We discuss the potential role of universal basic incomes (UBIs) in advanced countries. A feature of advanced economies that distinguishes them from developing countries is the existence of well-developed, if often incomplete, safety nets. We develop a framework for describing transfer programs that is flexible enough to encompass most existing programs as well as UBIs, and we use this framework to compare various UBIs to the existing constellation of programs in the United States. A UBI would direct much larger shares of transfers to childless, nonelderly, nondisabled households than existing programs, and much more to middle-income rather than poor households. A UBI large enough to increase transfers to low-income families would be enormously expensive. We review the labor supply literature for evidence on the likely impacts of a UBI. We argue that the ongoing UBI pilot studies will do little to resolve the major outstanding questions.
In those early days, the company, just like almost everybody else in Washington, primarily produced Red Delicious apples, plus a few Goldens and Grannies—familiar workhorse varieties that anybody was allowed to grow. Back then, the state apple commission advertised its wares with a poster of a stoplight: one apple each in red, green, and yellow. Today, across more than 4,000 acres of McDougall apple trees, you won’t find a single Red; every year, you’ll also find fewer acres of the apples that McDougall calls “core varieties”, the more modern open-access standards such as Gala and Fuji. Instead, McDougall is betting on what he calls “value-added apples”: Ambrosias, whose rights he licensed from a Canadian company; Envy, Jazz, and Pacific Rose, whose intellectual properties are owned by the New Zealand giant Enzafruit; and a brand-new variety, commercially available for the first time this year and available only to Washington-state growers: the Cosmic Crisp.
…The Cosmic Crisp is debuting in grocery stores after this fall’s harvest, and in the nervous lead-up to the launch, everyone from nursery operators to marketers wanted me to understand the crazy scope of the thing: the scale of the plantings, the speed with which mountains of commercially untested fruit would be arriving on the market, the size of the capital risk. People kept saying things like “unprecedented”, “on steroids”, “off the friggin’ charts”, and “the largest launch of a single produce item in American history.”
McDougall took me to the highest part of his orchard, where we could look down at all its hundreds of very expensively trellised and irrigated acres (he estimated the costs to plant each individual acre at $60,000 to $65,000, plus another $12,000 in operating costs each year), their neat, thin lines of trees like the stitching over so many quilt squares. “If you’re a farmer, you’re a riverboat gambler anyway”, McDougall said. “But Cosmic Crisp—woo!” I thought of the warning of one former fruit-industry journalist that, with so much on the line, the enormous launch would have to go flawlessly: “It’s gotta be like the new iPhone.”
…Though Washington State University owns the WA 38 patent, the breeding program has received funding from the apple industry, so it was agreed, over some objections by people who worried that quality would be diluted, that the variety should be universally and exclusively available to Washington growers. (Growers of Cosmic Crisp pay royalties both on every tree they buy and on every box they sell, money that will fund future breeding projects as well as the shared marketing campaign.) The apple tested so well that WSU, in collaboration with commercial nurseries, began producing apple saplings as fast as possible; the plan was to start with 300,000 trees, but growers requested 4 million, leading to a lottery for divvying up the first available trees. Within three years, the industry had sunk 13 million of them, plus more than half a billion dollars, into the ground. Proprietary Variety Management expects that the number of Cosmic Crisp apples on the market will grow by millions of boxes every year, outpacing Pink Lady and Honeycrisp within about five years of its launch.
We document a causal impact of online user-generated information on real-world economic outcomes. In particular, we conduct a randomized field experiment to test whether additional content on Wikipedia pages about cities affects tourists’ choices of overnight visits. Our treatment of adding information to Wikipedia increases overnight stays in treated cities compared to non-treated cities. The impact is largely driven by improvements to shorter and relatively incomplete pages on Wikipedia. Our findings highlight the value of content in digital public goods for informing individual choices.
[Keywords: field experiment, user-generated content, Wikipedia, tourism industry]
This paper examines how employees become simultaneously empowered and alienated by detailed, holistic knowledge of the actual operations of their organization, drawing on an inductive analysis of the experiences of employees working on organizational change teams. As employees build and scrutinize process maps of their organization, they develop a new comprehension of the structure and operation of their organization. What they had perceived as purposively designed, relatively stable, and largely external is revealed to be continuously produced through social interaction. I trace how this altered comprehension of the organization’s functioning and logic changes employees’ orientation to and place within the organization. Their central roles are revealed as less efficacious than imagined and, in fact, as reproducing the organization’s inefficiencies. Alienated from their central operational roles, they voluntarily move to peripheral change roles from which they feel empowered to pursue organization-wide change. The paper offers two contributions. First, it identifies a new means through which central actors may become disembedded, that is, detailed comprehensive knowledge of the logic and operations of the surrounding social system. Second, the paper problematizes established insights about the relationship between social position and challenges to the status quo. Rather than a peripheral social location creating a desire to challenge the status quo, a desire to challenge the status quo may encourage central actors to choose a peripheral social location.
…Some held out hope that one or two people at the top knew of these design and operation issues; however, they were often disabused of this optimism. For example, a manager walked the CEO through the map, presenting him with a view he had never seen before and illustrating for him the lack of design and the disconnect between strategy and operations. The CEO, after being walked through the map, sat down, put his head on the table, and said, “This is even more fucked up than I imagined.” The CEO revealed that not only was the operation of his organization out of his control but that his grasp on it was imaginary.
But as the projects ended and the teams disbanded, a puzzle emerged. Some team members returned, as intended by senior management, to their prior roles and careers in the organization. Some, however, chose to leave these careers entirely, abandoning what had been to that point successful and satisfying work to take on organizational change roles elsewhere. Many took new jobs with responsibility for organizational development, Six Sigma, total quality management (TQM), business process re-engineering (BPR), or lean projects. Others assumed temporary contract roles to manage BPR project teams within their own or other organizations.
…Despite being experienced managers, what they learned was eye-opening. One explained that “it was like the sun rose for the first time….I saw the bigger picture.” They had never seen the pieces—the jobs, technologies, tools, and routines—connected in one place, and they realized that their prior view was narrow and fractured. A team member acknowledged, “I only thought of things in the context of my span of control.”…The maps of the organization generated by the project teams also showed that their organizations often lacked a purposeful, integrated design that was centrally monitored and managed. There may originally have been such a design, but as the organization grew, adapted to changing markets, brought on new leadership, added or subtracted divisions, and so on, this animating vision was lost. The original design had been eroded, patched, and overgrown with alternative plans. A manager explained, “Everything I see around here was developed because of specific issues that popped up, and it was all done ad hoc and added onto each other. It certainly wasn’t engineered.” Another manager described how local, off-the-cuff action had contributed to the problems observed at the organizational level:
“They see problems, and the general approach, the human approach, is to try and fix them….Functions have tried to put band-aids on every issue that comes up. It sounds good, but when they are layered one on top of the other they start to choke the organization. But they don’t see that because they are only seeing their own thing.”
Finally, analyzing a particular work process, another manager explained that she had been “assuming that somebody did this [the process] on purpose. And it wasn’t done on purpose. It was just a series of random events that somehow came together.”
We provide generalizable and robust results on the causal sales effect of TV advertising based on the distribution of advertising elasticities for a large number of products (brands) in many categories. Such generalizable results provide a prior distribution that can improve the advertising decisions made by firms and the analysis and recommendations of anti-trust and public policy makers. A single case study cannot provide generalizable results, and hence the marketing literature provides several meta-analyses based on published case studies of advertising effects. However, publication bias results if the research or review process systematically rejects estimates of small, statistically insignificant, or “unexpected” advertising elasticities. Consequently, if there is publication bias, the results of a meta-analysis will not reflect the true population distribution of advertising effects.
To provide generalizable results, we base our analysis on a large number of products and clearly lay out the research protocol used to select the products. We characterize the distribution of all estimates, irrespective of sign, size, or statistical-significance. To ensure generalizability we document the robustness of the estimates. First, we examine the sensitivity of the results to the approach and assumptions made when constructing the data used in estimation from the raw sources. Second, as we aim to provide causal estimates, we document if the estimated effects are sensitive to the identification strategies that we use to claim causality based on observational data. Our results reveal substantially smaller effects of own-advertising compared to the results documented in the extant literature, as well as a sizable percentage of statistically insignificant or negative estimates. If we only select products with statistically-significant and positive estimates, the mean or median of the advertising effect distribution increases by a factor of about five.
The results are robust to various identifying assumptions, and are consistent with both publication bias and bias due to non-robust identification strategies to obtain causal estimates in the literature.
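The selection mechanism the authors describe is easy to see in a toy simulation: if only positive, statistically-significant elasticities survive the research or review process, the published mean can be several times the true mean. A minimal sketch with arbitrary illustrative numbers (nothing below comes from the paper’s data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" population of advertising elasticities:
# mostly small and centered near zero (illustrative values only).
true_elasticities = rng.normal(loc=0.01, scale=0.03, size=100_000)
standard_errors = np.full_like(true_elasticities, 0.02)

# Each study reports a noisy estimate of its product's elasticity.
estimates = true_elasticities + rng.normal(0.0, standard_errors)

# Selection rule mimicking publication bias:
# keep only positive estimates that are statistically significant.
t_stats = estimates / standard_errors
published = estimates[(estimates > 0) & (t_stats > 1.96)]

print(f"mean of all estimates:      {estimates.mean():.4f}")
print(f"mean of 'published' subset: {published.mean():.4f}")
print(f"inflation factor:           {published.mean() / estimates.mean():.1f}x")
```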
OpenStreetMap (OSM), the largest Volunteered Geographic Information project in the world, is characterized both by its map as well as the active community of the millions of mappers who produce it. The discourse about participation in the OSM community largely focuses on the motivations for why members contribute map data and the resulting data quality. Recently, large corporations including Apple, Microsoft, and Facebook have been hiring editors to contribute to the OSM database.
In this article, we explore the influence these corporate editors are having on the map by first considering the history of corporate involvement in the community and then analyzing historical quarterly-snapshot OSM-QA-Tiles to show where and what these corporate editors are mapping. Cumulatively, millions of corporate edits have a global footprint, but corporations vary in geographic reach, edit types, and quantity. While corporations currently have a major impact on road networks, non-corporate mappers edit more buildings and points-of-interest: representing the majority of all edits, on average.
Since corporate editing represents the latest stage in the evolution of corporate involvement, we raise questions about how the OSM community—and researchers—might proceed as corporate editing grows and evolves as a mechanism for expanding the map for multiple uses.
[Keywords: OpenStreetMap; corporations; geospatial data; open data; Volunteered Geographic Information]
We assess evidence from randomized controlled trials (RCTs) on long-run economic productivity and living standards in poor countries. We first document that several studies estimate large positive long-run impacts, but that relatively few existing RCTs have been evaluated over the long run. We next present evidence from a systematic survey of existing RCTs, with a focus on cash transfer and child health programs, and show that a meaningful subset can realistically be evaluated for long-run effects. We discuss ways to bridge the gap between the burgeoning number of development RCTs and the limited number that have been followed up to date, including through new panel (longitudinal) data; improved participant tracking methods; alternative research designs; and access to administrative, remote sensing, and cell phone data. We conclude that the rise of development economics RCTs since roughly 2000 provides a novel opportunity to generate high-quality evidence on the long-run drivers of living standards.
We live in an age of paradox. Systems using artificial intelligence match or surpass human-level performance in more and more domains, leveraging rapid advances in other technologies and driving soaring stock prices. Yet measured productivity growth has declined by half over the past decade, and real income has stagnated since the late 1990s for a majority of Americans. We describe four potential explanations for this clash of expectations and statistics: false hopes, mismeasurement, redistribution and implementation lags. While a case can be made for each explanation, we argue that lags have likely been the biggest contributor to the paradox. The most impressive capabilities of AI, particularly those based on machine learning, have not yet diffused widely. More importantly, like other general purpose technologies, their full effects won’t be realized until waves of complementary innovations are developed and implemented. The adjustment costs, organizational changes, and new skills needed for successful AI can be modeled as a kind of intangible capital. A portion of the value of this intangible capital is already reflected in the market value of firms. However, going forward, national statistics could fail to measure the full benefits of the new technologies and some may even have the wrong sign.
…The discussion around the recent patterns in aggregate productivity growth highlights a seeming contradiction. On the one hand, there are astonishing examples of potentially transformative new technologies that could greatly increase productivity and economic welfare (see Brynjolfsson and McAfee 2014 [Race Against The Machine]). There are some early concrete signs of these technologies’ promise, recent leaps in artificial intelligence (AI) performance being the most prominent example. However, at the same time, measured productivity growth over the past decade has slowed markedly. This deceleration is large, cutting productivity growth by half or more relative to the decade preceding the slowdown. It is also widespread, having occurred throughout the Organisation for Economic Cooperation and Development (OECD) and, more recently, among many large emerging economies as well (Syverson 2017).
We thus appear to be facing a redux of the Solow (1987) paradox: we see transformative new technologies everywhere but in the productivity statistics.
In this chapter, we review the evidence and explanations for the modern productivity paradox and propose a resolution. Namely, there is no inherent inconsistency between forward-looking technological optimism and backward-looking disappointment. Both can simultaneously exist. Indeed, there are good conceptual reasons to expect them to simultaneously exist when the economy undergoes the kind of restructuring associated with transformative technologies. In essence, the forecasters of future company wealth and the measurers of historical economic performance show the greatest disagreement during times of technological change. In this chapter, we argue and present some evidence that the economy is in such a period now.
Information about a person’s income can be useful in several business-related contexts, such as personalized advertising or salary negotiations. However, many people consider this information private and are reluctant to share it. In this paper, we show that income is predictable from the digital footprints people leave on Facebook. Applying an established machine learning method to an income-representative sample of 2,623 U.S. Americans, we found that (1) Facebook Likes and Status Updates alone predicted a person’s income with an accuracy of up to r = 0.43, and (2) Facebook Likes and Status Updates added incremental predictive power above and beyond a range of socio-demographic variables (ΔR2 = 6–16%, with a correlation of up to r = 0.49). Our findings highlight both opportunities for businesses and legitimate privacy concerns that such prediction models pose to individuals and society when applied without individual consent.
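As a rough illustration of the kind of pipeline such a prediction involves (regularized regression on a sparse user × Like matrix, evaluated out-of-sample with Pearson’s r), here is a self-contained sketch on synthetic stand-in data; the variable names, model choice, and numbers are assumptions for illustration, not the study’s actual code or features:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_users, n_likes = 2_623, 5_000

# Synthetic stand-in for a sparse user x Like indicator matrix.
X = sparse_random(n_users, n_likes, density=0.01, format="csr", random_state=0)

# Synthetic (log-)income with a weak dependence on a small subset of Likes.
informative = rng.random(n_likes) < 0.05
weights = rng.normal(size=n_likes) * informative
y = X @ weights + rng.normal(scale=1.0, size=n_users)

# Regularized linear model, evaluated with out-of-sample predictions.
model = Ridge(alpha=10.0)
y_hat = cross_val_predict(model, X, y, cv=10)
r, _ = pearsonr(y, y_hat)          # accuracy metric: Pearson correlation
print(f"cross-validated prediction accuracy r = {r:.2f}")
```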
This study examines the use of “algorithms in everyday labor” to explore the labor conditions of three Chinese food delivery platforms: Baidu Deliveries, Eleme, and Meituan. In particular, it examines how delivery workers make sense of these algorithms through the parameters of temporality, affect, and gamification. The study also demonstrates that in working for food delivery platforms, couriers are not simply passive entities that are subjected to a digital “panopticon.” Instead, they create their own “organic algorithms” to manage and, in some cases, even subvert the system. The results of the approach used in this study demonstrate that digital labor has become both more accessible and more precarious in contemporary China. Based on these results, the notion of “algorithmic making and remaking” is suggested as a topic in future research on technology and digital labor.
Cities are epicenters for invention. Scaling analyses have verified the productivity of cities and demonstrate a superlinear relationship between cities’ population size and invention performance. However, little is known about what kinds of inventions correlate with city size. Is the productivity of cities only limited to invention quantity?
I shift the focus on the quality of idea creation by investigating how cities influence the art of knowledge combinations. Atypical combinations introduce novel and unexpected linkages between knowledge domains. They express creativity in inventions and are particularly important for technological breakthroughs. My study of 174 years of invention history in metropolitan areas in the US reveals a superlinear scaling of atypical combinations with population size. The observed scaling grows over time indicating a geographic shift toward cities since the early twentieth century.
The productivity of large cities is thus not only restricted to quantity but also includes quality in invention processes.
…I attribute the growing importance to the opportunities given in large cities. In particular, knowledge diversity in large cities provides opportunities for knowledge combinations not found in smaller and less diverse towns. Beyond diversity, larger cities also concentrate the skills to exploit the given diversity. Inventors in large cities realize a disproportionate number of distinct knowledge combinations, which also affects the exploration of new combinations. Given the cumulative nature of knowledge, wealth, innovation, and human skill, my results suggest a self-reinforcing process that favors metropolitan centers for knowledge creation. Thus, knowledge creation plays a major role for creating and maintaining spatial inequalities.
Increasing spatial inequalities have profound implications for regional development and policy making. Inequalities unfold in the form of invention activities, as one crucial economic activity that transforms our economy and society. The benefits of knowledge creation in large cities are not shared by all regions, reinforcing a widening divergence between large cities—as centers of knowledge exploration—and smaller towns. Given the importance of geography for knowledge generation, it is unlikely that spatial concentration of invention activities will stop. Earlier research, moreover, observes a decreasing productivity of R&D and highlights that more resources and capabilities are necessary to yield useful R&D outcomes (Lanjouw and Schankerman 2004; Wuchty, Jones, and Uzzi 2007; Jones, Wuchty, and Uzzi 2008). Large cities provide the required resources and capabilities in close geographic proximity. Smaller towns lack the requirements to compete, get disconnected, and fall behind. It should be, furthermore, in the interest of policy makers that all places benefit from urban externalities. That is, policy has to consider how to distribute the novelty created in the centers down the urban hierarchy to smaller towns and lagging regions.
However, much research remains to be done. Why did it take longer for atypical combinations to scale that strongly with city size? Has this process stopped, or will it continue? Moreover, atypical knowledge combinations do not automatically imply a high technological impact or economic value. Thus, it remains unclear precisely how (a)typical combinations relate to the economic performance of cities and how they explain local stories of success and failure.
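The “superlinear scaling” claim is the usual urban-scaling power law $Y = a N^{\beta}$ with $\beta > 1$, typically estimated as a log-log regression of the invention measure on population. A minimal sketch on made-up city data (the exponent 1.2 below is illustrative, not the paper’s estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up metropolitan areas: population sizes and counts of atypical
# knowledge combinations generated with a "true" exponent of 1.2.
population = rng.lognormal(mean=12.0, sigma=1.0, size=300)
atypical = 1e-4 * population ** 1.2 * rng.lognormal(0.0, 0.3, size=300)

# Standard urban-scaling estimate: OLS fit of log(Y) on log(N).
beta, log_a = np.polyfit(np.log(population), np.log(atypical), deg=1)
print(f"estimated scaling exponent beta = {beta:.2f} (beta > 1 => superlinear)")
```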
To show how fast Internet affects employment in Africa, we exploit the gradual arrival of submarine Internet cables on the coast and maps of the terrestrial cable network. Robust difference-in-differences estimates from 3 datasets, covering 12 countries, show large positive effects on employment rates—also for less educated worker groups—with little or no job displacement across space. The sample-wide impact is driven by increased employment in higher-skill occupations; less-educated workers’ employment also gains, but by less. Firm-level data available for some countries indicate that increased firm entry, productivity, and exporting contribute to higher net job creation. Average incomes rise.
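A minimal sketch of the canonical two-way fixed-effects specification behind such difference-in-differences estimates (notation mine, not necessarily the paper’s):

$$
\text{Employed}_{ilt} \;=\; \beta\,\bigl(\text{Connected}_{l} \times \text{Post}_{t}\bigr) + \alpha_{l} + \gamma_{t} + \varepsilon_{ilt},
$$

where $\text{Connected}_{l}$ flags locations near the terrestrial backbone, $\text{Post}_{t}$ flags periods after a submarine cable lands, $\alpha_{l}$ and $\gamma_{t}$ are location and time fixed effects, and $\beta$ is the estimated employment effect of fast-Internet availability.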
This article examines the extent to which Victorian investors were short-sale constrained. While previous research suggests that there were relatively few limits on arbitrage, this article argues that short-sales of stocks outside the Official List were indirectly constrained by the risk of being cornered. Evidence for this hypothesis comes from three corners in cycle company shares [during the 1890s bicycle mania] which occurred in 1896–1897, two of which resulted in substantial losses for short-sellers. Legal efforts to retrieve funds lost in a corner were unsuccessful, and the court proceedings reveal a widespread contempt for short-sellers, or ‘bears’, among the general public. Consistent with the hypothesis that these episodes affected the market, this study’s findings show that cycle companies for which cornering risk was greater experienced disproportionately lower returns during a subsequent crash in the market for cycle shares. This evidence suggests that, under certain circumstances, short-selling shares in Britain prior to 1900 could have been much riskier than previously thought.
…Cycle share prices are found to have risen by over 200% in the early months of 1896, and remained at a relatively high level until March 1897. This boom was accompanied by the promotion of many new cycle firms, with 363 established in 1896 and another 238 during the first half of 1897. This was followed by a crash, with cycle shares losing 76% of their peak value by the end of 1898. The financial press appears to have been aware that a crash was imminent, repeatedly advising investors to sell cycle shares during the first half of 1897. Interestingly, however, these articles never explicitly recommended short-selling cycle shares…Between 1890 and 1896, a succession of major technological innovations substantially increased the demand for British bicycles. Bicycle production increased in response, with the number of British cycle companies in existence quadrupling between 1889 and 1897. Cycle firms, most of which were based in and around Birmingham, took advantage of the boom of 1896 by going public, resulting in the successful promotion of £17.3 million worth of cycle firms in 1896 and a further £7.4 million in 1897. By 1897 there was an oversupply problem in the trade, which was worsened by an exponential increase in the number of bicycles imported from the US. The bicycle industry entered recession, and the number of Birmingham-based cycle firms fell by 54% between 1896 and 1900.
…The total paid for the 200 shares [by the short-trader Hamlyn] was £2,550, to be delivered at a price of £231.25, for a loss of £2,318.75. To put this loss in context, Hamlyn’s barrister noted that, had he succeeded in obtaining the shares at allotment, the profit would have been only £26.
We analysed a large health insurance dataset to assess the genetic and environmental contributions of 560 disease-related phenotypes in 56,396 twin pairs and 724,513 sibling pairs out of 44,859,462 individuals living in the United States. We estimated the contribution of environmental risk factors (socioeconomic status (SES), air pollution and climate) in each phenotype. Mean heritability (h2 = 0.311) and shared environmental variance (c2 = 0.088) were higher than variance attributed to specific environmental factors such as zip-code-level SES (varSES = 0.002), daily air quality (varAQI = 0.0004), and average temperature (vartemp = 0.001) overall, as well as for individual phenotypes. We found statistically-significant heritability and shared environment for a number of comorbidities (h2 = 0.433, c2 = 0.241) and average monthly cost (h2 = 0.290, c2 = 0.302). All results are available using our Claims Analysis of Twin Correlation and Heritability (CaTCH) web application.
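The h2 and c2 figures come from the classical twin/sibling design; the textbook version of that decomposition (Falconer’s formulas, assuming an additive ACE model) can be sketched in a few lines. This is a simplification of whatever estimator CaTCH actually uses, and the example correlations are made up:

```python
# Classical twin-design decomposition (Falconer's formulas), assuming an
# additive ACE model: identical twins share all, siblings/DZ twins half,
# of additive genetic variance.
def ace_estimates(r_mz: float, r_dz: float) -> dict:
    h2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = r_mz - h2           # shared-environment variance
    e2 = 1 - r_mz            # unique environment + measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Made-up example correlations (not the paper's data):
print(ace_estimates(r_mz=0.40, r_dz=0.25))   # h2 ~= 0.30, c2 ~= 0.10, e2 = 0.60
```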
Digital platform-based marketplaces often have a wide variety of amateurs working alongside professional enterprises and entrepreneurs. Can a platform owner alter the number and mix of market participants?
I develop a theoretical framework to show that amateurs emerge as a distinct type of market participant, subject to different market selection conditions, and differing from professionals in quality, willingness to persist on the platform, and in mix of motivations. I clarify how targeted combinations of tweaks to platform design can lead the “bottom to fall out” of a market to large numbers of amateurs.
In data on mobile app developers, I find that shifts in minimum development costs and non-pecuniary motivations are associated with discontinuous changes in numbers and types of developers, precisely as predicted by theory. The resulting flood of low-quality amateurs is, in this context, associated with equally substantial increases in numbers of high-quality products.
[Keywords: amateurs, industrial organization, labor, digitization, long-tail, platforms and marketplaces, complementors, entry and exit, selection and retention, entrepreneurship, minimum viable products, non-pecuniary motivations]
We propose a design for philanthropic or publicly-funded seeding to allow (near) optimal provision of a decentralized, self-organizing ecosystem of public goods. The concept extends ideas from Quadratic Voting to a funding mechanism for endogenous community formation. Citizens make public goods contributions to projects of value to them. The amount received by the project is (proportional to) the square of the sum of the square roots of contributions received. Under the “standard model” this yields first best public goods provision. Variations can limit the cost, help protect against collusion and aid coordination. We discuss applications to campaign finance, open source software ecosystems, news media finance and urban public projects. More broadly, we relate our mechanism to political theory, discussing how this solution to the public goods problem may furnish neutral and non-authoritarian rules for society that nonetheless support collective organization.
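As a concrete illustration of the funding rule (a minimal sketch with made-up contribution amounts): each project receives the square of the sum of the square roots of its contributions, with the philanthropic matching pool covering the gap above the raw contributions, so broad support attracts a far larger match than a single large donation of the same size.

```python
import math

def quadratic_funding(contributions: list[float]) -> tuple[float, float]:
    """Return (total_received, subsidy) under the quadratic funding rule:
    the project receives the square of the sum of square roots of its
    contributions; the sponsor tops up the difference over the raw sum."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    subsidy = total - sum(contributions)
    return total, subsidy

# A project with many small contributors attracts a larger match than one
# backed by a single large donor giving the same raw amount.
many_small = [1.0] * 100          # 100 citizens give 1 each
one_large  = [100.0]              # 1 citizen gives 100
print(quadratic_funding(many_small))  # (10000.0, 9900.0)
print(quadratic_funding(one_large))   # (100.0, 0.0)
```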
Making good decisions requires people to appropriately explore their available options and generalize what they have learned. While computational models have successfully explained exploratory behavior in constrained laboratory tasks, it is unclear to what extent these models generalize to complex real world choice problems. We investigate the factors guiding exploratory behavior in a data set consisting of 195,333 customers placing 1,613,967 orders from a large online food delivery service. We find important hallmarks of adaptive exploration and generalization, which we analyze using computational models. We find evidence for several theoretical predictions: (1) customers engage in uncertainty-directed exploration, (2) they adjust their level of exploration to the average restaurant quality in a city, and (3) they use feature-based generalization to guide exploration towards promising restaurants. Our results provide new evidence that people use sophisticated strategies to explore complex, real-world environments.
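“Uncertainty-directed exploration” here means choice rules that add an uncertainty bonus to each option’s estimated value, so rarely tried options are sampled more often. A minimal upper-confidence-bound sketch of this idea (our toy illustration, not the authors’ fitted model):

```python
import math
import random

def ucb_choice(counts: list[int], mean_rewards: list[float], beta: float = 1.0) -> int:
    """Pick the option maximizing estimated value plus an uncertainty bonus.
    Options never tried get an infinite bonus, i.e. are explored first."""
    total = sum(counts)
    scores = []
    for n, m in zip(counts, mean_rewards):
        bonus = float("inf") if n == 0 else beta * math.sqrt(math.log(total) / n)
        scores.append(m + bonus)
    return scores.index(max(scores))

# Toy usage: 3 "restaurants" of unknown quality, rated after each order.
true_quality = [3.0, 4.2, 3.8]
counts, means = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(200):
    i = ucb_choice(counts, means)
    reward = random.gauss(true_quality[i], 1.0)
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]   # running mean update
print(counts)  # most orders should concentrate on the best restaurant
```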
Beyond money and possessions, how are the rich different from the general population?
Drawing on a unique sample of high-net-worth individuals from Germany (≥1 million Euro in financial assets; n = 130), nationally representative data (n = 22,981), and an additional online panel (n = 690), we provide the first direct investigation of the stereotypically perceived and self-reported personality profiles of high-net-worth individuals.
Investigating the broad personality traits of the Big Five and the more specific traits of narcissism and locus of control, we find that stereotypes about wealthy people’s personality are accurate, albeit somewhat exaggerated, and that wealthy people can be characterized as stable, flexible, and agentic individuals who are focused more on themselves than on others.
To date, building a highly trustworthy, credible, and decentralized proof of delivery (POD) system to trace and track physical items is a very challenging task. This paper presents a blockchain-based POD solution for shipped physical items that uses smart contracts on the Ethereum blockchain network, in which tracking and tracing activities, logs, and events can be handled in a decentralized manner, with high integrity, reliability, and immutability.
Our solution incentivizes each participating entity, including the seller, transporter, and buyer, to act honestly, and it eliminates the need for a third party to act as escrow. Our proposed POD solution ensures accountability, punctuality, integrity, and auditability. Moreover, the proposed solution makes use of a Smart Contract Attestation Authority to ensure that the code follows the terms and conditions signed by the participating entities. It also allows the cancellation of the transaction by the seller, buyer, and transporter based on the contract state. Furthermore, the buyer can also ask for a refund in certain justifiable cases.
The full code, implementation discussion with sequence diagrams, testing and verification details are all included as part of the proposed solution.
[Keywords: proof of delivery, blockchain, Ethereum, smart contracts]
A fundamental problem for electronic commerce is the buying and selling of digital goods between individuals that may not know or trust each other. Traditionally, this problem has been addressed by the use of trusted third-parties such as credit-card companies, mediated escrows, legal adjudication, or reputation systems. Despite the rise of blockchain protocols as a way to send payments without trusted third parties, the important problem of exchanging a digital good for payment without trusted third parties has received much less attention. We refer to this problem as the Buyer and Seller’s Dilemma and present for it a dual-deposit escrow trade protocol which uses double-sided payment deposits in conjunction with simple cryptographic primitives, and that can be implemented using a blockchain-based smart contract. We analyze our protocol as an extensive-form game and prove that the Sub-game Perfect Nash Equilibrium for this game is for both the buyer and seller to cooperate and behave honestly. We address this problem under the assumption that the digital good being traded is known and verifiable, with a fixed price known to both parties.
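The backward-induction logic can be sketched with toy numbers (our simplified game tree and payoffs, not the paper’s formal analysis): because the good is verifiable, a dispute lets the contract identify and penalize whichever side cheated, so neither deviation is profitable and honest trade is the subgame-perfect outcome.

```python
# Stylized backward-induction check of the dual-deposit escrow idea
# (toy numbers and a simplified two-stage game; our illustration, not the
# paper's formal model).  The digital good is assumed verifiable, so on a
# dispute the contract can tell which side cheated and forfeits that
# side's deposit while making the honest side whole.
P, V, C, D = 10, 14, 6, 12   # price, buyer's valuation, seller's cost, each deposit

def payoffs(seller_honest: bool, buyer_disputes: bool) -> tuple[float, float]:
    """(buyer_payoff, seller_payoff), net of the no-trade outcome."""
    if not buyer_disputes:                       # buyer accepts, funds released
        buyer = (V if seller_honest else 0) - P
        seller = P - (C if seller_honest else 0)
    elif seller_honest:                          # false dispute: buyer is the cheater
        buyer, seller = V - P - D, P - C
    else:                                        # valid dispute: seller is the cheater
        buyer, seller = 0, -D
    return buyer, seller

def buyer_best(seller_honest: bool) -> bool:
    """Buyer's best response: dispute only if it pays more than accepting."""
    return payoffs(seller_honest, True)[0] > payoffs(seller_honest, False)[0]

seller_choice = max([True, False], key=lambda h: payoffs(h, buyer_best(h))[1])
print("SPNE: seller", "honest" if seller_choice else "cheats",
      "| buyer", "disputes" if buyer_best(seller_choice) else "accepts")
# -> SPNE: seller honest | buyer accepts  (for deposits this large)
```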
A randomized control trial with 432 small and medium enterprises in Mexico shows positive impact of access to 1 year of management consulting services on total factor productivity and return on assets. Owners also had an increase in “entrepreneurial spirit” (an index that measures entrepreneurial confidence and goal setting). Using Mexican social security data, we find a persistent large increase (about 50%) in the number of employees and total wage bill even 5 years after the program. We document large heterogeneity in the specific managerial practices that improved as a result of the consulting, with the most prominent being marketing, financial accounting, and long-term business planning.
In this article, the authors explore why academics tend to oppose the market. To this end, the article uses normative political theory as an explanatory mechanism, starting with a conjecture originally suggested by Robert Nozick. Academics are over-represented amongst the best students of their cohort. School achievement engenders high expectations about future economic prospects. Yet markets are only contingently sensitive to school achievement. This misalignment between schools and markets is perceived by academics—and arguably by intellectuals in general—as morally unacceptable. To test this explanation, the article uses an online questionnaire with close to 1,500 French academic respondents. The data resulting from this investigation lend support to Nozick’s hypothesis.
What is the worth of a college degree when higher education expands? The relative education hypothesis posits that when college degrees are rare, individuals with more education have less competition to enter highly-skilled occupations. When college degrees are more common, there may not be enough highly-skilled jobs to go around; some college-educated workers lose out to others and are pushed into less-skilled jobs. Using new measurements of occupation-level verbal, quantitative, and analytic skills, this study tests the changing effect of education on skill utilization across 70 years of birth cohorts from 1971 to 2010, net of all other age, period, and cohort trends. Higher-education expansion erodes the value of a college degree, and college-educated workers are at greater risk for underemployment in less cognitively demanding occupations. This raises questions about the sources of rising income inequality, skill utilization across the working life course, occupational sex segregation, and how returns to education have changed across different life domains.
Are some management practices akin to a technology that can explain firm and national productivity, or do they simply reflect contingent management styles?
We collect data on core management practices from over 11,000 firms in 34 countries.
We find large cross-country differences in the adoption of management practices, with the US having the highest size-weighted average management score.
We present a formal model of “Management as a Technology”, and structurally estimate it using panel data to recover parameters including the depreciation rate and adjustment costs of managerial capital (both found to be larger than for tangible non-managerial capital). Our model also predicts (1) a positive impact of management on firm performance; (2) a positive relationship between product market competition and average management quality (part of which stems from the larger covariance between management and firm size as competition strengthens); and (3) a rise in the level and a fall in the dispersion of management with firm age.
We find strong empirical support for all of these predictions in our data.
Finally, building on our model, we find that differences in management practices account for about 30% of total factor productivity differences both between countries and within countries across firms.
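To fix ideas about what is being estimated, managerial capital in a model of this kind accumulates like other capital stocks; a generic law of motion with quadratic adjustment costs (an illustrative textbook functional form of our choosing, not necessarily the paper’s exact specification) is:

$$M_{i,t} = (1-\delta_M)\,M_{i,t-1} + I^{M}_{i,t} - \frac{\phi_M}{2}\left(\frac{I^{M}_{i,t}}{M_{i,t-1}}\right)^{2} M_{i,t-1},$$

where $\delta_M$ is the depreciation rate of managerial capital and $\phi_M$ governs its adjustment cost, the two objects the abstract reports to be larger than their tangible-capital counterparts.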
[IA blog] Section 108(h) has not been utilized by libraries and archives, in part because of the uncertainty over definitions (e.g. “normal commercial exploitation”), determination of the eligibility window (last 20 years of the copyright term of published works), and how to communicate the information in the record to the general public.
This paper seeks to explore the elements necessary to implement the Last Twenty exception, otherwise known as Section 108(h), and create a Last Twenty (L20) collection. In short, published works in the last 20 years of their copyright term may be digitized and distributed by libraries, archives, and museums, as long as there is no commercial sale of the works and no reasonably priced copy is available. This means that Section 108(h) is available for the forgotten and neglected works, 1923–1941, including millions of foreign works restored by GATT. Section 108(h) is less effective for big, commercially available works.
In many ways, that is the dividing line created by Section 108(h): allow for commercial exploitation of works throughout their term, but allow libraries to rescue works that had no commercial exploitation or copies available for sale and make them available through copying and distribution for research, scholarship, and preservation. In fact, when Section 108(h) was being debated in Congress, it was labeled “orphan works.” This paper suggests ways to think about the requirements of Section 108(h) and to make it more usable for libraries. Essentially, by confidently using Section 108(h) we can continue to make the past usable one query at a time.
Alex/John/Mark Taylor belongs to one of the last surviving professions of Dickensian London. Clerks have co-existed with chimney sweeps and gene splicers. It’s a trade that one can enter as a teenager, with no formal qualifications, and that’s astonishingly well-paid. A senior clerk can earn a half-million pounds per year, or more than $650,000, and some who are especially entrenched make far more.
Clerks—pronounced “clarks”—have no equivalent in the U.S. legal system, and have nothing in common with the Ivy League-trained Supreme Court aides of the same spelling. They exist because in England and Wales, to simplify a bit, the role of lawyer is divided in two: There are solicitors, who provide legal advice from their offices, and there are barristers, who argue in court. Barristers get the majority of their business via solicitors, and clerks act as the crucial middlemen between the tribes—they work for and sell the services of their barristers, steering inquiring solicitors to the right man or woman. Clerks are by their own cheerful admission “wheeler-dealers”, what Americans might call hustlers. They take a certain pride in managing the careers of their bosses, the barristers—a breed that often combines academic brilliance with emotional fragility. Many barristers regard clerks as their pimps. Some, particularly at the junior end of the profession, live in terror of clerks. The power dynamic is baroque and deeply English, with a naked class divide seen in few other places on the planet. Barristers employ clerks, but a bad relationship can strangle their supply of cases. In his 1861 novel Orley Farm, Anthony Trollope described a barrister’s clerk as a man who “looked down from a considerable altitude on some men who from their professional rank might have been considered as his superiors.”…One of the most peculiar aspects of the clerk-barrister relationship is that clerks handle money negotiations with clients. Barristers argue that avoiding fee discussions keeps their own interactions with clients clean and uncomplicated, but as a consequence, they’re sometimes unaware of how much they actually charge. The practice also insulates and coddles them. Clerks become enablers of all sorts of curious, and in some cases self-destructive, behavior.
…John Flood, a legal sociologist who in 1983 published the only book-length study of barristers’ clerks, subtitled The Law’s Middlemen, uses an anthropological lens to explain the relationship. He suggests that barristers, as the de facto priests of English law—with special clothes and beautiful workplaces—require a separate tribe to keep the temple flames alight and press money from their congregation. Clerks keep barristers’ hands clean; in so doing they accrue power, and they’re paid accordingly. I asked more than a dozen clerks and barristers, as well as a professional recruiter, what the field pays. Junior clerks, traditionally recruited straight after leaving school at 16 and potentially with no formal academic qualifications, start at £15,000 to £22,000 ($19,500 to $28,600); after 10 years they can make £85,000. Pay for senior clerks ranges from £120,000 to £500,000, and a distinct subset can earn £750,000. The Institute of Barristers’ Clerks disputed these figures, saying the lows were too low and the highs too high. But there’s no doubt that the best clerks are well-rewarded. David Grief, 63, a senior clerk at the esteemed Essex Court Chambers, spoke to me enthusiastically about his personal light airplane, a TB20 Trinidad.
…Before the U.K. decimalized its currency in 1971, clerks received “shillings on the guinea” for each case fee. Under the new money system, the senior clerks’ take was standardized at 10% of their chambers’ gross revenue. Sometimes, but not always, they paid their junior staff and expenses out of this tithe. Chambers at the time were typically small, four to six barristers strong, but in the 1980s, they grew. As they added barristers and collected more money, each chambers maintained just one chief clerk, whose income soared. The system was opaque: The self-employed barristers didn’t know what their peers within their own chambers were paid, and in a precomputer age, with all transactions recorded in a byzantine paper system, barristers sometimes didn’t know what their clerks earned, either. Jason Housden, a longtime clerk who now works at Matrix Chambers, told me that, when he started out in the 1980s at another office, his senior clerk routinely earned as much as the top barristers and on occasion was the best-paid man in the building. · One anecdote from around the same time, possibly apocryphal, is widely shared. At a chambers that had expanded and was bringing in more money, three silks decided their chief clerk’s compensation, at 10%, had gotten out of hand. They summoned him for a meeting and told him so. In a tactical response that highlights all the class baggage of the clerk-barrister relationship, as well as the acute British phobia of discussing money, the clerk surprised the barristers by agreeing with them. “I’m not going to take a penny more from you”, he concluded. The barristers, gobsmacked and paralyzed by manners, never raised the pay issue again, and the clerk remained on at 10% until retirement. · Since the 1980s, fee structures have often been renegotiated when a senior clerk retires. Purely commission-based arrangements are now rare—combinations of salary and incentive are the rule, though some holdouts remain. Goddard told me last summer that he receives 3% of the entire take of the barristers at 4 Stone; later he said this was inaccurate, and that his pay was determined by a “complicated formula.” (Pupil barristers, as trainees are known, start there at £65,000 per year, and the top silks each make several million pounds.) · The huge sums that clerks earn, at least relative to their formal qualifications, both sit at odds with the feudal nature of their employment and underpin it. In some chambers, clerks still refer to even junior barristers as “sir” or “miss.” Housden remembers discussing this issue early in his career with a senior clerk. He asked the man whether he found calling people half his age “sir” demeaning. The reply was straightforward: “For three-quarters of a million pounds per year, I’ll call anyone sir.”
Despite the large increase in U.S. income inequality, consumption for families at the 25th and 50th percentiles of income has grown steadily over the time period 1960–2015. The number of cars per household with below median income has doubled since 1980 and the number of bedrooms per household has grown 10% despite decreases in household size. The finding of zero growth in American real wages since the 1970s is driven in part by the choice of the CPI-U as the price deflator (Broda & Weinstein 2008, Prices, Poverty, And Inequality: Why Americans Are Better Off Than You Think). Small biases in any price deflator compound over long periods of time. Using a different deflator such as the Personal Consumption Expenditures index (PCE) yields modest growth in real wages and in median household incomes throughout the time period. Accounting for the Hamilton (1998) and Costa (2001) estimates of CPI bias yields estimated wage growth of 1% per year during 1975–2015. Meaningful growth in consumption for below median income families has occurred even in a prolonged period of increasing income inequality, increasing consumption inequality, and a decreasing share of national income accruing to labor.
We disaggregate the self-employed into incorporated and unincorporated to distinguish between “entrepreneurs” and other business owners. We show that the incorporated self-employed and their businesses engage in activities that demand comparatively strong nonroutine cognitive abilities, while the unincorporated and their firms perform tasks demanding relatively strong manual skills. People who become incorporated business owners tend to be more educated and—as teenagers—score higher on learning aptitude tests, exhibit greater self-esteem, and engage in more illicit activities than others. The combination of “smart” and “illicit” tendencies as youths accounts for both entry into entrepreneurship and the comparative earnings of entrepreneurs. Individuals tend to experience a material increase in earnings when becoming entrepreneurs, and this increase occurs at each decile of the distribution.
Many products—such as lighting and computing—have undergone revolutionary changes since the beginning of the industrial revolution. This paper considers the opposite end of the spectrum of product change, focusing on nails. Nails are a simple, everyday product whose form has changed relatively little over the last three centuries, and this paper constructs a continuous, constant-quality price index for nails since 1695. These data indicate that the price of nails fell substantially relative to an overall basket of consumption goods as reflected in the CPI, with the preferred index falling by a factor of about 15× from the mid 1700s to the mid 1900s. While these declines were nowhere near as rapid as those for lighting and computing, they were still quite sizable and large enough to enable the development of other products and processes and contribute to downstream changes in patterns of economic activity. Moreover, with the relative price of nails having been so much higher in an earlier period, nails played a much more important role in economic activity in an earlier period than they do now. [A not yet completed section of the paper will use a growth accounting framework to assess the proximate sources of the change in the price of nails.]
This whitepaper presents the primary findings of new research by Professor Benjamin Shiller (Brandeis University), Professor Joel Waldfogel (University of Minnesota and the National Bureau of Economic Research), and Dr. Johnny Ryan (PageFair).
Research of 2,574 websites over 3 years reveals that adblock has a hidden cost: it not only reduces small and medium publishers’ revenue, it also reduces their traffic.
Studying the changing rate of desktop adblock usage and traffic rank from April 2013 to June 2016 reveals that adblock usage is undermining many websites’ ability to invest in content. Affected websites then attract fewer visitors, and so their traffic declines. The full paper is available from NBER, the U.S. National Bureau of Economic Research.
This is the adblock paradox: users may avoid ads in the short term, but ultimately undermine the value they can derive from the web. To reverse this phenomenon, publishers must listen to users’ legitimate grievances about online ads and respond by fixing the problems. Once they have remedied the users’ grievances, publishers can choose to serve their ads using technology that adblock companies cannot tamper with.
Recent studies in psychology and neuroscience find that fictional works exert strong influence on readers and shape their opinions and worldviews. We study the Potterian economy, which we compare to economic models, to assess how Harry Potter books affect economic literacy.
We find that some principles of Potterian economics are consistent with economists’ models. Many others, however, are distorted and contain numerous inaccuracies, which contradict professional economists’ views and insights, and contribute to the general public’s biases, ignorance, and lack of understanding of economics.
…We investigate the Potterian economy by analyzing its full structure. We find that it combines ingredients from various economic models but is not fully consistent with any particular model. Some features of the Potterian economy are in line with Marxist views, while others fit the public choice perspective. Prices in the Potterian economy are rigid in the Keynesian spirit, yet Potterians enjoy full employment as in the Classical model.
We conclude that the Potterian model reflects folk economics. As such, although it is sometimes consistent with economists’ views, many of its aspects are distorted and there are numerous biases and inaccuracies, which can influence the public, particularly young readers, who figure prominently among Harry Potter readers.
In section 2, we review the economic literacy literature. In section 3, we discuss fiction’s influence. In section 4, we describe the setting. In section 5, we study money and banking. In section 6, we look at the government. In section 7, we discuss law and order. In section 8, we focus on monopolies. In section 9, we study income distribution. In section 10, we study international trade. In section 11, we analyze the war economy. In section 12, we study technological progress. In section 13, we discuss human capital. Section 14 concludes.
…14. Conclusion: Many elements of the Potterian model are mutually inconsistent and contradictory. For example, it is critical of market-based systems, yet it belittles government. The government is corrupt, yet it has public support. Many mutually beneficial transactions do not take place and there are no credit markets because of prejudices, yet the books reject stereotypical images. Money is made of precious metals, yet its purchasing power has no relation to its commodity value. The wizards value education, yet they do not have universities or colleges. Moreover, the Potterian model misses many deep and fundamental aspects of economic analysis. For example, the bank does not serve as an intermediary between savers and investors, money lacks some key attributes, arbitrage opportunities are not exploited, efficiency-improving transactions go unnoticed, international trade is restricted by protectionism, there is hardly any migration, the economy is in permanent stagnation, the stock of human capital is not increasing, investments are non-existent, and taxes of any kind are absent.
Thus, a naïve reader gets a distorted view of economics, and shallow and uninformed characterizations of markets and market institutions, which surely influence and shape the general public’s understanding of economic issues. Further, they likely contribute to the public’s biases, misconceptions, and, more generally, to their economic illiteracy. For example, a public exposed to such views and sentiments might easily be persuaded by populist arguments against foreigners, against international trade, against businessmen, against bankers and other financial service providers, against authorities (e.g., the central bank), etc. A folk-economic interpretation of the Potterian model suggests that popular intermediaries play an important role in spreading biases and ignorance on important economic issues. Thus, rather than dismissing the “mishmash” of ideas found in the Harry Potter books, we suggest taking them seriously in order to try and understand their sources and persistence.
Some of the biases we have identified have been around for centuries. This suggests that in addition to directly influencing the public’s views, Harry Potter books have likely reinforced existing beliefs, which might be playing a role in transmitting the biases across generations through cultural transmission of values (Bisin & Verdier 2000, Necker & Voskort 2014). Moreover, the formation and propagation of these biases may begin in early childhood/youth, because this is the age group on which fiction’s influence is likely to be particularly strong and long-lasting.
[Keywords: economic and financial literacy, political economy, public choice, rent seeking, folk economics, Harry Potter, social organization of economic activity, literature, fiction, Potterian economy, Potterian economics, popular opinion]
Urban growth requires the replacement of outdated buildings, yet growth may be restricted when landowners do not internalize positive spillover effects from their own reconstruction. The Boston Fire of 1872 created an opportunity for widespread simultaneous reconstruction, initiating a virtuous circle in which building upgrades encouraged further upgrades of nearby buildings. Land values increased substantially among burned plots and nearby unburned plots, capitalizing economic gains comparable to the prior value of burned buildings. Boston had grown rapidly prior to the Fire, but negative spillovers from outdated durable buildings had substantially constrained its growth by dampening reconstruction incentives.
One of the current problems of peer-to-peer-based file storage systems like Freenet is lack of participation, especially by storage providers. Users are expected to contribute storage resources but may have little incentive to do so. In this paper we propose KopperCoin, a token system inspired by Bitcoin’s blockchain which can be integrated into a peer-to-peer file storage system.
In contrast to Bitcoin, KopperCoin does not rely on a proof of work (PoW) but instead on a proof of retrievability (PoR). Thus it is not computationally expensive and instead requires participants to contribute file storage to maintain the network. Participants can earn digital tokens by providing storage to other users, and by allowing other participants in the network to download files. These tokens serve as a payment mechanism.
Thus we provide direct reward to participants contributing storage resources.
[Keywords: blockchain, cloud storage, cryptocurrency, peer-to-peer, proof of retrievability]
…3.4: Fetching Files: In order to fetch a file the client application needs to know the identifiers of the corresponding chunks. The file is restored by retrieving sufficiently many chunks. For successful retrieval not all chunks have to be fetched, depending on the erasure code that was applied before storing the file in the KopperCoin-network. The erasure code solves the problem of missing chunks and storage providers demanding unrealistically high prices for chunk retrieval.
Fetching chunks works with 2-of-2 multi-signature transactions. These are transactions which can be spent if and only if 2 out of 2 parties agree to spend them. To our knowledge the mechanism was first used by NashX.
Let U be a user who wants to retrieve a chunk which is stored at the provider P. Suppose U wants to pay the amount p for retrieving his file. Then U and P create a 2-of-2 multi-signature transaction where the user U inputs β+p and P inputs α. The amounts α and β are security deposits. In the next step P sends the chunk to U. The user U checks whether he has received the correct chunk. If so, he signs a multi-signature transaction with 2 outputs: the provider P gets back his security deposit α, together with the price p for the chunk; in the other output the user U gets back his security deposit β. The process is illustrated in Figure 1. Above the arrows are the amounts and below the arrows are the owners of the respective amounts.
If U wants to cheat, he cannot set his security deposit β to zero or otherwise change the first transaction, since this will be detected by the provider P, who then refuses to sign. Nevertheless, the user U can refuse to sign the 2-of-2 multi-signature transaction after retrieving the chunk, thereby losing his security deposit β.
If the provider P cheats, he can either refuse to send the chunk or refuse to sign the 2-of-2 multi-signature transaction. In both cases he forfeits his security deposit α and does not receive the price p for retrieval of the chunk.
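The incentive structure of this exchange can be summarized in a small payoff table (our simplification, reusing the paper’s notation of price p and deposits α and β, plus an assumed value of the chunk to the user): because the 2-of-2 multisig only pays out when both parties sign, whichever side cheats forfeits its own locked funds.

```python
# Payoff sketch of the chunk-retrieval exchange in Section 3.4 (our
# simplification).  The 2-of-2 multisig pays out only if both parties
# sign; otherwise the locked inputs (beta + p from the user, alpha from
# the provider) are lost to both sides.
p, alpha, beta = 5, 20, 20
chunk_value = 8          # assumed value of the retrieved chunk to the user

def settle(chunk_delivered: bool, both_sign: bool) -> tuple[int, int]:
    """Return (user_net, provider_net) relative to not transacting."""
    gained = chunk_value if chunk_delivered else 0
    if both_sign:
        return gained - p, p              # user gets beta back, provider gets alpha + p
    return gained - (beta + p), -alpha    # funds stay locked: deposits (and p) lost

print(settle(True, True))    # honest path:            ( +3,  +5)
print(settle(True, False))   # user refuses to sign:   (-17, -20)
print(settle(False, False))  # provider withholds:     (-25, -20)
```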
The past decade or so has seen a dramatic change in the way that economists can learn by watching our planet from above. A revolution has taken place in remote sensing and allied fields such as computer science, engineering, and geography.
Petabytes of satellite imagery have become publicly accessible at increasing resolution, many algorithms for extracting meaningful social science information from these images are now routine, and modern cloud-based processing power allows these algorithms to be run at global scale.
This paper seeks to introduce economists to the science of remotely sensed data, and to give a flavor of how this new source of data has been used by economists so far and what might be done in the future.
We group the main advantages of such remote sensing data to economists into 3 categories:
access to information difficult to obtain by other means:
The first advantage is simply that remote sensing technologies can collect panel data at low marginal cost, repeatedly, and at large scale on proxies for a wide range of hard-to-measure characteristics. We discuss below economic analysis that already uses remotely sensed data on night lights, precipitation, wind speed, flooding, topography, forest cover, crop choice, agricultural productivity, urban development, building type, roads, pollution, beach quality, and fish abundance. Many more characteristics of potential interest to economists have already been measured remotely and used in other fields. Most of these variables would be prohibitively expensive to measure accurately without remote sensing, and there are settings in which the official government counterparts of some remotely sensed statistics (such as pollution or forestry) may be subject to manipulation…
unusually high spatial resolution:
The second advantage of remote sensing data sources is that they are typically available at a substantially higher degree of spatial resolution than are traditional data. Much of the publicly available satellite imagery used by economists provides readings for each of the hundreds of billions of 30-meter-by-30-meter grid cells of land surface on Earth. Many economic decisions (particularly land use decisions such as zoning, building types, or crop choice) are made at approximately this same level of spatial resolution. But since 1999, private companies have offered submeter imagery and, following a 2014 US government ruling, American companies are able to sell imagery at resolutions below 0.5 meters to nongovernment customers for the first time. This is important because even when a coarser unit of analysis is appropriate, 900 1-meter pixels provide far more information available for signal extraction than a single 30-meter pixel covering the same area. In addition, some innovative identification strategies used by economists exploit stark policy changes that occur at geographic boundaries; these high-spatial-resolution research designs rely intimately on high-spatial-resolution outcome data (for example, Turner, Haughwout, and van der Klaauw 2014)…
wide geographic coverage.
The third key advantage of remotely sensed data lies in their wide geographic coverage. Only rarely do social scientists enjoy the opportunities, afforded by satellites, to study data that have been collected in a consistent manner—without regard for local events like political strife or natural disasters—across borders and with uniform spatial sampling on every inhabited continent. Equally important, many research satellites (or integrated series of satellites) offer substantial temporal coverage, capturing data from the same location at weekly or even daily frequency for several decades and counting.
An example of this third feature—global scope—can be seen in work on the economic impacts of climate change in agriculture by Costinot, Donaldson, and Smith (2016). These authors draw on an agronomic model that is partly based on remotely sensed data. The agronomic model, when evaluated under both contemporary and expected (2070–2099) climates, predicts a change in agricultural productivity for any crop in any location on Earth. For example, the relative impact for 2 of the world’s most important crops, rice and wheat, is shown in Figure 5. Costinot, Donaldson, and Smith feed these pixel-by-pixel changes into a general equilibrium model of world agricultural trade and then use the model to estimate that climate change can be expected to reduce global agricultural output by about one-sixth (and that international trade is unlikely to mitigate this damage, despite the inherently transnational nature of the shock seen in Figure 5). Given the rate at which algorithms for crop classification and yield measurement have improved in recent years, future applications of satellite data are likely to be particularly rich in the agricultural arena.
[Post-Heartbleed/Shellshock discussion of the economics of funding open source software: universally used & economically invaluable as a public good anyone can & does use, it is also essentially completely unfunded, leading to serious problems in long-term maintenance & improvement, exemplified by the Heartbleed bug—core cryptographic code run by almost every networked device on the planet could not fund more than a part-time developer.]
Our modern society—everything from hospitals to stock markets to newspapers to social media—runs on software. But take a closer look, and you’ll find that the tools we use to build software are buckling under demand…Nearly all software today relies on free, public code (called “open source” code), written and maintained by communities of developers and other talent. Much like roads or bridges, which anyone can walk or drive on, open source code can be used by anyone—from companies to individuals—to build software. This type of code makes up the digital infrastructure of our society today. Just like physical infrastructure, digital infrastructure needs regular upkeep and maintenance. In the United States, over half of government spending on transportation and water infrastructure goes just to maintenance. But financial support for digital infrastructure is much harder to come by. Currently, any financial support usually comes through sponsorships, direct or indirect, from software companies. Maintaining open source code used to be more manageable. Following the personal computer revolution of the early 1980s, most commercial software was proprietary, not shared. Software tools were built and used internally by companies, and their products were licensed to customers. Many companies felt that open source code was too nascent and unreliable for commercial use. In their view, software was meant to be charged for, not given away for free. Today, everybody uses open source code, including Fortune 500 companies, government, major software companies and startups. Sharing, rather than building proprietary code, turned out to be cheaper, easier, and more efficient.
This increased demand puts additional strain on those who maintain this infrastructure, yet because these communities are not highly visible, the rest of the world has been slow to notice. Most of us take opening a software application for granted, the way we take turning on the lights for granted. We don’t think about the human capital necessary to make that happen. In the face of unprecedented demand, the costs of not supporting our digital infrastructure are numerous. On the risk side, there are security breaches and interruptions in service, due to infrastructure maintainers not being able to provide adequate support. On the opportunity side, we need to maintain and improve these software tools in order to support today’s startup renaissance, which relies heavily on this infrastructure. Additionally, open source work builds developers’ portfolios and helps them get hired, but the talent pool is remarkably less diverse than in tech overall. Expanding the pool of contributors can positively affect who participates in the tech industry at large.
No individual company or organization is incentivized to address the problem alone, because open source code is a public good. In order to support our digital infrastructure, we must find ways to work together. Current examples of efforts to support digital infrastructure include the Linux Foundation’s Core Infrastructure Initiative and Mozilla’s Open Source Support (MOSS) program, as well as numerous software companies in various capacities. Sustaining our digital infrastructure is a new topic for many, and the challenges are not well understood. In addition, infrastructure projects are distributed across many people and organizations, defying common governance models. Many infrastructure projects have no legal entity at all. Any support strategy needs to accept and work with the decentralized, community-centric qualities of open source code. Increasing awareness of the problem, making it easier for institutions to contribute time and money, expanding the pool of open source contributors, and developing best practices and policies across infrastructure projects will all go a long way in building a healthy and sustainable ecosystem.
Approximately 30 satellite launches are insured each year, and insurance coverage is provided for about 200 in-orbit satellites. The total insured exposure for these risks is currently in excess of US$25 billion. Commercial communications satellites in geostationary Earth orbit represent the majority of these, although a larger number of commercial imaging satellites, as well as the second-generation communication constellations, will see the insurance exposure in low Earth orbit start to increase in the years ahead, from its current level of US$1.5 billion. Regulations covering Lloyd’s of London syndicates require that each syndicate reserves funds to cover potential losses and to remain solvent. New regulations under the European Union’s Solvency II directive now require each syndicate to develop models for the classes of insurance provided to determine their own solvency capital requirements. Solvency II is expected to come into force in 2016 to ensure improved consumer protection, modernized supervision, deepened EU market integration, and increased international competitiveness of EU insurers. For each class of business, the inputs to the solvency capital requirements are determined not just on previous results, but also to reflect extreme cases where an unusual event or sequence of events exposes the syndicate to its theoretical worst-case loss. To assist syndicates covering satellites to reserve funds for such extreme space events, a series of realistic disaster scenarios (RDSs) has been developed that all Lloyd’s syndicates insuring space risks must report upon on a quarterly basis. The RDSs are regularly reviewed for their applicability and were recently updated to reflect changes within the space industry to incorporate such factors as consolidation in the supply chain and the greater exploitation of low Earth orbit. The development of these theoretical RDSs will be overviewed along with the limitations of such scenarios. Changes in the industry that have warranted the recent update of the RDS, and the impact such changes have had will also be outlined. Finally, a look toward future industry developments that may require further amendments to the RDSs will also be covered by the article.
We study the out-of-sample and post-publication return predictability of 97 variables shown to predict cross-sectional stock returns. Portfolio returns are 26% lower out-of-sample and 58% lower post-publication. The out-of-sample decline is an upper bound estimate of data mining effects. We estimate a 32% (58%–26%) lower return from publication-informed trading. Post-publication declines are greater for predictors with higher in-sample returns, and returns are higher for portfolios concentrated in stocks with high idiosyncratic risk and low liquidity. Predictor portfolios exhibit post-publication increases in correlations with other published-predictor portfolios. Our findings suggest that investors learn about mispricing from academic publications.
Urbanization is a process in which separated and dispersed property rights become concentrated in a specific location. This process involves a large volume of contracts to redefine and rearrange various property rights, producing numerous and high transaction costs. Efficient urbanization implies the reduction of these costs. This paper studies how efficient urbanization reduces transaction costs in the real world, based on a series of contracts rather than coercive power. Specifically, this paper shows that Jiaolong Co. built a city by acting as a central contractor, which acquired planning rights by contract and signed a series of tax sharing contracts with government, farmers, tenants, and business enterprises. These contractual arrangements greatly reduced transaction costs and promoted development.
In this paper, we explore the costs and benefits of hosting the Olympic Games. On the cost side, there are three major categories: general infrastructure such as transportation and housing to accommodate athletes and fans; specific sports infrastructure required for competition venues; and operational costs, including general administration as well as the opening and closing ceremony and security. Three major categories of benefits also exist: the short-run benefits of tourist spending during the Games; the long-run benefits or the “Olympic legacy” which might include improvements in infrastructure and increased trade, foreign investment, or tourism after the Games; and intangible benefits such as the “feel-good effect” or civic pride. Each of these costs and benefits will be addressed in turn, but the overwhelming conclusion is that in most cases the Olympics are a money-losing proposition for host cities; they result in positive net benefits only under very specific and unusual circumstances. Furthermore, the cost–benefit proposition is worse for cities in developing countries than for those in the industrialized world. In closing, we discuss why what looks like an increasingly poor investment decision on the part of cities still receives significant bidding interest and whether changes in the bidding process of the International Olympic Committee (IOC) will improve outcomes for potential hosts.
In the legal system of the premodern Middle East, the closest thing to an autonomous private organization was the Islamic waqf. This non-state institution inhibited political participation, collective action, and rule of law, among other indicators of democratization. It did so through several mechanisms. Its activities were essentially set by its founder, which limited its capacity to meet political challenges. Being designed to provide a service on its own, it could not participate in lasting political coalitions. The waqf’s beneficiaries had no say in evaluating or selecting its officers, and they had trouble forming a political community. Thus, for all the resources it controlled, the Islamic waqf contributed minimally to building civil society. As a core element of Islam’s classical institutional complex, it perpetuated authoritarian rule by keeping the state largely unrestrained. Therein lies a key reason for the slow pace of the Middle East’s democratization process.
Decentralised smart contracts represent the next step in the development of protocols that support the interaction of independent players without the presence of a coercing authority. Based on protocols à la BitCoin for digital currencies, smart contracts are believed to be a potentially enabling technology for a wealth of future applications.
The validation of such an early developing technology is as necessary as it is complex. In this paper we combine game theory and formal models to tackle the new challenges posed by the validation of such systems [as BitHalo].
…The purpose of BitHalo is to create unbreakable trade contracts without the need for arbiters or escrow agents, substantially lowering the costs for the 2 parties involved in the contract. Since it does not require trust, nothing in the BitHalo system is centralised. It does not require a server, just the Internet. Its peer-to-peer communication system allows the 2 parties to use email, Bitmessage, IRC, or other methods to exchange messages and data. BitHalo is off-blockchain in the sense that the record of BitHalo contracts is not kept in the blockchain, and therefore the use of BitHalo will not bloat the blockchain.
BitHalo can be used for bartering, self-insuring, backing commodities, performing derivatives, making good-faith employment contracts, performing 2-party escrow, and more general business contracts.
Transactions are insured by a deposit in one of the supported digital currencies (including BTC) on a joint account, a double-deposit escrow. The BitHalo protocol forces each party to uphold the contract in order to achieve the most economically optimal outcome. In a typical contract exchanging a payment for goods or services, the payment can be sent either separately, using checks, money transfer, crypto-currencies, etc., or paid directly with the deposit. The deposit will only be refunded to both parties on shared consent, which has to be expressed by both parties. In the absence of shared consent, the joint account will self-destruct after a time-out. Time limits and deposit amounts are all flexible and agreed upon by both parties. Dissatisfaction with the outcome of the transaction by one of the parties, for instance because of theft or deception, will lead to the destruction of the deposit due to the lack of shared consent. When the deposit exceeds the amount being transacted, the loss from cheating is typically larger than the benefit obtainable through fraudulent behaviour. However, deposits exceeding the transacted amount may in some cases be infeasible. In some situations, smaller deposits may incentivize one or both parties to break the contract.
…As standard, DSCP allows 2 parties, i.e. the 2 players of the protocol, to autonomously exchange money against goods without the need for a centralised arbiter. It is worth remarking that the 2 players are completely independent, not subject to any third-party authority in the execution of the exchange protocol, and can, for instance, decide to leave the protocol at any time…DSCP is based on the aforementioned notion of “enforced trust”: neither of the 2 parties will ever be in a position in which breaking the protocol is advantageous. We will see that this, as expected, will be properly enforced only when the deposit, whose payment is a pre-requisite for the execution of the protocol, exceeds the value of the goods.
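A quick numeric check of that deposit condition (toy payoffs, our simplification of the double-deposit scheme): defection saves the defector the agreed price but burns their share of the joint deposit when mutual consent is never given, so defection only pays when the deposit is smaller than the amount being transacted.

```python
# Numeric check of the "enforced trust" condition discussed above (toy
# payoffs, our simplification): a party that defects avoids paying the
# agreed price but forfeits its deposit when the joint account
# self-destructs for lack of mutual consent.
def defection_pays(price: float, deposit: float) -> bool:
    gain_from_defecting = price     # keeps the payment it owed
    loss_from_defecting = deposit   # joint account self-destructs
    return gain_from_defecting > loss_from_defecting

for deposit in (5, 10, 15, 20):
    print(f"price=10, deposit={deposit}: defection pays? {defection_pays(10, deposit)}")
# Deposits at or above the transacted amount make honest completion the
# economically optimal strategy for both parties.
```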
We estimate the effect of information and expertise on consumers’ willingness to pay for national brands in physically homogeneous product categories. In a detailed case study of headache remedies, we find that more informed or expert consumers are less likely to pay extra to buy national brands, with pharmacists choosing them over store brands only 9% of the time, compared to 26% of the time for the average consumer. In a similar case study of pantry staples such as salt and sugar, we show that chefs devote 12 percentage points less of their purchases to national brands than demographically similar non-chefs. We extend our analysis to cover 50 retail health categories and 241 food and drink categories. The results suggest that misinformation and related consumer mistakes explain a sizable share of the brand premium for health products, and a much smaller share for most food and drink products. We tie our estimates together using a stylized model of demand and pricing.
25 large field experiments with major U.S. retailers and brokerages, most reaching millions of customers and collectively representing $2.8 million (2015 dollars; ≈$3.4 million inflation-adjusted) in digital advertising expenditure, reveal that measuring the returns to advertising is difficult. The median confidence interval on return on investment is over 100 percentage points wide. Detailed sales data show that relative to the per capita cost of the advertising, individual-level sales are very volatile; a coefficient of variation of 10 is common. Hence, informative advertising experiments can easily require more than 10 million person-weeks, making experiments costly and potentially infeasible for many firms. Despite these unfavorable economics, randomized control trials represent progress by injecting new, unbiased information into the market. The inference challenges revealed in the field experiments also show that selection bias, due to the targeted nature of advertising, is a crippling concern for widely employed observational methods.
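To see why the economics are so unfavorable, a standard two-sample power calculation with illustrative numbers (our assumptions, not figures taken from the paper’s tables) already lands in the tens of millions of person-weeks:

```python
from math import ceil

# Back-of-the-envelope sample size for an advertising experiment (our
# illustrative magnitudes).  Sales per person are very volatile relative
# to ad spend, so plausible ROI differences are tiny against the noise.
def n_per_arm(sd: float, detectable_effect: float,
              z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Per-arm sample size for 80% power at a 5% two-sided level."""
    return ceil(2 * ((z_alpha + z_beta) * sd / detectable_effect) ** 2)

weekly_sales_mean = 7.0                 # $ per person-week (assumed)
sd_sales = 10 * weekly_sales_mean       # coefficient of variation of 10
ad_cost = 0.35                          # $ of advertising per person-week (assumed)
effect = 0.25 * ad_cost                 # detect incremental sales = 25% of spend

print(f"{n_per_arm(sd_sales, effect):,} people per arm")   # ~10 million
```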
Easter bunnies aren’t what they used to be. The plush toys on store shelves these days are cheaper, often safer, and much, much softer than in bygone days. They represent a small example of a pervasive phenomenon: goods whose quality has improved gradually but substantially over time, without corresponding price increases or public recognition.
If you shop around, you can find a stuffed Easter bunny for three dollars, as you could in the 1970s. My neighborhood Target is selling two models at that price, the cheapest in a lineup of plush Easter toys that includes offerings for $4.99, $9.99 and $19.99 (for a giant rabbit). Back in 1970, when I myself was young enough for Easter baskets, Walgreens advertised “plush bunnies in ‘hot’ colors” for $2.97, along with others for $2.19 and $3.77. The $2.97 bunny from 1970 was probably bigger than today’s $2.99 model, but keep in mind that these prices are not corrected for inflation. The 1970 bunny would cost $17.97 in 2015 dollars. Or, to use another benchmark, the federal minimum wage in 1970 was $1.45—about half the price of one of those Walgreens bunnies—compared with today’s minimum of $7.25, more than double the price of the Target ones. Earning enough to buy the 1970 bunny required 123 minutes of minimum-wage labor versus about 25 minutes today. Three-dollar bunnies are low-profit items designed to get people in the store and stimulate impulse buying. But the broad pattern applies to more expensive stuffed animals as well: Prices have stayed low. For that, you can thank intense international competition.
…Consumers have come to expect low prices. But inexpensive doesn’t necessarily mean “cheap.” The quality of stuffed toys has also improved. “It’s a better product than it was years ago, and it’s not that much more expensive”, said Steven Meyer, the third-generation owner of Mary Meyer Corp., a Vermont-based toy company. Meyer joined the company in 1986, helping his father weather the tough transition to manufacturing in Korea.
For example, Meyer explained, Korean and Taiwanese toymakers introduced safety procedures, later copied in China, to assure the toys didn’t contain hidden hazards. “Every one of our toys is put through a metal detector before it goes into a box, and that’s because a little shard of a sewing needle can break off and go into the toy”, Meyer said. “We never thought of that when we produced in the United States.”
More immediately apparent is how the toys feel. A stuffed animal that would have delighted a baby boomer now seems rigid and rough. Today’s toys are stuffed with soft, fibrous polyester rather than the foam rubber, sawdust or ground nut shells of the past. Plush outer fabrics no longer have stiff backings; the yarns are knitted to one another rather than attached to a rigid fabric like a carpet. “The whole stuffed toy feels softer and slouchier”, Meyer said. The real magic, however, is in that silky faux fur. I first noticed it about a dozen years ago while buying Christmas presents for my nephew, who has super-sensitive skin and hates clothing tags and scratchy fabrics. Stuffed animals, I discovered, were as different from my childhood toys as a wickable polyester workout T-shirt is from a sweat-sticky polyester disco suit. The secret to both wickable T-shirts and softer Easter bunnies lies in polyester microfibers. These high-tech textiles have replaced the acrylic and polyester plushes that once covered stuffed toys just as they’ve nudged aside cotton for exercise apparel.
…Textile fibers, including polyester filaments, are measured in decitex or deniers, almost equivalent units unique to the business. For reference: Silk measures about 1.1 to 1.3 decitex, while human hair is 30 to 50. A microfiber is anything less than 1 decitex. Although polyester microfibers date to Toray Industries’ development of Ultrasuede in 1970, they have become widespread only in recent years, thanks in part to massive plant investments in China that have swamped the polyester market and driven down prices. Back when I was buying toys for my nephew, polyester fibers of around 3 decitex “were considered fine”, said Frank Horn, president of the Fiber Economics Bureau, the statistical collection and publication arm of the American Fiber Manufacturers Association. But over the past decade or so, true microfibers have “become ubiquitous.” Now, Horn estimated, the average is about 0.5 decitex—a reduction of about 85%—and some popular microfibers are as fine as 0.3 decitex. The finer the fiber, the softer the final fabric—making today’s stuffed animals extraordinarily silky. Largely unheralded outside the textile business, this progress was “not one particular technology but many”, explained textile chemist Phil Brown of Clemson University’s materials science and engineering school. “Some involved changing fiber shape, some involved using chemical treatments to reduce fiber size, some involved new fiber extrusion technologies in which fibers have more than one polymer component.”
Urban developers face frictions in the process of redeveloping land, the timing of which depends on many economic factors. This timing can be disrupted by a large shock that destroys thousands of buildings, which could then have substantial short-run and long-run effects. Studying the impact of an urban disaster, therefore, can provide unique insight into urban dynamics. Exploiting the 1906 San Francisco Fire as an exogenous reduction in the city’s building stock, this paper examines residential density across razed and unburned areas between 1900 and 2011. In prominent residential neighborhoods, density increased at least 60% in razed areas relative to unburned areas by 1914, and a large density differential still exists today. These outcomes suggest that thriving cities face substantial redevelopment frictions in the form of durable buildings and that large shocks can greatly alter the evolution of urban land-use outcomes over time.
In the third annual ad blocking report, PageFair, with the help of Adobe, provides updated data on the scale and growth of ad blocking software usage and highlights the global and regional economic impact associated with it. Additionally, this report explores the early indications surrounding the impact of ad blocking within the mobile advertising space and how mobile will change the ad blocking landscape.
Table of Contents: · 1. Introduction · 2. Table of Contents · 3. Key insights · 4. Global ad blocking growth · 5. Usage of ad blocking software in the United States · 6. Usage of ad blocking software in Europe · 7. The cost of blocking ads · 8. Effect of ad blocking by industry · 9. Google Chrome still the main driver of ad block growth · 10. Mobile is yet to be a factor in ad blocking growth · 11. Mobile will facilitate future ad blocking growth · 12. Reasons to start using an ad blocker · 13. Afterword · 14. Background · 15. Methodology · 16. Tables · 17. Tables
Key Insights: More consumers block ads, continuing the strong growth rates seen during 2013 and 2014. The findings:
Globally, the number of people using ad blocking software grew by 41% year over year.
16% of the US online population blocked ads during Q2 2015.
Ad block usage in the United States grew 48% during the past year, increasing to 45 million monthly active users (MAUs) during Q2 2015.
Ad block usage in Europe grew by 35% during the past year, increasing to 77 million monthly active users during Q2 2015.
The estimated loss of global revenue due to blocked advertising during 2015 was $26.81$21.82015B.
With the ability to block ads becoming an option on the new iOS 9, mobile is starting to get into the ad blocking game. Currently Firefox and Chrome lead the mobile space with 93% share of mobile ad blocking.
Over the past 10+ years, online companies large and small have adopted widespread A/B testing as a robust data-based method for evaluating potential product improvements. In online experimentation, it is straightforward to measure the short-term effect, ie., the impact observed during the experiment. However, the short-term effect is not always predictive of the long-term effect, ie., the final impact once the product has fully launched and users have changed their behavior in response. Thus, the challenge is how to determine the long-term user impact while still being able to make decisions in a timely manner.
We tackle that challenge in this paper by first developing experiment methodology for quantifying long-term user learning. We then apply this methodology to ads shown on Google search, more specifically, to determine and quantify the drivers of ads blindness and sightedness, the phenomenon of users changing their inherent propensity to click on or interact with ads.
We use these results to create a model that uses metrics measurable in the short-term to predict the long-term. We learn that user satisfaction is paramount: ads blindness and sightedness are driven by the quality of previously viewed or clicked ads, as measured by both ad relevance and landing page quality. Focusing on user satisfaction not only ensures happier users but also makes business sense, as our results illustrate. We describe two major applications of our findings: a conceptual change to our search ads auction that further increased the importance of ads quality, and a 50% reduction of the ad load on Google’s mobile search interface.
The results presented in this paper are generalizable in two major ways. First, the methodology may be used to quantify user learning effects and to evaluate online experiments in contexts other than ads. Second, the ads blindness/sightedness results indicate that a focus on user satisfaction could help to reduce the ad load on the internet at large with long-term neutral, or even positive, business impact.
Do 9 out of 10 restaurants fail in their first year, as commonly claimed? No. Survival analysis of 1.9 million longitudinal microdata records on 81,000 full-service restaurants from a 20-year U.S. Bureau of Labor Statistics non-public census of business establishments in the western US shows that only 17 percent of independently owned full-service restaurant startups failed in their first year, compared with 19 percent for all other service-providing startups. The median lifespan of restaurants is about 4.5 years, slightly longer than that of other service businesses (4.25 years). However, the median lifespan of a restaurant startup with 5 or fewer employees is 3.75 years, slightly shorter than that of other service businesses of the same startup size (4.0 years).
Crypto-currency is a form of decentralized digital currency that has changed the world of finance over the past several years. Bitcoin lacks a central authority and protects anonymity, while offering a relatively low-cost alternative to fiat currency. It opens the doors for international exchange of commodities and has the potential to change how business is conducted.
The signature and scripting system that Bitcoin uses allows for the creation of smart contracts. Also using signatures, it is possible to create accounts that require multiple signatures (multisig accounts) as well as transactions with multiple inputs and outputs. There has been discussion of some of the current weaknesses with smart contracts.
We address these weaknesses to make smart contracts immediately accessible on the Bitcoin network: the proposed protocol combines commitment schemes and business protocols to greatly reduce the problems of extortion and malleability in two-party escrow.
The Hermès brand is synonymous with a wealthy global elite clientele and its products have maintained an enduring heritage of craftsmanship that has distinguished it among competing luxury brands in the global market. Hermès has remained a family business for generations and has successfully avoided recent acquisition attempts by luxury group LVMH. Almost half of the luxury firm’s revenue ($1.96$1.52012B in 2012) is derived from the sale of its leather goods and saddlery, which includes its handbags. A large contributor to sales is global demand for one of its leather accessories, the Birkin bag, ranging in price from $12,667$10,0002014 to $316,681$250,0002014. Increased demand for the bag in the United States since 2002 resulted in an extensive customer waitlist lasting from months to a few years. Hermès retired the famed waitlist (sometimes called the ‘dream list’) in the United States in 2010, and while the waitlist has been removed, demand for the Birkin bag has not diminished and making the bag available to luxury consumers requires extensive, careful distribution management. In addition to inventory constraints related to demand for the Birkin bag in the United States, Hermès must also manage a range of other factors in the US market. These factors include competition with ‘affordable’ luxury brands like Coach, monitoring of unsolicited brand endorsers as well as counterfeit goods and resellers. This article examines some of the allocation practices used to carefully manage the Hermès brand in the US market.
Some online display advertisements are annoying. Although publishers know the payment they receive to run annoying ads, little is known about the cost that such ads incur (eg., causing website abandonment). Across three empirical studies, the authors address two primary questions: (1) What is the economic cost of annoying ads to publishers? and (2) What is the cognitive impact of annoying ads to users? First, the authors conduct a preliminary study to identify sets of more and less annoying ads. Second, in a field experiment, they calculate the compensating differential, that is, the amount of money a publisher would need to pay users to generate the same number of impressions in the presence of annoying ads as it would generate in their absence. Third, the authors conduct a mouse-tracking study to investigate how annoying ads affect reading processes. They conclude that in plausible scenarios, the practice of running annoying ads can cost more money than it earns.
Classical theories of the firm assume access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using 25 online field experiments (representing $3.60$2.82013 million) with major U.S. retailers and brokerages, that this assumption typically does not hold. Statistical evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign—a “small” impact on a noisy dependent variable can generate positive returns. A concise statistical argument shows that the required sample size for an experiment to generate sufficiently informative confidence intervals is typically in excess of ten million person-weeks. This also implies that heterogeneity bias (or model misspecification) unaccounted for by observational methods only needs to explain a tiny fraction of the variation in sales to severely bias estimates. The weak informational feedback means most firms cannot even approach profit maximization.
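To see why the required samples are so large, here is a hedged back-of-the-envelope power calculation; the inputs are illustrative assumptions in the spirit of the paper’s argument (weekly sales with a standard deviation of roughly $75 per person, and a profit-relevant effect on the order of $0.10 per person), not the paper’s own estimates.

```python
# A rough two-sample power calculation: n per arm = 2 * ((z_alpha/2 + z_beta) * sigma / delta)^2.
from scipy.stats import norm

sigma = 75.0     # assumed SD of individual weekly sales (dollars)
delta = 0.10     # assumed per-person sales effect that would make the campaign profitable
alpha, power = 0.05, 0.80

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = 2 * (z * sigma / delta) ** 2
print(f"total sample needed: ~{2 * n_per_arm / 1e6:.0f} million person-weeks")
```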
A long-standing question is whether differences in management practices across firms can explain differences in productivity, especially in developing countries where these spreads appear particularly large. To investigate this, we ran a management field experiment on large Indian textile firms. We provided free consulting on management practices to randomly chosen treatment plants and compared their performance to a set of control plants. We find that adopting these management practices raised productivity by 17% in the first year through improved quality and efficiency and reduced inventory, and within three years led to the opening of more production plants. Why had the firms not adopted these profitable practices previously? Our results suggest that informational barriers were the primary factor explaining this lack of adoption. Also, because reallocation across firms appeared to be constrained by limits on managerial time, competition had not forced badly managed firms to exit.
We revisit a long-held assumption in human resource management, organizational behavior, and industrial and organizational psychology that individual performance follows a Gaussian (normal) distribution.
We conducted 5 studies involving 198 samples including 633,263 researchers, entertainers, politicians, and amateur and professional athletes.
Results are remarkably consistent across industries, types of jobs, types of performance measures, and time frames and indicate that individual performance is not normally distributed—instead, it follows a Paretian (power law) distribution. [This is a statistical mistake; they should also test log-normal which would likely fit many better; however, this would probably not meaningfully change the conclusions.]
Assuming normality of individual performance can lead to misspecified theories and misleading practices. Thus, our results have implications for all theories and applications that directly or indirectly address the performance of individual workers including performance measurement and management, utility analysis in pre-employment testing and training and development, personnel selection, leadership, and the prediction of performance, among others.
…Regarding performance measurement and management, the current zeitgeist is that the median worker should be at the mean level of performance and thus should be placed in the middle of the performance appraisal instrument. If most of those rated are in the lowest category, then the rater, measurement instrument, or both are seen as biased (ie., affected by severity bias; Cascio & Aguinis, 2011 chapter 5). Performance appraisal instruments that place most employees in the lowest category are seen as psychometrically unsound. These basic tenets have spawned decades of research related to performance appraisal that might “improve” the measurement of performance because such measurement would result in normally distributed scores given that a deviation from a normal distribution is supposedly indicative of rater bias (cf. Landy & Farr, 1980; Smither & London, 2009a). Our results suggest that the distribution of individual performance is such that most performers are in the lowest category. Based on Study 1, we discovered that nearly 2⁄3rds (65.8%) of researchers fall below the mean number of publications. Based on the Emmy-nominated entertainers in Study 2, 83.3% fall below the mean in terms of number of nominations. Based on Study 3, for U.S. representatives, 67.9% fall below the mean in terms of times elected. Based on Study 4, for NBA players, 71.1% are below the mean in terms of points scored. Based on Study 5, for MLB players, 66.3% of performers are below the mean in terms of career errors.
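A quick simulation (an illustration, not the authors’ analysis) makes the contrast concrete: under a normal distribution roughly half of performers fall below the mean, while under a heavy-tailed Paretian distribution a clear majority do, in line with the 65–83% figures above; the Pareto shape parameter here is arbitrary rather than estimated from the five studies.

```python
# Fraction of performers below the mean: normal vs. Paretian (power-law) performance.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

normal = rng.normal(loc=100, scale=15, size=n)
pareto = rng.pareto(a=2.5, size=n) + 1          # classical Pareto with minimum 1, shape 2.5

for name, x in [("normal", normal), ("Paretian", pareto)]:
    print(f"{name}: {np.mean(x < x.mean()):.1%} below the mean")
```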
Moving from a Gaussian to a Paretian perspective, future research regarding performance measurement would benefit from the development of measurement instruments that, contrary to past efforts, allow for the identification of those top performers who account for the majority of results. Moreover, such improved measurement instruments should not focus on distinguishing between slight performance differences of non-elite workers. Instead, more effort should be placed on creating performance measurement instruments that are able to identify the small cohort of top performers.
As a second illustration of the implications of our results, consider the research domain of utility analysis in pre-employment testing and training and development. Utility analysis is built upon the assumption of normality, most notably with regard to the standard deviation of individual performance (SDy), which is a key component of all utility analysis equations. In their seminal article, Schmidt et al 1979 defined SDy as follows: “If job performance in dollar terms is normally distributed, then the difference between the value to the organization of the products and services produced by the average employee and those produced by an employee at the 85th percentile in performance is equal to SDy” (p. 619). The result was an estimate of $38,726$11,3271979. What difference would a Paretian distribution of job performance make in the calculation of SDy? Consider the distribution found across all 54 samples in Study 1 and the productivity levels in this group at (a) the median, (b) 84.13th percentile, (c) 97.73rd percentile, and (d) 99.86th percentile. Under a normal distribution, these values correspond to standardized scores (z) of 0, 1, 2, and 3. The difference in productivity between the 84.13th percentile and the median was 2, thus a utility analysis assuming normality would use SDy = 2.0. A researcher at the 84th percentile should produce $38,726$11,3271979 more output than the median researcher (adjusted for inflation). Extending to the second standard deviation, the difference in productivity between the 97.73rd percentile and median researcher should be 4, and this additional output is valued at $77,445$22,6521979. However, the difference between the 2 points is actually 7. Thus, if SDy is 2, then the additional output of these workers is $135,542$39,6451979 more than the median worker. Even greater disparity is found at the 99.86th percentile. Productivity difference between the 99.86th percentile and median worker should be 6.0 according to the normal distribution; instead the difference is more than quadruple that (ie., 25.0). With a normality assumption, productivity among these elite workers is estimated at $116,177$33,9811979 ($38,726$11,3271979 × 3) above the median, but the productivity of these workers is actually $484,073$141,5881979 above the median.
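The dollar comparison can be reproduced directly from the figures quoted above: SDy of roughly $11,327 in 1979 dollars, an observed gap of 2 output units between the median and the 84.13th percentile (so 2 units = 1 SDy), and observed gaps of 7 and 25 units at the 97.73rd and 99.86th percentiles.

```python
# Normal-theory vs. observed (Paretian) productivity premia over the median, in 1979 dollars.
SDY_DOLLARS = 11_327   # Schmidt et al 1979 estimate of the dollar value of one SD of performance
SDY_UNITS = 2.0        # observed median -> 84.13th percentile gap, in output units

for pct, z, observed_units in [("84.13th", 1, 2), ("97.73rd", 2, 7), ("99.86th", 3, 25)]:
    normal_premium = z * SDY_DOLLARS                                # what normality predicts
    observed_premium = (observed_units / SDY_UNITS) * SDY_DOLLARS   # what the data imply
    print(f"{pct}: normal ${normal_premium:,.0f} vs observed ${observed_premium:,.0f}")
```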
We chose Study 1 because of its large overall sample size, but these same patterns of productivity are found across all 5 studies. In light of our results, the value-added created by new pre-employment tests and the dollar value of training programs should be reinterpreted from a Paretian point of view that acknowledges that the differences between workers at the tails and workers at the median are considerably wider than previously thought. These are large and meaningful differences suggesting important implications of shifting from a normal to a Paretian distribution.
…Finally, going beyond any individual research domain, a Paretian distribution of performance may help explain why despite more than a century of research on the antecedents of job performance and the countless theoretical models proposed, explained variance estimates (R2) rarely exceed 0.50 (Cascio & Aguinis, 2008b). It is possible that research conducted over the past century has not made important improvements in the ability to predict individual performance because prediction techniques rely on means and variances assumed to derive from normal distributions, leading to gross errors in the prediction of performance. As a result, even models including theoretically sound predictors and administered to a large sample will most often fail to account for even half of the variability in workers’ performance. Viewing individual performance from a Paretian perspective and testing theories with techniques that do not require the normality assumptions will allow us to improve our understanding of factors that account for and predict individual performance. Thus, research addressing the prediction of performance should be conducted with techniques that do not require the normality assumption.
US productivity growth accelerated after 1995 (unlike Europe’s), particularly in sectors that intensively use information technologies (IT). Using two new micro panel datasets we show that US multinationals operating in Europe also experienced a “productivity miracle.” US multinationals obtained higher productivity from IT than non-US multinationals, particularly in the same sectors responsible for the US productivity acceleration. Furthermore, establishments taken over by US multinationals (but not by non-US multinationals) increased the productivity of their IT. Combining pan-European firm-level IT data with our management practices survey, we find that the US IT related productivity advantage is primarily due to its tougher “people management” practices.
We estimate the impact of coups and top-secret coup authorizations on asset prices of partially nationalized multinational companies that stood to benefit from U.S.-backed coups. Stock returns of highly exposed firms reacted to coup authorizations classified as top-secret. The average cumulative abnormal return to a coup authorization was 9% over 4 days for a fully nationalized company, rising to more than 13% over 16 days. Precoup authorizations accounted for a larger share of stock price increases than the actual coup events themselves. There is no effect in the case of the widely publicized, poorly executed Cuban operations, consistent with abnormal returns to coup authorizations reflecting credible private information. We also introduce two new intuitive and easy to implement nonparametric tests that do not rely on asymptotic justifications.
Reputation mechanisms are the most prominent way to establish trust between buyers and sellers on online auction sites. 2 drawbacks of this approach are the reliance on the seller being long-lived and the susceptibility to whitewashing. In this paper, we introduce so-called escrow mechanisms that avoid these problems by installing a trusted intermediary which forwards the payment to the seller only if the buyer acknowledges that the good arrived in the promised condition.
We address the incentive issues that arise and design an escrow mechanism that is incentive-compatible, efficient, interim individually rational and ex ante budget-balanced. In contrast to previous work on trust and reputation, our approach does not rely on knowing the sellers’ cost functions or the distribution of buyer valuations.
We measure the causal effects of online advertising on sales, using a randomized experiment performed in cooperation between Yahoo! and a major retailer. After identifying over one million customers matched in the databases of the retailer and Yahoo!, we randomly assign them to treatment and control groups. We analyze individual-level data on ad exposure and weekly purchases at this retailer, both online and in stores. We find statistically-significant and economically substantial impacts of the advertising on sales. The treatment effect persists for weeks after the end of an advertising campaign, and the total effect on revenues is estimated to be more than seven times the retailer’s expenditure on advertising during the study. Additional results explore differences in the number of advertising impressions delivered to each individual, online and offline sales, and the effects of advertising on those who click the ads versus those who merely view them. Power calculations show that, due to the high variance of sales, our large number of observations brings us just to the frontier of being able to measure economically substantial effects of advertising. We also demonstrate that without an experiment, using industry-standard methods based on endogenous cross-sectional variation in advertising exposure, we would have obtained a wildly inaccurate estimate of advertising effectiveness.
Traditional economic theories stress the relevance of political, institutional, geographic, and historical factors for economic growth. In contrast, human-capital theories suggest that peoples’ competences, mediated by technological progress, are the deciding factor in a nation’s wealth.
Using 3 large-scale assessments, we calculated cognitive-competence sums for the mean and for upper-level & lower-level groups for 90 countries and compared the influence of each group’s intellectual ability on gross domestic product. In our cross-national analyses, we applied different statistical methods (path analyses, bootstrapping) and measures developed by different research groups to various country samples and historical periods.
Our results underscore the decisive relevance of cognitive ability—particularly of an intellectual class with high cognitive ability and accomplishments in science, technology, engineering, and math—for national wealth. Furthermore, this group’s cognitive ability predicts the quality of economic and political institutions, which further determines the economic affluence of the nation. Cognitive resources enable the evolution of capitalism and the rise of wealth.
Measuring the causal effects of online advertising (adfx) on user behavior is important to the health of the WWW publishing industry. In this paper, using three controlled experiments, we show that observational data frequently lead to incorrect estimates of adfx. The reason, which we label “activity bias”, comes from the surprising amount of time-based correlation between the myriad activities that users undertake online.
In Experiment 1, users who are exposed to an ad on a given day are much more likely to engage in brand-relevant search queries as compared to their recent history, for reasons that had nothing to do with the advertisement. In Experiment 2, we show that activity bias occurs for page views across diverse websites. In Experiment 3, we track account sign-ups at the website of a competitor of the advertiser and find that many more people sign up on the day they saw an advertisement than on other days, but that the true “competitive effect” was minimal.
In all three experiments, exposure to a campaign signals doing “more of everything” in a given period of time, making it difficult to find a suitable “matched control” using prior behavior. In such cases, the “match” is fundamentally different from the exposed group, and we show how and why observational methods lead to a massive overestimate of adfx in such circumstances.
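A minimal simulation (constructed for illustration, not drawn from the paper’s data) shows how activity bias can manufacture a large apparent lift even when the ad has zero causal effect, and how a randomized holdout among exposure-eligible users removes it.

```python
# Activity bias: exposure and the outcome both rise with how active a user is that day,
# so exposed users look more responsive even though the ad does nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

activity = rng.gamma(shape=2.0, scale=1.0, size=n)          # today's browsing intensity
exposed = rng.random(n) < 1 - np.exp(-0.5 * activity)       # more active -> more likely to see the ad
outcome = rng.random(n) < 1 - np.exp(-0.2 * activity)       # more active -> more likely to search the brand

naive_lift = outcome[exposed].mean() - outcome[~exposed].mean()

holdout = rng.random(n) < 0.5                                # randomize away half of would-be exposures
true_lift = outcome[exposed & ~holdout].mean() - outcome[exposed & holdout].mean()

print(f"naive observational lift: {naive_lift:+.3f}")
print(f"randomized-holdout lift:  {true_lift:+.3f}  (true effect is zero)")
```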
This post investigates female attractiveness, but without the usual photo analysis stuff. Instead, we look past a woman’s picture, into the reaction she creates in the reptile mind of the human male. Among the remarkable things we’ll show:
that the more men as a group disagree about a woman’s looks, the more they end up liking her
guys tend to ignore girls who are merely cute
and, in fact, having some men think she’s ugly actually works in a woman’s favor
…Now let’s look back at the two real users from before, this time with their own graphs. OkCupid uses a 1 to 5 star system for rating people, so the rest of our discussion will be in those terms. All the users pictured were generous and confident enough to allow us to dissect their experience on our site, and we appreciate it. Okay, so we have: […] As you can see, though the average attractiveness for the two women above is very close, their vote patterns differ. On the left you have consensus, and on the right you have split opinion.
To put a fine point on it:
Ms. Left is, in an absolute sense, considered slightly more attractive
Ms. Right was also given the lowest rating 142% more often
yet Ms. Right gets 3× as many messages
When we began pairing other people of similar looks and profiles, but different message outcomes, this pattern presented itself again and again. The less-messaged woman was usually considered consistently attractive, while the more-messaged woman often created variation in male opinion…Our first result was to compare the standard deviation of a woman’s votes to the messages she gets. The more men disagree about a woman’s looks, the more they like her. I’ve plotted the deviation vs. messages curve below, again including some examples…
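As a sketch of the analysis described (run on synthetic data, since OkCupid’s vote and message logs are not public), one can regress message volume on both the mean and the standard deviation of each woman’s 1-to-5 star votes; in the post’s data, the coefficient on the standard deviation is positive, ie. disagreement helps.

```python
# Synthetic illustration: messages as a function of average rating and rating spread.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_women, n_votes = 2_000, 50

true_mean = rng.uniform(2.0, 4.5, n_women)
spread = rng.uniform(0.3, 1.5, n_women)
votes = np.clip(np.rint(rng.normal(true_mean[:, None], spread[:, None], (n_women, n_votes))), 1, 5)

vote_mean, vote_std = votes.mean(axis=1), votes.std(axis=1)
# Message counts generated to mimic the post's finding: both looks and disagreement matter.
messages = rng.poisson(np.exp(0.5 * vote_mean + 0.6 * vote_std))

X = sm.add_constant(np.column_stack([vote_mean, vote_std]))
print(sm.OLS(np.log1p(messages), X).fit().params)   # positive coefficient on the std column
```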
This article extends interplanetary trade theory to an interstellar setting. It is chiefly concerned with the following question: how should interest charges on goods in transit be computed when the goods travel at close to the speed of light? This is a problem because the time taken in transit will appear less to an observer traveling with the goods than to a stationary observer. A solution is derived from economic theory, and 2 useless but true theorems are proved.
…Interstellar trade…involves wholly novel considerations. The most important of these are the problem of evaluating capital costs on goods in transit when the time taken to ship them depends on the observer’s reference frame; and the proper modeling of arbitrage in interstellar capital markets where—or when (which comes to the same thing)—simultaneity ceases to have an unambiguous meaning…The remainder of this article is, will be, or has been, depending on the reader’s inertial frame, divided into 3 sections. Section II develops the basic Einsteinian framework of the analysis. In Section III, this framework is used to analyze interstellar trade in goods. Section IV then considers the role of interstellar capital movements. It should be noted that, while the subject of this article is silly, the analysis actually does make sense. This article, then, is a serious analysis of a ridiculous subject, which is of course the opposite of what is usual in economics…
First Fundamental Theorem of Interstellar Trade:
When trade takes place between 2 planets in a common inertial frame, the interest costs on goods in transit should be calculated using time measured by clocks in the common frame and not by clocks in the frames of trading spacecraft.
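A worked example with illustrative numbers (not Krugman’s): goods shipped 10 light-years at 0.8c take 12.5 years of planet-frame time but only 7.5 years of proper time aboard the ship; by the theorem, interest on the goods in transit accrues over the former, not the latter.

```python
# First Fundamental Theorem, numerically: use planet-frame clocks, not ship clocks.
import math

distance_ly, v = 10.0, 0.8        # light-years, velocity as a fraction of c
r = 0.05                          # annual interest rate on the common-frame planets

t_planet = distance_ly / v                      # 12.5 years in the common inertial frame
t_ship = t_planet * math.sqrt(1 - v ** 2)       # 7.5 years of proper time aboard the ship

print(f"interest factor, planet clocks: {math.exp(r * t_planet):.3f}")  # the correct charge
print(f"interest factor, ship clocks:   {math.exp(r * t_ship):.3f}")    # understates the cost
```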
Second Fundamental Theorem of Interstellar Trade:
If sentient beings may hold assets on 2 planets in the same inertial frame, competition will equalize the interest rates on the 2 planets.
Combining the 2 theorems developed in this article, it will be seen that we have the foundation for a coherent theory of interstellar trade between planets in the same inertial frame. Interstellar trading voyages can be regarded as investment projects, to be evaluated at an interest rate that will be common to the planets. From this point, the effects of trade on factor prices, income distribution, and welfare can be traced out using the conventional tools of general equilibrium analysis. The picture of the world—or, rather, of the universe—which emerges is not a lunatic vision; stellar, maybe, but not lunatic.
Is space the Final Frontier of economics? Certainly this is only a first probe of the subject, but the possibilities are surely limitless. (In curved space-time, of course, this does not prevent the possibilities from being finite as well!) I have not even touched on the fascinating possibilities of interstellar finance, where spot and forward exchange markets will have to be supplemented by conditional present markets. Those of us working in this field are still a small band, but we know that the Force is with us.
…This research was supported by a grant from the Committee to Re-Elect William Proxmire. This article is adapted with minor changes from a manuscript written in July 1978.
Merchant fees and reward programs generate an implicit monetary transfer to credit card users from non-card (or “cash”) users because merchants generally do not set differential prices for card users to recoup the costs of fees and rewards. On average, each cash-using household pays $202$1492010 to card-using households and each card-using household receives $1,532$1,1332010 from cash users every year. Because credit card spending and rewards are positively correlated with household income, the payment instrument transfer also induces a regressive transfer from low-income to high-income households in general. On average, and after accounting for rewards paid to households by banks, the lowest-income household ($27,048$20,0002010 or less annually) pays $28$212010 and the highest-income household ($202,862$150,0002010 or more annually) receives $1,014$7502010 every year. We build and calibrate a model of consumer payment choice to compute the effects of merchant fees and card rewards on consumer welfare. Reducing merchant fees and card rewards would likely increase consumer welfare.
New technologies have led to increased television advertising avoidance. In particular, mechanical avoidance in the form of zipping and zapping has gained momentum in recent years. Channel switching or “commercial zapping” studies employ diverse methodologies, including self reports, electronic monitoring, laboratory, and in-home observation which has led to a diversity of reported results. This article proposes advancing and standardizing the methodology to comprise a two-phase hidden observation and survey method. A number of research phases have led to the development of this method to collect both mechanical and behavioral avoidance data. The study includes a detailed outline of the hidden observation approach. The survey phase opens up the potential for the collection of viewer data that may further illuminate television advertising avoidance behavior.
This paper examines the effect of oral health on labor market outcomes by exploiting variation in fluoridated water exposure during childhood. The politics surrounding the adoption of water fluoridation by local governments suggests exposure to fluoride is exogenous to other factors affecting earnings. Exposure to fluoridated water increases women’s earnings by approximately 4%, but has no detectable effect for men. Furthermore, the effect is largely concentrated amongst women from families of low socioeconomic status. We find little evidence to support occupational sorting, statistical discrimination, and productivity as potential channels, with some evidence supporting consumer and possibly employer discrimination.
Purpose: This paper aims to analyse the governance structure of monasteries to gain new insights and apply them to solve agency problems of modern corporations. In a historical analysis of crises and closures, it asks whether Benedictine monasteries were and are capable of solving agency problems. The analysis shows that monasteries established basic governance instruments very early and therefore were able to survive for centuries.
Design/methodology/approach: The paper uses a dataset of all Benedictine abbeys that ever existed in Bavaria, Baden-Württemberg, and German-speaking Switzerland to determine their lifespan and the reasons for closures. The governance mechanisms are analyzed in detail. Finally, it draws conclusions relevant to the modern corporation. The theoretical foundations are based upon principal-agent theory, psychological economics, as well as embeddedness theory.
Findings: The monasteries that are examined show an average lifetime of almost 500 years and only a quarter of them dissolved as a result of agency problems. This paper argues that this success is due to an appropriate governance structure that relies strongly on internal control mechanisms.
Research limitations/implications: Benedictine monasteries and stock corporations differ fundamentally regarding their goals. Additional limitations of the monastic approach are the tendency to promote groupthink, the danger of dictatorship, and the lifelong commitment.
Practical implications: The paper adds new insights into the corporate governance debate designed to solve current agency problems and facilitate better control.
Originality/value: By analyzing monasteries, a new approach is offered to understand the efficiency of internal behavioral incentives and their combination with external control mechanisms in corporate governance.
The advent of file sharing has considerably weakened effective copyright protection. Today, more than 60% of Internet traffic consists of consumers sharing music, movies, books, and games. Yet, despite the popularity of the new technology, file sharing has not undermined the incentives of authors to produce new works. We argue that the effect of file sharing has been muted for three reasons. (1) The cannibalization of sales that is due to file sharing is more modest than many observers assume. Empirical work suggests that in music, no more than 20% of the recent decline in sales is due to sharing. (2) File sharing increases the demand for complements to protected works, raising, for instance, the demand for concerts and concert prices. The sale of more expensive complements has added to artists’ incomes. (3) In many creative industries, monetary incentives play a reduced role in motivating authors to remain creative. Data on the supply of new works are consistent with the argument that file sharing did not discourage authors and publishers. Since the advent of file sharing, the production of music, books, and movies has increased sharply.
The idea that expanding work and consumption opportunities always increases people’s wellbeing is well established in economics but finds no support in psychology. Instead, there is evidence in both economics and psychology that people’s life satisfaction depends on how experienced utility compares with expectations of life satisfaction or decision utility.
In this paper I suggest that expanding work and consumption opportunities is a good thing for decision utility but may not be so for experienced utility. On this premise, I argue that people may overrate their socioeconomic prospects relative to real life chances and I discuss how systematic frustration over unfulfilled expectations can be connected to people’s educational achievement.
I test the model’s predictions on Italian data and find preliminary support for the idea that education and access to stimulating environments may have a perverse impact on life satisfaction. I also find evidence that the latter effect is mediated by factors such as gender and age.
Indeed, the model seeks to go beyond the Italian case and provide more general insights into how age/life satisfaction relationships can be modelled and explained.
This paper uses data on German consumer magazines observed between 1992 and 2004 to analyze the extent to which consumers (dis-)like advertising. We estimate logit demand models separately for the six most important magazine segments in terms of circulation. We find little evidence for readers disliking advertising. On the contrary, we show that readers in many magazine segments appreciate advertising.
Readers of Women’s magazines, Business and politics magazines as well as Car magazines—market segments where advertisements are relatively more informative—appreciate advertising, while advertising is a nuisance to readers of Adult magazines, a segment where advertisements are particularly uninformative. Demand for interior design magazines is not well identified. Our logit demand estimates are confirmed by logit demand models with random coefficients and by magazine-specific monopoly demand models.
[Keywords: two-sided markets, advertising, mean group estimation, random coefficients model, media markets, nuisance]
Many consumers make poor financial choices, and older adults are particularly vulnerable to such errors. About half of the population between ages 80 and 89 have a medical diagnosis of substantial cognitive impairment. We study life-cycle patterns in financial mistakes using a proprietary database with information on 10 types of credit transactions. Financial mistakes include suboptimal use of credit card balance transfer offers and excess interest rate and fee payments. In a cross section of prime borrowers, middle-aged adults made fewer financial mistakes than either younger or older adults. We conclude that financial mistakes follow a U-shaped pattern, with the cost-minimizing performance occurring around age 53. We analyze nine regulatory strategies that may help individuals avoid financial mistakes. We discuss laissez-faire, disclosure, nudges, financial “driver’s licenses”, advance directives, fiduciaries, asset safe harbors, and ex post and ex ante regulatory oversight. Finally, we pose seven questions for future research on cognitive limitations and associated policy responses.
Human behavior regarding medicine seems strange; assumptions and models that seem workable in other areas seem less so in medicine. Perhaps, we need to rethink the basics. Toward this end, I have collected many puzzling stylized facts about behavior regarding medicine, and have sought a small number of simple assumptions which might together account for as many puzzles as possible.
The puzzles I consider include a willingness to provide more medical than other assistance to associates, a desire to be seen as so providing, support for nation, firm, or family provided medical care, placebo benefits of medicine, a small average health value of additional medical spending relative to other health influences, more interest in public than private signals of medical quality, medical spending as an individual necessity but national luxury, a strong stress-mediated health status correlation, and support for regulating health behaviors of the low status. These phenomena seem widespread across time and cultures.
I can explain these puzzles moderately well by assuming that humans evolved deep medical habits long ago in an environment where people gained higher status by having more allies, honestly cared about those who remained allies, were unsure who would remain allies, wanted to seem reliable allies, inferred such reliability in part based on who helped who with health crises, tended to suffer more crises requiring non-health investments when having fewer allies, and invested more in cementing allies in good times in order to rely more on them in hard times.
These ancient habits would induce modern humans to treat medical care as a way to show that you care. Medical care provided by our allies would reassure us of their concern, and allies would want you and other allies to see that they had paid enough to distinguish themselves from posers who didn’t care as much as they did. Private information about medical quality is mostly irrelevant to this signaling process.
If people with fewer allies are less likely to remain our allies, and if we care about them mainly assuming they remain our allies, then we want them to invest more in health than they would choose for themselves. This tempts us to regulate their health behaviors. This analysis suggests that the future will continue to see robust desires for health behavior regulation and for communal medical care and spending increases as a fraction of income, all regardless of the health effects of these choices.
[Atlantic summary: Paul Josephson, the self-described “Mr. Fish Stick”, is probably best at explaining why the fish stick became successful. Josephson teaches Russian and Soviet history at Colby College, in Maine, but his research interests are wide ranging (think sports bras, aluminum cans, and speed bumps). In 2008, he wrote what is the defining scholarly paper on fish sticks. The research for it required him to get information from seafood companies, which proved unexpectedly challenging. “In some ways, it was easier to get into Soviet archives having to do with nuclear bombs”, he recalls.
Josephson dislikes fish sticks. Even as a kid, he didn’t understand why they were so popular. “I found them dry”, he says. Putting aside personal preference, Josephson insists that the world didn’t ask for fish sticks. “No one ever demanded them.”
Instead, the fish stick solved a problem that had been created by technology: too much fish. Stronger diesel engines, bigger boats, and new materials increased catches after the Second World War. Fishers began scooping up more fish than ever before, Josephson says. To keep them from spoiling, fishers skinned, gutted, deboned, and froze their hauls on board…Frozen fish, however, had a terrible reputation. Early freezers chilled meat and vegetables slowly, causing the formation of large ice crystals that turned food mushy upon defrosting.
That all changed in the 1920s, when the entrepreneur Clarence Birdseye developed a novel freezing technique, in which food was placed between metal plates. Food froze so quickly that the dreaded ice crystals couldn’t form. But when used on fish, the method created large blocks of intermingled fillets that, when pried apart, tore into “mangled, unappetizing chunks”, Josephson wrote. The fishing industry tried selling the blocks whole, as “fishbricks.” These were packaged like blocks of ice cream, with the idea that a home cook could chop off however much fish she wanted that day. But supermarkets had little luck selling the unwieldy bricks, and many stores even lacked adequate freezer space to display them.
Success came when the bricks were cut into standardized sticks. In a process that has remained essentially unchanged, factories run the frozen fish blocks through an X-ray machine to ensure they’re bone-free, then use band saws to cut them into slices. These “fingers” are dumped into a batter of egg, flour, salt, and spices, and then breaded. Afterward, they’re briefly tossed into hot oil to set the coating. The whole process takes about 20 minutes, during which the fish remains frozen, even when dunked in the deep fryer.
In 1953, 13 companies produced 3.4 million kilograms of fish sticks. A year later, 4 million kilograms were produced by another 55 companies. This surge in popularity was partly due to a marketing push that stressed the convenience of the new food: “no bones, no waste, no smell, no fuss”, as one Birds Eye advertisement proclaimed.
The appeal of fish sticks is somewhat paradoxical. They contain fish, but only that with the mildest flavor—and that fish has been dressed up to resemble chicken tenders. The battered disguise may be needed because, at least in North America, seafood tends to be second-tier. “We’ve mostly considered the eating of fish to be beneath our aspirations”, writes the chef and author Barton Seaver in American Seafood. Traditionally, fish was associated with sacrifice and penance—food to eat when meat was unaffordable or, if you were Catholic, to eat on the many days when red meat was verboten. Fish also spoils fast, smells bad, and contains sharp bones that pose a choking hazard.]
We highlight important differences between twenty-first-century organizations as compared with those of the previous century, and offer a critical review of the basic principles, typical applications, general effectiveness, and limitations of the current staffing model. That model focuses on identifying and measuring job-related individual characteristics to predict individual-level job performance.
We conclude that the current staffing model has reached a ceiling or plateau in terms of its ability to make accurate predictions about future performance. Evidence accumulated over more than 80 years of staffing research suggests that general mental abilities and other traditional staffing tools do a modest job of predicting performance across settings and jobs considering that, even when combined and corrected for methodological and statistical artifacts, they rarely predict more than 50% of the variance in performance.
Accordingly, we argue for a change in direction in staffing research and propose an expanded view of the staffing process, including the introduction of a new construct, in situ performance, and an expanded view of staffing tools to be used to predict future in situ performance that take into account time and context. Our critical review offers a novel perspective and research agenda with the goal of guiding future research that will result in more useful, applicable, relevant, and effective knowledge for practitioners to use in organizational settings.
If publishing an anomaly leads to the dissipation of its profitability, a notion that has mounting empirical support, then publishing a highly profitable market anomaly seems to be irrational behavior. This paper explores the issue by developing and empirically testing a theory that argues that publishing a market anomaly may, in fact, be rational behavior. The theory predicts that researchers with few (many) publications and lesser (stronger) reputations have the highest (lowest) incentive to publish market anomalies. Employing probit models, simple OLS regressions, and principal component analysis, we show that (a) market anomalies are more likely to be published by researchers with fewer previous publications and who have been in the field for a shorter period of time and (b) the profitability of published market anomalies is inversely related to the common factor spanning the number of publications the author has and the number of years that have elapsed since the professor earned his Ph.D. The empirical results suggest that the probability of publishing an anomaly and the profitability of anomalies that are published are inversely related to the reputation of the authors. These results corroborate the theory that publishing an anomaly is rational behavior for an author trying to establish his or her reputation.
This article notes 5 reasons why a correlation between a risk (or protective) factor and some specified outcome might not reflect environmental causation. In keeping with numerous other writers, it is noted that a causal effect is usually composed of a constellation of components acting in concert. The study of causation, therefore, will necessarily be informative on only one or more subsets of such components. There is no such thing as a single basic necessary and sufficient cause. Attention is drawn to the need to consider the (albeit unobservable) counterfactual (ie., what would have happened if the individual had not had the supposed risk experience). 15 possible types of natural experiments that may be used to test causal inferences with respect to naturally occurring prior causes (rather than planned interventions) are described. These comprise 5 types of genetically sensitive designs intended to control for possible genetic mediation (as well as dealing with other issues), 6 uses of twin or adoptee strategies to deal with other issues such as selection bias or the contrasts between different environmental risks, 2 designs to deal with selection bias, regression discontinuity designs to take into account unmeasured confounders, and the study of contextual effects. It is concluded that, taken in conjunction, natural experiments can be very helpful in both strengthening and weakening causal inferences.
The common law rule against perpetuities maintained the alienability of property by voiding interests in property that did not vest within a life in being at the creation of the interest plus twenty-one years. The rule was applied strictly, often producing harsh results. The courts used a what-might-happen test to strike down nonvested interests that might not have vested in a timely manner. During the last half-century, many legislatures have softened the application of the rule against perpetuities by enacting wait-and-see provisions, which require courts to decide cases based on the facts as they actually developed, and reformation, which allowed some nonvested interests to be reformed to save them from invalidity.
This paper describes the common law rule. Then it traces the modern developments, including promulgation of the widely adopted Uniform Statutory Rule Against Perpetuities, which includes an alternate 90 year fixed wait-and-see period to be applied in place of the common law’s lives in being plus twenty-one years.
The paper continues by exploring the policies which underlie the rule against perpetuities. Then, after finding that there is no substantial movement to repeal the rule except as to trusts, it argues that federal law, including federal transfer taxes, cannot and should not be used to implement the policies served by the rule itself.
There is a continuing need for state rules against perpetuities. The paper proposes that the rule be modified to make it more understandable and easier to apply. The proposed rule would replace lives in being plus twenty-one years with a fixed term of years. This would eliminate most of the difficulties encountered in application of the rule. Wait-and-see and reformation are part of the proposed rule. The proposed rule provides for determination of valid interests at the end of the fixed term of years and contains a definition of “vested” to enable judges and attorneys to apply the rule in cases which will arise many years in the future.
The thesis that economics is “performative” (Callon 1998) has provoked much interest but also some puzzlement and not a little confusion. The purpose of this article is to examine from the viewpoint of performativity one of the most successful areas of modern economics, the theory of options, and in so doing hopefully to clarify some of the issues at stake. To claim that economics is performative is to argue that it does things, rather than simply describing (with greater or lesser degrees of accuracy) an external reality that is not affected by economics. But what does economics do, and what are the effects of it doing what it does?
That the theory of options is an appropriate place around which to look for performativity is suggested by two roughly concurrent developments. Since the 1950s, the academic study of finance has been transformed from a low-status, primarily descriptive activity to a high-status, analytical, mathematical, Nobel-prize-winning enterprise. At the core of that enterprise is a theoretical account of options dating from the start of the 1970s (Black-Scholes). Around option theory there has developed a large array of sophisticated mathematical analyses of financial derivatives. (A “derivative” is a contract or security, such as an option, the value of which depends upon the price of another asset or upon the level of an index or interest rate.)
…Away from the hubbub, computers were used to generate Black-Scholes prices. Those prices were reproduced on sets of paper sheets which floor traders could carry around, often tightly wound cylindrically with only immediately relevant rows visible so that a quick squint would reveal the relevant price. While some individual traders and trading firms produced their own sheets, others used commercial services. Perhaps the most widely used sheets were sold by Fischer Black himself: see figure 2. Each month, Black would produce computer-generated sheets of theoretical prices for all the options traded on U.S. options exchanges, and have them photocopied and sent to those who subscribed to his pricing service. In 1975, for example, sheets for 100 stocks, with 3 volatility estimates for each stock, cost $1,205$3001975 per month, while a basic service with one stock and one volatility estimate cost $60$151975 per month (Black 1975b, “The Option Service: An Introduction”)
At first sight, Black’s sheets look like monotonous arrays of figures. They were, however, beautifully designed for their intended role in “distributed cognition” (Hutchins 1995a and b). Black included what options traders using the Black-Scholes-Merton model needed to know, but no more than they needed to know—there is virtually no redundant information on a sheet—hence their easy portability. He found an ad hoc but satisfactory way of dealing with the consequences of dividends for option pricing (an issue not addressed in the original version of the model), and devoted particular care to the crucial matter of the estimation of volatility. Even the physical size of the sheets was well-judged. Prices had first to be printed on the large computer line-printer paper of the period, but they were then photo-reduced onto standard-sized paper, differently colored for options traded on the different exchanges. The resultant sheets were small enough for easy handling, but not so small that the figures became too hard to read (the reproduction in figure 2 is smaller than full-scale).
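For reference, the quantity each cell of such a sheet reported is the standard Black-Scholes value of a call option, shown here in a minimal sketch that ignores dividends (which, as noted above, Black handled with an ad hoc adjustment); the example inputs are hypothetical.

```python
# Minimal Black-Scholes call price (no dividends), the kind of figure printed on the sheets.
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """European call: spot S, strike K, T years to expiry, rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# One hypothetical cell: a 6-month $45 call on a $40 stock, 5% rate, 30% volatility.
print(f"${bs_call(S=40, K=45, T=0.5, r=0.05, sigma=0.30):.2f}")
```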
How were Black’s sheets and similar option pricing services used? They could, of course, simply be used to set option prices. In April 1976, options trading began on the Pacific Stock Exchange in San Francisco, and financial economist Mark Rubinstein became a trader there. He told me in an interview that he found his fellow traders on the new exchange initially heavily reliant on Black’s sheets: “I walked up [to the most active option trading ‘crowd’] and looked at the screen [of market prices] and at the sheet and it was identical. I said to myself, ‘academics have triumphed’” (Rubinstein 2000).
We model the relationship between asset float (tradeable shares) and speculative bubbles.
Investors with heterogeneous beliefs and short-sales constraints trade a stock with limited float because of insider lockups. A bubble arises as the price overweights optimists’ beliefs and investors anticipate the option to resell to those with even higher valuations. The bubble’s size depends on float as investors anticipate an increase in float with lockup expirations and speculate over the degree of insider selling.
Consistent with the Internet experience, the bubble, turnover, and volatility decrease with float and prices drop on the lockup expiration date.
Open systems strategy enables a sponsor to diffuse its technology and promotes standardization in an industry. However, this strategy has mainly been studied in high-tech settings. We hypothesize that, in a non-high-tech industry, a sponsor giving access to its technical knowledge may impact industry structure. Based on a survey of the U.S. tabletop role-playing game (RPG) industry, our results highlight that the introduction of an open system in a sector creates an entry induction phenomenon and that new entrants adopt the open system more readily than incumbents. Moreover, the average size of the firms in the industry decreases due to vertical specialization.
…Sample and Data: For the purpose of this study we have compared the structure of the RPG sector before and after the introduction of the d20 open license. Our comparison is between the 2-year periods of 1998–99 (before the introduction of the d20 license) and 2000–01 (after the introduction of the d20 license). These periods can legitimately be compared, as the U.S. market segment encompassing RPG products did not witness a drastic evolution over these 4 years. After collecting qualitative data on the industry from RPG publications (Comics and Games Retailer, D20 Magazine, Dragon Magazine) and Internet websites (D20 Reviews, Game Manufacturers Association, Game Publishers Association, GameSpy, Gaming Report, RPGA Network, RPGNow, RPG Planet, Wizard’s Attic), we established an exhaustive list of the 193 active U.S. companies publishing RPGs and compiled a database comprising 3 firm variables: age, size (number of employees), and technological system adopted (the open system vs. proprietary systems). These data were collected from company websites. We collected information…
…Results: We hypothesized that the introduction of an open system in an industry would favor the arrival of new entrants (Table 1). Hypothesis 1 was strongly supported by our chi-square analysis. The 2000–01 period saw 78 new entrants into the RPG sector, with only 20 new entrants in the 1998–99 period (χ² = 12.35, statistically-significant at the 0.01 level). Of the 78 new entrants in the 2000–01 period, 51 adopted the d20 license (Table 2). This proportion was markedly greater than for incumbents, strongly supporting Hypothesis 2 (χ² = 17.89, statistically-significant at the 0.01 level). New entrants were found to adopt the new open system more readily than incumbents. These new entrants were essentially players and former freelancers operating within the sector who saw the d20 as an opportunity to avoid the prevailing development costs and switching costs for players, and so decided to launch their own company.
It should be noted that some firms, both new entrants and incumbents, coupled the open system with development of their own proprietary game’s rules of play. Moreover, 27 new entrants did not adopt the d20 license. This figure corresponds roughly to the number of new entrants during the 1998–99 period (i.e., 20). This confirms that the 2 periods (1998–99 and 2000–01) are comparable and that no exogenous variable has drastically modified the economic context of the industry. We can then attribute the new entries in the RPG industry in 2000–01 to the introduction of the d20 license per se.
We hypothesized that the diffusion of an open system into an industry should lead to a decrease in the average size of companies in that industry. Our ANOVA result strongly supports this hypothesis (F = 8.739, statistically-significant at the 0.01 level). Indeed, even though RPG companies have traditionally been very small, their average size became even smaller after the diffusion of the d20 system (reducing from an average of 5.02 down to 2.76 employees).
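As a rough illustration of the two tests reported above, here is a minimal Python sketch. The 78/20 entrant counts and the 51/27 adoption split among entrants come from the text; the incumbents’ adoption split is a hypothetical placeholder, and the excerpt does not state the expected frequencies the authors used, so this sketch will not reproduce their exact χ² values.

```python
# Minimal sketch of the two tests described above (SciPy).  The incumbent
# adoption split is a hypothetical placeholder; only the entrant counts
# (78 vs. 20; 51 of 78 adopting d20) come from the text, so the printed
# statistics will not match the paper's chi-squared values.
from scipy.stats import chisquare, chi2_contingency

# Hypothesis 1: more new entrants in 2000-01 than in 1998-99, tested here
# against a naive null of equal entry in both periods.
print(chisquare([78, 20], f_exp=[49, 49]))

# Hypothesis 2: new entrants adopt the d20 license more readily than
# incumbents.  Rows: entrants, incumbents; columns: adopted d20, did not.
table = [[51, 27],    # new entrants, 2000-01 (from the text)
         [30, 85]]    # incumbents (hypothetical split of the 115 incumbents)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```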
Morally, when should one retire from one’s job? The surprising answer may be ‘now’. It is commonly assumed that for a person who has acquired professional training at some personal effort, is employed in a task that society considers useful, and is working hard at it, no moral problem arises about whether that person should continue working. I argue that this may be a mistake: within many professions and pursuits, each one among the majority of those positive, productive, hard working people ought to consider leaving his or her job.
The actions of the federal government can have a profound impact on financial markets. As prominent participants in the government decision making process, U.S. Senators are likely to have knowledge of forthcoming government actions before the information becomes public. This could provide them with an informational advantage over other investors. We test for abnormal returns from the common stock investments of members of the U.S. Senate during the period 1993–1998. We document that a portfolio that mimics the purchases of U.S. Senators beats the market by 85 basis points per month, while a portfolio that mimics the sales of Senators lags the market by 12 basis points per month. The large difference in the returns of stocks bought and sold (nearly one percentage point per month) is economically large and reliably positive.
Two extreme views bracket the range of thinking about the amount of money in U.S. political campaigns. At one extreme is the theory that contributors wield considerable influence over legislators. Even modest contributions may be cause for concern and regulation, given the extremely large costs and benefits that are levied and granted by government. An alternative view holds that contributors gain relatively little political leverage from their donations, since the links from an individual campaign contribution to the election prospects of candidates and to the decisions of individual legislators are not very firm. Although these theories have different implications, they share a common perspective that campaign contributions should be considered as investments in a political marketplace, where a return on that investment is expected.
In this paper, we begin by offering an overview of the sources and amounts of campaign contributions in the U.S. In the light of these facts, we explore the assumption that the amount of money in U.S. campaigns mainly reflects political investment. We then offer our perspective that campaign contributions should be viewed primarily as a type of consumption good, rather than as a market for buying political benefits. Although this perspective helps to explain the levels of campaign contributions by individuals and organizations, it opens up new research questions of its own.
This study uses microdata from the 1972–1981 National Health Interview Surveys (NHIS) to examine how health status and medical care utilization fluctuate with state macroeconomic conditions. Personal characteristics, location fixed-effects, general time effects and (usually) state-specific time trends are controlled for. The major finding is that there is a counter-cyclical variation in physical health that is especially pronounced for individuals of prime-working age, employed persons, and males. The negative health effects of economic expansions persist or accumulate over time, are larger for acute than chronic ailments, and occur despite a protective effect of income and a possible increase in the use of medical care. Finally, there is some suggestion that mental health may be procyclical, in sharp contrast to physical well-being.
[Keywords: Health status, Morbidity, Macroeconomic conditions.]
In the December 1928 issue of the Economic Journal, Frank Ramsey asked the question “how much of its income should a nation save?” Few of the Cambridge economists of the 1930s were convinced by his highly formalized answer. His contribution sank quickly into oblivion, remaining there for about thirty-five years. In the 1960s, the success of the Hamiltonian formalism and the increasing interest in optimal growth led, on the contrary, to a quasi-“natural” use of Ramsey’s former intuitions. These mathematical tools became so widespread that, a few years later, new classical macroeconomics used a new interpretation of the “à la Ramsey” models, within the setting of representative-agent models, in order to bypass the Arrow and Sonnenschein-Mantel-Debreu “impossibility results.” The “à la Ramsey” model is the backbone of modern new classical macroeconomics. It is thus not surprising that these successive moves in macroeconomic theory came to foster a slanted interpretation of Ramsey’s 1928 article. In this respect, Roger E. A. Farmer’s point of view is representative of the retrospective tribute sometimes paid to Ramsey’s article:
F. Ramsey was one of the first economists to study how an infinitely lived agent should allocate his resources over time. His work was at the forefront of mathematical economics at the time it was written but his approach has now become a standard part of graduate macroeconomic courses. Many applications of Ramsey’s work assume that there is only one agent in the economy and that this representative agent can be thought of as a stand-in for the workings of the market mechanism (Farmer 1993, p. 77).
The continuing controversy over online file sharing sparks me to offer a few thoughts as an author and publisher. To be sure, I write and publish neither movies nor music, but books. But I think that some of the lessons of my experience still apply.
Lesson 1: Obscurity is a far greater threat to authors and creative artists than piracy.
…More than 100,000 books are published each year, with several million books in print, yet fewer than 10,000 of those new books have any substantial sales, and only a hundred thousand or so of all the books in print are carried in even the largest stores…The web has been a boon for readers, since it makes it easier to spread book recommendations and to purchase the books once you hear about them. But even then, few books survive their first year or two in print. Empty the warehouses and you couldn’t give many of them away…
Lesson 2: Piracy is progressive taxation
For all of these creative artists, most laboring in obscurity, being well-enough known to be pirated would be a crowning achievement. Piracy is a kind of progressive taxation, which may shave a few percentage points off the sales of well-known artists (and I say “may” because even that point is not proven), in exchange for massive benefits to the far greater number for whom exposure may lead to increased revenues…
Lesson 3: Customers want to do the right thing, if they can.
…We’ve found little or no abatement of sales of printed books that are also available for sale online…The simplest way to get customers to stop trading illicit digital copies of music and movies is to give those customers a legitimate alternative, at a fair price.
Lesson 4: Shoplifting is a bigger threat than piracy.
…What we have is a problem that is analogous, at best, to shoplifting, an annoying cost of doing business. And overall, as a book publisher who also makes many of our books available in electronic form, we rate the piracy problem as somewhere below shoplifting as a tax on our revenues. Consistent with my observation that obscurity is a greater danger than piracy, shoplifting of a single copy can lead to lost sales of many more. If a bookstore has only one copy of your book, or a music store one copy of your CD, a shoplifted copy essentially makes it disappear from the next potential buyer’s field of possibility. Because the store’s inventory control system says the product hasn’t been sold, it may not be reordered for weeks or months, perhaps not at all. I have many times asked a bookstore why they didn’t have copies of one of my books, only to be told, after a quick look at the inventory control system: “But we do. It says we still have one copy in stock, and it hasn’t sold in months, so we see no need to reorder.” It takes some prodding to force the point that perhaps it hasn’t sold because it is no longer on the shelf…
Lesson 5: File sharing networks don’t threaten book, music, or film publishing. They threaten existing publishers.
…The question before us is not whether technologies such as peer-to-peer file sharing will undermine the role of the creative artist or the publisher, but how creative artists can leverage new technologies to increase the visibility of their work. For publishers, the question is whether they will understand how to perform their role in the new medium before someone else does. Publishing is an ecological niche; new publishers will rush in to fill it if the old ones fail to do so…Over time, it may be that online music publishing services will replace CDs and other physical distribution media, much as recorded music relegated sheet music publishers to a niche and, for many, made household pianos a nostalgic affectation rather than the home entertainment center. But the role of the artist and the music publisher will remain. The question then, is not the death of book publishing, music publishing, or film production, but rather one of who will be the publishers.
Lesson 6: “Free” is eventually replaced by a higher-quality paid service
A question for my readers: How many of you still get your email via peer-to-peer UUCP dialups or the old “free” Internet, and how many of you pay $19.95 a month or more (2002 dollars; ≈$32 today) to an ISP? How many of you watch “free” television over the airwaves, and how many of you pay $20–$60 a month (≈$32–$96 today) for cable or satellite television? (Not to mention continuing to rent movies on videotape and DVD, and purchasing physical copies of your favorites.) Services like Kazaa flourish in the absence of competitive alternatives. I confidently predict that once the music industry provides a service that provides access to all the same songs, freedom from onerous copy-restriction, more accurate metadata and other added value, there will be hundreds of millions of paying subscribers…Another lesson from television is that people prefer subscriptions to pay-per-view, except for very special events. What’s more, they prefer subscriptions to larger collections of content, rather than single channels. So, people subscribe to “the movie package”, “the sports package” and so on. The recording industry’s “per song” trial balloons may work, but I predict that in the long term, an “all-you-can-eat” monthly subscription service (perhaps segmented by musical genre) will prevail in the marketplace.
Lesson 7: There’s more than one way to do it.
A study of other media marketplaces shows, though, that there is no single silver-bullet solution. A smart company maximizes revenue through all its channels, realizing that its real opportunity comes when it serves the customer who ultimately pays its bills…Interestingly, some of our most successful print/online hybrids have come about where we present the same material in different ways for the print and online contexts. For example, much of the content of our bestselling book Programming Perl (more than 600,000 copies in print) is available online as part of the standard Perl documentation. But the entire package—not to mention the convenience of a paper copy, and the aesthetic pleasure of the strongly branded packaging—is only available in print. Multiple ways to present the same information and the same product increase the overall size and richness of the market. And that’s the ultimate lesson. “Give the Wookiee what he wants!” as Han Solo said so memorably in the first Star Wars movie. Give it to him in as many ways as you can find, at a fair price, and let him choose which works best for him.
In this article, we investigate the long-run relationships among disasters, capital accumulation, total factor productivity, and economic growth.
The cross-country empirical analysis demonstrates that higher frequencies of climatic disasters are correlated with higher rates of human capital accumulation, increases in total factor productivity, and economic growth.
Though disaster risk reduces the expected rate of return to physical capital, risk also serves to increase the relative return to human capital. Thus, physical capital investment may fall, but there is also a substitution toward human capital investment. Disasters also provide the impetus to update the capital stock and adopt new technologies, leading to improvements in total factor productivity.
Every product in the marketplace has substitutes and complements. A substitute is another product you might buy if the first product is too expensive. Chicken is a substitute for beef. If you’re a chicken farmer and the price of beef goes up, people will want more chicken, and you will sell more. A complement is a product that you usually buy together with another product. Gas and cars are complements. Computer hardware is a classic complement of computer operating systems. And babysitters are a complement of dinner at fine restaurants. In a small town, when the local five-star restaurant has a two-for-one Valentine’s Day special, the local babysitters double their rates. (Actually, the nine-year-olds get roped into early service.) All else being equal, demand for a product increases when the prices of its complements decrease.
Let me repeat that because you might have dozed off, and it’s important. Demand for a product increases when the prices of its complements decrease. For example, if flights to Miami become cheaper, demand for hotel rooms in Miami goes up—because more people are flying to Miami and need a room. When computers become cheaper, more people buy them, and they all need operating systems, so demand for operating systems goes up, which means the price of operating systems can go up.
…Once again: demand for a product increases when the price of its complements decreases. In general, a company’s strategic interest is going to be to get the price of their complements as low as possible. The lowest theoretically sustainable price would be the “commodity price”—the price that arises when you have a bunch of competitors offering indistinguishable goods. So:
Smart companies try to commoditize their products’ complements.
If you can do this, demand for your product will increase and you will be able to charge more and make more.
People prefer to make changeable decisions rather than unchangeable decisions because they do not realize that they may be more satisfied with the latter. Photography students believed that having the opportunity to change their minds about which prints to keep would not influence their liking of the prints. However, those who had the opportunity to change their minds liked their prints less than those who did not (Study 1). Although the opportunity to change their minds impaired the post-decisional processes that normally promote satisfaction (Study 2a), most participants wanted to have that opportunity (Study 2b). The results demonstrate that errors in affective forecasting can lead people to behave in ways that do not optimize their happiness and well-being.
[2002? Short technology essay based on Myer & Sutherland 1968 (!) discussing a perennial pattern in computing history dubbed the ‘Wheel of Reincarnation’ for how old approaches inevitably reincarnate as the exciting new thing: shifts between ‘local’ and ‘remote’ computing resources, which are exemplified by repeated cycles in graphical display technologies from dumb ‘terminals’ which display only raw pixels to smart devices which interpret more complicated inputs like text or vectors or finally full-blown programming languages which render specified images locally (eg., PostScript).
These cycles are driven by cost, latency, architectural simplicity, and available computing power.
The Wheel of Reincarnation paradigm has played out for computers as well, in shifts from local terminals attached to mainframes to PCs to smartphones to ‘cloud computing’. Similar cycles can play out with other techs like software configuration.]
This paper provides the first estimates of overall CPI bias prior to the 1970s and new estimates of bias since the 1970s. It finds that annual CPI bias was −0.1% between 1888 and 1919 and rose to 0.7% between 1919 and 1935. Annual CPI bias was 0.4% in the 1960s and then rose to 2.7% between 1972 and 1982 before falling to 0.6% between 1982 and 1994. The findings imply that we have underestimated growth rates in true income in the 1920s and 1930s and in the 1970s.
In this paper we provide experimental evidence indicating that incentive contracts may cause a strong crowding out of voluntary cooperation. This crowding-out effect constitutes costs of incentive provision that have been largely neglected by economists. In our experiments the crowding-out effect is so strong that the incentive contracts are less efficient than contracts without any incentives. Principals, nonetheless, prefer the incentive contracts because they allow them to appropriate a much larger share of the (smaller) total surplus and are, hence, more profitable for them.
The contracting view of CEO pay assumes that pay is used by shareholders to solve an agency problem. Simple models of the contracting view predict that pay should not be tied to luck, where luck is defined as observable shocks to performance beyond the CEO’s control.
Using several measures of luck, we find that CEO pay in fact responds as much to a lucky dollar as to a general dollar.
A skimming model, where the CEO has captured the pay-setting process, is consistent with this fact. Because some complications to the contracting view could also generate pay for luck, we test for skimming directly by examining the effect of governance. Consistent with skimming, we find that better-governed firms pay their CEO less for luck.
The fact that sick elderly people without prescription drug coverage pay far more for drugs than do people with private health insurance has created a call for state and federal governments to take action. Antitrust cases have been launched, state price control legislation has been enacted, and proposals for expansion of Medicare have been offered in response to price and spending levels for prescription drugs.
This paper offers an analysis aimed at understanding pricing patterns of brand-name prescription drugs. I focus on the basic economic forces that enable differential pricing of products to exist and show how features of the prescription drug market promote such phenomena. The analysis directs policy attention toward how purchasing practices can be changed to better represent groups that pay the most and are most disadvantaged.
[Keywords: prescription drugs, markets, formularies, Health maintenance organizations (HMOs), managed care, brand-name drugs, prescription drug costs, pharmacy benefit managers, elderly care, co-payments]
The military drawdown program of the early 1990s provides an opportunity to obtain estimates of personal discount rates based on large numbers of people making real choices involving large sums. The program offered over 65,000 separatees the choice between an annuity and a lump-sum payment. Despite break-even discount rates exceeding 17%, most of the separatees selected the lump sum—saving taxpayers $1.7 billion in separation costs (2001 dollars; ≈$2.78 billion today). Estimates of discount rates range from 0 to over 30% and vary with education, age, race, sex, number of dependents, ability test score, and the size of payment.
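For readers who want to see how a “break-even discount rate” of the kind mentioned above is computed, here is a minimal sketch; the dollar amounts and horizon are invented placeholders, not the drawdown program’s actual annuity terms.

```python
# Minimal sketch: the break-even discount rate is the rate at which the
# present value of the annuity equals the lump sum.  The amounts and the
# 15-year horizon below are hypothetical placeholders, not the actual
# terms offered in the military drawdown program.
from scipy.optimize import brentq

lump_sum = 50_000.0        # hypothetical lump-sum offer
annual_payment = 8_000.0   # hypothetical annual annuity payment
years = 15                 # hypothetical annuity horizon

def pv_annuity(rate):
    """Present value of the annuity at a given annual discount rate."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# A separatee whose personal discount rate exceeds this break-even rate
# would (rationally) prefer the lump sum.
breakeven = brentq(lambda r: pv_annuity(r) - lump_sum, 1e-9, 1.0)
print(f"break-even discount rate: {breakeven:.1%}")
```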
This Article provides theoretical and empirical support for the claim that organized crime competes with the state to provide property rights enforcement and protection services. Drawing on extensive data from Japan, this Article shows that like firms in regulated environments everywhere, the structure and activities of organized criminal firms are substantially shaped by state-supplied institutions. Careful observation reveals that in Japan, the activities of organized criminal firms closely track inefficiencies in formal legal structures, including both inefficient substantive laws and a state-induced shortage of legal professionals and other rights-enforcement agents. Thus organized crime in Japan—and, by extension, in other countries where substantial gaps exist between formal property rights structures and state enforcement capacities—is the dark side of private ordering.
Regression analyses show negative correlations between membership in Japanese organized criminal firms and (a) civil cases, (b) bankruptcies, (c) reported crimes, and (d) loans outstanding. Professors Milhaupt and West interpret these data to support considerable anecdotal evidence that members of organized criminal firms in Japan play an active entrepreneurial role in substituting for state-supplied enforcement mechanisms and other public services in such areas as dispute mediation, bankruptcy and debt collection, (unorganized) crime control, and finance. They offer additional empirical evidence indicating that arrests of gang members do not curb the growth of organized criminal firms. Their findings may have an important normative implication for transition economies: efforts to eradicate organized crime should focus on the alteration of institutional incentive structures and the stimulation of competing rights-enforcement agents rather than on traditional crime-control activities.
A world product time series covering two million years is well fit by either a sum of four exponentials, or a constant elasticity of substitution (CES) combination of three exponential growth modes: “hunting”, “farming”, and “industry.” The CES parameters suggest that farming substituted for hunting, while industry complemented farming, making the industrial revolution a smoother transition. Each mode grew world product by a factor of a few hundred, and grew a hundred times faster than its predecessor. This weakly suggests that within the next century a new mode might appear with a doubling time measured in days, not years.
Q. Can you briefly summarize what the Open Gaming Movement is about? Where did it come from, and what does it mean to the average gamer?
A. Sure. Prepare yourself for a big gulp of business theory…That brings us to Open Gaming, and why we’re pursuing this initiative inside Wizards and outside to the larger community of game publishers.
Here’s the logic in a nutshell. We’ve got a theory that says that D&D is the most popular role playing game because it is the game more people know how to play than any other game. (For those of you interested in researching the theory, this concept is called “The Theory of Network Externalities”). Note: This is a very painful concept for a lot of people to embrace, including a lot of our own staff, and including myself for many years. The idea that D&D is somehow “better” than the competition is a powerful and entrenched concept. The idea that D&D can be “beaten” by a game that is “better” than D&D is at the heart of every business plan from every company that goes into marketplace battle with the D&D game. If you accept the Theory of Network Externalities, you have to admit that the battle is lost before it begins, because the value doesn’t reside in the game itself, but in the network of people who know how to play it.
If you accept (as I have finally come to do) that the theory is valid, then the logical conclusion is that the larger the number of people who play D&D, the harder it is for competitive games to succeed, and the longer people will stay active gamers, and the more value the network of D&D players will have to Wizards of the Coast. In fact, we believe that there may be a secondary market force we jokingly call “The Skaff Effect”, after our own Skaff Elias. Skaff is one of the smartest guys in the company, and after looking at lots of trends and thinking about our business over a long period of time, he enunciated his theory thusly:
“All marketing and sales activity in a hobby gaming genre eventually contributes to the overall success of the market share leader in that genre.”
In other words, the more money other companies spend on their games, the more D&D sales are eventually made. Now, there are clearly issues of efficiency—not every dollar input to the market results in a dollar output in D&D sales; and there is a substantial time lag between input and output; and a certain amount of people are diverted from D&D to other games never to return. However, we believe very strongly that the net effect of the competition in the RPG genre is positive for D&D. The downside here is that I believe that one of the reasons that the RPG as a category has declined so much from the early 90’s relates to the proliferation of systems. Every one of those different game systems creates a “bubble” of market inefficiency; the cumulative effect of all those bubbles has proven to be a massive downsizing of the marketplace. I have to note, highlight, and reiterate: The problem is not competitive product, the problem is competitive systems. I am very much for competition and for a lot of interesting and cool products.
So much for the dry theory and background. Here’s the logical conclusions we’ve drawn: We make more revenue and more profit from our core rulebooks than any other part of our product lines. In a sense, every other RPG product we sell other than the core rulebooks is a giant, self-financing marketing program to drive sales of those core books. At an extreme view, you could say that the core book of D&D—the PHB [Player’s Handbook rulebook]—is the focus of all this activity, and in fact, the PHB is the #1 best-selling and most profitable RPG product Wizards of the Coast makes year in and year out.
The logical conclusion says that reducing the “cost” to other people of publishing and supporting the core D&D game to zero should eventually drive support for all other game systems to the lowest level possible in the market, create customer resistance to the introduction of new systems, and the result of all that “support” redirected to the D&D game will be to steadily increase the number of people who play D&D, thus driving sales of the core books. This is a feedback cycle—the more effective the support is, the more people play D&D. The more people play D&D, the more effective the support is.
The other great effect of Open Gaming should be a rapid, constant improvement in the quality of the rules. With lots of people able to work on them in public, problems with math, with ease of use, with variance from standard forms, etc., should all be improved over time. The great thing about Open Gaming is that it is interactive—someone figures out a way to make something work better, and everyone who uses that part of the rules is free to incorporate it into their products. Including us. So D&D as a game should benefit from the shared development of all the people who work on the Open Gaming derivative of D&D.
After reviewing all the factors, I think there’s a very, very strong business case that can be made for the idea of embracing the ideas at the heart of the Open Source movement and finding a place for them in gaming.
There is one central fact about the economic history of the twentieth century: above all, the century just past has been the century of increasing material wealth and economic productivity. No previous era and no previous economy has seen material wealth and productive potential grow at such a pace. The bulk of America’s population today achieves standards of material comfort and capabilities that were beyond the reach of even the richest of previous centuries. Even lower middle-class households in relatively poor countries have today material standards of living that would make them, in many respects, the envy of the powerful and lordly of past centuries.
This paper provides an overview of the existing theoretical and empirical work on the provision of incentives. It reviews the costs and benefits of many types of pay-for-performance, such as piece rates, promotions, and long-term incentives. The main conclusions are (1) while there is considerable evidence that individuals respond to pay-for-performance, there is less evidence that contracts are designed as predicted by the theory, (2) there has been little progress made in distinguishing amongst plausible theories, and (3) we still know little about how incentives are provided to workers whose output is difficult to measure.
The first purpose of this paper is to utilize the PSID to see whether the anomalies of Figure 1 and Figure 2 can be attributed to some non-CPI cause such as demographics or changes in the distribution of income. The second purpose is to offer a more refined estimate of CPI bias. Third, I will present evidence of strikingly different inflation rates by race. Using the PSID, I estimate a demand function for food at home for 1974 through 1991. Using a standard measure of real income (total family income after federal taxes, the PSID’s best continuously available approximation of disposable income) deflated by the CPI, this demand function has shown consistent drift over the sample period; I attribute this drift to unmeasured growth in real income, and in turn I attribute the mismeasurement of income to CPI bias.
In a nutshell, the results are as follows: On average, in 1974 the PSID sample of white households spent 16.64% of its income on at-home food. By 1991 this share had fallen to 12.04%. Measured per-household income grew 7% over this time span, explaining just over half a point of the food-share decline. Decline in the relative CPI of food is sufficient to explain perhaps as much as 1 percentage point of decline in food’s share. Other regressors account for less than 0.1 point of additional decline; thus about 3 points of the food-share decline are left to be explained by CPI bias. I estimate that this bias is about 2.5% per year from 1974 through 1981, and slightly under 1% per year since then.
For blacks, food’s share fell from 21.17% to 12.44%. Approximately 0.8 point of the decline can be explained by measured income growth, and another point by movement in other regressors, and up to another 1 point by the decline in the food CPI. Thus the food-share decline left to be explained by measurement error is 5.9 points. I estimate the bias to be approximately 4% per year from 1974 through 1981 and about 3% per year since then.
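A minimal sketch of the food-share (Engel curve) approach described above, in Python; the input file, column names, and bare-bones specification are hypothetical stand-ins rather than the paper’s actual PSID variables and controls.

```python
# Minimal sketch of the Engel-curve approach described above.  The input
# file and column names are hypothetical placeholders; the paper's actual
# specification includes demographics and other controls omitted here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("psid_food_shares.csv")           # hypothetical data file
df["log_real_income"] = np.log(df["family_income"] / df["cpi"])
df["log_rel_food_price"] = np.log(df["food_cpi"] / df["cpi"])

# Food's budget share falls with measured real income (Engel's law) and
# with the relative price of food; any residual downward drift over time
# is attributed to unmeasured real-income growth, i.e. CPI bias.
fit = smf.ols("food_share ~ log_real_income + log_rel_food_price + year",
              data=df).fit()
print(fit.params)

# Implied annual unmeasured real-income growth (the CPI-bias estimate):
# the year drift divided by the income slope.
bias = fit.params["year"] / fit.params["log_real_income"]
print(f"implied annual CPI bias: {bias:.2%}")
```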
During the last fifteen years, Congress has deregulated, wholly or partly, a number of infrastructure industries, including most modes of transport—airlines, motor carriers, railroads, and intercity bus companies. Deregulation emerged in a comprehensive ideological movement which abhorred governmental pricing and entry controls as manifestly causing waste and inefficiency, while denying consumers the range of price and service options they desire.
In a nation dedicated to free-market capitalism, governmental restraints on the freedom to enter a business, or on allowing the competitive market to set prices, seem fundamentally at odds with immutable notions of economic liberty. While in the late 19th and early 20th centuries market failure gave birth to economic regulation of infrastructure industries, today we live in an era where the conventional wisdom is that government can do little good and the market can do little wrong.
Despite this passionate and powerful contemporary political/economic ideological movement, one mode of transportation has come full circle from regulation, through deregulation, and back again to regulation—the taxi industry. American cities began regulating local taxi firms in the 1920s. Beginning a half century later, more than 20 cities, most located in the Sunbelt, totally or partially deregulated their taxi companies. However, the experience with taxicab deregulation was so profoundly unsatisfactory that virtually every city that embraced it has since jettisoned it in favor of resumed economic regulation.
Today, nearly all large and medium-sized communities regulate their local taxicab companies. Typically, regulation of taxicabs involves: (1) limited entry (restricting the number of firms, and/or the ratio of taxis to population), usually under a standard of “public convenience and necessity”, [PC&N] (2) just, reasonable, and non-discriminatory fares, (3) service standards (eg., vehicular and driver safety standards, as well as a common carrier obligation of non-discriminatory service, 24-hour radio dispatch capability, and a minimum level of response time), and (4) financial responsibility standards (eg., insurance).
This article explores the legal, historical, economic, and philosophical bases of regulation and deregulation in the taxi industry, as well as the empirical results of taxi deregulation. The paradoxical metamorphosis from regulation, to deregulation, and back again, to regulation is an interesting case study of the collision of economic theory and ideology, with empirical reality. We begin with a look at the historical origins of taxi regulation.
[Keywords: Urban Transportation, Taxi Industry, Common Carrier, Mass Transit, Taxi Industry Regulation, Taxi Deregulation, Reregulation, Taxicab Ordinance, PUC, Open Entry, Regulated Entry, Operating Efficiency, Destructive Competition, Regulated Competition, Cross Subsidy, Cream Skimming, PC&N, Pollution, Cabs]
The ‘stylized fact’ that growth rates remain constant over the long run was a fundamental feature of postwar growth theory.
Using recently developed tests for structural change in univariate time series, we determine whether, and when, a break in growth rates exists for 16 countries.
We find that most countries exhibited fairly steady growth for a period lasting several decades, terminated by a statistically-significant, and sudden, drop in GDP levels. Following the break, per capita output in most countries continued to grow at roughly double their prebreak rates for many decades, even after their original growth path had been surpassed.
Two proposals are made that may facilitate the creation of derivative market instruments, such as futures contracts, cash settled based on economic indices.
The first proposal concerns index number construction: indices based on infrequent measurements of nonstandardized items may control for quality change by using a hedonic repeated measures method, an index number construction method that follows individual assets or subjects through time and also takes account of measured quality variables.
The second proposal is to establish markets for perpetual claims on cash flows matching indices of dividends or rents. Such markets may help us to measure the prices of the assets generating these dividends or rents even when the underlying asset prices are difficult or impossible to observe directly. A perpetual futures contract is proposed that would cash settle every day in terms of both the change in the futures price and the dividend or rent index for that day.
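A minimal sketch of the daily cash-settlement rule in the last sentence above; the price and rent-index series are invented placeholders, and the sketch follows only the verbal description, not the paper’s full contract design.

```python
# Minimal sketch of the perpetual futures daily settlement described above:
# each day the long receives the change in the futures price plus that
# day's dividend/rent index value.  The series below are invented
# placeholders, not data from the paper.
futures_prices = [100.0, 101.5, 100.8, 102.2]   # daily settlement prices
rent_index     = [0.30, 0.31, 0.29, 0.32]       # daily dividend/rent index

def daily_settlements(prices, income_index):
    """Cash flow to a one-contract long position on each settlement day."""
    flows = []
    for t in range(1, len(prices)):
        price_change = prices[t] - prices[t - 1]
        flows.append(price_change + income_index[t])
    return flows

print(daily_settlements(futures_prices, rent_index))
# -> [1.81, -0.41, 1.72]
```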
The economies of modern industrialized society can more appropriately be labeled organizational economies than market economies. Thus, even market-driven capitalist economies need a theory of organizations as much as they need a theory of markets. The attempts of the new institutional economics to explain organizational behavior solely in terms of agency, asymmetric information, transaction costs, opportunism, and other concepts drawn from neoclassical economics ignore key organizational mechanisms like authority, identification, and coordination, and hence are seriously incomplete. The theory presented here is simple and coherent, resting on only a few mechanisms that are causally linked. Better yet, it agrees with empirical observations of organizational phenomena. Large organizations, especially governmental ones, are often caricatured as “bureaucracies,” but they are often highly effective systems, despite the fact that the profit motive can penetrate these vast structures only by indirect means.
Although it is generally accepted that television program ratings are greater than the audience’s exposure to the advertising, the key issue is the actual size of the difference. A review of advertising, marketing, communication, and sociology literature yields some indications of the degree of difference between ad and program exposure and factors in the viewing environment which could influence audience commercial avoidance.
This article discusses the mechanics, economics, advantages and disadvantages of undated futures markets with specific reference to the Chinese Gold and Silver Exchange Society of Hong Kong (CGSES). It also suggests a potential application of undated futures markets to the trading of stock index futures.
An undated futures market is an alternative to conventional futures markets. In conventional futures markets contracts mature at selected times during the year. Several contracts with different maturity dates trade simultaneously. In an undated futures market only a single contract trades, but that contract can serve the hedging purposes of the multiple contracts traded in a dated market. The CGSES is of interest both as a curiosum and as an example of a potentially valuable form of futures market.
The following section of this paper describes the operation of an undated futures market and the specific mechanics of trading on the CGSES. Sections II and III discuss the economics of price determination and hedging in undated futures markets. Section IV describes the advantages of undated futures markets to futures traders and points out potential problems in certain applications. Section V shows how such markets might be adapted to the US, especially for trading futures on a stock index or on other indices, and Section VI is a conclusion. [See also "Perpetual futures".]
Combining the classical theorem of Stolper and Samuelson with a model of politics derived from Becker leads to the conclusion that exogenous changes in the risks or costs of countries’ external trade will stimulate domestic conflict between owners of locally scarce and locally abundant factors. A traditional three-factor model then predicts quite specific coalitions and cleavages among owners of land, labor, and capital, depending only on the given country’s level of economic development and its land-labor ratio. A preliminary survey of historical periods of expanding and contracting trade, and of such specific cases as the German “marriage of iron and rye,” U.S. and Latin American populism, and Asian socialism, suggests the accuracy of this hypothesis. While the importance of such other factors as cultural divisions and political inheritance cannot be denied, the role of exogenous changes in the risks and costs of trade deserves further investigation.
There are relatively few empirical laws in sociology…Several empirical laws—the size-density law, the rank-size rule, the urban density law, the gravity model, and the urban area-population law—have been reported in the ecological or social-demographic literature. They have also been derived from the theory of time-minimization (Stephan 1979).
The purpose of this paper is to examine a non-ecological law, one developed from the study of formal organizations, and to derive that law from the theory of time-minimization. The law is Mason Haire’s “square-cube law”, a law which has stirred considerable interest and controversy since its introduction. Haire examined longitudinal data from 4 firms. He divided the employees of these firms into “external employees”, those who interact with others outside the firm, and “internal employees”, those who interact only with others inside the firm. His finding was that, over time, the cube-root of the number of internal employees was directly proportional to the square-root of the number of external employees. The scatter diagrams he presented (286–7) show regression lines of the form
I^(1⁄3) = a + bE^(1⁄2) (1)
where I and E are the number of internal and external employees and a and b are the intercept and slope of the regression line (see Figure 1 for an example). His explanation of the square-cube law is based on certain mathematical properties of physical objects, extended to an explanation of biological form and analogically applied by Haire to the shape of formal organizations. For a given physical object, say a cube, an increase in the length of a side results in an increase of the surface area and also of the volume. If the new length is 10× the old, the new surface area will be 10², or 100×, the old, and the new volume will be 10³, or 1,000×, the old. Thus, the cube-root of the volume will be proportional to the square-root of the surface area.
…Levy and Donhowe tested Haire’s law with cross-sectional data for 62 firms in 8 industries. They conclude that the square-cube law “is a reasonable and consistent description of the industrial organizational composition among firms of varying size in different industries” (342). A second study, by Draper and Strother, examined data for a single educational organization over a 45-year period. They showed that regression analysis of the untransformed data produced nearly as good a fit as did the square-cube transformation in Equation 1…Carlisle analyzed data for 7 school districts using both the square-cube transformations and the raw data. He found, supporting Draper-Strother, that the correlation coefficients were about equally good under the 2 tests.
…Derivation of the Square-Cube Law: …As McWhinney’s own scatter diagram shows (345), all 3 fit the data fairly well. Under such conditions, when the data themselves do not provide conclusive evidence favoring one model over another, the best criterion is often a logical one: Can one of the models be derived from some general theory?
…We now proceed to suggest a theoretical derivation of the square-cube law, not by analogy but by a direct consideration of the underlying processes involved. The general theory from which the derivation will proceed is the theory of time-minimization mentioned above (Stephan). Its central assumption is that social structures evolve in such a way as to minimize the time which must be expended in their operation.
Assume a firm specified by a boundary which separates it from its environment, and which includes people who spend some of their time as its employees. Assume 2 measurements made on the firm, measurements which produce the numbers E (the number of “external employees”, those who interact with others outside the firm) and I (the number of “internal employees”, those who interact only with others inside the firm). Finally, from the general theory of time-minimization, assume that social structures, including the firm, evolve in such a way as to minimize the time which must be expended in their operation.
All the employees of the firm must be supported or compensated from the total pool of benefits held within the firm. Since this pool of benefits is brought in through the time-expenditures of the external employees, we may say that they in effect support themselves. At least on average, a portion of what they bring in is consumed by them. In contrast, the internal employees represent a special time-cost to the firm. The internal employees, by definition, do not bring the means of their own support into the firm. They must be supported, ultimately, through the time expenditures of the external employees. The average support time will be directly proportional to the number of internal employees and inversely proportional to the number of external employees. Thus
Ts = aI / E (6)
where a is the constant of proportionality.
If the internal employees thus appear parasitical, as a cost factor, they also contribute to reducing other costs of the firm. The benefit factor is that internal employees contribute by coordinating the work of the external employees. If there were no internal structure, if the external employees had to spend time coordinating their own activities by themselves, the amount of time spent would detract from the time they could spend at their primary assignment, bringing resources into the firm. How much time would be spent in coordination? Assuming that each one potentially could interact with all others, the time spent should be proportional to E(E − 1)/2, the number of pairwise interactions in a group of E individuals; thus, as E becomes modestly large, the coordination time should be proportional to E². Since this work is actually done by the internal employees, we have an average coordination time which is directly proportional to E² and inversely proportional to I. Thus,
Tc = bE²/I (7)
where b is the constant of proportionality.
These 2 cost/benefit ratios represent the time expenditures of the internal and the external employees relative to one another. Their sum should give the overall time expenditure, the expenditure which the theory of time-minimization says will be minimized.
…The values of E and I can never be negative, so the second derivative must be positive; Equation 10 therefore represents the condition when T is a minimum. Rearranging terms, and taking the 6th root of both sides, we obtain…
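The excerpt breaks off before the final expression. A plausible reconstruction of the elided algebra from Equations 6 and 7 (not quoted from the paper):

$$T = T_s + T_c = \frac{aI}{E} + \frac{bE^2}{I}$$

$$\frac{\partial T}{\partial I} = \frac{a}{E} - \frac{bE^2}{I^2} = 0 \quad\Longrightarrow\quad aI^2 = bE^3$$

$$I^{1/3} = \left(\frac{b}{a}\right)^{1/6} E^{1/2}$$

The second derivative, 2bE²/I³, is positive whenever E and I are positive, so the stationary point is indeed a minimum, and the result has the square-cube form of Equation 1, with (b/a)^(1/6) playing the role of the fitted slope (the a and b here are the proportionality constants of Equations 6 and 7, not the regression intercept and slope of Equation 1).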
This paper analyzes compensation schemes which pay according to an individual’s ordinal rank in an organization rather than his output level.
When workers are risk neutral, it is shown that wages based upon rank induce the same efficient allocation of resources as an incentive reward scheme based on individual output levels. Under some circumstances, risk-averse workers actually prefer to be paid on the basis of rank.
In addition, if workers are heterogeneous in ability, low-quality workers attempt to contaminate high-quality firms, resulting in adverse selection. However, if ability is known in advance, a competitive handicapping structure exists which allows all workers to compete efficiently in the same organization.
Earlier work on the size-density hypothesis [political unit size vs density scales by negative 2⁄3rds] led to a theory of time-minimization from which the size-density relation could be derived. Subsequently, time-minimization theory was employed to derive expected relations between population and area for cities and urbanized areas, expectations which were empirically confirmed.
The present paper derives 3 well-known and empirically supported relationships from the time-minimization assumption:
The role of imperfect information in a principal-agent relationship subject to moral hazard is considered. A necessary and sufficient condition for imperfect information to improve on contracts based on the payoff alone is derived, and a characterization of the optimal use of such information is given.
…Employing a different problem formulation from Harris and Raviv’s, we are able to simplify their analysis and generalize their results substantially. Both questions posed above are given complete answers (in our particular model).
It is shown that any additional information about the agent’s action, however imperfect, can be used to improve the welfare of both the principal and the agent. This result, which formalizes earlier references to the value of monitoring in agency relationships (Stiglitz, 1975; Williamson, 1975), serves to explain the extensive use of imperfect information in contracting. Furthermore, we characterize optimal contracts based on such imperfect information in a way which yields considerable insight into the complex structure of actual contracts.
The formulation we use is an extension of that introduced by Mirrlees (1974, 1976). We start by presenting a slightly modified version of Mirrlees’ model (Section 2), along with some improved statements about the nature of optimal contracts when the payoff alone is observed. In Section 3 a detour is made to show how these results can be applied to prove the optimality of deductibles in accident insurance when moral hazard is present. Section 4 gives the characterization of the optimal use of imperfect information and Section 5 presents the result when imperfect information is valuable. Up to this point homogeneous beliefs are assumed, but in Section 6 this assumption is relaxed to the extent that we allow the agent to be more informed at the time he chooses his action. The analysis is brief, but indicates that qualitatively the same results obtain as for the case with homogeneous beliefs. Section 7 contains a summary and points out some directions for further research.
…When the payoff alone is observable, optimal contracts will be second-best owing to a problem of moral hazard. By creating additional information systems (as in cost accounting, for instance), or by using other available information about the agent’s action or the state of nature, contracts can generally be improved.
The current system of unemployment compensation entails very strong adverse incentives. For a wide variety of “representative” unemployed workers, unemployment benefits replace more than 60% of lost net income. In the more generous states, the replacement rate is over 80% for men and over 100% for women. Most of the $5 billion in benefits (1974 dollars; ≈$20 billion today) go to middle- and upper-income families. This anomaly in the distribution of benefits is exacerbated by the fact that unemployment compensation benefits are not subject to tax.
[Profile by the veteran economics New Yorker reporter John Brooks of the post-WWII American federal income tax (history, effects & reform attempts), which taxed incomes at rates as high as 91%, but was riddled with bizarre loopholes and exceptions which meant the real tax rates were typically half that—at the cost of massively distorting human behavior.
Stars would stop performing halfway through the year, marriages would be scheduled for the right month, films were designed around tax incentives rather than merits, countless oil wells were drilled unnecessarily, and rich people would invest in businesses of no interest to them, like bowling alleys; businessmen had to meticulously record every lunch because the income tax distorted salaries in favor of fringe benefits. (A similar dynamic was at play in the rise of employer-based health insurance during WWII, contributing to the present grotesquely inefficient American healthcare system.)]
…the writer David T. Bazelon has suggested that the economic effect of the tax has been so sweeping as to create two quite separate kinds of United States currency—before-tax money and after-tax money. At any rate, no corporation is ever formed, nor are any corporation’s affairs conducted for as much as a single day, without the lavishing of earnest consideration upon the income tax, and hardly anyone in any income group can get by without thinking of it occasionally, while some people, of course, have had their fortunes or their reputations, or both, ruined as a result of their failure to comply with it. As far afield as Venice, an American visitor a few years ago was jolted to find on a brass plaque affixed to a coin box for contributions to the maintenance fund of the Basilica of San Marco the words “Deductible for U.S. Income-Tax Purposes.”
[Matt Lakeman summary (emphasis added): The link is to a declassified CIA document written in 1968…The author is a CIA spy who explains that the CIA was trying to calculate the economy of the USSR, and by their best estimates, the US GDP was more than 2× the USSR GDP, and the US GDP per capita was around 3×. However, she thinks these numbers are overestimating USSR GDP because it’s difficult to account for quality. An American haircut can be priced the same way as a Soviet haircut, but an American refrigerator is probably vastly better than a Soviet refrigerator.
So the author goes undercover in Moscow for a few months to live as the Russians do and see what economic life is really like for them. She explains that she tried to live the Russian way as an American working at the embassy, but the locals were super nice to her all the time. They always smiled and sent her to the front of every line. So she had to get beat up local clothes, dust off her Russian language skills, put on a grumpy expression (presumably), and pretend to be a Russian (or rather, pretend to be an Estonian due to her accent). Her findings:
Lines, lines, and more lines. Everyone had to wait on line for everything. Food, clothes, whatever. Wait times were 10–15 minutes at best, but could easily stretch into hours. Sometimes she waited on lines even when she didn’t know what she was waiting for.
Even in Moscow, the variety and quality of goods was atrocious. At a given grocer, they might offer two or three different items each day. So one day she could get pickled fish, the next day cabbage and tomatoes, the next day rice, etc. Bread seemed to be the only thing that was always in stock.
Stores were often bureaucratic clusterfucks. The author couldn’t just buy tea; she had to wait on one line to make a tea selection, then collect a piece of paper, then wait on another line to exchange the paper for money, then get another piece of paper, then wait on another line to exchange that piece of paper for tea.
Prices were outrageous by the standards of the salaries of the people working in the capital city of the second most powerful nation on earth.
The service was awful. She went to some of the nicest restaurants in Moscow (of which there were fewer than a dozen in a city of millions of people), and the meals would take at least 3 hours. Waiters would stand around doing nothing and wouldn’t come over to her even when she called them.
Everyone was incredibly rude. She was violently shoved on trains and buses. People screamed at her if she hesitated on lines.
There was a general feeling of boredom and malaise. There were no luxury goods to buy or events to look forward to. People expected to just live, get married, and wait to die.
Everyone knew the government reports about how awesome the USSR was doing were bullshit. They envied the West.]
This paper presents results, mainly in tabular form, of a sampling experiment in which 100 economic time series 25 years long were drawn at random from the Historical Statistics for the United States. Sampling distributions of coefficients of correlation and autocorrelation were computed using these series, and their logarithms, with and without correction for linear trend.
We find that the frequency distribution of autocorrelation coefficients has the following properties:
It is roughly invariant under logarithmic transformation of data.
It is approximated by a Pearson Type XII function.
The autocorrelation properties observed are not to be explained by linear trends alone. Correlations and lagged cross-correlations are quite high for all classes of data: eg., given a randomly selected series, it is possible to find, by random drawing, another series which explains at least 50% of the variance of the first one in from 2 to 6 random trials, depending on the class of data involved. The sampling distributions obtained provide a basis for tests of statistical-significance of correlations of economic time series. We also find that our economic series are well described by exact linear difference equations of low order.
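As a rough illustration of how easily such series correlate by chance, here is a toy simulation in the spirit of the experiment: it draws independent random-walk “economic series” 25 observations long (not the Historical Statistics sample, and not the paper’s exact procedure) and counts how many random draws are needed before one explains at least 50% of a target series’ variance.

```python
# Toy simulation in the spirit of the experiment described above: draw
# independent random-walk series 25 observations long and count how many
# random draws are needed before one achieves R^2 >= 0.5 against a target.
import numpy as np

rng = np.random.default_rng(0)
length, n_candidates, n_targets = 25, 1_000, 200

def random_walk(n):
    """A simple trending 'economic series': cumulative sum of white noise."""
    return np.cumsum(rng.normal(size=n))

trials_needed = []
for _ in range(n_targets):
    target = random_walk(length)
    for trial in range(1, n_candidates + 1):
        candidate = random_walk(length)
        r2 = np.corrcoef(target, candidate)[0, 1] ** 2
        if r2 >= 0.5:
            trials_needed.append(trial)
            break

print("median draws needed for R^2 >= 0.5:", int(np.median(trials_needed)))
```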