Why is automation/productivity growth so slow & “you can see the computer age everywhere but the statistics”, especially when AI has become so powerful? Because “the future is already here, it’s just unevenly distributed”—inertia & path-dependence.
Existing companies & processes are so hardwired to use humans as the basic building block that they must be reconceived to exploit new possibilities. Otherwise, you get absurdities like robotic process automation.
This ‘overhang’ also explains why startup ideas fail repeatedly before succeeding & crises can lead to abrupt increases in existing technologies: the “rising water” was held back by levees of local optima.
“This prodigious event is still on its way, still wandering; it has not yet reached the ears of men. Lightning and thunder require time, the light of the stars requires time, deeds, though done, still require time to be seen and heard. This deed is still more distant from them than the most distant stars—and yet they have done it themselves.”
[bar graph titled “Typical Response Latency” showing 3 major sections]
[section 1: Automated Steps: 800 ms]
[section 2: Someone Copies and Pastes From a Thing Into Another Thing: 2–15 minutes (more if the person on call is busy)]
[section 3: Automated Steps: 200ms]
Each SCAPDFATIAT point increases the chance that the process will involve the phrase ‘by the next business day.’
Mere weeks later, COVID arrived in New York. A few companies tried to continue holding estate sales in person, posting mask advisories and limits on capacity. But I knew that would stop, and it did; once governors started announcing stay-at-home orders, estatesales.net began automatically pulling listings for non-appointment in-person sales in states with stay-at-home orders. That April, I saw a local listing that was advertising what I thought looked like an in-person sale anyway, but when I called the man on the other end scrabbled around for a moment and then hung up on me.
For at least a month, there were almost no sales at all. Stuff sat inside, unsold. I imagined it thickening, straining against the walls of the homes of recently deceased, Depression-era residents of Long Island, their dishes growing heavy in cupboards, “digger” basements growing even more filled with dust. But the pause was brief—the estate-sale business is intimately tied to the real-estate market, and the latter has boomed in the last year and a half. In late spring, 2020, estate-sale companies increasingly began switching to an online auction model, photographing items and then putting them up for auction at a 1-dollar or 2-dollar starting price with curbside pickup. These online sales proved extraordinarily lucrative during the pandemic: in 2020, Caring Transitions earned 17 million dollars in revenue on its proprietary online auction platform, compared with about 9 million dollars the year before. “In the 22 years in the industry, 2020 and 2021 have been the busiest years I’ve ever had”, Grant Panarese, who runs the virtual estate-sale platform AuctionNinja along with his wife, Christie, told me. “I’ve never seen years like this in my life.” Virtual platforms upend the traditional estate-sale dynamic: to get the best goods at an online estate sale, there’s no need to get in line at dawn. A typical customer, Panarese said, “has dinner, a glass of wine, and sits down for an hour and bids.”
This model favors the retail consumer. Online auctions can feel more sporting. There’s a sharp, competitive rush to winning an item at an online auction that you don’t quite get when you go in person to haggle. Theoretically, you or I have as good a chance as any seasoned picker of claiming the winning bid on a Herman Miller when the auction winds toward its close. But when I tried scoring a formerly $10,000 couch on a virtual estate sale during lockdown, I failed miserably, my heart racing as the maximum bid rose and rose. In 2020 and 2021, demand has exploded for regular home goods, such as couches and dining tables, a result of hasty exoduses to the suburbs and supply-chain issues or long back orders at furniture companies. “People say they can’t sell brown wood, but we get crazy numbers for brown wood”, Panarese said. “If it’s in good condition, we can sell it.”
I spent each morning of spring, 2020, at home, staring glassily at whatever auction was being held that day, scrolling past sectionals, table lamps, clustered picture frames, bookends. I thought often of the people I’d met at sales and fretted about whether they’d ever be able to return to doing this in person. I couldn’t tell if this first-name-only world was truly a fragile one—whether the pickers would reassemble like cockroaches once the sales came offline again, or if this universe would just disintegrate…When I asked estate-sale company owners about their businesses, they always talked about the social aspect, and how much people liked to be able to touch the things they bought. But during 2020’s lockdown, they all also started saying that virtual sales were much more lucrative. “Right now, I’m going to stick with online auctions”, Debbie Bertoli, of Treasured Tag Sales, Inc., who had never done virtual sales before the pandemic, told me last summer; currently, many sales are being held online again during the Omicron surge. As much of the country gradually began to reopen in the spring of 2021, AuctionNinja retained nearly all of the 500 vendors it ballooned to during the pandemic; Caring Transitions is still doing brisk business on its online platform. “In-person sales are a dinosaur!” Panarese said. But, by the middle of last year, the number of in-person sales on estatesales.net had, according to its C.E.O., gone back to pre-pandemic levels. Although Omicron has forced more sales back online for the time being, the fear—or hope—that estate sales will switch permanently to an online model doesn’t seem to be panning out. “Never”, insisted LoSquadro, of Sisters in Charge, when I talked to her this past summer. She had recently hired Fast Eddie to help with the load. (“People read him wrong. He’s very honest”, she said.) Her online sales were doing “phenomenal”, but she was also booked for in-person sales through September. The reason, she said, was simple. 
“People like to get out of their house.”
The demise of the in-person estate sale might be like that of the brick-and-mortar bookstore—constantly foretold but never decisively coming to pass, an industry both fragile and persistent in a way that doesn’t square with the story of the world as we have come to expect it.
Automation technologies, and robots in particular, are thought to be massively displacing workers and transforming the future of work.
We study firm investment in automation using cross-country data on robotization as well as administrative data from Germany with information on firm-level automation decisions. Our findings suggest that the impact of robots on firms has been limited:
investment in robots is small and highly concentrated in a few industries, accounting for less than 0.30% of aggregate expenditures on equipment.
recent increases in robotization do not resemble the explosive growth observed for IT technologies in the past, and are driven mostly by catching-up of developing countries.
robot adoption by firms endogenously responds to labor scarcity, alleviating potential displacement of existing workers.
firms that invest in robots increase employment, while the total employment effect in exposed industries and regions is negative, but modest in magnitude.
We contrast robots with other digital technologies that are more widespread. Their importance in firms’ investment is substantially higher, and their link with labor markets, while sharing some similarities with robots, appears markedly different.
Digital firms tend to be both narrow in their vertical scope and large in their scale. We explain this phenomenon through a theory about how attributes of firms’ resource bundles impact their scale and specialization. We posit that highly scalable resource bundles entail large opportunity costs of integration (vs. outsourcing), which simultaneously drive “hyperspecialization” and “hyperscaling” in digital firms. Using descriptive theory and a formal model, we develop several propositions that align with observed features of digital businesses. We offer a parsimonious modeling framework for resource-based theorizing about highly scalable digital firms, shed light on the phenomenon of digital scaling, and provide insights into the far-reaching ways that technology-enabled resources are reshaping firms in the digital economy.
Why are leading firms in the digital economy simultaneously larger and more specialized than those in the industrial age? Our research explains this phenomenon as being driven by the scalability of digital resources—that is, their capacity to create more value at larger scales when used intensively in a focal activity. We clarify what digital scalability means, and highlight trade-offs created by the opportunity costs of not employing scalable digital resources intensively. Digital firms should outsource complementary activities to avoid diverting resources away from their scalable core, and to enhance their ability to grow exponentially. Although resource fungibility and outsourcing costs mitigate these imperatives, digital firms may nonetheless find it profitable to remain specialized despite the challenges of managing outsourcing and sharing value with complementors.
We discuss the idea that computers might soon help mathematicians to prove theorems in areas where they have not previously been useful. Furthermore we argue that these same computer tools will also help us in the communication and teaching of mathematics.
During the COVID-19 pandemic, traditional (offline) chess tournaments were prohibited and instead held online.
We exploit this unique setting to assess the impact of remote work policies on the cognitive performance of individuals. Using the artificial intelligence embodied in a powerful chess engine [Stockfish 11] to assess the quality of chess moves and associated errors:
we find a statistically-significant & economically-important decrease in performance when an individual competes remotely versus offline in a face-to-face setting.
The effect size decreases over time, suggesting an adaptation to the new remote setting.
…During the COVID-19 pandemic, the current chess world champion, Magnus Carlsen, initiated an online tournament series, the Magnus Carlsen Chess Tour. We analyse the performance of players who have participated in these online tournaments and the performance of players participating in recent events of the World Rapid Chess Championship as organized by the World Chess Federation in a traditional offline format. In particular, our main comparison is based on 20 elite chess players who competed both in the online and offline tournaments. We selected these tournaments because they were organized under comparable conditions, in particular, giving players the same amount of thinking time per game, offering comparable prize funds, and implementing strict anti-cheating measures.
We base our performance benchmark on evaluating the moves played by the participants using a currently leading chess engine that substantially outperforms the best human players in terms of playing strength. We use the engine’s evaluation to construct a measure of individual performance that offers a high degree of objectivity and accuracy. Overall, we analyse 214,810 individual moves including 59,273 moves of those 20 players who participated in both the remote online and the traditional offline tournaments. Using a regression model with player fixed effects that allows us to estimate changes in within-player performance, we find the quality of play is substantially worse (at a statistical-significance level of 5%) when the same player competed online versus offline. The adverse effect is particularly pronounced for the first 2 online tournaments, suggesting a partial adaptation to the remote setting in later tournaments.
…We find that playing online leads to a reduction in the quality of moves. The error variable as defined in equation (3) is, on average, 1.7 units larger when playing online than when playing identical moves in an offline setting. This corresponds to a 1.7% increase of the measure (RawError+ 1) or an ~7.5% increase in the RawError…To better assess the size of the effect, we provide a back-of-the-envelope calculation for the change in playing strength when playing online, as expressed in terms of the Elo rating. In our sample, the coefficient on the Elo rating of the player (−0.0001308) is based on a regression without individual fixed effects,18 indicating that if a player’s Elo rating increases by one point, the error variable as defined in equation (3) is reduced by 0.013 units on average. Playing online increases the error variable, on average, by 1.7 units, which corresponds to a loss of 130 points of Elo rating. The factual drop in playing strength, however, is likely to be lower, because our analysis excludes the opening stage of the game, which is less likely to be affected by the online setting. Moreover, our linear regression model might not account for smaller average error margins at the top of the Elo distribution.
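The Elo conversion in that back-of-the-envelope calculation can be reproduced directly from the two figures quoted above; a minimal sketch using only the numbers in the text (not the authors' code):

```python
# Back-of-the-envelope: translate the average online-play error increase into an
# Elo-equivalent loss, using the paper's two quoted quantities.

online_error_increase = 1.7    # average increase in the error variable when playing online
error_per_elo_point = 0.013    # error reduction per +1 point of Elo (from the -0.0001308 coefficient)

elo_equivalent_loss = online_error_increase / error_per_elo_point
# ~130 Elo points, the paper's figure (likely an overestimate, as the authors note,
# since the opening stage is excluded and the error-Elo relation may be nonlinear at the top)
```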
Figure 1: Effect Heterogeneity by Online Tournament. Notes: The figure shows the estimated coefficient ̂δ based on equation (2). Dots represent the point estimates, the grey (black) bar show the 95% (90%) confidence intervals based on clustered standard errors at the game level. Regressions contain player and move fixed effects as well as the full set of control variables (see Table 3). The opening phase of each game is excluded for each player (m ≤ 15). MCI—Magnus Carlsen Invitational, LARC—Lindores Abbey Rapid Challenge, OCM—Chessable Masters, LoC—Legends of Chess, SO—Skilling Open.
The sudden arrival of composite racquets affected tennis player productivity, entry, and exit.
Young players at the time benefited at the expense of older players.
The empirical patterns are consistent with a model of skill-altering technological change.
Technological innovation can raise the returns to some skills while making others less valuable or even obsolete. [cf. talkies, Adobe Flash]
We study the effects of such skill-altering technological change in the context of men’s professional tennis, which was unexpectedly transformed by the invention of composite racquets during the late 1970s. We explore the consequences of this innovation on player productivity, entry, and exit.
We find that young players benefited at the expense of older players and that the disruptive effects of the new racquets persisted over 2 to 4 generations.
[Keywords: technological change, human capital, tennis]
…At the same time, at least since Ricardo, economists have recognized that innovation can also be disruptive. As Acemoglu 2002 vividly states, “in 19th-century Britain, skilled artisans destroyed weaving, spinning, and threshing machines during the Luddite and Captain Swing riots, in the belief that the new machines would make their skills redundant. They were right: the artisan shop was replaced by the factory and later by interchangeable parts and the assembly line.” New technologies disrupt the labor market when they raise the returns to some skills while making others less valuable or obsolete.
We develop a theory, inspired by MacDonald & Weisbach 2004, of this phenomenon, which we call skill-altering technical change. Our theory emphasizes that workers endogenously invest in a portfolio of skills over their life cycle. We show that new technologies that change the relative values of skills can hurt older workers, who have spent a lifetime investing in the old, ideal skill mix, and better workers, who, by definition, possess more of the skills that were previously more valuable.
…To test our model’s predictions empirically and quantify the effects of skill-altering technical change, we exploit the introduction of composite racquets in men’s professional tennis during the late 1970s and early 1980s. These new racquets drastically changed the way the game was played, increasing the importance of hitting with spin and power relative to control. There are 4 reasons this episode in men’s professional tennis is a useful setting to study the effects of skill-altering technological change on workers. First, we have detailed panel data on multiple cohorts of individual workers (players), allowing us to track the impacts of skill-altering change over multiple generations of workers. Such data is difficult to obtain in most settings. Second, the new technology arrived suddenly and unexpectedly and was adopted universally within a few years.
…Until the mid-1970s, tennis racquets were made nearly exclusively of wood, and this technology had been stable for decades. Although alternative materials were tried, such as the steel Wilson T-2000, which a few players used, most players continued to play with wood racquets. Then a retired engineer, Howard Head, started playing tennis and discovered he was terrible at the game. He decided that the fault lay with his racquet, and in 1976, he took it upon himself to invent a new one, the Prince Classic. “With…my racket I was inventing not to just make money, but to help me.”
Although initial reactions to Head’s new racquet were laughter and scorn, the racquet had much to offer recreational players. The Prince Classic had a larger string bed and sweet spot that made it easier for players to make good contact with the ball and generate more power and spin, but it achieved these gains at the expense of stiffness and control, making the racquet unacceptable for professional players. Racquet makers quickly found a solution; they developed methods for constructing the racquet frame out of a composite material consisting of a mixture of carbon fibers and resin. Composite frames allowed both a larger string bed and a stiff frame, giving players more power and control. The first composite racquet that professionals used, the Prince Pro, hit the market in 1978, and composite racquets quickly replaced wood ones as professional players found that their familiar wood racquets were no match for the combination of power and control afforded by the new composite racquets.
…by 1984, composite racquets had taken over the tour. The introduction of composite racquets substantially changed the way men’s professional tennis was played. When tennis players strike the ball, they often try to impart topspin…Although composite racquets allow players to generate much more topspin and power, taking full advantage of this potential required large changes to players’ strokes and play style. Older players in particular, who had invested years in learning to play with a wood racquet, faced the daunting challenge of adjusting to composite racquets. Players began altering their stances and swings to generate more topspin and power. They rotated their grips to generate more spin and help them return balls that were bouncing higher because of the increased topspin of their opponents. These seemingly subtle changes in technique and strategy resulted in a much more physical and faster-paced game. As Cross observed,
The modern game of tennis is played at a furious pace compared with the old days when everyone used wood racquets. Just watch old film from the 1950s and you will see that the game is vastly different. Ken Rosewall and Lew Hoad barely broke into a sweat. Today’s game has players grunting and screaming on every shot, calling for the towel every third shot, and launching themselves off the court with the ferocity of their strokes.
…We find that the introduction of the new composite racquets substantially disrupted the tour, with repercussions lasting for between 2 and 4 generations. It temporarily reduced the rank correlation in player quality over time, helped younger players at the expense of older ones, reduced the average age of tennis players, and increased exit rates of older players relative to younger ones. We find that inter-generational inequality rose, though we find mixed evidence for the new racquets’ effects on cross-sectional inequality. We also consider competing explanations, but we conclude that they cannot explain many of the other patterns we find. Moreover, when we compare the ages of tennis players with other Olympic athletes, we do not find a similar drop in the ages of Olympic athletes during the same time period.
Technological advancements bring changes to our life, altering our behaviors as well as our role in the economy. In this paper, we examine the potential effect of the rise of robotic technology on health.
Using the variation in the initial distribution of industrial employment in US cities and the difference in robot adoption across industries over time to predict robot exposure at the local labor market, we find evidence that higher penetration of industrial robots in the local economy is positively related to the health of the low-skilled population.
A 10% increase in robots per 1000 workers is associated with an approximately 10% reduction in the share of low-skilled individuals reporting poor health. Further analysis suggests that the reallocation of tasks partly explains this finding. A 10% increase in robots per 1000 workers is associated with an approximately 1.5% reduction in physical tasks supplied by low-skilled workers.
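The exposure measure described above is a shift-share ("Bartik") construction: a city's predicted robot exposure is its baseline employment share in each industry, weighted by that industry's national change in robot adoption. A minimal sketch, with hypothetical industry names and numbers:

```python
# Shift-share robot exposure for one local labor market (all figures hypothetical).
# Exposure = sum over industries of (baseline employment share x national change
# in robots per 1,000 workers in that industry).

baseline_employment_share = {   # city's initial employment shares by industry
    "auto": 0.30,
    "electronics": 0.20,
    "services": 0.50,
}
national_robot_change = {       # national change in robots per 1,000 workers
    "auto": 5.0,
    "electronics": 2.0,
    "services": 0.1,
}

exposure = sum(baseline_employment_share[i] * national_robot_change[i]
               for i in baseline_employment_share)
# 0.30*5.0 + 0.20*2.0 + 0.50*0.1 = 1.95 robots per 1,000 workers
```

The point of the construction is that the city-level variation comes only from pre-existing industry mix interacted with industry-level technology trends, not from local health or labor-market shocks.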
The “sailing-ship effect” is the process whereby improvements to an incumbent technology (eg. sail) are intentionally sought as a new competing technology (steam) emerges. Despite the fact that the effect has been referred to by quite a few scholars in different technological battles, the effect itself seems to have been taken for granted rather than organically defined and investigated. In this paper, within the context of evolutionary “appreciative theorizing” à la Nelson and Winter, through in-depth study of technological battles between old and new technologies, we transform what was an unfinished concept into a structured, fully-fledged, tool of analysis.
We document new stylized facts on the occupational mix of businesses in the United States and how their internal organization evolves over their life cycles. Our main empirical finding is that younger businesses have fewer hierarchical layers and lower span of control than comparable older businesses do. Our results suggest that businesses become simultaneously more hierarchical and increase their managerial span of control over their life cycles. We show that this pattern is not entirely driven by selection or differences in size and is pervasive across cohorts and sectors.
…I. Data: We assemble a unique dataset by combining the confidential microdata of the Occupational Employment Statistics (OES) semiannual surveys from November 2002 through May 2017 with the administrative employment and wage records of the Quarterly Census of Employment and Wages (QCEW) for private sector establishments from 1992 through the first quarter of 2020. The OES survey provides high-quality information on detailed occupation and wage distributions for a large sample of establishments at one or more times during their life cycles. The QCEW records contain the geographic location, industry, and quarterly total employment and wages for the near universe of private sector establishments operating in the United States. This combination of data allows us to observe the occupations and wages of workers at least once for about 1.8 million establishments with known ages.
…We summarize the internal organization of establishments using 2 variables that capture the number of hierarchical layers and the span of control. We follow Caliendo et al 2015 and Forsythe 2019 in classifying workers into managers, supervisors, and “other workers.” Using this information, we count the number of distinct layers of employment (ranging 1–3) and compute the span of control as the ratio of the number of other workers to the number of managers and supervisors in each establishment.
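The two measures can be sketched as follows (the headcounts are hypothetical; the actual worker classification follows Caliendo et al 2015 and Forsythe 2019):

```python
# Two establishment-level organization measures described above:
# layers = number of distinct occupied employment layers (managers, supervisors,
#          other workers), ranging 1-3;
# span of control = other workers per manager/supervisor.

def org_measures(managers: int, supervisors: int, others: int):
    layers = sum(1 for n in (managers, supervisors, others) if n > 0)
    denom = managers + supervisors
    span = others / denom if denom > 0 else None  # undefined with no managers/supervisors
    return layers, span

layers, span = org_measures(managers=2, supervisors=3, others=20)
# layers = 3, span = 20 / 5 = 4.0
```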
Figure 2: Layers over the Life Cycle: Heterogeneity and Changes over Time. Notes: Panel A shows the estimated age fixed effects of the number of layers of organization (log) according to equation (1) estimated by OLS with sampling weights, separately for broad industrial groups. Panel B estimates the same equation separately for 2 distinct periods. All regressions include time effects, industry, location, and multiunit status fixed effects. We normalize the number of layers for the age group up to 3 years old according to the unconditional average.
…One key advantage of our dataset is that it covers cohorts of businesses over an extensive period. In Figure 2 (panel B), we explore the relationship between hierarchical layers and age using repeated cross-sectional variation from surveys pooled separately for 2002–2009 and 2010–2017. Our results indicate that recent start-ups have relatively fewer layers, which leads us to conjecture that information and communication technology allow businesses to have relatively flatter structures. We find similar qualitative patterns for the measure of span of control. Another relevant source of heterogeneity is the multiunit status of an establishment. We explore heterogeneity in age fixed-effects between independent establishments and those that are part of multiunit organizations. Both types exhibit a statistically-significant positive association between layers (and span of control) and age.
What is newsworthy? This question should haunt everyone with a platform.
Last month, Stanford HAI published the AI Index Report 2021, a 222-page report on the state of AI, put together by an all-star team supported by a lot of data and strong connections to technical experts. What was newsworthy in this report? According to The Verge, “Artificial intelligence research continues to grow as China overtakes U.S. in AI journal citations.” In fact, the article takes its cue from what the report authors themselves deemed important, given that “China overtakes the U.S. in AI journal citations” features as one of the report’s 9 key takeaways.
Dig deeper into the data, however, and you’ll uncover alternative takeaways. Look at the cross-national statistics on average field-weighted citation impact (FWCI) of AI authors, for example, which gives a sense of the quality of the average AI publication from a region. Interestingly enough, the U.S. actually increased its relative lead in FWCI over China over the past couple years. According to the 2019 version of the AI Index, the FWCI of US publications was about 1.3× greater than China’s; in 2021, that gap has widened to almost 3× greater (p. 24).
So, working off the same materials as released in the AI index, here’s another way one could have distilled key takeaways: “The U.S. increases its lead over China in average impact of AI publications.” Or, if you wanted to be cheeky: “China lags behind Turkey in average impact of AI publications.” Just as newsworthy, in my opinion.
However, what I found most newsworthy about the AI Index went beyond horse-race reporting about “who’s winning the AI race‽” Instead, I was most intrigued by the rise of commercially available machine translation (MT) systems, covered on page 64. According to data from Intento, a startup that assesses MT services, there are now 28 cloud MT systems with pre-trained models that are commercially available—an increase from just 8 in 2017. But wait … there’s more: Intento also reports an incredible spike in MT language coverage, with 16,000+ language pairs supported by at least one MT provider (slide 33 of Intento’s “State of Machine Translation” report).
…Somehow, these incredible advances in translation are not relevant to the effect of AI on U.S.-China relations, at least based on existing discussions. Compare the complete dearth of Twitter discussions centered on the following keywords: U.S., China, and “machine translation” against what you get when you replace “machine translation” with “facial recognition.” Consider another reference point, the recently published 756-page report by the National Security Commission on Artificial Intelligence (NSCAI). 62 of those pages mention the word “weapon” at least once. Only 9 pages mention the word “translation”, and most do not substantively discuss translation (eg. the word appears in a bibliographic reference for a translated text).
Yet, I could make a convincing case that translation is more important than targeting for U.S. national security. Think about the potential of improved translation capabilities for the intelligence community. Another obvious vector is the effect of translation on diplomacy.
Top 9 Takeaways:
AI investment in drug design and discovery increased substantially: “Drugs, Cancer, Molecular, Drug Discovery” received the greatest amount of private AI investment in 2020, with more than USD 13.8 billion, 4.5× higher than 2019.
The industry shift continues: In 2019, 65% of graduating North American PhDs in AI went into industry—up from 44.4% in 2010, highlighting the greater role industry has begun to play in AI development.
Generative everything: AI systems can now compose text, audio, and images to a sufficiently high standard that humans have a hard time telling the difference between synthetic and non-synthetic outputs for some constrained applications of the technology.
AI has a diversity challenge: In 2019, 45% of new U.S. resident AI PhD graduates were white—by comparison, 2.4% were African American and 3.2% were Hispanic.
China overtakes the US in AI journal citations: After surpassing the US in the total number of journal publications several years ago, China now also leads in journal citations; however, the US has consistently (and substantially) more AI conference papers (which are also more heavily cited) than China over the last decade.
The majority of the US AI PhD grads are from abroad—and they’re staying in the US: The percentage of international students among new AI PhDs in North America continued to rise in 2019, to 64.3%—a 4.3% increase from 2018. Among foreign graduates, 81.8% stayed in the United States and 8.6% have taken jobs outside the United States.
Surveillance technologies are fast, cheap, and increasingly ubiquitous: The technologies necessary for large-scale surveillance are rapidly maturing, with techniques for image classification, face recognition, video analysis, and voice identification all seeing substantial progress in 2020.
AI ethics lacks benchmarks and consensus: Though a number of groups are producing a range of qualitative or normative outputs in the AI ethics domain, the field generally lacks benchmarks that can be used to measure or assess the relationship between broader societal discussions about technology development and the development of the technology itself. Furthermore, researchers and civil society view AI ethics as more important than industrial organizations.
AI has gained the attention of the U.S. Congress: The 116th Congress is the most AI-focused congressional session in history, with the number of mentions of AI in the congressional record more than triple that of the 115th Congress.
This time last year, Tabitha Jackson was preparing to helm her first Sundance Film Festival—she was also preparing to helm the first Sundance to be held amid a global pandemic. Because of COVID-19, the 2021 fest was held completely online, with each movie, as well as filmmaker Q&As and panels, streamed online. At the time, Jackson told me, it was an experiment, not so much a blueprint for the festival, but “an opportunity to gather evidence for what we might wish to see.” Earlier this month, she put those lessons to use. Amid plans for a virtual-live hybrid festival for 2022, Omicron cases spiked. Sundance would be going all-virtual once again.
This time, though, Jackson and her colleagues were prepared. Since they’d held the festival online last year, they knew what to do. And in planning this year’s festival as a hybrid event, they found most of the mechanisms for pivoting to streaming were already in place. When the event launched last night, it was practically seamless. For the next week, films will stream online, Q&As will take place via Zoom, and attendees looking for the social aspects of the fest will be able to hang out in The Spaceship, a virtual—it’s tempting to call it “metaverse-ian”, but no—hub for post-screening conversations. (Yes, you can go in VR.) “The saving grace was the online platforms”, Jackson says of the festival’s late-in-the-game planning pivot. “A massive silver lining is that we could have a festival we are still excited about.”
…But even if COVID is, one day, a thing of the past, another virus could take its place. And film festivals have always struggled with accessibility issues that can be mitigated by allowing people to attend from home. So perhaps hybrid festivals are the future even in the best of times. Cinema culture exists on multiple planes; it’s time film festivals did too.
We survey 15,000 Americans over several waves to investigate whether, how, and why working from home will stick after COVID-19. The pandemic drove a mass social experiment in which half of all paid hours were provided from home between May & October 2020. Our survey evidence says that about 25% of all full work days will be supplied from home after the pandemic ends, compared with just 5% before. We provide evidence on five mechanisms behind this persistent shift to working from home: diminished stigma, better-than-expected experiences working from home, investments in physical and human capital enabling working from home, reluctance to return to pre-pandemic activities, and innovation supporting working from home. We also examine some implications of a persistent shift in working arrangements: First, high-income workers, especially, will enjoy the perks of working from home. Second, we forecast that the post-pandemic shift to working from home will lower worker spending in major city centers by 5 to 10%. Third, many workers report being more productive at home than on business premises, so post-pandemic work from home plans offer the potential to raise productivity as much as 2.4%.
Covariant.ai has developed a platform that consists of off-the-shelf robot arms equipped with cameras, a special gripper, and plenty of computer power for figuring out how to grasp objects tossed into warehouse bins. The company, emerging from stealth Wednesday, announced the first commercial installations of its AI-equipped robots: picking boxes and bags of products for a German electronics retailer called Obeta.
…The company was founded in 2017 by Pieter Abbeel, a prominent AI professor at UC Berkeley, and several of his students. Abbeel pioneered the application of machine learning to robotics, and he made a name for himself in academic circles in 2010 by developing a robot capable of folding laundry (albeit very slowly). Covariant uses a range of AI techniques to teach robots how to grasp unfamiliar objects. These include reinforcement learning, in which an algorithm trains itself through trial and error, a little like the way animals learn through positive and negative feedback…Besides reinforcement learning, Abbeel says his company’s robots make use of imitation learning, a way of learning by observing demonstrations of perception and grasping by another algorithm, and meta-learning, a way of refining the learning process itself. Abbeel says the system can adapt and improve when a new batch of items arrives. “It’s training on the fly”, he says. “I don’t think anybody else is doing that in the real world.”
…But reinforcement learning is finicky and needs lots of computer power. “I used to be skeptical about reinforcement learning, but I’m not anymore”, says Geoffrey Hinton, a professor at the University of Toronto who also works part time at Google. Hinton says the amount of computer power needed to make reinforcement learning work has often seemed prohibitive, so it is striking to see commercial success. He says it is particularly impressive that Covariant’s system has been running in a commercial setting for a prolonged period.
…Peter Puchwein, vice president of innovation at Knapp, says he is particularly impressed by the way Covariant.ai’s robots can grasp even products in transparent bags, which can be difficult for cameras to perceive. “Even as a human being, if you have a box with 20 products in poly bags, it’s really hard to take just one out”, he says…Late last year, the international robot maker ABB ran a contest. It invited 20 companies to design software for its robot arms that could sort through bins of random items, from cubes to plastic bags filled with other objects. Ten of the companies were based in Europe, and the other half were in the United States. Most came nowhere close to passing the test. A few could handle most tasks but failed on the trickier cases. Covariant was the only company that could handle every task as swiftly and efficiently as a human. “We were trying to find weaknesses”, said Marc Segura, managing director of service robotics at ABB. “It is easy to reach a certain level on these tests, but it is super difficult not to show any weaknesses.”
Recently, many have predicted an imminent automation revolution, and large resulting job losses. Others have created metrics to predict new patterns in job automation vulnerability. As context to such claims, we test basic theory, two vulnerability metrics, and 251 O✱NET job features as predictors of 1505 expert reports regarding automation levels in 832 U.S. job types from 1999 to 2019.
We find that pay, employment, and vulnerability metrics are predictive (R²~0.15), but add little to the top 25 O✱NET job features, which together predict far better (R²~0.55). These best predictors seem understandable in terms of traditional kinds of automation, and have not changed over our time period. Instead, it seems that jobs have changed their features to become more suitable for automation.
We thus find no evidence yet of a revolution in the patterns or quantity of automation. And since, over this period, automation increases have predicted neither changes in pay nor employment, this suggests that workers have little to fear if such a revolution does come.
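The abstract’s central comparison is between the predictive power (R²) of a thin set of metrics and a richer set of job features. A minimal sketch of that kind of comparison on entirely synthetic data (the features, weights, and sample here are invented for illustration, not the paper’s O✱NET data):

```python
# Toy illustration: why a richer feature set can lift predictive R^2
# (here, OLS on synthetic "jobs" whose automation level depends on
# several latent features, compared against a single-feature fit).
import numpy as np

rng = np.random.default_rng(0)
n_jobs, n_feats = 800, 5
X = rng.normal(size=(n_jobs, n_feats))           # hypothetical job features
true_w = np.array([1.0, 0.8, 0.6, 0.5, 0.4])     # assumed feature weights
y = X @ true_w + rng.normal(scale=1.5, size=n_jobs)  # automation level + noise

def r_squared(X_sub, y):
    """OLS fit with intercept; return R^2 on the fitted data."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

r2_small = r_squared(X[:, :1], y)   # a single predictor (pay/vulnerability-style)
r2_full = r_squared(X, y)           # the full feature set
print(f"R^2 with 1 feature: {r2_small:.2f}, with 5 features: {r2_full:.2f}")
```

The qualitative pattern (a large R² gap between thin and rich feature sets) is the point; the exact values depend entirely on the assumed weights and noise.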
Our automation data analysis found a few surprising results…But the most interesting surprise, I think, is that while, over the last twenty years, we’ve seen no noticeable change in the factors that predict which jobs get more automated, we have seen job features change to become more suitable to automation. On average jobs have moved by about a third of a standard deviation, relative to the distribution of job automation across jobs. This is actually quite a lot. Why do jobs change this way?
Consider the example of a wave of human colonization moving over a big land area. Instead of all the land becoming colonized more densely at the same rate everywhere, what you instead see is new colonization happening much more near old colonization. In the U.S., dense concentrations started in the east and slowly spread to the west. There was little point in clearing land to grow stuff if there weren’t enough other folks nearby to which to sell your crops, and from which to buy supplies.
…Now think about the space of job tasks as a similar sort of landscape. Two tasks are adjacent when the same person tends to do both, when info or objects are passed from one to the other, when they take place close in place and time, and when their details gain from being coordinated. The ease of automating each task depends on how regular and standardized its inputs are, how easy it is to formalize the info on which key choices depend, how easy it is to evaluate and judge outputs, and how simple, stable, and mild the physical environments are in which this task is done. When the tasks near a particular task get more automated, work tends to happen in a more controlled, stable environment, the relevant info tends to be more formalized, and related info and objects get simpler, more standardized, and more reliably available. All of this makes the remaining nearby tasks easier to automate, much like how land is easier to colonize when nearby land is already colonized.
…We have long been experiencing a wave of automation passing across the space of job tasks. Some of this increase in automation has been due to falling computer tech costs, improving algorithms and tools, etc. But much of it may simply be the general potential of this tech being realized via a slow steady process with a long delay: the automation of tasks near other recently automated tasks, slowly spreading across the landscape of tasks.
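The colonization analogy can be made concrete with a toy simulation (my own sketch, not Hanson’s model; all rates are assumed): tasks sit on a line, and each unautomated task’s chance of being automated in a period grows with the number of already-automated neighbors, so automation spreads as a wave rather than appearing uniformly.

```python
# Toy neighbor-driven diffusion of automation across a 1D "task landscape".
import random

random.seed(42)
N = 200
automated = [False] * N
automated[N // 2] = True   # one "seed" task is automated at the start

BASE_P, NEIGHBOR_BONUS = 0.001, 0.15   # assumed rates, for illustration only

def step(cells):
    """One period: each unautomated task may flip, more likely near automated ones."""
    nxt = cells[:]
    for i, done in enumerate(cells):
        if done:
            continue
        neighbors = sum(cells[j] for j in (i - 1, i + 1) if 0 <= j < len(cells))
        if random.random() < BASE_P + NEIGHBOR_BONUS * neighbors:
            nxt[i] = True
    return nxt

history = [sum(automated)]
for _ in range(100):
    automated = step(automated)
    history.append(sum(automated))

print(f"automated tasks: {history[0]} -> {history[-1]} of {N}")
```

Because the flip probability is dominated by the neighbor bonus, automation advances mostly at the frontier of the already-automated region, mirroring the "rising wave" described above.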
I then analyze the transition’s effect on actor employment, and find it to be associated with a substantial increase in career terminations, not only among major stars (which film scholars emphasize), but also among more minor actors. Furthermore, I find that sound raised hazard rates generally. Finally, I calculate that the number of actors employed in movies increased substantially in the sound era…Examining the IMDb’s genre categorizations, I find evidence that plots became more complex with sound; consistent with this, the average number of credited actors per film rose. The number of films released also rose, so that the net effect was a substantial sound-era increase in the annual employment of motion picture actors.
…Despite the success of The Jazz Singer in late 1927, many industry pundits initially regarded sound as a fad, and even supporters expected talking and silent films to coexist indefinitely. The author of a Harvard case study of a cinema considering the conversion to sound in 1928 wrote, “It was difficult to judge the permanence of the appeal of sound pictures. Theatrical managers were convinced that the appeal at first was largely one of curiosity” (Clayton Theater 1930, 491). Jack Warner, the champion of the talking picture, said as late as 1928 that he expected most future films to be part sound and part silent (Crafton 1997, 174). Adolph Zukor, President of Paramount Pictures, was quoted in late 1928 as saying,
“By no means is the silent picture gone or even diminished in importance. … there always have been subjects which could not be augmented in value or strength by the addition of sound and dialogue.” (The Film Daily 1929 Yearbook, 513)
As a new general-purpose technology, robots have the potential to radically transform employment and organizations.
In contrast to prior studies that predict dramatic employment declines, we find that investments in robotics are associated with increases in total firm employment, but decreases in the total number of managers. Similarly, we find that robots are associated with an increase in the span of control for supervisors remaining within the organization. We also provide evidence that robot adoption is not motivated by the desire to reduce labor costs, but is instead related to improving product and service quality.
Our findings are consistent with the notion that robots reduce variance in production processes, diminishing the need for managers to monitor worker activities to ensure production quality. As additional evidence, we also find robot investments predict improved performance measurement and increased adoption of incentive pay based on individual employee performance. With respect to changes in skill composition within the organization, robots predict decreases in employment for middle-skilled workers, but increases in employment for low-skilled and high-skilled workers. We also find robots not only predict changes in employment, but also corresponding adaptations in organizational structure. Robot investments are associated with both centralization and decentralization of decision-making authority depending upon the task, but decision rights in either case are reassigned away from the managerial level of the hierarchy.
Overall, our results suggest that robots have distinct and profound effects on employment and organizations that require fundamental changes in firm practices and organizational design.
A handful of studies have investigated the effects of robots on workers in advanced economies. According to a recent report from the World Bank (2016), 1.8 billion jobs in developing countries are susceptible to automation. Given the inability of labor markets to adjust to rapid changes, there is a growing concern that the effect of automation and robotization in emerging economies may increase inequality and social unrest. Yet, we still know very little about the impact of robots in developing countries.
In this paper we analyze the effects of exposure to industrial robots in the Chinese labor market.
Using aggregate data from Chinese prefectural cities (2000–2016) and individual longitudinal data from China, we find a large negative impact of robot exposure on the employment and wages of Chinese workers. Effects are concentrated in the state-owned sector and are larger among low-skilled, male, and prime-age and older workers. Furthermore, we find evidence that exposure to robots affected internal mobility and increased the number of labor-related strikes and protests.
Recent advances in artificial intelligence and robotics have generated a robust debate about the future of work. An analogous debate occurred in the late 19th century when mechanization first transformed manufacturing. We analyze an extraordinary dataset from the late 19th century, the Hand and Machine Labor study carried out by the US Department of Labor in the mid-1890s. We focus on transitions at the task level from hand to machine production, and on the impact of inanimate power, especially of steam power, on labor productivity. Our analysis sheds light on the ability of modern task-based models to account for the effects of historical mechanization.
Quantifying automation in the Industrial Revolution: We all know that the Industrial Revolution involved the substantial substitution of human labour for machine labour. This 2019 paper from a trio of economists paints a clear quantitative picture of automation in this period, using the 1899 US Hand and Machine Labor Study.
The dataset: The HML study is a remarkable dataset that has only recently been analyzed by economic historians. Commissioned by Congress and collected by the Bureau of Labor Statistics, the study gathered observations on the production of 626 manufactured units (e.g. ‘men’s laced shoes’) and recorded in detail the tasks involved in their production and the relevant inputs to each task. For each unit, this data was collected for both machine production and hand production.
Key findings: The paper looks at transitions between hand-labour and machine-labour across tasks. It finds clear evidence for both the displacement and productivity effects of automation on labour:
67% of hand tasks transitioned 1-to-1 to being performed by machines and a further 28% of hand tasks were subdivided or consolidated into machine tasks. Only 4% of hand tasks were abandoned.
New tasks (not previously done by hand) represented one-third of machine tasks.
Machine labour reduced total production time by a factor of 7.
The net effect of new tasks on labour demand was positive—time taken up by new machine tasks was 5× the time lost on abandoned hand tasks.
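The ratios in the bullets above can be combined into a worked example. The absolute hour figures below are invented for illustration; only the ratios (7× speedup, 5× new-task-to-abandoned-task time) come from the summary:

```python
# Hypothetical worked example using the study's headline ratios.
hand_total_time = 700.0                      # assumed hours to make a unit by hand
machine_total_time = hand_total_time / 7     # "reduced total production time by a factor of 7"

abandoned_hand_time = 4.0                    # assumed hours spent on abandoned hand tasks
new_machine_task_time = 5 * abandoned_hand_time  # new tasks took 5x the time lost

net_new_task_effect = new_machine_task_time - abandoned_hand_time
print(f"machine production time: {machine_total_time:.0f} hours "
      f"(vs {hand_total_time:.0f} by hand)")
print(f"net labor-demand effect of task churn: +{net_new_task_effect:.0f} hours")
```

The point of the arithmetic: even amid a 7× productivity gain, the task churn itself (new tasks minus abandoned tasks) added labour demand rather than subtracting it.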
Matthew’s view: The Industrial Revolution is perhaps the most transformative period in human history so far, with massive effects on labour, living standards, and other important variables. It seems likely that advances in AI could have a similarly transformative effect on society, and that we are in a position to influence this transformation and ensure that it goes well. This makes understanding past transitions particularly important. Aside from the paper’s object-level conclusions, I’m struck by how valuable this diligent empirical work from the 1890s has proved, and by the foresight of the people who saw the importance of gathering high-quality data in the midst of this transition. This should serve as inspiration for those involved in efforts to track metrics of AI progress.]
This paper raises basic questions about the process of economic growth. It questions the assumption, nearly universal since Solow’s seminal contributions of the 1950s, that economic growth is a continuous process that will persist forever. There was virtually no growth before 1750, and thus there is no guarantee that growth will continue indefinitely. Rather, the paper suggests that the rapid progress made over the past 250 years could well turn out to be a unique episode in human history. The paper is only about the United States and views the future from 2007 while pretending that the financial crisis did not happen. Its point of departure is growth in per-capita real GDP in the frontier country since 1300, the U.K. until 1906 and the U.S. afterwards. Growth in this frontier gradually accelerated after 1750, reached a peak in the middle of the 20th century, and has been slowing down since. The paper is about “how much further could the frontier growth rate decline?”
The analysis links periods of slow and rapid growth to the timing of the three industrial revolutions (IRs), that is, IR #1 (steam, railroads) from 1750 to 1830; IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and IR #3 (computers, the web, mobile phones) from 1960 to the present. It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972. Once the spin-off inventions from IR #2 (airplanes, air conditioning, interstate highways) had run their course, productivity growth during 1972–96 was much slower than before. In contrast, IR #3 created only a short-lived growth revival between 1996 and 2004. Many of the original and spin-off inventions of IR #2 could happen only once—urbanization, transportation speed, the freedom of females from the drudgery of carrying tons of water per year, and the role of central heating and air conditioning in achieving a year-round constant temperature.
Even if innovation were to continue into the future at the rate of the two decades before 2007, the U.S. faces six headwinds that are in the process of dragging long-term growth to half or less of the 1.9% annual rate experienced between 1860 and 2007. These include demography, education, inequality, globalization, energy/environment, and the overhang of consumer and government debt. A provocative “exercise in subtraction” suggests that future growth in consumption per capita for the bottom 99% of the income distribution could fall below 0.5% per year for an extended period of decades.
So we have perhaps five eras during which the thing whose growth is at issue—the universe, brains, the hunting economy, the farming economy, and the industrial economy—doubled in size at fixed intervals. Each era of growth before now, however, has eventually switched suddenly to a new era having a growth rate that was between 60 and 250× as fast. Each switch was completed in much less time than it had taken the previous regime to double—from a few millennia for the agricultural revolution to a few centuries for the industrial one. These switches constituted singularities…A few exceedingly rare innovations, however, do suddenly change everything. One such innovation led to agriculture; another led to industry.
…If current trends continue, we should have computer hardware and brain scans fast and cheap enough to support this scenario in a few decades…Though it might cost many billions of dollars to build one such machine, the first copy might cost only millions and the millionth copy perhaps thousands or less. Mass production could then supply what has so far been the one factor of production that has remained critically scarce throughout human history: intelligent, highly trained labor.
…The relative advantages of humans and machines vary from one task to the next. Imagine a chart resembling a topographic cross section, with the tasks that are “most human” forming a human advantage curve on the higher ground. Here you find chores best done by humans, like gourmet cooking or elite hairdressing. Then there is a “shore” consisting of tasks that humans and machines are equally able to perform and, beyond them an “ocean” of tasks best done by machines. When machines get cheaper or smarter or both, the water level rises, as it were, and the shore moves inland.
This sea change has two effects. First, machines will substitute for humans by taking over newly “flooded” tasks. Second, doing machine tasks better complements human tasks, raising the value of doing them well. Human wages may rise or fall, depending on which effect is stronger. Wages could fall so far that most humans could not live on them. For example, in the 1920s, when the mass-produced automobile came along, it was produced largely by machines, with human help. So machines dominated that function—the assembly of cars. The resulting proliferation of machine-assembled cars raised the value of related human tasks, such as designing those cars, because the financial stakes were now much higher. Sure enough, automobiles raised the wages of machinists and designers—in these cases, the complementary effect dominated. At the same time, the automobile industry lowered the pay of saddle makers and stable hands, an example of the substitution effect.
So far, machines have displaced relatively few human workers, and when they have done so, they have in most cases greatly raised the incomes of other workers. That is, the complementary effect has outweighed the substitution effect—but this trend need not continue. In our graph of machines and humans, imagine that the ocean of machine tasks reached a wide plateau. This would happen if, for instance, machines were almost capable enough to take on a vast array of human jobs. For example, it might occur if machines were on the very cusp of human-level cognition. In this situation, a small additional rise in sea level would flood that plateau and push the shoreline so far inland that a huge number of important tasks formerly in the human realm were now achievable with machines. We’d expect such a wide plateau if the cheapest smart machines were whole-brain emulations whose relative abilities on most tasks should be close to those of human beings.
…Together these effects seem quite capable of producing economic doubling times much shorter than anything the world has ever seen. And note that this forecast does not depend on the rate at which we achieve machine intelligence capabilities or the rate at which the intelligence of machines increases. Merely having computer-like machines able to do most important mental tasks as well as humans do seems sufficient to produce very rapid growth.
The evolution of technology causes human capital to become obsolete. We study this phenomenon in an overlapping generations setting, assuming that technology evolves stochastically and that older workers find updating uneconomic. Experience and learning by doing may offer the old some income protection, but technology advance always turns them into has-beens to some degree.
We focus on the determinants (demand elasticities, persistence of technology change, etc.) of the severity of the has-beens effect. It can be large, even leading to negatively sloped within-occupation age-earnings profiles and an occupation dominated by a few young, high-income workers.
Architecture displays the sort of features the theory identifies as magnifying the has-beens effect, and both anecdotes and some data suggest that the has-beens effect in architecture is extreme indeed.
We explore the effect of computerization on productivity and output growth using data from 527 large U.S. firms over 1987–1994. We find that computerization makes a contribution to measured productivity and output growth in the short term (using 1-year differences) that is consistent with normal returns to computer investments. However, the productivity and output contributions associated with computerization are up to 5× greater over long periods (using 5-year to 7-year differences). The results suggest that the observed contribution of computerization is accompanied by relatively large and time-consuming investments in complementary inputs, such as organizational capital, that may be omitted in conventional calculations of productivity. The large long-run contribution of computers and their associated complements that we uncover may partially explain the subsequent investment surge in computers in the late 1990s.
This essay discusses the effect of technical change on wage inequality.
I argue that the behavior of wages and returns to schooling indicates that technical change has been skill-biased during the past 60 years. Furthermore, the recent increase in inequality is most likely due to an acceleration in skill bias. In contrast to 20th-century developments, much of the technical change during the early 19th century appears to be skill-replacing.
I suggest that this is because the increased supply of unskilled workers in the English cities made the introduction of these technologies profitable. On the other hand, the 20th century has been characterized by skill-biased technical change because the rapid increase in the supply of skilled workers has induced the development of skill-complementary technologies. The recent acceleration in skill bias is in turn likely to have been a response to the acceleration in the supply of skills during the past several decades.
A world product time series covering two million years is well fit by either a sum of four exponentials, or a constant elasticity of substitution (CES) combination of three exponential growth modes: “hunting”, “farming”, and “industry.”
The CES parameters suggest that farming substituted for hunting, while industry complemented farming, making the industrial revolution a smoother transition. Each mode grew world product by a factor of a few hundred, and grew a hundred times faster than its predecessor.
This weakly suggests that within the next century a new mode might appear with a doubling time measured in days, not years.
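A back-of-the-envelope check on that closing claim, using the abstract’s ~100× speedup per mode. The starting figure of ~15 years for the industrial mode’s doubling time is an assumption for illustration, not taken from the paper:

```python
# If each growth mode grows ~100x faster than its predecessor, its doubling
# time shrinks by ~100x. From an assumed industrial doubling time of ~15
# years, a successor mode would double in roughly weeks to days.
industry_doubling_years = 15   # assumed; roughly the modern world-economy pace
speedup = 100                  # "grew a hundred times faster than its predecessor"

next_mode_doubling_days = industry_doubling_years * 365 / speedup
print(f"next mode doubling time: ~{next_mode_doubling_days:.0f} days")
```

The exact answer is sensitive to the assumed starting point, but any plausible industrial-era doubling time divided by ~100 lands in the days-to-months range, which is the paper’s point.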
To understand the economic value of computers, one must broaden the traditional definition of both the technology and its effects. Case studies and firm-level econometric evidence suggest that: (1) organizational “investments” have a large influence on the value of IT investments; and (2) the benefits of IT investment are often intangible and disproportionately difficult to measure. Our analysis suggests that the link between IT and increased productivity emerged well before the recent surge in the aggregate productivity statistics and that the current macroeconomic productivity revival may in part reflect the contributions of intangible capital accumulated in the past.
…What We Now Know About Computers and Productivity: Research on computers and productivity is entering a new phase. While the first wave of studies sought to document the relationship between investments in computers and increases in productivity, new research is focusing on how to make more computerization effective. Computerization does not automatically increase productivity, but it is an essential component of a broader system of organizational changes which does increase productivity. As the impact of computers becomes greater and more pervasive, it is increasingly important to consider these organizational changes as an integral part of the computerization process.
This is not the first time that a major general purpose technology like computers required an expensive and time-consuming period of restructuring. Substantial productivity improvement from electric motors did not emerge until almost 40 years after their introduction into factories [^7^](/docs/economics/automation/1990-david.pdf “‘The Dynamo and the Computer: A Historical Perspective on the Modern Productivity Paradox’, David 1990”). The first use involved swapping gargantuan motors for large steam engines with no redesign of work processes. The big productivity gains came when engineers realized that the factory layout no longer had to be dictated by the placement of power transmitting shafts and rods. They re-engineered the factory so that machines were distributed throughout the factory, each driven by a separate, small electric motor. This made it possible to arrange the machines in accordance with the logic of work flow instead of in proximity to the central power unit.
It has also taken some time for businesses to realize the transformative potential of information technology to revolutionize work. However, the statistical evidence suggests that revolution is occurring much more quickly this time.
Many observers of recent trends in the industrialized economies of the West have been perplexed by the conjuncture of rapid technological innovation with disappointingly slow gains in measured productivity. A generation of economists who were brought up to identify increases in total factor productivity indexes with “technical progress” has found it quite paradoxical for the growth accountants’ residual measure of “the advance of knowledge” to have vanished at the very same time that a wave of major innovations was appearing: in microelectronics, in communications technologies based on lasers and fiber optics, in composite materials, and in biotechnology…This latter aspect of the so-called “productivity paradox” attained popular currency in the succinct formulation attributed to Robert Solow: “We see the computers everywhere but in the productivity statistics.”
…If, however, we are prepared to approach the matter from the perspective afforded by the economic history of the large technical systems characteristic of network industries, and to keep in mind a time-scale appropriate for thinking about transitions from established technological regimes to their respective successor regimes, many features of the so-called productivity paradox will be found to be neither so unprecedented nor so puzzling as they might otherwise appear.
…Computer and dynamo each form the nodal elements of physically distributed (transmission) networks. Both occupy key positions in a web of strongly complementary technical relationships that give rise to “network externality effects” of various kinds, and so make issues of compatibility standardization important for business strategy and public policy (see my 1987 paper and my paper with Julie Bunn, 1988). In both instances, we can recognize the emergence of an extended trajectory of incremental technical improvements, the gradual and protracted process of diffusion into widespread use, and the confluence with other streams of technological innovation, all of which are interdependent features of the dynamic process through which a general purpose engine acquires a broad domain of specific applications (see Timothy Bresnahan and Manuel Trajtenberg, 1989). Moreover, each of the principal empirical phenomena that make up modern perceptions of a productivity paradox had its striking historical precedent in the conditions that obtained a little less than a century ago in the industrialized West, including the pronounced slowdown in industrial and aggregate productivity growth experienced during the 1890–1913 era by the two leading industrial countries, Britain and the United States (see my 1989 paper, pp. 12–15, for details). In 1900, contemporary observers well might have remarked that the electric dynamos were to be seen “everywhere but in the productivity statistics!”
Many observers of contemporary economic trends have been perplexed by the contemporary conjuncture of rapid technological innovation with disappointingly slow gains in measured productivity. The purpose of this essay is to show modern economists, and others who share their puzzlement in this matter, the direct relevance to their concerns of historical studies that trace the evolution of techno-economic regimes formed around “general purpose engines”. For this purpose an explicit parallel is drawn between two such engines—the computer and the dynamo. Although the analogy between information technology and electrical technology would have many limitations were it to be interpreted very literally, it nevertheless proves illuminating. Each of the principal empirical phenomena that go to make up modern perceptions of a “productivity paradox”, had a striking historical precedent in the conditions that obtained a little less than a century ago in the industrialized West. In 1900 contemporaries might well have said that the electric dynamos were to be seen “everywhere but in the economic statistics”. Exploring the reasons for that state of affairs, and the features of commonality between computer and dynamo—particularly in the dynamics of their diffusion and their incremental improvement, and the problems of capturing their initial effects with conventional productivity measures—provides some clues to help understand our current situation. The paper stresses the importance of keeping an appropriately long time-frame in mind when discussing the connections between the information revolution and productivity growth, as well as appreciating the contingent, path-dependent nature of the process of transition between one techno-economic regime and the next.
[Keywords: productivity slowdown; diffusion of innovations; economics of technology; information technology; electric power industry]