My Mistakes

Things I have changed my mind about.
personal, philosophy, predictions, Bitcoin, survey, Bayes
2011-09-15–2018-02-08 in progress certainty: unlikely importance: 7


“One does not care to acknowledge the mistakes of one’s youth.”

, 1

It is salutary for the soul to review past events and perhaps keep a list of things one no longer believes, since such crises are rare2 and so easily pass from memory (there is no feeling of being wrong, only of having been wrong3). One does not need an elaborate ritual (fun as they are to read about) to change one’s mind, but the changes must happen. If you are not changing, you are not growing4; no one has won the belief lottery and has a monopoly on truth5. To the honest inquirer, all surprises are pleasant ones6.

Changes

Only the most clever and the most stupid cannot change.7

This list is not for specific facts of which there are too many to record, nor is it for falsified predictions like my belief that George W. Bush would not be elected (for those see or my PredictionBook.com page), nor mistakes in my private life (which go into a private file), nor things I never had an initial strong position on (Windows vs Linux, Java vs Haskell). The following are some major ideas or sets of ideas that I have changed my mind about:

Religion

For I count being refuted a greater good, insofar as it is a greater good to be rid of the greatest evil from oneself than to rid someone else of it. I don’t suppose that any evil for a man is as great as false belief about the things we’re discussing right now…8

I think religion was the first subject in my life that I took seriously. As best as I can recall at this point, I have no “deconversion story” or tale to tell, since I don’t remember ever seriously believing9 - the stories in the Bible or at my Catholic church were interesting, but they were obviously fiction to some degree. I wasn’t going to reject religion out of hand because some of the stories were made-up (any more than I believed George Washington didn’t exist because the story of him chopping down the cherry tree was made-up), but the big claims didn’t seem to be panning out either:

  1. My prayers received no answers of any kind, not even a voice in my head
  2. I didn’t see any miracles or intercessions like I expected from an omnipotent loving god

The latter was probably due to the cartoons I watched on TV, which seemed quite sensible to me: a powerful figure like a god would act in all sorts of ways. If there really was a god, that was something that ought to be quite obvious to anyone who ‘had eyes to see’. I had more evidence that Santa or China existed than that God did, which seemed backwards. Explanations for the absence of divine action ranged from the strained to ones so ludicrously bad that they corroded what little faith I possessed10. I would later recognize my own doubts in passages of skeptical authors like Edward Gibbon and his Decline:

…From the first of the fathers to the last of the popes, a succession of bishops, of saints, of martyrs, and of miracles, is continued without interruption; and the progress of superstition was so gradual, and almost imperceptible, that we know not in what particular link we should break the chain of tradition. Every age bears testimony to the wonderful events by which it was distinguished, and its testimony appears no less weighty and respectable than that of the preceding generation, till we are insensibly led on to accuse our own inconsistency, if in the eighth or in the twelfth century we deny to the venerable Bede, or to the holy Bernard, the same degree of confidence which, in the second century, we had so liberally granted to Justin or to Irenaeus. If the truth of any of those miracles is appreciated by their apparent use and propriety, every age had unbelievers to convince, heretics to confute, and idolatrous nations to convert; and sufficient motives might always be produced to justify the interposition of Heaven. And yet, since every friend to revelation is persuaded of the reality, and every reasonable man is convinced of the cessation, of miraculous powers, it is evident that there must have been some period in which they were either suddenly or gradually withdrawn from the Christian church. Whatever aera is chosen for that purpose, the death of the apostles, the conversion of the Roman empire, or the extinction of the Arian heresy, the insensibility of the Christians who lived at that time will equally afford a just matter of surprise. They still supported their pretensions after they had lost their power. Credulity performed the office of faith; fanaticism was permitted to assume the language of inspiration, and the effects of accident or contrivance were ascribed to supernatural causes. The recent experience of genuine miracles should have instructed the Christian world in the ways of Providence, and habituated their eye (if we may use a very inadequate expression) to the style of the divine artist…Whatever opinion may be entertained of the miracles of the primitive church since the time of the apostles, this unresisting softness of temper, so conspicuous among the believers of the second and third centuries, proved of some accidental benefit to the cause of truth and religion. In modern times, a latent and even involuntary scepticism adheres to the most pious dispositions. Their admission of supernatural truths is much less an active consent than a cold and passive acquiescence.

I have seen these reasons mocked as simplistic and puerile, and I was certainly aware that there were subtle arguments which intelligent philosophers believed resolved the problem of evil (such as the free-will defense, which is valid but which I do not consider sound, since it requires the meaningless concept of free will) and that Christians of various stripes had various complicated explanations for why this world was consistent with there being a God (if for no other reason than that I observed there were theists as intelligent as or more intelligent than me). But the basic concept seemed confused, free will was an even more dubious plank to stand on, and in general the entire complex of historical claims, metaphysics, and activities of religious people did not seem convincing. (Richard Carrier’s 2011 Why I Am Not A Christian expresses the general tenor of my misgivings, especially after I checked out everything the library had on , , the Gnostics, and early Christianity - Je n’avais pas besoin de cette hypothèse-là, “I had no need of that hypothesis”, basically.)

So I never believed (although it was obvious enough that there was no point in discussing this since it might just lead to me going to church more and sitting on the hard wooden pews), but there was still the troubling matter of Heaven & Hell: those infinities meant I couldn’t simply dismiss religion and continue reading about dinosaurs or Alcatraz. If I got religion wrong, I would have gotten literally the most important possible thing wrong! Nothing else was as important - if you’re wrong about a round earth, at worst you will never be a good geographer or astronomer; if you’re wrong about believing in astrology, at worst you waste time and money; if you’re wrong about evolution and biology, at worst you endanger your life; and so on. But if you’re wrong about religion, wasting your life is about the least of the consequences. And everyone accepts a religion or at least the legitimacy of religious claims, so it would be unspeakably arrogant of a kid to dismiss religion entirely - that sort of evidence is simply not there1112. (Oddly enough, atheists - who are not immediately shown to be mistaken or fools - are even rarer in books and cartoons than they are in real life.)

Kids actually are kind of skeptical if they have reason to be skeptical, and likewise will believe all sorts of strange things if the source was previously trustworthy13. This is as it should be! Kids cannot come prewired with 100% correct beliefs, and must be able to learn all sorts of strange (but true) things from reliable authorities; these strategies are exactly what one would advise. It is not their fault that some of the most reliable authorities in their lives (their parents) are mistaken about one major set of beliefs. They simply have bad epistemic luck.

So I read the Bible, which veered from boring to incoherent to disgusting. (I became a fan of the Wisdom literature, however, and still periodically read the Book of Job, Ecclesiastes, and Proverbs.) That didn’t help much. Well, maybe Christianity was not the right religion? My elementary school library had a rather strange selection of books which included various Eastern texts or anthologies (I remember in particular one anthology on meditation, which was a hodge-podge of religious instruction manuals, essays, and scientific studies on meditation - that took me a long time to read, and it was only in high school and college that I really became comfortable reading psychology papers). I continued reading in this vein for years, in between all my more normal readings. The Koran was interesting and in general much better than the Bible. Shinto texts were worthless mythologizing. Taoism had some very good early texts (the Chuang-tzu in particular) but then bizarrely degenerated into alchemy. Buddhism was strange: I rather liked the general philosophical approach, but there were many populist elements in Mahayana texts that bothered me. Hinduism had a strange beauty, but my reaction was similar to that of the early translators, who condemned it for sloth and lassitude. I also considered the Occult seriously and began reading the skeptical literature on that and related topics (see the later section).

By this point in my reading, I had reached middle school; this summary makes my reading sound more systematic than it was. I still hadn’t found any especially good reason to believe in God or any gods, and had a jaundiced view of many texts I had read. Higher criticism was a shock when I finally became capable of reading it and source-texts like Josephus: it’s amazing just how uncertain, variable, self-contradictory, edited, and historically inconsistent both the Old and New Testaments are. There are hundreds of major variants of the various books and countless thousands of textual variants (many of key theological passages), leaving traces of ideological fabrication throughout, besides the casual falsification of many historical events for dramatic effect or for faked coherence with Old Testament ‘prophecies’ (we all know of false claims like the Massacre of the Innocents, or Jesus being born of a ‘virgin’ to fit an erroneous translation, or the sun stopping, or the Temple veil being ripped, or the Roman census), but it’s dramatic to find that Jesus was an utter nobody even in Josephus - where the only mention of Jesus seems to be falsified (by Christians, of course) despite Josephus name-dropping constantly. And speaking of Josephus, it’s hard not to be impressed that while one recites every Sunday how Jesus was “crucified under Pontius Pilate” - proof, we are told, of Jesus’s historicity and that he was not a story or mythology - the Pontius Pilate in Josephus is a corrupt, merciless Roman official who doesn’t hesitate to get his hands bloody if necessary and who doesn’t match the Gospel story in the slightest; indeed, reading through Josephus’s accounts of the constant turmoil in Jerusalem, of various false prophets and messiahs and rebels (hmm…), and of the difficulties the authorities faced, one can’t believe the entire story of Pilate, because for him to neither immediately execute Jesus nor leave the whole matter until well after the very politically sensitive holiday is to assume both the Roman and Jewish officials suffered a sudden attack of the stupids and a collective amnesia about how they usually dealt with such problems. The whole story is blatant rubbish! Yet my religion teachers kept occasionally emphasizing how Jesus was a historical figure. The mythicist case is not compelling, but the mythicists do a good job of showing how elements of the story were standard tropes of the ancient world, with many parts of the narrative having multiple precedents or weirdly-interpreted Old Testament justifications. In short, having read through higher Biblical criticism, it’s no surprise that it is such anathema to modern Christian sects and that scriptural inerrancy developed in reaction to it.

At some point, I shrugged and gave up and decided I was an atheist14 because certainly I felt nothing15. Theology was interesting to some extent, but there were better things to read about. (My literary interest in Taoism and philosophical interest in Buddhism remain, but I put no stock in any supernatural claims they might make.)

The American Revolution

In middle school, we were assigned a pro-con debate about the American Revolution; I happened to be on the pro side, but as I read through the arguments, I became increasingly disturbed and eventually decided that the pro-Revolution arguments were weak or fallacious.

The Revolution was a bloodbath with ~100,000 casualties or fatalities, followed by the flight of ~62,000 Loyalist/Tory refugees who feared retaliation and expropriation (the ones who stayed did not escape persecution); this is a butcher’s bill that did not seem justified in the least by anything in Britain or America’s subsequent history (what, were the British going to randomly massacre Americans for fun?), even now with a population of >300 million, and much less back when the population was 1/100th the size. Independence was granted to similar English colonies at the smaller price of “waiting a while”: Canada was essentially autonomous by 1867 (less than a century later), and Australia was first settled in 1788 with autonomous colonies not long behind and the current Commonwealth formed by 1901. (Nor did Canada or Australia suffer worse at England’s hands during the waiting period than, say, America in that time suffered at its own hands.) In the long run, independence may have been good for the USA, but this would be due to sheer accident: the British were holding the frontier at the Appalachians (see the Royal Proclamation of 1763), and Napoleon likely would not have been willing to engage in the Louisiana Purchase with English colonies inasmuch as he was at war with England. (Assuming we see this as a good thing: Bryan Caplan describes that as removing “the last real check on American aggression against the Indians”.)

Neither of these is a very strong argument; the British could easily have revoked the Proclamation in the face of colonial resistance (and in practice did16), and Napoleon could not have held onto New France for very long against the British fleets. The argument from ‘freedom’ is a buzzword or unsupported by the facts - Canada and Australia are hardly hellhole bastions of totalitarianism, and are ranked as being as free as the USA. (Steve Sailer asks “Yet how much real difference did the very different political paths of America and Canada make in the long run?”; could we have been Canada?)

And there are important arguments for the opposite, that America would have been better off under British rule - Britain moved against slavery very early on and likely would have ended slavery in the colonies as well. (Some have argued that with continued control of the southern colonies, Britain would not have been able to do this; but the usual arguments for the Revolution center on the tyranny of Britain - so was the dog wagging the tail, or the tail the dog?) The South crucially depended on England’s tacit support (seeing the South as a counterweight to the dangerous North?), so the Civil War would either never have started or have been suppressed very quickly. The Civil War would also have lacked its intellectual justification of states’ rights if the states had remained Crown colonies. The Civil War was so bloody and destructive17 that avoiding it is worth a great deal indeed. And then there come WWI and WWII. It is not hard to see how America remaining a colony would have been better for both Europe and America.

Aside from the better outcomes for slaves and Indians, it’s been suggested that America would have benefited from maintaining a parliamentary constitutional-monarchy democracy rather than inventing its particular president-oriented republic (a view that has some more appeal in the 2000s, but is more broadly supported by the popularity of parliamentary democracies globally and their apparent greater stability & success compared to the more American-style systems in unstable & coup-prone Latin America).

Since that paradigm shift in middle school, my view has changed little:

  • Crane Brinton’s The Anatomy of Revolution confirmed my beliefs with statistics about the economic class of participants: naked financial self-interest is not a very convincing argument for plunging a country into war, given that England had incurred substantial debt defending and expanding the colonies, and the colonists’ tax burden - which they endlessly complained of - was comically tiny compared to England proper’s. One of the interesting points Brinton makes is that, contrary to popular belief, revolutions do not tend to occur at times of poverty or increasing wealth inequality; indeed, before the American Revolution, the colonists were less taxed, wealthier & more equal than the English.

  • Continuing the economic theme, the burdens on the American colonists such as the Navigation Acts are now considered not to have been burdensome at all, but negligible or even positive, especially compared to independence. The famed Scottish economist Adam Smith supported the Navigation Acts as a critical part of the Empire’s defense18 (which included the American colonies; but see again the colonies’ gratitude for the French-Indian War). Their light burden has been the economic-history consensus since the discussion was sparked in the 1960s (eg. Thomas 1965, Thomas 1968): in 1994, 198 economic historians were surveyed and asked several questions on this point, finding that:

    1. 132 disagreed with the proposition “One of the primary causes of the American Revolution was the behavior of British and Scottish merchants in the 1760s and 1770s, which threatened the abilities of American merchants to engage in new or even traditional economic pursuits.”
    2. 178 agreed or partially agreed that “The costs imposed on the colonists by the trade restrictions of the Navigation Acts were small.”
    3. 111 disagreed that “The economic burden of British policies was the spark to the American Revolution.”
    4. 117 agreed or partially agreed that “The personal economic interests of delegates to the Constitutional Convention generally had a [substantial] effect on their voting behavior.”
  • Mencius Moldbug discussed a good deal of primary source material which supported my interpretation.

    I particularly enjoyed his description of the Pulitzer-winning The Ideological Origins of the American Revolution, a study of the popular circulars and essays (of which Thomas Paine’s Common Sense is only the most famous): the author finds that the rebels and their leaders believed there was a conspiracy by English elites to strip them of their freedoms and crush the Protestants under the yoke of the Church of England.

    Bailyn points out that no traces of any such conspiracy have ever been found in the diaries or memoranda or letters of said elites. Hence the Founding Fathers were, as Moldbug claimed, exactly analogous to or . Moldbug further points out that reality has directly contradicted their predictions, as both the Monarchy and the Church of England have seen their power continuously decrease to their present-day ceremonial status, a diminution already in progress long before the American Revolution.

  • Possibly on Moldbug’s advice, I then read volume 1 of Murray Rothbard’s Conceived in Liberty. I was unimpressed. Rothbard seems to think he is justifying the Revolution as a noble libertarian thing (except for those other scoundrels who just want to take over); but all I saw were scoundrels.

  • Attempting to take an outside view and ignore the cult built up around the Founding Fathers, viewing them as a cynical foreigner might, the Fathers do not necessarily come off well.

    For example, one can compare George Washington to Robert Mugabe: both led a guerrilla revolution of British colonies against the country which had built their colony up into a wealthy regional powerhouse, and they or their allies employed mobs and terrorist tactics; both oversaw hyperinflation of their currency; both expropriated politically disfavored groups, and engaged in give-aways to supporters (Mugabe redistributed land to black supporters, Washington approved the assumption of states’ war-debts - an incredible windfall for the Hamilton-connected speculators, who supported the ); both were overwhelmingly voted into office and commanded mass popularity even after major failures of their policies became evident (economic collapse & hyperinflation for Mugabe, the Whiskey Rebellion for Washington), being hailed as fathers of their countries; and both wound up among the wealthiest men, if not the wealthiest man, in their country (Mugabe’s fortune has been estimated at anywhere from $3b to $10b; Washington’s, in inflation-adjusted terms, has been estimated at $0.5b).

  • Jeremy Bentham amusingly eviscerates the Declaration of Independence’s complaints.

Communism

In roughly middle school as well, I was very interested in economic injustice and guerrilla warfare, which naturally led me straight into the communist literature. I grew out of this when I realized that while I might not be able to pinpoint the problems in communism, a lot of that was due to the sheer obscurity and bullshitting in the literature (I finally gave up after reading twice, concluding that the problem was not me - Marxism really was that intellectually worthless), and the practical results with economies & human lives spoke for themselves: the ideas were tried in so many countries by so many groups in so many different circumstances over so many decades that if there were anything to them, at least one country would have succeeded. In comparison, even with the broadest sample including hellholes like the Belgian Congo, capitalism can still point to success stories like Japan.

(Similar arguments can be used for science and religion: after early science got the basic inductive empirical formula right, it took off and within 2 or 3 centuries had conquered the intellectual world and assisted the conquest of much of the real world too; in contrast, 2 or 3 centuries after Christianity began, its texts were beginning to finally congeal into the beginnings of a canon, it was minor, and the Romans were still making occasional efforts to exterminate this irksome religion. Charles Murray, in a book I otherwise approve of, attempts to argue in Human Accomplishment that Christianity was a key factor in the great accomplishments of Western science & technology by some gibberish involving human dignity; the argument is intrinsically absurd - Greek astronomy and philosophy were active when Christianity started, St. Paul literally debated the Greek philosophers in Athens, and yet Christianity did not spark any revolution in the 100s, or 200s, or 300s, or for the next millennium, nor the next millennium and a half. It would literally be fairer to attribute science to William the Conqueror, because that’s a gap one-third the size and there’s at least a direct line from William the Conqueror to the Royal Society! If we try to be fairer and say it’s late Christianity as exemplified by the philosophy of Thomas Aquinas - as influenced by non-Christian thought like Aristotle as it is - that still leaves us a gap of something like 300-500 years. Let us say I would find Murray’s argument of more interest if it were coming from a non-Christian…)

The Occult

This is not a particular error but a whole class of them. I was sure that the overall theistic explanations were false, but surely there were real phenomena going on? I’d read up on individual things like Nostradamus’s prophecies or the Lance of Longinus, check the skeptics’ literature, and disbelieve; rinse and repeat until I finally dismissed the entire area, with some exceptions like the mental & physical benefits of meditation. One might say my experience was a little like Susan Blackmore’s career as recounted in “The Elusive Open Mind: Ten Years of Negative Research in Parapsychology”, sans the detailed experiments. (I am still annoyed that I was unable to disbelieve the research on Transcendental Meditation until I read more about the corruption, deception, and falsified predictions of the TM organization itself.) Fortunately, I had basically given up on occult things by high school, before I read Eco’s Foucault’s Pendulum, so I don’t feel too chagrined about this.

Fiction

I spend most of my time reading; I also spent most of my time in elementary, middle, and high school reading. What has changed is what I read - I now read principally nonfiction (philosophy, economics, random sciences, etc.), where I used to read almost exclusively fiction. (I would include one nonfiction book in my stacks of books to check out, on a sort of ‘vegetables’ approach: eat your vegetables and you can have dessert.) I aspired, in fact, to be a novelist. I thought fiction was a noble task, the highest production of humanity, and writers some of the best people around, producing immortal works of truth. Slowly this changed. I realized fiction changed nothing, and when it did change things, it was as often as not for the worse. Fiction promoted simplification and a focus on sympathetic examples, and I recognized how much of my own infatuation with the Occult (among other errors) could be traced to fiction. What a strange belief, that you could find truths in lies.19 And there are so many of them, too! So very many. (I wrote one essay on this topic, .) I still produce some fiction these days, but mostly when I can’t help it or as a writing exercise.

Nicotine

I changed my mind about nicotine in 2011. I had naturally assumed, in line with the usual American cultural messages, that there was nothing good about tobacco and that smoking is deeply shameful, proving that you are a selfish, lazy, short-sighted person who is happy to commit slow suicide (taking others with him via second-hand smoke) and cost society a fortune in medical care. Then some mentions of nicotine as useful came up and I began researching it. I’m still not a fan of smoking, and I regard any tobacco with deep trepidation, but the research literature seems pretty clear: nicotine enhances mental performance in multiple domains and may have some minor health benefits to boot. Nicotine sans tobacco seems like a clear win. (It amuses me that of the changes listed here, this is probably the one people will find most revolting and bizarre.)

Centralized darknet-markets

I overestimated the stability of Bitcoin+Tor darknet markets such as Silk Road: I was aware that the centralization of the first-generation DNMs (SR/BMR/Atlantis/Sheep) meant that the site operators had a strong temptation to steal all deposits & escrows, but I thought that the value of future escrow commissions provided enough incentive to make rip-and-run scams rare - certainly they were fairly rare during the Silk Road 1 era.

After Silk Road was shut down in October 2013, SR turned out to have been highly unusual: it was less hacked than most markets, and it seems that, whatever his (many) other failings, Ross Ulbricht genuinely believed his own ideology and so was running Silk Road out of principle rather than greed (which also explains why he didn’t retire despite a fortune larger than he could spend in a lifetime). Attracted by the sudden void in a large market, and by the FBI’s press releases crowing over how many hundreds of millions of dollars Silk Road had earned, dozens of new markets sprang up to fill it. Many then proceeded to scam users (often taking advantage of the standard ‘seller bonds’: sellers would deposit a large sum as a guarantee against scamming buyers in the early period when they were accepting orders but most packages would not yet have arrived), or alternately were hacked due to the operators’ get-rich-quick incompetence and then, rather than refund users from future profits, chose to steal everything the hacker didn’t get. As of April 2014, it seems users have mostly learned caution, and the shift to multisig escrow removes the need to trust market operators and hence the risk from the operators or hackers, so matters may finally be stabilizing.

I think my original point is still correct that markets can be trusted as long as the discounted present value of their future earnings exceeds the amount they can steal. My mistake here was overestimating the net present value: I didn’t realize that site operators had such high discount rates (one, PBF, pulled its scam after perhaps a few thousand dollars’ worth of Bitcoin had been deposited despite positive initial reviews) and there was so much risk involved (the Bitcoin exchange rate, arrest, hacking; all exacerbated by the incompetence of many site operators).
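
To make that trust condition concrete, here is a minimal sketch in Python (all the dollar figures, survival probabilities, and discount rates are invented for illustration, not estimates of any real market): an operator is only incentivized to keep running while the discounted, survival-weighted value of future commissions exceeds the escrow balance they could steal today, so a high discount rate or a high monthly risk of arrest/hacking makes the exit scam the ‘rational’ choice.

    # Toy model of when a market operator is tempted to exit-scam.
    # All numbers below are illustrative assumptions, not estimates of any real market.

    def npv_future_commissions(monthly_commission, monthly_survival, monthly_discount,
                               horizon_months=120):
        """Discounted present value of future commissions, where `monthly_survival`
        is the assumed chance the market survives another month (arrest, hacks,
        exchange-rate collapse...) and `monthly_discount` is the operator's
        time discount rate."""
        npv = 0.0
        p_alive = 1.0
        for month in range(1, horizon_months + 1):
            p_alive *= monthly_survival
            npv += monthly_commission * p_alive / (1 + monthly_discount) ** month
        return npv

    def rational_to_keep_running(escrow_balance, **kwargs):
        """Keep running only if future earnings beat stealing the escrow now."""
        return npv_future_commissions(**kwargs) > escrow_balance

    # A patient, secure operator: $50k/month commissions, 98% monthly survival, 1% discount.
    print(npv_future_commissions(50_000, 0.98, 0.01))   # ~1.59 million
    print(rational_to_keep_running(500_000, monthly_commission=50_000,
                                   monthly_survival=0.98, monthly_discount=0.01))  # True

    # A fly-by-night operator facing high risk and impatience: 85% survival, 10% discount.
    print(npv_future_commissions(50_000, 0.85, 0.10))   # ~0.17 million
    print(rational_to_keep_running(500_000, monthly_commission=50_000,
                                   monthly_survival=0.85, monthly_discount=0.10))  # False: the scam pays
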

This mistake led to complacency on my part in archiving the markets & forums: if you expect a market to be around for years, there is no particular need to try to mirror it weekly. And so while I have good coverage of the DNMs post-December-2013, I am missing most of the markets before then.

Potential changes

The mind cannot foresee its own advance.20

There are some things I used to be certain about, but I am no longer certain either way; I await future developments which may tip me one way or the other.

Near Singularity

I am no longer certain that the Singularity is near.

In the 1990s, all the numbers seemed to be ever-accelerating. Indeed, I could feel with Kurzweil that the Singularity was near. But an odd thing happened in the 2000s (a dreary decade, distracted by the dual dissipation of Afghanistan & Iraq). The hardware kept getting better mostly in line with Moore’s Law (troubling as the flight to parallelism is), but the AI software didn’t seem to keep up. I am only a layman, but it looks as if all the AI applications one might cite in 2011 as progress are just old algorithms made practical by newer hardware. And economic growth slowed down, and the stock market ticked along, barely maintaining itself. The Human Genome Project completely fizzled out, yielding interesting insights and not much else. (It’s great that genome sequencing has improved exactly as promised, but what about everything else? Where are our embryo selections, our germ-line engineering, our universal genetic therapies, our customized drugs?21) The pharmaceutical industry has reached such diminishing returns that even the optimists have noticed the problems in the drug pipeline, problems so severe that it’s hard to wave them away as due to that dratted FDA or ignorant consumers. As of 2007, the increases in longevity for the elderly22 in the US have continued to shrink each year, which isn’t good news for those hoping to reach “escape velocity”; and medicine has been a repeated disappointment even to forecasting-savvy predictors (the ’90s and the genetic revolution being especially remarkable for their lack of concrete improvements). Kurzweil published an evaluation of his predictions up to ~2009 with great fanfare and self-congratulation, but reading through them, I was struck by how many he weaseled out on (claiming as a hit anything that existed in a lab or a microscopic market segment, even though in context he had clearly expected it to be widespread) and how often they failed due to unintelligent software.

And there are many troubling long-term metrics. I was deeply troubled to read an analysis pointing out a long-term decline in discoveries per capita (despite ever-increasing scientists and artists per capita!), even after its author corrected for everything he could think of. I didn’t see any obvious mistakes. twisted the knife further, and then I read . I have kept notes since and see little reason to expect a general exponential upswing over all fields, including the ones minimally connected to computing. (Peter Thiel’s “The End of the Future” makes a distinction between “the progress in computers and the failure in energy”; he also makes an interesting link between the lack of progress and the many recent speculative bubbles in “The Optimistic Thought Experiment”.) The Singularity is still more likely than not, but these days, I tend to look towards emulation of human brains via scanning as the cause. Whole brain emulation is not likely for many decades, given the extreme computational demands (even if we are optimistic and take the Whole Brain Emulation Roadmap figures, one would not expect an upload until the 2030s), and it’s not clear how useful an upload would be in the first place. It seems entirely possible that the mind will run slowly, be able to self-modify only in trivial ways, and in general be a curiosity more akin to the Space Shuttle than a pivotal moment in human history deserving of the title Singularity.
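
A rough way to see why even optimistic assumptions push whole brain emulation decades out: take compute requirements of roughly the magnitudes the Whole Brain Emulation Roadmap discusses (the specific figures below are rounded placeholders chosen for illustration, not quotations from it, as are the assumed starting point and doubling time) and count the doublings needed.

    # Back-of-the-envelope: years until enough compute for whole brain emulation,
    # under assumed (illustrative, not authoritative) requirement and growth figures.
    import math

    def years_until(required_flops, current_flops=1e15, doubling_time_years=1.8):
        """Years until exponential growth closes the gap: solve current * 2^(t/d) = required."""
        return math.log2(required_flops / current_flops) * doubling_time_years

    for level, flops in [("spiking neural network level (~1e18 FLOPS, assumed)", 1e18),
                         ("electrophysiological detail (~1e22 FLOPS, assumed)", 1e22),
                         ("molecular-level detail (~1e25 FLOPS, assumed)", 1e25)]:
        print(f"{level}: ~{years_until(flops):.0f} years")
    # spiking: ~18 years; electrophysiology: ~42 years; molecular: ~60 years

Even the most forgiving row lands in the 2030s from a 2011 starting point, and any slowdown in the doubling time pushes it further out.
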

Counter-point

LessWrong discussion

I respect my own opinion, but at the same time I know I am not immune to common beliefs; so it bothers me to see ‘stagnation’ and pessimistic ideas become more widespread, because this means I may just be following a trend. I did not like agreeing with any of Wired’s hyperbolic forecasts back in the 1990s, and I do not like agreeing with Peter Thiel or Neal Stephenson now. One of Buffett’s classic sayings is “if they [investors] insist on trying to time their participation in equities, they should try to be fearful when others are greedy and greedy when others are fearful.” What grounds do I have for being ‘greedy’ now, when many are being ‘fearful’? What Kahneman-style pre-mortem would I give for explaining why the Singularity might indeed be Near?

First, one could point out that a number of technological milestones seem to be catching up, after long stagnations. From 2009-2012, there were a number of unexpected achievements: Google’s robotic car astounded me; the long AI-resistant game of Go is falling as Monte Carlo tree search techniques close in on the Go masters (I expect computers to beat the world champion by 2030); online education seems to be starting to realize its promise (eg. the success of ); private space exploitation is doing surprisingly well (as are Tesla electric cars, which seem to be moving from playthings to perhaps mass-market cars); smartphones - after a decade of being crippled by telecoms and limited computing power - are becoming ubiquitous and serving as desktop replacements. Old dreams like and online have swung into action and given rise to active & growing communities. In the larger picture, the Long Depression beginning in 2008 has wreaked havoc on young people, but China has not imploded while continuing to move up the quality chain (replacing laborers with robots), and far from this being ‘the crisis of capitalism’ or ‘the end of capitalism’, in general, global life is going on. Even Africa, while its population is exploding, is growing economically - perhaps thanks to universally available cheap cellphones. Peak Oil continues to be delayed by new developments like fracking and resultant gluts of natural gas (the US now exports energy!), although the long-term scientific-productivity trends seem to still point downwards.

What many of these points have in common is that their forebears germinated for a long time in niches and did not live up to the forecasts of their proponents - smartphones, for example, have been expected to revolutionize everything since at least the 1980s (by members of the in particular). And indeed, they are revolutionizing everything, worldwide, 30 years later. This exemplifies a line attributed to Roy Amara:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

A number of current but disappointing trends may be disappointing only in the “short run”, on the flat part of their respective exponential or sigmoid development curves23. For example, the cost of DNA sequencing has been plummeting and sequencing a whole human genome will likely be <$100 by 2015; this has been an incredible boon for basic research and our knowledge of the world, but so far the applications have been fairly minimal - but this may not be true forever, with new projects starting up to tackle topics of the greatest magnitude, like using thousands of genomes to search for the thousands of alleles which each affect intelligence a tiny bit. Were embryo selection for intelligence to become viable (as there is no reason to believe it would not be possible once the right alleles have been identified) and every baby could be born with an IQ >130, society would change.
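
The ‘flat part of the curve’ point can be illustrated with a toy logistic (sigmoid) curve - all parameters below are arbitrary: for a long initial stretch the absolute level is negligible and growth looks like stagnation, and then most of the total change arrives in a comparatively short middle stretch, which is one way to read Amara’s line above.

    # A logistic curve looks flat and disappointing for a long time, then most of the
    # change happens in a short window. Parameters are arbitrary illustrations.
    import math

    def logistic(t, midpoint=30, steepness=0.25):
        """Fraction of eventual adoption/capability reached at time t (years)."""
        return 1 / (1 + math.exp(-steepness * (t - midpoint)))

    for year in range(0, 61, 10):
        print(f"year {year:2d}: {logistic(year):5.1%}")
    # year  0:  0.1%   <- decades of apparent 'nothing happening'
    # year 20:  7.6%
    # year 30: 50.0%   <- then roughly half the total change in 10-20 years
    # year 40: 92.4%
    # year 60: 99.9%
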

Does this apply to AI? At least two of the examples seem clear-cut instances of an ‘Amara effect’: Go-playing AIs were for decades toys easily beaten by bad amateurs until Monte Carlo Tree Search was introduced in 200624, and then a decade after that they were superhuman; while the first DARPA Grand Challenge in 2004 was a debacle in which no car finished and the best car managed to travel a whopping 7 miles before getting stuck on a rock. 8 years later, the conversation has suddenly shifted from “will Go AIs ever reach human level?” or “will self-driving cars ever be able to cope with the real world?” to the simple question: when?

One of the ironies is that I am sure a ‘pure’ AI is possible; but the AI can’t be developed before the computing power is available (we humans are just not good enough at math & programming to achieve it without running code), which means the AI will be developed either simultaneously with, or after, enough computing power becoming available. If the latter - if the AI is not run at the exact instant that there is enough processing power available - ever more computing power in excess of what is needed (by definition) builds up. It is like a dry forest roasting in the summer sun: the longer the wait until the match, the faster and hotter the wildfire will burn25. Perhaps paradoxically, the longer I live without seeing an AI of any kind, the more schizophrenic my forecasts will appear to an outsider who hasn’t carefully thought about the issue - I will predict with increasingly high confidence that the future will be boring and normal (because the continued non-appearance makes it increasingly likely AI is impossible, see & AI), that AI is more likely to be created in the next year (because the possibilities are being exhausted as time passes), and that the changes I predict will be ever more radical!
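
Here is a minimal sketch of that seemingly paradoxical pair of updates, under toy assumptions (a prior of 0.8 that AI is possible at all and, if possible, an arrival time uniform over the next 100 years - both numbers invented purely for illustration): as the years pass with no AI, the posterior probability that AI ever arrives falls, while the probability that it arrives in the very next year, given that it hasn’t yet, rises.

    # Toy model of the 'schizophrenic' forecast: assume AI is possible with prior
    # probability p, and if possible, arrives at a time uniform over the next T years.
    # Both numbers are arbitrary illustrations.

    def update(years_elapsed, p_possible=0.8, horizon=100):
        """Posterior beliefs after `years_elapsed` years with no AI appearing."""
        p_not_yet_if_possible = max(0.0, 1 - years_elapsed / horizon)
        p_no_ai_so_far = p_possible * p_not_yet_if_possible + (1 - p_possible)
        p_possible_post = p_possible * p_not_yet_if_possible / p_no_ai_so_far
        # chance AI arrives in the *next* year, given it hasn't arrived yet:
        p_next_year = (p_possible * (1 / horizon)) / p_no_ai_so_far
        return p_possible_post, p_next_year

    for t in (0, 25, 50, 75, 90):
        possible, next_year = update(t)
        print(f"after {t:2d} years: P(AI ever) = {possible:.2f}, P(AI next year) = {next_year:.3f}")
    # after  0 years: P(AI ever) = 0.80, P(AI next year) = 0.008
    # after 50 years: P(AI ever) = 0.67, P(AI next year) = 0.013
    # after 90 years: P(AI ever) = 0.29, P(AI next year) = 0.029

And since the computing-power overhang also grows with the wait, the late-arrival scenario is also the more radical one.
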

This set of estimates is obviously consistent with an appearance of stagnation: each year small advances build up, but no big breakthroughs appear - until they do.

Neo-Luddism

“Almost in the same way as earlier physicists are said to have found suddenly that they had too little mathematical understanding to be able to master physics; we may say that young people today are suddenly in the position that ordinary common sense no longer suffices to meet the strange demands life makes. Everything has become so intricate that for its mastery an exceptional degree of understanding is required. For it is not enough any longer to be able to play the game well; but the question is again and again: what sort of game is to be played now anyway?”

Wittgenstein’s Culture and Value, MS 118 20r: 27.8.1937

The idea of technological unemployment - permanent structural unemployment and a “jobless recovery” - used to be dismissed contemptuously as the “Luddite fallacy”. (There are models in which technology does produce permanent unemployment, and quite plausible ones too; see Autor et al 2003 and Autor & Hamilton26, and Krugman’s commentary pointing to recent data showing the ‘hollowing out’ and ‘deskilling’ predicted by the Autor model, which is also consistent with the long-term decline in teenage employment due to immigration. Martin Ford has some graphs explaining the complementation-substitution model; a toy contrast is sketched below.) But ever since the Internet bubble burst, it’s been looking more and more likely, with scads of evidence for it since the housing bubble, like the otherwise peculiar changes in the value of college degrees27. (This is closely related to my grounds for believing in a distant Singularity.) When I look around, it seems to me that we have been suffering tremendous unemployment for a long time. When Alex Tabarrok writes “If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries”, I think: isn’t that correct? If you’re not a student, you’re retired; if you’re not retired, you’re disabled28; if you’re not disabled, perhaps you are institutionalized; if you’re not that, maybe you’re on welfare, or just unemployed.
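
The complementation-substitution contrast mentioned above can be sketched with a toy model (a stylized illustration, not the actual Autor or Ford models; the production function and all numbers are arbitrary): when machines complement labor, adding machines raises the marginal product - and hence the competitive wage - of workers, but when a machine can substitute for a worker outright, the machine’s cost becomes a ceiling on what any such worker can earn.

    # Toy contrast between machines as complements vs. substitutes for labor.
    # A stylized illustration, not the actual Autor/Ford models; numbers are arbitrary.

    def wage_complements(machines, labor=100.0, alpha=0.5):
        """Cobb-Douglas Y = K^alpha * L^(1-alpha): more machines raise the marginal
        product of labor, i.e. the competitive wage."""
        return (1 - alpha) * machines ** alpha * labor ** (-alpha)

    def wage_substitutes(machine_cost_per_worker_equivalent):
        """If a machine can do the whole job, no one is paid more than the machine's
        cost; as that cost falls, so does the ceiling on human wages."""
        return machine_cost_per_worker_equivalent

    for k in (100, 400, 1600):
        print(f"machines={k:5d}: complement wage = {wage_complements(k):.2f}")
    # machines=  100: complement wage = 0.50
    # machines=  400: complement wage = 1.00
    # machines= 1600: complement wage = 2.00

    for cost in (50, 10, 1):
        print(f"machine cost {cost:3d}: substitute wage ceiling = {wage_substitutes(cost):.2f}")
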

Compare now to most of human history, or just the 1300s:

  • every kid in special ed would be out working on the farm; there would - if only because fewer of the disabled would survive29 - be fewer disabled than now (federal disability benefits alone support 8 million Americans)

  • everyone in college would be out working (because the number of students was a rounding error and they didn’t spend very long in higher education to begin with)

    Indeed, education and healthcare are a huge chunk of the US economy - and both have serious questions about how much good, exactly, they do and whether they are grotesquely inefficient or just inefficient.

  • retirees didn’t exist outside the tiny nobility

  • ‘guard labor’ - people employed solely to control and ensure the productivity of others - has increased substantially (Bowles & Jayadev 2006 claim US guard labor has gone from 6% of the 1890 labor force to 26% in 2002; this is not due to manufacturing declines30); examples of guard labor:

    • standing militaries were unusual (although effective when needed31); the US maintains one of the largest and by far the most expensive militaries in the world - ~1.5m personnel (~0.5% of the population) - which employs millions more with its $700 billion budget32 and is a key source of pork33 and make-work
    • prisons were mostly for temporary incarceration pending trial or punishment34; the US has ~2.3m prisoners (nearly 1% of the population!), and perhaps another 4.9m on parole/probation. (See also the relationship of psychiatric imprisonment with criminal imprisonment.) That’s impressive enough, but as with the military, consider how many people are tied down solely because of the need to maintain and supply the prison system - prison wardens, builders, police, etc.
  • people worked hard; the 8-hour day and 5-day workweek were major hard-fought changes (a plank of the !). Switching from a 16-hour to an 8-hour day means we are half-retired already and need many more workers than otherwise.

In contrast, Americans now spend most of their lives not working.

The unemployment rate looks good - 9% is surely a refutation of the Luddite fallacy! - until you look into the meat factory and see that that is the best rate, for college graduates actively looking for jobs, and not the overall-population rate including those who have given up. Economist Alan Krueger writes of the employment-to-population ratio (which covers only 15-64 year olds):

Tellingly, the employment-to-population ratio has hardly budged since reaching a low of 58.2% in December 2009. Last month it stood at just 58.4%. Even in the expansion from 2002 to 2007 the share of the population employed never reached the peak of 64.7% it attained before the March-November 2001 recession.

Let’s break it down by age group:

Labor force participation rate 1993-2013, by age groups: 25-54yo, 20-24yo, >55yo

It has been correctly pointed out that the employment:population ratio itself doesn’t intrinsically tell us whether things are going well or poorly - one could imagine a happy and highly automated country with a basic income where only 20% of the population works, or an agricultural country where everyone works and is desperately poor. What matters more is wealth inequality combined with the employment ratio: how many people are either rich enough that not having a job is not a disaster, or at least can get a job?

What do you suppose the employment-to-population rate was in 1300 in the poorer 99% of the world population (remembering that homemaking and raising children is effectively a full-time job)? I’d bet it was a lot higher than the world record in 2005, Iceland’s 84%. And Iceland is a very brainy place. What are the merely average, with IQs of 100-110, supposed to do? (Heck, what is the half of America with IQs in that region or below supposed to do? Learn C++ and statistics so they can work on Wall Street?) If you want to see the future, look at our youth; where are the summer jobs these days? Gregory Clark comments sardonically (although he was likely not thinking of whole brain emulation) in A Farewell to Alms:

Thus, while in preindustrial agrarian societies half or more of the national income typically went to the owners of land and capital, in modern industrialized societies their share is normally less than a quarter. Technological advance might have been expected to dramatically reduce unskilled wages. After all, there was a class of workers in the preindustrial economy who, offering only brute strength, were quickly swept aside by machinery. By 1914 most horses had disappeared from the British economy, swept aside by steam and internal combustion engines, even though a million had been at work in the early nineteenth century. When their value in production fell below their maintenance costs they were condemned to the knacker’s yard.

Technology may increase total wealth under many models, but there’s a key loophole in the idea of “Pareto-improving” gains - they don’t ever have to make some people better off. And a Pareto-improvement is a good result! Many models don’t guarantee even that - it’s perfectly possible to become worse off (see the horses above and the fate of humans in the ‘crack of a future dawn’ scenario). Such doctrinairism is not useful:

“Like experts in many fields who give policy advice, the authors show a preference for first-best, textbook approaches to the problems in their field, while leaving other messy objectives acknowledged but assigned to others. In this way, they are much like those public finance economists who oppose tax expenditures on principle, because they prefer direct expenditure programs, but do not really analyze the various difficulties with such programs; or like trade economists who know that the losers from trade surges need to be protected but regard this as not a problem for trade policy.” –, “Comments on ‘The Contradiction in China’s Gradualist Banking Reforms’”, Brookings Papers on Economic Activity 2006, 2, 149-162

This is closely related to what I’ve dubbed the “‘Luddite fallacy’ fallacy” (along the lines of the Pascal’s Wager Fallacy Fallacy): technologists who are extremely intelligent and have worked most of their lives only with people like themselves confidently say that “if there is structural unemployment (and I’m being generous in granting you Luddites even this contention), well, better education and training will fix that!” It’s a little hard to appreciate what a stupendous mixture of availability bias, infinite optimism, and plain denial of intelligence differences this all is. Marc Andreessen offers an example in 2011:

Secondly, many people in the U.S. and around the world lack the education and skills required to participate in the great new companies coming out of the software revolution. This is a tragedy since every company I work with is absolutely starved for talent. Qualified software engineers, managers, marketers and salespeople in Silicon Valley can rack up dozens of high-paying, high-upside job offers any time they want, while national unemployment and underemployment is sky high. This problem is even worse than it looks because many workers in existing industries will be stranded on the wrong side of software-based disruption and may never be able to work in their fields again. There’s no way through this problem other than education, and we have a long way to go.

I see. So all we have to do with all the people with <120 IQs, who struggled with algebra and never made it to calculus (when they had the self-discipline to learn it at all), is just to train them into world-class software engineers and managers who can satisfy Silicon Valley standards; and we have to do this for the first time in human history. Gosh, is that all? Why didn’t you say so before - we’ll get on that right away! Or an anonymous “data scientist” recorded in the NYT: “He found my concerns to be amusing. People can get work creating SEO-optimized niche blogs, he said. Or they can learn to code.” Thomas Friedman:

Every middle-class job today is being pulled up, out or down faster than ever. That is, it either requires more skill or can be done by more people around the world or is being buried - made obsolete - faster than ever. Which is why the goal of education today, argues Wagner, should not be to make every child “college ready” but “innovation ready” - ready to add value to whatever they do…more than ever, our kids will have to “invent” a job. (Fortunately, in today’s world, that’s easier and cheaper than ever before.) Sure, the lucky ones will find their first job, but, given the pace of change today, even they will have to reinvent, re-engineer and reimagine that job much more often than their parents if they want to advance in it… What does that mean for teachers and principals? [Tony Wagner:] “All students should have digital portfolios to show evidence of mastery of skills like critical thinking and communication, which they build up right through K-12 and post-secondary. Selective use of high-quality tests, like the College and Work Readiness Assessment, is important. Finally, teachers should be judged on evidence of improvement in students’ work through the year - instead of a score on a bubble test in May. We need lab schools where students earn a high school diploma by completing a series of skill-based ‘merit badges’ in things like entrepreneurship. And schools of education where all new teachers have ‘residencies’ with master teachers and performance standards - not content standards - must become the new normal throughout the system.”

These sentiments or goals are so breathtakingly delusional (have these people ever met the average American? or tried to recall their middle school algebra? or thought about how many of their classmates actually learned anything?) that I find myself wondering (despite my personal injunctions against resorting to ad hominems) that “surely no one could believe such impossible things, either before or after breakfast; surely an award-winning New York Times columnist or a famous Harvard educational theorist, surely these people cannot seriously believe the claims they are supposedly making, and there is some more reasonable explanation - like they have been bribed by special interests, or are expounding propaganda designed to safeguard their lucrative profits from populist redistribution, or are pulling a prank in very bad taste, or (like President Reagan) are tragically in the grips of a debilitating brain disease?” But the sentiments are so consistent and people who’ve met proponents of the training panacea say they are genuine about it (eg Scott Alexander thought the retraining people were just Internet strawmen until he met them), that it must be what they think.

But moving on past Andreessen and Friedman: if it really is possible for people to rise to the demands of the New Economy, why is it not happening? For example (emphasis added):

As documented in Turner (2004), Bound and Turner (2007, 2011), and , while the number of students attending college has increased over the past three decades in the U.S., college graduation rates (i.e., the fraction of college enrollees that graduate) and college attainment rates (i.e., the fraction of the population with a college degree) have hardly changed since 1970 and the time it takes college students to complete a baccalaureate (BA) degree has increased (Bound, Lovenheim and Turner, 2010b). The disparities between the trends in college attendance and completion or time-to-completion of college degrees is all the more stark given that the earnings premium for a college degree relative to a high school degree nearly doubled over this same period (Goldin and Katz, 2008).

  • Bound, John and Sarah Turner (2011). “Dropouts and Diplomas: The Divergence in Collegiate Outcomes.” in Handbook of the Economics of Education, Vol. 4, E. Hanushek, S. Machin and L. Woessmann (eds.) Elsevier B.V., 573-613
  • Goldin, Claudia and Lawrence Katz (2008). The Race between Education and Technology. Cambridge: Harvard University Press

Or “Study of Men’s Falling Income Cites Single Parents”:

The fall of men in the workplace is widely regarded by economists as one of the nation’s most important and puzzling trends. While men, on average, still earn more than women, the gap between them has narrowed considerably, particularly among more recent entrants to the labor force. For all Americans, it has become much harder to make a living without a college degree, for intertwined reasons including foreign competition, advancements in technology and the decline of unions. Over the same period, the earnings of college graduates have increased. Women have responded exactly as economists would have predicted, by going to college in record numbers. Men, mysteriously, have not. Among people who were 35 years old in 2010, for example, women were 17% more likely to have attended college, and 23% more likely to hold an undergraduate degree. “I think the greatest, most astonishing fact that I am aware of in social science right now is that women have been able to hear the labor market screaming out ‘You need more education’ and have been able to respond to that, and men have not,” said Michael Greenstone, an M.I.T. economics professor who was not involved in Professor Autor’s work. “And it’s very, very scary for economists because people should be responding to price signals. And men are not. It’s a fact in need of an explanation.”

It’s always a little strange to read an economist remark that potential returns to education have been rising and so more people should get an education, while somehow not realizing that the continued presence of this free lunch indicates it is not free at all. Look at how the trend of increasing education has stalled out:

“Education attainment climbed dramatically in the 20th century, but its growth has flattened recently (source: Census)”

Apparently markets work and people respond to incentives - except when it comes to education, where people simply aren’t picking up those $100 bills lying on the ground and have not been picking them up for decades, for some reason35, as the share of income accruing to ‘labor’ falls both in the USA and worldwide. I see. (In England, there’s evidence that college graduates were still being successfully absorbed in the ’90s and earlier, although apparently there were relatively few during those periods36.) What do bad students know that good economists don’t?

David Ricardo (inventor of the comparative-advantage concept much cited by optimists and anti-neo-Luddites) changed his mind about whether technological unemployment was possible, but he thought it was possible only under certain conditions; Sachs & Kotlikoff 2013 gives a multi-generational model of suffering. Most economists, though, continue to dismiss this line of thought, saying that technological changes and structural unemployment are real but that things will work themselves out somehow. Robin Hanson, for example, seems to think so, and he’s a far better economist than me and has thought a great deal about AI and the economic implications. Their opposition to Neo-Luddism is about the only reason I remain uncertain, because otherwise the data for the economic troubles starting in 2007, and especially the unemployment data, seem to match nicely. From a Federal Reserve brief (principally arguing that the data is better matched by a model in which the longer a worker remains unemployed, the longer they are likely to remain unemployed):

For most of the post-World War II era, unemployment has been a relatively short-lived experience for the average worker. Between 1960 and 2010, the average duration of unemployment was about 14 weeks. The duration always rose during recessions, but relatively quick upticks in hiring after recessions kept the long-term unemployment rate fairly low. Even during the two “jobless recoveries” that followed the 1990-91 and 2001 recessions, the peak shares of long-term unemployment were 21% and 23%, respectively. But the 2007-09 recession represents a marked departure from previous experience: the average duration has increased to 40 weeks, and the share of long-term unemployment remains high more than two years after the official end of the recession.37 Never before in the postwar period have the unemployed been unemployed for so long.

The Economist asked in 2011:

But here is the question: if the pace of technological progress is accelerating faster than ever, as all the evidence indicates it is, why has unemployment remained so stubbornly high - despite the rebound in business profits to record levels? Two-and-a-half years after the Great Recession officially ended, unemployment has remained above 9% in America. That is only one percentage point better than the country’s joblessness three years ago at the depths of the recession. The modest 80,000 jobs added to the economy in October were not enough to keep up with population growth, let alone re-employ any of the 12.3m Americans made redundant between 2007 and 2009. Even if job creation were miraculously nearly to triple to the monthly average of 208,000 that it was in 2005, it would still take a dozen years to close the yawning employment gap caused by the recent recession, says Laura D’Andrea Tyson, an economist at the University of California, Berkeley, who was chairman of the Council of Economic Advisers during the Clinton administration.

The same article lays out the central argument for neo-Luddism, why “this time is different”:

Thanks to tractors, combine harvesters, crop-picking machines and other forms of mechanisation, agriculture now accounts for little more than 2% of the working population. Displaced agricultural workers then, though, could migrate from fields to factories and earn higher wages in the process. What is in store for the Dilberts of today? Media theorist Douglas Rushkoff (Program or Be Programmed and Life Inc) would argue “nothing in particular.” Put bluntly, few new white-collar jobs, as people know them, are going to be created to replace those now being lost - despite the hopes many place in technology, innovation and better education.

The argument against the Luddite Fallacy rests on two assumptions: one is that machines are tools used by workers to increase their productivity; the other is that the majority of workers are capable of becoming machine operators. What happens when these assumptions cease to apply - when machines are smart enough to become workers? In other words, when capital becomes labour. At that point, the Luddite Fallacy looks rather less fallacious…In his analysis [Lights in the Tunnel], Mr [Martin] Ford noted how technology and innovation improve productivity exponentially, while human consumption increases in a more linear fashion. In his view, Luddism was, indeed, a fallacy when productivity improvements were still on the relatively flat, or slowly rising, part of the exponential curve. But after two centuries of technological improvements, productivity has “turned the corner” and is now moving rapidly up the more vertical part of the exponential curve. One implication is that productivity gains are now outstripping consumption by a large margin.
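Ford’s “turned the corner” claim is, at bottom, a statement about exponential versus linear growth; a toy sketch (with made-up growth rates, not Ford’s numbers) of how the gap between the two eventually explodes:

```python
# Toy illustration of Ford's point: exponential productivity growth vs. roughly linear
# consumption growth. All numbers are invented; only the divergence of the curves matters.
prod_growth = 0.03   # assumed 3% compound productivity growth per year
cons_growth = 0.02   # assumed constant absolute increment to consumption per year

for year in (0, 50, 100, 150, 200):
    productivity = (1 + prod_growth) ** year
    consumption = 1 + cons_growth * year
    print(f"year {year:3d}: productivity index {productivity:8.1f}, consumption index {consumption:4.1f}")
```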

The American oddities began before the current recession:

Unemployment increased during the 2001 recession, but it subsequently fell almost to its previous low (from point A to B and then back to C). In contrast, job openings plummeted - much more sharply than unemployment rose - and then failed to recover. In previous recoveries, openings eventually outnumbered job seekers (where a rising blue line crosses a falling green line), but during the last recovery a labor shortage never emerged. The anemic recovery was followed in 2007 by an increase in unemployment to levels not seen since the early 1980s (the rise after point C). However, job openings fell only a little - and then recovered. The recession did not reduce hiring; it just dumped a lot more people into an already weak labor market.38

And then there is the well-known example of Japan. Yet overall, Japanese, American, and global wealth all continue to grow. The hopeful scenario is that all we are suffering is temporary pains, which will eventually be grown out of, as John Maynard Keynes forecast in his 1930 essay “Optimism in a Terrible Economy”:

At the same time technical improvements in manufacture and transport have been proceeding at a greater rate in the last ten years than ever before in history. In the United States factory output per head was 40 per cent greater in 1925 than in 1919. In Europe we are held back by temporary obstacles, but even so it is safe to say that technical efficiency is increasing by more than 1 per cent per annum compound…For the moment the very rapidity of these changes is hurting us and bringing difficult problems to solve. Those countries are suffering relatively which are not in the vanguard of progress. We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come - namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment. All this means in the long run that mankind is solving its economic problem. I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is to-day. There would be nothing surprising in this even in the light of our present knowledge. It would not be foolish to contemplate the possibility of a far greater progress still.
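Keynes’s “between four and eight times” figure implies surprisingly modest compound growth rates, which is worth seeing explicitly; a quick check of the arithmetic:

```python
# Compound annual growth rates implied by a 4-8x rise in living standards over a century:
# solve (1 + r)^100 = multiple for r.
for multiple in (4, 8):
    r = multiple ** (1 / 100) - 1
    print(f"{multiple}x over 100 years implies ~{r:.1%} growth per year")
# 4x -> ~1.4%/year; 8x -> ~2.1%/year
```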

Evaluation

Of course, as plausible as this all looks, that doesn’t mean much. Anyone can cherrypick a bunch of quotes and citations. When making predictions, there are a few heuristics or principles I try to apply, and it might be worth applying a few here.

The specification seems fairly clear: the Neo-Luddite claim, in its simplest form, predicts that ever fewer people will be able to find employment in undistorted free markets. We can see other aspects as either tangents (will people be able to consume due to a Basic Income or via capital ownership?) or subsets (the Autor thesis of polarization would naturally lead to an overall increase in unemployment). The due date is not clear, but we can see the Neo-Luddite thesis as closely linked to artificial intelligences, and 2050 would be as good a due date as any inasmuch as I expect to be alive then & AI will have matured substantially (if we date serious AI to 1960 then 2012 is a bit past halfway) & many predictions like Ray Kurzweil’s will have been verified or falsified.

The probability part of a prediction is the hard part. Going in order (the latter heuristics aren’t helpful):

  1. “What does the prediction about the future world imply about the present world?”

    What would we expect in a world in which the Neo-Luddite thesis were true?

    • First and foremost, we would expect both software & hardware to continue improving. Both are true: Moore’s law continues despite the breakdown in chip frequencies, and AI research forges on with things like deep neural networks being deployed at scale by companies such as Google. If we did not see improvement, that would be extremely damaging to the thesis. However, this is a pretty boring retrodiction to make: technology has improved for so many centuries now that it would be surprising if the improvements had suddenly stopped - and if they had, why would anyone be taking this thesis seriously? It’s not like anyone worries over the implications of a philosopher’s stone for forex.

    • More meaningfully: capital & labor increasingly cease to be complements and become substitutes. We would expect gradually rising disemployment as algorithms & software & hardware were refined and companies learned when employees could be replaced by technological substitutes, with occasional jumps as idiosyncratic breakthroughs were made for particular tasks. We would expect returns on capital to increase, and we would expect employees with un-substitutable skills or properties to capture increasing wealth. This seems sort of true: STEM-related salaries in particular fields seem to be steady, and tech companies continue to complain that good software engineers are hard to find (and that Congress should authorize ever more H-1B visas), with consequences such as skyrocketing San Francisco real estate as tech companies flock there to find the rare talent they require - the sort of “superstar effect” we would expect if human beings with certain properties were intrinsically rare & valuable and the remainder just so much useless dross that holds back a business, or worse. This is particularly striking when we note that it has never been cheaper or easier to become a software engineer: adequate computer hardware is dirt cheap, all necessary software is available for free online, and instructional materials likewise; and it’s unclear how barriers like certification could matter when programmers are producing objective products - either a website is awesome and works, or it doesn’t.

      On the other hand, I also read of booming poor economies like China or parts of Africa where wages are rising in general and unemployment seems to be less of a concern. This might fit the Autor model of polarization if we figure that those booming economies are pricing human labor so cheaply that it outcompetes software/robots/etc, in which case we would expect to see these countries hit a “wall” where only part of their populations can pass the ‘valley of death’ to reach the happy side of the polarized economy, while the rest of the population struggles to stay cheap enough to compete with the capital-alternatives. I’m not sure I see this. Yes, there are a lot of robotic factories being set up in China now, but does that really mean anything important on China’s scale? What’s a few million robots in a country of 1.3 billion people? If China does wind up falling into what looks like a middle-income trap, that would be consistent with the Autor model, I think, and strengthen this retrodiction.

    • As technology is mobile and can easily be sold or exported, we would expect to see this general trend in many wealthy Western countries. This is a serious weak point in my knowledge thus far: I simply don’t know what the situation really looks like in eg Japan or England or Germany. Are they seeing things similar to my factoids about the USA?

  2. “Base rates” here is essentially applying the Outside View

    The main problem here is that it’s very difficult to rebut the Outside View: the Luddite thesis has, it seems, failed many times in the past; why expect this time to be any different? The historical horse example is amusing, certainly, but there could be many factors separating horses from ordinary people. To this, I don’t have any good reply. Even if the thesis is “right” from the perspective of 1000 years from now, there is good reason to be chary of expecting it to happen in my lifetime. Computers themselves furnish a great many examples of people who, with vision and deep insight not shared by the people who ridiculed them as techno-utopians, correctly foresaw things like the personal computer or the Internet or online sales - and started their companies too early. The best I can say is that software/AI seems completely & qualitatively different from earlier technologies like railroads or assembly lines, in that it performs deeply human mental functions that earlier technologies did not come anywhere near: the regulator of a steam engine is solving a problem so much simpler than the one an autonomous car solves that it’s hard to see them as even theoretically related in exerting feedback control over a process. The dimmest human could productively use the technologies of earlier eras, whereas today we struggle to find subsidized jobs for the mentally handicapped in which they are not simply a net loss.

From these musings, I think we can extract a few warning signs which would indicate the Neo-Luddite thesis breaking down:

  • global economic growth stopping
  • AI research progress stopping
  • Moore’s law in terms of FLOPS/$ breaking down
  • decreased wealth inequality (eg. Gini) in the First World
  • increases in the share of the population working

Daniel Kahneman describes an interesting thinking technique (due to Gary Klein) called the “pre-mortem”, where you ask yourself: “Assume it’s the future, and my confident predictions have completely failed to come true. What went wrong?” Looking back, if the Neo-Luddite thesis fails, I think the most likely explanation for what I’ve seen in the USA would be something related to globalization & China in particular: the polarization, increased disemployment, increasing need for technical training, etc, all seem explainable by those jobs heading overseas, exacerbated by other factors such as domestic politics (Bush’s tax cuts on the rich?) and perhaps structural unemployment arising from existing workers having difficulty switching sectors or jobs while new workers are able to adapt. If this is so, then I think we would expect the trends to gradually ameliorate themselves: older workers will die off & retire, new workers will replace them, new niches and jobs will open up as the economy adapts, China’s exponential growth will mean catch-up is completed within 2 or 3 decades, and so on.

IQ

The genetics of group differences in IQ may be even more inflammatory than supporting nicotine, but it’s an important entry on any honest list. I never doubted that IQ was in part hereditary (Stephen Jay Gould aside, this is too obvious - what, everything from drug responses to skin and eye color would be heritable except the most important things, which would have a huge effect on reproductive fitness?), but all the experts seemed to say that, diluted over entire populations, any tendency would be non-existent. Well, OK, I could believe that; visible traits consistent over entire populations like skin color might differ systematically because of sexual selection or something, but why not leave IQ following the exact same bell curve in each population? There was no specific thing here that made me start to wonder, more a gradual undermining (Gould’s work like The Mismeasure of Man being completely dishonest is one example - with enemies like that…) as I continued to read studies and wonder why Asian model minorities did so well, and a lack of really convincing counter-evidence like one would expect the last two decades to have produced - given the politics involved - if the idea were false. And one can always ask oneself: suppose that intelligence was meaningful, and did have a large genetic component, and the likely genetic ranking East Asians > Caucasians > Africans; in what way would the world, or the last millennium (eg the growth of the Asian tigers vs Africa, or the different experiences of discriminated-against minorities in the USA), look different than it does now?

Mu

It’s worth noting that the IQ wars are a rabbit hole you can easily dive down. The literature is vast, spanning all sorts of groups and all sorts of designs, from test validities to sampling to statistical regression vs causal inference to forms of bias; every point is hotly debated, the ways in which studies can be validly critiqued are an education in how to read papers and look for where they are weak, make jumps, or where some of the data just looks wrong, and you’ll learn every technical requirement and premise and methodological limitation, because the opponents of that particular result will be sure to bring them up if it’ll at all help their case.

In this respect, it’s a lot like the feuds in biblical criticism over issues like the historicity of Jesus, or the long philosophical debate over the existence of God. There too is an incredible amount of material to cover, by some really smart people (what did geeks do before science and modernity? well, for the most part, they seem to have done theology; consider how much time and effort Isaac Newton reportedly spent on theology and alchemy, or the sheer brainpower that must’ve been spent over the centuries in rabbinical studies). You could learn a lot about the ancient world or the incredibly complex chain of transmission of the Bible’s constituents in their endless varieties and how they are put together into a single canonical modern text, or the countless other issues of biblical scholarship. An awful lot, indeed. One could, and people as smart or smarter than you have, lose one’s life in exploring little back-alleys and details.

If, like most people, you’ve only read a few papers or books on it, your opinion (whatever that is) is worthless and you probably don’t even realize how worthless your opinion is, how far you are from actually grasping the subtleties involved and having a command of all the studies and criticisms of said studies. I exempt myself from this only inasmuch as I have realized how little I still know after all my reading. No matter how tempting it is to think that you may be able to finally put together the compelling refutation of God’s existence or to demonstrate that Jesus’s divinity was a late addition to his gospel, you won’t make a dent in the debate. In other words, these can become forms of nerd sniping and intellectual crack. “If only I compile a few more studies, make a few more points—then my case will become clear and convincing, and people on the Internet will stop being wrong!”

But having said that, and admiring things like Plantinga’s free will defense, the subtle logical issues in formulating it, and the lack of any really concrete evidence for or against Jesus’s existence, do I take the basic question of God seriously? No. The theists’ rearguard attempts and ever more ingenious explanations and indirect pathways of reasoning and touted miracles fundamentally do not add up to a convincing whole. The universe does not look anything like one in which an omni-benevolent/powerful/scient god was involved, a great deal of determined effort has failed to provide any convincing proof, there not being a god is consistent with all the observed processes and animal kingdom and natural events and material world we see, and so on. The persistence of the debate reflects more the power of motivated cognition and the weakness of existing epistemology and debate than anything else. Unfortunately, this could be equally well said by someone on the other side of the debate, and in any case, I cannot communicate my gestalt impression of the field to anyone else. I don’t expect anyone to be the least bit swayed by what I’ve written here.

So why be interested in the topics at all? If you cannot convince anyone, if you cannot learn the field to a reasonable depth, and you cannot even communicate well what convinced you, why bother? In that spirit, I say: it’s not clear at all. So you should know in advance whether you want to take the red pill and see how far down the rabbit hole you go before you finally give up, or take the blue pill and remain an onlooker, settling for a high-level overview of the more interesting papers and issues and accepting that you will have only that and a general, indefensible assessment of the state of play.

My own belief is that, as interesting as it is, you should take the blue pill and not adopt any strong position, but perhaps (if it doesn’t take too much time) point out any particularly naive or egregious holes in arguments by people who are simply wrong or don’t realize how little they know or how slanted a view they have received from the material they’ve read. It’s sad to not reach agreement with other people, dangerous to ignore critics, tempting to engage trolls - but life is too short to keep treading the same ground.

The reason to take the blue pill on IQ is this: yes, Murray failed to organize a definitive genetic study. It hasn’t happened yet even though it’s more important than most of the trivialities that get studied in population genetics (like historical movements of random groups). I don’t need to explain why this would be the case even if people on the environmentalist side of the IQ wars were confident they were right. But the massive fall in genome sequencing costs (projected to be <$1000 by ~2014) means that large human datasets will be produced, and the genetics directly examined, eliminating entire areas of objections to the previous heredity studies. And at some point, some researcher will manage the study: some group inside or outside the USA will fund it, and a large enough genetic database will be cross-referenced against IQ tests and existing racial markers. We already see some of this in research: one large study (and its followup) found 3 SNPs simply by pooling existing databases of genetics data & correlating against schooling. I don’t know when the definitive paper will come out, whether it’ll be this year or by 2020, although I would be surprised if there was still nothing by 2030; but it will happen, and it will happen relatively soon (for a debate going on for the past century or more). Genome sequencing is simply going to be too cheap for it to not happen. By 2030 or 2040, I expect the issue will be definitively settled in the same way earlier debates about the validity of IQ tests were eventually settled (even if the public hasn’t yet gotten the word, the experts all concede that IQ tests are valid, reliable, not biased, and meaningful predictors of a wide variety of real-world variables).
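To make concrete what “pooling existing databases of genetics data & correlating against schooling” involves mechanically, here is a toy simulated sketch of a GWAS-style scan; the sample sizes, effect sizes, and data are all invented and are not the actual studies’ methods or results:

```python
# Toy GWAS-style sketch: test each SNP for association with a phenotype (years of
# schooling) in a pooled sample. Entirely simulated; real studies use hundreds of
# thousands of people and millions of SNPs, plus covariates and meta-analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_snps = 10_000, 1_000
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)  # 0/1/2 allele counts
true_effects = np.zeros(n_snps)
true_effects[:3] = 0.3                         # pretend 3 SNPs have small real effects
schooling = genotypes @ true_effects + rng.normal(12, 3, n_people)

p_values = np.array([stats.pearsonr(genotypes[:, j], schooling)[1] for j in range(n_snps)])
threshold = 0.05 / n_snps                      # Bonferroni correction for multiple testing
print("SNPs passing the corrected threshold:", np.flatnonzero(p_values < threshold))
```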

Value of Information

What is the direct value of learning about IQ? Speaking of it in terms of money may not be the best approach, so instead we can split the question up into a few different sub-questions (a rough expected-value sketch follows the list):

  1. how much do your efforts lead to additional information?

    In this case, not much. I would have to be very arrogant to think I can go through a large fraction of the literature and evaluate it better than the existing authorities like Nisbett or Flynn or Jensen. I have no advantages over them.

  2. would this information-gathering be expensive?

    Yes. A single paper can take an hour to read well, and a technical book weeks. There are hundreds of papers and dozens of books to learn. The mathematics and statistics are nontrivial, and sooner or later, one will have to learn them in order to evaluate the seriousness of criticisms for oneself. The time spent will not have been throw-away recreational time, either, like slumming on the couch watching TV, but will be one’s highest-quality time, which could have been spent learning other difficult material, working, meaningfully interacting with other people, and so on. Given the decline with age of fluid intelligence, one may be wasting a non-trivial fraction of one’s lifetime learning.

  3. will new information come in the absence of your efforts?

    Yes. My interest does not materially affect when the final genetic studies will be conducted.

  4. what decisions or beliefs would the additional information change?

    Suppose the environmentalists were 100% right and the between-race genetics were a negligibly small factor. Regardless, the topic of IQ and its correlates and what it predicts does not live and die based on there being a genetic factor to average IQ differences between groups; if the admixture and genetics studies turn in a solid estimate of 0, IQ will still predict lifetime income, still predict crime rates, still predict educational scores, and so on.

    In contrast, some of the other topics have very concrete immediate implications. Switching from occultism/theism to atheism implies many changed beliefs & choices; a near vs far Singularity has considerable consequences for retirement planning, if nothing else; while Neo-Luddism has implications for both career choice and retirement planning; attitudes towards fiction and nicotine also cash out in obvious ways. Of the topics here, perhaps only Communism and the American Revolution are as sterile in practical application.
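Here is the rough expected-value sketch promised above; every number is an invented placeholder, chosen only to show how the four sub-questions combine, not an estimate I would defend:

```python
# Minimal value-of-information sketch for the sub-questions above.
# Every number is an invented placeholder.
p_belief_changes  = 0.2    # (Q1) chance my own deep study would change my conclusion
value_if_changed  = 0.0    # (Q4) value of acting differently if it did: ~nothing changes
p_resolved_anyway = 0.9    # (Q3) chance the genetics studies settle it without my effort
hours_required    = 1000   # (Q2) rough cost of reading the literature seriously
value_per_hour    = 20     # assumed opportunity cost of high-quality time, in dollars

expected_benefit = p_belief_changes * value_if_changed * (1 - p_resolved_anyway)
cost = hours_required * value_per_hour
print(f"expected benefit: ${expected_benefit:,.0f}  vs.  cost: ${cost:,.0f}")
# With no decisions affected and resolution coming anyway, the VoI is ~$0 against a large cost.
```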

If genetic differences and inequality exist, they should be engineered away.

So, I try not to spend too much time thinking about this issue: the results will come in regardless of my opinion, and, unlike other issues here, it does not materially affect my worldview or suggest action. Given this, there’s no reason to invest your life in the topic! It has no practical ramifications for you, discussing the issue can only lead to negative consequences - and on the intellectual level, no matter how much you read, you’ll always have nagging doubts, so you won’t get any satisfaction. You might as well just wait patiently for the inevitable final answer.

See Also

Appendix

Miller on neo-Luddism

From chapter 13 of Singularity Rising, James Miller 2012:

“There’s this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don’t notice it. You’ve got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an AI scheduling system. Every time you use a piece of Microsoft software, you’ve got an AI system trying to figure out what you’re doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they’re all little AI characters behaving as a group. Every time you play a video game, you’re playing against an AI system.” –Rodney Brooks, Director, MIT Computer Science and AI Laboratory 288

…In the next few decades, all of my readers might have their market value decimated by intelligent machines. Should you be afraid? Fear of job-destroying technology is nothing new. During the eighteenth century, clothing manufacturers in England replaced some of their human laborers with machines. In response, a gang supposedly led by one Ned Ludd smashed a few machines owned by a sock maker. Ever since then, people opposing technology have been called Luddites. Luddites are correct in thinking that machines can cause workers to lose their jobs. But in the past, job-destroying machine production has overall greatly benefited workers. “Destroying jobs” sounds bad - like something that should harm an economy. But the benefits of job destruction become apparent when you realize that an economy’s most valuable resource is human brains. If a businessman figured out how to make a product using less energy or fewer materials, we would applaud him because the savings could be used to produce additional goods. The same holds true when we figure out how to make something using less labor. If you used to need 1,000 workers to run your sock factory but you can now produce the same number of socks by employing only 900 workers, then you probably would (and perhaps even should) fire the other 100. Although in the short run these workers will lack jobs, in the long run they will likely find new employment and expand the economy.

The obliteration of most agricultural jobs has been a huge source of economic growth for America. In 1900, farmers made up 38% of the American workforce, whereas now they constitute less than 2% of it.289 Most of the displaced agricultural laborers found work in cities. Yet despite the massive decrease in farming jobs, the United States has steadily produced more and more food since 1900. Agricultural technology gave the American people a “free lunch,” in which we got more food with less effort, making obesity a greater threat to American health than calorie deprivation. Technology raises wages by increasing worker productivity. In a free-market economy, the value of the goods an employee produces for his employer roughly determines his wage. A farmer with a tractor produces more food than one with just a hoe. Consequently, modern farmers earn higher wages than they would if they lived in a world deprived of modern agricultural technology. In rich nations, wages have risen steadily over the last two hundred years because technology keeps increasing worker productivity. But will this trend continue? Past technologies never completely eliminated the need for humans, so fired sock workers usually found other employment. But a sufficiently advanced AI possessing a robot body might outperform people at every single task.

…If a Kurzweilian merger doesn’t occur, sentient AIs might compete directly with people in the labor market. Let’s now explore what happens to human wages if these AIs become better than humans at every task. Adam Smith, the great eighteenth-century economist, explained that everyone benefits from trade if each participant makes what he is best at. So, for example, if I’m better at making boots than you are, but you have more skill at making candles, then we would both become richer if I produced your boots and you made my candles. But what if you’re more skilled at making both boots and candles? What if, compared to you, I’m worse at doing everything? Adam Smith never answered this question, but nineteenth-century economist David Ricardo did. This question is highly relevant to our future, as an AI might be able to produce every good and service at a lower cost than any human could, and if we turn out to have no economic value to the advanced artificial intelligences, then they might (at best) ignore us, depriving humanity of any benefits of their superhuman skills.

Most people intuitively believe that mutually beneficial trades take place only when each person has an area of absolute excellence. But Ricardo’s theory of comparative advantage shows that trade can make everyone better off regardless of a person’s absolute skill because everyone has an area of comparative advantage. I’ll illustrate Ricardo’s nineteenth-century theory with a twenty-first-century example involving donuts and anti-gravity flying cars. Let’s assume that humans can’t make flying cars, but an AI can; and although people can make donuts, an AI can make them much faster than we can. Let’s pretend that at least one AI likes donuts, where donuts represent anything a human can make that an AI would want. Here’s how a human and AI could both benefit from trade: a human could offer to give an AI many donuts in return for a flying car. The trade could clearly benefit the human. If it gets enough donuts, the AI also benefits from the trade. To see how this could work, imagine that (absent trade) it takes an AI one second to make a donut. The AI could build a flying car in one minute.

  • Time needed for an AI to make a donut: one second
  • Time needed for an AI to make a flying car: one minute

A human then offers the following deal to the AI: Build me a flying car and I will give you one hundred donuts. It will take you one minute to make me a flying car. In return for this flying car you get something that would cost you 100 seconds to make. Consequently, our trade saves you 40 seconds. As the AI’s powers grew, people could still gain from trading with it. If, say, it took the AI only one nanosecond to make a donut and 60 nanoseconds to make a flying car, then it would still become better off by trading 100 donuts for 1 flying car. 292 In general, as an AI becomes more intelligent, trading with humans will save it less time, but what the AI can do with this saved time goes up, especially since a smarter AI would probably gain the capacity to create entirely new categories of products. An AI might trade 100 donuts for a flying car, but an “AI+” would trade this number of donuts for a wormhole generator. Modern economists use Ricardo’s theory of comparative advantage to show how rich and poor countries can benefit by trading with each other. Understanding Ricardo’s theory causes almost all economists to favor free trade. If we substitute “humanity” for “poor countries” and AI for “rich countries,” then Ricardo gives us some hope for believing that even self-interested advanced artificial intelligences would want to take actions that bestow tremendous economic benefits on mankind.
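The donut arithmetic can be tabulated mechanically; a small sketch using the numbers from the passage (the helper function is just for illustration):

```python
# Gains from trade for the AI in the donut / flying-car example: it receives donuts
# that would have cost it donuts_offered * donut_time of its own time, and spends
# car_time building the car. Both times are in the same units (seconds or nanoseconds).
def ai_gain_from_trade(donut_time, car_time, donuts_offered=100):
    return donuts_offered * donut_time - car_time

print(ai_gain_from_trade(donut_time=1, car_time=60))        # 40 seconds saved
print(ai_gain_from_trade(donut_time=1e-9, car_time=60e-9))  # ~40 nanoseconds saved
```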

MAGIC WANDS

In the previous scenario, I implicitly assumed that producing donuts doesn’t require the use of some “factor of production.” A factor of production is an essential nonhuman element needed to create a good. Factors of production for donuts include land, machines, and raw materials, and without these factors, a person (no matter how smart and hardworking) can’t make donuts. Instead of using the intimidating and boring term “factor of production,” I’m going to say that to make a good or produce a service you need the right “magic production wand,” with the wand being the appropriate set of factors of production. For example, a donut maker needs a donut wand.

If a relatively small number of wands existed and no more could be created, then all of the wands would go to AIs. Let’s say donuts sell for $1 each and an AI could use a donut wand to produce one million donuts, whereas a human using the same wand could make only a thousand donuts. A human would never be willing to pay more than $1,000 for the wand, whereas an AI would earn a huge profit if it bought a donut wand for, say, $10,000. Even if a human initially owned a donut wand, he would soon sell it to an AI. Human wand owners in this situation would benefit from AIs because AIs would greatly raise the market value of wands. Human workers who had never had a wand would become impoverished because they couldn’t produce anything.

The Roman Republic’s conquests in the first century BC effectively stripped many Roman citizens of their production wands. In the early Republic, poor citizens had access to wands, as they were often hired to farm the land of the nobility. But after the Republic’s conquests brought in a huge number of slaves, the noblemen had their slaves use almost all of the available land wands. Cheap slave labor enriched the landowning nobility by reducing their production costs. But abundant slave labor impoverished non-landowning Romans by depriving them of wands. Cheap slave labor contributed to the fall of the Roman Republic. As Roman inequality increased, common soldiers came to rely on their generals for financial support. The troops put loyalty to their generals ahead of loyalty to the Roman state. Generals such as Sulla and Julius Caesar took advantage of their increased influence over their troops to propel themselves to absolute political power. Caesar sought to reduce the social instability caused by slaves by giving impoverished free Roman citizens new lands from the territories Rome had recently conquered. Caesar essentially created many new wands and gave them to his subjects.

Although AIs will use wands, they will also likely help create them. For example, using nanotechnology, they might be able to build dikes to reclaim land from the ocean. Or perhaps they’ll figure out how to terraform Mars, making Martian land cheap enough for nearly any human to afford. AIs could also figure out better ways to extract raw materials from the earth or invent new ways to use raw materials, resulting in each product needing fewer wands. The future of human wages might come down to a race between the number of AIs and the quantity of wands. Economist and former artificial-intelligence programmer Robin Hanson has created a highly counterintuitive theory of why (in the long run) AIs will destroy nearly all human jobs: they will end up using all of the production wands (“Economic Growth Given Machine Intelligence”).

…What I’ve written so far about the economics of emulations probably seems correct to most readers. After all, if we can make copies of extraordinarily bright and productive people and employ multiple copies of them in science and industry, then we should all get richer. The results would be similar to what would happen if a select few nursery schools became so fantastically good that each year they turned ten thousand toddlers into von Neumann-level geniuses who then immediately entered the workforce.

Robin Hanson, however, isn’t willing to rely on mere intuition when analyzing the economics of emulations. Robin realizes that if, after we have emulations, the price of computing power continues to fall at an exponential rate, then emulations will soon become extraordinarily cheap. If you combine extremely inexpensive emulations with a bit of economic theory, you get a seemingly crazy result, something that you might think is too absurd to ever happen. But Robin, ever the bullet-eater, refuses to turn away from his conclusion. Robin thinks that in the long run, emulations will drive wages down to almost zero, pushing most of the people who are unfortunate enough to rely on their wages into starvation - because emulations will kick us back into a “Malthusian trap.” Arguably, humanity’s greatest accomplishment was escaping the Malthusian trap. Thomas Malthus, a nineteenth-century economist, believed that starvation would ultimately strike every country in the entire world. Malthus wrote that if a population is not facing starvation, people in that population will have many children who grow up, get married, and have even more children. A country with an abundance of food, Malthus wrote, is one with an increasing population. Unfortunately, in Malthus’s time, as the size of a country’s population went up, it became more difficult to feed everyone in the country. Eventually, when the population got large enough, many starved. Only when lots of people were dying of starvation would the country’s population stabilize. Consequently, Malthus believed that all countries were trapped in one of two situations:

  1. Many people are starving.
  2. The population is growing, and so many will eventually starve.

…Pretend that someone emulates Robin and places the software in the public domain. Anyone can now freely copy e-Robin, although it still costs something to buy enough computing power to run him on - say, a hundred thousand dollars a year. A profit-maximizing business would employ an e-Robin if the e-Robin brought the business more than $100,000 a year in revenue. After Moore’s law pushes the annual hardware costs of an e-Robin down to a mere $1, then a company would hire e-Robins as long as each brought the business more than $1 per annum. What happens to the salary of bio-Robin if you can hire an e-Robin for only a dollar? David Ricardo implicitly knew the answer to that question. Ricardo wrote that if it costs 5,000 pounds to rent a machine, and this machine could do the work of 100 men, the total wages paid to 100 men will never be greater than 5,000 pounds because if the total wages were higher, manufacturers would fire the workers and rent the machine.296 Applying Ricardo’s theory to an economy with emulations tells us that, if an emulation can do whatever you can do, your wage will never be higher than what it costs to employ the emulation. The question now is whether, if it’s extremely cheap to run an e-Robin, these e-Robins would still earn high salaries and therefore allow the original Robin to bring home a decent paycheck. Unfortunately, the answer is no because if an e-Robin were earning much more than what it costs to run an e-Robin, then it would be profitable for businesses to create many more of them. Companies will keep making copies of their emulations until they no longer make a profit by producing the next copy. A general rule of economics is that the more you have of something, the smaller its value. For example, even though water is inherently much more useful than diamonds, the price of water is much lower because there is so much more water than diamonds. If anyone can freely copy e-Robin, then the free market would drive the wage of an e-Robin down to what it costs to run one.
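Ricardo’s machine-rental argument, applied to emulations, reduces to a ceiling condition on wages; a minimal sketch of the dynamic described above, using the passage’s hardware-cost figures plus one intermediate value for illustration:

```python
# Ricardo's wage ceiling applied to emulations: if an emulation can do everything a
# worker can, employers will never pay the worker more than it costs to run a copy,
# and free copying drives the emulation's own "wage" down to that running cost too.
def human_wage_ceiling(emulation_running_cost_per_year):
    return emulation_running_cost_per_year

# $100,000/yr and $1/yr come from the passage; $1,000/yr is an added intermediate point.
for cost in (100_000, 1_000, 1):
    print(f"e-Robin costs ${cost:>7,}/yr to run -> bio-Robin's wage capped at ${human_wage_ceiling(cost):,}/yr")
```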

…Even if the emulations push wages to almost zero, lots of bio-humans would be much richer than they would be in a world without emulations. Though ancient Rome was in a Malthusian trap, its landowning nobility was rich. When you have lots of people and little land, the land is extremely valuable because it’s cheap to hire people to work the land.