That intelligence (g) in healthy people is nearly impossible to improve is clear from the failure of psychology to provide any such method. But why intelligence would be so constant is not as clear: many other cognitive abilities are improvable (like working memory), so why not intelligence?
Arthur Jensen noted the failure of interventions in the 1960s, and the failure remains complete now, half a century later: if you are a bright healthy young man or woman gifted with an IQ in the 130s, there is nothing you can do to increase your underlying intelligence by a standard deviation. New methods like dual n-back or nootropics are trumpeted in the media, and years later are discovered to increase motivation & not intelligence, or to have been overstated, or work only in damaged or deficient subpopulations, or to be statistical/methodological artifacts, or to be tantamount to training on IQ tests themselves which destroys their meaning (like memorizing vocabulary), or to be so anomalous as to verge on fraudulent (like the Pygmalion effect). The only question worth asking is which of these explanations is the real explanation this time.
For IQ in particular, people discussing human-enhancement (especially transhumanists) have proposed a pessimistic observation & evolutionary explanation, dubbed the “Algernon principle” or “Algernon’s law” or my preference, the “Algernon Argument”.
The famous SF story “Flowers for Algernon” postulates surgery which triples the IQ score of the retarded protagonist - but which comes with devastating side-effects: the gain is temporary and the relapse sometimes fatal; fictional evidence aside, it is curious that despite the incredible progress mankind has made in countless areas like building cars or going to the moon or fighting cancer or eradicating smallpox or inventing computers or artificial intelligence, we lack any meaningful way to positively affect people’s intelligence beyond curing diseases & deficiencies. If we compare the smartest people in the world now, like Terence Tao, to the smartest people of more than half a century ago, like John von Neumann, there seems to be little difference. Eliezer Yudkowsky expands on the thought in his essay “Algernon’s Law”, stating it as:
Any simple major enhancement to human intelligence is a net evolutionary disadvantage.
Trade-offs are endemic in biology. Anything which isn’t carrying its own weight will be eliminated - organs which are no longer used will be stunted by evolution, and within a lifetime, unused muscles & bones will start weakening or being scavenged for resources, as astronauts may find out the hard way1. Often, if you use a drug or surgery to optimize something, you will discover penalties elsewhere. If you delay aging & lengthen lifespan, as is possible in many species, you might find that you have encouraged cancer or - still worse - decreased reproduction2, as evidenced by the dramatic deaths of salmon or brown antechinus3; if your immune system goes all-out against disease, you either deplete your energetic and chemical reserves4 or risk autoimmune disorders; similarly, we heal much more slowly than seems possible despite the clear advantage5; if you try to enhance attention with an amphetamine, you destroy creativity, or if the amphetamines reduce sleep, you damage memory consolidation or peripheral awareness6; or improving memory (which requires active effort to maintain7) also increases sensitivity to pain8 and interferes with other mental tasks910 (as increased WM does, slightly11); if a mouse invests in anti-aging cellular repairs, it may freeze to death12, and so on. (What are we to make of inducing savant-like abilities by brute-force suppression of brain regions13, or of tDCS improving learning?) From this perspective, it’s not too surprising that human medicine may be largely wasted effort or harmful14 (although most - especially doctors - would strenuously deny this). “Hardly any man is clever enough to know all the evil he does.”15
An analogy to complex systems is a superficial analysis at best. Many complex systems are routinely optimizable on some parameter of interest by orders of magnitude, or at least by factors. Economies grow exponentially, on the back of all sorts of improving performance curves which make us richer than emperors compared with our ancestors; the miracle of economic growth, built on thousands of distinct complex systems being optimized by humans, goes largely unnoticed, so normal and taken for granted has it become. If we were computers, an ordinary nerd with access to some liquid nitrogen could double our clock speed.
With intelligence, on the other hand, not only do we have no interventions to make one an order of magnitude smarter on some hypothetical measure of absolute intelligence (perhaps such a man would be to us as we are to dogs?) but we have no interventions which make one a few factors smarter (the smartest man to ever live?) nor do we even have any interventions which can move one more than a few percentage points up in the general population! We remain the same. It is as if scientists and doctors, after studying cars for centuries, shamefacedly had to admit that their thousands of experimental cars all still had their speed throttles stuck on 25-30kph - but the good news was that this new oil additive might make a few of the cars run 0.1kph faster!
This is not the usual state of affairs for even extremely complex systems. This raises the question of why all these cars are so uniformly stuck at a certain top speed and how they got to be so optimized; why are we like these fantastical cars, and not computer processors?
Intelligence is an almost unalloyed good when we look at correlations in the real world for income, longevity, happiness, contributions to science or medicine, criminality, favoring of free speech, etc16. Why is it, then, that we can find quotes like “the rule that human beings seem to follow is to engage the brain only when all else fails - and usually not even then”17 or “In effect, all animals are under stringent selection pressure to be as stupid as they can get away with”18? Why does so much psychological research, especially on heuristics & biases, seem to boil down to a dichotomy of a fast less-accurate way of thinking and a slow accurate way of thinking (“System 1” vs “System 2” being just one of the innumerable pairs coined by researchers)?
Because thinking is expensive and slow, and while it may be an unalloyed good, it is subject to diminishing returns like anything else (if it is profitable at all in a particular niche: no bacterium needs sophisticated cognitive skills, nor do most mammals) and other things become more valuable. Just another tradeoff.
In “The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement”19 (Human Enhancement 2008), Nick Bostrom and Anders Sandberg put this principle as a question or challenge, the “evolutionary optimality challenge” (EOC):
If the proposed intervention would result in an enhancement, why have we not already evolved to be that way?
We could take this as a biological version of Chesterton’s fence. Evolution is a massively parallel search process which has been running on humans (and predecessor organisms) for billions of years, ruthlessly optimizing for reproductive fitness. It is an immensely stupid and blind idiot god which will accomplish its goal by any available means: if that means individuals drive their own species extinct because this was the fittest thing for each individual to do, or if, when the highly artificial & unlikely conditions for group selection are enforced by an experimenter, evolution causes group norms of mass infanticidal cannibalization to develop - so be it!
It is of course possible for a new mutation to be fitter or for the environment to change and render some alternative more fit. This is sometimes true, but it is overwhelmingly usually false. If you do not believe me, feel free to go try to beat just the stock market, or if you’re up for a challenge, be more reproductively fit in a tuna fish’s niche than the tuna fish. Every so often you hear of a hedge fund which found alpha, or of an invading alien species that is beating the native flora: but remember that they are just one such fund or species out of thousands and thousands trying. Francis Bacon:
It was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods, - “Aye”, asked he again, “but where are they painted that were drowned after their vows?” And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happens much oftener, neglect and pass them by.
They are exceptions which prove the rule; they are, in fact, the exceptions which cause the rule to be true, by exploiting the niche or opportunity. Suppose we turned out to be harmfully miserly with calories and there is some receptor (such as those acted upon by stimulants like caffeine or nicotine) which triggers a cascade of changes leading to behavior which is a superior tradeoff in caloric consumption vs activity. Evolution would slowly increase the market-share of alleles which affect this receptor, and after a while, the new level of activity would become the baseline, and use of a stimulant affecting the receptor would cease to be fit because it now cranks activity too high. There may be some such opportunities available to humans today, since we know of past opportunities like adult lactose tolerance which have been sweeping through gene pools over the past thousand years, but can we really claim that all the ways in which we differ from our distant ancestors can be traced to such reproductive-fitness justifications? (And people think evolutionary psychology overreaches and speculates without evidence!) Theoretical calculations apparently indicate that in a changing environment, the “reproductive fitness gap” between the current allele and its alternatives will be small and large gaps exponentially rare20; this seems intuitive - to continue the market analogy, the bigger the arbitrage, the faster it will be exploited.
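The claimed dynamic - the bigger the fitness “arbitrage”, the faster it gets exploited - can be illustrated with a toy deterministic selection model (illustrative code only, not drawn from the cited calculations; all numbers are made up):

```python
# Toy haploid selection model: an allele with relative fitness
# advantage s (the "arbitrage") grows in frequency each generation
# via the standard replicator update. Purely illustrative.

def generations_to_fixation(s, p0=0.01, threshold=0.99):
    """Generations for the allele frequency to climb from p0 to
    threshold under selection coefficient s."""
    p, gens = p0, 0
    while p < threshold:
        p = p * (1 + s) / (p * (1 + s) + (1 - p))  # one generation of selection
        gens += 1
    return gens

for s in (0.001, 0.01, 0.1):
    print(f"s={s}: {generations_to_fixation(s)} generations")
```

Under this update, the time to sweep from rare to near-fixation scales roughly as 1/ln(1+s): a 10% advantage fixes in about a hundred generations while a 0.1% advantage takes thousands, so large unexploited gaps should be correspondingly short-lived on evolutionary timescales.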
Obviously we humans do intervene all the time, and many of those interventions are worthwhile. Women, for example, are big fans of birth control, and if the female reproductive system isn’t controlled by evolution, nothing is. How are we to reconcile the theoretical expectation that we should find it nigh-impossible to beat evolution at its own game with the observed fact that we seem to intervene successfully all the time?
What a book a devil’s chaplain might write on the clumsy, wasteful blundering, low and horribly cruel works of nature!21
It is a profound truth - realized in the nineteenth century by only a handful of astute biologists and by philosophers hardly at all (indeed, most of those who held any views on the matter held a contrary opinion) - a profound truth that Nature does not know best; that genetical evolution, if we choose to look at it liverishly instead of with fatuous good humor, is a story of waste, makeshift, compromise and blunder.22
There may be no free lunches, but might there be some cheap lunches? Yudkowsky’s formulation points out several ways to escape the argument:
interventions may not be simple: one might find major enhancements through some very complex surgery or prosthetic; perhaps brain implants which expand memory or enable (controlled) wireheading. Evolution is a search procedure for finding1 local optima, which are not necessarily global optima. Examples like the giraffe’s recurrent laryngeal nerve demonstrate such traps, but how would evolution fix them? Even if a mutation suddenly made the nerve take the shorter route, it’s not clear what other changes would have to be made to accommodate this improvement, and this combination of multiple rare mutations may not happen enough times for the small reproductive-fitness improvement (fewer resources used on nerves) to reach fixation.
the simple interventions may not lead to a major enhancement: nutritional supplements are examples; it makes perfect sense that fixing a chemical deficiency could be a simple matter and enhance reproductive fitness - but one would expect only minor mental enhancements, and this effect would not generalize to very many people. (Similarly, most nootropics do not do very much.)
the intervention may be simple, give major enhancements, but result in a net loss of reproductive fitness
The famous Ashkenazi theory of intelligence comes to mind. According to this theory, the Ashkenazi were forced into occupations demanding intelligence, and were thereby selected for high intelligence. The high-IQ genes were not previously prevalent among either Jews or gentiles because - like the sickle-cell allele - when they become too prevalent, they cause horrible diseases like Tay-Sachs. In 2007, a unique mutation in a Scottish family was found to increase verbal IQ in afflicted family members, compared with unaffected members, by something like 25 points; this would be great for them - except that the mutation starts causing blindness in one’s 20s or later. (In general, it’s much easier to find mutations or other genetic changes breaking intelligence than helping23.)
Bostrom also offers 3 categories of ways in which interventions can escape his ‘EOC’:
- “Changed tradeoffs. Evolution ‘designed’ the system for operation in one type of environment, but now we wish to deploy it in a very different type of environment. It is not surprising, then, that we might be able to modify the system better to meet the demands imposed on it by the new environment.24
- Value discordance. There is a discrepancy between the standards by which evolution measured the quality of her work, and the standards that we wish to apply. Even if evolution had managed to build the finest reproduction-and-survival machine imaginable, we may still have reason to change it because what we value is not primarily to be maximally effective inclusive-reproductive-fitness optimizers.
- Evolutionary restrictions. We have access to various tools, materials, and techniques that were unavailable to evolution. Even if our engineering talent is far inferior to evolution’s, we may nevertheless be able to achieve certain things that stumped evolution, thanks to these novel aids.”
An example of how not to escape the EOC, I believe, is offered in “The Likelihood of Cognitive Enhancement” (Lynch et al 2012), when the authors attempt to argue that powerful nootropics are possible:
But perhaps the ‘room for improvement’ issue can be recast in terms of brain evolution by asking whether comparative anatomical evidence points to strong adaptive pressures for designs that are logically related to improved cognitive performance. Anatomists often resort to allometry when dealing with questions of selective pressures on brain regions. Applied to brain proportions, this involves collecting measurements for the region of interest - e.g., frontal cortex – for a series of animals within a given taxonomic group and then relating it to the volume or weight of the brains of those animals. This can establish with a relatively small degree of error whether a brain component in a particular species is larger than would be predicted from that species’ brain size. While there is not a great deal of evidence, studies of this type point to the conclusion that cortical subdivisions in humans, including association regions, are about as large as expected for an anthropoid primate with a 1350cc brain. The volume of area 10 of human frontal cortex, for example, fits on the regression line (area 10 vs. whole brain) calculated from published data (Semendeferi et al., 2001) for a series composed of gibbons, apes and humans (Lynch and Granger, 2008). Given that this region is widely assumed to play a central role in executive functions and working memory, these observations do not encourage the idea that selective pressures for cognition have differentially shaped the proportions of human cortex. Importantly, this does not mean that those proportions are in any sense typical. The allometric equations involve different exponents for different regions, meaning that absolute proportions (e.g., primary sensory cortex vs. association cortex) change as brains grow larger. 
The balance of parts in the cortex of the enormous human brain is dramatically different than found in the much smaller monkey brain: area 10, for instance, occupies a much greater percentage of the cortex in man. But these effects seem to reflect expansion according to rules embedded in a conserved brain plan rather than selection for the specific pattern found in humans (Finlay et al., 2001).
…But our argument here is that these expanded cortical areas are likely to use generic network designs shared by most primates; if so, then it appears unlikely that the designs are in any sense ‘optimized’ for cognition. We take this as a starting position for the assumption that the designs are far from being maximally effective for specialized human functions, and therefore that it is realistic to expect that cognition-related operations can be significantly enhanced.
I would agree that the human brain’s architecture does not seem to be optimal in any universal sense; and that this would constitute an interesting argument if one were arguing that artificial intelligences will not inherently be limited to a level of intelligence comparable to the greatest human geniuses.
However, this does not offer hope for nootropics because the human brain can easily be suboptimal in its gross anatomical architecture but close to optimal in any factor easily tweaked by chemicals! (A suggestion that brain region size is suboptimal is a suggestion only that a large change in brain region size might lead to large gains - but large changes are neither easy, simple, nor possible currently.)
Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public.25
Bostrom’s criteria are more general, so we’ll use them.
Birth control is a clear example of satisfying loophole #2, ‘value discordance’. Ovulation is under the body’s control and is linked in evolutionary psychology to many changes in behavior; unprocreative sex is common throughout the animal kingdom, where it serves other purposes like forming social connections in bonobo troops. Hunter-gatherer women practice spaced births by letting their child suckle for years; maternal cannibalism has been observed when mothers are under particular stress (and perhaps also in humans). So, it’s clear that there is birth-control capability already available to hominids, and not too surprising that it’s possible to render a healthy woman entirely infertile without major health consequences. Many women would prefer that evolution had done just this! They do not value having a dozen children while young; they would rather have just 2 at a time of their choosing - if any at all. Why is evolution not so obliging? Well, it obviously would not be very reproductively fit…
Pacemakers are an example of #3: evolution couldn’t afford to engineer more reliable hearts, in part for lack of electronic microchips and possibly because humans are already at the limits of the performance envelope26.
Many traits related to nutrition fall into the category of #1.27
How about supplements? Most supplements are just tweaking biochemical processes, and don’t obviously fall under 1, 2, or 3; and the few which seem to enhance healthy humans are finicky creatures (see my introduction to Nootropics). Melatonin, for example, may seem particularly questionable as one’s body secretes considerable quantities in an intricate cycle (but see later).
The Flynn effect is a possible counter-example: it operates broadly over many countries, improves average IQ by perhaps 10 or more points over the last century28, presumably is environmental, and operates without any explicit expensive eugenics programs or anything like that.
However, there are several ways in which the Flynn effect respects Algernon’s argument and passes the loopholes:
- the Flynn effect is limited in its gains and so will not result in major gains: it has already stopped in 3 of the wealthiest countries and reversed to some degree. The situation in the US is unclear, but given the outright losses in verbal & science skills seen 1981-2010 among the most intelligent Midwestern students29, this is consistent with a Flynn effect operating through eliminating deficiencies & improving the lower end, or with a Flynn effect that has ceased to exist
- the Flynn effect is apparently environmental, and one of the most plausible explanations is that it is due to either nutritional deficits or public-health interventions against infectious diseases. In neither case are the interventions ‘easy’ in any sense, nor are they available to evolution - if one’s diet is lacking in an essential element like iodine, evolution cannot simply conjure it up; nor can it invent any better immune system than it already has as part of the usual arms race with infectious agents. As we already noted, we could expect nutritional interventions to produce small benefits, and we might expect that implementing a whole battery of possible improvements (iodine deficiency, iron deficiency, a dozen childhood infections, etc.) would produce much of what we see with the Flynn effect. But we would expect the gains to be specific and quickly exhausted once the low-hanging fruit is picked. (There cannot be indefinitely many deficiencies and infections!) This too is what we observe with the halting of the Flynn effect
- The intelligence gains from the Flynn effect may not be reproductive-fitness-increasing; IQ correlates strongly with many desirable things like income, happiness, knowledge, education, etc. - but not with having more children than average. The correlations are found both in the West and worldwide. (It is of course possible that the Flynn effect causes IQ gains and reproductive-fitness increases at the lower end of the spectrum while high IQ is intrinsically reproductive-fitness-reducing in the modern environment, but the observation is suggestive.)
- the Flynn effect does not actually reflect intelligence gains but damage to the validity of the subtests in which the gains appear, and so is irrelevant
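The “low-hanging fruit” reading of the Flynn effect - gains accumulating while deficiencies remain to be fixed, then halting - can be sketched as a toy model (the deficits and their IQ penalties below are invented for illustration, not empirical estimates):

```python
# Toy "low-hanging fruit" model of the Flynn effect: average IQ is
# dragged down by a fixed set of environmental deficits; each
# public-health fix recovers its penalty exactly once, so gains
# must plateau when the finite list runs out. Numbers are made up.

deficits = {"iodine": 4.0, "iron": 2.0, "lead": 2.5, "infections": 3.0}

def average_iq(remaining):
    """Average IQ given the deficits still afflicting the population."""
    return 100.0 - sum(remaining.values())

remaining = dict(deficits)
trajectory = [average_iq(remaining)]
for name in list(remaining):   # fix one deficit per era
    del remaining[name]
    trajectory.append(average_iq(remaining))

print(trajectory)  # rises monotonically, then is stuck at 100.0 forever
```

The trajectory rises only as long as unfixed deficits remain; once the finite list is exhausted, no further gains of this kind are possible - consistent with the observed halt in the wealthiest countries.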
…one reason to be wary of, say, cholinergic memory enhancers [such as piracetam]: if they have no downsides, why doesn’t the brain produce more acetylcholine already? Maybe you’re using up a limited memory capacity, or forgetting something else…
Let’s consider the specific case of piracetam. Piracetam is so old and has so many studies on its efficacy (real if not substantial) and safety (utterly) that it screens off a lot of secondary considerations.
Might piracetam escape the EOC with #3? No. Whatever receptors or buttons piracetam pushes could already be pushed by the brain in the usual way. There is nothing novel about piracetam in that sense.
Might piracetam escape the EOC with #2? Perhaps. It is hard to see how piracetam trades off reproductive fitness for something else, though. Since its synthesis in 1964, minimal side-effects or other safety issues have been noted, unlike with other drugs such as caffeine or aspirin.
Might piracetam escape the EOC with #1?
Probably. Many tradeoffs are different in contemporary First World countries than in the proverbial Stone Age veldt. We should look more closely at what piracetam does and what tradeoffs it may be changing.
A ‘cholinergic’ operates by encouraging higher levels of the acetylcholine neurotransmitter; acetylcholine is one of the most common neurotransmitters. If serotonin is loosely associated with mood, we might say that acetylcholine is loosely associated with the ‘velocity’ of thoughts in the brain. If one is using more acetylcholine, one needs to create more acetylcholine (the brain cannot borrow indefinitely like the US federal government). Acetylcholine is made out of the essential nutrient choline.
An interesting thing about piracetam use is that it doesn’t do very much by itself30. It is charitably described as ‘subtle’. The standard advice is to take a choline supplement with the piracetam: a gram of soy lecithin, choline bitartrate, or choline citrate.
Isn’t this interesting? Presumably we are not Irish peasants consuming wretched diets of potato, potato, and more potato, with some mutton on the holidays. We are cognizant of how a good diet & exercise are prerequisites to brain power. Yet, a gram of straight choline still boosts piracetam’s effects from subtle or placebo, to noticeable & measurable.
This suggests that perhaps a normal First World diet is choline-deficient. If even well-fed humans must economize on choline & acetylcholine, then surely our ancestors, who were worse off nutritionally, had to economize even more severely. Evolution would frown on squandering acetylcholine on idle thoughts like ‘what was that witty saying by Ugh the other day?’ That choline might be needed in the next famine! This suggestion is buttressed by one small mouse experiment:
Administering choline supplementation to pregnant rats improved the performance of their pups, apparently as a result of changes in neural development in turn due to changes in gene expression (Meck et al. 1988; Meck & Williams 2003; Mellott et al. 2004). Given the ready availability of choline supplements, such prenatal enhancement may already (inadvertently) be taking place in human populations. Supplementation of a mother’s diet during late pregnancy and 3 months postpartum with long-chained fatty acids has also been demonstrated to improve cognitive performance in human children (Helland et al. 200331).32
Past our embryo-hood, we can’t tell our bodies that we have available as much choline as it could possibly need, that we value our synapses blazing at every moment more than a better chance of surviving a famine (which effectively no longer exist). So we have to override it, for our own good.
(It’s worth noting here that there is considerable overlap between #1 and #2. Whether you see piracetam as a conflict in values between evolution’s worst-case planning and our desire for greater average or peak performance, or as a shift in optimal expenditure based on a historical drop in the cost of bulk quantities of choline, is a matter of preference.)
How about melatonin? It is a clear-cut example of failing #3, but perhaps it passes under #1 like piracetam?
A shift worker is an obvious case of value discordance: humans are meant to work mostly during the day, with minimal dangerous night-time activity. Shift workers perversely insist on doing the exact opposite, even struggling against their circadian rhythms (to the detriment of their health). Evolution wots not of your ‘employment contract’, pitiful human!
Regular people have a less extreme version of the shift worker’s dilemma. The modern population doesn’t rise and set with the sun, for imponderable reasons. (My personal theory is widespread akrasia: darkness overcame hyperbolic discounting and forced the ancients to bed, but we have electric lighting and can stay up indefinitely.) This leads to a values mismatch, and a similar solution.
Modafinil is another drug that seems suspiciously like a free lunch. The side-effects are minimal and rare, and the benefit quite unusual and striking: not needing to sleep for a night. The research on general cognitive benefits is mixed but real33. (My own experience with armodafinil was that after 41 hours of sleep-deprivation, my working memory and focus were actually better than normal as judged by dual n-back scores! An anomaly, but still curious.) Yes, modafinil costs money, but that’s not really relevant to our health or to Evolution. Yes, there is, anecdotally, a risk of coming to tolerate modafinil (although no addiction), but again that doesn’t matter to Evolution - there would still be benefits before the tolerance kicked in.
What heuristic might we use?
- Chemically, modafinil does not seem to be so bizarre that evolution could not stumble across it or an equivalent mechanism, so probably we cannot appeal to #3, “evolutionary restrictions”. Its mechanism is not very clear, but mostly seems to manipulate things like the histamine system (and to a much lesser extent, dopamine), all things Evolution could easily do.
Nor is it clear what value discordance might be involved. We could come up with one, though. If one theorized that modafinil came with a memory penalty, inasmuch as memory consolidation and the hippocampus seem to intimately involve sleep, then we might have a discordance where we value being able to produce and act more than being able to remember things. This might even be a sensible tradeoff for a modern man: why not sacrifice some ability to learn or remember long-term, since you can immediately gain back that capacity and more by suitable use of efficient memory techniques like spaced repetition?
#1 seems promising. Like piracetam, there is something in short supply that modafinil would use more of: calories! While you are awake, you are burning more calories than while asleep. During the day, synapses build up levels of some proteins, which get wiped out by sleep; is this because synapses and memories are expensive34 and cannot be allowed to consume ever more resources without some sort of ‘garbage collection’, synaptic homeostasis? Fly & rat studies bear out some of the predictions of the model and may lead to interesting new findings35 (see also Born & Feld 2012 discussing Chauvette et al 2012).
Previously noted was the metabolic cost of defending against infections; one animal study found the proximate cause of death in sleep deprivation to be bacterial infections36. You are also - in the ancient evolutionary environment - perhaps exposing yourself to additional risks in the dark night. (This would be the preservation & protection theory of sleep.)
Although it [the brain] only accounts for 2% of an adult’s body weight, it accounts for 20-25% of an adult’s resting oxygen and energy intake (Attwell & Laughlin 2001: 1143). In early life, the brain even makes up 60-70% of the body’s total energy requirements. A chimpanzee’s brain, in comparison, only consumes about 8-9% of its resting metabolism (Aiello & Wells 2002: 330). The human brain’s energy demands are about 8 to 10 times higher than those of skeletal muscles (Dunbar & Shultz 2007: 1344), and, in terms of energy consumption, it is equal to the rate of energy consumed by the leg muscles of a marathon runner when running (Attwell & Laughlin 2001: 1143). All in all, its consumption rate is only topped by the energy intake of the heart (Dunbar & Shultz 2007: 1344).
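As a sanity check on the quoted figures (a toy calculation using only the numbers in the quote, nothing from outside it):

```python
# A tissue that is ~2% of body mass but consumes 20-25% of resting
# energy runs at roughly 10-12x the whole-body per-unit-mass average -
# broadly consistent with the quote's "8 to 10 times higher than
# skeletal muscles". Figures taken directly from the quote above.
brain_mass_frac = 0.02                           # ~2% of body weight
energy_frac_low, energy_frac_high = 0.20, 0.25   # ~20-25% of resting energy

ratio_low = energy_frac_low / brain_mass_frac
ratio_high = energy_frac_high / brain_mass_frac
print(round(ratio_low, 1), round(ratio_high, 1))  # 10.0 12.5
```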
There are additional disadvantages to increased intelligence - larger heads would drive maternal & infant mortality rates even higher than they are38. And it’s worth noting that while the human brain is disproportionately huge, the human cerebral cortex is not any bigger than one would predict by extrapolating from gibbon or ape cortex volumes, despite the human lineage splitting off millions of years ago.39 The human brain seems to be special only in being a scaled-up primate brain40, with close to the metabolic limit in its number of neurons41 (which suggests a resolution to the question of why, despite convergent evolution of relatively high intelligence42, only primates “took off”). There are other ways in which humans seem to have hit intelligence limits - why did our ancestors’ brains grow in volume for millions of years43, only to come to a halt with the Neanderthals44 & Cro-Magnons and actually start shrinking45 to the modern volume, and why did old age only start increasing 50,000 years ago or later46, well after humans began developing technology like controlled fire (>=400,000 years ago47)? Why are primate guts (also resource-expensive) inversely correlated with brain size, and why, in one fish breeding experiment, were muscles starved of sugars and brains favored48? Why do the Ashkenazi seem to pay for their intelligence with endemic genetic disorders49? Why does evolution permit human brains to shrink dramatically with age, as much as 15% of volume, despite the huge performance losses, while the brains of our closest relative species (the chimpanzees) do not shrink at all?50 For that matter, why are heads, central nervous systems, and primate-level intelligence so extremely rare on the tree of life, with no examples of convergent evolution of intelligence (as opposed to examples like basic eye-spots, which are such a fantastically adaptive tool that they have independently evolved “somewhere between 45 and 60 times”)?51
The obvious answer is that diminishing returns have kicked in for intelligence in primates and humans in particular5253. (Indeed, it’s apparently been argued not only that humans are not much smarter than primates54, but that there is little overall intelligence difference among vertebrates55. Humans lose embarrassingly on even pure tests of statistical reasoning; we are outperformed on the Monty Hall problem by pigeons, and to a lesser extent by monkeys!) The last few millennia aside, humans have not done well and have apparently verged on extinction before, and the demographic transition56 and anthropogenic existential risks suggest that our current success may be short-lived (not that agriculture & civilization were great in the first place). Some psychologists have even tried to make the case that increases in intelligence do not lead to better inferences or choices (Hertwig & Todd 2003).
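The Monty Hall result is worth pausing on, since the 2/3 advantage of switching is notoriously counterintuitive even to humans trying hard; a quick simulation confirms it (this is a generic sketch of the standard three-door setup, not code from any of the cited animal studies):

```python
import random

def monty_hall_trial(switch, rng):
    """Play one round; return True if the final pick wins the prize."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

def win_rate(switch, n=100_000, seed=0):
    rng = random.Random(seed)
    return sum(monty_hall_trial(switch, rng) for _ in range(n)) / n

print(round(win_rate(switch=True), 2))   # ~0.67: switching wins 2/3 of the time
print(round(win_rate(switch=False), 2))  # ~0.33: staying wins only 1/3
```

The asymmetry exists because the host’s choice of door leaks information: staying wins only when the initial 1/3-probability guess was right, so switching wins the remaining 2/3 of the time.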
Modafinil or modafinil-like traits might be selected against due to increased calorie expenditure, decreased calorie consumption, or the risks of night-time activity. Each of these explanations fails in a modern environment: modern societies have murder and assault rates orders of magnitude lower than those seen among aborigines57, and calories are so abundant that they have begun reducing reproductive fitness (we call this poisoning-by-too-many-calories the obesity epidemic).
Is that last a convincing defense of modafinil against the EOC or Algernon’s principle? It seems reasonable to me, if not as strong a defense as I would like.
How about opiates? Morphine and other painkillers can easily be justified as evolution not knowing when a knife cut is by a murderous enemy and when it’s by a kindly surgeon (which didn’t exist way back when), and choosing to make us err on the side of always feeling pain. But recreational drug abuse?
- #1 doesn’t seem too plausible - what about modern society would favor opiate consumption outside of medicinal use? If one wishes to deaden the despair and ennui of living in a degenerate atheistic material culture, we have beer for that.58
- #3 doesn’t work either; opioids have been around for ages and work via the standard brain machinery.
- #2 might work here as well, but this dumps us straight into the debate about the War on Drugs and what harm drug use does to the user & society.
But even this analysis is helpful: we now know on what basis to oppose drug use, and most importantly, what kind of evidence we should look for to support or falsify our beliefs about heroin.
MDMA is another popular illicit drug. Reading accounts of early MDMA use or studies on its beneficial psychological properties (a bit like those claimed for previous psychedelics like LSD or psilocybin), one is struck by how fear seems to be a common trait - or rather, the lack of fear:
With Ecstasy, I had simply stepped outside the worn paths in my brain and, in the process, gained some perspective on my life. It was an amazing feeling. Small inconsistencies became obvious. “I need money, I have a $500 motorcycle that I’m too scared to ride, so why not sell it?” So did big psychological ones: “The more angry I am at myself, the more critical I am of my girlfriend. Why should I care how Carol chews her gum?” Ecstasy nudges you to think, very deeply, about one thing at a time. (It wasn’t that harsh LSD feeling, where every thought seems like an absurd paradox - like the fact that we’re all, deep down, just a bunch of monkeys.)…A government-approved study in Spain has just begun in which Ecstasy is being offered to treat rape victims for whom no treatment has worked, based on the premise that MDMA “reduces the fear response to a perceived emotional threat” in therapy sessions. A Swiss study in 1993 yielded positive anecdotal evidence on its effect on people suffering from post-traumatic stress disorder. And a study in California may soon begin in which Ecstasy is administered to end-stage cancer patients suffering from depression, existential crises and chronic pain. The F.D.A. will be reviewing the protocol for Stage 2 of the trial; results are expected in 2002.
Reading this, I can’t help but be reminded of the popular self-help practice of “rejection therapy” (an exposure therapy), elements of which reappear among businessmen/entrepreneurs, pick-up artists, shyness therapists59, nerds, and others: one goes out in public and makes small harmless requests of various strangers until one is no longer uncomfortable or afraid. Eventually one realizes that it is harmless to ask - the worst that will happen is they will say no - and one will presumably become a more confident, less fearful, happier, and more effective person. What is the justification for this? After all, one doesn’t regard being afraid of, say, snake venom as a problem and a good reason to undertake a long regimen of Mithridatism! Snake venom is dangerous and should be feared, and deliberately destroying one’s useful fear would be like a mouse undertaking ‘cat therapy’.
Rejection therapy fans argue that there is a mismatch between fear and reality: our fears and social anxiety are calibrated for the world of a few centuries ago, when >90% of the population lived on farms and in villages where a poor reputation & social rejection could mean death; in the modern world, social rejection is a mere inconvenience, because even if one is rejected by one’s extended circle of 150 people, there are 100x more people in a small town, and thousands of times more in a city (to say nothing of a megalopolis like New York City, where the numbers run into the millions). Risk-taking behavior which is optimal in the village will be ludicrously conservative and inefficient in the big city.
If this theory were correct (it is possible but far from proven), and if MDMA worked the same way (unlikely), then we have a clear example of #1, “changed tradeoffs”: we are too risk-averse and fearful of social sanction for a modern environment. (Curiously, this is also a proposed explanation for the apparent increase in psychopathy in modern societies: psychopaths are “defectors” or “hawks” who would normally be suppressed or less fit in a tightly-networked tribe or village, but can thrive in the reputation-poor modern world as they move from place to place and social circle to social circle, leaving behind their victims.60)
Packing For Mars: The Curious Science of Life in the Void, Mary Roach 2010; from the chapter “The Horizontal Stuff: What If You Never Got Out of Bed?”:
The human body is a frugal contractor. It keeps the muscles and skeleton as strong as they need to be, no more and no less. “Use it or lose it” is a basic mantra of the human body. If you take up jogging or gain thirty pounds, your body will strengthen your bones and muscles as needed. Quit jogging or lose the thirty pounds, and your frame will be appropriately downsized. Muscle is regained in a matter of weeks once astronauts return to earth (and bed-resters get out of bed), but bone takes three to six months to recover. Some studies suggest that the skeletons of astronauts on long-duration missions never quite recover, and for this reason it’s bone that gets the most study at places like FARU.
The body’s foreman on call is a cell called the osteocyte, embedded all through the matrix of the bone. Every time you go for a run or lift a heavy box, you cause minute amounts of damage to your bone. The osteocytes sense this and send in a repair team: osteoclasts to remove the damaged cells, and osteoblasts to patch the holes with fresh ones. The repaving strengthens the bone. This is why bone-jarring exercise like jogging is recommended to beef up the balsa-wood bones of thin, small-boned women of northern European ancestry, whose genetics, postmenopause, will land them on the short list for hip replacement.
Likewise, if you stop jarring and stressing your bones - by going into space, or into a wheelchair or a bed-rest study - this cues the strain-sensing osteocytes to have bone taken away. The human organism seems to have a penchant for streamlining. Whether it’s muscle or bone, the body tries not to spend its resources on functions that aren’t serving any purpose.
Tom Lang, a bone expert at the University of California, San Francisco, who has studied astronauts, explained all this to me. He told me that a German doctor named Wolff figured it out in the 1800s by studying X-rays of infants’ hips as they transitioned from crawling to walking. “A whole new evolution of bone structure takes place to support the mechanical loads associated with walking,” said Lang. “Wolff had the great insight that form follows function.”…
How bad can it get? If you stay off your feet indefinitely, will your body completely dismantle your skeleton? Can humans become jellyfish by never getting up? They cannot. Paraplegics eventually lose from 1/3 to 1/2 of their bone mass in the lower body. Computer modeling done by Dennis Carter and his students at Stanford University suggests that a two-year mission to Mars would have about the same effect on one’s skeleton. Would an astronaut returning from Mars run the risk of stepping out of the capsule into Earth gravity and snapping a bone? Carter thinks so. It makes sense, given that extremely osteoporotic women have been known to break a hip (actually, the top of the thighbone where it enters the pelvis) by doing nothing but shifting their weight while standing. They don’t fall and break a bone; they break a bone and fall. And these women have typically lost a good deal less than 50% of their bone mass.
A number of leading theories of aging, namely The Antagonistic Pleiotropy Theory (Williams, 1957), The Disposable Soma Theory (Kirkwood, 1977) and most recently The Reproductive-Cell Cycle Theory (Bowen and Atwood, 2004, 2010) suggest a tradeoff between longevity and reproduction. While there has been an abundance of data linking longevity with reduced fertility in lower life forms, human data have been conflicting. We assessed this tradeoff in a cohort of genetically and socially homogeneous Ashkenazi Jewish centenarians (average age ~100 years). As compared with an Ashkenazi cohort without exceptional longevity, our centenarians had fewer children (2.01 vs 2.53, p<0.0001), were older at first childbirth (28.0 vs 25.6, p<0.0001), and at last childbirth (32.4 vs 30.3, p<0.0001). The smaller number of children was observed for male and female centenarians alike. The lower number of children in both genders together with the pattern of delayed reproductive maturity is suggestive of constitutional factors that might enhance human life span at the expense of reduced reproductive ability.
“When do evolutionary explanations of belief debunk belief?”, Griffiths & Wilkins 2010:
Resources allocated to forming true beliefs are resources unavailable for making sperm or eggs, or fighting off the effects of aging by repairing damaged tissues. Modern humans in first-world countries lead a sheltered life and it is hard for us to appreciate just how direct these trade-offs can be. A dramatic example comes from a small Australian mammal, the Brown Antechinus (Antechinus Stuartii). In this and several related species a short, frenzied mating season is followed by a period during which the male’s sexual organs regress and their immune system collapses. Then all the males in the population die. The Antechinus has little chance of surviving to the next breeding season and so it allocates all of its resources to the reproductive effort and none to tissue maintenance. There can be little doubt that if, like us, the Antechinus had a massively hypertrophied cortex and engaged in a lot of costly thinking, it would allow that neural tissue to decay in the mating season so as to allocate more resources to sperm production and sexual competition.
Some years ago I drew attention to the “paradox of placebos”, a paradox that must strike any evolutionary biologist who thinks about it. It’s this. When a person’s health improves under the influence of placebo medication, then, as we’ve noted already, this has to be a case of “self-cure”. But if people have the capacity to heal themselves by their own efforts, why not get on with it as soon as needed? Why wait for permission - from a sugar pill, a witch doctor - that it’s time to get better?
Presumably the explanation must be that self-cure has costs as well as benefits. What kind of costs are these? Well, actually they’re fairly obvious. Many of the illnesses we experience, like pain, fever and so on, are actually defenses which are designed to stop us from getting into more trouble than we’re already in. So “curing” ourselves of these defenses can indeed cost us down the line. Pain reduces our mobility, for example, and stops us from harming ourselves further; so, relieving ourselves of pain is actually quite risky. Fever helps kill bacterial parasites by raising body temperature to a level they can’t tolerate; so again, curing ourselves of fever is risky. Vomiting gets rid of toxins; so suppressing vomiting is risky. The same goes for the deployment of the immune system. Mounting an immune response is energetically expensive. Our metabolic rate rises 15% or so, even if we’re just responding to a common cold. What’s more, when we make antibodies we use up rare nutrients that will later have to be replaced. Given these costs, it becomes clear that immediate self-cure from an occurrent illness is not always a wise thing to do. In fact there will be circumstances when it would be best to hold back from deploying particular healing measures because the anticipated benefits are not likely to exceed the anticipated costs. In general it will be wise to err on the side of caution, to play safe, not to let down our defenses such as pain or fever until we see signs that the danger has passed, not to use up our stock of ammunition against parasites until we know we’re in relatively good shape and there’s not still worse to come. Healing ourselves involves - or ought to involve - a judgment call…There’s plenty of evidence that we have just such a system at work overseeing our health. For example, in winter, we are cautious about deploying our immune resources. That’s why a cold lasts much longer in winter than it does in summer.
It’s not because we’re cold; it’s because our bodies, based on deep evolutionary history, reckon that it’s not so safe to use our immune resources in winter as it would be in summer. There’s experimental confirmation of this in animals. Suppose a hamster is injected with bacteria which make it sick - but in one case the hamster is on an artificial day/night cycle that suggests it’s summer; in the other case it’s on a cycle that suggests it’s winter. If the hamster is tricked into thinking it’s summer, it throws everything it has got against the infection and recovers completely. If it thinks it’s winter, then it just mounts a holding operation, as if it’s waiting until it knows it’s safe to mount a full-scale response. The hamster “thinks” this or that? No, of course it doesn’t think it consciously - the light cycle acts as a subconscious prime to the hamster’s health management system.
Humphrey also goes on to point out that exploiting the placebo effect satisfies one of Bostrom’s EOC criteria (which we haven’t discussed yet):
But, as I said, the world has changed - or at least is changing for most of us. We no longer live in such an oppressive environment. We no longer need to play by the old rules, and rein in our peculiar strengths and idiosyncrasies. We can afford to take risks now we couldn’t before. So, yes, I’m hopeful. I think it really ought to be possible to devise placebo treatments for the self, which do indeed induce them to come out from their protective shells - and so to emerge as happier, nicer, cleverer, more creative people than they would ever otherwise have dared to be.
A scratch or worse injury can take weeks to heal fully, yet human cells can replicate far faster and could fill the equivalent volume in hours. Such fast repair has obvious survival value, so why don’t we heal that quickly? A simple physics model seems to predict healing times fairly accurately from a rough calculation of the metabolic expenditure of such all-out healing, assuming that the body has only a little metabolic energy to spare at any time.
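The flavor of that rough calculation can be sketched numerically; every number below is an illustrative assumption of mine (wound size, tissue-synthesis cost, spare power budget), not a figure from the model the footnote alludes to:

```python
# Energy-limited healing: time = energy needed to rebuild tissue / spare metabolic power.
wound_mass_g = 5.0              # grams of tissue to regrow (assumed)
synthesis_cost_kj_per_g = 21.0  # ~5 kcal per gram of new tissue, a typical growth cost (assumed)
spare_power_w = 0.1             # watts the body can divert to repair at any moment (assumed small)

total_joules = wound_mass_g * synthesis_cost_kj_per_g * 1000  # kJ -> J
healing_days = total_joules / spare_power_w / 86400           # seconds -> days
print(f"{healing_days:.1f} days")  # → 12.2 days
```

The point of the sketch is only that a tiny spare-power budget turns an hours-scale cell-replication problem into a weeks-scale energy-delivery problem, matching the observed timescale of wound healing.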
Keeping awake using stimulants prevents the memory consolidation that would have taken place during sleep, and enhanced concentration ability may impair the ability to notice things in peripheral awareness.
One of the surprising facts about memory is that it seems that every time one recalls a memory, the memory is effectively destroyed and must be recreated. This constant cycle of creation and destruction seems to be key in how spaced repetition works, and also explains the well-documented fallibility of eyewitnesses, the ease of inducing false and ‘recovered’ memories, and the dramatic effect of drugs on recall. An article on those drugs describes the active process:
The disappearance of the fear memory suggested that every time we think about the past we are delicately transforming its cellular representation in the brain, changing its underlying neural circuitry. It was a stunning discovery: Memories are not formed and then pristinely maintained, as neuroscientists thought; they are formed and then rebuilt every time they’re accessed. “The brain isn’t interested in having a perfect set of memories about the past,” LeDoux says. “Instead, memory comes with a natural updating mechanism, which is how we make sure that the information taking up valuable space inside our head is still useful. That might make our memories less accurate, but it probably also makes them more relevant to the future.”
…What does PKMzeta do? The molecule’s crucial trick is that it increases the density of a particular type of sensor called an AMPA receptor on the outside of a neuron. It’s an ion channel, a gateway to the interior of a cell that, when opened, makes it easier for adjacent cells to excite one another. (While neurons are normally shy strangers, struggling to interact, PKMzeta turns them into intimate friends, happy to exchange all sorts of incidental information.) This process requires constant upkeep - every long-term memory is always on the verge of vanishing. As a result, even a brief interruption of PKMzeta activity can dismantle the function of a steadfast circuit. If the genetic expression of PKMzeta is amped up - by, say, genetically engineering rats to overproduce the stuff - they become mnemonic freaks, able to convert even the most mundane events into long-term memory. (Their performance on a standard test of recall is nearly double that of normal animals.) Furthermore, once neurons begin producing PKMzeta, the protein tends to linger, marking the neural connection as a memory. “The molecules themselves are always changing, but the high level of PKMzeta stays constant,” Sacktor says. “That’s what makes the endurance of the memory possible.” For example, in a recent experiment, Sacktor and scientists at the Weizmann Institute of Science trained rats to associate the taste of saccharin with nausea (thanks to an injection of lithium). After just a few trials, the rats began studiously avoiding the artificial sweetener. All it took was a single injection of a PKMzeta inhibitor called zeta-interacting protein, or ZIP, before the rats forgot all about their aversion. The rats went back to guzzling down the stuff.
Genetic memory enhancement has been demonstrated in rats and mice. In normal animals, during maturation expression of the NR2B subunit of the N-methyl-D-aspartate (NMDA) receptor is gradually replaced with expression of the NR2A subunit, something that may be linked to the lower brain plasticity in adult animals. Tsien’s group (Tang et al. 1999) modified mice to overexpress the NR2B. The NR2B ‘Doogie’ mice demonstrated improved memory performance, both in terms of acquisition and retention. This included unlearning of fear conditioning, which is believed to be due to the learning of a secondary memory (Falls et al. 1992). The modification also made the mice more sensitive to certain forms of pain, suggesting a nontrivial trade-off between two potential enhancement goals (Wei et al. 2001).
“Why Aren’t We Smarter Already: Evolutionary Trade-Offs and Cognitive Enhancements”, Hills & Hertwig 2011:
If better memory, for example, is unequivocally beneficial, why do seemingly trivial neuromolecular changes that would enhance memory, such as the over-expression of NMDA receptors in the hippocampus (Tang et al., 1999), not (to our knowledge) exist in natural populations? If it is so easy to evolve superior cognitive capacities, why aren’t we smarter already?
“Forgetting is Key to a Healthy Mind: Letting go of memories supports a sound state of mind, a sharp intellect - and superior recall”, Scientific American (consonant with some of the sleep research on forgetting and some theories explaining spaced repetition).
Taxi drivers forfeit some things in exchange for their navigational skills; see the BBC’s “Brain changes seen in cabbies who take ‘The Knowledge’”; from “Acquiring ‘the Knowledge’ of London’s Layout Drives Structural Brain Changes” (Woollett & Maguire 2011):
The memory profile displayed by the now qualified trainees mirrors exactly the pattern displayed in several previous cross-sectional studies of licensed London taxi drivers [3, 4, 20] (and that which normalized in the retired taxi drivers). In those studies also, the taxi drivers displayed more knowledge of the spatial relationships between landmarks in London, unsurprisingly, given their increased exposure to the city compared to control participants. By contrast, this enhanced spatial representation of the city was accompanied by poorer performance on a complex figure test, a visuospatial task designed to assess the free recall of visual material after 30 min. Our findings therefore not only replicate those of previous cross-sectional studies but extend them by showing the change in memory profile within the same participants. That the only major difference between T1 and T2 was acquiring ‘the Knowledge’ strongly suggests that this is what induced the memory change.
Hills & Hertwig 2011:
The benefits of limited memory have also been proposed to explain the curious constraints on working-memory span to a limited number of information chunks (for several related examples, see Hertwig & Todd, 2003)…As an example, working memory is correlated with performance on many cognitive tasks, such as the Scholastic Aptitude Test. However, individuals with high working-memory capacity often fail to hear their own name in a cocktail-party task and recall fewer items from a list after experiencing a context change (see Unsworth & Engle, 2007). These results demonstrate that the effects of enhancements should be viewed as we view adaptations: Enhancement is only meaningful with respect to specific individuals in specific environments.
Somatic maintenance needs only to be good enough to keep the organism in sound physiological condition for as long as it has a reasonable chance of survival in the wild. For example, since more than 90% of wild mice die in their first year (Phelan and Austad, 1989), any investment of energy in mechanisms for survival beyond this age benefits at most 10% of the population. Nearly all of the mechanisms required for somatic maintenance and repair (DNA repair, antioxidant systems, etc.) require [substantial] amounts of energy (ATP). Energy is scarce, as shown by the fact that the major cause of mortality for wild mice is cold, due to failure to maintain thermogenesis (Berry and Bronson, 1992). The mouse will therefore benefit by investing any spare energy into thermogenesis or reproduction, rather than into better capacity for somatic maintenance and repair, even though this means that damage will eventually accumulate to cause aging. The three-year lifespan potential of the mouse is sufficient for its actual needs in the wild, and yet it is not excessive, given that some mice will survive into their second year and that age-related deterioration will become apparent before maximum life span potential is reached. Thus, it makes sense to suppose that the intrinsic life span of the mouse has been optimized to suit its ecology. The idea that intrinsic longevity is tuned to the prevailing level of extrinsic mortality is supported by extensive observations on natural populations (Ricklefs, 1998). Evolutionary adaptations such as flight, protective shells, and large brains, all of which tend to reduce extrinsic mortality, are associated with increased longevity.
Hills & Hertwig 2011:
Perhaps the clearest natural evidence for between-domain trade-offs in performance across tasks comes from savants, whose spectacular skills in one domain are associated with poor performance in other domains. Those associations are not coincidental. Savant-like skills can be induced in healthy participants by turning off particular functional areas of the brain - for example, via repetitive transcranial magnetic stimulation (Snyder, 2009).
The fads for anti-oxidants and vitamins are good examples (what, the body can’t clean up oxidants already?) and so we sometimes find evidence of harm; medicine and psychology are rife with methodological & systematic problems perhaps because it’s so hard to find anything which works (“bad money drives out good”). Medical economist Robin Hanson has blogged for years on various ways in which medicine is ineffective, expensive, or harmful (as the evolutionary perspective would predict would be the case in all but exceptional cases like broken bones, the low-hanging fruit which may have been discovered as much as millennia ago); here is a selection of his medical posts in chronological order:
- “RAND Health Insurance Experiment”
- “Disagreement Case Study: Hanson and Cutler”
- “Cut Medicine in Half”; from ‘Is More Medicine Better?’; see also Overtreated
- “Hospice Beats Hospital”
- “Eternal Medicine”
- “Beware Transfusions”
- “Beware High Standards”
- “Free Docs Not Help Poor Kids”
- “Avoid Vena Cava Filters”
- “Question Medical Findings” (Stuart Buck)
- “Medical Ideology”
- “Meds to Cut”
- “Our Nutrition Ignorance”
- “Wasted Cancer Hope”
- “Africa HIV: Perverts or Bad Med?”
- “Megan on Med”
- “Hard Facts: Med”
- “In Favor of Fever”
- “Death Panels Add Life”
- “How Med Harms”
- “Beware Knives”
- “The Oregon Health Insurance Experiment”
- “Skip Cancer Screens”
- “Beware Cancer Med”
- “Forget Salt”
- “All In Their Heads”
- “Don’t Torture Mom & Dad”
- “Dog vs. Cat Medicine”
- “Farm vs Pet Medicine”
- “1/6 of US Deaths From Hospital Errors”
Not all psychological traits seem simply good: for the Big Five personality traits, it often seems that too extreme a score (most obviously on Neuroticism or Openness) would be a pretty bad thing. This may be vindicated by looking at the influence of genes on the two different categories. From Stanovich 2010, Rationality & The Reflective Mind:
For many years, evolutionary psychology had little to say about individual differences because the field had as a foundational assumption that natural selection would eliminate heritable differences because heritable traits would be driven to fixation (Buss, 2009 [The Handbook of Evolutionary Psychology]). Recently, however, evolutionary psychologists have attempted to explain the contrary evidence that virtually all cognitive and personality traits that have been measured have heritabilities hovering around 50%. Penke, Denissen, & Miller 2007 have proposed a theory that explains these individual differences. Interestingly, the theory accounts for heritable cognitive ability differences in a different way than it accounts for heritable cognitive thinking dispositions and personality variables. The basis of their theory is a distinction that I will be stressing throughout this book - that between typical performance indicators and optimal performance indicators.
Penke et al (2007) argue that “the classical distinction between cognitive abilities and personality traits is much more than just a historical convention or a methodological matter of different measurement approaches (Cronbach, 1949 [Essentials of Psychological Testing]), and instead reflects different kinds of selection pressures that have shaped distinctive genetic architectures for these two classes” (p.550) of individual differences. On their view, personality traits and thinking dispositions (reflective-level individual differences) represent preserved, heritable variability that is maintained by different biological processes than intelligence (algorithmic-level individual differences). Thinking dispositions and personality traits are maintained by balanced selection, most probably frequency-dependent selection (Buss, 2009). The most famous example of the latter is cheater-based personality traits that flourish when they are rare but become less adaptive as the proportion of cheaters in the population rises (as cheaters begin to cheat each other), finally reaching an equilibrium.
In contrast, variability in intelligence is thought to be maintained by constant changes in mutation load (Buss, 2009; Penke et al, 2007). As Pinker (2009 pg46) notes:
…new mutations creep into the genome faster than natural selection can weed them out. At any given moment, the population is laden with a portfolio of recent mutations, each of whose days are numbered. This Sisyphean struggle between selection and mutation is common with traits that depend on many genes, because there are so many things that can go wrong…Unlike personality, where it takes all kinds to make a world, with intelligence, smarter is simply better, so balancing selection is unlikely. But intelligence depends on a large network of brain areas, and it thrives in a body that is properly nourished and free of diseases and defects…Mutations in general are far more likely to be harmful than helpful, and the large, helpful ones were low-hanging fruit that were picked long ago in our evolutionary history and entrenched in the species…But as the barrel gets closer to the target, smaller and smaller tweaks are needed to bring any further improvement…Though we know that genes for intelligence must exist, each is likely to be small in effect, found in only a few people, or both [In a recent study of 6,000 children, the gene with the biggest effect accounted for less than one-quarter of an I.Q. point.]…The hunt for personality genes, though not yet Nobel-worthy, has had better fortunes. Several associations have been found between personality traits and genes that govern the breakdown, recycling or detection of neurotransmitters
Richerson & Boyd 2005, pg135; Not by genes alone: How culture transformed human evolution.
Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. Sometimes the belief in nature’s wisdom - and corresponding doubts about the prudence of tampering with nature, especially human nature - manifest as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris, or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in “nature knows best” attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.
From the review “The population genetics of beneficial mutations”, Orr 2010:
Under these so-called strong-selection weak-mutation conditions, the population is essentially made up of a single wild-type DNA sequence….Each of these [possible] sequences, including the wild-type, is assigned a [reproductive] fitness from some distribution. The key point, however, is that this overall distribution of fitness is unknown. Despite this, we do know two things. First, the wild-type allele is highly fit; indeed it is fitter than all of its m mutant “neighbour” sequences (this is why it is wild-type). Second, any beneficial mutations would be even fitter and so would fall even farther out in the tail of the fitness distribution. (We assume for now that this tail falls off in some “ordinary” smooth way; see below.) At some point in time, the environment changes and the wild-type allele slips slightly in fitness and one or more of the m mutations becomes beneficial. The question is: what is the size of the fitness gap between the wild-type and a beneficial sequence?
To answer this, Gillespie assumed that only one beneficial mutation is available. Taking advantage of an obscure part of EVT concerned with “extreme spacings”, he showed that, more or less independently of the shape of the unknown overall distribution of fitness, this fitness gap - the fitness effects of new beneficial mutations - is exponentially distributed. This result was later generalized by Orr (2003) to any modest number of beneficial mutations (i.e. the wild-type sequence might mutate to 5 or 10 or so different beneficial mutations). Mutation should thus often yield beneficial alleles of small effect and rarely yield those of large effect. In retrospect, it is clear that this exponential distribution of beneficial effects is a simple consequence of a well-known result from so-called peaks-over-threshold models in EVT (Leadbetter et al. 1983). A large set of results, concerning both the first substitution during adaptation and the properties of entire adaptive walks to local optima rest on this result (Orr 2002, 2004, 2005; Rokyta et al. 2005)…Unfortunately, the data available thus far from the relevant experiments are mixed.
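Gillespie’s claim that the result holds “more or less independently of the shape of the unknown overall distribution of fitness” is easy to check by simulation; a minimal sketch, with Gaussian fitnesses and all parameters chosen arbitrarily for illustration:

```python
import random
import statistics

random.seed(0)

# Gillespie's setup, sketched: each of n sequences draws an i.i.d. fitness
# from some unknown but "ordinary" distribution (Gaussian here, chosen
# arbitrarily). The gap between the two fittest sequences - between a newly
# beneficial mutant and the wild-type - should be roughly exponential.
gaps = []
for _ in range(5_000):
    fitnesses = sorted(random.gauss(0, 1) for _ in range(1_000))
    gaps.append(fitnesses[-1] - fitnesses[-2])  # top spacing

# A quick diagnostic for exponentiality: mean and standard deviation coincide.
m, s = statistics.mean(gaps), statistics.stdev(gaps)
assert abs(m - s) / m < 0.2
```

Swapping the Gaussian for another light-tailed distribution leaves the diagnostic essentially unchanged, which is the distribution-insensitivity the extreme-value argument predicts.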
For example, Baker et al 2002, Knight et al 1999 & Rauch et al 2012 do some genetics work with retarded children and turn up all sorts of mutations & changes, while Zechner et al 2001 observes that literally hundreds of mutations (many on the X chromosome) have been linked with retardation. Autism likewise impairs intelligence massively, and seems to be caused by de novo mutations; see Iossifov et al 2014, De Rubeis et al 2014. In contrast, the search for genetics leading to greater intelligence is still in its infancy with tentative findings of small effect.↩
One old observation in allometry is that animals, and mammals in particular, obey a particular mathematical relation between heart rate, mass, and longevity - except humans are an outlier, living almost twice as long as they ‘should’, even compared to chimpanzees. See “Human Longevity Compared to Mammals”, “Animal Longevity and Scale”; cf. the Heartbeat hypothesis.↩
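The heartbeat-hypothesis arithmetic can be sketched numerically; the heart rates and lifespans below are rough illustrative assumptions, not measured values:

```python
# Sketch of the heartbeat hypothesis: across mammals, heart rate scales
# roughly as mass^(-1/4) and lifespan as mass^(+1/4), so lifetime heartbeats
# come out roughly constant (~1e9) - while humans overshoot by 2-3x.
# All figures below are ballpark illustrations, not measurements.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def lifetime_beats(beats_per_minute: float, lifespan_years: float) -> float:
    return beats_per_minute * MINUTES_PER_YEAR * lifespan_years

mouse = lifetime_beats(600, 2)     # ~6e8 beats
elephant = lifetime_beats(30, 60)  # ~9e8 beats
human = lifetime_beats(65, 80)     # ~2.7e9 beats - the outlier

assert mouse < 1.5e9 and elephant < 1.5e9   # near the mammalian norm
assert human > 2 * max(mouse, elephant)     # humans live ~2x "too long"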
IQ scores of any reliability are unavailable from before the early 1900s; broad estimates from dysgenic considerations and computer models (of uncertain reliability) suggest genotypic IQs (ceilings given good environment, nutrition, etc.) for western Europe in the middle 90s.↩
I should note the authors claim their data shows that they found a Flynn effect and moreover, “The effect was found in the top 5% at a rate similar to the general distribution, providing evidence for the first time that the entire curve is likely increasing at a constant rate.” I disagree with this interpretation; the score increases come solely from the mathematical subtests. As they acknowledge on page 5:
In contrast to the mathematical ability results, the ACT-S, ACT-E, and SAT-V all indicated a slight decrease (− 0.05 for the ACT-S and SAT-V and − 0.06 for the ACT-E). For 7th-grade students the only verbal test that demonstrated a slight gain was the ACT-R (0.09). Appendixes A and B show increasing variances for the SAT-V and ACT-R, but fairly stable or slightly decreasing variances for the ACT-S and ACT-E. Therefore, the small composite gains on the SAT and ACT were generally composed of large gains on the math subtests and slight losses on the science and verbal subtests.
This is more than a little strange if the Flynn effect is genuinely operating, as an increase in Gf ought to increase scores on all subtests; it is more consistent with prosaic explanations like the increased emphasis on math education slightly increasing scores.↩
See Wikipedia or look at the literature, eg. “Profound effects of combining choline and piracetam on memory enhancement and cholinergic function in aged rats”↩
But note that Helland et al 2008 found the IQ increase was gone by the 7-year followup, the latest in a long line of infancy or early-childhood interventions to discover that promising early IQ gains “faded out”.↩
Emphasis added; Bostrom/Sandberg 2006.↩
Lynch et al 2011:
There is a large and often conflicting literature on the effects of modafinil on components of cognition. Some studies obtained a clear improvement in sustained attention in healthy human subjects (Randall et al., 2005) but others failed to find such effects (Turner et al., 2003). Similar discrepancies occur in the literature on animals (Waters et al., 2005). A recent, multi-factorial analysis provided convincing evidence that moderate doses of modafinil improve attention in healthy middle-aged rats without affecting motivation or locomotor activity (Morgan et al., 2007). Importantly, these effects became evident only as attentional demands were increased. In all, it seems reasonable at this point to conclude that modafinil’s effects on basic psychological state variables - wakefulness - can translate into selective improvements in attention.
There is also a sizable literature suggesting that the above conclusion can be extended to memory encoding. An intriguing aspect of these studies in rodents (Beracochea et al., 2002) and humans (Turner et al., 2003; Baranski et al., 2004; Muller et al., 2004; Randall et al., 2005) is that they generally point to a drug influence on working memory as opposed to the encoding of long-term memory for specific information (Minzenberg and Carter, 2008). (A similar argument was made earlier for Ritalin.) For example, the above noted work on middle-aged rats found no evidence for accelerated acquisition of a visual discrimination problem, with minimal demands on working memory, despite clear improvements in attention. There are, however, studies showing that modafinil accelerates the acquisition of simple rules (‘win-stay’) (Beracochea et al., 2003), a spatial learning protocol (Shuman et al., 2009), and a non-match to position problem (Ward et al., 2004) in rodents. It is tempting to speculate that we are here seeing hierarchical effects of modafinil such that enhanced wakefulness produces greater attention that in turn improves both working memory and simple rule learning.
But does the above sequence in fact improve the integrative psychological processes that constitute cognition? By far the greater part of the human studies with modafinil involves subjects with impairments to performance (sleep deprivation) or psychiatric disorders. None of the animal studies used recently developed tests (see below) that are explicitly intended to assess variables such as recall vs. recognition or ‘top-down’ forcing of attention. This leaves a small set of experiments involving performance by healthy human subjects on relatively simple learning/perceptual problems. A retrospective analysis of several studies led to the conclusion that modafinil does not produce a ‘global’ enhancement of cognition (Randall et al., 2005).
Tononi & Cirelli 2006:
About 40% of the energy requirements of the cerebral cortex - by far the most metabolically expensive tissue in the body - are due to neuronal repolarization following postsynaptic potentials [67]. The higher the synaptic weight impinging on a neuron, the higher this portion of the energy budget. Moreover, increased synaptic weight is thought to lead to increased average firing rates [68], and spikes in turn are responsible for another 40% of the gray matter energy budget [67]. Therefore, it would seem energetically prohibitive for the brain to let synaptic weight grow without checks as a result of waking plasticity. Indeed, if PET data [11] offer any indication, after just one waking day energy expenditure may grow by as much as 18%.
…Another benefit of synaptic downscaling/downselection during sleep would be in terms of space requirements. Synaptic strengthening is thought to be accompanied by morphological changes, including increased size of terminal boutons and spines, and synapses may even grow in number (e.g. [3,4,63]). But space is a precious commodity in the brain, and even minuscule increases in volume are extremely dangerous. For example, neocortical gray matter is tightly packed, with wiring (axons and dendrites) taking up ~60% of the space, synaptic contacts (boutons and spines) ~20%, and the rest (cell bodies, vessels, extracellular space) the remaining 20% [69]. Thus, sleep would be important not just to keep in check the metabolic cost of strengthened synapses, but also to curb their demands on brain real estate.
“Acetylcholine and synaptic homeostasis”, Fink et al 2012, suggests the mechanism of synaptic upscaling & downscaling to be related to acetylcholine:
We propose that the influence of acetylcholine (ACh) may provide a mechanism for both upscaling and downscaling of cortical synapses. Specifically, experimental studies have shown that ACh modulation switches the phase response curves of cortical pyramidal cells from Type II to Type I. Our computational studies of cortical networks show that the presence of ACh induces cellular and network dynamics which lead to net synaptic potentiation under a standard STDP rule, while the absence of ACh alters dynamics in such a way that the same STDP rule leads to net depotentiation (see Fig. 1). Thus the well-established prevalence of ACh in cortical circuits during waking may lead to global synaptic potentiation, while the absence of ACh during NREM sleep may lead to global depotentiation.
ACh is broken down by acetylcholinesterase, and ACh receptors can also be blocked by various drugs classified as anticholinergics; anticholinergics like diphenhydramine cause sedation, “brain fog”, and have been used to treat insomnia. If the absence of ACh enables the downscaling, could the process be sped up by intervening with such drugs during sleep? Alternately, the theory could be tested by intervening with the opposite drugs and testing how high brain caloric consumption is upon waking. (These would have to be animal studies; drugs like atropine or scopolamine are dangerous to use in humans.)
If this theory is borne out, it may suggest a more precise exception for piracetam or cholinergics in general: increasing potentiation may make the later depotentiation “too expensive” either energy- or time-wise. Additional predictions may be that: people who are more intelligent (thanks to having ACh upregulated for whatever reason) will need more sleep or energy; use of cholinergics will increase sleep or energy needs in normal people. Retrodictions include babies sleeping a great deal and the elderly sleeping less. (Since memory formation is already strongly linked to sleep and may increase sleep need itself, this is a confound that needs to be taken into account along with others like the major sleep disturbances of the elderly & lack of melatonin secretion.)↩
“Molecular Insights into Human Brain Evolution”, Bradbury 2005:
A bigger, more complex brain may have advantages over a small brain in terms of computing power, but brain expansion has costs. For one thing, a big brain is a metabolic drain on our bodies. Indeed, some people argue that, because the brain is one of the most metabolically expensive tissues in our body, our brains could only have expanded in response to an improved diet. Another cost that goes along with a big brain is the need to reorganise its wiring. “As brain size increases, several problems are created”, explains systems neurobiologist Jon Kaas (Vanderbilt University, Nashville, Tennessee, United States). “The most serious is the increased time it takes to get information from one place to another.” One solution is to make the axons of the neurons bigger but this increases brain size again and the problem escalates. Another solution is to do things locally: only connect those parts of the brain that have to be connected, and avoid the need for communication between hemispheres by making different sides of the brain do different things. A big brain can also be made more efficient by organising it into more subdivisions, “rather like splitting a company into departments”, says Kaas. Overall, he concludes, because a bigger brain per se would not work, brain reorganisation and size increase probably occurred in parallel during human brain evolution. The end result is that the human brain is not just a scaled-up version of a mammal brain or even of an ape brain.
Hills & Hertwig 2011:
Consider the human female pelvis. Because its dimensions are small relative to a baby’s head, obstetric complications during labor are common. Why hasn’t evolution improved the survival chances of both mother and baby by selecting for a larger female pelvis? The widely accepted explanation is that the optimal pelvis for bipedal locomotion and the optimal pelvis for encephalization (the progressive increase in the baby’s brain size) place competing demands on the human pelvis. Bipedal locomotion requires substantial skeletal changes, including alterations in the pelvic architecture (Wittman & Wall, 2007), and such changes must compete (in an evolutionary sense) with the obstetric demands of human babies’ relatively large brains.
pg 2, “The likelihood of cognitive enhancement”, Lynch et al 2011:
Anatomists often resort to allometry when dealing with questions of selective pressures on brain regions. Applied to brain proportions, this involves collecting measurements for the region of interest - e.g., frontal cortex - for a series of animals within a given taxonomic group and then relating it to the volume or weight of the brains of those animals. This can establish with a relatively small degree of error whether a brain component in a particular species is larger than would be predicted from that species’ brain size. While there is not a great deal of evidence, studies of this type point to the conclusion that cortical subdivisions in humans, including association regions, are about as large as expected for an anthropoid primate with a 1350 cm3 brain. The volume of area 10 of human frontal cortex, for example, fits on the regression line (area 10 vs. whole brain) calculated from published data (Semendeferi et al., 2001) for a series composed of gibbons, apes and humans (Lynch and Granger, 2008 [Big brain: the origins and future of human intelligence]). Given that this region is widely assumed to play a central role in executive functions and working memory, these observations do not encourage the idea that selective pressures for cognition have differentially shaped the proportions of human cortex. Importantly, this does not mean that those proportions are in any sense typical. The allometric equations involve different exponents for different regions, meaning that absolute proportions (e.g., primary sensory cortex vs. association cortex) change as brains grow larger. The balance of parts in the cortex of the enormous human brain is dramatically different than found in the much smaller monkey brain: area 10, for instance, occupies a much greater percentage of the cortex in man. 
But these effects seem to reflect expansion according to rules embedded in a conserved brain plan rather than selection for the specific pattern found in humans (Finlay et al., 2001).
Humans also do not rank first, or even close to first, in relative brain size (expressed as a percentage of body mass), in absolute size of the cerebral cortex, or in gyrification (3). At best, we rank first in the relative size of the cerebral cortex expressed as a percentage of brain mass, but not by far. Although the human cerebral cortex is the largest among mammals in its relative size, at 75.5% (4), 75.7% (5), or even 84.0% (6) of the entire brain mass or volume, other animals, primate and nonprimate, are not far behind: The cerebral cortex represents 73.0% of the entire brain mass in the chimpanzee (7), 74.5% in the horse, and 73.4% in the short-finned whale (3).
…If encephalization were the main determinant of cognitive abilities, small-brained animals with very large encephalization quotients, such as capuchin monkeys, should be more cognitively able than large-brained but less encephalized animals, such as the gorilla (2). However, the former animals with a smaller brain are outranked by the latter in cognitive performance (13).
…However, this notion is in disagreement with the observation that animals of similar brain size but belonging to different mammalian orders, such as the cow and the chimpanzee (both at about 400 g of brain mass), or the rhesus monkey and the capybara (at 70-80 g of brain mass), may have strikingly different cognitive abilities and behavioral repertoires.
…Despite common remarks in the literature that the human brain contains 100 billion neurons and 10- to 50-fold more glial cells (e.g., 57-59), no references are given to support these statements; to the best of my knowledge, they are none other than ballpark estimates (60). Comparing the human brain with other mammalian brains thus required first estimating the total numbers of neuronal and nonneuronal cells that compose these brains, which we did a few years ago (25). Remarkably, at an average of 86 billion neurons and 85 billion nonneuronal cells (25), the human brain has just as many neurons as would be expected of a generic primate brain of its size and the same overall 1:1 nonneuronal/ neuronal ratio as other primates (26). Broken down into the cerebral cortex, cerebellum, and rest of the brain, the neuronal scaling rules that apply to primate brains also apply to the human brain (25) (Fig. 3 A and C, arrows). Neuronal densities in the cerebral cortex and cerebellum also fit the expected values in humans as in other primate species (Fig. 3B), and the ratio between nonneuronal and neuronal cells in the whole human brain of 1:1 (not 10:1, as commonly reported) is similar to that of other primates (25). The number of neurons in the gray matter alone of the human cerebral cortex, as well as the size of the subcortical white matter and the number of nonneuronal cells that it contains, also conforms to the rules that apply to other primates analyzed (47). Most importantly, even though the relative expansion of the human cortex is frequently equated with brain evolution, which would have reached its crowning achievement in us (61), the human brain has the ratio of cerebellar to cerebral cortical neurons predicted from other mammals, primate and nonprimate alike (Fig. 4A).
Contrary to expectations, dividing total glucose use per minute in the cerebral cortex or whole brain (69) by the number of brain neurons revealed a remarkably constant average glucose use per neuron across the mouse, rat, squirrel, monkey, baboon, and human, with no significant relationship to neuronal density and, therefore, to average neuronal size (70). This is in contrast to the decreasing average metabolic cost of other cell types in mammalian bodies with increasing cell size (71-73), with the single possible exception of muscle fibers (74). The higher levels of expression of genes related to metabolism in human brains compared with chimpanzee and monkey brains (75, 76) might therefore be related not to an actual increase in metabolism per cell but to the maintenance of average neuronal metabolism in the face of decreasing metabolism in other cell types in the body. That the average energetic cost per neuron does not scale with average neuronal cell size has important physiological implications. First, considering the obligatory increased cost related to a larger surface area (68), the evolution of neurons with a constant average energetic cost regardless of their total cell size implies that the relationship between larger neuronal size and a larger G/N ratio must not be related to increased metabolic needs, as usually assumed.
…Second, the constant average energetic cost per neuron across species implies that larger neurons must compensate for the obligatory increased metabolic cost related to repolarizing the increased surface area of the cell membrane. This compensation could be implemented by a decreased number of synapses and/or decreased rates of excitatory synaptic transmission (69). Synaptic homeostasis and elimination of excess synapses [e.g., during sleep (77)], the bases of synaptic plasticity, might thus be necessary consequences of a tradeoff imposed by the need to constrain neuronal energetic expenditure (70). Another consequence of a seemingly constant metabolic cost per neuron across species is that the total metabolic cost of rodent and primate brains, and of the human brain, is a simple, linear function of their total number of neurons (70) (Fig. 6), regardless of average neuronal size, absolute brain size, or relative brain size compared with the body. At an average rate of 6 kcal/d per billion neurons (70), the average human brain, with 86 billion neurons, costs about 516 kcal/d. That this represents an enormous 25% of the total body energetic cost is simply a result of the “economical” neuronal scaling rules that apply to primates in comparison to rodents, and probably to other mammals in general
…Growing a large body comes at a cost. Although large animals require less energy per unit of body weight, they have considerably larger total metabolic requirements that, on average, scale with body mass raised to an exponent of ∼3/4 (84-87). Thus, large mammals need to eat more, and they cannot concentrate on rare, hard-to-find, or catch foods (88). Adding neurons to the brain, however, also comes at a sizable cost, as reviewed above: 6 kcal/d per billion neurons (70). In primates, whose brain mass scales linearly with its number of neurons, this implies that total brain metabolism scales linearly with brain volume or mass, that is, with an exponent of 1, which is much greater than the much cited 3/4 exponent of Kleiber (84) that relates body metabolism to body mass. The discrepancy suggests that, per gram, the cost of primate brain tissue scales faster than the cost of nonneuronal bodily tissues, which calls for a modification of the “expensive tissue hypothesis” of brain evolution (89), according to which brain size is a limiting factor. Given the steep, linear increase in brain metabolic cost with increasing numbers of neurons, we conclude that metabolic cost is a more limiting factor to brain expansion than previously suspected. In our view, it is not brain size but, instead, absolute number of neurons that imposes a metabolic constraint on brain scaling in evolution, because individuals with larger numbers of neurons must be able to sustain their proportionately larger metabolic requirements to keep their brain functional. The larger the number of neurons, the higher is the total caloric cost of the brain, and therefore the more time required to be spent feeding to support the brain alone, and feeding can be very time-consuming (90). 
Based on their brain mass [estimated from cranial capacity (91)], we predicted that total numbers of neurons in the brain increased from 27 to 35 billion neurons in Australopithecus and Paranthropus species to close to 50-60 billion neurons in Homo species from Homo rudolfensis to Homo antecessor, to 62 billion neurons in Homo erectus, and to 76-90 billion neurons in Homo heidelbergensis and Homo neanderthalensis (62), which is within the range of variation found in modern Homo sapiens (25). It can thus be seen how any increase in total numbers of neurons in the evolution of hominins and great apes would have taxed survival in a limiting, if not prohibitive, way, given that it probably would have to occur in a context of already limiting feeding hours: The added 60 billion brain neurons from an orangutan-sized hominin ancestor to modern Homo require an additional 360 kcal/d, which is probably not readily available to great apes on their diet.
It has been proposed that the advent of the ability to control fire to cook foods, which increases enormously the energy yield of foods and the speed with which they are consumed (92, 93), may have been a crucial step in allowing the near doubling of numbers of brain neurons that is estimated to have occurred between H. erectus and H. sapiens (94).
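The energetics quoted above reduce to a single linear rule - 6 kcal per day per billion neurons - which recovers each of the quoted figures:

```python
KCAL_PER_BILLION_NEURONS = 6  # Herculano-Houzel's figure, quoted above

def brain_kcal_per_day(neurons_in_billions: float) -> float:
    """Daily metabolic cost of a primate brain: linear in neuron count,
    unlike whole-body metabolism, which scales as mass**(3/4) (Kleiber)."""
    return KCAL_PER_BILLION_NEURONS * neurons_in_billions

# The average human brain (~86 billion neurons) costs ~516 kcal/day,
# roughly 25% of a ~2000 kcal/day energy budget.
assert brain_kcal_per_day(86) == 516

# The ~60 billion neurons added since an orangutan-sized ancestor demand
# an extra ~360 kcal/day - the quoted feeding-time constraint on apes.
assert brain_kcal_per_day(60) == 360
```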
“How Hard Is Artificial Intelligence? Evolutionary Arguments and Selection Effects”, Shulman & Bostrom 2012:
…convergent evolution-the independent development of an innovation in multiple taxa-can help us to understand the evolvability of human intelligence and its precursors, and to evaluate the evolutionary arguments for AI.
The Last Common Ancestor (LCA) shared between humans and octopuses, estimated to have lived at least 560 million years in the past, was a tiny wormlike creature with an extremely primitive nervous system; it was also an ancestor to nematodes and earthworms (Erwin and Davidson 2002). Nonetheless, octopuses went on to evolve extensive central nervous systems, with more nervous system mass (adjusted for body size) than fish or reptiles, and a sophisticated behavioral repertoire including memory, visual communication, and tool use. [See e.g. Mather (1994, 2008), Finn, Tregenza, and Norman (2009) and Hochner, Shomrat, and Fiorito (2006) for a review of octopus intelligence.] Impressively intelligent animals with more recent LCAs include, among others, corvids (crows and ravens, LCA about 300 million years ago),[For example, a crow named Betty was able to bend a straight wire into a hook in order to retrieve a food bucket from a vertical tube, without prior training; crows in the wild make tools from sticks and leaves to aid their hunting of insects, pass on patterns of tool use, and use social deception to maintain theft-resistant caches of food; see Emery and Clayton (2004). For LCA dating, see Benton and Ayala (2003).] elephants (LCA about 100 million years ago). [See Archibald (2003) for LCA dating, and Byrne, Bates, and Moss (2009) for a review arguing that elephants’ tool use, number sense, empathy, and ability to pass the mirror test suggest that they are comparable to non-human great apes.] In other words, from the starting point of those wormlike common ancestors in the environment of Earth, the resources of evolution independently produced complex learning, memory, and tool use both within and without the line of human ancestry.
See the chart on page 3 of “The pattern of evolution in Pleistocene human brain size”; note also how high some of the recent skull volumes are - ~1800 cc - compared to modern cranial capacity with an average closer to 1500 cc (although apparently modern extremes can still reach 1800-1900 cc).↩
The Neanderthals’ birth brain size was similar to ours, and their adult brain size was noticeably larger:
“Brain size reduction in modern humans over the past 40,000 years is well-documented,” the researchers said in their notes. “We hypothesize that growing smaller but similarly efficient brains might have represented an energetic advantage, which paid off in faster reproductive rates in modern [humans] compared to Pleistocene people. Reducing brain size thus might represent an evolutionary advantage.”
“Evolution of the human brain: is bigger better?”: “Since the Late Pleistocene (approximately 30,000 years ago), human brain size decreased by approximately 10%” For popular coverage of explanations, see Discover’s “If Modern Humans Are So Smart, Why Are Our Brains Shrinking?”.↩
The data is uncertain, but there seems to be a substantial increase in the ratio of old skeletons found over time; Caspari & Lee 2004 (“Older age becomes common late in human evolution”), pg 2, find the ‘older to younger adults’ ratio for various humanoid groups rising substantially over evolutionary time.
This could have many explanations (perhaps a slow accretion of technology/culture allowed older people to survive with no connection to human biology or evolution), but in this context, I can’t help but wonder - could old age be increasing because intelligence is so expensive that old age is the only way for the genes to recoup their investments? Or could it be that increases in human intelligence used to pay off within a normal lifespan because the humans learned faster, but now that they have reached a limit on their intelligence and how fast they learn, the only way to be more effective is to learn for longer?↩
…Wray and his colleagues compared SLC2A1 in humans and other animals. They discovered that our ancestors acquired an unusually high number of mutations in the gene. The best explanation for that accumulation of mutations is that SLC2A1 experienced natural selection in our own lineage, and the new mutations boosted our reproductive success. Intriguingly, the Duke team discovered that the mutations didn’t alter the shape of the glucose transporters. Rather, they changed stretches of DNA that toggled the SLC2A1 gene on and off.
Wray guessed that these mutations changed the total number of glucose transporters built in the human brain. To test his theory, he looked at slices of human brain tissue. In order to make glucose transporters, the cells must first make copies of the SLC2A1 gene to serve as a template. Wray discovered that in human brains there were 2.5 to 3 times as many copies of SLC2A1 as there were in chimpanzee brains, suggesting the presence of more glucose transporters as well. Then he looked at glucose transporters that deliver the sugar to muscles. The gene for these muscle transporters, called SLC2A4, also underwent natural selection in humans, but in the opposite direction. Our muscles contain fewer glucose transporters than in chimps’ muscles. Wray’s results support the notion that our ancestors evolved extra molecular pumps to funnel sugar into the brain, while starving muscles by giving them fewer transporters.
I am not the only one to have noticed that the genetic disorder theory of Ashkenazi intelligence seems like a beautiful example of trade-offs; Hills & Hertwig 2011:
The Ashkenazi Jew population provides a less well-known but more dramatic example of between-domains trade-offs (see Cochran, Hardy, & Harpending, 2006). Among the Ashkenazi Jews, the average IQ is approximately 0.7 to 1 standard deviation above that of the general European population. Recent evidence indicates that this rise in IQ was the consequence of evolutionary selection for greater intelligence among European Jews over approximately the last 2,000 years. However, this greater capacity for learning appears to have come with a specific side effect: a rise in the prevalence of sphingolipid diseases, such as Tay-Sachs, Niemann Pick, Gaucher, and mucolipidosis. Central to our point, these diseases are correlated with the same neural causes that rendered possible increased IQ, such as increased dendrite development.
…overt volumetric decline of particular brain structures, such as the hippocampus and frontal lobe, has only been observed in humans…In contrast to humans, who showed a decrease in the volume of all brain structures over the lifespan [on fMRI], chimpanzees did not display significant age-related changes. Using an iterative age-range reduction procedure, we found that the significant aging effects in humans were because of the leverage of individuals that were older than the maximum longevity of chimpanzees. Thus, we conclude that the increased magnitude of brain structure shrinkage in human aging is evolutionarily novel and the result of an extended lifespan.
“Paleontological Tests: Human-like Intelligence Is Not A Convergent Feature Of Evolution”, Lineweaver 2007 discusses the rarity of intelligence in the context of the Fermi paradox:
G.G. Simpson in “The Nonprevalence of Humanoids” (1964) articulated the case that humans (or any given species) were a quirky product of terrestrial evolution and therefore we should not expect to find humanoids elsewhere. Thus stupid things do not, in general, acquire human-like intelligence. The evidence we have tells us that once extinct, species do not re-evolve. Evolution is irreversible. This is known as Dollo’s Law (Dollo 1893, Gould 1970). The re-evolution of the same species is not something that happens only rarely. It never has happened…
“We are not requiring that they follow the particular route that led to the evolution of humans. There may be many different evolutionary pathways, each unlikely, but the sum of the number of pathways to intelligence may nevertheless be quite substantial.” (Sagan 1995a)
To which Mayr replied:
“Sagan adopts the principle ‘it is better to be smart than to be stupid,’ but life on Earth refutes this claim. Among all the forms of life, neither the prokaryotes nor protists, fungi or plants has evolved smartness, as it should have if it were ‘better.’ In the 28 plus phyla of animals, intelligence evolved in only one (chordates) and doubtfully also in the cephalopods. And in the thousands of subdivisions of the chordates, high intelligence developed in only one, the primates, and even there only in one small subdivision. So much for the putative inevitability of the development of high intelligence because ‘it is better to be smart.’” (Mayr 1995b)
…These ancestors and their lineages have continued to exist and evolve and have not produced intelligence. All together that makes about 3 billion years of prokaryotic evolution that did not produce high intelligence and about 600 million years of protist evolution that did not produce high intelligence…What Drake, Sagan and Conway-Morris have done is interpret correlated parallel moves in evolution as if they were unconstrained by shared evolution but highly constrained by a universal selection pressure towards intelligence that could be extrapolated to extraterrestrials. I am arguing just the opposite – that the apparently independent evolution toward higher E.Q. is largely constrained by shared evolution with no evidence for some universal selection pressure towards intelligence. If this view is correct, we cannot extrapolate the trends toward higher E.Q. to the evolution of extraterrestrials. If the convergence of dolphins and humans on high E.Q. has much to do with the 3.5 Gyr of shared history (and I argue that it has everything to do with it) then we are not justified to extrapolate this convergence to other extraterrestrial life forms that did not share this history. Extraterrestrials are related to us in the sense that they may be carbon and water based - they may have polymerized the same monomers using amino acids to make proteins, nucleotides to make a genetic code, lipids to make fats and sugars to make polysaccharides. However, our “common ancestor” with extraterrestrials was probably pre-biotic and did not share a common limited set of genetic toggle switches that is responsible for the apparently independent convergences among terrestrial life forms.
…If heads were a convergent feature of evolution one would expect independent lineages to evolve heads. Our short twig on the lower left labeled “Homo” has heads, but heads are found in no other branch. Our two closest relatives, plants and fungi, do not seem to have any tendency toward evolving heads. The evolution of heads (encephalization) is therefore not a convergent feature of evolution. Heads are monophyletic and were once the possessions of only one quirky unique species that lived about six or seven hundred million years ago. Its ancestors no doubt possessed some kind of proto-head related to neural crests and placodes (Wada 2001, Manzanares and Nieto 2003). Drake (2006) stated that “[intelligence] is not a fluke that has occurred in some small sub-set of animal life.” However, Fig. 4 shows that intelligence, heads, even all animal life or multicellular life, may well be a fluke that is a small sub-set of terrestrial life. One potential problem with this conclusion: It is possible that existing heads could have suppressed the emergence of subsequent heads. Such suppression would be difficult to establish…Life has been evolving on this planet for ~4 billion years. If the Planet of the Apes Hypothesis is correct and there is an intelligence niche that we have only recently occupied - who occupied it 2 billion years ago, or 1 billion years ago or 500 million years ago? Stromatolites? Algae? Jellyfish?
…Today there are about a million species of protostomes and about 600,000 species of deuterostomes (of which we are one). We consider ourselves to be the smartest deuterostome. The most intelligent protostome is probably the octopus. After 600 million years of independent evolution and despite their big brains, octopi do not seem to be on the verge of building radio telescopes. The dolphinoidea evolved a large E.Q. between ~60 million years ago and ~20 million years ago (Marino et al 2004). Thus, dolphins have had ~20 million years to build a radio telescope and have not done so. This strongly suggests that high E.Q. may be a necessary, but is not a sufficient condition for the construction of radio telescopes. Thus, even if there were a universal trend toward high E.Q., the link between high E.Q. and the ability to build a radio telescope is not clear. If you live underwater and have no hands, no matter how high your E.Q., you may not be able to build, or be interested in building, a radio telescope.
More on octopuses and squids as possibly the only other example for intelligence:
That’s because other creatures that are believed intelligent - such as dolphins, chimpanzees, some birds, elephants - are relatively closely related to humans. They’re all on the vertebrate branch of the tree of life, so there’s a chance the intelligence shares at least some characteristics. Octopuses, however, are invertebrates. Our last common ancestor reaches back to the dim depths of time, 500 million to 600 million years ago. That means octopus intelligence likely evolved entirely separately and could be very different from that of vertebrates. “Octopuses let us ask which features of our minds can we expect to be universal whenever intelligence arises in the universe, and which are unique to us,” Godfrey-Smith said. “They really are an isolated outpost among invertebrates. … From the point of view of the philosophy of the mind, they are a big deal.”
One of the major explanations for why primates and humans evolved intelligence is their close social relations & pack structure. Cephalopods are solitary, so why are they intelligent? Camouflage seems like a possibility… as does the endemic cephalopod cannibalism; from “Cannibalism in Cephalopods”:
Cannibalism is so common in adult squids that it was assumed that they are unable to maintain their daily consumption without a cannibalistic part in their diet, due to their high metabolic rates…Cephalopods have the capacity to prey on both relatively small and large prey due to the skilfulness of their arms and tentacles as well as the possibility to shred their food with their beaks…Recognition of familiarity in cephalopods is possible, but not certain…and the possible lack of recognition could promote non hetero-cannibalism in cephalopods.
To extensively quote the June 2011 Scientific American cover story:
“I think it is very likely that there is a law of diminishing returns” to increasing intelligence indefinitely by adding new brain cells…Size carries burdens with it, the most obvious one being added energy consumption. In humans, the brain is already the hungriest part of our body: at 2% of our body weight, this greedy little tapeworm of an organ wolfs down 20% of the calories that we expend at rest. In newborns, it’s an astounding 65%…
For decades this dividing of the brain into more work cubicles was viewed as a hallmark of intelligence. But it may also reflect a more mundane truth…: specialization compensates for the connectivity problem that arises as brains get bigger. As you go from a mouse brain to a cow brain with 100 times as many neurons, it is impossible for neurons to expand quickly enough to stay just as well connected. Brains solve this problem by segregating like-functioned neurons into highly interconnected modules, with far fewer long-distance connections between modules. The specialization between right and left hemispheres solves a similar problem; it reduces the amount of information that must flow between the hemispheres, which minimizes the number of long, interhemispheric axons that the brain needs to maintain. “All of these seemingly complex things about bigger brains are just the backbends that the brain has to do to satisfy the connectivity problem” as it gets larger…“It doesn’t tell us that the brain is smarter.”…Neurons do get larger as brain size increases, but not quite quickly enough to stay equally well connected. And axons do get thicker as brains expand, but not quickly enough to make up for the longer conduction delays…In fact, neuroscientists have recently seen a similar pattern in variations within humans: people with the quickest lines of communication between their brain areas also seem to be the brightest. One study…used functional magnetic resonance imaging to measure how directly different brain areas talk to one another - that is, whether they talk via a large or a small number of intermediary areas…Shorter paths between brain areas correlated with higher IQ…[Others] compared working memory (the ability to hold several numbers in one’s memory at once) among 29 healthy people…People with the most direct communication and the fastest neural chatter had the best working memory.
It is a momentous insight. We know that as brains get larger, they save space and energy by limiting the number of direct connections between regions. The large human brain has relatively few of these long-distance connections. But…these rare, nonstop connections have a disproportionate influence on smarts: brains that scrimp on resources by cutting just a few of them do noticeably worse…There is another reason to doubt that a major evolutionary leap could lead to smarter brains. Biology may have had a wide range of options when neurons first evolved, but 600 million years later a peculiar thing has happened. The brains of the honeybee, the octopus, the crow and intelligent mammals, Roth points out, look nothing alike at first glance. But if you look at the circuits that underlie tasks such as vision, smell, navigation and episodic memory of event sequences, “very astonishingly they all have absolutely the same basic arrangement.” Such evolutionary convergence usually suggests that a certain anatomical or physiological solution has reached maturity so that there may be little room left for improvement…So have humans reached the physical limits of how complex our brain can be, given the building blocks that are available to us? Laughlin doubts that there is any hard limit on brain function the way there is one on the speed of light. “It’s more likely you just have a law of diminishing returns,” he says. “It becomes less and less worthwhile the more you invest in it.” Our brain can pack in only so many neurons; our neurons can establish only so many connections among themselves; and those connections can carry only so many electrical impulses per second. Moreover, if our body and brain got much bigger, there would be costs in terms of energy consumption, dissipation of heat and the sheer time it takes for neural impulses to travel from one part of the brain to another.
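The connectivity point in the quote above - that what matters is not brain size but how few intermediary hops separate regions - can be illustrated abstractly with graph path lengths. This is only a toy sketch, not a model of any actual brain: the "regions" and edges below are invented, and it merely shows how a single long-distance shortcut lowers the average number of intermediaries in a network, the quantity the quoted fMRI studies found correlated with IQ.

```python
from collections import deque

def avg_path_length(graph):
    """Mean shortest-path length over all ordered node pairs (BFS, unweighted)."""
    total, pairs = 0, 0
    for src in graph:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        for dst in graph:
            if dst != src:
                total += dist[dst]
                pairs += 1
    return total / pairs

# A 6-node ring of "regions": every message passes through intermediaries.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# The same ring plus one long-distance "shortcut" connection (0-3).
shortcut = {i: list(ring[i]) for i in range(6)}
shortcut[0].append(3)
shortcut[3].append(0)

print(avg_path_length(ring))      # longer average path in the plain ring
print(avg_path_length(shortcut))  # one extra long-range edge shortens it
```

A single added edge out of seven drops the average path length noticeably, which is the intuition behind "brains that scrimp on resources by cutting just a few of them do noticeably worse."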
Counter-evidence would be observations that indicate evolution trying to compensate for limits in one system by investing even more into another system; for example, it has been observed that childbirth in humans is extremely risky and dangerous compared to other primates because the infant head is so enormous compared to the birth canal. If intelligence weren’t valuable, one would expect the head size to remain constant or decrease, and one certainly would not expect the over-sized human brain to grow even faster after childbirth; yet the human prefrontal cortex grows much faster in infancy than the chimpanzee prefrontal cortex does.↩
One peculiar example of how human brains may represent diminishing returns with a vengeance is how little damage losing most of the brain can cause - the loss of neurons and brain volume involved in hydrocephalus can be staggering, yet result only in mental impairment (as opposed to death) or even result in individuals who are above-normal in IQ & ‘bright’; neuroscientist Bradley Voytek covers examples in “Why We Don’t Need A Brain”.↩
eg. chimpanzees outperform humans on the simple working memory task Monkey Ladder (but Silberberg & Kearns 2009 and Cook & Wilson 2010 claim humans are equal or better with training; see also “Super Smart Animals”). Another fun statistic is that besides obviously being stronger, faster, and more dangerous than humans, chimpanzees have better immune systems inasmuch as they don’t overreact - overreacting being a major cause of common issues like arthritis or asthma.↩
“If a Lion Could Talk: Animal Intelligence and the Evolution of Consciousness”, Stephen Budiansky:
Giving a blind person a written IQ test is obviously not a very meaningful evaluation of his mental abilities. Yet that is exactly what many cross-species intelligence tests have done. Monkeys, for example, were found not only to learn visual discrimination tasks but to improve over a series of such tasks – they formed a learning set, a general concept of the problem that betokened a higher cognitive process than a simple association. Rats given the same tasks showed difficulty in mastering the problems and no ability to form a learning set. The obvious conclusion was that monkeys are smarter than rats, a conclusion that was comfortably accepted, as it fit well with our preexisting prejudices about the distribution of general intelligence in nature. But when the rat experiments were repeated, only this time the rats were given the task of discriminating different smells, they learned quickly and showed rapid improvement on subsequent problems, just as the monkeys did.
The problem of motivation is another major confounding variable. Sometimes we may think we are testing an animal’s brain when we are only testing its stomach. For example, in a series of studies goldfish never learned to improve their performance when challenged with “reversal” tasks. These are experiments in which an animal is trained to pick one of two alternative stimuli (a black panel versus a white panel, say) in order to obtain a food reward; the correct answer is then switched and the subject has to relearn which one to pick. Rats quickly learned to switch their response when the previously rewarded answer no longer worked. Fish didn’t. This certainly fit comfortably with everyone’s sense that fish are dumber than rats. But when the experiment was repeated with a different food reward (a paste squirted into the tank right where the fish made its correct choice, as opposed to pellets dropped into the back of the tank), lo and behold the goldfish suddenly did start improving on reversal tasks. Other seemingly fundamental learning differences between fish and rodents likewise vanished when the experiments were redesigned to take into account differences in motivation.
Equalizing motivation is an almost insoluble problem for designers of experiments. Are three goldfish pellets the equivalent of one banana or fifteen bird seeds? How could we even know? We would somehow have to enter into the internal being of different animals to know for sure, and if we could do that we would not need to be devising roundabout experiments to probe their mental processes in the first place. When we do control for all of the confounding variables that we possibly can, the striking thing about the “pure” cognitive differences that remain is how the similarities in performance between different animals given similar problems vastly outweigh the differences. To be sure, there seems to be little doubt that chimpanzees can learn new associations with a single reinforced trial, and that that is genuinely faster than other mammals or pigeons do it. Monkeys and apes also learn lists faster than pigeons do. Apes and monkeys seem to have a faster and more accurate grasp of numerosity judgments than birds do. The ability to manipulate spatial information appears to be greater in apes than in monkeys.
But again and again experiments have shown that many abilities thought the sole province of “higher” primates can be taught, with patience, to pigeons or other animals. Supposedly superior rhesus monkeys did better than the less advanced cebus monkeys in a visual learning-set problem using colored objects. Then it turned out that the cebus monkeys did better than the rhesus monkeys when gray objects were used. Rats were believed to have superior abilities to pigeons in remembering locations in a radial maze. But after relatively small changes in the procedure and the apparatus, pigeons did just as well.
If such experiments had shown, say, that monkeys can learn lists of forty-five items but pigeons can only learn two, we would probably be convinced that there are some absolute differences in mental machinery between the two species. But the absolute differences are far narrower. Pigeons appear to differ from baboons and people in the way they go about solving problems that involve matching up two images that have been rotated one from the other, but they still get the right answers. They essentially do just as well as monkeys in categorizing slides of birds or fish or other things. Euan Macphail’s review of the literature led him to conclude that when it comes to the things that can be honestly called general intelligence, no convincing differences, either qualitative or quantitative, have yet been demonstrated between vertebrate species. While few cognitive researchers would go quite so far – and indeed we will encounter a number of examples of differences in mental abilities between species that are hard to explain as anything but a fundamental difference in cognitive function – it is striking how small those differences are, far smaller than “common sense” generally has it. Macphail has suggested that the “no-difference” stance should be taken as a “null hypothesis” in all studies of comparative intelligence; that is, it is an alternative that always has to be considered and ought to be assumed to be the case unless proven otherwise.
A recent example of teaching pigeons something previously only rhesus monkeys had been shown to learn is a 2011 paper demonstrating that pigeons can learn the general concept of ‘ascending’ or ‘larger’ groups - being taught to peck on groups of 3 rather than 2, or 4 rather than 3, and generalizing to pecking groups of 8 rather than 6.↩
“The pursuit of happiness (with or without kids)”, BBC News Online Magazine 2003:
Again, the figures do not bear it out. While the birth rate in the UK is the lowest since records began in 1924, our level of contentment has remained fairly steady. Two of the foremost thinkers on well-being, Richard Layard and Andrew Oswald, agree that children have a statistically insignificant impact on our happiness…In 2001, almost 90% of British people reported they were very or fairly satisfied with life. According to this new study, those without children are, by and large, every bit as content as those with…For mothers in particular, parenthood brings a new sort of pleasure, the result of spending time with their children, seeing them develop and providing a different take on life. Yet this comes at a cost, both financial and emotional, according to the report, which spoke to 1,500 adults, parents and non-parents, between the ages of 20 and 40. “Full-time working mothers are lower paid relative to women without children,” says Kate Stanley, who carried out the survey for the Institute for Public Policy Research. Most women also tend to take on the lion’s share of domestic and child-care duties, according to the survey. And since income and independence have a bearing on happiness, what motherhood giveth with one hand, it taketh away with the other. The trade-off is less acute for men, but according to the survey, they are less ecstatic about children anyway. While two-thirds of mothers say their children make them most happy, just over 40% of fathers agree…On the other side, those without children recognise they are freer to pursue their own interests and enjoyment than their tied-up, family-focused friends.
The Little Book of Talent (Coyle 2012), pg 80:
The solution is to ignore the bad habit and put your energy toward building a new habit that will override the old one. A good example of this technique is found in the work of the Shyness Clinic, a program based in Los Altos, California, that helps chronically shy people improve their social skills. The clinic’s therapists don’t delve into a client’s personal history; they don’t try to “fix” anything. Instead, they focus on building new skills through what they call a social fitness model: a series of simple, intense, gradually escalating workouts that develop new social muscles. One of the first workouts for a Shyness Clinic client is to walk up to a stranger and ask for the time. Each day the workout grows more strenuous - soon clients are asking five strangers for the time, making phone calls to acquaintances, or chatting with a stranger in an elevator. After a few months, some clients are “socially fit” enough to perform the ultimate workout: They walk into a crowded grocery store, lift a watermelon above their head, and purposely drop it on the floor, triumphantly enduring the stares of dozens of strangers. (The grocery store cleanup crew doesn’t enjoy this quite as much as the clients do.)
Handbook of Psychopathy, ed. Christopher Patrick 2005; “Psychopathic Personality: The Scope of the Problem”, Lykken:
For example, in her important study of mental illness in primitive societies, Murphy (1976) found that the Yupic-speaking Eskimos in northwest Alaska have a name, kunlangeta, for the
man who, for example, repeatedly lies and cheats and steals things and does not go hunting and, when the other men are out of the village, takes sexual advantage of many women - someone who does not pay attention to reprimands and who is always being brought to the elders for punishment. One Eskimo among the 499 on their island was called kunlangeta. When asked what would have happened to such a person traditionally, an Eskimo said that probably somebody would have pushed him off the ice when nobody else was looking. (p. 1026)
This is interesting since, out of 500, the usual American base rates would predict not 1 but >10 psychopaths. Is this all due to the tribal and closely knit nature of more aboriginal societies, or could Eskimo society really have been selecting against psychopaths while big modern societies give scope for their talents & render them more evolutionarily fit? This may be unanswerable until the relevant genes are identified and samples of gene pools examined for the frequencies.
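The base-rate arithmetic behind that “>10” can be made explicit. Assuming, purely for illustration, a Western base rate of ~2% (estimates vary and are contested), a minimal sketch of the expected count and of how improbable observing only 1 case in 500 would be if the same rate applied:

```python
import math

n = 500   # islanders in Murphy's sample (rounded from 499)
p = 0.02  # assumed Western psychopathy base rate - illustrative only

# Expected number of psychopaths at that base rate:
expected = n * p

# Binomial probability of observing 1 or fewer cases out of 500
# if the 2% base rate really held there:
prob_at_most_one = sum(
    math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in (0, 1)
)

print(expected)          # 10.0
print(prob_at_most_one)  # well under 0.1%
```

So under that assumed rate, a single case in 500 would be wildly unlikely by chance alone, which is why the Eskimo observation calls for some explanation (selection, social structure, or simple reporting differences).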
“Psychopathy in specific subpopulations”, Sullivan & Kosson (Handbook):
Rasmussen and colleagues (1999) hypothesized that in a nation such as Norway, where imprisonment is less frequent, more severe offenders who would be incarcerated anywhere are likely to comprise a higher proportion of the inmate population. However, this explanation does not likely apply to Scotland: The incarceration rate for the United States is five to eight times that of Scotland, but the base-rates of psychopathy in Scottish prisons are extremely low when compared to North American samples (i.e., 3% in Scotland vs. 28.4% in North America, applying the traditional PCL-R cutoff of ≥30; Cooke, 1995; Cooke & Michie, 1999; Hare, 1991).
“Treatment of Psychopathy: A Review of Empirical Findings”, Harris & Rice 2006 (Handbook):
We believe there is no evidence that any treatments yet applied to psychopaths have been shown to be effective in reducing violence or crime. In fact, some treatments that are effective for other offenders are actually harmful for psychopaths in that they appear to promote recidivism. We believe that the reason for these findings is that psychopaths are fundamentally different from other offenders and that there is nothing “wrong” with them in the manner of a deficit or impairment that therapy can “fix.” Instead, they exhibit an evolutionarily viable life strategy that involves lying, cheating, and manipulating others.
The evolutionary hypothesis of psychopathy is striking (eg. it’s partially heritable; or, sex offenders who target post-pubertal women have the highest PCL-R scores compared to any other subdivision of sex offenders), but highly speculative. It’s discussed a little skeptically in the chapter “Theoretical and Empirical Foundations” in the Handbook.↩