Are Sunk Costs Fallacies?

Human and animal sunk costs often aren’t, and sunk cost bias may be useful on an individual level to encourage learning. Convincing examples of sunk cost bias typically operate on organizational levels and are probably driven by non-psychological causes like competition.
psychology, philosophy, decision-theory, survey
2012-01-24–2019-06-12 finished certainty: likely importance: 9


“It is time to let bygones be bygones”

Khieu Samphan1, Khmer Rouge head of state

The sunk cost fallacy (“Concorde fallacy”, “escalation bias”, “commitment effect” etc.) could be defined as when an agent ignores that option X has the highest expected value, and instead chooses option Y because he chose option Y many times before, or simply as “throwing good money after bad”. It can be seen as an attempt to derive some gain from mistaken past choices. (A slogan for avoiding sunk costs: “give up your hopes for a better yesterday!”) The single most famous example, and the reason for it also being called the “Concorde fallacy”, would be the British and French governments investing hundreds of millions of dollars into the development of a supersonic passenger jet despite knowing that it would never succeed commercially2. Since Arkes & Blumer 1985’s3 forceful investigation & denunciation, it has become received wisdom4 that sunk costs are a bane of humanity.

But to what extent is the “sunk cost fallacy” a real fallacy?

Below, I argue the following:

  1. sunk costs are probably issues in big organizations

    • but maybe not ones that can be helped
  2. sunk costs are not issues in animals

  3. sunk costs appear to exist in children & adults

    • but many apparent instances of the fallacy are better explained as part of a learning strategy
    • and there’s little evidence sunk cost-like behavior leads to actual problems in individuals
  4. much of what we call “sunk cost” looks like simple carelessness & thoughtlessness

Subtleties

“One cannot proceed from the informal to the formal by formal means.”

Alan Perlis, “Epigrams on Programming”

A “sunk cost fallacy” is clearly a fallacy in a simple model: ‘imagine an agent A who chooses between option X which will return $10 and option Y which will return $6, and agent A in previous rounds chose Y’. If A chooses X, it will be better off by $4 than if it chooses Y. This is correct and as hard to dispute as ‘A implies B; A; therefore B’. We can call both examples valid. But in philosophy, when we discuss modus ponens, we agree that it is always valid, but we do not always agree that it is sound: that A does in fact imply B, or that A really is the case, and so B is the case. ‘The moon being made of cheese implies the astronauts walked on cheese; the moon is made of cheese; therefore the astronauts walked on cheese’ is logically valid, but not sound, since we don’t think that the moon is made of cheese. Or we differ with the first line as well, pointing out that only some of the Apollo astronauts walked on the moon. We reject the soundness.
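To make the toy model concrete, here is a minimal sketch (with invented numbers) of why the sunk amount drops out of any forward-looking comparison: it is subtracted from every option equally, so it cannot change the ranking.

```python
# Hypothetical payoffs from the toy model above: X returns $10, Y returns $6.
options = {"X": 10, "Y": 6}
already_spent_on_Y = 100   # sunk: paid no matter what is chosen next

def net_outcome(option):
    # The sunk expenditure is a constant subtracted from every branch,
    # so it cannot change which option maximizes the net outcome.
    return options[option] - already_spent_on_Y

best = max(options, key=net_outcome)
print(best)   # -> 'X', for any value of already_spent_on_Y
```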

We can and must do the same thing in economics—but ceteris is never paribus. In simple models, sunk cost is clearly a valid fallacy to be avoided. But is the real world compliant enough to make the fallacy sound? Notice the assumptions we had to make: we wish away issues of risk (and risk aversion), long-delayed consequences, changes in options as a result of past investment, and so on.

We can illustrate this by looking at an even more sacred aspect of normative economics: exponential discounting. One of the key justifications of exponential discounting is that any other discounting can be money-pumped by an exponential agent investing at each time period at whatever the prevailing return is or loaning at appropriate times. (George Ainslie in The Breakdown of Will gives the example of a hyperbolic agent improvidently selling its winter coat every spring and buying it back just before the snowstorms every winter, being money-pumped by the consistent exponential agent.) One of the assumptions is that certain rates of investment return will be available; but in the real world, rates can stagger around for long periods. Farmer & Geanakoplos 20095 argue that if returns follow a geometric random walk, hyperbolic discounting is superior6. Are they correct? They are not much-cited or criticized. But even if they are wrong about hyperbolic discounting, it needs proving that exponential discounting does in fact deal correctly with changing returns. (The market over the past few years has not turned in the proverbial 8–9% annual returns, and one wonders if there will ever be a big bull market that makes up for the great stagnation.)
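As a rough illustration of the families of discount functions being compared (a sketch with arbitrary parameters, not Farmer & Geanakoplos's actual model, so it will not reproduce their exact numbers): exponential weights become vanishingly small at long horizons, hyperbolic weights decay as a power law, and merely averaging exponential discounting over an uncertain, wandering interest rate already decays far more slowly than any fixed-rate exponential.

```python
import numpy as np

def exponential(t, r=0.04):
    return np.exp(-r * t)

def hyperbolic(t, k=0.04):
    return 1.0 / (1.0 + k * t)

def uncertain_rate(t_max, r0=0.04, vol=0.15, n_paths=100_000, seed=0):
    """Mean discount factor when the one-period rate follows a geometric random walk."""
    rng = np.random.default_rng(seed)
    rates = np.full(n_paths, r0)
    cum = np.zeros(n_paths)
    means = []
    for _ in range(t_max):
        cum += rates
        means.append(np.exp(-cum).mean())
        # multiplicative (geometric) random step in the rate, mean-preserving
        rates *= np.exp(vol * rng.standard_normal(n_paths) - 0.5 * vol**2)
    return np.array(means)

avg = uncertain_rate(500)
for t in (20, 100, 500):
    print(f"t={t:>3}  exponential={exponential(t):.2e}  "
          f"hyperbolic={hyperbolic(t):.2e}  uncertain-rate average={avg[t - 1]:.2e}")
```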

If we look at the sunk cost literature, we must keep many things in mind. For example:

  1. organizations versus individuals

    , “Bilge” (2013)

    Sunk costs seem especially common in groups, as has been noticed since the beginning of sunk cost research7; Khan et al 2000 found that culture influenced how much managers were willing to engage in hypothetical sunk costs (South & East Asian more so than North American), and a 2005 meta-analysis found that sunk cost was an issue, especially in software-related projects8, agreeing with a 2009 meta-analysis, Desai & Chulkov. Mukerjee 2011 interviewed principals at Californian schools, finding evidence of sunk cost bias. Wikipedia characterizes the Concorde incident as “regarded privately by the British government as a ‘commercial disaster’ which should never have been started, and was almost canceled, but political and legal issues had ultimately made it impossible for either government to pull out.” So at every point, coalitions of politicians and bureaucrats found it in their self-interest to keep the ball rolling. A sunk cost for the government or nation as a whole is far from the same thing as a sunk cost for those coalitions—responsibility is diffused, which encourages sunk cost9. (If Kennedy or other US presidents could not withdraw from Vietnam or Iraq10 or Afghanistan11 due to perceived sunk costs12, perhaps the real problem was why Americans thought Vietnam was so important and why they feared looking weak or provoking another debate.) People commit sunk cost much more easily if someone else is paying, possibly in part because they are trying to still prove themselves right—an understandable and rational choice13! Other anecdotes from Bazerman & Neale 1992 suggest sunk costs can be expensive corporate problems, but of course are only anecdotes; Robert Campeau killed his company by escalating to an impossibly expensive acquisition of Federated Department Stores, but would Campeau ever have been a good corporate raider without his aggressiveness? Can we say the Philip Morris-Procter & Gamble coffee price war was a mistake without a great deal more information? And was Bobby Fischer’s vendetta against the Soviet Union sunk cost, or a rational strategy against Soviet collusion, or simply an early symptom of the apparent mental issues that saw him converting to and impoverished by a peculiar church and ultimately an internationally persecuted convict in Iceland?

    And why were those coalitions in power in the first place? France and Britain have not found any better systems of government—systems which operate efficiently and are also Nash equilibriums, which successfully avoid any sunk costs in their myriads of projects and initiatives. In his 1988 The Collapse of Complex Societies, Joseph Tainter argues that societies that overreach do so because it is impossible for the organizations and members to back down on complexity as long as there is still wealth to extract, even when marginal returns are diminishing; when we accuse the Pueblo Indians of sunk cost and causing their civilization to collapse14, we should keep in mind there may be no governance alternatives. Debacles like the Concorde may be necessary because the alternatives are even worse—decision paralysis or institutional paranoia15. Aggressive policing of projects for sunk costs may wind up violating Chesterton’s fence if managers in later time periods are not very clear on why the projects were started in the first place and what their benefits will be. If we successfully ‘avoid’ sunk cost-style reasoning, does that mean we will avoid future Vietnams, at the expense of World War IIs?16 Unintended consequences come to mind here, particularly because one study recorded how a bank’s attempt to eliminate sunk cost bias in its loan officers resulted in backfiring and evasion17; the overall results seem to still have been an improvement, but it remains a cautionary lesson.

    Whatever pressures and feedback loops cause sunk cost fallacy in organizations may be completely different from the causes in individuals.

  2. Non-monetary rewards and penalties

    “Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.” What does this mean in a sunk cost context? That we should be aware that humans may not treat the model at its literal face value (without careful thought or strong encouragement to do so, anyway), may not treat the situation as simply ‘$10 versus $6 (and sunk cost)’. It may be more like ‘$10 (and your—non-existent—tribe’s condemnation of you as greedy, insincere, small-minded, and disloyal) versus $6 (and sunk cost)’18. If humans really are forced to think like this, then the modeling of payoffs simply doesn’t correspond with reality and of course our judgements will be wrong. Some assumptions spit out sunk costs as rational strategies19. This is not a trivial issue here (see the self-justification literature, eg. Brockner 1981) or in other areas; for example, providing the correct amount of rewards caused many differences in levels of animal intelligence to simply vanish—the rewards had been unequal (see my excerpts of the essay “If a Lion Could Talk: Animal Intelligence and the Evolution of Consciousness”).

  3. Sunk costs versus investments and switching costs

    Many choices for lower immediate marginal return are investments for greater future return. A single-stage model cannot capture this. Likewise, switching to new projects is not free, and the more expensive switches are, the fewer switches are optimal (eg. Chupeau et al 2017); see the toy sketch after this list.

  4. Demonstrated harm

    It’s not enough to suggest that a behavior may be harmful; it needs to be demonstrated. One might argue that an all-you-can-eat buffet will cause overeating and then long-term harm to health, but do experiments bear out that theory?
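To illustrate point 3, the promised sketch: a minimal toy comparison (invented numbers) in which sticking with the 'inferior' option is the correct forward-looking choice once the remaining cost to finish and the cost of switching are counted, with no appeal to sunk costs at all.

```python
# Forward-looking comparison only: nothing already spent appears below.
payoff_Y, remaining_cost_Y = 6, 1     # current project, nearly finished
payoff_X, startup_cost_X   = 10, 7    # shinier alternative, must start from scratch

net_if_stay   = payoff_Y - remaining_cost_Y    # 5
net_if_switch = payoff_X - startup_cost_X      # 3
print("stay" if net_if_stay >= net_if_switch else "switch")   # -> 'stay'
```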

Indeed, meta-analysis of escalation effect studies suggests that sunk cost behavior is not one thing but reflects a variety of theorized behaviors & effects of varying rationality, ranging from protecting one’s image & principal-agent conflict to lack of information/options (Sleesman 2012), not all of which can be regarded as a simple cognitive bias to be fixed by greater awareness.

Animals

“It really is the hardest thing in life for people to decide when to cut their losses.”

“No, it’s not. All you have to do is to periodically pretend that you were magically teleported into your current situation. Anything else is the sunk cost fallacy.”

John, Overcoming Bias

Point 3 leads us to an interesting observation about sunk cost: it has only been identified in humans, or primates at the widest20.

Arkes & Ayton 1999 (“The Sunk Cost and Concorde Effects: Are Humans Less Rational Than Lower Animals?”) claims (see also the very similar Curio 1987):

The sunk cost effect is a maladaptive economic behavior that is manifested in a greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The Concorde fallacy is another name for the sunk cost effect, except that the former term has been applied strictly to lower animals, whereas the latter has been applied solely to humans. The authors contend that there are no unambiguous instances of the Concorde fallacy in lower animals and also present evidence that young children, when placed in an economic situation akin to a sunk cost one, exhibit more normatively correct behavior than do adults. These findings pose an enigma: Why do adult humans commit an error contrary to the normative cost-benefit rules of choice, whereas children and phylogenetically humble organisms do not? The authors attempt to show that this paradoxical state of affairs is due to humans’ overgeneralization of the “Don’t waste” rule.

Specifically, in 1972, Trivers proposed that fathers are more likely to abandon children, and mothers less likely, because fathers invest less resources into children—mothers are, in effect, committing sunk cost fallacy in taking care of them. Dawkins & Carlisle 1976 pointed out that this is a misapplication of sunk cost, a version of point #3; Arkes & Ayton’s summary:

If parental resources become depleted, to which of the two offspring should nurturance be given? According to Trivers’s analysis, the older of the two offspring has received more parental investment by dint of its greater age, so the parent or parents will favor it. This would be an example of a past investment governing a current choice, which is a manifestation of the Concorde fallacy and the sunk cost effect. Dawkins and Carlisle suggested that the reason the older offspring is preferred is not because of the magnitude of the prior investment, as Trivers had suggested, but because of the older offspring’s need for less investment in the future. Consideration of the incremental benefits and costs, not of the sunk costs, compels the conclusion that the older offspring represents a far better investment for the parent to make.

Direct testing fails:

A number of experimenters who have tested lower animals have confirmed that they simply do not succumb to the fallacy (see, e.g., Armstrong & Robertson, 1988; Burger et al., 1989; Maestripieri & Alleva, 1991; Wiklund, 199021).

A direct example of the Trivers vs Dawkins & Carlisle argument:

A prototypical study is that of Maestripieri and Alleva [1991], who tested the litter defense behavior of female albino mice. On the 8th day of a mother’s lactation period, a male intruder was introduced to four different groups of mother mice and their litters. Each litter of the first group had been culled at birth to four pups. Each litter of the second group had been culled at birth to eight pups. In the third group, the litters had been culled at birth to eight pups, but four additional pups had been removed 3 to 4 hr before the intruder was introduced. The fourth group was identical to the third except that the removed pups had been returned to the litter after only a 10-min absence.

The logic of the Maestripieri and Alleva (1991) study is straightforward. If each mother attended to past investment, then those litters that had eight pups during the prior 8 days should be defended most vigorously, as opposed to those litters that had only four pups. After all, having cared for eight pups represents a larger past investment than having cared for only four. On the other hand, if each mother attended to future costs and benefits, then those litters that had eight pups at the time of testing should be defended most vigorously, as opposed to those litters that had only four pups. The results were that the mothers with eight pups at the time of testing defended their litters more vigorously than did the mothers with four pups at the time of testing. The two groups of mothers with four pups did not differ in their level of aggression toward the intruder, even though one group of mothers had invested twice the energy in raising the young because they initially had to care for litters of eight pups.

Arkes & Ayton rebut 3 studies by arguing:

  1. Dawkins & Brockmann 1980: digger wasps fight harder in proportion to how much food they contributed, rather than the total—because they are too stupid to count the total and only know how much they personally collected & stand to lose
  2. Lavery 1995: cichlid fish successful in breeding also fight harder against predators; because this may reflect an intrinsic greater healthiness and greater future opportunities, rather than sunk cost fallacy, an argument similar to Northcraft & Wolfe 1984’s criticism of apparent sunk costs in economics22
  3. Weatherhead 1979: savannah sparrows defend their nests more fiercely as the nest approaches hatching; because as already pointed out, the closer to hatching, the less future investment is required for X chicks compared to starting all over
  4. To these 3 we may add tundra swan feeding habits, which are predicted to be optimal by Pavlic & Passino 201123, who remark “we show how optimization of Eq. 3 predicts the sunk-cost effect for certain scenarios; a common element of every case is a large initial cost.”

(Navarro & Fantino 2004 claim a sunk cost effect in pigeons, but it’s hard to compare its strength to sunk cost in humans, and the setup is complex enough that I’m not sure it is sunk cost.)

Humans

Children

Arkes & Ayton cite 2 studies finding that committing sunk cost bias increases with age—as in, children do not commit it. They also cite 2 studies saying that:

Webley and Plaisier (1997) tested children at three different age groups (5–6, 8–9, and 11–12) with the following modification of the Tversky and Kahneman (1981) experiment …the older children provided data analogous to those found by Tversky and Kahneman (1981): When the money was lost, the majority of the respondents decided to buy a ticket. On the other hand, when the ticket was lost, the majority decided not to buy another ticket. This difference was absent in the youngest children. Note that it is not the case that the youngest children were responding randomly. They showed a definite preference for purchasing a new ticket whether the money or the ticket had been lost. Like the animals that appear to be immune to the Concorde fallacy, young children seemed to be less susceptible than older children to this variant of the sunk cost effect. The results of the study by Krouse (1986) corroborate this finding: Compared with adult humans, young children, like animals, seem to be less susceptible to the Concorde fallacy/sunk cost effect.

… Perhaps the impulsiveness of young children (Mischel, Shoda, & Rodriguez, 1989) fostered their desire to buy a ticket for the merry-go-round right away, regardless of whether a ticket or money had been lost. However, this alternative interpretation does not explain why the younger children said that they would buy the ticket less often than the older children in the lost-money condition. Nor does this explanation explain the greater adherence to normative rules of decision making by younger children compared with adults in cases where impulsiveness is not an issue (see, e.g., Jacobs & Potenza, 1991; Reyna & Ellis, 1994).

I think Arkes & Ayton are probably wrong about children. Those 2 early studies can be criticized easily24, and other studies point the opposite way. Baron et al 1993 asked poor and rich kids (age 5–12) questions including an Arkes & Blumer 1985 question, and found, in their first study, no difference by age, ~30% of the 101 kids committing sunk cost and another ~30% unsure; in their second, they asked 2 questions, with ~50% committing sunk cost—and responses on the 2 questions minimally correlated (r = 0.17). Klaczynski & Cottrell 2004 found that correct (non-sunk cost) responses went up with age (age 5–12, 16%; 5–16, 27%; and adults 37%). Bruine de Bruin et al 2007 found older adults more susceptible than young adults to some tested fallacies, but that sunk cost resistance increased somewhat with age. Strough et al 2008 studied 75 college-age students, finding small or non-significant results for IQ (as did other studies, see later), education, and age; still, older adults (60+) beat their college-age peers at avoiding sunk cost in both Strough et al 2008 & Strough et al 2011.

(Children also violate transitivity of choices & are more hyperbolic than adults, which is hardly normative.25)

Uses

Learning & Memory

18. “If the fool would persist in his folly he would become wise.”

46. “You never know what is enough unless you know what is more than enough.”

William Blake, “Proverbs of Hell”

Felix Hoeffler, in his 2008 paper “Why humans care about sunk costs while (lower) animals don’t: An evolutionary explanation”, takes the previous points at face value and asks how sunk cost might be useful for humans; his answer is that sunk cost forfeits some total gains/utility—just as our simple model indicated—but in exchange for faster learning, an exchange motivated by humans’ well-known dislike of uncertainty26. It is harder to learn the value of choices, or to realize any value at all, if one is constantly breaking off before completion to make other choices (the classic exploration-exploitation problem, amusingly illustrated in Arthur C. Clarke’s story “Superiority”).

One could imagine a not too intelligent program which is, like humans, over-optimistic about the value of new projects; it always chooses the highest-value option, of course, to avoid committing sunk cost bias, but oddly enough, it never seems to finish projects because better opportunities seem to keep coming along… In the real world, learning is valuable and one has many reasons to persevere even past the point one regards a decision as a mistake; McAfee et al 2007 (remember the exponential vs hyperbolic discounting example):

Consider a project that may take an unknown expenditure to complete. The failure to complete the project with a given amount of investment is informative about the expected amount needed to complete it. Therefore, the expected additional investment required for fruition will be correlated with the sunk investment. Moreover, in a world of random returns, the realization of a return is informative about the expected value of continuing a project. A large loss, which leads to a rational inference of a high variance, will often lead to a higher option value because option values tend to rise with variance. Consequently, the informativeness of sunk investments is amplified by consideration of the option value… Moreover, given limited time to invest in projects, as the time remaining shrinks, individuals have less time over which to amortize their costs of experimenting with new projects, and therefore may be rationally less likely to abandon current projects… Past investments in a given course of action often provide evidence about whether the course of action is likely to succeed or fail in the future. Other things equal, a greater investment usually implies that success is closer at hand. Consider the following simple model… The only case in which the size of the sunk investment cannot affect the firm’s rational decision about whether to continue investing is the rather special case in which the hazard is exactly constant.
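A minimal simulation (invented distributions, not the McAfee et al model) of that last hazard-rate point: when the total cost needed to finish is uncertain, the amount already sunk without finishing is informative about how much remains, except in the memoryless, constant-hazard case.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Total cost required to finish, under two hypothetical distributions with mean 10:
# exponential = constant hazard (memoryless); uniform on [0, 20] = increasing hazard.
exp_total = rng.exponential(scale=10.0, size=n)
uni_total = rng.uniform(0.0, 20.0, size=n)

def expected_remaining(total, spent):
    """E[cost still needed | not finished after spending `spent`]."""
    unfinished = total[total > spent]
    return (unfinished - spent).mean()

for spent in (0, 5, 10, 15):
    print(f"spent={spent:>2}  "
          f"remaining (constant hazard) = {expected_remaining(exp_total, spent):4.1f}  "
          f"remaining (increasing hazard) = {expected_remaining(uni_total, spent):4.1f}")
# Constant hazard: ~10 remaining no matter how much is sunk; increasing hazard: 10, 7.5, 5, 2.5,
# i.e. the more already invested, the closer success is, so persevering looks better and better.
```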

If this model is applicable to humans, we would expect to see a cluster of results related to age, learning, teaching, difficulty of avoiding even with training or education, minimal avoidance with greater intelligence, completion of tasks/projects, largeness of sums (the risks most worth avoiding), and competitiveness of environment. (As well as occasional null results like Elliott & Curme 2006.) And we do! Many otherwise anomalous results snap into focus with this suggestion:

  1. information is worth most to those who have the least: as we previously saw, the young commit sunk cost more than the old

  2. in situations where participants can learn and update, we should expect sunk cost to be attenuated or disappear; we do see this (eg. Friedman et al 200727, Garland et al 199028, Bornstein et al 199929, Staw 198130, McCain 198631, Phillips et al 199132, Wang & Xu 201233)

  3. the noisier (higher variance) the feedback on profitability was, the more data it took before people gave up (Bragger et al 1998, Bragger et al 2003)

  4. sunk costs were supported more when subjects were given justifications about learning to make better decisions or when teachers/students were involved (Bornstein & Chapman 199534)

  5. extensive economic training does not stop economics professors from committing sunk cost, and students can be quickly educated to answer sunk cost questions correctly, but with little carry-through to their lives35, and researchers in the area argue about whether particular setups even represent sunk costs at all, on their own merits36 (but don’t feel smug; you probably wouldn’t do much better if you took quizzes on it either)

  6. when measured, avoiding sunk cost has little correlation with intelligence37—and one wonders how much of the correlation comes from intelligent people being more likely to try to conform to what they have learned is economics orthodoxy

  7. a ‘nearly completed’ effect dominates ‘sunk cost’ (Conlon & Garland 1993, Garland & Conlon 1998, Boehne & Paese 2002, Fontino et al 2007)

  8. for example, the larger the proportion of the total already invested, the stronger the sunk cost effect (Garland & Newport 1991)

  9. it is surprisingly hard to find clear-cut real-world non-government examples of serious sunk costs; the commonly cited non-historical examples do not stack up:

    • Staw & Hoang 199538 studied the NBA to see whether high-ranked but underperforming players were over-used by coaches, a sunk cost.

      Unfortunately, they do not track the over-use down to actual effects on win-loss or other measures of team performance, effects which are unlikely to be very large since the overuse amounts to ~10–20 minutes a game. Further, “The econometrics and behavioral economics of escalation of commitment: a re-examination of Staw and Hoang’s NBA data” (Camerer & Weber 1999) claims to do a better analysis of the NBA data and finds the effect is actually weaker. As usual, there are multiple alternatives39.

    • McCarthy et al 199340 is a much-cited correlational study finding that small entrepreneurs invest further in companies they founded (rather than bought) when the company apparently does poorly; but they acknowledge that there are financial strategies clouding the data, and like Staw & Hoang, do not tie the small effect—which appears only for a year or two, as the entrepreneurs apparently learn—to actual negative outcomes or decrease in expected value.

    • similar to McCarthy et al 1993, Wennberg et al 200941 tracked ‘exit routes’ for young companies such as being bought, merged, or bankrupt—but again, they did not tie apparent sunk cost to actual poor performance.

    • in 2 studies42, Africans did not engage in sunk cost with insecticide-treated bed nets—whether they paid a subsidized price or received them free did not affect use levels, and in one study, this null effect happened despite the same households engaging in sunk cost for hypothetical questions

    • Internet users may commit sunk cost in browsing news websites43 (but is that serious?)

    • an unpublished 2001 paper (Barron et al, “The Escalation Phenomenon and Executive Turnover: Theory and Evidence”) reportedly finds that projects are ‘significantly more likely’ to be canceled when their top managers leave, suggesting a sunk cost effect of substantial size; but it is unclear how much money is at stake or whether this is—remember point #1—power politics44

    • sunk cost only weakly correlates with suboptimal behavior (much less demonstrates causation):

      Parker & Fischhoff 2005 and Bruine de Bruin et al 2007 compiled a number of questions for several cognitive biases—including sunk cost—and then asked questions about impulsiveness, number of sexual partners, etc, while the latter developed a 34-item index of bad decisions/outcomes (the DOI): ever rent a movie you didn’t watch, get expelled, file for bankruptcy, forfeit your driver’s license, miss an airplane, bounce a check, etc. Then they ran correlations. They replicated the minimal correlation of sunk cost avoidance with IQ, but sunk cost (and ‘path independence’) exhibited fascinating behaviors compared to the other biases/fallacies measured: sunk cost & path independence correlated minimally with the other tested biases/fallacies (the correlations were almost uselessly low), education did not help much, age helped some, and sunk cost had low correlations with the risky behavior or the DOI (eg. after controlling for decision-making styles, 0.13).

    • Larrick et al 1993 found tests of normative economic reasoning, including sunk cost questions, correlated with increased academic salaries, even for non-economics professors like biologists & humanists (but the effect size & causality are unclear)

  10. Dissociation in hypotheticals—being told a prior manager made the decisions—does not always counteract effects (Biyalogorsky 2006)

Sunk costs may also reflect imperfect memory about what information one had in the past; one may reason that one’s past self had better information about all the forgotten details that went into a decision to make some investments, and respect their decision, thus appearing to honor sunk costs (Baliga & Ely 2011).
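A minimal Bayesian sketch (invented probabilities) of the Baliga & Ely argument: if your past self mostly started projects when the now-forgotten evidence was favorable, then the bare fact of the past investment is itself evidence worth weighing.

```python
# All numbers below are hypothetical, chosen only for illustration.
p_good_prior = 0.5          # chance a random project idea is actually good
p_invest_given_good = 0.9   # past self invests after seeing a favorable signal
p_invest_given_bad  = 0.2   # ...and only rarely invests otherwise

# Posterior that the project is good, given only the fact that past-me invested:
p_invest = p_good_prior * p_invest_given_good + (1 - p_good_prior) * p_invest_given_bad
p_good_given_invested = p_good_prior * p_invest_given_good / p_invest
print(round(p_good_given_invested, 2))   # ~0.82: the existing investment is informative
```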

Countering hyperbolic discounting?

“Use barbarians against barbarians.”45

Henry Kissinger, On China 2011

The classic kicker of hyperbolic discounting is that it induces temporally inconsistent preferences—your far-sighted self is able to calculate what is best for you, but then your near-sighted self screws it all up by changing tacks. Knowing this, it may be a good idea not to work on your ‘bad’ habit of being overconfident about your projects46 or engaging in the planning fallacy, since at least they will counteract the hyperbolic discounting a little; in particular, you should distrust near-term estimates of the fun or value of activities when you have not learned anything very important47. We could run the same argument but instead point to the psychology research on the connection between blood sugar levels and ‘willpower’; if it takes willpower to start a project but little willpower to cease working on or quit a project, then we would expect our decisions to quit to be correlated with low willpower and blood sugar levels, and hence to be ignored!
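A minimal numerical sketch of that 'changing tacks' (a preference reversal), using the standard 1/(1 + k·delay) hyperbolic weight with made-up rewards, delays, and impatience parameter k:

```python
def hyperbolic(delay, k=1.0):
    """Hyperbolic discount weight; k is a made-up impatience parameter."""
    return 1.0 / (1.0 + k * delay)

small_reward, small_delay = 50, 10   # finish the current project soon
large_reward, large_delay = 100, 15  # the bigger payoff from switching, later

# Viewed from far off, the far-sighted self prefers the larger, later payoff...
print(hyperbolic(small_delay) * small_reward < hyperbolic(large_delay) * large_reward)              # True
# ...but 9 periods later, with the small payoff imminent, the ranking flips.
print(hyperbolic(small_delay - 9) * small_reward < hyperbolic(large_delay - 9) * large_reward)      # False
```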

It’s hard to oppose these issues: humans are biased hardware. If one doesn’t know exactly why a bias is bad, countering a bias may simply let other biases hurt you. Anecdotally, a number of people have problems with quite the opposite of sunk cost fallacy—overestimating the marginal value of the alternatives and discounting how little further investment is necessary48, and people try to commit themselves by deliberately buying things they don’t value.49 (This seems doubly plausible given the high value of Conscientiousness/Grit50—with marginal return high enough that it suggests most people do not commit long-term nearly enough, and if sunk cost is the price of reaping those gains…)

Thoughtlessness: the real bias

One of the known ways to eliminate sunk cost bias is to be explicit and emphasize the costs of continuing (Northcraft and Neale, 1986; Tan and Yates, 1995; Brockner et al 198251 & conversely Brockner 198152; McCain 1986), as well as setting explicit budgets (Simonson & Staw 1992, Heath 199553, Boulding et al 1997). Fancy tools don’t add much effectiveness.54

This, combined with the previous learning-based theory of sunk cost, suggests something to me: sunk cost is a form of thoughtlessness. One doesn’t intrinsically over-value something due to past investment; one fails to think about the value at all.

Further reading


  1. “World: Asia-Pacific: US demands ‘killing fields’ trial”, BBC 1998-12-29↩︎

  2. And the Concorde definitely did not succeed commercially: operating it could barely cover costs, its minimal profits never came close to the total R&D or opportunity costs, its last flight was in 2003 (a shockingly low lifetime in an industry which typically tries to operate individual planes, much less entire designs, for decades), and as of 2017, the Concorde still has no successors in its niche or apparent upcoming successors despite great technological progress & global economic development and the notorious growth in wealth of the “1%”.↩︎

  3. Arkes & Blumer 1985, “The psychology of sunk cost”:

    The sunk cost effect is manifested in a greater tendency to continue an endeavor once an investment in money, effort, or time has been made. Evidence that the psychological justification for this behavior is predicated on the desire not to appear wasteful is presented. In a field study, customers who had initially paid more for a season subscription to a theater series attended more plays during the next 6 months, presumably because of their higher sunk cost in the season tickets. Several questionnaire studies corroborated and extended this finding. It is found that those who had incurred a sunk cost inflated their estimate of how likely a project was to succeed compared to the estimates of the same project by those who had not incurred a sunk cost. The basic sunk cost finding that people will throw good money after bad appears to be well described by prospect theory (D. Kahneman & A. Tversky, 1979, Econometrica, 47, 263–291). Only moderate support for the contention that personal involvement increases the sunk cost effect is presented. The sunk cost effect was not lessened by having taken prior courses in economics. Finally, the sunk cost effect cannot be fully subsumed under any of several social psychological theories.

    As an example of the sunk cost effect, consider the following example [from Thaler 1980]. A man wins a contest sponsored by a local radio station. He is given a free ticket to a football game. Since he does not want to go alone, he persuades a friend to buy a ticket and go with him. As they prepare to go to the game, a terrible blizzard begins. The contest winner peers out his window over the arctic scene and announces that he is not going, because the pain of enduring the snowstorm would be greater than the enjoyment he would derive from watching the game. However, his friend protests, ‘I don’t want to waste the twelve dollars I paid for the ticket! I want to go!’ The friend who purchased the ticket is not behaving rationally according to traditional economic theory. Only incremental costs should influence decisions, not sunk costs. If the agony of sitting in a blinding snowstorm for 3 h is greater than the enjoyment one would derive from trying to see the game, then one should not go. The $12 has been paid whether one goes or not. It is a sunk cost. It should in no way influence the decision to go. But who among us is so rational?

    Our final sample thus had eighteen no-discount, nineteen $2 discount, and seventeen $7 discount subjects. Since the ticket stubs were color coded, we were able to collect the stubs after each performance and determine how many persons in each group had attended each play…We performed a 3 (discount: none, $2, $7) x 2 (half of season) analysis of variance on the number of tickets used by each subject. The latter variable was a within-subjects factor. It was also the only significant source of variance, F(1,51) = 32.32, MSe = 1.81, p < .0001. More tickets were used by each subject on the first five plays (3.57) than on the last five plays (2.09). We performed a priori tests on the number of tickets used by each of the three groups during the first half of the theater season. The no-discount group used significantly more tickets (4.11) than both the $2 discount group (3.32) and the $7 discount group (3.29), t = 1.79, 1.83, respectively, p’s < .05, one tailed. The groups did not use significantly different numbers of tickets during the last half of the theater season (2.28, 1.84, 2.18, for the no-discount, $2 discount, and $7 discount groups, respectively). Conclusion. Those who had purchased theater tickets at the normal price used more theater tickets during the first half of the season than those who purchased tickets at either of the two discounts. According to rational economic theory, after all subjects had their ticket booklet in hand, they should have been equally likely to attend the plays.

    …A second feature of prospect theory pertinent to sunk costs is the certainty effect. This effect is manifested in two ways. First, absolutely certain gains (P = 1) are greatly overvalued. By this we mean that the value of certain gains is higher than what would be expected given an analysis of a person’s values of gains having a probability less than 1.0. Second, certain losses (P = 1.0) are greatly undervalued (i.e., further from zero). The value is more negative than what would be expected given an analysis of a person’s values of losses having a probability less than 1.0. In other words, certainty magnifies both positive and negative values. Note that in question 3A the decision not to complete the plane results in a certain loss of the amount already invested. Since prospect theory states that certain losses are particularly aversive, we might predict that subjects would find the other option comparatively attractive. This is in fact what occurred. Whenever a sunk cost dilemma involves the choice of a certain loss (stop the waterway project) versus a long shot (maybe it will become profitable by the year 2500), the certainty effect favors the latter option.

    …Fifty-nine students had taken at least one course; sixty-one had taken no such course. All of these students were administered the Experiment 1 questionnaire by a graduate student in psychology. A third group comprised 61 students currently enrolled in an economics course, who were administered the Experiment 1 questionnaire by their economics professor during an economics class. Approximately three fourths of the students in this group had also taken one prior economics course. All of the economics students had been exposed to the concept of sunk cost earlier that semester both in their textbook (Gwartney & Stroup, 1982, p. 125 [Microeconomics: Private and public choice]) and in their class lectures. Results. Table 1 contains the results. The χ2 analysis does not approach significance. Even when an economics teacher in an economics class hands out a sunk cost questionnaire to economics students, there is no more conformity to rational economic theory than in the other two groups. We conclude that general instruction in economics does not lessen the sunk cost effect. In a recent analysis of entrapment experiments, Northcraft and Wolf (1984) concluded that continued investment in many of them does not necessarily represent an economically irrational behavior. For example, continued waiting for the bus will increase the probability that one’s waiting behavior will be rewarded. Therefore there is an eminently rational basis for continued patience. Hence this situation is not a pure demonstration of the sunk cost effect. However, we believe that some sunk cost situations do correspond to entrapment situations. The subjects who ‘owned’ the airline company would have endured continuing expenditures on the plane as they sought the eventual goal of financial rescue. This corresponds to the Brockner et al. entrapment situation. However, entrapment is irrelevant to the analysis of all our other studies. For example, people who paid more money last September for the season theater tickets are in no way trapped. They do not incur small continuous losses as they seek an eventual goal. Therefore we suggest that entrapment is relevant only to the subset of sunk cost situations in which continuing losses are endured in the hope of later rescue by a further investment.

    According to Thomas 1981 [Microeconomic applications: Understanding the American economy], one person who recognized it as an error was none other than Thomas Edison. In the 1880s Edison was not making much money on his great invention, the electric lamp. The problem was that his manufacturing plant was not operating at full capacity because he could not sell enough of his lamps. He then got the idea to boost his plant’s production to full capacity and sell each extra lamp below its total cost of production. His associates thought this was an exceedingly poor idea, but Edison did it anyway. By increasing his plant’s output, Edison would add only 2% to the cost of production while increasing production 25%. Edison was able to do this because so much of the manufacturing cost was sunk cost. It would be present whether or not he manufactured more bulbs. [the Europe price > marginal cost] Edison then sold the large number of extra lamps in Europe for much more than the small added manufacturing costs. Since production increases involved negligible new costs but substantial new income, Edison was wise to increase production. While Edison was able to place sunk costs in proper perspective in arriving at his decision, our research suggests that most of the rest of us find that very difficult to do.

    Friedman et al 2006 criticism of Arkes:

    This is consistent with the sunk cost fallacy, but the evidence is not as strong as one might hope. The reported significance levels apparently assume that (apart from the excluded couples) all attendance choices are independent. The authors do not explain why they divided the season in half, nor do they report the significance levels for the entire season (or first quarter, etc.). The data show no significant difference between the small and large discount groups in the first half season nor among any of the groups in the second half season. We are not aware of any replication of this field experiment.

    ↩︎
  4. Davis’s complaint is a little odd, inasmuch as economics textbooks do apparently discuss sunk costs; Steele 1996 gives examples back to 1910, or from “Do Sunk Costs Matter?”, McAfee et al 2007:

    Introductory textbooks in economics present this as a basic principle and a deep truth of rational decision-making (Frank and Bernanke, 2006, p. 10, and Mankiw, 2004, p. 297).

    ↩︎
  5. Popularized discussions of Farmer & Geanakoplos 2009:

    ↩︎
  6. Some quotes from the paper:

    Conventional economics supposes that agents value the present vs. the future using an exponential discounting function. In contrast, experiments with animals and humans suggest that agents are better described as hyperbolic discounters, whose discount function decays much more slowly at large times, as a power law. This is generally regarded as being time inconsistent or irrational. We show that when agents cannot be sure of their own future one-period discount rates, then hyperbolic discounting can become rational and exponential discounting irrational. This has important implications for environmental economics, as it implies a much larger weight for the far future.

    …Why should we discount the future? Bohm-Bawerk (1889,1923) and Fisher (1930) argued that men were naturally impatient, perhaps owing to a failure of the imagination in conjuring the future as vividly as the present. Another justification for declining Ds(τ) in τ, given by Rae (1834,1905), is that people are mortal, so survival probabilities must enter the calculation of the benefits of future potential consumption. There are many possible reasons for discounting, as reviewed by Dasgupta (2004, 2008). Most economic analysis assumes exponential discounting Ds(τ) = D(τ) = exp(−rτ), as originally posited by Samuelson (1937) and put on an axiomatic foundation by Koopmans (1960). A natural justification for exponential discounting comes from financial economics and the opportunity cost of foregoing an investment. A dollar at time s can be placed in the bank to collect interest at rate r, and if the interest rate is constant, it will generate exp(r(t − s)) dollars at time t. A dollar at time t is therefore equivalent to exp(−r(t − s)) dollars at time s. Letting τ = t − s, this motivates the exponential discount function Ds(τ) = D(τ) = exp(−rτ), independent of s.

    …For roughly the first eighty years the certainty equivalent discount function for the geometric random walk stays fairly close to the exponential, but afterward the two diverge substantially, with the geometric random walk giving a much larger weight to the future. A comparison using more realistic parameters is given in Table 1. For large times the difference is dramatic.

    Farmer & Geanakoplos 2009 Table 1, comparing geometric random walk (GRW) vs exponential discounting over increasing time periods, showing that GRW eventually decays much slower.
    year  GRW    exponential
    20    0.462  0.456
    60    0.125  0.095
    100   0.051  0.020
    500   0.008  2 × 10^−9
    1000  0.005  4 × 10^−18

    …What this analysis makes clear, however, is that the long term behavior of valuations depends extremely sensitively on the interest rate model. The fact that the present value of actions that affect the far future can shift from a few percentage points to infinity when we move from a constant interest rate to a geometric random walk calls seriously into question many well regarded analyses of the economic consequences of global warming. … no fixed discount rate is really adequate—as our analysis makes abundantly clear, the proper discounting function is not an exponential.

    ↩︎
  7. For example, Staw 1981, “The Escalation of Commitment to a Course of Action”:

    A second way to explain decisional errors is to attribute a breakdown in rationality to interpersonal elements such as social power or group dynamics. Pfeffer [1977] has, for example, outlined how and when power considerations are likely to outweigh more rational aspects of organizational decision making, and Janis [1972] has noted many problems in the decision making of policy groups. Cohesive groups may, according to Janis, suppress dissent, censor information, create illusions of invulnerability, and stereotype enemies. Any of these by-products of social interaction may, of course, hinder rational decision making and lead individuals or groups to decisional errors.

    ↩︎
  8. Wang & Keil 2005; from the abstract:

    Using meta-analysis, we analyzed the results of 20 sunk cost experiments and found: (1) a large effect size associated with sunk costs, (2) variability of effect sizes across experiments that was larger than pure subject-level sampling error, and (3) stronger effects in experiments involving IT projects as opposed to non-IT projects.

    Background on why one might expect effects with IT in particular:

    Although project escalation is a general phenomenon, IT project escalation has received considerable attention since Keil and his colleagues began studying the phenomenon (Keil, Mixon et al. 1995). Survey data suggest that 30–40% of all IT projects involve some degree of project escalation (Keil, Mann, and Rai 2000). To study the role of sunk cost in software project escalation, Keil et al. (1995) conducted a series of lab experiments, in which sunk costs were manipulated at various levels, and subjects decided whether or not to continue an IT project facing negative prospects. This IT version of the sunk cost experiment was later replicated across cultures (Keil, Tan et al. 2000), with group decision makers (Boonthanom 2003), and under different de-escalation situations (Heng, Tan et al. 2003). These experiments demonstrated the sunk cost effect to be significant in IT project escalation.

    The “real option” defense of sunk cost behavior has been suggested for software projects (Tiwana & Fichman 2006)↩︎

  9. “Diffusion of Responsibility: Effects on the Escalation Tendency”, Whyte 1991 (see also Whyte 1993):

    In a laboratory study, the possibility was investigated that group decision making in the initial stages of an investment project might reduce the escalation tendency by diffusing responsibility for initiating a failing project. Support for this notion was found. Escalation effects occurred less frequently and were less severe among individuals described as participants in a group decision to initiate a failing course of action than among individuals described as personally responsible for the initial decision. Self-justification theory was found to be less relevant after group than after individual decisions. Because most decisions about important new policies in organizations are made by groups, these results indicate a gap in theorizing about the determinants of escalating commitment for an important category of escalation situations.

    …The impact of personal responsibility on persistence in error has been replicated several times (e.g., Bazerman, Beekun, & Schoorman, 1982; Caldwell & O’Reilly, 1982; Staw, 1976; Staw & Fox, 1977).

    ↩︎
  10. Both of them—but the sunk costs were on the Iraqi side, specifically Saddam Hussein; Bazerman & Neale 1992:

    Similarly, it could be argued that in the Iraqi/Kuwait conflict, Iraq (Hussein) had the information necessary to rationally pursue a negotiated settlement. In fact, early on in the crisis, he was offered a package for settlement that was far better than anything that he could have expected through a continued conflict. The escalation literature accurately predicts that the initial “investment” incurred in invading Kuwait would lead Iraq to a further escalation of its commitment not to compromise on the return of Kuwait.

    ↩︎
  11. Kelly 2004:

    The physicist Eugene Demler informs me that exactly parallel arguments were quite commonly made in the Soviet Union in the late 1980s in an attempt to justify continued Soviet involvement in Afghanistan.

    ↩︎
  12. Dawkins & Carlisle 1976 sarcastically remark:

    …The idea has been influential4, and it appeals to economic intuition. A government which has invested heavily in, for example, a supersonic airliner, is understandably reluctant to abandon it, even when sober judgement of future prospects suggests that it should do so. Similarly, a popular argument against American withdrawal from the Vietnam war was a retrospective one: ‘We cannot allow those boys to have died in vain’. Intuition says that previous investment commits one to future investment.

    ↩︎
  13. The former can be found in Bazerman, Giuliano, & Appelman, 1984, Davis & Bobko, 1986, & Staw, 1976 among other studies cited here. The latter is often called ‘self-justification’ or the ‘justification effect’ (eg. Brockner 1992).

    Self-justification is, of course, in many contexts a valuable trait to have; is the following an error, or business students demonstrating their precocious understanding of an invaluable bureaucratic in-fighting skill? Bazerman et al 1982, “Performance evaluation in a dynamic context: A laboratory study of the impact of prior commitment to the ratee” (see also Caldwell & O’Reilly, 1982; Staw, 1976; Staw & Fox, 1977):

    A dynamic view of performance evaluation is proposed that argues that raters who are provided with negative performance data on a previously promoted employee will subsequently evaluate the employee more positively if they, rather than their predecessors, made the earlier promotion decision. A total of 298 business majors participated in the study. The experimental group made a promotion decision by choosing among three candidates, whereas the control group was told that the decision had been made by someone else. Both groups evaluated the promoted employee’s performance after reviewing 2 years of data. The hypothesized escalation of commitment effect was observed in that the experimental group consistently evaluated the employee more favorably, provided larger rewards, and made more optimistic projections of future performance than did the control group.

    ↩︎
  14. And it is difficult to judge from a distance when sunk cost has occurred: what exactly else are the Indians going to invest in? Remember our exponential discounting example. As long as various settlements are not running at an outright loss or are being subsidized, how steep an opportunity cost do they really face? From the paper:

    By the end of the occupation in the late-A.D. 800s there is evidence of depletion of wood resources, piñon seeds, and animals (reviewed by Kohler 1992). Following the collapse of these villages, the Dolores area was never reoccupied in force by Puebloan farmers. A second similar case comes from nearby Sand Canyon Locality west of Cortez, Colorado, intensively studied by the Crow Canyon Archaeological Center over the last 15 years (Lipe 1992). Here the main occupation is several hundred years later than in Dolores, but the patterns of construction in hamlets versus villages are similar (fig. 4, bottom). The demise of the two villages contributing dated construction events to fig. 4 (bottom) coincides with the famous depopulation of the Four Corners region of the U.S. Southwest. There is strong evidence for declining availability of protein in general and large game animals in particular, and increased competition for the best agricultural land, during the terminal occupation (reviewed by Kohler 2000). We draw a final example from an intermediate period. The most famous Anasazi structures, the “great houses” of Chaco Canyon, may follow a similar pattern. Windes and Ford (1996) show that early construction episodes (in the early A.D. 900s) in the canyon great houses typically coincide with periods of high potential agricultural productivity, but later construction continues in both good periods and bad, particularly in the poor period from ca. A.D. 1030–1050.

    Certainly there is strong evidence of diminishing marginal returns—evidence for the Tainter thesis—but diminishing marginal returns is not sunk cost fallacy. Given the general environment, and given that there was a ‘collapse’, arguably there was no opportunity cost to remaining there. How would the Indians have become better off if they abandoned their villages, given that there is little evidence that other places were better off in that period of great droughts and the observation that they would need to make substantial capital investments wherever they went?↩︎

  15. Janssen & Scheffer 2004:

    In fact, escalation of commitment is found in group decision making (Bazerman et al. 1984). Members of a group strive for unanimity. A typical goal for political decisions within small-scale societies is to reach consensus (Boehm 1996). Once unanimity is reached, the easiest way to protect it is to stay committed to the group’s decision (Bazerman et al. 1984, Janis 1972 [Victims of groupthink]). Thus, when the group is faced with a negative feedback, members will not suggest abandoning the earlier course of action, because this might disrupt the existing unanimity.

    ↩︎
  16. McAfee et al 2007:

    But there are also exam­ples of peo­ple who suc­ceeded by not ignor­ing sunk costs. The same “we-owe-it-to-our-fal­l­en-coun­try­men” logic that led Amer­i­cans to stay the course in Viet­nam also helped the war effort in World War II. More gen­er­al­ly, many suc­cess sto­ries involve peo­ple who at some time suffered great set­backs, but per­se­vered when short­-term odds were not in their favor because they “had already come too far to give up now.” Colum­bus did not give up when the shores of India did not appear after weeks at sea, and many on his crew were urg­ing him to turn home (see Olson, 1967 [The North­men, Colum­bus and Cabot, 985–1503], for Colum­bus’ jour­nal). Jeff Bezos, founder of Ama­zon.­com, did not give up when Ama­zon’s loss totaled $1.4 bil­lion in 2001, and many on Wall Street were spec­u­lat­ing that the com­pany would go broke (see Mendel­son and Meza, 2001).

    ↩︎
  17. “Bank­ing on Com­mit­ment: Intended and Unin­tended Con­se­quences of an Orga­ni­za­tion’s Attempt to Atten­u­ate Esca­la­tion of Com­mit­ment”, McNa­mara et al 2002:

    The notion that decision makers tend to incorrectly consider previous expenditures when deliberating current utility-based decisions (Arkes & Blumer, 1985) has been used to explain fiascoes ranging from the prolonged involvement of the United States in the Vietnam War to the disastrous cost overrun during the construction of the Shoreham Nuclear Power Plant (Ross & Staw, 1993). In the Shoreham Nuclear Power Plant example, escalation of commitment meant billions of wasted dollars (Ross & Staw, 1993). In the Vietnam War, it may have cost thousands of lives…Kirby and Davis’s (1998) experimental study showed that increased monitoring could dampen the escalation of commitment. Staw, Barsade, and Koput’s (1997) field data on the banking industry led them to conclude that top manager turnover led to de-escalation of commitment at an aggregate level.

    …So far, the results sup­port the effi­cacy of changes in mon­i­tor­ing and deci­sion respon­si­bil­ity as cures for the esca­la­tion of com­mit­ment bias. We now turn to the side effects of these treat­ments. Hypothe­ses 4 and 5 pro­pose that the threat of increased mon­i­tor­ing and change in man­age­ment respon­si­bil­ity increase the like­li­hood of a differ­ent form of unde­sir­able deci­sion com­mit­men­t—the per­sis­tent under­assess­ment of bor­rower risk. The results in col­umn 3 of Table 2 sup­port these hypothe­ses. Both the threat of increased mon­i­tor­ing and the threat of change in deci­sion respon­si­bil­ity increase the like­li­hood of per­sis­tent under­assess­ment of bor­rower risk (0.47, p < 0.01, and 0.50, p < 0.05, respec­tive­ly). These find­ings sup­port the view that deci­sion mak­ers are likely to fail to appro­pri­ately down­grade a bor­rower when, by doing so, they avoid an orga­ni­za­tional inter­ven­tion. We exam­ined the change in invest­ment com­mit­ment for bor­row­ers whose risk was per­sis­tently under­assessed and who faced either increased mon­i­tor­ing or change in deci­sion respon­si­bil­ity if the deci­sion mak­ers had admit­ted that the risk needed down­grad­ing. We found that deci­sion mak­ers did appear to exhibit esca­la­tion of com­mit­ment to these bor­row­ers. The change in com­mit­ment (on aver­age, over 30%) is sig­nifi­cantly greater than 0 (t = 2.94, p < 0.01) and greater than the change in com­mit­ment to those bor­row­ers who were cor­rectly assessed as remain­ing at the same risk level (t = 2.58, p = 0.01). Com­bined, these find­ings sug­gest that although the orga­ni­za­tional efforts to min­i­mize unde­sir­able deci­sion com­mit­ment appeared suc­cess­ful at first glance, the threat of these inter­ven­tions increased the like­li­hood that deci­sion mak­ers would per­sis­tently give over­fa­vor­able assess­ments of the risk of bor­row­ers. In turn, the lend­ing offi­cers would then esca­late their mon­e­tary com­mit­ment to these riskier bor­row­ers.

    On nuclear power plants as sunk cost fal­la­cy, McAfee et al 2007:

    According to evidence reported by De Bondt and Makhija (1988), managers of many utility companies in the U.S. have been overly reluctant to terminate economically unviable nuclear plant projects. In the 1960s, the nuclear power industry promised “energy too cheap to meter.” But nuclear power later proved unsafe and uneconomical. As the U.S. nuclear power program was failing in the 1970s and 1980s, Public Service Commissions around the nation ordered prudency reviews. From these reviews, De Bondt and Makhija find evidence that the Commissions denied many utility companies even partial recovery of nuclear construction costs on the grounds that they had been mismanaging the nuclear construction projects in ways consistent with “throwing good money after bad.”…In most projects there is uncertainty, and restarting after stopping entails costs, making the option to continue valuable. This is certainly the case for nuclear power plants, for example. Shutting down a nuclear reactor requires dismantling or entombment, and the costs of restarting are extremely high. Moreover, the variance of energy prices has been quite large. The option of maintaining nuclear plants is therefore potentially valuable. Low returns from nuclear power in the 1970s and 1980s might have been a consequence of the large variance, suggesting a high option value of maintaining nuclear plants. This may in part explain the evidence (reported by De Bondt and Makhija, 1988) that managers of utilities at the time were so reluctant to shut down seemingly unprofitable plants.

    ↩︎
  18. Steven Pinker, The Better Angels of Our Nature, 2011, pg 336:

    In the case of a war of attri­tion, one can imag­ine a leader who has a chang­ing will­ing­ness to suffer a cost over time, increas­ing as the con­flict pro­ceeds and his resolve tough­ens. His motto would be: ‘We fight on so that our boys shall not have died in vain.’ This mind­set, known as loss aver­sion, the sunk-cost fal­la­cy, and throw­ing good money after bad, is patently irra­tional, but it is sur­pris­ingly per­va­sive in human deci­sion-mak­ing.65 Peo­ple stay in an abu­sive mar­riage because of the years they have already put into it, or sit through a bad movie because they have already paid for the tick­et, or try to reverse a gam­bling loss by dou­bling their next bet, or pour money into a boon­dog­gle because they’ve already poured so much money into it. Though psy­chol­o­gists don’t fully under­stand why peo­ple are suck­ers for sunk costs, a com­mon expla­na­tion is that it sig­nals a pub­lic com­mit­ment. The per­son is announc­ing: ‘When I make a deci­sion, I’m not so weak, stu­pid, or inde­ci­sive that I can be eas­ily talked out of it.’ In a con­test of resolve like an attri­tion game, loss aver­sion could serve as a costly and hence cred­i­ble sig­nal that the con­tes­tant is not about to con­cede, pre­empt­ing his oppo­nen­t’s strat­egy of out­last­ing him just one more round.

    It’s worth not­ing that there is at least one exam­ple of sunk cost (“entry licenses” [fees]) encour­ag­ing coop­er­a­tion (“col­lu­sive price path”) in mar­ket agents: Offer­man & Pot­ter 2001, “Does Auc­tion­ing of Entry Licenses Induce Col­lu­sion? An Exper­i­men­tal Study”, who point out another case of how our sunk cost map may not cor­re­spond to the ter­ri­to­ry:

    There is one caveat to the sunk cost argu­ment, how­ev­er. If the game for which the posi­tions are allo­cated has mul­ti­ple equi­lib­ria, an entry fee may affect the equi­lib­rium that is being select­ed. Sev­eral exper­i­men­tal stud­ies have demon­strated the force of this prin­ci­ple. For exam­ple, Coop­er, DeJong, Forsythe and Ross (1993), Van Huy­ck, Bat­talio and Beil (1993), and Cachon and Camer­er, (1996) study coor­di­na­tion games with mul­ti­ple equi­lib­ria and find that an entry fee may induce play­ers to coor­di­nate on a differ­ent (Pareto supe­ri­or) equi­lib­ri­um.

    ↩︎
  19. McAfee et al 2007:

    Rep­u­ta­tional Con­cerns. In team rela­tion­ships, each par­tic­i­pan­t’s will­ing­ness to invest depends on the invest­ments of oth­ers. In such cir­cum­stances, a com­mit­ment to fin­ish­ing projects even when they appear ex post unprofitable is valu­able, because such a com­mit­ment induces more effi­cient ex ante invest­ment. Thus, a rep­u­ta­tion for “throw­ing good money after bad”—the clas­sic sunk cost fal­la­cy—­can solve a coor­di­na­tion prob­lem. In con­trast to the desire for com­mit­ment, peo­ple might ratio­nally want to con­ceal bad choices to appear more tal­ent­ed, which may lead them to make fur­ther invest­ments, hop­ing to con­ceal their invest­ments gone bad.

    Kan­odia, Bush­man, and Dick­haut (1989), Pren­der­gast and Stole (1996), and Camerer and Weber (1999) develop prin­ci­pal-a­gent mod­els in which ratio­nal agents invest more if they have invested more in the past to pro­tect their rep­u­ta­tion for abil­i­ty. We elu­ci­date the gen­eral fea­tures of these mod­els below and argue that con­cerns about rep­u­ta­tion for abil­ity are espe­cially pow­er­ful in explain­ing appar­ent reac­tions to sunk costs by politi­cians. [see also Car­pen­ter & Matthews 2003] develop a model in which agents ini­tially make invest­ments inde­pen­dently and are later matched in pairs, their match pro­duces a sur­plus, and they bar­gain over it based on cul­tural norms of fair divi­sion. A fair divi­sion rule in which each agen­t’s sur­plus share is increas­ing in their sunk invest­ment, and decreas­ing in the oth­er’s sunk invest­ment, is shown to be evo­lu­tion­ar­ily sta­ble.

    …If a mem­ber of an ille­gal price-fix­ing car­tel seems likely to con­fess to the gov­ern­ment in exchange for immu­nity from pros­e­cu­tion, the other car­tel mem­bers may race to be first to con­fess, since only the first gets immu­nity (in Europe, such immu­nity is called “leniency”). Sim­i­lar­ly, a spouse who loses faith in the long-term prospects of a mar­riage invests less in the rela­tion­ship, thereby reduc­ing the gains from part­ner­ship, poten­tially doom­ing the rela­tion­ship. In both cas­es, beliefs about the future via­bil­ity mat­ter to the suc­cess of the rela­tion­ship, and there is the poten­tial for self­-ful­fill­ing opti­mistic and pes­simistic beliefs.

    In such a situation, individuals may rationally select others who stay in the relationship beyond the point of individual rationality, if such a commitment is possible. Indeed, ex ante it is rational to construct exit barriers like costly and difficult divorce laws, so as to reduce early exit. Such exit barriers might be behavioral as well as legal. If an individual can develop a reputation for sticking in a relationship beyond the break-even point, it would make that individual a more desirable partner and thus enhance the set of available partners, as well as encourage greater and longer lasting investment by the chosen partner. One way of creating such a reputation is to act as if one cares about sunk costs…We now formalize this concept using a simple two-period model that sets aside consideration of selection…That is, a slight possibility of breach is collectively harmful; both agents would be ex ante better off if they could prevent breach when V − ρ < 1, which holds as long as the reputation cost ρ of breaching is not too small. In this model, a tendency to stay in the relationship due to a large sunk investment would be beneficial to each party.

    ↩︎
  20. The qualifier is because hyperbolic discounting has been demonstrated in many primates, as have a number of other biases; eg Chen et al 2006, “How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior”↩︎

  21. See also Radford & Blakey 2000:

    Nest-de­fence behav­iour of passer­ines is a form of parental invest­ment. Par­ents are select­ed, there­fore, to vary the inten­sity of their nest defence with respect to the value of their off­spring. Great tit, Parus major, males were tested for their defence response to both a nest preda­tor and play­back of a great tit chick dis­tress call. The results from the two tri­als were sim­i­lar; males gave more alarm calls and made more perch changes if they had larger broods and if they had a greater pro­por­tion of sons in their brood. This is the first evi­dence for a rela­tion­ship between nest-de­fence inten­sity and off­spring sex ratio. Pater­nal qual­i­ty, size, age and con­di­tion, lay date and chick con­di­tion did not sig­nifi­cantly influ­ence any of the mea­sured nest-de­fence para­me­ters.

    …The most consistent pattern found in studies of avian nest defence has been an increase in the level of the parental response to predators from clutch initiation to fledging (e.g. Biermann & Robertson 1981; Regelmann & Curio 1983; Montgomerie & Weatherhead 1988; Wiklund 1990a). This supports the prediction from parental investment theory (Trivers 1972) that parents should risk more in defence of young that are more valuable to them. The intensity of nest defence is also expected to be positively correlated with brood size because the benefits of deterring a predator will increase with offspring number (Williams 1966; Wiklund 1990b).

    ↩︎
  22. Northcraft & Wolf 1984, Academy of Management Review, 9, 225–234:

    The deci­sion maker also may treat the neg­a­tive feed­back as sim­ply a learn­ing expe­ri­ence-a cue to redi­rect efforts within a project rather than aban­don it (Con­nol­ly, 1976).

    …In some cases (Brock­n­er, Shaw, & Rubin, 1979), the expected rate of return for fur­ther finan­cial com­mit­ment even can be shown with a few assump­tions to be increas­ing and (after a cer­tain amount of invest­ment) finan­cially advis­able, despite the claim that fur­ther resource com­mit­ment under the cir­cum­stances is psy­cho­log­i­cally rather than eco­nom­i­cally moti­vat­ed…­More to the point, the life cycle model clearly reveals the psy­chol­o­gist’s fal­la­cy: con­tin­u­ing a project in the face of a finan­cial set­back is not always irra­tional (it depends on the stage in the project and the mag­ni­tude of the finan­cial set­back). Sec­ond, the life cycle model pro­vides an insight into the man­ager’s pre­oc­cu­pa­tion with a pro­jec­t’s finan­cial past. It demon­strates how a pro­jec­t’s finan­cial past can be used heuris­ti­cally to under­stand the pro­jec­t’s future.

    Fried­man et al 2006:

    …There are also several possible rational explanations for an apparent concern with sunk costs. Maintaining a reputation for finishing what you start may have sufficient value to compensate for the expected loss on an additional investment. The ‘real option’ value (e.g., Dixit and Pindyck, 1994 [Investment Under Uncertainty]) [cf. O’Brien & Folta 2009, Tiwana & Fichman 2006] of continuing a project also may offset an expected loss. in organizations may make it personally better for a manager to continue an unprofitable project than to cancel it and take the heat from its supporters (e.g., Milgrom and Roberts, 1992 [Economics, Organization, and Management]).

    (Certainty effects seem to be supported by fMRI imaging.) One may ask why capital constraints aren’t solved—if the projects really are good, profitable ideas—by resorting to equity or debt; but those are always last resorts, due to fundamental coordination & trust issues. McAfee et al 2007:

    Abundant theoretical literature in corporate finance shows that imposing financial constraints on firm managers improves (see Stiglitz and Weiss, 1981, Myers and Majluf, 1984, Lewis and Sappington, 1989, and Hart and Moore, 1995). The theoretical conclusion finds overwhelming empirical support, and only a small fraction of business investment is funded by borrowing (see Fazzari and Athey, 1987, Fazzari and Petersen, 1993, and Love, 2003). When managers face financial constraints, sunk costs must influence firm investments simply because of budgets…Firms with financial constraints might rationally react to sunk costs by investing more in a project, rather than less, because the ability to undertake alternative investments declines in the level of sunk costs…Given limited resources, if the firm has already sunk more resources into the current project, then the value of the option to start a new project if it arises is lower relative to the value of the option to continue the current project, because fewer resources are left over to bring any new project to fruition, and more resources have already been spent to bring the current project to fruition. Therefore, the firm’s incentive to continue investing in the current project is higher the more resources it has already sunk into the project.

    • Stiglitz, Joseph E. and Weiss, Andrew, 1981. “Credit Rationing in Mar­kets with Imper­fect Infor­ma­tion”, Amer­i­can Eco­nomic Review 71, 393–410.
    • Myers, Stew­art and Majluf, Nicholas S., 1984. “Cor­po­rate Financ­ing and Invest­ment Deci­sions when Firms Have Infor­ma­tion that Investors Do Not Have,” Jour­nal of Finan­cial Eco­nom­ics 13, 187–221
    • Lewis, Tracy and Sap­ping­ton, David E. M., 1989. “Coun­ter­vail­ing Incen­tives in Agency Prob­lems,” Jour­nal of Eco­nomic The­ory 49, 294–313
    • Hart, Oliver and Moore, John, 1995. “Debt and Senior­i­ty: An Analy­sis of the Role of Hard Claims in Con­strain­ing Man­age­ment,” Amer­i­can Eco­nomic Review 85, 567–585
    • Faz­zari, Steven and Athey, Michael J., 1987. “Asym­met­ric Infor­ma­tion, Financ­ing Con­straints, and Invest­ment,” Review of Eco­nom­ics and Sta­tis­tics 69, 481–487.
    • Faz­zari, Steven and Petersen, Bruce, 1993. “Work­ing Cap­i­tal and Fixed Invest­ment: New Evi­dence on Financ­ing Con­straints,” RAND Jour­nal of Eco­nom­ics 24, 328–342
    • Love, Ines­sa, 2003. “Finan­cial Devel­op­ment and Financ­ing Con­straints: Inter­na­tional Evi­dence from the Struc­tural Invest­ment Mod­el,” Review of Finan­cial Stud­ies 16, 765–791
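
    To make the budget-constraint argument above concrete, here is a minimal toy sketch (my own invented numbers and functions, not McAfee et al’s model): with a hard budget, the more that is already sunk into the current project, the less is left to fund any new opportunity, so the option to switch loses value relative to simply finishing, without any fallacy being involved.

    ```python
    # Toy sketch: a firm with a fixed budget chooses between finishing the current
    # project and holding its remaining cash for a possible new project. All
    # payoffs, costs, and probabilities below are illustrative assumptions.

    def value_of_finishing(payoff, cost_to_finish, budget_left):
        """EV of completing the current project, if the remaining budget allows it."""
        return payoff - cost_to_finish if budget_left >= cost_to_finish else 0.0

    def value_of_switching(p_new, payoff_new, cost_new, budget_left):
        """EV of abandoning and waiting for a possible new project."""
        return p_new * (payoff_new - cost_new) if budget_left >= cost_new else 0.0

    TOTAL_BUDGET = 10.0
    for sunk in (2.0, 6.0, 9.0):   # amount already spent on the current project
        left = TOTAL_BUDGET - sunk
        finish = value_of_finishing(payoff=3.0, cost_to_finish=1.0, budget_left=left)
        switch = value_of_switching(p_new=0.5, payoff_new=12.0, cost_new=7.0, budget_left=left)
        best = "switch" if switch > finish else "finish"
        print(f"sunk={sunk}: finish EV={finish:.1f}, switch EV={switch:.1f} -> {best}")
    ```

    With little sunk (and ample budget left), switching is worth more; with most of the budget already sunk, the new project can no longer be afforded and continuing is the rational choice, even though the behavior looks like honoring sunk costs.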
    ↩︎
  23. The swans:

    Although none of these terms are used, the same phe­nom­ena is also observed by Nolet et al. (2001). In par­tic­u­lar, tun­dra swans must expend more energy to “up-end” to feed on deep­-wa­ter tuber patches than they do to “head­-dip” to feed on shal­low-wa­ter patch­es; how­ev­er, con­trary to the expec­ta­tions of Nolet et al., the swans feed for a longer time on each high­-cost deep­-wa­ter patch. In every con­text, the obser­va­tion of the sunk-cost effect is an enigma because intu­ition sug­gests that this behav­ior is sub­op­ti­mal. Here, we show how opti­miza­tion of Eq. (3) pre­dicts the sunk-cost effect for cer­tain sce­nar­ios; a com­mon ele­ment of every case is a large ini­tial cost.

    ↩︎
  24. Klaczyn­ski & Cot­trell 2004:

    Although con­sid­er­able evi­dence indi­cates that adults com­mit the SC fal­lacy fre­quent­ly, age differ­ences in the propen­sity to hon­our sunk costs have been lit­tle stud­ied. In their inves­ti­ga­tions of 7–15-year-olds (Study 1) and 5–12-year-olds (Study 2), Baron et al. (1993) found no rela­tion­ship between age and SC deci­sions. By con­trast, Klaczyn­ski (2001b) reported that the SC fal­lacy decreased from early ado­les­cence to adult­hood, although nor­ma­tive deci­sions were infre­quent across ages. A third pat­tern of find­ings is reviewed by Arkes and Ayton (1999). Specifi­cal­ly, Arkes and Ayton argue that two stud­ies (Krouse, 1986; Web­ley & Plais­er, 1998) indi­cate that younger chil­dren com­mit the SC fal­lacy less fre­quently than older chil­dren. Mak­ing sense of these con­flict­ing find­ings is diffi­cult because crit­i­cisms can be levied against each inves­ti­ga­tion. For instance, Arkes and Ayton (1999) ques­tioned the null find­ings of Baron et al. (1993) because sam­ple sizes were small (e.g., in Baron et al., Study 2, n per age group ranged from 7 to 17). The prob­lems used by Krouse (1986) and Web­ley and Plaiser (1998) were not, strictly speak­ing, SC prob­lems (rather, they were prob­lems of ‘men­tal account­ing’; see Web­ley & Plais­er, 1998). Because Klaczyn­ski (2001b) did not include chil­dren in his sam­ple, the age trends he reported are lim­ited to ado­les­cence. Thus, an inter­pretable mon­tage of age trends in SC deci­sions can­not be cre­ated from prior research.

    …An alternative proposition is based on the previously outlined theory of the role of metacognition in mediating interactions between analytic and heuristic processing. In this view, even young children have had ample opportunities to convert the ‘waste not’ heuristic from a conscious strategy to an automatically activated heuristic stored as a procedural memory. Evidence from children’s experiences with food (e.g., Birch, Fisher, & Grimm-Thomas, 1999) provides some support for the argument that even preschoolers are frequently reinforced for not ‘wasting’ food. Mothers commonly exhort their children to ‘clean up their plates’ even though they are sated and even though the nutritional effects of eating more than their bodies require are generally negative. If the ‘waste not’ heuristic is automatically activated in sunk cost situations for both children and adults, then one possibility is that no age differences in committing the fallacy should be expected. However, if activated heuristics are momentarily available for evaluation in working memory, then the superior metacognitive abilities of adolescents and adults should allow them to intercede in experiential processing before the heuristic is actually used. Although the evidence is clear that most adults do not take advantage of this opportunity for evaluation, the proportion of adolescents and adults who actively inhibit the ‘waste not’ heuristic should be greater than the same proportion of children.

    ↩︎
  25. eg. “Dis­count­ing of Delayed Rewards: A Life-S­pan Com­par­i­son”, Green et al 1994; abstract:

    In this study, chil­dren, young adults, and older adults chose between imme­di­ate and delayed hypo­thet­i­cal mon­e­tary rewards. The amount of the delayed reward was held con­stant while its delay was var­ied. All three age groups showed delay dis­count­ing; that is, the amount of an imme­di­ate reward judged to be of equal value to the delayed reward decreased as a func­tion of delay. The rate of dis­count­ing was high­est for chil­dren and low­est for older adults, pre­dict­ing a life-s­pan devel­op­men­tal trend toward increased self­-con­trol. Dis­count­ing of delayed rewards by all three age groups was well described by a sin­gle func­tion with age-sen­si­tive para­me­ters (all R2s > .94). Thus, even though there are quan­ti­ta­tive age differ­ences in delay dis­count­ing, the exis­tence of an age-in­vari­ant form of dis­count func­tion sug­gests that the process of choos­ing between rewards of differ­ent amounts and delays is qual­i­ta­tively sim­i­lar across the life span.
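
    For concreteness, a minimal sketch of what such a single, age-sensitive discount function looks like; the functional form used here (a standard one-parameter hyperbola, V = A / (1 + kD)) and the k values are illustrative assumptions, not the actual function or parameters fitted by Green et al 1994.

    ```python
    # Hyperbolic-style delay discounting: higher k means steeper discounting.
    # The abstract reports children discounting most steeply, older adults least.

    def discounted_value(amount, delay, k):
        """Subjective present value of `amount` received after `delay` time units."""
        return amount / (1 + k * delay)

    for group, k in [("children", 0.50), ("young adults", 0.10), ("older adults", 0.02)]:
        value_now = discounted_value(1000, delay=12, k=k)
        print(f"{group:12s}: $1000 in 12 months is worth about ${value_now:.0f} now")
    ```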

    ↩︎
  26. “The Bias Against Cre­ativ­i­ty: Why Peo­ple Desire But Reject Cre­ative Ideas”, Mueller et al 2011:

    Uncer­tainty is an aver­sive state (Fiske & Tay­lor, 1991 [Social cog­ni­tion]; Hei­der, 1958 [The psy­chol­ogy of inter­per­sonal rela­tions]) which peo­ple feel a strong moti­va­tion to dimin­ish and avoid (Whit­son & Galin­sky, 2008).

    ↩︎
  27. “Search­ing for the Sunk Cost Fal­lacy”, Fried­man et al 2007:

    Sub­jects play a com­puter game in which they decide whether to keep dig­ging for trea­sure on an island or to sink a cost (which will turn out to be either high or low) to move to another island. The research hypoth­e­sis is that sub­jects will stay longer on islands that were more costly to find. Nine treat­ment vari­ables are con­sid­ered, e.g. alter­na­tive visual dis­plays, whether the trea­sure value of an island is shown on arrival or dis­cov­ered by trial and error, and alter­na­tive para­me­ters for sunk costs. The data reveal a sur­pris­ingly small and erratic sunk cost effect that is gen­er­ally insen­si­tive to the pro­posed psy­cho­log­i­cal dri­vers.

    I cite Fried­man 2006 here so much because it’s unusu­al—as McAfee et al 2007 puts it:

    …Most of the exist­ing empir­i­cal work has not con­trolled for chang­ing haz­ards, option val­ues, rep­u­ta­tions for abil­ity and com­mit­ment, and bud­get con­straints. We are aware of only one study in which sev­eral of these fac­tors are elim­i­nat­ed—Fried­man et al. (2006). In an exper­i­men­tal envi­ron­ment with­out option value or rep­u­ta­tion con­sid­er­a­tions, the authors find only very small and sta­tis­ti­cally insignifi­cant sunk cost effects in the major­ity of their treat­ments, con­sis­tent with the ratio­nal the­ory pre­sented here.

    ↩︎
  28. On the lessons of Garland et al 1990’s observation of an absence of sunk cost fallacy, McAfee et al 2007:

    While some projects have an increas­ing haz­ard, oth­ers appear to have a decreas­ing haz­ard. For exam­ple, , orig­i­nally expected to cost $1 bil­lion (see Epstein, 1998), prob­a­bly has a decreas­ing haz­ard; given ini­tial fail­ure, the odds of imme­di­ate suc­cess recede and the likely expen­di­tures required to com­plete grow. Oil-ex­plo­ration projects might also be char­ac­ter­ized by decreas­ing haz­ards. Sup­pose a firm acquires a license to drill a num­ber of wells in a fixed area. It decides to drill a well on a par­tic­u­lar spot in the area. Sup­pose the well turns out to be dry. The costs of drilling the well are then sunk. But the dry well might indi­cate that the like­li­hood of strik­ing oil on another spot in the area is low since the geo­phys­i­cal char­ac­ter­is­tics of sur­face rocks and ter­rain for the next spot are more or less the same as the ones for the pre­vi­ous spot that turned out to be dry. Thus, the firm might be ratio­nally less likely to drill another well. In gen­er­al, firms might be less will­ing to drill another well the more wells they had already found to be dry. This may in part explain the rapid “de-esca­la­tion” observed by Gar­land, Sande­ford, and Rogers (1990) in their oil-ex­plo­ration exper­i­ments.

    ↩︎
  29. Born­stein et al. 1999:

    Measurements and main results: Residents evaluated medical and non-medical situations that varied the amount of previous investment and whether the present decision maker was the same or different from the person who had made the initial investment. They rated reasons both for continuing the initial decision (e.g., stay with the medication already in use) and for switching to a new alternative (e.g., a different medication). There were two main findings: First, the residents’ ratings of whether to continue or switch medical treatments were not influenced by the amount of the initial investment (p’s > 0.05). Second, residents’ reasoning was more normative in medical than in non-medical situations, in which it paralleled that of undergraduates (p’s < 0.05).

    Con­clu­sions: Med­ical res­i­dents’ eval­u­a­tion of treat­ment deci­sions reflected good rea­son­ing, in that they were not influ­enced by the amount of time and/or money that had already been invested in treat­ing a patient. How­ev­er, the res­i­dents did demon­strate a sunk-cost effect in eval­u­at­ing non-med­ical sit­u­a­tions. Thus, any advan­tage in deci­sion mak­ing that is con­ferred by med­ical train­ing appears to be domain spe­cific.

    Some of this was repli­cated & gen­er­al­ized in Braver­man & Blu­men­thal-Barby 2012:

    Specifi­cal­ly, we sur­veyed 389 health care providers in a large urban med­ical cen­ter in the United States dur­ing August 2009. We asked par­tic­i­pants to make a treat­ment rec­om­men­da­tion based on one of four hypo­thet­i­cal clin­i­cal sce­nar­ios that var­ied in the source and type of prior invest­ment described. By com­par­ing rec­om­men­da­tions across sce­nar­ios, we found that providers did not demon­strate a sunk-cost effect; rather, they demon­strated a sig­nifi­cant ten­dency to over-com­pen­sate for the effect. In addi­tion, we found that more than one in ten health care providers rec­om­mended con­tin­u­a­tion of an ineffec­tive treat­ment.

    ↩︎
  30. Staw 1981, “The Esca­la­tion of Com­mit­ment to a Course of Action”:

    …How­ev­er, when choos­ing to com­mit resources, sub­jects did not appear to per­sist unswerv­ingly in the face of con­tin­ued neg­a­tive results or to ignore infor­ma­tion about the pos­si­bil­ity of future returns. These incon­sis­ten­cies led to a third study [Staw & Ross, 1978] designed specifi­cally to find out how indi­vid­u­als process infor­ma­tion fol­low­ing neg­a­tive ver­sus pos­i­tive feed­back. In this third study, pre­vi­ous success/failure and causal infor­ma­tion about a set­back were both exper­i­men­tally var­ied. Results showed that sub­jects invested more resources in a course of action when infor­ma­tion pointed to an exoge­nous rather than endoge­nous cause of a set­back, and this ten­dency was most pro­nounced when sub­jects had been given a pre­vi­ous fail­ure rather than a suc­cess. The exoge­nous cause in this exper­i­ment was one that was both exter­nal to the pro­gram in which sub­jects invested and was unlikely to per­sist, whereas the endoge­nous cause was a prob­lem cen­tral to the pro­gram and likely to per­sist.

    ↩︎
  31. “Con­tin­u­ing invest­ment under con­di­tions of fail­ure: A lab­o­ra­tory study of the lim­its to esca­la­tion”, McCain 1986:

    Brock­ner et al. (1982), fol­low­ing Teger (1980), have also specifi­cally sug­gested that entrap­ment involves two dis­tinct stages. In the first stage sub­jects respond pri­mar­ily to eco­nomic incen­tives, whereas self­-jus­ti­fi­ca­tion sup­pos­edly gov­erns the sec­ond. Brock­ner et al. found that cost salience sig­nifi­cantly reduced entrap­ment early on but had lit­tle effect in later peri­od­s…Thus, a process that reflects efforts to learn both what caused the set­backs and the impli­ca­tions of that cause for future action may pro­vide a bet­ter model of de-esca­la­tion.

    …The find­ings of this study clearly showed that the esca­la­tion effect, defined by a differ­ence between the allo­ca­tions of high- and low-choice sub­jects, was lim­ited to the ini­tial stages of con­tin­u­ing invest­ment. The find­ings were con­sis­tent with pre­vi­ous research (Staw & Fox, 1977) and sup­port the con­tention that invest­ment in fail­ing projects involves two stages. Clear­ly, too, the avail­abil­ity of alter­na­tive invest­ments lim­ited the esca­la­tion effect. When sub­jects were given alter­na­tives to the fail­ing invest­ment, the differ­ence between the invest­ments of the high- and low-choice groups dis­ap­peared. The results showed, as well, that high­-choice sub­jects who dis­played the esca­la­tion effect quit fund­ing the fail­ing invest­ment sooner than com­pa­ra­ble low-choice sub­jects, con­trary to a com­mit­ment per­spec­tive. Sim­i­lar­ly, the declin­ing haz­ard rates observed here sup­port a learn­ing model more than they sup­port the self­-jus­ti­fi­ca­tion mod­el…­Some authors (e.g., Northcraft & Wolf, 1984) have sug­gested that investors react differ­ently to cost over­runs than they react to rev­enue short­falls, yet many esca­la­tion exper­i­ments do not clearly spec­ify whether set­backs result from higher than expected costs or from lower than expected rev­enues. Clear­ly, if investors are sen­si­tive to uncer­tain­ty, as the attri­bu­tional model sug­gests, researchers must con­sider how sub­jects may respond to an inad­e­quately spec­i­fied invest­ment con­text…

    ↩︎
  32. “Sunk and Oppor­tu­nity Costs in Val­u­a­tion and Bid­ding”, Phillips et al 1991, which also men­tions another appar­ent instance of mar­ket agents ini­tially com­mit­ting sunk cost and then learn­ing: Plott & Uhl 1981 “Com­pet­i­tive Equi­lib­rium with Mid­dle­men: An Empir­i­cal Study”, South­ern Eco­nomic Jour­nal↩︎

  33. In particular, “pay-to-bid” auctions such as Swoopo have been called instances of sunk cost fallacy, or “escalation of commitment”, by no less than Richard H. Thaler, and described by one techie as being “as close to pure, distilled evil in a business plan as I’ve ever seen”. But as Wang & Xu 2012 and Caldara 2012 indicate, while people do lose money to the penny auctions, they eventually do learn that penny auctions are not good ideas and escape the trap. A previous analysis of penny auctions, Augenblick 2009, omitted detailed survivorship data but still found some learning effects. And indeed, Swoopo has since shut down.↩︎

  34. “Learn­ing lessons from sunk costs”, Born­stein & Chap­man 1995:

    Study par­tic­i­pants rated the qual­ity of sev­eral argu­ments for con­tin­u­ing an orig­i­nal plan in sunk cost sit­u­a­tions in order to (a) avoid wast­ing resources, (b) learn to make bet­ter deci­sions, (c) pun­ish poor deci­sion mak­ing, and (d) appear con­sis­tent. The lesson-learn­ing argu­ment was per­ceived as most appro­pri­ate when adult teach­ers taught lessons to oth­ers, the orig­i­nal deci­sion was care­lessly made, or if it con­sumed com­par­a­tively more resources. Rat­ings of the lesson-learn­ing argu­ment were higher for teacher-learner than for adult-alone sit­u­a­tions, regard­less of whether the learner was a child or an adult. The impli­ca­tions for improv­ing deci­sion mak­ing and judg­ing whether the sunk cost effect is a bias are dis­cussed…How­ev­er, prospect the­ory does not pre­dict an effect of vari­ables such as whether the deci­sion maker acted alone, the care with which the deci­sion was made, or the nature of the rela­tion­ship between teacher and learn­er. The other three responses were influ­enced by these vari­ables.

    What appears to be a bias in the lab­o­ra­tory may be func­tional behav­ior in a more real­is­tic con­text (Fun­der, 1987; Hog­a­rth, 1981), where a vari­ety of jus­ti­fi­ca­tions for the behav­ior can be con­sid­ered. In gen­er­al, ignor­ing sunk costs is an adap­tive, cost-effec­tive strat­e­gy. Yet what appears to be biased, irra­tional behav­ior—­such as decreas­ing util­ity through atten­tion to irre­triev­ably wasted resources—­can be described as ‘meta-ra­tional’ (Junger­mann, 1986), assum­ing the ben­e­fits of learn­ing and imple­ment­ing the les­son out­weigh the costs of stick­ing to the orig­i­nal plan. How­ev­er, it raises the inter­est­ing ques­tion of why con­tin­u­ing a failed plan is the best (or even a good) way to learn to make bet­ter deci­sions in the future. Per­haps one could both aban­don the cur­rent unsuc­cess­ful plan and learn to think more care­fully in future deci­sions.

    ↩︎
  35. See Arkes & Blumer 1985, which found no resistance in students who were taking an economics course and most of whom had taken other economics courses; also good is Larrick et al 1993 or Larrick et al 1990, “Teaching the use of cost-benefit reasoning in everyday life”:

    It may be seen in Table 1 that econ­o­mists’ rea­son­ing on the uni­ver­sity and inter­na­tional pol­icy ques­tions was more in line with cost-ben­e­fit rules than was that of biol­o­gists and human­ists. This pat­tern was found for the net-ben­e­fit ques­tions (p < 0.05), and for the oppor­tu­nity cost ques­tions (p < 0.05) and a trend was found for the sunk cost ques­tions (p < 0.15)…Third, econ­o­mists were more likely than biol­o­gists and human­ists to report that they ignored sunk costs or attended to oppor­tu­nity costs in their per­sonal deci­sions. For instance, they were more likely to have dropped a research project because it was not prov­ing worth­while. (It is inter­est­ing to note that econ­o­mists were not sim­ply more likely to drop pro­jects. All three dis­ci­plines gave the same answer on aver­age to the ques­tion “have you ever dropped a research project because of a lack of fund­ing?”) Final­ly, econ­o­mists par­tic­i­pated in a greater num­ber of time-sav­ing activ­i­ties…The results show that train­ing peo­ple only briefly on an eco­nomic prin­ci­ple sig­nifi­cantly alters their solu­tions to hypo­thet­i­cal eco­nomic prob­lems [in­clud­ing sunk cost]. More­over, train­ing effects gen­er­al­ize fully from a finan­cial domain to a non­fi­nan­cial one and vice ver­sa…The means for both indices showed that [quick­-e­co­nom­ics] trained sub­jects were ignor­ing sunk costs more than untrained sub­jects, but only the nine-item index based on the ques­tion “Have you bought one of the fol­low­ing items at some time and then not used it in the past month” approached sig­nifi­cance. Trained sub­jects reported that they had paid for but not used 1.14 objects and activ­i­ties com­pared to 0.84 for untrained sub­jects, f(78) = 1.64, p = 0.10.

    On the other hand, Fen­nema & Perkins 2008:

    The results indi­cate that prac­tic­ing Cer­ti­fied Pub­lic Accoun­tants (CPAs), Mas­ters of Busi­ness Admin­is­tra­tion stu­dents (MBAs) and under­grad­u­ate account­ing stu­dents per­form bet­ter than under­grad­u­ate psy­chol­ogy stu­dents. The level of train­ing, as mea­sured by the num­ber of col­lege courses in man­age­r­ial account­ing, was found to be pos­i­tively cor­re­lated with per­for­mance, while the level of expe­ri­ence, as mea­sured by years of finan­cial­ly-re­lated work, was not. Jus­ti­fi­ca­tion was found to improve deci­sions only for those par­tic­i­pants with sig­nifi­cant work expe­ri­ence (MBAs and CPAs). Strate­gies used in this type of deci­sion were exam­ined with the sur­pris­ing find­ing that eco­nom­i­cally ratio­nal deci­sions can be made even if sunk costs are not ignored.

    ↩︎
  36. For example, Heath 1995 spends a page criticizing Brockner & Rubin 1985’s setup of endowed subjects buying tickets in a lottery, pointing out they took subjects’ quitting ticket purchases after a long run of ticket-buying as evidence of sunk cost, even though if the expected value of the lottery was positive, the normative rational strategy is for the subject to spend every penny of the endowment buying tickets! “Consider, for example, the average of $3.82 invested in the game with a $10.00 prize. In this game, the average subject quits at a point where the expected benefits from a marginal investment are three times what they were when the subject began investing.” [emphasis added]↩︎

  37. The pre­vi­ously men­tioned stud­ies of sunk cost in chil­dren found min­i­mal cor­re­la­tions with intel­li­gence, when that was mea­sured. For adults, see Strough et al 2008, pre­vi­ously cit­ed. Also, Stanovich, K. E., & West, R. F. (2008b). “On the rel­a­tive inde­pen­dence of think­ing biases and cog­ni­tive abil­ity”. Jour­nal of Per­son­al­ity and Social Psy­chol­ogy, 94, 672–695 (pg 7–8):

    Both cognitive ability groups displayed sunk-cost effects of roughly equal magnitude. For the high-SAT group, the mean in the no-sunk-cost condition was 6.90 and the mean in the sunk-cost condition was 5.08, whereas for the low-SAT group, the mean in the no-sunk-cost condition was 6.50 and the mean in the sunk-cost condition was 4.19. A 2 (cognitive ability) × 2 (condition) ANOVA indicated a significant main effect of cognitive ability, F(1, 725) = 8.40, MSE = 9.13, p < .01, and a significant main effect of condition, F(1, 725) = 84.9, MSE = 9.13, p < .001. There was a slight tendency for the low-SAT participants to show a larger sunk-cost effect, but the Cognitive Ability × Condition interaction did not attain statistical significance, F(1, 725) = 1.21, MSE = 9.13. The interaction was also tested in a regression analysis in which SAT was treated as a continuous variable rather than as a dichotomous variable. The Form × SAT cross product, when entered third in the equation, was not significant, F(1, 725) = 0.32.

    The sunk-cost effect thus rep­re­sents another cog­ni­tive bias that is not strongly atten­u­ated by cog­ni­tive abil­i­ty. How­ev­er, this is true only when it is assessed in a between-sub­jects con­text. Using a sim­i­lar sunk-cost prob­lem, Stanovich & West 1999 did find an asso­ci­a­tion with cog­ni­tive abil­ity when par­tic­i­pants responded in a with­in-sub­jects design.

    And Parker & Fis­chhoff 2005:

    The first two rows of Table 5 show strong cor­re­la­tions between five of the seven DMC com­po­nent mea­sures and respon­dents’ scores on the WISC-R vocab­u­lary test and on Gian­cola et al.’s (1996) mea­sure of ECF. Con­sis­tency in risk per­cep­tion and resis­tance to sunk cost show lit­tle rela­tion­ship to either of these gen­eral cog­ni­tive abil­i­ties.8,9 [cor­re­la­tions: 0.12, 0.08]

    Bru­ine de Bruin et al 2007, mod­i­fy­ing Parker & Fis­chhoff 2005’s test bat­tery, improved the con­sis­tency of the sunk cost ques­tions, and found sim­i­lar small cor­re­la­tions with their 2 IQ mea­sures, of 0.17 and 0.04. Lar­rick et al 1993 recorded SAT/ACT scores (close prox­ies for IQ) and found some cor­re­la­tion, and noted that IQ was “pos­i­tively related to recog­ni­tion of econ­o­mists’ posi­tion on var­i­ous eco­nomic prob­lems.”↩︎

  38. “Sunk Costs in the NBA: Why Draft Order Affects Play­ing Time and Sur­vival in Pro­fes­sional Bas­ket­ball”:

    A second problem is that much of the escalation literature, despite its intent to explain nonrational sources of commitment, has not directly challenged the assumptions of economic decision making. By and large, the escalation literature has demonstrated that psychological and social factors can influence resource allocation decisions, not that the rational assumptions of decision making are in error. A third weakness is that almost all the escalation literature is laboratory based. Aside from a few recent qualitative case studies (e.g., Ross and Staw, 1986, 1993), escalation predictions have not been confirmed or falsified in real organizational settings, using data that are generated in their natural context. Therefore, despite the size of the escalation literature, it is still uncertain if escalation effects can be generalized from the laboratory to the field.

    …Gar­land, Sande­fur, and Rogers (1990) found a sim­i­lar absence of sunk-cost effects in an exper­i­ment using an oil-drilling sce­nario. Prior expen­di­tures on dry wells were not asso­ci­ated with con­tin­ued drilling, per­haps because dry wells were so clearly seen as reduc­ing rather than increas­ing the like­li­hood of future oil pro­duc­tion. Thus it appears that sunk costs may only be influ­en­tial on project deci­sions when they are linked to the per­cep­tion (if not the real­i­ty) of progress on a course of action.

    …Table 2 also shows that draft order was a significant predictor of minutes played over the entire five-year period. This effect was above and beyond any effects of a player’s performance, injury, or trade status. The regressions showed that every increment in the draft number decreased playing time by as much as 23 minutes in the second year (β = −22.77, p < 0.001, one-tailed test). Likewise, being taken in the second rather than the first round of the draft meant 552 minutes less playing time during a player’s second year in the NBA.

    ↩︎
  39. Fried­man et al 2006: “…Of course, it is hard to com­pletely rule out other expla­na­tions based on unob­served com­po­nents of per­for­mance or the coach­es’ Bayesian pri­ors.”↩︎

  40. “Rein­vest­ment deci­sions by entre­pre­neurs: Ratio­nal deci­sion-mak­ing or esca­la­tion of com­mit­ment?”, McCarthy et al 1993:

    The hypothe­ses were tested using data from a lon­gi­tu­di­nal study involv­ing 1112 firms. It was found that entre­pre­neurs who had started their firms and those who had expressed sub­stan­tial over-con­fi­dence were sig­nifi­cantly more likely to make the deci­sion to expand. The hypothe­ses that those who had part­ners and those who expected to apply their skills would be more likely to expand were not sup­port­ed. Fur­ther­more, and con­sis­tent with pre­vi­ous research, these psy­cho­log­i­cal esca­la­tion pre­dic­tors seemed to exert a greater influ­ence when feed­back from the mar­ket­place was neg­a­tive. As expect­ed, there was a declin­ing influ­ence in the third year as com­pared with the sec­ond. Con­sis­tent with the prior lit­er­a­ture and the hypothe­ses, these psy­cho­log­i­cal pre­dic­tors did show a small, but sys­tem­atic influ­ence upon rein­vest­ment deci­sions.

    …Although the hypoth­e­sis regard­ing PARTNR was not sup­port­ed, as not­ed, the zero-order cor­re­la­tion between PARTNR and NEWCAP2 is in the pre­dicted direc­tion (r = 0.06, p < 0.05, one-tailed). Thus, entre­pre­neurs with part­ners may be more likely to expand the asset base of their firms than they would be if they were sole own­ers. This has sig­nifi­cant impli­ca­tions for entre­pre­neur­ial teams, in that the pres­ence of part­ners does not inhibit the ten­dency to esca­late, but in fact increases that ten­den­cy. This means that hav­ing part­ners is not insur­ance against the ten­dency to esca­late. This is con­sis­tent with the research on esca­la­tion (Baz­er­man et al. 1984).

    …A puzzling finding was the lack of any relationship between financial indicators from the previous year and new capital invested in the business. In other words, there was no systematic relationship between sales growth and expansion of the asset base for these young firms. This may mean that many of these firms started with some excess capacity, so that it was not necessary to add to facilities to support their early growth. It may also mean that management of working capital was erratic. On the other hand, the psychological factors predicted by escalation theory did, in two of four cases, show systematic relationships to additional investment.

    …One final issue worth com­ment is the rel­a­tively small amount of vari­ance accounted for by the mod­els described in this study. The vari­ance accounted for in this research is in line with the find­ings in sim­i­lar stud­ies of esca­la­tion. In a recent field study of the esca­la­tion bias, Schoor­man (1988) reported that the esca­la­tion bias accounted for 6% of the vari­ance in per­for­mance rat­ings. Schoor­man (1988) noted in this arti­cle that the esca­la­tion vari­ables were more pow­er­ful pre­dic­tors of per­for­mance (at 6%) than a mea­sure of abil­ity used in a val­i­dated selec­tion test for these same employ­ees…­Taken together these find­ings pro­vide sup­port for the view that esca­la­tion bias is a sig­nifi­cant and com­mon prob­lem in deci­sion-mak­ing among entre­pre­neurs. The char­ac­ter­is­tics of entre­pre­neurs and the nature of the deci­sions they are required to make leave them par­tic­u­larly vul­ner­a­ble to esca­la­tion bias. Efforts to train entre­pre­neurs to guard against this bias may be very valu­able.

    ↩︎
  41. “Recon­cep­tu­al­iz­ing entre­pre­neur­ial exit: Diver­gent exit routes and their dri­vers”, Wennberg et al 2009:

    …An alter­na­tive fail­ure-avoid­ance strat­egy is to invest addi­tional equi­ty. We found that such rein­vest­ments reduced the prob­a­bil­ity of all exit routes. While pre­vi­ous research on rein­vest­ment also found that rein­vest­ment was not related to well-de­fined per­for­mance lev­els (Mc­Carthy et al., 1993), it is inter­est­ing that it also reduced the odds of har­vest sales and har­vest liq­ui­da­tions. As a fail­ure-avoid­ance strat­e­gy, rein­vest­ment thus seems to be less effec­tive than cost reduc­tion. Cost reduc­tions have direct effects on firm per­for­mance while rein­vest­ments pro­vide a tem­po­rary buffer for fail­ing firms. As sug­gest­ed, there might be dis­in­cen­tives to addi­tional invest­ments if tax laws pun­ish entre­pre­neurs tak­ing out money as salaries or div­i­dends. If cor­rob­o­rat­ed, this is an impor­tant find­ing for pub­lic pol­icy mak­ers.

    ↩︎
  42. See and Ashraf et al 2007; on the hypo­thet­i­cal (sum­mary from Holla & Kre­mer 2008, pg 11):

    When they divide their sam­ple into house­holds that dis­played a sunk-cost effect when respond­ing to a hypo­thet­i­cal sce­nario posed to them by sur­vey­ors and those that did not, they find coeffi­cients of much larger mag­ni­tude for the hypo­thet­i­cal-sunk-cost house­holds, although these remain insignifi­cant and can­not be sta­tis­ti­cally dis­tin­guished from the esti­mated effects for house­holds that did not dis­play this hypo­thet­i­cal sunk-cost effect. Ashraf et al (2007) iden­tify hypo­thet­i­cal-sunk-cost house­holds from their answers to the fol­low­ing ques­tion posed dur­ing the fol­low-up sur­vey: Sup­pose you bought a bot­tle of juice for 1,000 Kw. When you start to drink it, you real­ize you don’t really like the taste. Would you fin­ish drink­ing it?

    ↩︎
  43. Fried­man et al 2006:

    Do Inter­net users respond to sunk time costs? Man­ley & Seltzer (1997) report that after a par­tic­u­lar web­site imposed an access charge, the remain­ing users stayed longer. A rival expla­na­tion to the sunk cost fal­lacy is selec­tion bias: the users with short­est stays when the site was free are those who stopped com­ing when they had to pay. Klein et al (1999) report that users stick around longer on their site after encoun­ter­ing delays while play­ing a game, but again selec­tion bias is a pos­si­ble alter­na­tive expla­na­tion. The issue is impor­tant in e-com­merce because ‘stick­ier’ sites earn more adver­tis­ing rev­enue. Schwartz (1999) [Dig­i­tal Dar­win­ism: 7 Break­through Busi­ness Strate­gies for Sur­viv­ing in the Cut­throat Web Econ­omy] reports that man­agers of the free Wall Street Jour­nal site delib­er­ately slowed the login process in the belief that users would then stay longer. One of us (Lukose) took a sam­ple of 2000 user logs from a web­site and found a sig­nifi­cant pos­i­tive cor­re­la­tion between res­i­dence time at the site and down­load laten­cy. One alter­na­tive expla­na­tion is unob­served con­ges­tion on the web, and users may have been respond­ing more to expected future time costs than to time costs already sunk. Also, good sites may be more pop­u­lar because they are good, lead­ing to (a) con­ges­tion and (b) more time spent on the site.

    ↩︎
  44. Fried­man et al 2006:

    …Bar­ron et al. (2001) find that US firms are sig­nifi­cantly more likely to ter­mi­nate projects fol­low­ing the depar­ture of top man­agers. This might reflect the new man­agers’ insen­si­tiv­ity to costs sunk by their pre­de­ces­sors, or it might sim­ply reflect two aspects of the same broad realign­ment deci­sion.

    ↩︎
  45. A “tra­di­tional maxim” in Chi­nese state­craft.↩︎

  46. Dil Green’s descrip­tion:

    • ‘tactical optimism’: David Bohm’s term for the way in which humans overcome the (so far) inescapable assessment that ‘in the long run, we’re all dead’. Specifically, within the building industry, rife with non-optimal ingrained conditions, you wouldn’t come to work if you weren’t an optimist. Builders who cease to have an optimistic outlook go and find other things to do.

    It’s hard not to think of Arkes & Hutzel 2000, “The Role of Probability of Success Estimates in the Sunk Cost Effect”:

    The sunk cost effect is man­i­fested in a ten­dency to con­tinue an endeavor once an invest­ment has been made. Arkes and Blumer (1985) showed that a sunk cost increases one’s esti­mated prob­a­bil­ity that the endeavor will suc­ceed [ p(s)]. Is this p(s) increase a cause of the sunk cost effect, a con­se­quence of the effect, or both? In Exper­i­ment 1 par­tic­i­pants read a sce­nario in which a sunk cost was or was not pre­sent. Half of each group read what the pre­cise p(s) of the project would be, thereby dis­cour­ag­ing p(s) infla­tion. Nev­er­the­less these par­tic­i­pants man­i­fested the sunk cost effect, sug­gest­ing p(s) infla­tion is not nec­es­sary for the effect to occur. In Exper­i­ment 2 par­tic­i­pants gave p(s) esti­mates before or after the invest­ment deci­sion. The lat­ter group man­i­fested higher p(s), sug­gest­ing that the inflated esti­mate is a con­se­quence of the deci­sion to invest.

    ↩︎
  47. The rea­son­ing goes like this:

    Another rea­son for hon­or­ing the sunk cost of the movie ticket (re­lated to avoid­ing regret) is that you know your­self well enough to real­ize you often make mis­takes. There are many irra­tional rea­sons why you would not want to see the movie after all. Maybe you’re unwill­ing to get up and go to the movie because you feel a lit­tle tired after eat­ing too much. Maybe a friend who has already seen the movie dis­cour­ages you to go, even though you know your tastes in movies don’t always match. Maybe you’re a lit­tle depressed and dis­tracted by work/relationship/whatever prob­lems. Etc.

    For what­ever rea­son, your past self chose to buy the tick­et, and your present self does not want to see the movie. Your present self has more infor­ma­tion. But this extra infor­ma­tion is of dubi­ous qual­i­ty, and is not always rel­e­vant to the deci­sion. But it still influ­ences your state of mind, and you know that. How do you know which self is right? You don’t, until after you’ve seen the movie. The mar­ginal costs, in terms of men­tal dis­com­fort, of see­ing the movie and not lik­ing it, are usu­ally smaller than the mar­ginal ben­e­fit of stay­ing home and think­ing about what a great movie it could have been. The rea­son­ing behind this triv­ial exam­ple can eas­ily be adapted to sunk cost choices in sit­u­a­tions that do mat­ter.

    And again:

    Peo­ple who take into account sunken costs in every­day deci­sions will make bet­ter deci­sions on aver­age. My argu­ment relies on the propo­si­tion that a per­son’s esti­mate of his own util­ity func­tion is highly noisy. In other words, you don’t really know if going to the movie will make you happy or not, until you actu­ally do it.

    So if you’re in this movie-go­ing sit­u­a­tion, then you have at least two pieces of data. Your cur­rent self has pro­duced an esti­mate that says the util­ity of going to the movie is neg­a­tive. But your for­mer self pro­duced an esti­mate that says the util­ity is sub­stan­tially pos­i­tive—e­nough so that he was will­ing to fork over $10. So maybe you aver­age out the esti­mates: if you cur­rently value the movie at -$5, then the aver­age value is still pos­i­tive and you should go. The real ques­tion is how con­fi­dent you are in your cur­rent esti­mate, and whether that con­fi­dence is jus­ti­fied by real new infor­ma­tion.
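
    One minimal way to formalize this argument (a sketch with invented numbers; the precision-weighting scheme is my assumption, not the commenter’s): treat the past self’s implied valuation and the present self’s valuation as two noisy estimates of the movie’s true value to you, weight each by how much you trust it, and go only if the combined estimate is positive.

    ```python
    # Combine the two selves' valuations by inverse-variance (precision) weighting:
    # the noisier an estimate is assumed to be, the less it counts.

    def combined_value(past_value, past_sd, present_value, present_sd):
        """Precision-weighted average of the two selves' value estimates."""
        w_past, w_present = 1 / past_sd**2, 1 / present_sd**2
        return (w_past * past_value + w_present * present_value) / (w_past + w_present)

    # Past self paid $10, implying a value of at least $10; present self, tired and
    # distracted, values it at -$5 but is assumed to be twice as noisy.
    estimate = combined_value(past_value=10, past_sd=3, present_value=-5, present_sd=6)
    print(f"combined estimate: ${estimate:.2f}")   # ~$7 > 0, so go see the movie
    ```

    On the commenter’s simpler equal-weight averaging, the same numbers give ($10 + −$5)/2 = $2.50, still positive; the conclusion reverses only if the present estimate is weighted more than twice as heavily as the past one.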

    ↩︎
  48. “Real artists ship”, as the saying goes, and don’t give in to the temptation to rewrite entire systems unless starting over or starting an entirely new system is truly necessary (particularly given ). One might call using “sunk cost fallacy” to justify abandoning partial projects for new projects the “‘sunk cost fallacy’ fallacy”:

    I have a prob­lem with never fin­ish­ing things that I want to work on. I get enthu­si­as­tic about them for a while, but then find some­thing else to work on. This prob­lem seems to be pow­ered par­tially by my sunk costs fal­lacy hooks. When faced with the choice of fin­ish­ing my cur­rent project or start­ing this shiny new pro­ject, my sunk costs hook acti­vates and says “eval­u­ate future expected util­ity and ignore sunk costs”. The new project looks very shiny com­pared to the old pro­ject, enough that it looks like a bet­ter thing to work on than the rest of the cur­rent pro­ject. The trou­ble is that this always seems to be the case. It seems weird that the awe­some­ness of my project ideas would have expo­nen­tial growth over time, so there must be some­thing else here.

    Johni­cholas:

    …Some­times it can be hard to main­tain a good bal­ance among mul­ti­ple activ­i­ties. For exam­ple, it is impor­tant to notice new good ideas. How­ev­er, I tend to spend too much time pur­su­ing nov­el­ty, and not enough time work­ing on the best idea that I’ve found so far. There is a tra­di­tion of browser games (see ) that enforce a kind of bal­ance using a vir­tual cur­rency of ‘turns’. You accu­mu­late turns slowly in real time, and essen­tially every action within the game uses up turns. This enforces not spend­ing too much time play­ing the game (and increases the per­ceived value of the game via forced arti­fi­cial scarci­ty, of course). If I gave myself ‘explore dol­lars’ for doing non-ex­plo­ration (so-called exploit) tasks, and charged myself for doing explo­ration tasks (like read­ing or Wikipedi­a), I could enforce a bal­ance. If I were also prone to the oppo­site prob­lem (‘A few months in the lab can often save whole hours in the library.’), then I might use two cur­ren­cies; explor­ing costs explore points but rewards with exploit points, and exploit­ing costs exploit points but rewards with explore points. (Vir­tual cur­ren­cies are ubiq­ui­tous in games, and they can be used for many pur­pos­es; I expect to find them able to be placed across from many differ­ent fail­ure modes.)

    Mass Dri­ver:

    …I have the same prob­lem at work; although, by main­stream soci­ety’s stan­dards, I am a rea­son­ably suc­cess­ful pro­fes­sion­al, I can’t really sit down and write a great essay when I’m too hot, or, at least, it seems like I would be more pro­duc­tive if I stopped writ­ing for 5 min­utes and cranked up the A/C or changed into shorts. An hour lat­er, it seems like I would be more pro­duc­tive if I stopped writ­ing for 20 min­utes and ate lunch. Later that after­noon, it seems like I would be more pro­duc­tive if I stopped for a few min­utes and read an inter­est­ing arti­cle on gen­eral sci­ence. These things hap­pen even in an ideal work­ing envi­ron­ment, when I’m by myself in a place I’m famil­iar with. If I have cowork­ers, or if I’m in a new town, there are even more dis­trac­tions. If I have to learn who to ask for help with learn­ing to use the new soft­ware so that I can research the data that I need to write a report, then I might spend 6 hours prepar­ing to spend 1 hour writ­ing a report.

    All this wor­ries me for two rea­sons: (1) I might be fail­ing to actu­ally opti­mize for my goals if I only spend 10–20% of my time directly per­form­ing tar­get actions like ‘write essay’ or ‘kayak with friends’, and (2) even if I am suc­cess­fully opti­miz­ing, it sucks that the way to achieve the results that I want is to let my atten­tion dwell on the most effi­cient ways to, say, brush my teeth. I don’t just want to go kayak­ing, I want to think about kayak­ing. Think­ing about dri­ving to the river seems like a waste of cog­ni­tive ‘time’ to me.

    Daniel Meade:

    …I’ve had so many ‘projects’ over the past few years I’ve lost count. Has any one of them seen the light of day? Well yes, but that failed mis­er­ably. The point is, it’s all too easy to get dis­tracted and move onto some­thing else, per­haps it’s that hur­dle where you’re just not sure what to do next or how to do it, so instead of find­ing a way to tackle it head on, you take the easy way out and start some­thing new. I’m quick to blame my fail­ings on the lack of cap­i­tal, that I ‘need’ to get my projects off the ground. And that jus­ti­fies my avoid­ance. Of course, it does­n’t, far from it. But I just don’t know how to push through, not right now any way.

    We have all heard of businesses falling prey to sunk costs, but that tells us little unless we also know to what degree, if any, they err in the opposite direction by switching too much; from “How DigiCash Blew Everything”, Next! Magazine:

    It all started out quite nicely. The brand new company sold a smart card for closed systems which was a cash-cow for years. It was at this time that the first irritants appeared. Even if you are a brilliant scientist, that doesn’t mean you are a good manager. Chaum was a control freak, someone who couldn’t delegate anything to anyone else, and insisted upon watching over everybody’s shoulders. “That resulted in slowing down research,” explains an ex-DigiCash employee who wished to remain anonymous. “We had a lot of half-finished product. He continuously changed his mind about where things were headed.”

    ↩︎
  49. Initiation rituals may increase commitment as they become more unpleasant or demanding (Aronson & Mills 1959); and we can see everyday consumerist attempts to harness the same effect, such as buying gym memberships or exercise equipment in the hope that the outlay will force us to use them; McAfee et al 2007:

    Anec­do­tal evi­dence sug­gests that indi­vid­u­als may even exploit their own reac­tions to sunk expen­di­tures to their advan­tage. Steele (1996, p. 610) [cf. Elster 2000, Ulysses Unbound: Stud­ies in Ratio­nal­i­ty, Pre­com­mit­ment and Con­straints, and Kelly 2004] and Wal­ton (2002, p. 479) recount sto­ries of indi­vid­u­als who buy exer­cise machines or gym mem­ber­ships that cost in the thou­sands of dol­lars, even though they are reluc­tant to spend this much mon­ey, rea­son­ing that if they do, it will make them exer­cise, which is good for their health. A reac­tion to sunk costs that assists in com­mit­ment is often help­ful.

    Or pre­pay­ing for lessons, or buy­ing exces­sively expen­sive writ­ing tools:

    ‘Are Moleskines really worth the cost compared to Mead? If so, why?’

    Plenty of people seem to swear by them. But here’s the thing: it’s not so much the cost (in absolute sums, it’s not that large), it’s whether you use it. You obviously sweat over costs; perhaps that sweating can be a cudgel forcing you to actually write things down. The more a moleskine isn’t worth buying, the more you will find yourself compelled to use it. Then wouldn’t you be better off in the end?

    Geoffrey Miller mocks this logic (pg 122 of Spent 2011):

    All experienced fitness machine salespeople are well aware that this is the fate of most of their products. What they are really selling consumers is the delusion that the sunk costs of buying the machines will force them to exercise conscientiously. (The consumers know that they could have already been jogging for months around their neighborhood parks in their old running shoes, but they also know that their access to the parks and shoes has not, empirically, been sufficient to induce regular exercise.) So, the consumer thinks: ‘If I invest $3,900 in this PreCor EFX5.33 elliptical trainer, it will (1) call forth regular aerobic activity from my flawed and unworthy body, through the techno-fetishistic magic of its build quality, and (2) save me money in the long run by reducing medical expenses.’ The salesperson meanwhile thinks: ‘20% commission!’ and the manufacturer thinks: ‘We can safely offer a ten-year warranty; because the average machine only gets used seventeen times in the first two months after purchase.’ Everybody’s happy, except for most consumers, and they don’t complain because they think it’s all their fault that they’re failing to use the machine. The few conscientious consumers who do use the equipment regularly enjoy many benefits: efficient muscle building and fat burning through the low perceived exertion of the PreCor’s smooth elliptical movement; a lean body that elicits lust and respect; a self-satisfied glow of moral superiority.
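
    To put rough numbers on Miller’s point, using only the figures in the quote (and treating jogging in old shoes as effectively free, which is my assumption rather than his): the implied price per workout is enormous, while the salesperson’s cut is locked in on day one regardless of whether the machine is ever used again.

    ```python
    # Back-of-the-envelope arithmetic for the elliptical-trainer example,
    # using only figures from the quote; the $0 cost of jogging is assumed.
    price      = 3900    # PreCor EFX5.33 sticker price
    uses       = 17      # average uses in the first two months
    commission = 0.20    # salesperson's cut

    print(f"cost per workout:  ${price / uses:.2f}")        # ~$229 per session
    print(f"salesperson earns: ${price * commission:.2f}")  # $780, use or no use
    ```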

    ↩︎
  50. Grit is a slightly narrower version of Conscientiousness; from “Grit: Perseverance and Passion for Long-Term Goals”:

    …We define grit as per­se­ver­ance and pas­sion for long-term goals. Grit entails work­ing stren­u­ously toward chal­lenges, main­tain­ing effort and inter­est over years despite fail­ure, adver­si­ty, and plateaus in progress. The gritty indi­vid­ual approaches achieve­ment as a marathon; his or her advan­tage is sta­mi­na. Whereas dis­ap­point­ment or bore­dom sig­nals to oth­ers that it is time to change tra­jec­tory and cut loss­es, the gritty indi­vid­ual stays the course. Our hypoth­e­sis that grit is essen­tial to high achieve­ment evolved dur­ing inter­views with pro­fes­sion­als in invest­ment bank­ing, paint­ing, jour­nal­ism, acad­e­mia, med­i­cine, and law. Asked what qual­ity dis­tin­guishes star per­form­ers in their respec­tive fields, these indi­vid­u­als cited grit or a close syn­onym as often as tal­ent. In fact, many were awed by the achieve­ments of peers who did not at first seem as gifted as oth­ers but whose sus­tained com­mit­ment to their ambi­tions was excep­tion­al. Like­wise, many noted with sur­prise that prodi­giously gifted peers did not end up in the upper ech­e­lons of their field.

    More than 100 years prior to our work on grit, Galton (1892) collected biographical information on eminent judges, statesmen, scientists, poets, musicians, painters, wrestlers, and others. Ability alone, he concluded, did not bring about success in any field. Rather, he believed high achievers to be triply blessed by ‘ability combined with zeal and with capacity for hard labour’ (p. 33). Similar conclusions were reached by Cox (1926) in an analysis of the biographies of 301 eminent creators and leaders drawn from a larger sample compiled by J. M. Cattell (1903). Estimated IQ and Cattell’s rank order of eminence were only moderately related (r = .16) when reliability of data was controlled for. Rating geniuses on 67 character traits derived from Webb (1915), Cox concluded that holding constant estimated IQ, the following traits evident in childhood predicted lifetime achievement: ‘persistence of motive and effort, confidence in their abilities, and great strength or force of character’ (p. 218).

    …However, in the Terman longitudinal study of mentally gifted children, the most accomplished men were only 5 points higher in IQ than the least accomplished men (Terman & Oden, 1947). To be sure, restriction on range of IQ partly accounted for the slightness of this gap, but there was sufficient variance in IQ (SD = 10.6, compared with SD = 16 in the general population) in the sample to have expected a much greater difference. More predictive than IQ of whether a mentally gifted Terman subject grew up to be an accomplished professor, lawyer, or doctor were particular noncognitive qualities: ‘Perseverance, Self-Confidence, and Integration toward goals’ (Terman & Oden, 1947, p. 351). Terman and Oden, who were close collaborators of Cox, encouraged further inquiry into why intelligence does not always translate into achievement: ‘Why this is so, what circumstances affect the fruition of human talent, are questions of such transcendent importance that they should be investigated by every method that promises the slightest reduction of our present ignorance’ (p. 352).

    …The cross-sec­tional design of Study 1 lim­its our abil­ity to draw strong causal infer­ences about the observed pos­i­tive asso­ci­a­tion between grit and age. Our intu­ition is that grit grows with age and that one learns from expe­ri­ence that quit­ting plans, shift­ing goals, and start­ing over repeat­edly are not good strate­gies for suc­cess. In fact, a strong desire for nov­elty and a low thresh­old for frus­tra­tion may be adap­tive ear­lier in life: Mov­ing on from dead­-end pur­suits is essen­tial to the dis­cov­ery of more promis­ing paths. How­ev­er, as Eric­s­son and Char­ness (1994) demon­strat­ed, excel­lence takes time, and dis­cov­ery must at some point give way to devel­op­ment. Alter­na­tive­ly, McCrae et al. (1999) spec­u­lated that mat­u­ra­tional changes in per­son­al­i­ty, at least through mid­dle adult­hood, might be genet­i­cally pro­grammed. From an evo­lu­tion­ary psy­chol­ogy per­spec­tive, cer­tain traits may not be as ben­e­fi­cial when seek­ing mates as when pro­vid­ing for and rais­ing a fam­i­ly. A third pos­si­bil­ity is that the observed asso­ci­a­tion between grit and age is a con­se­quence of cohort effects. It may be that each suc­ces­sive gen­er­a­tion of Amer­i­cans, for social and cul­tural rea­sons, has grown up less gritty than the one before (cf. Twenge, Zhang, & Im, 2004).
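
    The ‘restriction on range’ caveat above can be quantified with the standard correction for direct range restriction (Thorndike’s Case II); the sketch below plugs in the standard deviations quoted earlier (10.6 within the Terman sample versus 16 in the general population), while the observed within-sample correlation of 0.2 is an assumed illustrative value, not a figure from the paper.

    ```python
    # Hedged sketch: Thorndike Case II correction for direct range restriction,
    # using the SDs quoted above (10.6 in the Terman sample vs. 16 in the
    # general population). r_obs = 0.20 is an assumed illustrative value.
    from math import sqrt

    def unrestricted_r(r_obs, sd_restricted, sd_unrestricted):
        """Estimate the full-range correlation from a range-restricted one."""
        u = sd_unrestricted / sd_restricted
        return r_obs * u / sqrt(1 - r_obs ** 2 + (r_obs ** 2) * (u ** 2))

    print(round(unrestricted_r(0.20, 10.6, 16), 2))   # ~0.29
    ```

    Selecting only high-IQ children thus does hide part of IQ’s contribution, but, as the authors note, not enough variance was lost to explain the smallness of the gap on its own.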

    ↩︎
  51. “Fac­tors Affect­ing Entrap­ment in Esca­lat­ing Con­flicts: The Impor­tance of Tim­ing”, Brock­ner et al 1982

    All sub­jects were given an ini­tial mon­e­tary stake and had the oppor­tu­nity to win more by tak­ing part in an entrap­ping invest­ment sit­u­a­tion. In Exper­i­ment 1, half the sub­jects were pro­vided with a pay­off chart that made salient the costs asso­ci­ated with invest­ing (High­-cost salience con­di­tion) whereas half were not (Low-cost salience con­di­tion). More­over, for half of the sub­jects the pay­off chart was intro­duced before they were asked to invest (Early con­di­tion) whereas for the other half it was intro­duced after they had invested a con­sid­er­able por­tion of their resources (Late con­di­tion). Entrap­ment was lower in the High salience-Early than in the Low Salience-Early con­di­tion. How­ev­er, there was no differ­ence between groups in the Late con­di­tion. In Exper­i­ment 2, the per­ceived pres­ence of an audi­ence inter­acted with per­son­al­ity vari­ables related to face-sav­ing to effect entrap­ment. When the audi­ence was described as ‘experts in deci­sion mak­ing,’ sub­jects high in pub­lic self­-con­scious­ness (or social anx­i­ety) became less entrapped than those low on these dimen­sions. When the audi­ence con­sisted of indi­vid­u­als who ‘wished sim­ply to observe the exper­i­men­tal pro­ce­dure,’ how­ev­er, high pub­lic self­-con­scious­ness (or social anx­i­ety) indi­vid­u­als were…­more entrapped than lows. More­over, these inter­ac­tion effects occurred when the audi­ence was intro­duced late, but not ear­ly, into the entrap­ment sit­u­a­tion. Taken togeth­er, these (and oth­er) find­ings sug­gest that eco­nomic fac­tors are more influ­en­tial deter­mi­nants of behav­ior in the ear­lier stages of an entrap­ping con­flict, whereas face-sav­ing vari­ables are more potent in the later phas­es.

    …For exam­ple, indi­vid­u­als may ‘throw good money after bad’ in repair­ing an old car, remain for an exces­sively long period of time in unsat­is­fy­ing jobs or roman­tic rela­tion­ships, or decide to esca­late the arms race (even in the face of infor­ma­tion sug­gest­ing the imprac­ti­cal­ity of all these actions) because of their belief that they have ‘too much invested to quit’ (Te­ger, 1980).

    ↩︎
  52. “Face-sav­ing and entrap­ment”, Brock­ner 1981:

    Entrap­ping con­flicts are those in which indi­vid­u­als: (1) have made sub­stan­tial, unre­al­ized invest­ments in pur­suit of some goal, and (2) feel com­pelled to jus­tify these expen­di­tures with con­tin­ued invest­ments, even if the like­li­hood of goal attain­ment is low. It was hypoth­e­sized that entrap­ment (i.e., amount invest­ed) would be influ­enced by the rel­a­tive impor­tance indi­vid­u­als attach to the costs and rewards asso­ci­ated with con­tin­ued invest­ments. Two exper­i­ments tested the notion that entrap­ment would be more pro­nounced when costs were ren­dered less impor­tant (and/or rewards were made more impor­tan­t). In Exper­i­ment 1, half of the sub­jects were instructed before­hand of the virtues of invest­ing con­ser­v­a­tively (Cau­tious con­di­tion), whereas half were informed of the advan­tages of invest­ing a con­sid­er­able amount (Risky con­di­tion). Invest­ments were more than twice as great in the Risky con­di­tion. More­over, con­sis­tent with a face-sav­ing analy­sis, (1) the instruc­tions had a greater effect on sub­jects with high rather than low social anx­i­ety, and (2) indi­vid­u­als with high social anx­i­ety who par­tic­i­pated in front of a large audi­ence were more influ­enced by the instruc­tions than were indi­vid­u­als with low social anx­i­ety who par­tic­i­pated in front of a small audi­ence. In the sec­ond exper­i­ment, the impor­tance of costs and rewards were var­ied in a 2 × 2 design. As pre­dict­ed, sub­jects invested sta­tis­ti­cal­ly-sig­nifi­cantly more when cost impor­tance was low rather than high. Con­trary to expec­ta­tion, reward impor­tance had no effect. Ques­tion­naire data from this study also sug­gested that entrap­ment was at least par­tially medi­ated by the par­tic­i­pants’ con­cern over the way they thought they would be eval­u­at­ed. The­o­ret­i­cal impli­ca­tions are dis­cussed.

    Disagreeing with Brockner 1981 on the role of social concerns, Karavanov & Cai 2007:

    The cur­rent inves­ti­ga­tion did not sup­port the find­ings from pre­vi­ous stud­ies that sug­gest that jus­ti­fi­ca­tion processes and face con­cerns lead to entrap­ment. This study found that only inter­nal self­-jus­ti­fi­ca­tion and oth­er-pos­i­tive face con­cerns are related to entrap­ment, but instead of con­tribut­ing to entrap­ment, these aspects pre­vent indi­vid­u­als from becom­ing entrapped. Per­sonal net­works were demon­strated to have pos­i­tive effect on both self- and oth­er-pos­i­tive face con­cerns, pro­vid­ing empir­i­cal sup­port for the value of using per­sonal net­works as a pre­dic­tor of face goals. How­ev­er, per­sonal net­works did not con­tribute to entrap­ment.

    ↩︎
  53. Heath takes the use of ‘budget accounting’ as often conflicting with normative standards, since it can reduce total returns: in his experiments, the subjects who stuck it out and escalated their commitments earned $7.35, versus $4.84 for the budget-users. My own perspective is to wonder how much budget-making resembles writing down one’s justification for a particular probabilistic prediction, for when one’s predictions are ultimately falsified.↩︎

  54. See, for example, Pala et al 2007, which investigated whether helped people avoid sunk costs any more than being given “a list of important factors”; they didn’t.↩︎