Are Sunk Costs Fallacies?

Human and animal sunk costs often aren’t, and sunk cost bias may be useful on an individual level to encourage learning. Convincing examples of sunk cost bias typically operate on organizational levels and are probably driven by non-psychological causes like competition.
psychology, philosophy, decision-theory, survey
2012-01-24–2019-06-12 · finished · certainty: likely · importance: 9


“It is time to let bygones be bygones”

Khieu Samphan1, head of state

The sunk cost fallacy (“Concorde fallacy”, “escalation bias”, “commitment effect”, etc.) could be defined as when an agent ignores that option X has the highest expected value, and instead chooses option Y because he chose option Y many times before, or simply as “throwing good money after bad”. It can be seen as an attempt to derive some gain from mistaken past choices. (A slogan for avoiding sunk costs: “give up your hopes for a better yesterday!”) The single most famous example, and the reason for it also being called the “Concorde fallacy”, would be the British and French governments investing hundreds of millions of dollars into the development of a supersonic passenger jet despite knowing that it would never succeed commercially2. Since Arkes & Blumer 1985’s3 forceful investigation & denunciation, it has become received wisdom4 that sunk costs are a bane of humanity.

But to what extent is the “sunk cost fallacy” a real fallacy?

Below, I argue the following:

  1. sunk costs are probably issues in big organizations

    • but maybe not ones that can be helped
  2. sunk costs are not issues in animals

  3. sunk costs appear to exist in children & adults

    • but many apparent instances of the fallacy are better explained as part of a learning strategy
    • and there’s little evidence sunk cost-like behavior leads to actual problems in individuals
  4. much of what we call “sunk cost” looks like simple carelessness & thoughtlessness

Subtleties

“One cannot proceed from the informal to the formal by formal means.”

Alan Perlis, “Epigrams on Programming”

A “sunk cost fallacy” is clearly a fallacy in a simple model: ‘imagine an agent A who chooses between option X which will return $10 and option Y which will return $6, and agent A in previous rounds chose Y’. If A chooses X, it will be better off by $4 than if it chooses Y. This is correct and as hard to dispute as ‘A implies B; A; therefore B’. We can call both examples valid. But in philosophy, when we discuss modus ponens, we agree that it is always valid, but we do not always agree that it is sound: that A does in fact imply B, or that A really is the case, and so B is the case. ‘The moon being made of cheese implies the astronauts walked on cheese; the moon is made of cheese; therefore the astronauts walked on cheese’ is logically valid, but not sound, since we don’t think that the moon is made of cheese. Or we differ with the first line as well, pointing out that only some of the Apollo astronauts walked on the moon. We reject the soundness.
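
As a minimal sketch of the toy model (the $10 and $6 are from the example above; the size of the past investment is an arbitrary number of my own), note that the past expenditure enters both sides of the comparison identically, so it cannot change which option is best:

```python
# Minimal sketch of the one-shot model: the past investment is the same
# constant on both sides of the comparison, so it cannot change the ranking.
def net_payoff(future_return, past_investment):
    """Total wealth change, counting the (already spent) past investment."""
    return future_return - past_investment

sunk = 100            # whatever agent A already spent on option Y
option_x, option_y = 10, 6

# Ranking by future return alone:
assert option_x > option_y
# Ranking after subtracting the identical sunk cost from both:
assert net_payoff(option_x, sunk) > net_payoff(option_y, sunk)
# The $4 gap is unchanged; honoring the sunk cost can only lower the total.
print(net_payoff(option_x, sunk) - net_payoff(option_y, sunk))  # -> 4
```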

We can and must do the same thing in eco­nom­ic­s—but ce­teris is never paribus. In sim­ple mod­els, sunk cost is clearly a valid fal­lacy to be avoid­ed. But is the real world com­pli­ant enough to make the fal­lacy sound? No­tice the as­sump­tions we had to make: we wish away is­sues of risk (and risk aver­sion), long-de­layed con­se­quences, changes in op­tions as a re­sult of past in­vest­ment, and so on.

We can illustrate this by looking at an even more sacred aspect of normative economics: exponential discounting. One of the key justifications of exponential discounting is that any other discounting can be money-pumped by an exponential agent investing at each time period at whatever the prevailing return is or loaning at appropriate times. (George Ainslie in The Breakdown of Will gives the example of a hyperbolic agent improvidently selling its winter coat every spring and buying it back just before the snowstorms every winter, being money-pumped by the consistent exponential agent.) One of the assumptions is that certain rates of investment return will be available; but in the real world, rates can stagger around for long periods. Farmer & Geanakoplos 20095 argue that if returns follow a more geometric random walk, hyperbolic discounting is superior6. Are they correct? They are not much-cited or criticized. But even if they are wrong about hyperbolic discounting, it needs proving that exponential discounting does in fact deal correctly with changing returns. (The market over the past few years has not turned in the proverbial 8–9% annual returns, and one wonders if there will ever be a big bull market that makes up for the great stagnation.)
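
For intuition, here is a rough Monte-Carlo sketch (the function names, σ, and horizons are my own illustrative choices, not Farmer & Geanakoplos’s calibration): when the one-period rate itself wanders as a geometric random walk, the certainty-equivalent discount factor E[exp(−Σ r)] is dominated by the paths where rates drift low, so it decays far more slowly at long horizons than discounting at the fixed starting rate.

```python
# Sketch: compare a fixed-rate exponential discount, a hyperbolic discount, and
# the certainty-equivalent discount when the one-period rate itself follows a
# geometric random walk (GRW). All parameters are illustrative only.
import math, random

def exponential(t, r=0.04):
    return math.exp(-r * t)

def hyperbolic(t, k=0.04):
    return 1.0 / (1.0 + k * t)

def grw_certainty_equivalent(t, r0=0.04, sigma=0.15, n_paths=20_000, seed=0):
    """Estimate E[exp(-sum of yearly rates)] when the rate follows a GRW
    (multiplicative lognormal steps with mean 1, one step per year)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        r, cumulative = r0, 0.0
        for _ in range(t):
            cumulative += r
            r *= math.exp(rng.gauss(-0.5 * sigma**2, sigma))
        total += math.exp(-cumulative)
    return total / n_paths

for year in (20, 60, 100):
    print(year, round(exponential(year), 3), round(hyperbolic(year), 3),
          round(grw_certainty_equivalent(year), 3))
```

By Jensen’s inequality the GRW column can never fall below the fixed-rate exponential column, and the gap widens with the horizon, which is the qualitative pattern of Farmer & Geanakoplos’s Table 1 (footnote 6).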

If we look at sunk cost lit­er­a­ture, we must keep many things in mind. For ex­am­ple:

  1. organizations versus individuals

    Sunk costs seem especially common in groups, as has been noticed since the beginning of sunk cost research7; Khan et al 2000 found that culture influenced how much managers were willing to engage in hypothetical sunk costs (South & East Asian more so than North American), and a 2005 meta-analysis found that sunk cost was an issue, especially in software-related projects8, agreeing with a 2009 meta-analysis, Desai & Chulkov. Mukerjee 2011 interviewed principals at Californian schools, finding evidence of sunk cost bias. Wikipedia characterizes the Concorde project as “regarded privately by the British government as a ‘commercial disaster’ which should never have been started, and was almost canceled, but political and legal issues had ultimately made it impossible for either government to pull out.” So at every point, coalitions of politicians and bureaucrats found it in their self-interest to keep the ball rolling. A sunk cost for the government or nation as a whole is far from the same thing as a sunk cost for those coalitions—responsibility is diffused, which encourages sunk costs9. (If Kennedy or other US presidents could not withdraw from Vietnam or Iraq10 or Afghanistan11 due to perceived sunk costs12, perhaps the real problem was why Americans thought Vietnam was so important and why they feared looking weak or provoking another debate.) People commit sunk cost much more easily if someone else is paying, possibly in part because they are trying to still prove themselves right—an understandable and rational choice13! Other anecdotes from Bazerman & Neale 1992 suggest sunk costs can be expensive corporate problems, but of course are only anecdotes; Robert Campeau killed his company by escalating to an impossibly expensive acquisition of Federated Department Stores, but would Campeau ever have been a good corporate raider without his aggressiveness; or can we say the Philip Morris-Procter & Gamble coffee price war was a mistake without a great deal more information; and was Bobby Fischer’s vendetta against the Soviet Union a sunk cost or a rational response to Soviet collusion or simply an early symptom of the apparent mental issues that saw him converting to and being impoverished by a peculiar church and ultimately an internationally persecuted convict in Iceland?

    And why were those coalitions in power in the first place? France and Britain have not found any better systems of government—systems which operate efficiently and are also Nash equilibria, which successfully avoid any sunk costs in their myriads of projects and initiatives. In Joseph Tainter’s 1988 The Collapse of Complex Societies, he argues that societies that overreach do so because it is impossible for the organizations and members to back down on complexity as long as there is still wealth to extract, even when the marginal returns on complexity are diminishing; when we accuse Pueblo Indians of sunk cost and causing their civilization to collapse14, we should keep in mind there may be no governance alternatives. Debacles like the Concorde may be necessary because the alternatives are even worse—decision paralysis or institutional paranoia15. Aggressive policing of projects for sunk costs may wind up violating Chesterton’s fence if managers in later time periods are not very clear on why the projects were started in the first place and what their benefits will be. If we successfully ‘avoid’ sunk cost-style reasoning, does that mean we will avoid future Vietnams, at the expense of World War IIs?16 The danger of unintended consequences comes to mind here, particularly because one study recorded how a bank’s attempt to eliminate sunk cost bias in its loan officers resulted in backfiring and evasion17; the overall results seem to still have been an improvement, but it remains a cautionary lesson.

    What­ever pres­sures and feed­back loops cause sunk cost fal­lacy in or­ga­ni­za­tions may be com­pletely differ­ent from the causes in in­di­vid­u­als.

  2. Non-mon­e­tary re­wards and penal­ties

    “Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.” What does this mean in a sunk cost context? That we should be aware that humans may not take the model at its literal face value (without careful thought or strong encouragement to do so, anyway), treating the situation as simply ‘$10 versus $6 (and sunk cost)’. It may be more like ‘$10 (and your—non-existent—tribe’s condemnation of you as greedy, insincere, small-minded, and disloyal) versus $6 (and sunk cost)’18. If humans really are forced to think like this, then the modeling of payoffs simply doesn’t correspond with reality and of course our judgements will be wrong. Some assumptions spit out sunk costs as rational strategies19. This is not a trivial issue here (see the self-justification literature, eg. Brockner 1981) or in other areas; for example, providing the correct amount of rewards caused many differences in levels of animal intelligence to simply vanish—the rewards had been unequal (see my excerpts of the essay “If a Lion Could Talk: Animal Intelligence and the Evolution of Consciousness”).

  3. Sunk costs ver­sus in­vest­ments and switch­ing costs

    Many choices for lower immediate marginal return are investments for greater future return. A single-stage model cannot capture this. Likewise, switching to new projects is not free, and the more expensive switches are, the fewer switches are optimal (eg. Chupeau et al 2017).

  4. Demon­strated harm

    It’s not enough to sug­gest that a be­hav­ior may be harm­ful; it needs to be demon­strat­ed. One might ar­gue that an al­l-y­ou-can-eat buffet will cause overeat­ing and then long-term harm to health, but do ex­per­i­ments bear out that the­o­ry?

In­deed, meta-analy­sis of es­ca­la­tion effect stud­ies sug­gests that sunk cost be­hav­ior is not one thing but re­flects a va­ri­ety of the­o­rized be­hav­iors & effects of vary­ing ra­tio­nal­i­ty, rang­ing from pro­tect­ing one’s im­age & prin­ci­pal-a­gent con­flict to lack of in­for­ma­tion/op­tions (Sleesman 2012), not all of which can be re­garded as a sim­ple cog­ni­tive bias to be fixed by greater aware­ness.

Animals

“It really is the hardest thing in life for people to decide when to cut their losses.”

“No, it’s not. All you have to do is to periodically pretend that you were magically teleported into your current situation. Anything else is the sunk cost fallacy.”

John, Overcoming Bias

Point 3 leads us to an in­ter­est­ing point about sunk cost: it has only been iden­ti­fied in hu­mans, or pri­mates at the widest20.

Arkes & Ay­ton 1999 (“The Sunk Cost and Con­corde Effects: Are Hu­mans Less Ra­tio­nal Than Lower An­i­mals?”) claims (see also the very sim­i­lar Cu­rio 1987):

The sunk cost effect is a mal­adap­tive eco­nomic be­hav­ior that is man­i­fested in a greater ten­dency to con­tinue an en­deavor once an in­vest­ment in mon­ey, effort, or time has been made. The Con­corde fal­lacy is an­other name for the sunk cost effect, ex­cept that the for­mer term has been ap­plied strictly to lower an­i­mals, whereas the lat­ter has been ap­plied solely to hu­mans. The au­thors con­tend that there are no un­am­bigu­ous in­stances of the Con­corde fal­lacy in lower an­i­mals and also present ev­i­dence that young chil­dren, when placed in an eco­nomic sit­u­a­tion akin to a sunk cost one, ex­hibit more nor­ma­tively cor­rect be­hav­ior than do adults. These find­ings pose an enig­ma: Why do adult hu­mans com­mit an er­ror con­trary to the nor­ma­tive cost-ben­e­fit rules of choice, whereas chil­dren and phy­lo­ge­net­i­cally hum­ble or­gan­isms do not? The au­thors at­tempt to show that this para­dox­i­cal state of affairs is due to hu­mans’ over­gen­er­al­iza­tion of the “Don’t waste” rule.

Specifi­cal­ly, in 1972, Trivers pro­posed that fa­thers are more likely to aban­don chil­dren, and moth­ers less like­ly, be­cause fa­thers in­vest less re­sources into chil­dren—­moth­ers are, in effect, com­mit­ting sunk cost fal­lacy in tak­ing care of them. Dawkins & Carlisle 1976 pointed out that this is a mis­ap­pli­ca­tion of sunk cost, a ver­sion of point #3; Arkes & Ay­ton’s sum­ma­ry:

If parental re­sources be­come de­plet­ed, to which of the two off­spring should nur­tu­rance be given? Ac­cord­ing to Triver­s’s analy­sis, the older of the two off­spring has re­ceived more parental in­vest­ment by dint of its greater age, so the par­ent or par­ents will fa­vor it. This would be an ex­am­ple of a past in­vest­ment gov­ern­ing a cur­rent choice, which is a man­i­fes­ta­tion of the Con­corde fal­lacy and the sunk cost effect. Dawkins and Carlisle sug­gested that the rea­son the older off­spring is pre­ferred is not be­cause of the mag­ni­tude of the prior in­vest­ment, as Trivers had sug­gest­ed, but be­cause of the older off­spring’s need for less in­vest­ment in the fu­ture. Con­sid­er­a­tion of the in­cre­men­tal ben­e­fits and costs, not of the sunk costs, com­pels the con­clu­sion that the older off­spring rep­re­sents a far bet­ter in­vest­ment for the par­ent to make.

Direct testing fails to find the fallacy in animals:

A num­ber of ex­per­i­menters who have tested lower an­i­mals have con­firmed that they sim­ply do not suc­cumb to the fal­lacy (see, e.g., Arm­strong & Robert­son, 1988; Burger et al., 1989; Maestrip­ieri & Al­l­e­va, 1991; Wik­lund, 199021).

A di­rect ex­am­ple of the Trivers vs Dawkins & Carlisle ar­gu­ment:

A pro­to­typ­i­cal study is that of Maestrip­ieri and Al­l­eva [1991], who tested the lit­ter de­fense be­hav­ior of fe­male al­bino mice. On the 8th day of a moth­er’s lac­ta­tion pe­ri­od, a male in­truder was in­tro­duced to four differ­ent groups of mother mice and their lit­ters. Each lit­ter of the first group had been culled at birth to four pups. Each lit­ter of the sec­ond group had been culled at birth to eight pups. In the third group, the lit­ters had been culled at birth to eight pups, but four ad­di­tional pups had been re­moved 3 to 4 hr be­fore the in­truder was in­tro­duced. The fourth group was iden­ti­cal to the third ex­cept that the re­moved pups had been re­turned to the lit­ter after only a 10-min ab­sence.

The logic of the Maestrip­ieri and Al­l­eva (1991) study is straight­for­ward. If each mother at­tended to past in­vest­ment, then those lit­ters that had eight pups dur­ing the prior 8 days should be de­fended most vig­or­ous­ly, as op­posed to those lit­ters that had only four pups. After all, hav­ing cared for eight pups rep­re­sents a larger past in­vest­ment than hav­ing cared for only four. On the other hand, if each mother at­tended to fu­ture costs and ben­e­fits, then those lit­ters that had eight pups at the time of test­ing should be de­fended most vig­or­ous­ly, as op­posed to those lit­ters that had only four pups. The re­sults were that the moth­ers with eight pups at the time of test­ing de­fended their lit­ters more vig­or­ously than did the moth­ers with four pups at the time of test­ing. The two groups of moth­ers with four pups did not differ in their level of ag­gres­sion to­ward the in­trud­er, even though one group of moth­ers had in­vested twice the en­ergy in rais­ing the young be­cause they ini­tially had to care for lit­ters of eight pups.

Arkes & Ay­ton re­but 3 stud­ies by ar­gu­ing:

  1. Dawkins & Brock­mann 1980: dig­ger wasps fight harder in pro­por­tion to how much food they con­tribut­ed, rather than the to­tal—be­cause they are too stu­pid to count the to­tal and only know how much they per­son­ally col­lected & stand to lose
  2. Lav­ery 1995: ci­ch­lid fish suc­cess­ful in breed­ing also fight harder against preda­tors; be­cause this may re­flect an in­trin­sic greater health­i­ness and greater fu­ture op­por­tu­ni­ties, rather than sunk cost fal­la­cy, an ar­gu­ment sim­i­lar to Northcraft & Wolfe 1984’s crit­i­cism of ap­par­ent sunk costs in eco­nom­ics22
  3. Weatherhead 1979: savannah sparrows defend their nests more fiercely as the nest approaches hatching; because, as already pointed out, the closer to hatching, the less future investment is required for X chicks compared to starting all over
  4. To which 3 we may add tun­dra swan feed­ing habits, which are pre­dicted to be op­ti­mal by Pavlic & Passino 201123, who re­mark “we show how op­ti­miza­tion of Eq. 3 pre­dicts the sunk-cost effect for cer­tain sce­nar­ios; a com­mon el­e­ment of every case is a large ini­tial cost.”

(Navarro & Fantino 2004 claim a sunk cost effect in pigeons, but it’s hard to compare its strength to sunk cost in humans, and the setup is complex enough I’m not sure it is sunk cost.)

Humans

Children

Arkes & Ay­ton cite 2 stud­ies find­ing that com­mit­ting sunk cost bias in­creases with age—as in, chil­dren do not com­mit it. They also cite 2 stud­ies say­ing that

Web­ley and Plaisier (1997) tested chil­dren at three differ­ent age groups (5–6, 8–9, and 11–12) with the fol­low­ing mod­i­fi­ca­tion of the Tver­sky and Kah­ne­man (1981) ex­per­i­ment …the older chil­dren pro­vided data anal­o­gous to those found by Tver­sky and Kah­ne­man (1981): When the money was lost, the ma­jor­ity of the re­spon­dents de­cided to buy a tick­et. On the other hand, when the ticket was lost, the ma­jor­ity de­cided not to buy an­other tick­et. This differ­ence was ab­sent in the youngest chil­dren. Note that it is not the case that the youngest chil­dren were re­spond­ing ran­dom­ly. They showed a defi­nite pref­er­ence for pur­chas­ing a new ticket whether the money or the ticket had been lost. Like the an­i­mals that ap­pear to be im­mune to the Con­corde fal­la­cy, young chil­dren seemed to be less sus­cep­ti­ble than older chil­dren to this vari­ant of the sunk cost effect. The re­sults of the study by Krouse (1986) cor­rob­o­rate this find­ing: Com­pared with adult hu­mans, young chil­dren, like an­i­mals, seem to be less sus­cep­ti­ble to the Con­corde fal­la­cy/­sunk cost effect.

… Per­haps the im­pul­sive­ness of young chil­dren (Mis­chel, Shoda, & Ro­driguez, 1989) fos­tered their de­sire to buy a ticket for the mer­ry-go-round right away, re­gard­less of whether a ticket or money had been lost. How­ev­er, this al­ter­na­tive in­ter­pre­ta­tion does not ex­plain why the younger chil­dren said that they would buy the ticket less often than the older chil­dren in the lost-money con­di­tion. Nor does this ex­pla­na­tion ex­plain the greater ad­her­ence to nor­ma­tive rules of de­ci­sion mak­ing by younger chil­dren com­pared with adults in cases where im­pul­sive­ness is not an is­sue (see, e.g., Ja­cobs & Poten­za, 1991; Reyna & El­lis, 1994).

I think Arkes & Ayton are probably wrong about children. Those 2 early studies can be criticized easily24, and other studies point the opposite way. Baron et al 1993 asked poor and rich kids (age 5–12) questions including an Arkes & Blumer 1985 question, and found, in their first study, no difference by age, with ~30% of the 101 kids committing sunk cost and another ~30% unsure; in their second, they asked 2 questions, with ~50% committing sunk cost—and responses on the 2 questions minimally correlated (r = 0.17). Klaczynski & Cottrell 2004 found that correct (non-sunk cost) responses went up with age (age 5–12, 16%; 5–16, 27%; and adults, 37%). Bruine de Bruin et al 2007 found older adults more susceptible than young adults to some tested fallacies, but that sunk cost resistance increased somewhat with age. Strough et al 2008 studied 75 college-age students, finding small or non-significant results for IQ (as did other studies, see later), education, and age; still, older adults (60+) beat their college-age peers at avoiding sunk cost in both Strough et al 2008 & Strough et al 2011.

(Chil­dren also vi­o­late tran­si­tiv­ity of choices & are more hy­per­bolic than adults, which is hardly nor­ma­tive.25)

Uses

Learning & Memory

18. “If the fool would persist in his folly he would become wise.”

46. “You never know what is enough unless you know what is more than enough.”

William Blake, “Proverbs of Hell”

Felix Hoeffler in his 2008 paper “Why humans care about sunk costs while (lower) animals don’t: An evolutionary explanation” takes the previous points at face value and asks how sunk cost might be useful for humans; his answer is that sunk cost forfeits some total gains/utility—just as our simple model indicated—but in exchange for faster learning, an exchange motivated by humans’ well-known risk aversion and dislike of uncertainty26. It is harder to learn the value of choices if one is constantly breaking off before completion to make other choices, or realize any value at all (the classic explore-exploit problem, amusingly illustrated in Arthur C. Clarke’s story “Superiority”).
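
A toy simulation of Hoeffler’s trade-off (all numbers invented): projects here pay off only on completion, so an agent that always jumps to the shiniest new option never realizes or learns anything, while an agent that stubbornly finishes what it started does.

```python
# An over-optimistic switcher vs. a "sunk cost"-style persister, when value is
# only realized (and only learnable) upon completing a project.
import random

random.seed(0)
STEPS, WORK_NEEDED, TRUE_VALUE = 200, 10, 5.0

def run(persist):
    payoff, progress = 0.0, 0
    for _ in range(STEPS):
        # A new opportunity appears, with an optimistically noisy advertised value.
        advertised = TRUE_VALUE + random.uniform(-1, 5)
        if not persist and advertised > TRUE_VALUE:
            progress = 0          # abandon the current project for the "better" one
        progress += 1
        if progress >= WORK_NEEDED:   # completion is the only time value is realized
            payoff += TRUE_VALUE
            progress = 0
    return payoff

print("always-switch agent:", run(persist=False))
print("persistent agent:   ", run(persist=True))
```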

One could imagine just such a not-too-intelligent program which is, like humans, over-optimistic about the value of new projects; it always chooses the highest-value option, of course, to avoid committing sunk cost bias, but oddly enough, it never seems to finish projects because better opportunities seem to keep coming along… In the real world, learning is valuable and one has many reasons to persevere even past the point one regards a decision as a mistake; McAfee et al 2007 (remember the exponential vs hyperbolic discounting example):

Con­sider a project that may take an un­known ex­pen­di­ture to com­plete. The fail­ure to com­plete the project with a given amount of in­vest­ment is in­for­ma­tive about the ex­pected amount needed to com­plete it. There­fore, the ex­pected ad­di­tional in­vest­ment re­quired for fruition will be cor­re­lated with the sunk in­vest­ment. More­over, in a world of ran­dom re­turns, the re­al­iza­tion of a re­turn is in­for­ma­tive about the ex­pected value of con­tin­u­ing a pro­ject. A large loss, which leads to a ra­tio­nal in­fer­ence of a high vari­ance, will often lead to a higher op­tion value be­cause op­tion val­ues tend to rise with vari­ance. Con­se­quent­ly, the in­for­ma­tive­ness of sunk in­vest­ments is am­pli­fied by con­sid­er­a­tion of the op­tion val­ue…­More­over, given lim­ited time to in­vest in pro­jects, as the time re­main­ing shrinks, in­di­vid­u­als have less time over which to amor­tize their costs of ex­per­i­ment­ing with new pro­jects, and there­fore may be ra­tio­nally less likely to aban­don cur­rent pro­ject­s…­Past in­vest­ments in a given course of ac­tion often pro­vide ev­i­dence about whether the course of ac­tion is likely to suc­ceed or fail in the fu­ture. Other things equal, a greater in­vest­ment usu­ally im­plies that suc­cess is closer at hand. Con­sider the fol­low­ing sim­ple mod­el…The only case in which the size of the sunk in­vest­ment can­not affect the fir­m’s ra­tio­nal de­ci­sion about whether to con­tinue in­vest­ing is the rather spe­cial case in which the haz­ard is ex­actly con­stant.
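
To make the quoted point concrete, here is a minimal numerical sketch (the distributions and figures are mine, chosen purely for illustration; McAfee et al specify none of them): the amount already sunk without completion is evidence about how much more the project will need, except in the special memoryless (constant-hazard) case.

```python
# Numerical check: how much more a project is expected to need, given that
# "sunk" has already been spent without completion. Constant hazard
# (exponential total cost) is the only case where the answer ignores the
# sunk amount.
import random, statistics

random.seed(1)

def expected_remaining(draw_total_cost, sunk, n=200_000):
    """E[total - sunk | total > sunk], estimated by simulation."""
    remaining = [c - sunk for c in (draw_total_cost() for _ in range(n)) if c > sunk]
    return statistics.mean(remaining)

exponential = lambda: random.expovariate(1.0)          # constant hazard (memoryless)
lognormal   = lambda: random.lognormvariate(0.0, 1.0)  # non-constant hazard

for sunk in (0.5, 1.0, 2.0, 4.0):
    print(f"sunk={sunk}: "
          f"exponential E[remaining]~{expected_remaining(exponential, sunk):.2f}, "
          f"lognormal E[remaining]~{expected_remaining(lognormal, sunk):.2f}")
# The exponential column stays ~1.0 no matter how much is sunk; the lognormal
# column shifts with the sunk amount, i.e. the sunk investment rationally
# changes the continuation decision.
```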

If this model is ap­plic­a­ble to hu­mans, we would ex­pect to see a clus­ter of re­sults re­lated to age, learn­ing, teach­ing, diffi­culty of avoid­ing even with train­ing or ed­u­ca­tion, min­i­mal avoid­ance with greater in­tel­li­gence, com­ple­tion of tasks/pro­jects, large­ness of sums (the risks most worth avoid­ing), and com­pet­i­tive­ness of en­vi­ron­ment. (As well as oc­ca­sional null re­sults like El­liott & Curme 2006.) And we do! Many oth­er­wise anom­alous re­sults snap into fo­cus with this sug­ges­tion:

  1. in­for­ma­tion is worth most to those who have the least: as we pre­vi­ously saw, the young com­mit sunk cost more than the old

  2. in sit­u­a­tions where par­tic­i­pants can learn and up­date, we should ex­pect sunk cost to be at­ten­u­ated or dis­ap­pear; we do see this (eg. Fried­man et al 200727, Gar­land et al 199028, Born­stein et al 199929, Staw 198130, Mc­Cain 198631, Phillips et al 199132, Wang & Xu 201233)

  3. the nois­ier (higher vari­ance) feed­back on profitabil­ity was, the more data it took be­fore peo­ple give up (Brag­ger et al 1998, Brag­ger et al 2003)

  4. sunk costs were sup­ported more when sub­jects were given jus­ti­fi­ca­tions about learn­ing to make bet­ter de­ci­sions or whether teach­er­s/s­tu­dents were in­volved (Born­stein & Chap­man 199534)

  5. ex­ten­sive eco­nomic train­ing does not stop eco­nom­ics pro­fes­sors from com­mit­ting sunk cost, and stu­dents can be quickly ed­u­cated to an­swer sunk cost ques­tions cor­rect­ly, but with lit­tle car­ry-through to their lives35, and re­searchers in the area ar­gue about whether par­tic­u­lar se­tups even rep­re­sent sunk costs at all, on their own mer­its36 (but don’t feel smug, you prob­a­bly would­n’t do much bet­ter if you took quizzes on it ei­ther)

  6. when mea­sured, avoid­ing sunk cost has lit­tle cor­re­la­tion with in­tel­li­gence37—and one won­ders how much of the cor­re­la­tion comes from in­tel­li­gent peo­ple be­ing more likely to try to con­form to what they have learned is eco­nom­ics or­tho­doxy

  7. a ‘nearly completed’ effect dominates ‘sunk cost’ (Conlon & Garland 1993, Garland & Conlon 1998, Boehne & Paese 2002, Fantino et al 2007)

  8. for example, the larger the proportion of the total already invested, the stronger the sunk cost effect (Garland & Newport 1991)

  9. it is sur­pris­ingly hard to find clear-cut re­al-world non-gov­ern­ment ex­am­ples of se­ri­ous sunk costs; the com­monly cited non-his­tor­i­cal ex­am­ples do not stack up:

    • Staw & Hoang 199538 stud­ied the NBA to see whether high­-ranked but un­der­per­form­ing play­ers were over-used by coach­es, a sunk cost.

      Unfortunately, they do not track the over-use down to actual effects on win-loss records or other measures of team performance, effects which are unlikely to be very large since the overuse amounts to ~10–20 minutes a game. Further, “The econometrics and behavioral economics of escalation of commitment: a re-examination of Staw and Hoang’s NBA data” (Camerer & Weber 1999) claims to do a better analysis of the NBA data and finds the effect is actually weaker. As usual, there are multiple alternatives39.

    • Mc­Carthy et al 199340 is a much-cited cor­re­la­tional study find­ing that small en­tre­pre­neurs in­vest fur­ther in com­pa­nies they founded (rather than bought) when the com­pany ap­par­ently does poor­ly; but they ac­knowl­edge that there are fi­nan­cial strate­gies cloud­ing the data, and like Staw & Hoang, do not tie the small effec­t—which ap­pears only for a year or two, as the en­tre­pre­neurs ap­par­ently learn—to ac­tual neg­a­tive out­comes or de­crease in ex­pected val­ue.

    • sim­i­lar to Mc­Carthy et al 1993, Wennberg et al 200941 tracked ‘exit routes’ for young com­pa­nies such as be­ing bought, merged, or bank­rup­t—but again, they did not tie ap­par­ent sunk cost to ac­tual poor per­for­mance.

    • in 2 stud­ies42, Africans did not en­gage in sunk cost with in­sec­ti­cide-treated bed net­s—whether they paid a sub­si­dized price or free did not affect use lev­els, and in one study, this null effect hap­pened de­spite the same house­hold en­gag­ing in sunk cost for hy­po­thet­i­cal ques­tions

    • In­ter­net users may com­mit sunk cost in brows­ing news web­sites43 (but is that se­ri­ous?)

    • an un­pub­lished 2001 pa­per (Bar­ron et al “The Es­ca­la­tion Phe­nom­e­non and Ex­ec­u­tive Turnover: The­ory and Ev­i­dence”) re­port­edly finds that projects are ‘sig­nifi­cantly more likely’ to be can­celed when their top man­agers leave, sug­gest­ing a sunk cost effect of sub­stan­tial size; but it is un­clear how much money is at stake or whether this is—re­mem­ber point #1—power pol­i­tics44

    • sunk cost only weakly cor­re­lates with sub­op­ti­mal be­hav­ior (much less demon­strates cau­sa­tion):

      Parker & Fischhoff 2005 and Bruine de Bruin et al 2007 compiled a number of questions for several cognitive biases—including sunk cost—and then asked questions about impulsiveness, number of sexual partners, etc, while the latter developed a 34-item index of bad decisions/outcomes (the DOI): ever rent a movie you didn’t watch, get expelled, file for bankruptcy, forfeit your driver’s license, miss an airplane, bounce a check, etc. Then they ran correlations. They replicated the minimal correlation of sunk cost avoidance with IQ, but sunk cost (and ‘path independence’) exhibited fascinating behaviors compared to the other biases/fallacies measured: sunk cost & path independence correlated minimally with the other tested biases/fallacies, their measured reliabilities were almost uselessly low, education did not help much, age helped some, and sunk cost had low correlations with the risky behavior or the DOI (eg. after controlling for decision-making styles, 0.13).

    • Lar­rick et al 1993 found tests of nor­ma­tive eco­nomic rea­son­ing, in­clud­ing sunk cost ques­tions, cor­re­lated with in­creased aca­d­e­mic salaries, even for non-e­co­nomic pro­fes­sors like bi­ol­o­gists & hu­man­ists (but the effect size & causal­ity are un­clear)

  10. Dis­so­ci­a­tion in hy­po­thet­i­cal­s—be­ing told a prior man­ager made de­ci­sion­s—­does not al­ways coun­ter­act effects (Biyal­o­gorsky 2006)

Sunk costs may also re­flect im­per­fect mem­ory about what in­for­ma­tion one had in the past; one may rea­son that one’s past self had bet­ter in­for­ma­tion about all the for­got­ten de­tails that went into a de­ci­sion to make some in­vest­ments, and re­spect their de­ci­sion, thus ap­pear­ing to honor sunk costs (Baliga & Ely 2011).
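
A toy Bayesian version of this reasoning (all numbers invented): if my past self only started the project after seeing favorable evidence which I have since forgotten, then the bare fact of the past investment is itself evidence for continuing.

```python
# If the past self invested only after a noisy-but-informative signal, then
# "past self invested" raises the probability the project is good, even though
# the signal itself is now forgotten.
prior_good = 0.5          # before anyone looked at the project
signal_accuracy = 0.8     # past self saw a noisy signal and invested only if it was positive

# P(invested | good) = 0.8, P(invested | bad) = 0.2
p_invested = signal_accuracy * prior_good + (1 - signal_accuracy) * (1 - prior_good)
posterior_good = signal_accuracy * prior_good / p_invested
print(posterior_good)  # ~0.8: deferring to the forgotten evidence looks like honoring sunk costs
```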

Countering hyperbolic discounting?

“Use barbarians against barbarians.”45

Henry Kissinger, On China 2011

The classic kicker of hyperbolic discounting is that it induces dynamic inconsistency—your far-sighted self is able to calculate what is best for you, but then your near-sighted self screws it all up by changing tack. Knowing this, it may be a good idea to not work on your ‘bad’ habit of being overconfident about your projects46 or engaging in planning fallacy, since at least they will counteract the hyperbolic discounting a little; in particular, you should distrust near-term estimates of the fun or value of activities when you have not learned anything very important47. We could run the same argument but instead point to the psychology research on the connection between blood sugar levels and ‘willpower’; if it takes willpower to start a project but little willpower to cease working on or quit a project, then we would expect our decisions to quit to be correlated with low willpower and blood sugar levels, and hence they should be ignored!
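
A small numeric illustration of that kicker (payoffs and delays invented): under hyperbolic discounting, the large-but-later payoff from finishing a project wins when judged from a distance, but the small-but-sooner payoff from quitting wins up close, so the near-sighted self reverses the far-sighted self's plan.

```python
# Preference reversal under hyperbolic discounting: the ranking of the two
# rewards flips as the decision point approaches.
def hyperbolic_value(amount, delay, k=1.0):
    return amount / (1.0 + k * delay)

small_soon  = (40, 1)    # quit now-ish and grab a small payoff after 1 day
large_later = (100, 10)  # finish the project and get the big payoff after 10 days

for days_before in (30, 0):   # evaluate from 30 days out vs. at the decision point
    v_small = hyperbolic_value(small_soon[0],  small_soon[1]  + days_before)
    v_large = hyperbolic_value(large_later[0], large_later[1] + days_before)
    print(days_before, round(v_small, 2), round(v_large, 2),
          "prefers", "large-later" if v_large > v_small else "small-sooner")
```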

It’s hard to op­pose these is­sues: hu­mans are bi­ased hard­ware. If one does­n’t know ex­actly why a bias is bad, coun­ter­ing a bias may sim­ply let other bi­ases hurt you. Anec­do­tal­ly, a num­ber of peo­ple have prob­lems with quite the op­po­site of sunk cost fal­la­cy—over­es­ti­mat­ing the mar­ginal value of the al­ter­na­tives and dis­count­ing how lit­tle fur­ther in­vest­ment is nec­es­sary48, and peo­ple try to com­mit them­selves by de­lib­er­ately buy­ing things they don’t val­ue.49 (This seems dou­bly plau­si­ble given the high value of Con­sci­en­tious­ness/Grit50—with mar­ginal re­turn high enough that it sug­gests most peo­ple do not com­mit long-term nearly enough, and if sunk cost is the price of reap­ing those gain­s…)

Thoughtlessness: the real bias

One of the known ways to eliminate sunk cost bias is to be explicit and emphasize the costs of continuing (Northcraft & Neale 1986, Tan & Yates 1995, Brockner et al 198251 & conversely Brockner 198152, McCain 1986), as well as setting explicit budgets (Simonson & Staw 1992, Heath 199553, Boulding et al 1997). Fancy tools don’t add much effectiveness54.

This, combined with the previous learning-based theory of sunk cost, suggests something to me: sunk cost is a form of thoughtlessness. One doesn’t intrinsically over-value something due to past investment; one fails to think about the value at all.

Further reading


  1. “World: Asi­a-Paci­fic: US de­mands ‘killing fields’ trial”, BBC 1998-12-29↩︎

  2. And the Con­corde defi­nitely did not suc­ceed com­mer­cial­ly: op­er­at­ing it could barely cover costs, its min­i­mal profits never came close to the to­tal R&D or op­por­tu­nity costs, its last flight was in 2003 (a shock­ingly low life­time in an in­dus­try which typ­i­cally tries to op­er­ate in­di­vid­ual planes, much less en­tire de­signs, for decades), and as of 2017, the Con­corde still has no suc­ces­sors in its niche or ap­par­ent up­com­ing suc­ces­sors de­spite great tech­no­log­i­cal progress & global eco­nomic de­vel­op­ment and the no­to­ri­ous growth in wealth of the “1%”.↩︎

  3. Arkes & Blumer 1985, “The psy­chol­ogy of sunk cost”:

    The sunk cost effect is man­i­fested in a greater ten­dency to con­tinue an en­deavor once an in­vest­ment in mon­ey, effort, or time has been made. Ev­i­dence that the psy­cho­log­i­cal jus­ti­fi­ca­tion for this be­hav­ior is pred­i­cated on the de­sire not to ap­pear waste­ful is pre­sent­ed. In a field study, cus­tomers who had ini­tially paid more for a sea­son sub­scrip­tion to a the­ater se­ries at­tended more plays dur­ing the next 6 months, pre­sum­ably be­cause of their higher sunk cost in the sea­son tick­ets. Sev­eral ques­tion­naire stud­ies cor­rob­o­rated and ex­tended this find­ing. It is found that those who had in­curred a sunk cost in­flated their es­ti­mate of how likely a project was to suc­ceed com­pared to the es­ti­mates of the same project by those who had not in­curred a sunk cost. The ba­sic sunk cost find­ing that peo­ple will throw good money after bad ap­pears to be well de­scribed by prospect the­ory (D. Kah­ne­man & A. Tver­sky, 1979, Econo­met­rica, 47, 263–291). Only mod­er­ate sup­port for the con­tention that per­sonal in­volve­ment in­creases the sunk cost effect is pre­sent­ed. The sunk cost effect was not less­ened by hav­ing taken prior courses in eco­nom­ics. Fi­nal­ly, the sunk cost effect can­not be fully sub­sumed un­der any of sev­eral so­cial psy­cho­log­i­cal the­o­ries.

    As an ex­am­ple of the sunk cost effect, con­sider the fol­low­ing ex­am­ple [from Thaler 1980]. A man wins a con­test spon­sored by a lo­cal ra­dio sta­tion. He is given a free ticket to a foot­ball game. Since he does not want to go alone, he per­suades a friend to buy a ticket and go with him. As they pre­pare to go to the game, a ter­ri­ble bliz­zard be­gins. The con­test win­ner peers out his win­dow over the arc­tic scene and an­nounces that he is not go­ing, be­cause the pain of en­dur­ing the snow­storm would be greater than the en­joy­ment he would de­rive from watch­ing the game. How­ev­er, his friend protests, ‘I don’t want to waste the twelve dol­lars I paid for the tick­et! I want to go!’ The friend who pur­chased the ticket is not be­hav­ing ra­tio­nally ac­cord­ing to tra­di­tional eco­nomic the­o­ry. Only in­cre­men­tal costs should in­flu­ence de­ci­sions, not sunk costs. If the agony of sit­ting in a blind­ing snow­storm for 3 h is greater than the en­joy­ment one would de­rive from try­ing to see the game, then one should not go. The $12 has been paid whether one goes or not. It is a sunk cost. It should in no way in­flu­ence the de­ci­sion to go. But who among us is so ra­tio­nal?

    Our final sample thus had eighteen no-discount, nineteen $2 discount, and seventeen $7 discount subjects. Since the ticket stubs were color coded, we were able to collect the stubs after each performance and determine how many persons in each group had attended each play…We performed a 3 (discount: none, $2, $7) x 2 (half of season) analysis of variance on the number of tickets used by each subject. The latter variable was a within-subjects factor. It was also the only significant source of variance, F(1,51) = 32.32, MSe = 1.81 (p < 0.001). More tickets were used by each subject on the first five plays (3.57) than on the last five plays (2.09). We performed a priori tests on the number of tickets used by each of the three groups during the first half of the theater season. The no-discount group used significantly more tickets (4.11) than both the $2 discount group (3.32) and the $7 discount group (3.29), t = 1.79, 1.83, respectively, p’s < .05, one tailed. The groups did not use significantly different numbers of tickets during the last half of the theater season (2.28, 1.84, 2.18, for the no-discount, $2 discount, and $7 discount groups, respectively). Conclusion. Those who had purchased theater tickets at the normal price used more theater tickets during the first half of the season than those who purchased tickets at either of the two discounts. According to rational economic theory, after all subjects had their ticket booklet in hand, they should have been equally likely to attend the plays.

    …A sec­ond fea­ture of prospect the­ory per­ti­nent to sunk costs is the cer­tainty effect. This effect is man­i­fested in two ways. First, ab­solutely cer­tain gains (P = 1) are greatly over­val­ued. By this we mean that the value of cer­tain gains is higher than what would be ex­pected given an analy­sis of a per­son’s val­ues of gains hav­ing a prob­a­bil­ity less than 1.0. Sec­ond, cer­tain losses (P = 1.0) are greatly un­der­val­ued (i.e., fur­ther from ze­ro). The value is more neg­a­tive than what would be ex­pected given an analy­sis of a per­son’s val­ues of losses hav­ing a prob­a­bil­ity less than 1.0. In other words, cer­tainty mag­ni­fies both pos­i­tive and neg­a­tive val­ues. Note that in ques­tion 3A the de­ci­sion not to com­plete the plane re­sults in a cer­tain loss of the amount al­ready in­vest­ed. Since prospect the­ory states that cer­tain losses are par­tic­u­larly aver­sive, we might pre­dict that sub­jects would find the other op­tion com­par­a­tively at­trac­tive. This is in fact what oc­curred. When­ever a sunk cost dilemma in­volves the choice of a cer­tain loss (stop the wa­ter­way pro­ject) ver­sus a long shot (maybe it will be­come profitable by the year 2500), the cer­tainty effect fa­vors the lat­ter op­tion.

    …Fifty-nine stu­dents had taken at least one course; six­ty-one had taken no such course. All of these stu­dents were ad­min­is­tered the Ex­per­i­ment 1 ques­tion­naire by a grad­u­ate stu­dent in psy­chol­o­gy. A third group com­prised 61 stu­dents cur­rently en­rolled in an eco­nom­ics course, who were ad­min­is­tered the Ex­per­i­ment 1 ques­tion­naire by their eco­nom­ics pro­fes­sor dur­ing an eco­nom­ics class. Ap­prox­i­mately three fourths of the stu­dents in this group had also taken one prior eco­nom­ics course. All of the eco­nom­ics stu­dents had been ex­posed to the con­cept of sunk cost ear­lier that se­mes­ter both in their text­book (G­wart­ney & Stroup, 1982, p. 125 [Mi­cro­eco­nom­ics: Pri­vate and pub­lic choice]) and in their class lec­tures. Re­sults. Ta­ble 1 con­tains the re­sults. The x2 analy­sis does not ap­proach sig­nifi­cance. Even when an eco­nom­ics teacher in an eco­nom­ics class hands out a sunk cost ques­tion­naire to eco­nom­ics stu­dents, there is no more con­for­mity to ra­tio­nal eco­nomic the­ory than in the other two groups. We con­clude that gen­eral in­struc­tion in eco­nom­ics does not lessen the sunk cost effect. In a re­cent analy­sis of en­trap­ment ex­per­i­ments, Northcraft and Wolf (1984) con­cluded that con­tin­ued in­vest­ment in many of them does not nec­es­sar­ily rep­re­sent an eco­nom­i­cally ir­ra­tional be­hav­ior. For ex­am­ple, con­tin­ued wait­ing for the bus will in­crease the prob­a­bil­ity that one’s wait­ing be­hav­ior will be re­ward­ed. There­fore there is an em­i­nently ra­tio­nal ba­sis for con­tin­ued pa­tience. Hence this sit­u­a­tion is not a pure demon­stra­tion of the sunk cost effect. How­ev­er, we be­lieve that some sunk cost sit­u­a­tions do cor­re­spond to en­trap­ment sit­u­a­tions. The sub­jects who ‘owned’ the air­line com­pany would have en­dured con­tin­u­ing ex­pen­di­tures on the plane as they sought the even­tual goal of fi­nan­cial res­cue. This cor­re­sponds to the Brock­ner et al. en­trap­ment sit­u­a­tion. How­ev­er, en­trap­ment is ir­rel­e­vant to the analy­sis of all our other stud­ies. For ex­am­ple, peo­ple who paid more money last Sep­tem­ber for the sea­son the­ater tick­ets are in no way trapped. They do not in­cur small con­tin­u­ous losses as they seek an even­tual goal. There­fore we sug­gest that en­trap­ment is rel­e­vant only to the sub­set of sunk cost sit­u­a­tions in which con­tin­u­ing losses are en­dured in the hope of later res­cue by a fur­ther in­vest­ment.

    According to Thomas 1981 [Microeconomic applications: Understanding the American economy], one person who recognized it as an error was none other than Thomas Edison. In the 1880s Edison was not making much money on his great invention, the electric lamp. The problem was that his manufacturing plant was not operating at full capacity because he could not sell enough of his lamps. He then got the idea to boost his plant’s production to full capacity and sell each extra lamp below its total cost of production. His associates thought this was an exceedingly poor idea, but Edison did it anyway. By increasing his plant’s output, Edison would add only 2% to the cost of production while increasing production 25%. Edison was able to do this because so much of the manufacturing cost was sunk cost. It would be present whether or not he manufactured more bulbs. [the Europe price > marginal cost] Edison then sold the large number of extra lamps in Europe for much more than the small added manufacturing costs. Since production increases involved negligible new costs but substantial new income, Edison was wise to increase production. While Edison was able to place sunk costs in proper perspective in arriving at his decision, our research suggests that most of the rest of us find that very difficult to do.

    Fried­man et al 2006 crit­i­cism of Arkes:

    This is con­sis­tent with the sunk cost fal­la­cy, but the ev­i­dence is not as strong as one might hope. The re­ported sig­nifi­cance lev­els ap­par­ently as­sume that (a­part from the ex­cluded cou­ples) all at­ten­dance choices are in­de­pen­dent. The au­thors do not ex­plain why they di­vided the sea­son in half, nor do they re­port the sig­nifi­cance lev­els for the en­tire sea­son (or first quar­ter, etc.). The data show no sig­nifi­cant differ­ence be­tween the small and large dis­count groups in the first half sea­son nor among any of the groups in the sec­ond half sea­son. We are not aware of any repli­ca­tion of this field ex­per­i­ment.

    ↩︎
  4. Davis’s com­plaint is a lit­tle odd, inas­much as eco­nom­ics text­books do ap­par­ently dis­cuss sunk costs; Steele 1996 gives ex­am­ples back to 1910, or from “Do Sunk Costs Mat­ter?”, McAfee et al 2007:

    In­tro­duc­tory text­books in eco­nom­ics present this as a ba­sic prin­ci­ple and a deep truth of ra­tio­nal de­ci­sion-mak­ing (Frank and Bernanke, 2006, p. 10, and Mankiw, 2004, p. 297).

    ↩︎
  5. Pop­u­lar­ized dis­cus­sions of Farmer & Geanako­p­los 2009:

    ↩︎
  6. Some quotes from the pa­per:

    Con­ven­tional eco­nom­ics sup­poses that agents value the present vs. the fu­ture us­ing an ex­po­nen­tial dis­count­ing func­tion. In con­trast, ex­per­i­ments with an­i­mals and hu­mans sug­gest that agents are bet­ter de­scribed as hy­per­bolic dis­coun­ters, whose dis­count func­tion de­cays much more slowly at large times, as a power law. This is gen­er­ally re­garded as be­ing time in­con­sis­tent or ir­ra­tional. We show that when agents can­not be sure of their own fu­ture one-pe­riod dis­count rates, then hy­per­bolic dis­count­ing can be­come ra­tio­nal and ex­po­nen­tial dis­count­ing ir­ra­tional. This has im­por­tant im­pli­ca­tions for en­vi­ron­men­tal eco­nom­ics, as it im­plies a much larger weight for the far fu­ture.

    …Why should we discount the future? Böhm-Bawerk (1889, 1923) and Fisher (1930) argued that men were naturally impatient, perhaps owing to a failure of the imagination in conjuring the future as vividly as the present. Another justification for declining Ds(τ) in τ, given by Rae (1834, 1905), is that people are mortal, so survival probabilities must enter the calculation of the benefits of future potential consumption. There are many possible reasons for discounting, as reviewed by Dasgupta (2004, 2008). Most economic analysis assumes exponential discounting Ds(τ) = D(τ) = exp(−rτ), as originally posited by Samuelson (1937) and put on an axiomatic foundation by Koopmans (1960). A natural justification for exponential discounting comes from financial economics and the opportunity cost of foregoing an investment. A dollar at time s can be placed in the bank to collect interest at rate r, and if the interest rate is constant, it will generate exp(r(t − s)) dollars at time t. A dollar at time t is therefore equivalent to exp(−r(t − s)) dollars at time s. Letting τ = t − s, this motivates the exponential discount function Ds(τ) = D(τ) = exp(−rτ), independent of s.

    …For roughly the first eighty years the cer­tainty equiv­a­lent dis­count func­tion for the geo­met­ric ran­dom walk stays fairly close to the ex­po­nen­tial, but after­ward the two di­verge sub­stan­tial­ly, with the geo­met­ric ran­dom walk giv­ing a much larger weight to the fu­ture. A com­par­i­son us­ing more re­al­is­tic pa­ra­me­ters is given in Ta­ble 1. For large times the differ­ence is dra­mat­ic.

    Farmer & Geanakoplos 2009 Table 1, comparing geometric random walk (GRW) vs exponential discounting over increasing time periods, showing that GRW eventually decays much slower:

    year    GRW     exponential
    20      0.462   0.456
    60      0.125   0.095
    100     0.051   0.020
    500     0.008   2 × 10^−9
    1000    0.005   4 × 10^−18

    …What this analy­sis makes clear, how­ev­er, is that the long term be­hav­ior of val­u­a­tions de­pends ex­tremely sen­si­tively on the in­ter­est rate mod­el. The fact that the present value of ac­tions that affect the far fu­ture can shift from a few per­cent­age points to in­fin­ity when we move from a con­stant in­ter­est rate to a geo­met­ric ran­dom walk calls se­ri­ously into ques­tion many well re­garded analy­ses of the eco­nomic con­se­quences of global warm­ing. … no fixed dis­count rate is re­ally ad­e­quate—as our analy­sis makes abun­dantly clear, the proper dis­count­ing func­tion is not an ex­po­nen­tial.

    ↩︎
  7. For ex­am­ple, Staw 1981, “The Es­ca­la­tion of Com­mit­ment to a Course of Ac­tion”:

    A sec­ond way to ex­plain de­ci­sional er­rors is to at­tribute a break­down in ra­tio­nal­ity to in­ter­per­sonal el­e­ments such as so­cial power or group dy­nam­ics. Pf­effer [1977] has, for ex­am­ple, out­lined how and when power con­sid­er­a­tions are likely to out­weigh more ra­tio­nal as­pects of or­ga­ni­za­tional de­ci­sion mak­ing, and Ja­nis [1972] has noted many prob­lems in the de­ci­sion mak­ing of pol­icy groups. Co­he­sive groups may, ac­cord­ing to Janis, sup­press dis­sent, cen­sor in­for­ma­tion, cre­ate il­lu­sions of in­vul­ner­a­bil­i­ty, and stereo­type en­e­mies. Any of these by-prod­ucts of so­cial in­ter­ac­tion may, of course, hin­der ra­tio­nal de­ci­sion mak­ing and lead in­di­vid­u­als or groups to de­ci­sional er­rors.

    ↩︎
  8. Wang & Keil 2005; from the ab­stract:

    Us­ing meta-analy­sis, we an­a­lyzed the re­sults of 20 sunk cost ex­per­i­ments and found: (1) a large effect size as­so­ci­ated with sunk costs, (2) vari­abil­ity of effect sizes across ex­per­i­ments that was larger than pure sub­jec­t-level sam­pling er­ror, and (3) stronger effects in ex­per­i­ments in­volv­ing IT projects as op­posed to non-IT pro­jects.

    Back­ground on why one might ex­pect effects with IT in par­tic­u­lar:

    Al­though project es­ca­la­tion is a gen­eral phe­nom­e­non, IT project es­ca­la­tion has re­ceived con­sid­er­able at­ten­tion since Keil and his col­leagues be­gan study­ing the phe­nom­e­non (Keil, Mixon et al. 1995). Sur­vey data sug­gest that 30–40% of all IT projects in­volve some de­gree of project es­ca­la­tion (Keil, Mann, and Rai 2000). To study the role of sunk cost in soft­ware project es­ca­la­tion, Keil et al. (1995) con­ducted a se­ries of lab ex­per­i­ments, in which sunk costs were ma­nip­u­lated at var­i­ous lev­els, and sub­jects de­cided whether or not to con­tinue an IT project fac­ing neg­a­tive prospects. This IT ver­sion of the sunk cost ex­per­i­ment was later repli­cated across cul­tures (Keil, Tan et al. 2000), with group de­ci­sion mak­ers (Boon­thanom 2003), and un­der differ­ent de-esca­la­tion sit­u­a­tions (Heng, Tan et al. 2003). These ex­per­i­ments demon­strated the sunk cost effect to be sig­nifi­cant in IT project es­ca­la­tion.

    The “real op­tion” de­fense of sunk cost be­hav­ior has been sug­gested for soft­ware projects (Ti­wana & Fich­man 2006)↩︎

  9. “Diffu­sion of Re­spon­si­bil­i­ty: Effects on the Es­ca­la­tion Ten­dency”, Whyte 1991 (see also Whyte 1993):

    In a lab­o­ra­tory study, the pos­si­bil­ity was in­ves­ti­gated that group de­ci­sion mak­ing in the ini­tial stages of an in­vest­ment project might re­duce the es­ca­la­tion ten­dency by diffus­ing re­spon­si­bil­ity for ini­ti­at­ing a fail­ing pro­ject. Sup­port for this no­tion was found. Es­ca­la­tion effects oc­curred less fre­quently and were less se­vere among in­di­vid­u­als de­scribed as par­tic­i­pants in a group de­ci­sion to ini­ti­ate a fail­ing course of ac­tion than among in­di­vid­u­als de­scribed as per­son­ally re­spon­si­ble for the ini­tial de­ci­sion. Self­-jus­ti­fi­ca­tion the­ory was found to be less rel­e­vant after group than after in­di­vid­ual de­ci­sions. Be­cause most de­ci­sions about im­por­tant new poli­cies in or­ga­ni­za­tions are made by groups, these re­sults in­di­cate a gap in the­o­riz­ing about the de­ter­mi­nants of es­ca­lat­ing com­mit­ment for an im­por­tant cat­e­gory of es­ca­la­tion sit­u­a­tions.

    …The im­pact of per­sonal re­spon­si­bil­ity on per­sis­tence in er­ror has been repli­cated sev­eral times (e.g., Baz­er­man, Beekun, & Schoor­man, 1982; Cald­well & O’Reil­ly, 1982; Staw, 1976; Staw & Fox, 1977).

    ↩︎
  10. Both of them—but the sunk cost was on the Iraqi side, specifically Saddam Hussein; Bazerman & Neale 1992:

    Sim­i­lar­ly, it could be ar­gued that in the Iraqi/Kuwait con­flict, Iraq (Hus­sein) had the in­for­ma­tion nec­es­sary to ra­tio­nally pur­sue a ne­go­ti­ated set­tle­ment. In fact, early on in the cri­sis, he was offered a pack­age for set­tle­ment that was far bet­ter than any­thing that he could have ex­pected through a con­tin­ued con­flict. The es­ca­la­tion lit­er­a­ture ac­cu­rately pre­dicts that the ini­tial “in­vest­ment” in­curred in in­vad­ing Kuwait would lead Iraq to a fur­ther es­ca­la­tion of its com­mit­ment not to com­pro­mise on the re­turn of Kuwait.

    ↩︎
  11. Kelly 2004:

    The physi­cist Eu­gene Dem­ler in­forms me that ex­actly par­al­lel ar­gu­ments were quite com­monly made in the So­viet Union in the late 1980s in an at­tempt to jus­tify con­tin­ued So­viet in­volve­ment in Afghanistan.

    ↩︎
  12. Dawkins & Carlisle 1976 sar­cas­ti­cally re­mark:

    …The idea has been in­flu­en­tial4, and it ap­peals to eco­nomic in­tu­ition. A gov­ern­ment which has in­vested heav­ily in, for ex­am­ple, a su­per­sonic air­lin­er, is un­der­stand­ably re­luc­tant to aban­don it, even when sober judge­ment of fu­ture prospects sug­gests that it should do so. Sim­i­lar­ly, a pop­u­lar ar­gu­ment against Amer­i­can with­drawal from the Viet­nam war was a ret­ro­spec­tive one: ‘We can­not al­low those boys to have died in vain’. In­tu­ition says that pre­vi­ous in­vest­ment com­mits one to fu­ture in­vest­ment.

    ↩︎
  13. The for­mer can be found in Baz­er­man, Giu­liano, & Ap­pel­man, 1984, Davis & Bobko, 1986, & Staw, 1976 among other stud­ies cited here. The lat­ter is often called ‘self­-jus­ti­fi­ca­tion’ or the ‘jus­ti­fi­ca­tion effect’ (eg. Brock­ner 1992).

    Self­-jus­ti­fi­ca­tion is, of course, in many con­texts a valu­able trait to have; is the fol­low­ing an er­ror, or busi­ness stu­dents demon­strat­ing their pre­co­cious un­der­stand­ing of an in­valu­able bu­reau­cratic in­-fight­ing skill? Baz­er­man et al 1982, “Per­for­mance eval­u­a­tion in a dy­namic con­text: A lab­o­ra­tory study of the im­pact of prior com­mit­ment to the ra­tee” (see also Cald­well & O’Reil­ly, 1982; Staw, 1976; Staw & Fox, 1977):

    A dy­namic view of per­for­mance eval­u­a­tion is pro­posed that ar­gues that raters who are pro­vided with neg­a­tive per­for­mance data on a pre­vi­ously pro­moted em­ployee will sub­se­quently eval­u­ate the em­ployee more pos­i­tively if they, rather than their pre­de­ces­sors, made the ear­lier pro­mo­tion de­ci­sion. A to­tal of 298 busi­ness ma­jors par­tic­i­pated in the study. The ex­per­i­men­tal group made a pro­mo­tion de­ci­sion by choos­ing among three can­di­dates, whereas the con­trol group was told that the de­ci­sion had been made by some­one else. Both groups eval­u­ated the pro­moted em­ploy­ee’s per­for­mance after re­view­ing 2 years of da­ta. The hy­poth­e­sized es­ca­la­tion of com­mit­ment effect was ob­served in that the ex­per­i­men­tal group con­sis­tently eval­u­ated the em­ployee more fa­vor­ably, pro­vided larger re­wards, and made more op­ti­mistic pro­jec­tions of fu­ture per­for­mance than did the con­trol group.

    ↩︎
  14. And it is diffi­cult to judge from a dis­tance when sunk cost has oc­curred: what ex­actly else are the In­di­ans go­ing to in­vest in? Re­mem­ber our ex­po­nen­tial dis­count­ing ex­am­ple. As long as var­i­ous set­tle­ments are not run­ning at an out­right loss or are be­ing sub­si­dized, how steep an op­por­tu­nity cost do they re­ally face? From the pa­per:

    By the end of the occupation in the late-A.D. 800s there is evidence of depletion of wood resources, piñon seeds, and animals (reviewed by Kohler 1992). Following the collapse of these villages, the Dolores area was never reoccupied in force by Puebloan farmers. A second similar case comes from nearby Sand Canyon Locality west of Cortez, Colorado, intensively studied by the Crow Canyon Archaeological Center over the last 15 years (Lipe 1992). Here the main occupation is several hundred years later than in Dolores, but the patterns of construction in hamlets versus villages are similar (fig. 4, bottom). The demise of the two villages contributing dated construction events to fig. 4 (bottom) coincides with the famous depopulation of the Four Corners region of the U.S. Southwest. There is strong evidence for declining availability of protein in general and large game animals in particular, and increased competition for the best agricultural land, during the terminal occupation (reviewed by Kohler 2000). We draw a final example from an intermediate period. The most famous Anasazi structures, the “great houses” of Chaco Canyon, may follow a similar pattern. Windes and Ford (1996) show that early construction episodes (in the early A.D. 900s) in the canyon great houses typically coincide with periods of high potential agricultural productivity, but later construction continues in both good periods and bad, particularly in the poor period from ca. A.D. 1030–1050.

    Cer­tainly there is strong ev­i­dence of di­min­ish­ing mar­ginal re­turn­s—ev­i­dence for the Tain­ter the­sis—but di­min­ish­ing mar­ginal re­turns is not sunk cost fal­lacy. Given the gen­eral en­vi­ron­ment, and given that there was a ‘col­lapse’, ar­guably there was no op­por­tu­nity cost to re­main­ing there. How would the In­di­ans have be­come bet­ter off if they aban­doned their vil­lages, given that there is lit­tle ev­i­dence that other places were bet­ter off in that pe­riod of great droughts and the ob­ser­va­tion that they would need to make sub­stan­tial cap­i­tal in­vest­ments wher­ever they went?↩︎

  15. Janssen & Scheffer 2004:

    In fact, es­ca­la­tion of com­mit­ment is found in group de­ci­sion mak­ing (Baz­er­man et al. 1984). Mem­bers of a group strive for una­nim­i­ty. A typ­i­cal goal for po­lit­i­cal de­ci­sions within smal­l­-s­cale so­ci­eties is to reach con­sen­sus (Boehm 1996). Once una­nim­ity is reached, the eas­i­est way to pro­tect it is to stay com­mit­ted to the group’s de­ci­sion (Baz­er­man et al. 1984, Ja­nis 1972 [Vic­tims of group­think]). Thus, when the group is faced with a neg­a­tive feed­back, mem­bers will not sug­gest aban­don­ing the ear­lier course of ac­tion, be­cause this might dis­rupt the ex­ist­ing una­nim­i­ty.

    ↩︎
  16. McAfee et al 2007:

    But there are also ex­am­ples of peo­ple who suc­ceeded by not ig­nor­ing sunk costs. The same “we-owe-it-to-our-fal­l­en-coun­try­men” logic that led Amer­i­cans to stay the course in Viet­nam also helped the war effort in World War II. More gen­er­al­ly, many suc­cess sto­ries in­volve peo­ple who at some time suffered great set­backs, but per­se­vered when short­-term odds were not in their fa­vor be­cause they “had al­ready come too far to give up now.” Colum­bus did not give up when the shores of In­dia did not ap­pear after weeks at sea, and many on his crew were urg­ing him to turn home (see Ol­son, 1967 [The North­men, Colum­bus and Cabot, 985–1503], for Colum­bus’ jour­nal). Jeff Be­zos, founder of Ama­zon.­com, did not give up when Ama­zon’s loss to­taled $1.4 bil­lion in 2001, and many on Wall Street were spec­u­lat­ing that the com­pany would go broke (see Mendel­son and Meza, 2001).

    ↩︎
  17. “Bank­ing on Com­mit­ment: In­tended and Un­in­tended Con­se­quences of an Or­ga­ni­za­tion’s At­tempt to At­ten­u­ate Es­ca­la­tion of Com­mit­ment”, Mc­Na­mara et al 2002:

    The notion that decision makers tend to incorrectly consider previous expenditures when deliberating current utility-based decisions (Arkes & Blumer, 1985) has been used to explain fiascoes ranging from the prolonged involvement of the United States in the Vietnam War to the disastrous cost overrun during the construction of the Shoreham Nuclear Power Plant (Ross & Staw, 1993). In the Shoreham Nuclear Power Plant example, escalation of commitment meant billions of wasted dollars (Ross & Staw, 1993). In the Vietnam War, it may have cost thousands of lives…Kirby and Davis’s (1998) experimental study showed that increased monitoring could dampen the escalation of commitment. Staw, Barsade, and Koput’s (1997) field data on the banking industry led them to conclude that top manager turnover led to de-escalation of commitment at an aggregate level.

    …So far, the re­sults sup­port the effi­cacy of changes in mon­i­tor­ing and de­ci­sion re­spon­si­bil­ity as cures for the es­ca­la­tion of com­mit­ment bias. We now turn to the side effects of these treat­ments. Hy­pothe­ses 4 and 5 pro­pose that the threat of in­creased mon­i­tor­ing and change in man­age­ment re­spon­si­bil­ity in­crease the like­li­hood of a differ­ent form of un­de­sir­able de­ci­sion com­mit­men­t—the per­sis­tent un­der­assess­ment of bor­rower risk. The re­sults in col­umn 3 of Ta­ble 2 sup­port these hy­pothe­ses. Both the threat of in­creased mon­i­tor­ing and the threat of change in de­ci­sion re­spon­si­bil­ity in­crease the like­li­hood of per­sis­tent un­der­assess­ment of bor­rower risk (0.47, p < 0.01, and 0.50, p < 0.05, re­spec­tive­ly). These find­ings sup­port the view that de­ci­sion mak­ers are likely to fail to ap­pro­pri­ately down­grade a bor­rower when, by do­ing so, they avoid an or­ga­ni­za­tional in­ter­ven­tion. We ex­am­ined the change in in­vest­ment com­mit­ment for bor­row­ers whose risk was per­sis­tently un­der­assessed and who faced ei­ther in­creased mon­i­tor­ing or change in de­ci­sion re­spon­si­bil­ity if the de­ci­sion mak­ers had ad­mit­ted that the risk needed down­grad­ing. We found that de­ci­sion mak­ers did ap­pear to ex­hibit es­ca­la­tion of com­mit­ment to these bor­row­ers. The change in com­mit­ment (on av­er­age, over 30%) is sig­nifi­cantly greater than 0 (t = 2.94, p < 0.01) and greater than the change in com­mit­ment to those bor­row­ers who were cor­rectly as­sessed as re­main­ing at the same risk level (t = 2.58, p = 0.01). Com­bined, these find­ings sug­gest that al­though the or­ga­ni­za­tional efforts to min­i­mize un­de­sir­able de­ci­sion com­mit­ment ap­peared suc­cess­ful at first glance, the threat of these in­ter­ven­tions in­creased the like­li­hood that de­ci­sion mak­ers would per­sis­tently give over­fa­vor­able as­sess­ments of the risk of bor­row­ers. In turn, the lend­ing offi­cers would then es­ca­late their mon­e­tary com­mit­ment to these riskier bor­row­ers.

    On nu­clear power plants as sunk cost fal­la­cy, McAfee et al 2007:

    According to evidence reported by De Bondt and Makhija (1988), managers of many utility companies in the U.S. have been overly reluctant to terminate economically unviable nuclear plant projects. In the 1960s, the nuclear power industry promised “energy too cheap to meter.” But nuclear power later proved unsafe and uneconomical. As the U.S. nuclear power program was failing in the 1970s and 1980s, Public Service Commissions around the nation ordered prudency reviews. From these reviews, De Bondt and Makhija find evidence that the Commissions denied many utility companies even partial recovery of nuclear construction costs on the grounds that they had been mismanaging the nuclear construction projects in ways consistent with “throwing good money after bad.”…In most projects there is uncertainty, and restarting after stopping entails costs, making the option to continue valuable. This is certainly the case for nuclear power plants, for example. Shutting down a nuclear reactor requires dismantling or entombment, and the costs of restarting are extremely high. Moreover, the variance of energy prices has been quite large. The option of maintaining nuclear plants is therefore potentially valuable. Low returns from nuclear power in the 1970s and 1980s might have been a consequence of the large variance, suggesting a high option value of maintaining nuclear plants. This may in part explain the evidence (reported by De Bondt and Makhija, 1988) that managers of utilities at the time were so reluctant to shut down seemingly unprofitable plants.
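
    The real-options point can be made concrete with a toy calculation. The sketch below is my own illustration rather than De Bondt & Makhija’s or McAfee et al’s model: it assumes a hypothetical plant whose operating cost exceeds the expected electricity price, a per-period maintenance fee for keeping the plant available, and a price whose variance we can dial up, and shows that the option to run only when prices spike can justify not shutting down.

    ```python
    # Toy illustration (not from the cited papers): why high price variance can make
    # keeping a money-losing plant open rational. All numbers are hypothetical.
    import random

    random.seed(0)

    OPERATING_COST = 50.0   # cost to produce one unit of power next period
    MAINTENANCE    = 3.0    # cost of keeping the plant available next period
    MEAN_PRICE     = 45.0   # expected electricity price (below operating cost!)

    def keep_open_value(price_sd, n=100_000):
        """Expected payoff of paying MAINTENANCE now and running the plant next
        period only if the realized price exceeds OPERATING_COST."""
        total = 0.0
        for _ in range(n):
            price = random.gauss(MEAN_PRICE, price_sd)
            total += max(price - OPERATING_COST, 0.0)   # option: run only if profitable
        return total / n - MAINTENANCE

    for sd in (1, 5, 15, 30):
        v = keep_open_value(sd)
        print(f"price sd = {sd:>2}: keep-open value = {v:6.2f} "
              f"({'keep' if v > 0 else 'shut down'})")
    ```

    The plant is run only in periods where the realized price covers the operating cost, so higher variance raises the expected payoff of keeping the option alive while the downside stays capped at the maintenance fee.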

    ↩︎
  18. , 2011, pg 336:

    In the case of a war of at­tri­tion, one can imag­ine a leader who has a chang­ing will­ing­ness to suffer a cost over time, in­creas­ing as the con­flict pro­ceeds and his re­solve tough­ens. His motto would be: ‘We fight on so that our boys shall not have died in vain.’ This mind­set, known as loss aver­sion, the sunk-cost fal­la­cy, and throw­ing good money after bad, is patently ir­ra­tional, but it is sur­pris­ingly per­va­sive in hu­man de­ci­sion-mak­ing.65 Peo­ple stay in an abu­sive mar­riage be­cause of the years they have al­ready put into it, or sit through a bad movie be­cause they have al­ready paid for the tick­et, or try to re­verse a gam­bling loss by dou­bling their next bet, or pour money into a boon­dog­gle be­cause they’ve al­ready poured so much money into it. Though psy­chol­o­gists don’t fully un­der­stand why peo­ple are suck­ers for sunk costs, a com­mon ex­pla­na­tion is that it sig­nals a pub­lic com­mit­ment. The per­son is an­nounc­ing: ‘When I make a de­ci­sion, I’m not so weak, stu­pid, or in­de­ci­sive that I can be eas­ily talked out of it.’ In a con­test of re­solve like an at­tri­tion game, loss aver­sion could serve as a costly and hence cred­i­ble sig­nal that the con­tes­tant is not about to con­cede, pre­empt­ing his op­po­nen­t’s strat­egy of out­last­ing him just one more round.
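
    Pinker’s attrition-game logic can be illustrated with a stylized expected-payoff comparison (my own toy numbers, not his model): if a contestant’s refusal to concede is common knowledge, a rational opponent gains nothing from fighting on, so conceding immediately is the opponent’s best response and the ‘irrational’ contestant pockets the prize, whereas two contestants without any such commitment simply burn resources against each other.

    ```python
    # Stylized war of attrition (illustrative numbers): prize V, cost c per round
    # paid by each side that keeps fighting.
    V, c = 100.0, 10.0

    # (a) Against a contestant known never to concede, fighting k rounds costs k*c
    #     and never wins the prize, so a rational opponent's best k is 0:
    payoffs = {k: -k * c for k in range(0, 11)}
    best_k = max(payoffs, key=payoffs.get)
    print(f"best response to a known never-quitter: concede after {best_k} rounds")
    print(f"the committed contestant therefore wins the full prize of {V:.0f}")

    # (b) Two contestants with no credible commitment who each try to outlast the
    #     other for r rounds jointly dissipate 2*r*c of the prize:
    for r in (1, 3, 5):
        print(f"both fight {r} rounds: {2 * r * c:.0f} of the {V:.0f} prize burned")
    ```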

    It’s worth not­ing that there is at least one ex­am­ple of sunk cost (“en­try li­censes” [fees]) en­cour­ag­ing co­op­er­a­tion (“col­lu­sive price path”) in mar­ket agents: Offer­man & Pot­ter 2001, “Does Auc­tion­ing of En­try Li­censes In­duce Col­lu­sion? An Ex­per­i­men­tal Study”, who point out an­other case of how our sunk cost map may not cor­re­spond to the ter­ri­to­ry:

    There is one caveat to the sunk cost ar­gu­ment, how­ev­er. If the game for which the po­si­tions are al­lo­cated has mul­ti­ple equi­lib­ria, an en­try fee may affect the equi­lib­rium that is be­ing se­lect­ed. Sev­eral ex­per­i­men­tal stud­ies have demon­strated the force of this prin­ci­ple. For ex­am­ple, Coop­er, De­Jong, Forsythe and Ross (1993), Van Huy­ck, Bat­talio and Beil (1993), and Ca­chon and Camer­er, (1996) study co­or­di­na­tion games with mul­ti­ple equi­lib­ria and find that an en­try fee may in­duce play­ers to co­or­di­nate on a differ­ent (Pareto su­pe­ri­or) equi­lib­ri­um.

    ↩︎
  19. McAfee et al 2007:

    Rep­u­ta­tional Con­cerns. In team re­la­tion­ships, each par­tic­i­pan­t’s will­ing­ness to in­vest de­pends on the in­vest­ments of oth­ers. In such cir­cum­stances, a com­mit­ment to fin­ish­ing projects even when they ap­pear ex post un­profitable is valu­able, be­cause such a com­mit­ment in­duces more effi­cient ex ante in­vest­ment. Thus, a rep­u­ta­tion for “throw­ing good money after bad”—the clas­sic sunk cost fal­la­cy—­can solve a co­or­di­na­tion prob­lem. In con­trast to the de­sire for com­mit­ment, peo­ple might ra­tio­nally want to con­ceal bad choices to ap­pear more tal­ent­ed, which may lead them to make fur­ther in­vest­ments, hop­ing to con­ceal their in­vest­ments gone bad.

    Kan­odia, Bush­man, and Dick­haut (1989), Pren­der­gast and Stole (1996), and Camerer and We­ber (1999) de­velop prin­ci­pal-a­gent mod­els in which ra­tio­nal agents in­vest more if they have in­vested more in the past to pro­tect their rep­u­ta­tion for abil­i­ty. We elu­ci­date the gen­eral fea­tures of these mod­els be­low and ar­gue that con­cerns about rep­u­ta­tion for abil­ity are es­pe­cially pow­er­ful in ex­plain­ing ap­par­ent re­ac­tions to sunk costs by politi­cians. [see also Car­pen­ter & Matthews 2003] de­velop a model in which agents ini­tially make in­vest­ments in­de­pen­dently and are later matched in pairs, their match pro­duces a sur­plus, and they bar­gain over it based on cul­tural norms of fair di­vi­sion. A fair di­vi­sion rule in which each agen­t’s sur­plus share is in­creas­ing in their sunk in­vest­ment, and de­creas­ing in the oth­er’s sunk in­vest­ment, is shown to be evo­lu­tion­ar­ily sta­ble.

    …If a mem­ber of an il­le­gal price-fix­ing car­tel seems likely to con­fess to the gov­ern­ment in ex­change for im­mu­nity from pros­e­cu­tion, the other car­tel mem­bers may race to be first to con­fess, since only the first gets im­mu­nity (in Eu­rope, such im­mu­nity is called “le­niency”). Sim­i­lar­ly, a spouse who loses faith in the long-term prospects of a mar­riage in­vests less in the re­la­tion­ship, thereby re­duc­ing the gains from part­ner­ship, po­ten­tially doom­ing the re­la­tion­ship. In both cas­es, be­liefs about the fu­ture vi­a­bil­ity mat­ter to the suc­cess of the re­la­tion­ship, and there is the po­ten­tial for self­-ful­fill­ing op­ti­mistic and pes­simistic be­liefs.

    In such a situation, individuals may rationally select others who stay in the relationship beyond the point of individual rationality, if such a commitment is possible. Indeed, ex ante it is rational to construct exit barriers like costly and difficult divorce laws, so as to reduce early exit. Such exit barriers might be behavioral as well as legal. If an individual can develop a reputation for sticking in a relationship beyond the break-even point, it would make that individual a more desirable partner and thus enhance the set of available partners, as well as encourage greater and longer lasting investment by the chosen partner. One way of creating such a reputation is to act as if one cares about sunk costs…We now formalize this concept using a simple two-period model that sets aside consideration of selection…That is, a slight possibility of breach is collectively harmful; both agents would be ex ante better off if they could prevent breach when V − ρ < 1, which holds as long as the reputation cost ρ of breaching is not too small. In this model, a tendency to stay in the relationship due to a large sunk investment would be beneficial to each party.

    ↩︎
  20. The qualifier is because hyperbolic discounting has been demonstrated in many primates, as have a number of other biases; eg. Chen et al 2006, “How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior”↩︎

  21. See also Rad­ford & Blakey 2000, :

    Nest-de­fence be­hav­iour of passer­ines is a form of parental in­vest­ment. Par­ents are se­lect­ed, there­fore, to vary the in­ten­sity of their nest de­fence with re­spect to the value of their off­spring. Great tit, Parus ma­jor, males were tested for their de­fence re­sponse to both a nest preda­tor and play­back of a great tit chick dis­tress call. The re­sults from the two tri­als were sim­i­lar; males gave more alarm calls and made more perch changes if they had larger broods and if they had a greater pro­por­tion of sons in their brood. This is the first ev­i­dence for a re­la­tion­ship be­tween nest-de­fence in­ten­sity and off­spring sex ra­tio. Pa­ter­nal qual­i­ty, size, age and con­di­tion, lay date and chick con­di­tion did not sig­nifi­cantly in­flu­ence any of the mea­sured nest-de­fence pa­ra­me­ters.

    …The most consistent pattern found in studies of avian nest defence has been an increase in the level of the parental response to predators from clutch initiation to fledging (e.g. Biermann & Robertson 1981; Regelmann & Curio 1983; Montgomerie & Weatherhead 1988; Wiklund 1990a). This supports the prediction from parental investment theory (Trivers 1972) that parents should risk more in defence of young that are more valuable to them. The intensity of nest defence is also expected to be positively correlated with brood size because the benefits of deterring a predator will increase with offspring number (Williams 1966; Wiklund 1990b).

    ↩︎
  22. Northcraft & Wolf 1984, “Dollars, Sense, and Sunk Costs: A Life Cycle Model of Resource Allocation Decisions”, Academy of Management Review, 9, 225–234:

    The de­ci­sion maker also may treat the neg­a­tive feed­back as sim­ply a learn­ing ex­pe­ri­ence-a cue to redi­rect efforts within a project rather than aban­don it (Con­nol­ly, 1976).

    …In some cases (Brock­n­er, Shaw, & Ru­bin, 1979), the ex­pected rate of re­turn for fur­ther fi­nan­cial com­mit­ment even can be shown with a few as­sump­tions to be in­creas­ing and (after a cer­tain amount of in­vest­ment) fi­nan­cially ad­vis­able, de­spite the claim that fur­ther re­source com­mit­ment un­der the cir­cum­stances is psy­cho­log­i­cally rather than eco­nom­i­cally mo­ti­vat­ed…­More to the point, the life cy­cle model clearly re­veals the psy­chol­o­gist’s fal­la­cy: con­tin­u­ing a project in the face of a fi­nan­cial set­back is not al­ways ir­ra­tional (it de­pends on the stage in the project and the mag­ni­tude of the fi­nan­cial set­back). Sec­ond, the life cy­cle model pro­vides an in­sight into the man­ager’s pre­oc­cu­pa­tion with a pro­jec­t’s fi­nan­cial past. It demon­strates how a pro­jec­t’s fi­nan­cial past can be used heuris­ti­cally to un­der­stand the pro­jec­t’s fu­ture.

    Fried­man et al 2006:

    …There are also several possible rational explanations for an apparent concern with sunk costs. Maintaining a reputation for finishing what you start may have sufficient value to compensate for the expected loss on an additional investment. The ‘option’ value (e.g., Dixit and Pindyck, 1994 [Investment Under Uncertainty]) [cf. O’Brien & Folta 2009, Tiwana & Fichman 2006] of continuing a project also may offset an expected loss. in organizations may make it personally better for a manager to continue an unprofitable project than to cancel it and take the heat from its supporters (e.g., Milgrom and Roberts, 1992 [Economics, Organization, and Management]).

    (Cer­tainty effects seem to be sup­ported by fMRI imag­ing.) One may ask why cap­i­tal con­straints aren’t solved—if the projects re­ally are good profitable ideas—by re­sort to eq­uity or debt? But those are al­ways last re­sorts due to fun­da­men­tal co­or­di­na­tion & trust is­sues; McAfee et al 2007:

    Abun­dant the­o­ret­i­cal lit­er­a­ture in cor­po­rate fi­nance shows that im­pos­ing fi­nan­cial con­straints on firm man­agers im­proves (see Stiglitz and Weiss, 1981, My­ers and Ma­jluf, 1984, Lewis and Sap­ping­ton, 1989, and Hart and Moore, 1995). The the­o­ret­i­cal con­clu­sion finds over­whelm­ing em­pir­i­cal sup­port, and only a small frac­tion of busi­ness in­vest­ment is funded by bor­row­ing (see Faz­zari and Athey, 1987, Faz­zari and Pe­ter­son, 1993, and Love, 2003). When man­agers face fi­nan­cial con­straints, sunk costs must in­flu­ence firm in­vest­ments sim­ply be­cause of bud­get­s…­Firms with fi­nan­cial con­straints might ra­tio­nally re­act to sunk costs by in­vest­ing more in a pro­ject, rather than less, be­cause the abil­ity to un­der­take al­ter­na­tive in­vest­ments de­clines in the level of sunk cost­s…­Given lim­ited re­sources, if the firm has al­ready sunk more re­sources into the cur­rent pro­ject, then the value of the op­tion to start a new project if it arises is lower rel­a­tive the value of the op­tion to con­tinue the cur­rent pro­ject, be­cause fewer re­sources are left over to bring any new project to fruition, and more re­sources have al­ready been spent to bring the cur­rent project to fruition. There­fore, the fir­m’s in­cen­tive to con­tinue in­vest­ing in the cur­rent project is higher the more re­sources it has al­ready sunk into the pro­ject.

    • Stiglitz, Joseph E. and Weiss, An­drew, 1981. “Credit Ra­tioning in Mar­kets with Im­per­fect In­for­ma­tion”, Amer­i­can Eco­nomic Re­view 71, 393–410.
    • My­ers, Stew­art and Ma­jluf, Nicholas S., 1984. “Cor­po­rate Fi­nanc­ing and In­vest­ment De­ci­sions when Firms Have In­for­ma­tion that In­vestors Do Not Have,” Jour­nal of Fi­nan­cial Eco­nom­ics 13, 187–221
    • Lewis, Tracy and Sap­ping­ton, David E. M., 1989. “Coun­ter­vail­ing In­cen­tives in Agency Prob­lems,” Jour­nal of Eco­nomic The­ory 49, 294–313
    • Hart, Oliver and Moore, John, 1995. “Debt and Se­nior­i­ty: An Analy­sis of the Role of Hard Claims in Con­strain­ing Man­age­ment,” Amer­i­can Eco­nomic Re­view 85, 567–585
    • Faz­zari, Steven and Athey, Michael J., 1987. “Asym­met­ric In­for­ma­tion, Fi­nanc­ing Con­straints, and In­vest­ment,” Re­view of Eco­nom­ics and Sta­tis­tics 69, 481–487.
    • Faz­zari, Steven and Pe­tersen, Bruce, 1993. “Work­ing Cap­i­tal and Fixed In­vest­ment: New Ev­i­dence on Fi­nanc­ing Con­straints,” RAND Jour­nal of Eco­nom­ics 24, 328–342
    • Love, Ines­sa, 2003. “Fi­nan­cial De­vel­op­ment and Fi­nanc­ing Con­straints: In­ter­na­tional Ev­i­dence from the Struc­tural In­vest­ment Mod­el,” Re­view of Fi­nan­cial Stud­ies 16, 765–791
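
    A minimal numeric sketch of the budget-constraint channel (my own made-up numbers, not a model from McAfee et al or the papers above): with a hard spending cap, every dollar already sunk shrinks what remains for any fresh alternative, so a purely forward-looking comparison recommends finishing the current project more often the more has been sunk, with no fallacy involved.

    ```python
    # Hypothetical numbers: a budget-capped firm chooses between finishing the
    # current project and abandoning it to start an alternative from scratch.
    # The remaining completion cost is held fixed to isolate the budget channel.
    BUDGET         = 100.0   # total funds available this planning cycle
    FINISH_COST    = 60.0    # remaining cost to complete the current project
    FINISH_PAYOFF  = 80.0    # payoff if the current project is completed
    ALT_TOTAL_COST = 90.0    # the alternative must be built from zero
    ALT_PAYOFF     = 120.0

    def best_choice(sunk):
        """Forward-looking comparison; sunk spending matters only via the remaining budget."""
        remaining = BUDGET - sunk
        finish = FINISH_PAYOFF - FINISH_COST if remaining >= FINISH_COST else float("-inf")
        switch = ALT_PAYOFF - ALT_TOTAL_COST if remaining >= ALT_TOTAL_COST else float("-inf")
        return "finish current project" if finish >= switch else "switch to alternative"

    for sunk in (0, 10, 20, 40):
        print(f"already sunk {sunk:>3}: {best_choice(sunk)}")
    ```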
    ↩︎
  23. The swans:

    Al­though none of these terms are used, the same phe­nom­ena is also ob­served by No­let et al. (2001). In par­tic­u­lar, tun­dra swans must ex­pend more en­ergy to “up­-end” to feed on deep­-wa­ter tu­ber patches than they do to “head­-dip” to feed on shal­low-wa­ter patch­es; how­ev­er, con­trary to the ex­pec­ta­tions of No­let et al., the swans feed for a longer time on each high­-cost deep­-wa­ter patch. In every con­text, the ob­ser­va­tion of the sunk-cost effect is an enigma be­cause in­tu­ition sug­gests that this be­hav­ior is sub­op­ti­mal. Here, we show how op­ti­miza­tion of Eq. (3) pre­dicts the sunk-cost effect for cer­tain sce­nar­ios; a com­mon el­e­ment of every case is a large ini­tial cost.

    ↩︎
  24. Klaczyn­ski & Cot­trell 2004:

    Al­though con­sid­er­able ev­i­dence in­di­cates that adults com­mit the SC fal­lacy fre­quent­ly, age differ­ences in the propen­sity to ho­n­our sunk costs have been lit­tle stud­ied. In their in­ves­ti­ga­tions of 7–15-year-olds (S­tudy 1) and 5–12-year-olds (S­tudy 2), Baron et al. (1993) found no re­la­tion­ship be­tween age and SC de­ci­sions. By con­trast, Klaczyn­ski (2001b) re­ported that the SC fal­lacy de­creased from early ado­les­cence to adult­hood, al­though nor­ma­tive de­ci­sions were in­fre­quent across ages. A third pat­tern of find­ings is re­viewed by Arkes and Ay­ton (1999). Specifi­cal­ly, Arkes and Ay­ton ar­gue that two stud­ies (Krouse, 1986; Web­ley & Plais­er, 1998) in­di­cate that younger chil­dren com­mit the SC fal­lacy less fre­quently than older chil­dren. Mak­ing sense of these con­flict­ing find­ings is diffi­cult be­cause crit­i­cisms can be levied against each in­ves­ti­ga­tion. For in­stance, Arkes and Ay­ton (1999) ques­tioned the null find­ings of Baron et al. (1993) be­cause sam­ple sizes were small (e.g., in Baron et al., Study 2, n per age group ranged from 7 to 17). The prob­lems used by Krouse (1986) and Web­ley and Plaiser (1998) were not, strictly speak­ing, SC prob­lems (rather, they were prob­lems of ‘men­tal ac­count­ing’; see Web­ley & Plais­er, 1998). Be­cause Klaczyn­ski (2001b) did not in­clude chil­dren in his sam­ple, the age trends he re­ported are lim­ited to ado­les­cence. Thus, an in­ter­pretable mon­tage of age trends in SC de­ci­sions can­not be cre­ated from prior re­search.

    …An alternative proposition is based on the previously outlined theory of the role of metacognition in mediating interactions between analytic and heuristic processing. In this view, even young children have had ample opportunities to convert the ‘waste not’ heuristic from a conscious strategy to an automatically activated heuristic stored as a procedural memory. Evidence from children’s experiences with food (e.g., Birch, Fisher, & Grimm-Thomas, 1999) provides some support for the argument that even preschoolers are frequently reinforced for not ‘wasting’ food. Mothers commonly exhort their children to ‘clean up their plates’ even though they are sated and even though the nutritional effects of eating more than their bodies require are generally negative. If the ‘waste not’ heuristic is automatically activated in sunk cost situations for both children and adults, then one possibility is that no age differences in committing the fallacy should be expected. However, if activated heuristics are momentarily available for evaluation in working memory, then the superior metacognitive abilities of adolescents and adults should allow them to intercede in experiential processing before the heuristic is actually used. Although the evidence is clear that most adults do not take advantage of this opportunity for evaluation, the proportion of adolescents and adults who actively inhibit the ‘waste not’ heuristic should be greater than the same proportion of children.

    ↩︎
  25. eg. “Dis­count­ing of De­layed Re­wards: A Life-S­pan Com­par­i­son”, Green et al 1994; ab­stract:

    In this study, chil­dren, young adults, and older adults chose be­tween im­me­di­ate and de­layed hy­po­thet­i­cal mon­e­tary re­wards. The amount of the de­layed re­ward was held con­stant while its de­lay was var­ied. All three age groups showed de­lay dis­count­ing; that is, the amount of an im­me­di­ate re­ward judged to be of equal value to the de­layed re­ward de­creased as a func­tion of de­lay. The rate of dis­count­ing was high­est for chil­dren and low­est for older adults, pre­dict­ing a life-s­pan de­vel­op­men­tal trend to­ward in­creased self­-con­trol. Dis­count­ing of de­layed re­wards by all three age groups was well de­scribed by a sin­gle func­tion with age-sen­si­tive pa­ra­me­ters (all R2s > .94). Thus, even though there are quan­ti­ta­tive age differ­ences in de­lay dis­count­ing, the ex­is­tence of an age-in­vari­ant form of dis­count func­tion sug­gests that the process of choos­ing be­tween re­wards of differ­ent amounts and de­lays is qual­i­ta­tively sim­i­lar across the life span.
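
    To make “a single function with age-sensitive parameters” concrete, the sketch below evaluates the standard hyperbolic discount function V = A / (1 + kD) at a few delays. The functional form is the usual one in this literature, but the three k values are invented stand-ins for the reported ordering (children steepest, older adults shallowest), not Green et al’s fitted estimates.

    ```python
    # Hyperbolic discounting: present value V = A / (1 + k*D) of an amount A
    # delayed by D periods. The k values are illustrative, not fitted estimates.
    A = 1000.0                       # delayed reward ($)
    delays = [1, 6, 12, 60, 120]     # delay in months

    k_by_group = {"children": 0.50, "young adults": 0.15, "older adults": 0.05}

    def present_value(amount, delay, k):
        return amount / (1 + k * delay)

    print("delays (months):", delays)
    for group, k in k_by_group.items():
        values = [round(present_value(A, d, k)) for d in delays]
        print(f"{group:>12} (k={k:.2f}): present values = {values}")
    ```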

    ↩︎
  26. “The Bias Against Cre­ativ­i­ty: Why Peo­ple De­sire But Re­ject Cre­ative Ideas”, Mueller et al 2011:

    Un­cer­tainty is an aver­sive state (Fiske & Tay­lor, 1991 [So­cial cog­ni­tion]; Hei­der, 1958 [The psy­chol­ogy of in­ter­per­sonal re­la­tions]) which peo­ple feel a strong mo­ti­va­tion to di­min­ish and avoid (Whit­son & Galin­sky, 2008).

    ↩︎
  27. “Search­ing for the Sunk Cost Fal­lacy”, Fried­man et al 2007:

    Sub­jects play a com­puter game in which they de­cide whether to keep dig­ging for trea­sure on an is­land or to sink a cost (which will turn out to be ei­ther high or low) to move to an­other is­land. The re­search hy­poth­e­sis is that sub­jects will stay longer on is­lands that were more costly to find. Nine treat­ment vari­ables are con­sid­ered, e.g. al­ter­na­tive vi­sual dis­plays, whether the trea­sure value of an is­land is shown on ar­rival or dis­cov­ered by trial and er­ror, and al­ter­na­tive pa­ra­me­ters for sunk costs. The data re­veal a sur­pris­ingly small and er­ratic sunk cost effect that is gen­er­ally in­sen­si­tive to the pro­posed psy­cho­log­i­cal dri­vers.

    I cite Fried­man 2006 here so much be­cause it’s un­usu­al—as McAfee et al 2007 puts it:

    …Most of the ex­ist­ing em­pir­i­cal work has not con­trolled for chang­ing haz­ards, op­tion val­ues, rep­u­ta­tions for abil­ity and com­mit­ment, and bud­get con­straints. We are aware of only one study in which sev­eral of these fac­tors are elim­i­nat­ed—Fried­man et al. (2006). In an ex­per­i­men­tal en­vi­ron­ment with­out op­tion value or rep­u­ta­tion con­sid­er­a­tions, the au­thors find only very small and sta­tis­ti­cally in­signifi­cant sunk cost effects in the ma­jor­ity of their treat­ments, con­sis­tent with the ra­tio­nal the­ory pre­sented here.

    ↩︎
  28. On the lessons of Garland et al 1990’s observation of an absence of sunk cost fallacy, McAfee et al 2007:

    While some projects have an in­creas­ing haz­ard, oth­ers ap­pear to have a de­creas­ing haz­ard. For ex­am­ple, , orig­i­nally ex­pected to cost $1 bil­lion (see Ep­stein, 1998), prob­a­bly has a de­creas­ing haz­ard; given ini­tial fail­ure, the odds of im­me­di­ate suc­cess re­cede and the likely ex­pen­di­tures re­quired to com­plete grow. Oil-ex­plo­ration projects might also be char­ac­ter­ized by de­creas­ing haz­ards. Sup­pose a firm ac­quires a li­cense to drill a num­ber of wells in a fixed area. It de­cides to drill a well on a par­tic­u­lar spot in the area. Sup­pose the well turns out to be dry. The costs of drilling the well are then sunk. But the dry well might in­di­cate that the like­li­hood of strik­ing oil on an­other spot in the area is low since the geo­phys­i­cal char­ac­ter­is­tics of sur­face rocks and ter­rain for the next spot are more or less the same as the ones for the pre­vi­ous spot that turned out to be dry. Thus, the firm might be ra­tio­nally less likely to drill an­other well. In gen­er­al, firms might be less will­ing to drill an­other well the more wells they had al­ready found to be dry. This may in part ex­plain the rapid “de-esca­la­tion” ob­served by Gar­land, Sande­ford, and Rogers (1990) in their oil-ex­plo­ration ex­per­i­ments.
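
    The dry-well reasoning is ordinary Bayesian updating, and a short sketch makes the ‘rational de-escalation’ explicit. The prior, per-well strike probability, drilling cost, and strike payoff below are all invented; the point is only that each dry well lowers the posterior probability that the area holds oil at all, so the expected value of the next well can drop below its cost without any sunk-cost reasoning.

    ```python
    # Illustrative Bayesian account of de-escalation after dry wells (made-up numbers).
    PRIOR_OIL   = 0.5    # prior probability the licensed area is oil-bearing at all
    P_HIT       = 0.3    # chance a given well strikes oil, if the area is oil-bearing
    WELL_COST   = 1.0    # cost of drilling one more well ($M)
    STRIKE_GAIN = 10.0   # payoff of a strike ($M)

    def posterior_after_dry_wells(n):
        """P(area is oil-bearing | n dry wells), by Bayes' rule."""
        likelihood_oil = (1 - P_HIT) ** n          # n misses even though oil is there
        likelihood_dry = 1.0                       # n misses are certain if no oil
        num = PRIOR_OIL * likelihood_oil
        return num / (num + (1 - PRIOR_OIL) * likelihood_dry)

    for n in range(0, 7):
        p = posterior_after_dry_wells(n)
        ev_next = p * P_HIT * STRIKE_GAIN - WELL_COST   # expected value of well n+1
        print(f"{n} dry wells: P(oil)={p:.2f}, EV(next well)={ev_next:+.2f} "
              f"-> {'drill' if ev_next > 0 else 'stop'}")
    ```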

    ↩︎
  29. Born­stein et al. 1999:

    Measurements and main results: Residents evaluated medical and non-medical situations that varied the amount of previous investment and whether the present decision maker was the same or different from the person who had made the initial investment. They rated reasons both for continuing the initial decision (e.g., stay with the medication already in use) and for switching to a new alternative (e.g., a different medication). There were two main findings: First, the residents’ ratings of whether to continue or switch medical treatments were not influenced by the amount of the initial investment (p’s > 0.05). Second, residents’ reasoning was more normative in medical than in non-medical situations, in which it paralleled that of undergraduates (p’s < 0.05).

    Con­clu­sions: Med­ical res­i­dents’ eval­u­a­tion of treat­ment de­ci­sions re­flected good rea­son­ing, in that they were not in­flu­enced by the amount of time and/or money that had al­ready been in­vested in treat­ing a pa­tient. How­ev­er, the res­i­dents did demon­strate a sunk-cost effect in eval­u­at­ing non-med­ical sit­u­a­tions. Thus, any ad­van­tage in de­ci­sion mak­ing that is con­ferred by med­ical train­ing ap­pears to be do­main spe­cific.

    Some of this was repli­cated & gen­er­al­ized in Braver­man & Blu­men­thal-Barby 2012:

    Specifi­cal­ly, we sur­veyed 389 health care providers in a large ur­ban med­ical cen­ter in the United States dur­ing Au­gust 2009. We asked par­tic­i­pants to make a treat­ment rec­om­men­da­tion based on one of four hy­po­thet­i­cal clin­i­cal sce­nar­ios that var­ied in the source and type of prior in­vest­ment de­scribed. By com­par­ing rec­om­men­da­tions across sce­nar­ios, we found that providers did not demon­strate a sunk-cost effect; rather, they demon­strated a sig­nifi­cant ten­dency to over-com­pen­sate for the effect. In ad­di­tion, we found that more than one in ten health care providers rec­om­mended con­tin­u­a­tion of an in­effec­tive treat­ment.

    ↩︎
  30. Staw 1981, “The Es­ca­la­tion of Com­mit­ment to a Course of Ac­tion”:

    …How­ev­er, when choos­ing to com­mit re­sources, sub­jects did not ap­pear to per­sist unswerv­ingly in the face of con­tin­ued neg­a­tive re­sults or to ig­nore in­for­ma­tion about the pos­si­bil­ity of fu­ture re­turns. These in­con­sis­ten­cies led to a third study [Staw & Ross, 1978] de­signed specifi­cally to find out how in­di­vid­u­als process in­for­ma­tion fol­low­ing neg­a­tive ver­sus pos­i­tive feed­back. In this third study, pre­vi­ous suc­cess/­fail­ure and causal in­for­ma­tion about a set­back were both ex­per­i­men­tally var­ied. Re­sults showed that sub­jects in­vested more re­sources in a course of ac­tion when in­for­ma­tion pointed to an ex­oge­nous rather than en­doge­nous cause of a set­back, and this ten­dency was most pro­nounced when sub­jects had been given a pre­vi­ous fail­ure rather than a suc­cess. The ex­oge­nous cause in this ex­per­i­ment was one that was both ex­ter­nal to the pro­gram in which sub­jects in­vested and was un­likely to per­sist, whereas the en­doge­nous cause was a prob­lem cen­tral to the pro­gram and likely to per­sist.

    ↩︎
  31. “Con­tin­u­ing in­vest­ment un­der con­di­tions of fail­ure: A lab­o­ra­tory study of the lim­its to es­ca­la­tion”, Mc­Cain 1986:

    Brock­ner et al. (1982), fol­low­ing Te­ger (1980), have also specifi­cally sug­gested that en­trap­ment in­volves two dis­tinct stages. In the first stage sub­jects re­spond pri­mar­ily to eco­nomic in­cen­tives, whereas self­-jus­ti­fi­ca­tion sup­pos­edly gov­erns the sec­ond. Brock­ner et al. found that cost salience sig­nifi­cantly re­duced en­trap­ment early on but had lit­tle effect in later pe­ri­od­s…Thus, a process that re­flects efforts to learn both what caused the set­backs and the im­pli­ca­tions of that cause for fu­ture ac­tion may pro­vide a bet­ter model of de-esca­la­tion.

    …The find­ings of this study clearly showed that the es­ca­la­tion effect, de­fined by a differ­ence be­tween the al­lo­ca­tions of high- and low-choice sub­jects, was lim­ited to the ini­tial stages of con­tin­u­ing in­vest­ment. The find­ings were con­sis­tent with pre­vi­ous re­search (Staw & Fox, 1977) and sup­port the con­tention that in­vest­ment in fail­ing projects in­volves two stages. Clear­ly, too, the avail­abil­ity of al­ter­na­tive in­vest­ments lim­ited the es­ca­la­tion effect. When sub­jects were given al­ter­na­tives to the fail­ing in­vest­ment, the differ­ence be­tween the in­vest­ments of the high- and low-choice groups dis­ap­peared. The re­sults showed, as well, that high­-choice sub­jects who dis­played the es­ca­la­tion effect quit fund­ing the fail­ing in­vest­ment sooner than com­pa­ra­ble low-choice sub­jects, con­trary to a com­mit­ment per­spec­tive. Sim­i­lar­ly, the de­clin­ing haz­ard rates ob­served here sup­port a learn­ing model more than they sup­port the self­-jus­ti­fi­ca­tion mod­el…­Some au­thors (e.g., Northcraft & Wolf, 1984) have sug­gested that in­vestors re­act differ­ently to cost over­runs than they re­act to rev­enue short­falls, yet many es­ca­la­tion ex­per­i­ments do not clearly spec­ify whether set­backs re­sult from higher than ex­pected costs or from lower than ex­pected rev­enues. Clear­ly, if in­vestors are sen­si­tive to un­cer­tain­ty, as the at­tri­bu­tional model sug­gests, re­searchers must con­sider how sub­jects may re­spond to an in­ad­e­quately spec­i­fied in­vest­ment con­text…

    ↩︎
  32. “Sunk and Op­por­tu­nity Costs in Val­u­a­tion and Bid­ding”, Phillips et al 1991, which also men­tions an­other ap­par­ent in­stance of mar­ket agents ini­tially com­mit­ting sunk cost and then learn­ing: Plott & Uhl 1981 “Com­pet­i­tive Equi­lib­rium with Mid­dle­men: An Em­pir­i­cal Study”, South­ern Eco­nomic Jour­nal↩︎

  33. In particular, “pay-to-bid” auctions such as Swoopo have been called instances of sunk cost fallacy, or “escalation of commitment”, by no less than Richard H. Thaler, and described by one techie as being “as close to pure, distilled evil in a business plan as I’ve ever seen”. But as Wang & Xu 2012 and Caldara 2012 indicate, while people do lose money to the penny auctions, they eventually do learn that penny auctions are not good ideas and escape the trap. A previous analysis of penny auctions, Augenblick 2009, omitted detailed survivorship data but still found some learning effects. And indeed, Swoopo has since shut down.↩︎

  34. “Learn­ing lessons from sunk costs”, Born­stein & Chap­man 1995:

    Study par­tic­i­pants rated the qual­ity of sev­eral ar­gu­ments for con­tin­u­ing an orig­i­nal plan in sunk cost sit­u­a­tions in or­der to (a) avoid wast­ing re­sources, (b) learn to make bet­ter de­ci­sions, (c) pun­ish poor de­ci­sion mak­ing, and (d) ap­pear con­sis­tent. The lesson-learn­ing ar­gu­ment was per­ceived as most ap­pro­pri­ate when adult teach­ers taught lessons to oth­ers, the orig­i­nal de­ci­sion was care­lessly made, or if it con­sumed com­par­a­tively more re­sources. Rat­ings of the lesson-learn­ing ar­gu­ment were higher for teacher-learner than for adult-alone sit­u­a­tions, re­gard­less of whether the learner was a child or an adult. The im­pli­ca­tions for im­prov­ing de­ci­sion mak­ing and judg­ing whether the sunk cost effect is a bias are dis­cussed…How­ev­er, prospect the­ory does not pre­dict an effect of vari­ables such as whether the de­ci­sion maker acted alone, the care with which the de­ci­sion was made, or the na­ture of the re­la­tion­ship be­tween teacher and learn­er. The other three re­sponses were in­flu­enced by these vari­ables.

    What ap­pears to be a bias in the lab­o­ra­tory may be func­tional be­hav­ior in a more re­al­is­tic con­text (Fun­der, 1987; Hog­a­rth, 1981), where a va­ri­ety of jus­ti­fi­ca­tions for the be­hav­ior can be con­sid­ered. In gen­er­al, ig­nor­ing sunk costs is an adap­tive, cost-effec­tive strat­e­gy. Yet what ap­pears to be bi­ased, ir­ra­tional be­hav­ior—­such as de­creas­ing util­ity through at­ten­tion to ir­re­triev­ably wasted re­sources—­can be de­scribed as ‘meta-ra­tional’ (Junger­mann, 1986), as­sum­ing the ben­e­fits of learn­ing and im­ple­ment­ing the les­son out­weigh the costs of stick­ing to the orig­i­nal plan. How­ev­er, it raises the in­ter­est­ing ques­tion of why con­tin­u­ing a failed plan is the best (or even a good) way to learn to make bet­ter de­ci­sions in the fu­ture. Per­haps one could both aban­don the cur­rent un­suc­cess­ful plan and learn to think more care­fully in fu­ture de­ci­sions.

    ↩︎
  35. See Arkes & Blumer 1985, which found no resistance in students who were taking an economics course and most of whom had taken other economics courses; also good is Larrick et al 1993 or Larrick et al 1990, “Teaching the use of cost-benefit reasoning in everyday life”:

    It may be seen in Ta­ble 1 that econ­o­mists’ rea­son­ing on the uni­ver­sity and in­ter­na­tional pol­icy ques­tions was more in line with cost-ben­e­fit rules than was that of bi­ol­o­gists and hu­man­ists. This pat­tern was found for the net-ben­e­fit ques­tions (p < 0.05), and for the op­por­tu­nity cost ques­tions (p < 0.05) and a trend was found for the sunk cost ques­tions (p < 0.15)…Third, econ­o­mists were more likely than bi­ol­o­gists and hu­man­ists to re­port that they ig­nored sunk costs or at­tended to op­por­tu­nity costs in their per­sonal de­ci­sions. For in­stance, they were more likely to have dropped a re­search project be­cause it was not prov­ing worth­while. (It is in­ter­est­ing to note that econ­o­mists were not sim­ply more likely to drop pro­jects. All three dis­ci­plines gave the same an­swer on av­er­age to the ques­tion “have you ever dropped a re­search project be­cause of a lack of fund­ing?”) Fi­nal­ly, econ­o­mists par­tic­i­pated in a greater num­ber of time-sav­ing ac­tiv­i­ties…The re­sults show that train­ing peo­ple only briefly on an eco­nomic prin­ci­ple sig­nifi­cantly al­ters their so­lu­tions to hy­po­thet­i­cal eco­nomic prob­lems [in­clud­ing sunk cost]. More­over, train­ing effects gen­er­al­ize fully from a fi­nan­cial do­main to a non­fi­nan­cial one and vice ver­sa…The means for both in­dices showed that [quick­-e­co­nom­ics] trained sub­jects were ig­nor­ing sunk costs more than un­trained sub­jects, but only the nine-item in­dex based on the ques­tion “Have you bought one of the fol­low­ing items at some time and then not used it in the past month” ap­proached sig­nifi­cance. Trained sub­jects re­ported that they had paid for but not used 1.14 ob­jects and ac­tiv­i­ties com­pared to 0.84 for un­trained sub­jects, f(78) = 1.64, p = 0.10.

    On the other hand, Fen­nema & Perkins 2008:

    The re­sults in­di­cate that prac­tic­ing Cer­ti­fied Pub­lic Ac­coun­tants (CPAs), Mas­ters of Busi­ness Ad­min­is­tra­tion stu­dents (MBAs) and un­der­grad­u­ate ac­count­ing stu­dents per­form bet­ter than un­der­grad­u­ate psy­chol­ogy stu­dents. The level of train­ing, as mea­sured by the num­ber of col­lege courses in man­age­r­ial ac­count­ing, was found to be pos­i­tively cor­re­lated with per­for­mance, while the level of ex­pe­ri­ence, as mea­sured by years of fi­nan­cial­ly-re­lated work, was not. Jus­ti­fi­ca­tion was found to im­prove de­ci­sions only for those par­tic­i­pants with sig­nifi­cant work ex­pe­ri­ence (MBAs and CPAs). Strate­gies used in this type of de­ci­sion were ex­am­ined with the sur­pris­ing find­ing that eco­nom­i­cally ra­tio­nal de­ci­sions can be made even if sunk costs are not ig­nored.

    ↩︎
  36. For example, Heath 1995 spends a page criticizing Brockner & Rubin 1985’s setup of endowed subjects buying tickets in a lottery, pointing out they took subjects quitting ticket purchases after a long run of ticket-buying as evidence of sunk cost, even though if the expected value of the lottery was positive, the normative rational strategy is for the subject to spend every penny of the endowment buying tickets! “Consider, for example, the average of $3.82 invested in the game with a $10.00 prize. In this game, the average subject quits at a point where the expected benefits from a marginal investment are three times what they were when the subject began investing.” [emphasis added]↩︎

  37. The pre­vi­ously men­tioned stud­ies of sunk cost in chil­dren found min­i­mal cor­re­la­tions with in­tel­li­gence, when that was mea­sured. For adults, see Strough et al 2008, pre­vi­ously cit­ed. Al­so, Stanovich, K. E., & West, R. F. (2008b). “On the rel­a­tive in­de­pen­dence of think­ing bi­ases and cog­ni­tive abil­ity”. Jour­nal of Per­son­al­ity and So­cial Psy­chol­ogy, 94, 672–695 (pg 7–8):

    Both cognitive ability groups displayed sunk-cost effects of roughly equal magnitude. For the high-SAT group, the mean in the no-sunk-cost condition was 6.90 and the mean in the sunk-cost condition was 5.08, whereas for the low-SAT group, the mean in the no-sunk-cost condition was 6.50 and the mean in the sunk-cost condition was 4.19. A 2 (cognitive ability) × 2 (condition) [ANOVA] indicated a significant main effect of cognitive ability, F(1, 725) = 8.40, MSE = 9.13, p < .01, and a significant main effect of condition, F(1, 725) = 84.9, MSE = 9.13, p < .001. There was a slight tendency for the low-SAT participants to show a larger sunk-cost effect, but the Cognitive Ability × Condition interaction did not attain statistical significance, F(1, 725) = 1.21, MSE = 9.13. The interaction was also tested in a regression analysis in which SAT was treated as a continuous variable rather than as a dichotomous variable. The Form × SAT cross product, when entered third in the equation, was not significant, F(1, 725) = 0.32.

    The sunk-cost effect thus rep­re­sents an­other cog­ni­tive bias that is not strongly at­ten­u­ated by cog­ni­tive abil­i­ty. How­ev­er, this is true only when it is as­sessed in a be­tween-sub­jects con­text. Us­ing a sim­i­lar sunk-cost prob­lem, Stanovich & West 1999 did find an as­so­ci­a­tion with cog­ni­tive abil­ity when par­tic­i­pants re­sponded in a with­in-sub­jects de­sign.

    And Parker & Fis­chhoff 2005:

    The first two rows of Table 5 show strong correlations between five of the seven DMC component measures and respondents’ scores on the WISC-R vocabulary test and on Giancola et al.’s (1996) measure of ECF. Consistency in risk perception and resistance to sunk cost show little relationship to either of these general cognitive abilities. [correlations: 0.12, 0.08]

    Bru­ine de Bruin et al 2007, mod­i­fy­ing Parker & Fis­chhoff 2005’s test bat­tery, im­proved the con­sis­tency of the sunk cost ques­tions, and found sim­i­lar small cor­re­la­tions with their 2 IQ mea­sures, of 0.17 and 0.04. Lar­rick et al 1993 recorded SAT/ACT scores (close prox­ies for IQ) and found some cor­re­la­tion, and noted that IQ was “pos­i­tively re­lated to recog­ni­tion of econ­o­mists’ po­si­tion on var­i­ous eco­nomic prob­lems.”↩︎

  38. “Sunk Costs in the NBA: Why Draft Or­der Affects Play­ing Time and Sur­vival in Pro­fes­sional Bas­ket­ball”:

    A second problem is that much of the escalation literature, despite its intent to explain nonrational sources of commitment, has not directly challenged the assumptions of economic decision making. By and large, the escalation literature has demonstrated that psychological and social factors can influence resource allocation decisions, not that the rational assumptions of decision making are in error. A third weakness is that almost all the escalation literature is laboratory based. Aside from a few recent qualitative case studies (e.g., Ross and Staw, 1986, 1993), escalation predictions have not been confirmed or falsified in real organizational settings, using data that are generated in their natural context. Therefore, despite the size of the escalation literature, it is still uncertain if escalation effects can be generalized from the laboratory to the field.

    …Gar­land, Sande­fur, and Rogers (1990) found a sim­i­lar ab­sence of sunk-cost effects in an ex­per­i­ment us­ing an oil-drilling sce­nario. Prior ex­pen­di­tures on dry wells were not as­so­ci­ated with con­tin­ued drilling, per­haps be­cause dry wells were so clearly seen as re­duc­ing rather than in­creas­ing the like­li­hood of fu­ture oil pro­duc­tion. Thus it ap­pears that sunk costs may only be in­flu­en­tial on project de­ci­sions when they are linked to the per­cep­tion (if not the re­al­i­ty) of progress on a course of ac­tion.

    …Table 2 also shows that draft or­der was a sig­nifi­cant pre­dic­tor of min­utes played over the en­tire five-year pe­ri­od. This effect was above and be­yond any effects of a play­er’s per­for­mance, in­jury, or trade sta­tus. The re­gres­sions showed that every in­cre­ment in the draft num­ber de­creased play­ing time by as much as 23 min­utes in the sec­ond year (I, = −22.77, p < 0.001, one-tailed test). Like­wise, be­ing taken in the sec­ond rather than the first round of the draft meant 552 min­utes less play­ing time dur­ing a play­er’s sec­ond year in the NBA.

    ↩︎
  39. Fried­man et al 2006: “…Of course, it is hard to com­pletely rule out other ex­pla­na­tions based on un­ob­served com­po­nents of per­for­mance or the coach­es’ Bayesian pri­ors.”↩︎

  40. “Rein­vest­ment de­ci­sions by en­tre­pre­neurs: Ra­tio­nal de­ci­sion-mak­ing or es­ca­la­tion of com­mit­ment?”, Mc­Carthy et al 1993:

    The hy­pothe­ses were tested us­ing data from a lon­gi­tu­di­nal study in­volv­ing 1112 firms. It was found that en­tre­pre­neurs who had started their firms and those who had ex­pressed sub­stan­tial over-con­fi­dence were sig­nifi­cantly more likely to make the de­ci­sion to ex­pand. The hy­pothe­ses that those who had part­ners and those who ex­pected to ap­ply their skills would be more likely to ex­pand were not sup­port­ed. Fur­ther­more, and con­sis­tent with pre­vi­ous re­search, these psy­cho­log­i­cal es­ca­la­tion pre­dic­tors seemed to ex­ert a greater in­flu­ence when feed­back from the mar­ket­place was neg­a­tive. As ex­pect­ed, there was a de­clin­ing in­flu­ence in the third year as com­pared with the sec­ond. Con­sis­tent with the prior lit­er­a­ture and the hy­pothe­ses, these psy­cho­log­i­cal pre­dic­tors did show a small, but sys­tem­atic in­flu­ence upon rein­vest­ment de­ci­sions.

    …Although the hy­poth­e­sis re­gard­ing PARTNR was not sup­port­ed, as not­ed, the ze­ro-order cor­re­la­tion be­tween PARTNR and NEWCAP2 is in the pre­dicted di­rec­tion (r = 0.06, p < 0.05, one-tailed). Thus, en­tre­pre­neurs with part­ners may be more likely to ex­pand the as­set base of their firms than they would be if they were sole own­ers. This has sig­nifi­cant im­pli­ca­tions for en­tre­pre­neur­ial teams, in that the pres­ence of part­ners does not in­hibit the ten­dency to es­ca­late, but in fact in­creases that ten­den­cy. This means that hav­ing part­ners is not in­sur­ance against the ten­dency to es­ca­late. This is con­sis­tent with the re­search on es­ca­la­tion (Baz­er­man et al. 1984).

    …A puzzling finding was the lack of any relationship between financial indicators from the previous year and new capital invested in the business. In other words, there was no systematic relationship between sales growth and expansion of the asset base for these young firms. This may mean that many of these firms started with some excess capacity so that it was not necessary to add to facilities to support their early growth. It may also mean that management of working capital was erratic. On the other hand, the psychological factors predicted by escalation theory did, in two of four cases, show systematic relationships to additional investment.

    …One fi­nal is­sue worth com­ment is the rel­a­tively small amount of vari­ance ac­counted for by the mod­els de­scribed in this study. The vari­ance ac­counted for in this re­search is in line with the find­ings in sim­i­lar stud­ies of es­ca­la­tion. In a re­cent field study of the es­ca­la­tion bi­as, Schoor­man (1988) re­ported that the es­ca­la­tion bias ac­counted for 6% of the vari­ance in per­for­mance rat­ings. Schoor­man (1988) noted in this ar­ti­cle that the es­ca­la­tion vari­ables were more pow­er­ful pre­dic­tors of per­for­mance (at 6%) than a mea­sure of abil­ity used in a val­i­dated se­lec­tion test for these same em­ploy­ees…­Taken to­gether these find­ings pro­vide sup­port for the view that es­ca­la­tion bias is a sig­nifi­cant and com­mon prob­lem in de­ci­sion-mak­ing among en­tre­pre­neurs. The char­ac­ter­is­tics of en­tre­pre­neurs and the na­ture of the de­ci­sions they are re­quired to make leave them par­tic­u­larly vul­ner­a­ble to es­ca­la­tion bias. Efforts to train en­tre­pre­neurs to guard against this bias may be very valu­able.

    ↩︎
  41. “Recon­cep­tu­al­iz­ing en­tre­pre­neur­ial ex­it: Di­ver­gent exit routes and their dri­vers”, Wennberg et al 2009:

    …An al­ter­na­tive fail­ure-avoid­ance strat­egy is to in­vest ad­di­tional eq­ui­ty. We found that such rein­vest­ments re­duced the prob­a­bil­ity of all exit routes. While pre­vi­ous re­search on rein­vest­ment also found that rein­vest­ment was not re­lated to well-de­fined per­for­mance lev­els (M­c­Carthy et al., 1993), it is in­ter­est­ing that it also re­duced the odds of har­vest sales and har­vest liq­ui­da­tions. As a fail­ure-avoid­ance strat­e­gy, rein­vest­ment thus seems to be less effec­tive than cost re­duc­tion. Cost re­duc­tions have di­rect effects on firm per­for­mance while rein­vest­ments pro­vide a tem­po­rary buffer for fail­ing firms. As sug­gest­ed, there might be dis­in­cen­tives to ad­di­tional in­vest­ments if tax laws pun­ish en­tre­pre­neurs tak­ing out money as salaries or div­i­dends. If cor­rob­o­rat­ed, this is an im­por­tant find­ing for pub­lic pol­icy mak­ers.

    ↩︎
  42. See and Ashraf et al 2007; on the hy­po­thet­i­cal (sum­mary from Holla & Kre­mer 2008, pg 11):

    When they di­vide their sam­ple into house­holds that dis­played a sunk-cost effect when re­spond­ing to a hy­po­thet­i­cal sce­nario posed to them by sur­vey­ors and those that did not, they find co­effi­cients of much larger mag­ni­tude for the hy­po­thet­i­cal-sunk-cost house­holds, al­though these re­main in­signifi­cant and can­not be sta­tis­ti­cally dis­tin­guished from the es­ti­mated effects for house­holds that did not dis­play this hy­po­thet­i­cal sunk-cost effect. Ashraf et al (2007) iden­tify hy­po­thet­i­cal-sunk-cost house­holds from their an­swers to the fol­low­ing ques­tion posed dur­ing the fol­low-up sur­vey: Sup­pose you bought a bot­tle of juice for 1,000 Kw. When you start to drink it, you re­al­ize you don’t re­ally like the taste. Would you fin­ish drink­ing it?

    ↩︎
  43. Fried­man et al 2006:

    Do In­ter­net users re­spond to sunk time costs? Man­ley & Seltzer (1997) re­port that after a par­tic­u­lar web­site im­posed an ac­cess charge, the re­main­ing users stayed longer. A ri­val ex­pla­na­tion to the sunk cost fal­lacy is se­lec­tion bi­as: the users with short­est stays when the site was free are those who stopped com­ing when they had to pay. Klein et al (1999) re­port that users stick around longer on their site after en­coun­ter­ing de­lays while play­ing a game, but again se­lec­tion bias is a pos­si­ble al­ter­na­tive ex­pla­na­tion. The is­sue is im­por­tant in e-com­merce be­cause ‘stick­ier’ sites earn more ad­ver­tis­ing rev­enue. Schwartz (1999) [Dig­i­tal Dar­win­ism: 7 Break­through Busi­ness Strate­gies for Sur­viv­ing in the Cut­throat Web Econ­omy] re­ports that man­agers of the free Wall Street Jour­nal site de­lib­er­ately slowed the lo­gin process in the be­lief that users would then stay longer. One of us (Lukose) took a sam­ple of 2000 user logs from a web­site and found a sig­nifi­cant pos­i­tive cor­re­la­tion be­tween res­i­dence time at the site and down­load la­ten­cy. One al­ter­na­tive ex­pla­na­tion is un­ob­served con­ges­tion on the web, and users may have been re­spond­ing more to ex­pected fu­ture time costs than to time costs al­ready sunk. Al­so, good sites may be more pop­u­lar be­cause they are good, lead­ing to (a) con­ges­tion and (b) more time spent on the site.

    ↩︎
  44. Fried­man et al 2006:

    …Bar­ron et al. (2001) find that US firms are sig­nifi­cantly more likely to ter­mi­nate projects fol­low­ing the de­par­ture of top man­agers. This might re­flect the new man­agers’ in­sen­si­tiv­ity to costs sunk by their pre­de­ces­sors, or it might sim­ply re­flect two as­pects of the same broad re­align­ment de­ci­sion.

    ↩︎
  45. A “tra­di­tional maxim” in Chi­nese state­craft.↩︎

  46. Dil Green’s de­scrip­tion:

    • ‘tactical optimism’: David Bohm’s term for the way in which humans overcome the (so far) inescapable assessment that ‘in the long run, we’re all dead’. Specifically, within the building industry, rife with non-optimal ingrained conditions, you wouldn’t come to work if you weren’t an optimist. Builders who cease to have an optimistic outlook go and find other things to do.

    It’s hard not to think of Arkes & Hutzel 2000, “The Role of Probability of Success Estimates in the Sunk Cost Effect”:

    The sunk cost effect is man­i­fested in a ten­dency to con­tinue an en­deavor once an in­vest­ment has been made. Arkes and Blumer (1985) showed that a sunk cost in­creases one’s es­ti­mated prob­a­bil­ity that the en­deavor will suc­ceed [ p(s)]. Is this p(s) in­crease a cause of the sunk cost effect, a con­se­quence of the effect, or both? In Ex­per­i­ment 1 par­tic­i­pants read a sce­nario in which a sunk cost was or was not pre­sent. Half of each group read what the pre­cise p(s) of the project would be, thereby dis­cour­ag­ing p(s) in­fla­tion. Nev­er­the­less these par­tic­i­pants man­i­fested the sunk cost effect, sug­gest­ing p(s) in­fla­tion is not nec­es­sary for the effect to oc­cur. In Ex­per­i­ment 2 par­tic­i­pants gave p(s) es­ti­mates be­fore or after the in­vest­ment de­ci­sion. The lat­ter group man­i­fested higher p(s), sug­gest­ing that the in­flated es­ti­mate is a con­se­quence of the de­ci­sion to in­vest.

    ↩︎
  47. The rea­son­ing goes like this:

    Another reason for honoring the sunk cost of the movie ticket (related to avoiding regret) is that you know yourself well enough to realize you often make mistakes. There are many irrational reasons why you would not want to see the movie after all. Maybe you’re unwilling to get up and go to the movie because you feel a little tired after eating too much. Maybe a friend who has already seen the movie discourages you from going, even though you know your tastes in movies don’t always match. Maybe you’re a little depressed and distracted by work/relationship/whatever problems. Etc.

    For what­ever rea­son, your past self chose to buy the tick­et, and your present self does not want to see the movie. Your present self has more in­for­ma­tion. But this ex­tra in­for­ma­tion is of du­bi­ous qual­i­ty, and is not al­ways rel­e­vant to the de­ci­sion. But it still in­flu­ences your state of mind, and you know that. How do you know which self is right? You don’t, un­til after you’ve seen the movie. The mar­ginal costs, in terms of men­tal dis­com­fort, of see­ing the movie and not lik­ing it, are usu­ally smaller than the mar­ginal ben­e­fit of stay­ing home and think­ing about what a great movie it could have been. The rea­son­ing be­hind this triv­ial ex­am­ple can eas­ily be adapted to sunk cost choices in sit­u­a­tions that do mat­ter.

    And again:

    Peo­ple who take into ac­count sunken costs in every­day de­ci­sions will make bet­ter de­ci­sions on av­er­age. My ar­gu­ment re­lies on the propo­si­tion that a per­son’s es­ti­mate of his own util­ity func­tion is highly noisy. In other words, you don’t re­ally know if go­ing to the movie will make you happy or not, un­til you ac­tu­ally do it.

    So if you’re in this movie-go­ing sit­u­a­tion, then you have at least two pieces of da­ta. Your cur­rent self has pro­duced an es­ti­mate that says the util­ity of go­ing to the movie is neg­a­tive. But your for­mer self pro­duced an es­ti­mate that says the util­ity is sub­stan­tially pos­i­tive—e­nough so that he was will­ing to fork over $10. So maybe you av­er­age out the es­ti­mates: if you cur­rently value the movie at -$5, then the av­er­age value is still pos­i­tive and you should go. The real ques­tion is how con­fi­dent you are in your cur­rent es­ti­mate, and whether that con­fi­dence is jus­ti­fied by real new in­for­ma­tion.
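
    To make the averaging concrete (my own gloss on the commenter’s arithmetic, not a formula he gives): treat each self’s valuation as a noisy estimate of the movie’s true utility and combine them with weights reflecting how much each estimate is trusted:

    $$\hat{U} = \frac{w_{\text{past}} U_{\text{past}} + w_{\text{now}} U_{\text{now}}}{w_{\text{past}} + w_{\text{now}}} = \frac{(1)(10) + (1)(-5)}{1 + 1} = 2.5 > 0$$

    With equal weights the combined estimate remains positive, so you go; only if the present estimate deserves more than twice the weight of the past one ($w_{\text{now}} > 2\,w_{\text{past}}$) does the average turn negative, which is just the commenter’s question of whether your current confidence “is justified by real new information”.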

    ↩︎
  48. “Real artists ship”, as the saying goes, and don’t give in to the temptation to rewrite entire systems unless starting over or starting an entirely new system is truly necessary (particularly given ). One might call using “sunk cost fallacy” to justify abandoning partial projects for new projects the “‘sunk cost fallacy’ fallacy”:

    I have a prob­lem with never fin­ish­ing things that I want to work on. I get en­thu­si­as­tic about them for a while, but then find some­thing else to work on. This prob­lem seems to be pow­ered par­tially by my sunk costs fal­lacy hooks. When faced with the choice of fin­ish­ing my cur­rent project or start­ing this shiny new pro­ject, my sunk costs hook ac­ti­vates and says “eval­u­ate fu­ture ex­pected util­ity and ig­nore sunk costs”. The new project looks very shiny com­pared to the old pro­ject, enough that it looks like a bet­ter thing to work on than the rest of the cur­rent pro­ject. The trou­ble is that this al­ways seems to be the case. It seems weird that the awe­some­ness of my project ideas would have ex­po­nen­tial growth over time, so there must be some­thing else here.

    Johni­cholas:

    …Some­times it can be hard to main­tain a good bal­ance among mul­ti­ple ac­tiv­i­ties. For ex­am­ple, it is im­por­tant to no­tice new good ideas. How­ev­er, I tend to spend too much time pur­su­ing nov­el­ty, and not enough time work­ing on the best idea that I’ve found so far. There is a tra­di­tion of browser games (see ) that en­force a kind of bal­ance us­ing a vir­tual cur­rency of ‘turns’. You ac­cu­mu­late turns slowly in real time, and es­sen­tially every ac­tion within the game uses up turns. This en­forces not spend­ing too much time play­ing the game (and in­creases the per­ceived value of the game via forced ar­ti­fi­cial scarci­ty, of course). If I gave my­self ‘ex­plore dol­lars’ for do­ing non-ex­plo­ration (so-called ex­ploit) tasks, and charged my­self for do­ing ex­plo­ration tasks (like read­ing or Wikipedi­a), I could en­force a bal­ance. If I were also prone to the op­po­site prob­lem (‘A few months in the lab can often save whole hours in the li­brary.’), then I might use two cur­ren­cies; ex­plor­ing costs ex­plore points but re­wards with ex­ploit points, and ex­ploit­ing costs ex­ploit points but re­wards with ex­plore points. (Vir­tual cur­ren­cies are ubiq­ui­tous in games, and they can be used for many pur­pos­es; I ex­pect to find them able to be placed across from many differ­ent fail­ure modes.)
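
    Johnicholas’s two-currency scheme is concrete enough to sketch in code; the following is only an illustration of the mechanism he describes, with the class name, point values, and starting balances being my own assumptions rather than anything he specifies:

    ```python
    # A minimal sketch (not Johnicholas's actual system) of the two-currency idea:
    # exploring costs 'explore' points but earns 'exploit' points, and vice versa,
    # so neither novelty-chasing nor grinding can crowd the other out forever.
    from dataclasses import dataclass

    @dataclass
    class ExploreExploitBudget:
        explore: int = 5  # allowance for novelty (reading, Wikipedia, shiny new projects)
        exploit: int = 5  # allowance for working on the best idea found so far

        def spend_on_explore(self, cost: int = 1, reward: int = 1) -> bool:
            """Pay for a novelty task; if affordable, earn exploit points back."""
            if self.explore < cost:
                return False  # explore budget exhausted: forced back to exploit tasks
            self.explore -= cost
            self.exploit += reward
            return True

        def spend_on_exploit(self, cost: int = 1, reward: int = 1) -> bool:
            """Pay for work on the current project; if affordable, earn explore points back."""
            if self.exploit < cost:
                return False  # exploit budget exhausted: allowed to go explore again
            self.exploit -= cost
            self.explore += reward
            return True

    if __name__ == "__main__":
        budget = ExploreExploitBudget()
        # Pure novelty-chasing stalls once the explore budget runs out:
        while budget.spend_on_explore():
            pass
        print(budget)  # ExploreExploitBudget(explore=0, exploit=10)
    ```

    The point of the mechanism is the forced alternation: neither novelty-chasing nor grinding on the current project can continue indefinitely without funding the other.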

    Mass Dri­ver:

    …I have the same prob­lem at work; al­though, by main­stream so­ci­ety’s stan­dards, I am a rea­son­ably suc­cess­ful pro­fes­sion­al, I can’t re­ally sit down and write a great es­say when I’m too hot, or, at least, it seems like I would be more pro­duc­tive if I stopped writ­ing for 5 min­utes and cranked up the A/C or changed into shorts. An hour lat­er, it seems like I would be more pro­duc­tive if I stopped writ­ing for 20 min­utes and ate lunch. Later that after­noon, it seems like I would be more pro­duc­tive if I stopped for a few min­utes and read an in­ter­est­ing ar­ti­cle on gen­eral sci­ence. These things hap­pen even in an ideal work­ing en­vi­ron­ment, when I’m by my­self in a place I’m fa­mil­iar with. If I have cowork­ers, or if I’m in a new town, there are even more dis­trac­tions. If I have to learn who to ask for help with learn­ing to use the new soft­ware so that I can re­search the data that I need to write a re­port, then I might spend 6 hours prepar­ing to spend 1 hour writ­ing a re­port.

    All this wor­ries me for two rea­sons: (1) I might be fail­ing to ac­tu­ally op­ti­mize for my goals if I only spend 10–20% of my time di­rectly per­form­ing tar­get ac­tions like ‘write es­say’ or ‘kayak with friends’, and (2) even if I am suc­cess­fully op­ti­miz­ing, it sucks that the way to achieve the re­sults that I want is to let my at­ten­tion dwell on the most effi­cient ways to, say, brush my teeth. I don’t just want to go kayak­ing, I want to think about kayak­ing. Think­ing about dri­ving to the river seems like a waste of cog­ni­tive ‘time’ to me.

    Daniel Meade:

    …I’ve had so many ‘projects’ over the past few years I’ve lost count. Has any one of them seen the light of day? Well yes, but that failed mis­er­ably. The point is, it’s all too easy to get dis­tracted and move onto some­thing else, per­haps it’s that hur­dle where you’re just not sure what to do next or how to do it, so in­stead of find­ing a way to tackle it head on, you take the easy way out and start some­thing new. I’m quick to blame my fail­ings on the lack of cap­i­tal, that I ‘need’ to get my projects off the ground. And that jus­ti­fies my avoid­ance. Of course, it does­n’t, far from it. But I just don’t know how to push through, not right now any way.

    We have all heard of businesses engaging in sunk costs, but that doesn’t tell us anything unless we know to what degree, if any, they engage in the opposite behavior, switching too much; from “How DigiCash Blew Everything”, Next! Magazine:

    It all started out quite nicely. The brand new company sold a smart card for closed systems which was a cash-cow for years. It was at this time that the first irritants appeared. Even if you are a brilliant scientist, that doesn’t mean you are a good manager. Chaum was a control freak, someone who couldn’t delegate anything to anyone else, and insisted upon watching over everybody’s shoulders. “That resulted in slowing down research,” explains an ex-DigiCash employee who wished to remain anonymous. “We had a lot of half-finished product. He continuously changed his mind about where things were headed.”

    ↩︎
  49. Initiation rituals may increase commitment as they become more unpleasant or demanding (Aronson & Mills 1959); and we can see similar attempts to exploit sunk costs for self-control in our daily consumerism—buying gym memberships or exercise equipment; McAfee et al 2007:

    Anec­do­tal ev­i­dence sug­gests that in­di­vid­u­als may even ex­ploit their own re­ac­tions to sunk ex­pen­di­tures to their ad­van­tage. Steele (1996, p. 610) [cf. El­ster 2000, Ulysses Un­bound: Stud­ies in Ra­tio­nal­i­ty, Pre­com­mit­ment and Con­straints, and Kelly 2004] and Wal­ton (2002, p. 479) re­count sto­ries of in­di­vid­u­als who buy ex­er­cise ma­chines or gym mem­ber­ships that cost in the thou­sands of dol­lars, even though they are re­luc­tant to spend this much mon­ey, rea­son­ing that if they do, it will make them ex­er­cise, which is good for their health. A re­ac­tion to sunk costs that as­sists in com­mit­ment is often help­ful.

    Or pre­pay­ing for lessons, or buy­ing ex­ces­sively ex­pen­sive writ­ing tools:

    ‘Are Moleskines really worth the cost compared to Mead? If so, why?’

    Plenty of peo­ple seem to swear by them. But here’s the thing—it’s not so much the cost (in ab­solute sums, it’s not that large). It’s whether you use it. You ob­vi­ously sweat over costs; per­haps this sweat­ing can be a cud­gel to force you to write down what­ev­er. The more a mole­sk­ine is­n’t worth buy­ing, the more you will find your­self com­pelled to use it. Then would­n’t you be bet­ter off in the end?

    Geoffrey Miller mocks this logic (pg 122 of Spent 2011):

    All experienced fitness machine salespeople are well aware that this is the fate of most of their products. What they are really selling consumers is the delusion that the sunk costs of buying the machines will force them to exercise conscientiously. (The consumers know that they could have already been jogging for months around their neighborhood parks in their old running shoes, but they also know that their access to the parks and shoes has not, empirically, been sufficient to induce regular exercise.) So, the consumer thinks: ‘If I invest $3,900 in this PreCor EFX5.33 elliptical trainer, it will (1) call forth regular aerobic activity from my flawed and unworthy body, through the techno-fetishistic magic of its build quality, and (2) save me money in the long run by reducing medical expenses.’ The salesperson meanwhile thinks: ‘20% commission!’ and the manufacturer thinks: ‘We can safely offer a ten-year warranty, because the average machine only gets used seventeen times in the first two months after purchase.’ Everybody’s happy, except for most consumers, and they don’t complain because they think it’s all their fault that they’re failing to use the machine. The few conscientious consumers who do use the equipment regularly enjoy many benefits: efficient muscle building and fat burning through the low perceived exertion of the PreCor’s smooth elliptical movement; a lean body that elicits lust and respect; a self-satisfied glow of moral superiority.

    ↩︎
  50. Grit is a slightly narrower version of Conscientiousness; from Duckworth et al 2007, “Grit: Perseverance and Passion for Long-Term Goals”:

    …We de­fine grit as per­se­ver­ance and pas­sion for long-term goals. Grit en­tails work­ing stren­u­ously to­ward chal­lenges, main­tain­ing effort and in­ter­est over years de­spite fail­ure, ad­ver­si­ty, and plateaus in progress. The gritty in­di­vid­ual ap­proaches achieve­ment as a marathon; his or her ad­van­tage is sta­mi­na. Whereas dis­ap­point­ment or bore­dom sig­nals to oth­ers that it is time to change tra­jec­tory and cut loss­es, the gritty in­di­vid­ual stays the course. Our hy­poth­e­sis that grit is es­sen­tial to high achieve­ment evolved dur­ing in­ter­views with pro­fes­sion­als in in­vest­ment bank­ing, paint­ing, jour­nal­ism, acad­e­mia, med­i­cine, and law. Asked what qual­ity dis­tin­guishes star per­form­ers in their re­spec­tive fields, these in­di­vid­u­als cited grit or a close syn­onym as often as tal­ent. In fact, many were awed by the achieve­ments of peers who did not at first seem as gifted as oth­ers but whose sus­tained com­mit­ment to their am­bi­tions was ex­cep­tion­al. Like­wise, many noted with sur­prise that prodi­giously gifted peers did not end up in the up­per ech­e­lons of their field.

    More than 100 years prior to our work on grit, Galton (1892) collected biographical information on eminent judges, statesmen, scientists, poets, musicians, painters, wrestlers, and others. Ability alone, he concluded, did not bring about success in any field. Rather, he believed high achievers to be triply blessed by ‘ability combined with zeal and with capacity for hard labour’ (p. 33). Similar conclusions were reached by Cox (1926) in an analysis of the biographies of 301 eminent creators and leaders drawn from a larger sample compiled by J. M. Cattell (1903). Estimated IQ and Cattell’s rank order of eminence were only moderately related (r = .16) when reliability of data was controlled for. Rating geniuses on 67 character traits derived from Webb (1915), Cox concluded that holding constant estimated IQ, the following traits evident in childhood predicted lifetime achievement: ‘persistence of motive and effort, confidence in their abilities, and great strength or force of character’ (p. 218).

    …However, in the Terman longitudinal study of mentally gifted children, the most accomplished men were only 5 points higher in IQ than the least accomplished men (Terman & Oden, 1947). To be sure, restriction on range of IQ partly accounted for the slightness of this gap, but there was sufficient variance in IQ (SD = 10.6, compared with SD = 16 in the general population) in the sample to have expected a much greater difference. More predictive than IQ of whether a mentally gifted Terman subject grew up to be an accomplished professor, lawyer, or doctor were particular noncognitive qualities: ‘Perseverance, Self-Confidence, and Integration toward goals’ (Terman & Oden, 1947, p. 351). Terman and Oden, who were close collaborators of Cox, encouraged further inquiry into why intelligence does not always translate into achievement: ‘Why this is so, what circumstances affect the fruition of human talent, are questions of such transcendent importance that they should be investigated by every method that promises the slightest reduction of our present ignorance’ (p. 352).

    …The cross-sec­tional de­sign of Study 1 lim­its our abil­ity to draw strong causal in­fer­ences about the ob­served pos­i­tive as­so­ci­a­tion be­tween grit and age. Our in­tu­ition is that grit grows with age and that one learns from ex­pe­ri­ence that quit­ting plans, shift­ing goals, and start­ing over re­peat­edly are not good strate­gies for suc­cess. In fact, a strong de­sire for nov­elty and a low thresh­old for frus­tra­tion may be adap­tive ear­lier in life: Mov­ing on from dead­-end pur­suits is es­sen­tial to the dis­cov­ery of more promis­ing paths. How­ev­er, as Er­ic­s­son and Char­ness (1994) demon­strat­ed, ex­cel­lence takes time, and dis­cov­ery must at some point give way to de­vel­op­ment. Al­ter­na­tive­ly, Mc­Crae et al. (1999) spec­u­lated that mat­u­ra­tional changes in per­son­al­i­ty, at least through mid­dle adult­hood, might be ge­net­i­cally pro­grammed. From an evo­lu­tion­ary psy­chol­ogy per­spec­tive, cer­tain traits may not be as ben­e­fi­cial when seek­ing mates as when pro­vid­ing for and rais­ing a fam­i­ly. A third pos­si­bil­ity is that the ob­served as­so­ci­a­tion be­tween grit and age is a con­se­quence of co­hort effects. It may be that each suc­ces­sive gen­er­a­tion of Amer­i­cans, for so­cial and cul­tural rea­sons, has grown up less gritty than the one be­fore (cf. Twenge, Zhang, & Im, 2004).

    ↩︎
  51. “Fac­tors Affect­ing En­trap­ment in Es­ca­lat­ing Con­flicts: The Im­por­tance of Tim­ing”, Brock­ner et al 1982

    All sub­jects were given an ini­tial mon­e­tary stake and had the op­por­tu­nity to win more by tak­ing part in an en­trap­ping in­vest­ment sit­u­a­tion. In Ex­per­i­ment 1, half the sub­jects were pro­vided with a pay­off chart that made salient the costs as­so­ci­ated with in­vest­ing (High­-cost salience con­di­tion) whereas half were not (Low-cost salience con­di­tion). More­over, for half of the sub­jects the pay­off chart was in­tro­duced be­fore they were asked to in­vest (Early con­di­tion) whereas for the other half it was in­tro­duced after they had in­vested a con­sid­er­able por­tion of their re­sources (Late con­di­tion). En­trap­ment was lower in the High salience-Early than in the Low Salience-Early con­di­tion. How­ev­er, there was no differ­ence be­tween groups in the Late con­di­tion. In Ex­per­i­ment 2, the per­ceived pres­ence of an au­di­ence in­ter­acted with per­son­al­ity vari­ables re­lated to face-sav­ing to effect en­trap­ment. When the au­di­ence was de­scribed as ‘ex­perts in de­ci­sion mak­ing,’ sub­jects high in pub­lic self­-con­scious­ness (or so­cial anx­i­ety) be­came less en­trapped than those low on these di­men­sions. When the au­di­ence con­sisted of in­di­vid­u­als who ‘wished sim­ply to ob­serve the ex­per­i­men­tal pro­ce­dure,’ how­ev­er, high pub­lic self­-con­scious­ness (or so­cial anx­i­ety) in­di­vid­u­als were…­more en­trapped than lows. More­over, these in­ter­ac­tion effects oc­curred when the au­di­ence was in­tro­duced late, but not ear­ly, into the en­trap­ment sit­u­a­tion. Taken to­geth­er, these (and oth­er) find­ings sug­gest that eco­nomic fac­tors are more in­flu­en­tial de­ter­mi­nants of be­hav­ior in the ear­lier stages of an en­trap­ping con­flict, whereas face-sav­ing vari­ables are more po­tent in the later phas­es.

    …For ex­am­ple, in­di­vid­u­als may ‘throw good money after bad’ in re­pair­ing an old car, re­main for an ex­ces­sively long pe­riod of time in un­sat­is­fy­ing jobs or ro­man­tic re­la­tion­ships, or de­cide to es­ca­late the arms race (even in the face of in­for­ma­tion sug­gest­ing the im­prac­ti­cal­ity of all these ac­tions) be­cause of their be­lief that they have ‘too much in­vested to quit’ (Te­ger, 1980).

    ↩︎
  52. “Face-sav­ing and en­trap­ment”, Brock­ner 1981:

    En­trap­ping con­flicts are those in which in­di­vid­u­als: (1) have made sub­stan­tial, un­re­al­ized in­vest­ments in pur­suit of some goal, and (2) feel com­pelled to jus­tify these ex­pen­di­tures with con­tin­ued in­vest­ments, even if the like­li­hood of goal at­tain­ment is low. It was hy­poth­e­sized that en­trap­ment (i.e., amount in­vest­ed) would be in­flu­enced by the rel­a­tive im­por­tance in­di­vid­u­als at­tach to the costs and re­wards as­so­ci­ated with con­tin­ued in­vest­ments. Two ex­per­i­ments tested the no­tion that en­trap­ment would be more pro­nounced when costs were ren­dered less im­por­tant (and/or re­wards were made more im­por­tan­t). In Ex­per­i­ment 1, half of the sub­jects were in­structed be­fore­hand of the virtues of in­vest­ing con­ser­v­a­tively (Cau­tious con­di­tion), whereas half were in­formed of the ad­van­tages of in­vest­ing a con­sid­er­able amount (Risky con­di­tion). In­vest­ments were more than twice as great in the Risky con­di­tion. More­over, con­sis­tent with a face-sav­ing analy­sis, (1) the in­struc­tions had a greater effect on sub­jects with high rather than low so­cial anx­i­ety, and (2) in­di­vid­u­als with high so­cial anx­i­ety who par­tic­i­pated in front of a large au­di­ence were more in­flu­enced by the in­struc­tions than were in­di­vid­u­als with low so­cial anx­i­ety who par­tic­i­pated in front of a small au­di­ence. In the sec­ond ex­per­i­ment, the im­por­tance of costs and re­wards were var­ied in a 2 × 2 de­sign. As pre­dict­ed, sub­jects in­vested sta­tis­ti­cal­ly-sig­nifi­cantly more when cost im­por­tance was low rather than high. Con­trary to ex­pec­ta­tion, re­ward im­por­tance had no effect. Ques­tion­naire data from this study also sug­gested that en­trap­ment was at least par­tially me­di­ated by the par­tic­i­pants’ con­cern over the way they thought they would be eval­u­at­ed. The­o­ret­i­cal im­pli­ca­tions are dis­cussed.

    Disagreeing with Brockner 1981 on the social concern part is Karavanov & Cai 2007:

    The cur­rent in­ves­ti­ga­tion did not sup­port the find­ings from pre­vi­ous stud­ies that sug­gest that jus­ti­fi­ca­tion processes and face con­cerns lead to en­trap­ment. This study found that only in­ter­nal self­-jus­ti­fi­ca­tion and oth­er-pos­i­tive face con­cerns are re­lated to en­trap­ment, but in­stead of con­tribut­ing to en­trap­ment, these as­pects pre­vent in­di­vid­u­als from be­com­ing en­trapped. Per­sonal net­works were demon­strated to have pos­i­tive effect on both self- and oth­er-pos­i­tive face con­cerns, pro­vid­ing em­pir­i­cal sup­port for the value of us­ing per­sonal net­works as a pre­dic­tor of face goals. How­ev­er, per­sonal net­works did not con­tribute to en­trap­ment.

    ↩︎
  53. Heath takes the use of ‘budget accounting’—which can reduce total returns, as it did for subjects in his experiments: those who stuck it out and escalated their commitments earned $7.35 versus $4.84 for the budget-users—as often conflicting with normative standards. My own perspective is to wonder how much budget-making resembles writing down one’s justification for a particular probabilistic prediction, to be reviewed when one’s predictions are ultimately falsified.↩︎

  54. See, for ex­am­ple, Pala et al 2007 which in­ves­ti­gated whether helped peo­ple pre­vent sunk cost more than be­ing given “a list of im­por­tant fac­tors”. They did­n’t.↩︎