On Having Enough Socks

Personal experience and surveys on running out of socks; discussion of socks as a small example of human procrastination and irrationality, caused by a lack of explicit deliberative thought where no natural triggers or habits exist.
statistics, R, survey, Bayes, psychology, technology, decision-theory, design, insight-porn
2017-11-22–2019-06-12 finished certainty: possible importance: 4

After running out of socks one day, I reflected on how ordinary tasks get neglected. Anecdotally and in 3 online surveys, people report often not having enough socks, a problem which correlates with rarity of sock purchases and demographic variables, consistent with a neglect/procrastination interpretation: because there is no specific time or triggering factor to replenish a shrinking sock stockpile, it is easy to run out.

This reminds me of akrasia on minor tasks, ‘yak shaving’, and the nature of disaster in complex systems: lack of hard rules lets errors accumulate, without any ‘global’ understanding of the drift into disaster (or at least inefficiency). Humans on a smaller scale also ‘drift’ when they engage in System I reactive thinking & action for too long, resulting in cognitive biases. An example of drift is the generalized human failure to explore/experiment adequately, resulting in overly greedy exploitative behavior of the current local optimum. Grocery shopping provides a case study: despite large gains, most people do not explore, perhaps because there is no established routine or practice involving experimentation. Fixes for these things can be seen as ensuring that System II deliberative cognition is periodically invoked to review things at a global level, such as developing a habit of maximum exploration at first purchase of a food product, or annually reviewing possessions to note problems like a lack of socks.

While socks may be small things, they may reflect big things.

Socks possess the mysterious power, like cats, of vanishing; unlike cats, they don’t get hungry and come back. So I found myself one day in summer 2013 doing laundry a week early and wasting time schlepping back & forth solely because I had run out of socks entirely and couldn’t bear walking around in dirty socks. I suddenly realized that this was a ridiculous problem to have in an age awash with cheap textiles (so cheap that clothes must be shipped to Africa or incinerated lest the thrift stores burst at the seams), and immediately went on Amazon & bought a pack of 30 pairs to refill my ‘sockpile’.1 This made me curious: how many other people don’t have enough socks, and why not?

I began asking people if they thought they had enough socks, and quite a few would say that they didn’t, but that they hadn’t quite gotten around to buying more. (Although some insisted that buying more changed their lives forever.)

So I began running polls, and I am not alone.

Sock Surveys

An otherwise-unpublished Samsung sock survey finds that “Brits lose an average of 1.3 socks each month (and more than 15 in a year)”, implying an annual loss of ~8 pairs in the best-case scenario, where you either don’t need exact matches (because all socks are the same kind) or don’t mind mismatches.2 If each pair is unique and one sock goes missing from each, then in the worst case an annual loss of 15 individual socks implies one must buy another 15 pairs. This is in addition to losses from wear-and-tear or changes in the type of sock needed, which must also be made up for.
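To make the arithmetic explicit, a quick sketch in R (the loss rate is the Samsung survey’s; the best/worst-case logic is as described above):

```r
# Samsung survey: Brits lose ~1.3 individual socks per month.
lost_per_year <- 1.3 * 12        # ~15.6 individual socks annually

# Best case: all socks interchangeable, so every 2 lost socks cost 1 pair.
best_case_pairs <- lost_per_year / 2   # ~7.8, ie. ~8 pairs to replace
# Worst case: every pair is unique, so each lost sock orphans a whole pair.
worst_case_pairs <- lost_per_year      # ~15.6 pairs to replace

round(c(best = best_case_pairs, worst = worst_case_pairs))
```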

In a Twitter survey 2019-01-20–2019-01-27 of my followers, I asked:

  1. Do you have enough pairs of socks?

    • Yes: 64% (n = 689)
    • No: 37% (n = 405)
  2. How many pairs of socks do you have?

    • 0–10: 18% (n = 118)
    • 11–20: 46% (n = 302)
    • 21–30: 27% (n = 177)
    • 31+: 9% (n = 59)
  3. How often do you buy replacement socks?

    • Monthly: 2% (n = 15)
    • Semi-annually: 33% (n = 254)
    • Annually: 37% (n = 285)
    • Less or never: 28% (n = 216)
  4. Who buys your socks?

    • Me: 75% (n = 580)
    • Spouse/significant-other: 7% (n = 54)
    • Relative: 16% (n = 123)
    • Other: 2% (n = 15)

At least among my Twitter social circle, not having enough socks is common, and a fair number of people are on the verge of sock bankruptcy. The purchase details suggest an answer to why: most people are responsible for their own sock maintenance, but buy at perhaps less than an annual rate (a plurality buy ‘annually’, and the ‘semi-annually’ may be more than offset by the ‘less or never’ respondents); so it’s easy to forget and not buy socks.

Is socklessness concentrated among those who must buy their own socks & do so rarely? Twitter responses are independent and not linked by username (only the aggregate %s and total n are reported), so there is no way to see the intercorrelations from the responses.
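The problem can be seen in a toy example (made-up data, not from any survey): two datasets with identical aggregate percentages can have opposite intercorrelations, so the marginals alone can never answer the question:

```r
# 4 hypothetical respondents each; 1 = yes. In both datasets, 50% have
# enough socks and 50% buy rarely: the aggregate poll results are identical.
a <- data.frame(Enough = c(1,1,0,0), RareBuyer = c(0,0,1,1))
b <- data.frame(Enough = c(1,1,0,0), RareBuyer = c(1,1,0,0))
colMeans(a); colMeans(b)      # same marginals
cor(a$Enough, a$RareBuyer)    # -1: the sockless are exactly the rare buyers
cor(b$Enough, b$RareBuyer)    # +1: the sockless are exactly the frequent buyers
```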

To do that, I set up a Google Surveys (GS) survey on 2019-01-20 (CSV), asking all 4 questions in a single survey with n = 130 US responses costing $100. (This is more expensive than my usual trick of asking only 1 question, and costs $1/response rather than $0.10/response, but a set of 4 single-question surveys would have the same problem as the Twitter survey.) Eric Jorgensen also ran a version of the survey on a personality quiz website with an international audience (ODS/CSV), with n = 455. They have the same questions (with the exception of the sock-count question, where his survey asked for a numeric rather than ordinal response, so I convert it back to ordinal), so I pool them for analysis:

socks <- read.csv("https://www.gwern.net/docs/psychology/2019-01-20-gs-socks.csv")
socks <- subset(socks, select=c("Question..1.Answer", "Question..2.Answer", "Question..3.Answer", "Question..4.Answer"))
socks <- socks[socks$Question..3.Answer!="",] # rm NAs
socks <- socks[socks$Question..4.Answer!="",] # rm NAs
socks$Question..1.Answer <- socks$Question..1.Answer=="Yes"
socks$Question..2.Answer <- as.ordered(socks$Question..2.Answer)
socks$Question..3.Answer <- ordered(socks$Question..3.Answer, levels=c("monthly", "semi-annually", "annually", "less/never"))
socks$Question..4.Answer <- ordered(socks$Question..4.Answer, levels=c("me", "spouse or significant other", "relative", "other"))
socksI <- with(socks, data.frame(Enough=as.integer(Question..1.Answer), Count=as.integer(Question..2.Answer), Frequency=as.integer(Question..3.Answer), Purchaser=as.integer(Question..4.Answer)))

eric <- read.csv("https://www.gwern.net/docs/psychology/2019-01-21-eric-socksurvey.csv")
eric$Count <- as.integer(cut(eric$Count, breaks=c(-Inf,10,20,30,Inf), labels=c("0-10","11-20","21-30","31+"))) # bin numeric counts into the GS survey's ordinal buckets
ericI <- subset(eric, select=c("Enough", "Count", "Frequency", "Purchaser"))

socksAllI <- rbind(socksI, ericI)

## Descriptive:
# Skim summary statistics
#  n obs: 599
#  n variables: 4
# ── Variable type:integer
#   variable missing complete   n mean   sd p0 p25 p50 p75 p100     hist
#      Count       0      599 599 2.04 0.91  1   1   2   3    4 ▆▁▇▁▁▃▁▂
#     Enough       0      599 599 0.84 0.37  0   1   1   1    1 ▂▁▁▁▁▁▁▇
#  Frequency       0      599 599 2.76 0.87  1   2   3   3    4 ▁▁▇▁▁▇▁▅
#  Purchaser       0      599 599 1.63 0.96  1   1   1   3    4 ▇▁▁▁▁▂▁▁


## Bivariate correlations:
# Polychoric correlations
#           Enogh Count Frqnc Prchs
# Enough     1.00
# Count      0.29  1.00
# Frequency -0.23 -0.08  1.00
# Purchaser  0.16 -0.01  0.17  1.00
#  with tau of
#               1     2     3    4
# Enough    -0.99   Inf   Inf  Inf
# Count      -Inf -0.48  0.62 1.38
# Frequency  -Inf -1.60 -0.21 0.74
# Purchaser  -Inf  0.45  0.64 1.71

The GS respondents have less of an issue with sock shortages than my Twitter respondents (unsurprisingly), with 15% rather than 37% sockless, and the bivariate polychoric correlations3 make sense to me: count and having enough correlate strongly, of course, while rarer purchasing & greater purchaser distance predict fewer socks/more risk of not having enough.

What about joint relationships? brms conveniently supports ordinal predictors via “monotonic effects” in addition to supporting ordinal regression for ordinal outcomes, so there’s no problem modeling any of the variables in any combination; given the overlap of sock count & having enough, it doesn’t make much sense to use one as a predictor of the other (although extracting a factor might make sense). So to do regression from Frequency & Purchaser onto Enough & Count:

library(brms) # for brm() & mo()
brm(Enough ~ mo(Frequency) + mo(Purchaser), family="bernoulli", data=socksAllI)
# ...Population-Level Effects:
#             Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept       2.26      0.28     1.79     2.91       1851 1.00
# moFrequency    -0.93      0.37    -1.71    -0.24       1948 1.00
# moPurchaser    -0.58      0.39    -1.38     0.17       3365 1.00
# ...
brm(Count ~ mo(Frequency) + mo(Purchaser), family="cumulative", data=socksAllI)
# ...Population-Level Effects:
#              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept[1]    -1.03      0.17    -1.39    -0.72       2488 1.00
# Intercept[2]     0.77      0.17     0.41     1.06       2860 1.00
# Intercept[3]     2.17      0.20     1.75     2.56       3505 1.00
# moFrequency     -0.52      0.25    -1.02    -0.03       2558 1.00
# moPurchaser      0.02      0.29    -0.49     0.68       3142 1.00

While the parameterizations differ, the message remains the same: a fair number of people do not have enough socks (it’s not only me), and this particularly correlates with not frequently purchasing socks.


Incidentally, both the GS & Eric Jorgensen polls include some demographic data: estimated gender/age/location for GS, and ESL-speaker/country/gender for Eric Jorgensen. Those aren’t my main interest here, but how do they look?

One could make some predictions based on stereotypes: women will have more socks than men, older people will be more likely to have enough socks than younger people, and there will probably be cross-country differences. Checking, older people are indeed more likely to have enough, cross-country differences are not so large as to be inferable, and there appears to be an inconsistency in gender effects: men have more problems with socks in the US than internationally?

Jorgensen’s data first; because of the large number of countries, heavy regularization must be used:

library(psych) # for polychoric()
polychoric(subset(eric, select=c(Gender.Int, Enough, Frequency, Purchaser)))
# Polychoric correlations
#            Gnd.I Enogh Frqnc Prchs
# Gender.Int  1.00
# Enough      0.02  1.00
# Frequency  -0.01 -0.25  1.00
# Purchaser   0.20  0.24  0.15  1.00
brm(Enough ~ Gender + Country + mo(Frequency) + mo(Purchaser), family=bernoulli, prior=c(set_prior("horseshoe(1, par_ratio=0.05)")), control = list(max_treedepth = 15, adapt_delta=0.95), chains=30, iter=10000, data=eric)
# ...Population-Level Effects:
#             Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept       1.85      0.25     1.52     2.54      10770 1.00
# GenderMale      0.00      0.04    -0.07     0.07     131457 1.00
# CountryAU      -0.00      0.08    -0.10     0.08     113988 1.00
# CountryAZ       0.00      0.18    -0.10     0.12      95067 1.00
# CountryBE       0.01      0.18    -0.09     0.11      84197 1.00
# CountryBG       0.01      0.16    -0.08     0.13      62989 1.00
# CountryBO       0.00      0.16    -0.10     0.12      85775 1.00
# CountryBR       0.00      0.16    -0.09     0.11      80827 1.00
# CountryBS      -0.09      0.53    -1.28     0.05      29590 1.00
# CountryCA       0.00      0.08    -0.08     0.10     118251 1.00
# CountryCH      -0.02      0.26    -0.20     0.07      55249 1.00
# CountryCZ       0.01      0.18    -0.08     0.15      54813 1.00
# CountryDE       0.01      0.18    -0.07     0.16      72813 1.00
# CountryDK      -0.00      0.11    -0.11     0.09     111953 1.00
# CountryEE       0.00      0.15    -0.10     0.11     109200 1.00
# CountryES       0.01      0.17    -0.09     0.12      59172 1.00
# CountryFI       0.00      0.15    -0.10     0.11      90646 1.00
# CountryFR      -0.00      0.11    -0.11     0.09     115618 1.00
# CountryGB      -0.00      0.06    -0.09     0.08     101507 1.00
# CountryGR       0.01      0.18    -0.08     0.13      33231 1.00
# CountryHK       0.01      0.15    -0.09     0.12      73429 1.00
# CountryHR      -0.01      0.12    -0.13     0.08      98295 1.00
# CountryHU       0.01      0.17    -0.09     0.12      85208 1.00
# CountryID       0.00      0.10    -0.08     0.11      91982 1.00
# CountryIE      -0.01      0.15    -0.15     0.07      55181 1.00
# CountryIL      -0.04      0.28    -0.46     0.06      35464 1.00
# CountryIN      -0.02      0.13    -0.20     0.06      69378 1.00
# CountryIR      -0.02      0.25    -0.19     0.07      53285 1.00
# CountryIS      -0.03      0.29    -0.23     0.07      44140 1.00
# CountryIT       0.00      0.16    -0.10     0.11      71643 1.00
# CountryJE       0.00      0.16    -0.09     0.11      65529 1.00
# CountryJM       0.00      0.11    -0.09     0.11      91020 1.00
# CountryJP       0.00      0.10    -0.09     0.09     118871 1.00
# CountryKE       0.01      0.16    -0.09     0.12     104434 1.00
# CountryKR       0.01      0.16    -0.09     0.12      93687 1.00
# CountryLB       0.01      0.17    -0.08     0.14      51394 1.00
# CountryLT       0.01      0.15    -0.09     0.12      97039 1.00
# CountryMD       0.00      0.16    -0.09     0.11      79148 1.00
# CountryMK      -0.01      0.17    -0.16     0.07      14656 1.00
# CountryMM       0.00      0.15    -0.09     0.11     107831 1.00
# CountryMX       0.01      0.16    -0.08     0.12      84871 1.00
# CountryMY       0.01      0.19    -0.08     0.15      57119 1.00
# CountryNL       0.04      0.31    -0.05     0.48      44071 1.00
# CountryNO       0.00      0.10    -0.08     0.11     101486 1.00
# CountryNONE     0.03      0.28    -0.06     0.33      36810 1.00
# CountryPH      -0.01      0.13    -0.20     0.06      63309 1.00
# CountryPL       0.00      0.10    -0.10     0.10      97421 1.00
# CountryPT       0.01      0.17    -0.09     0.12      69037 1.00
# CountryQA       0.01      0.16    -0.09     0.11      66602 1.00
# CountryRO       0.01      0.16    -0.09     0.11      82836 1.00
# CountryRU       0.00      0.15    -0.09     0.11      99649 1.00
# CountrySA       0.01      0.17    -0.09     0.11      70890 1.00
# CountrySE      -0.00      0.08    -0.10     0.08     105770 1.00
# CountrySG       0.02      0.21    -0.06     0.20      48360 1.00
# CountryTR      -0.01      0.17    -0.16     0.07      56410 1.00
# CountryTT       0.01      0.17    -0.08     0.14      68940 1.00
# CountryUA       0.01      0.16    -0.08     0.13      60610 1.00
# CountryUS      -0.01      0.06    -0.14     0.05      73991 1.00
# CountryVE       0.01      0.17    -0.09     0.13      35998 1.00
# CountryVN       0.00      0.17    -0.09     0.11      54781 1.00
# CountryXK       0.00      0.16    -0.10     0.12     101146 1.00
# CountryZA       0.01      0.16    -0.08     0.13      59905 1.00
# moFrequency    -0.17      0.40    -1.40     0.02       7944 1.00
# moPurchaser    -0.01      0.08    -0.17     0.05      52232 1.00
# ...
brm(Count ~ Gender + Country + mo(Frequency) + mo(Purchaser), family=cumulative, prior=c(set_prior("horseshoe(1, par_ratio=0.05)")), control = list(max_treedepth = 15, adapt_delta=0.95), chains=30, iter=10000, data=eric)
# ...Population-Level Effects:
#              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept[1]    -0.91      0.31    -1.57    -0.33     103560 1.00
# Intercept[2]     1.07      0.32     0.42     1.68     105615 1.00
# Intercept[3]     2.68      0.35     1.98     3.37     113296 1.00
# GenderMale      -0.13      0.17    -0.50     0.16     125937 1.00
# CountryAU       -1.03      0.69    -2.48     0.08     124545 1.00
# CountryAZ       -0.53      1.12    -3.33     1.18     158180 1.00
# CountryBE        0.11      0.81    -1.54     1.95     215427 1.00
# CountryBG       -0.07      0.61    -1.45     1.20     224586 1.00
# CountryBO        0.12      0.80    -1.51     1.97     208130 1.00
# CountryBR       -0.55      1.13    -3.36     1.16     158105 1.00
# CountryBS        0.27      0.91    -1.41     2.47     194178 1.00
# CountryCA        0.98      0.57    -0.04     2.09      82595 1.00
# CountryCH       -0.51      1.10    -3.26     1.21     161968 1.00
# CountryCZ        0.37      0.74    -0.90     2.12     154331 1.00
# CountryDE        0.29      0.59    -0.73     1.68     156133 1.00
# CountryDK        0.62      0.73    -0.47     2.27     113453 1.00
# CountryEE       -0.60      1.15    -3.44     1.12     155992 1.00
# CountryES       -0.12      0.75    -1.84     1.38     219013 1.00
# CountryFI        1.27      1.56    -0.79     4.96     123980 1.00
# CountryFR       -0.17      0.55    -1.46     0.88     202919 1.00
# CountryGB        0.17      0.28    -0.33     0.81      90166 1.00
# CountryGR        0.02      0.62    -1.32     1.37     214245 1.00
# CountryHK       -0.96      1.24    -4.01     0.65     144472 1.00
# CountryHR        0.01      0.62    -1.34     1.37     222467 1.00
# CountryHU        0.08      0.80    -1.61     1.88     212960 1.00
# CountryID       -1.81      0.92    -3.80    -0.14     138380 1.00 # Indonesia
# CountryIE       -0.92      1.23    -3.93     0.67     145126 1.00
# CountryIL        0.22      0.62    -0.93     1.67     181101 1.00
# CountryIN       -2.25      1.38    -5.43    -0.09     138907 1.00 # India
# CountryIR        0.22      0.84    -1.40     2.21     194840 1.00
# CountryIS        0.05      0.80    -1.64     1.84     210993 1.00
# CountryIT        0.17      0.82    -1.47     2.07     206471 1.00
# CountryJE        1.27      1.56    -0.79     4.96     123390 1.00
# CountryJM        0.76      0.64    -0.23     2.10      86077 1.00
# CountryJP       -0.42      0.60    -1.82     0.52     156615 1.00
# CountryKE       -0.56      0.82    -2.53     0.69     164055 1.00
# CountryKR        0.07      0.76    -1.54     1.76     219641 1.00
# CountryLB       -0.32      0.65    -1.87     0.79     178249 1.00
# CountryLT        0.44      0.79    -0.86     2.31     157516 1.00
# CountryMD        0.77      1.12    -0.94     3.41     137492 1.00
# CountryMK       -0.27      0.77    -2.13     1.12     199321 1.00
# CountryMM       -0.46      1.08    -3.19     1.24     164795 1.00
# CountryMX        0.36      0.68    -0.78     1.96     158241 1.00
# CountryMY       -1.91      1.39    -5.10     0.06     133497 1.00
# CountryNL        0.49      0.46    -0.22     1.46      90797 1.00
# CountryNO        1.22      0.69    -0.02     2.56      91436 1.00
# CountryNONE     -0.66      0.60    -1.95     0.25     127296 1.00
# CountryPH       -0.25      0.51    -1.44     0.63     177716 1.00
# CountryPL        0.09      0.45    -0.82     1.11     171907 1.00
# CountryPT        0.92      0.98    -0.52     3.07     118487 1.00
# CountryQA       -0.46      1.09    -3.17     1.27     166932 1.00
# CountryRO        0.02      0.78    -1.70     1.71     216995 1.00
# CountryRU       -0.60      1.15    -3.45     1.12     154146 1.00
# CountrySA       -1.01      1.26    -4.07     0.61     142090 1.00
# CountrySE        0.59      0.53    -0.23     1.72      85612 1.00
# CountrySG       -0.67      0.71    -2.29     0.37     143570 1.00
# CountryTR        0.14      0.67    -1.21     1.69     214560 1.00
# CountryTT        0.65      0.87    -0.69     2.63     121131 1.00
# CountryUA       -0.59      0.84    -2.59     0.67     150001 1.00
# CountryUS        0.79      0.28     0.26     1.35      72791 1.00
# CountryVE        1.35      1.14    -0.35     3.72     101866 1.00
# CountryVN       -0.65      1.18    -3.56     1.09     153337 1.00
# CountryXK       -0.59      1.15    -3.44     1.12     150475 1.00
# CountryZA        0.01      0.62    -1.35     1.35     212747 1.00
# moFrequency     -0.89      0.36    -1.61    -0.19      96417 1.00
# moPurchaser     -0.06      0.26    -0.64     0.45     179817 1.00

Nothing of note emerges here, except perhaps a tendency for males to have fewer socks (albeit they appear to be content with fewer); there might be country-level effects, as even the horseshoe regularization doesn’t pull them tightly to zero, but there is far too little data to be confident in what the effects might be.

In the GS US survey data, there is only one country, of course, but in exchange an inferred age bracket is available:

socks <- read.csv("https://www.gwern.net/docs/psychology/2019-01-20-gs-socks.csv")
socks <- subset(socks, select=c("Question..1.Answer", "Question..2.Answer", "Question..3.Answer", "Question..4.Answer", "Gender", "Age"))
socks <- socks[socks$Question..3.Answer!="",] # rm NAs
socks <- socks[socks$Question..4.Answer!="",] # rm NAs

socks$Question..1.Answer <- socks$Question..1.Answer=="Yes"
socks$Question..2.Answer <- as.ordered(socks$Question..2.Answer)
socks$Question..3.Answer <- ordered(socks$Question..3.Answer, levels=c("monthly", "semi-annually", "annually", "less/never"))
socks$Question..4.Answer <- ordered(socks$Question..4.Answer, levels=c("me", "spouse or significant other", "relative", "other"))
socks <- socks[socks$Age!="Unknown" & socks$Gender!="Unknown",]

socksI <- with(socks, data.frame(Enough=as.integer(Question..1.Answer), Count=as.integer(Question..2.Answer), Frequency=as.integer(Question..3.Answer), Purchaser=as.integer(Question..4.Answer), Age=as.integer(Age), Gender=as.integer(Gender=="Male")))
# Skim summary statistics
#  n obs: 114
#  n variables: 6
# ── Variable type:integer
#   variable missing complete   n mean   sd p0 p25 p50 p75 p100     hist
#        Age       0      114 114 3.92 1.57  1   3   4   5    6 ▃▂▁▇▅▁▇▆
#      Count       0      114 114 2.34 0.94  1   2   2   3    4 ▃▁▇▁▁▅▁▂
#     Enough       0      114 114 0.79 0.41  0   1   1   1    1 ▂▁▁▁▁▁▁▇
#  Frequency       0      114 114 2.8  0.84  1   2   3   3    4 ▁▁▅▁▁▇▁▃
#     Gender       0      114 114 0.72 0.45  0   0   1   1    1 ▃▁▁▁▁▁▁▇
#  Purchaser       0      114 114 1.39 0.88  1   1   1   1    4 ▇▁▁▁▁▁▁▁
# Polychoric correlations
#           Enogh Count Frqnc Prchs Age   Gendr
# Enough     1.00
# Count      0.19  1.00
# Frequency -0.21  0.12  1.00
# Purchaser -0.23 -0.24 -0.25  1.00
# Age        0.17  0.08  0.09 -0.20  1.00
# Gender    -0.49 -0.14  0.11  0.55  0.26  1.00
brm(Enough ~ Gender + mo(Age) + mo(Frequency) + mo(Purchaser), family="bernoulli", data=socksI)
# ...Population-Level Effects:
#             Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept       2.42      1.13     0.28     4.70       2064 1.00
# Gender         -1.81      0.91    -3.82    -0.27       2802 1.00
# moAge           1.08      0.84    -0.52     2.81       2342 1.00
# moFrequency    -0.03      1.00    -1.87     2.10       2111 1.00
# moPurchaser    -0.99      0.75    -2.50     0.45       3191 1.00
brm(Count ~ Gender + mo(Age) + mo(Frequency) + mo(Purchaser), family="cumulative", data=socksI)
# ...Population-Level Effects:
#              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept[1]    -0.65      0.88    -2.21     1.20       2351 1.00
# Intercept[2]     1.45      0.90    -0.14     3.39       2295 1.00
# Intercept[3]     2.94      0.93     1.31     4.91       2392 1.00
# Gender           0.12      0.41    -0.72     0.91       5283 1.00
# moAge           -0.49      0.57    -1.60     0.65       3793 1.00
# moFrequency      1.44      0.92    -0.19     3.34       2553 1.00
# moPurchaser      1.35      0.74     0.02     2.86       4586 1.00

There are possible age effects in the expected direction; older people appear to be better at managing sock levels.

Curiously, there may be different gender effects in the two survey datasets: in the Jorgensen international survey, gender is largely inert (except for a correlation with Purchaser), while in the US GS survey, gender correlates with everything and men appear much less likely to have enough socks (but to have more socks). Poking at the data, there appears to be another connection: in the US, men are more likely to do their own sock purchasing. I wonder if this reflects a difference in sex roles, with women doing more clothing shopping in non-US countries and taking care of sock needs along the way?

Christmas advice

“What do you see when you look in the Mirror [of Erised]?”
“I? I see myself holding a pair of thick, woollen socks.”
Harry stared.
“One can never have enough socks”, said Dumbledore. “Another Christmas has come and gone and I didn’t get a single pair. People will insist on giving me books.”

J.K. Rowling, Harry Potter and the Philosopher’s Stone

Given that consistently >15% of respondents don’t have enough socks, and that in the US younger males are especially likely to not have enough, here’s some Christmas advice: if you don’t know what to buy someone, why not buy them some really good socks?

Socks make a great gift. Everyone will need replacement socks sooner or later, and it seems lots of people don’t get them. Unlike the feared ‘ugly sweater from Grandma’ present, they aren’t on public display, so if they’re ugly, it’s not too big a deal. Nor do they take up much space, and they can be used for more of the year. An annual gift of socks is about the optimal tempo, given the surveys about how often people lose or buy socks, and Christmas is an excellent Schelling point, since it’s already associated with socks. Finally, socks may be a cunning gift as they can be easily evaluated as superior, and so seem premium despite not costing all that much in absolute terms.4

Who Moved My Sock?

How had I run out of socks? Well, like the joke about going bankrupt, I did it one day at a time: a sock quietly disappearing one day, a sock being tossed out due to holes & thinning another day… At no point did I ever deliberately try to economize on socks or go without socks or explicitly think that it wasn’t worth the bother of picking up some socks next time I was in a clothing store or doing an Amazon order; it just happened on its own.

The Importance Of The Unimportant

In the case of socks, there is never a ‘Socknik moment’. There is only a slippery slope, a sorites: there is no hard and fast line between enough and too few socks; socks slowly wear out or lose mates, and if you had 20 and now have 19, well, that’s not a big deal, and then when you are down to 18, that’s not a big deal either (why go shopping?), and soon you’ll be down to 17… And if you don’t buy socks regularly as part of a clothes-shopping trip, when will you? Eventually you’re wearing uncomfortable socks or being cold or being forced to do laundry runs early, without there ever being a clear ‘I need to buy some socks!’ trigger point. Even a habit like buying replacement socks once a year as part of spring cleaning would be enough, but one still needs to instill the habit.
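The sorites can be made concrete with a toy simulation (the daily loss probability and starting pile size here are arbitrary illustrative assumptions, not estimates):

```r
# Toy model of sockpile drift: each day a pair is lost/worn out with small
# probability; no single day's loss ever looks like a reason to go shopping,
# but without a replenishment trigger the pile slides toward zero.
simulateSockpile <- function(days = 365*3, start = 20, pLoss = 0.02,
                             annualHabit = FALSE) {
    pairs <- start
    for (day in 1:days) {
        if (runif(1) < pLoss) { pairs <- max(0, pairs - 1) }
        # a 'spring cleaning' habit restocks the pile once a year:
        if (annualHabit && day %% 365 == 0) { pairs <- start }
    }
    pairs
}
set.seed(2019)
mean(replicate(1000, simulateSockpile()) == 0)                   # drift: often sockless
mean(replicate(1000, simulateSockpile(annualHabit = TRUE)) == 0) # habit: almost never
```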

Some might object that this is overthinking socks, and that one should never think about socks at all. This is short-sighted. If we were all perfectly rational and omniscient and possessed of infinite computing power, all our problems would already be solved and we would buy socks at the exact optimal moment as part of the grand plan; but we are not. Dealing with our bounded rationality is the central concern of all discussions of rationality & optimizing & biases.

It may not seem important to think about socks at any particular moment, and socks are probably not the most pressing thing at this instant for me either, compared to tasks like ‘write an essay’ or ‘exercise’ or ‘answer emails’. But if it is better to wear socks than not, and one does not wish to go barefoot for the rest of one’s life, then it must be optimal at some moment to think about socks. Perhaps a few months from now, when one’s ‘sockpile’ has worn down, during downtime; but there must be one.

Similarly, one could scoff at all the necessities of life like getting groceries, or filing a tax return, or getting life insurance: surely at that instant there is always something more important one could be working on, like getting a college degree or founding a startup? But this argument must have some flaw, or by induction you would never do them, and so you would starve to death while being audited by the IRS and your heirs rendered homeless. For example, the value of these tasks increases over time: you don’t really need to do your taxes early before the deadline, but you do want to get them done by the deadline. With groceries, as long as you have enough to eat, it’s not much of a problem to be low on food (perhaps it reduces your variety a bit, but it’s not like you’ll starve; except if you run out of food, in which case you will). And failure to get life insurance incurs a small loss each and every day (because of the risk of you dying that day and failing to provide for whatever you wanted life insurance for).

Further, one’s life is a complex system: one’s house, one’s career, one’s computer, all of these are complex systems with interacting, cascading failures. All complex systems (“How Complex Systems Fail”, Cook 2000) operate in a degraded mode, where minor errors must be regularly repaired in order to prevent a large-scale failure cascading through the whole system. When a steel furnace explodes, killing people, it doesn’t happen out of the blue, but reflects a long series of choices & gradually escalating issues & near-misses, and is a ‘normal accident’. When I lost weeks of time and money to a laptop & backup failure, it wasn’t because only one thing went wrong: it required at least 3 unusual failures simultaneously in my laptop & backup systems, any of which not happening would have prevented the full accident. Each slip may seem relatively minor and extraordinarily unlikely to have any serious consequences, but, like the “indifference of the indicator”, they add up over a lifetime and eventually a tail risk materializes. Chance disfavors the unprepared mind: time and chance happeneth to all, and indeed do many things come to pass.

Because failures interact and multiply, they resemble a log-normal distribution: each individual factor can block the accident, so the final damage of the outcome is the multiplication of the individual factors. The log-normal implies that a small systematic increase or decrease in each factor, analogous to being more careful & proactive in general about maintenance and risk, can cause a large difference in final outcome. One must expect the unexpected, and a failure to ‘sweat the small stuff’ means you are allowing brush to pile up in the forest: one match could set it ablaze. People who do not sweat the small stuff have a remarkable tendency to have ‘bad luck’ and somehow keep getting into trouble, much as the less intelligent suffer more ‘accidents’ or natural disasters have death tolls almost entirely determined by poverty: certainly, time & chance may happeneth to us all, but our preparations & reactions play an even greater role in determining how far things go. A lack of the bourgeois virtues is a lack of foresight, preparations, and reserves/insurance/slack. Consider how careless some people are in matters of everyday life.5 It’s not hard to see how such carelessness in, say, getting drunk and making rental payments can quickly escalate.
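A numerical sketch of the multiplicative point (the number of factors and the 10% improvement per factor are arbitrary assumptions):

```r
# Damage as a product of independent factors: sums of logs are ~normal,
# so the product is ~log-normal, with most mass small and a heavy tail.
set.seed(2019)
nFactors <- 10
damage <- replicate(10000, prod(runif(nFactors, 0.5, 1.5)))
# A small systematic improvement, shaving 10% off *every* factor,
# compounds into a ~65% reduction across the whole damage distribution:
careful <- damage * 0.9^nFactors
0.9^nFactors                          # ~0.35: each outcome shrinks to ~35%
median(damage);  median(careful)
quantile(damage, 0.99); quantile(careful, 0.99)
```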

‘Yak Shaving’ as a Failure Cascade

Seth Godin explains yak shaving as a story:

“I want to wax the car today.”

“Oops, the hose is still broken from the winter. I’ll need to buy a new one at Home Depot.”

“But Home Depot is on the other side of the Tappan Zee bridge and getting there without my EZ Pass is miserable because of the tolls.”

“But, wait! I could borrow my neighbor’s EZ Pass…”

“Bob won’t lend me his EZ Pass until I return the mooshi pillow my son borrowed, though.”

“And we haven’t returned it because some of the stuffing fell out and we need to get some yak hair to restuff it.”

And the next thing you know, you’re at the zoo, shaving a yak, all so you can wax your car.

Godin’s take-away is that yak shaving is misguided perfectionism: once one realizes one is yak shaving, one should decide “Don’t go to Home Depot for the hose. The minute you start walking down a path toward a yak shaving party, it’s worth making a compromise. Doing it well now is much better than doing it perfectly later.”

I interpret yak shaving entirely differently. At least when I feel I am trapped in yak shaving, it more often reflects a failure cascade in the complex system I am currently part of: either mentally I have gotten trapped in a local minimum and have failed to reflect periodically on what the best way is, or the system really is broken, and once the yak is shaved, it requires finding out how to fix the fundamental problems and how to prevent them from recurring.

I see ‘yak-shaving’ as a description of a situation where you are nested so deep in subgoals that you’ve forgotten your original goal, at which point a good heuristic is to wake up and say “this is a lot of yak-shaving!” and think about what is going on that has led to an undesirable situation.

Thinking about my own applications of the term, I think there are 3 different kinds of problems which can lead to yak-shaving: avoidance, lack of mindfulness, and cascading problems/system failures.

  1. you are procrastinating or being akratic or falling into perfectionism (closely related to procrastination), by deliberately overcomplicating something or trying to use fancy or shiny new techniques, which of course frequently lead to new subgoals because you aren’t familiar with them yet.

    This is fine sometimes (you have to learn those new techniques somewhen) or if it’s a kind of ‘structured procrastination’ (where the yak-shaving is itself valuable eg because it makes a neat blog post or useful software package), but often isn’t. The usual akrasia/procrastination equation stuff, except it’s being hidden under a gloss of superficial productivity. (“I can’t write my novel, I have to clean my desk which requires […solving 15 deeper nested issues…] which will take up all the rest of the day; I sure am a hard-working writer.”)

    By calling it yak-shaving, you admit you are faffing around and you then solve your problem the way you knew you should all along; or you can deal with why you are avoiding finishing, or whether you really want to do it at all. If you refuse to acknowledge the yak-shaving, then even if you ‘shave the yaks’ you’ll just find another way to overcomplicate things or a different thing to waste time on or switch to procrastinating on social media etc.

  2. you have been following a greedy strategy of taking the quickest option at each decision node; that you have now stacked up so many tasks to complete suggests that the greedy strategy has failed and you have fallen into a local pessimum.

    When this happens, it’s time to stop being so mindless, step back, think about it more globally, and ask if there’s some better approach. Was there some entirely different strategy which seemed too expensive compared to your current path (which has actually turned out to be far more costly than predicted) and now looks cheap? Or are there any intermediate middle steps which are expensive but cut out a large number of other steps? Or perhaps all the paths are so costly that the top-level goal now no longer looks worth bothering with, and you should drop all the existing tasks & stop shaving the yak entirely.

    Programmers are particularly susceptible to this because the line between useful automation and immensely complicated time-wasting tinkering is a fine one indeed. This can be common in programming where you can, say, build up a Rube Goldberg collection of shell scripts and Emacs functions and manual edits to text because you wanted to avoid writing a SQL function (because it would take 20 minutes of consulting the SQL documentation to get it right); but by the time you’re consulting the Bash FAQ or resetting IFS variables to deal with a problem half an hour later, it’s good to wake up and ask ‘am I yak-shaving?’—and then you might realize that the data or problem has turned out to be sufficiently painful (eg lots of special characters or oddity in data formatting) that you can’t catch all the special cases and you would’ve been better off writing the SQL query in the first place. In Godin’s example, perhaps one should simply return the pillow and hope the neighbor won’t notice the missing stuffing, or they will prefer to simply have it back rather than wait for you to fix it whenever, or it will simply upset them a little; or order the hose on Amazon even if it costs $5 more, to get it done; or pay the damn toll like anyone else; or finally, is waxing the car worthwhile at all (who notices)?

    Here ‘yak-shaving’ serves as a useful mental trigger which can break you out of the myopic problem-solving loop. This sort of yak-shaving is usually quite bad, and if you don’t break out of it soon enough, can lead to considerable exhaustion and waste of time, and lock you into bad long-term decisions. So it’s good to periodically ask, if you aren’t making progress on a problem of intrinsic interest to you, “so all this work, what’s it for anyway? If I were starting over from scratch—knowing what I do now—is this really how I would approach this problem?”

  3. what you are doing is the best way to solve the problem overall, it’s just that things have been going wrong and you’ve been running into continual problems, so you find yourself nested many layers deep dealing with the cascade of problems and documentation

    …all your (encrypted) backups are broken because you can’t get the most recent decryption key because your drive is corrupted because you were running the GPU 24/7 (to name a recent example of mine) so you’re in a LiveCD trying to mount the drive trying passwords trying…

    In this case, in addition to simply shaving the yak, you need to do root-cause analysis—you are experiencing what might be called a failure cascade—and in addition to figuring out how to solve each proximate problem on the way, figure out why they happened & how to prevent them in the future. In programming, this frequently entails filing bug reports & documentation patches, formalizing your recovery methods as scripts or programs, adding tests or redundancy or upgrading hardware, and writing post-mortems.

    So Godin’s stack of nested related problems here is simply a form of this. But here, simply shaving the yak may solve the fur problem & allow popping the stack of subgoals, but it’s not enough. It’s not enough to simply close those open loops, or have a system for recording open loops. Root-cause analysis is needed.

    Why did the yak fur fall out of the pillow in the first place and how can it be prevented ever again? Why didn’t he have his EZ Pass in the first place? Why wasn’t the hose put on the weekly shopping list (there is a shopping list, right?) and replaced long before? And so on.

    Without attacking problems at the root, you might as well buy a seasonal pass to the zoo, because you are merely applying bandaids to a complex system failing, and if you don’t do any root-cause fixes, eventually your problems will seriously stack up and you’ll find yourself hit by a so-called ‘perfect storm’ (actually perfectly foreseeable & inevitable) and then you’ll really be sorry.

So, ‘yak-shaving’ is a useful heuristic for keeping planning stacks from nesting too deeply, by periodically asking whether one is falling prey to one of those 3 failure modes and needs to break out of the yak-shaving by an appropriate countermeasure: interrogating the reasons for the akrasia; finding a better approach; or prioritizing fixing the root causes of needing to yak-shave (rather than focusing on the yak-shaving).

The Ur-Cognitive Bias

“I started eating with them [the chemists] for a while. And I started asking, ‘What are the important problems of your field?’ And after a week or so, ‘What important problems are you working on?’ And after some more time I came in one day and said, ‘If what you are doing is not important, and if you don’t think it is going to lead to something important, why are you at Bell Labs working on it?’ I wasn’t welcomed after that; I had to find somebody else to eat with!…In the fall, Dave McCall stopped me in the hall and said, ‘Hamming, that remark of yours got underneath my skin. I thought about it all summer, i.e. what were the important problems in my field. I haven’t changed my research’, he says, ‘but I think it was well worthwhile.’ And I said, ‘Thank you Dave’, and went on. I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles.”

Richard Hamming, “You and Your Research”
“A mule who has carried a pack for ten campaigns under Prince Eugene will be no better a tactician for it, and it must be confessed, to the disgrace of humanity, that many men grow old in an otherwise respectable profession without making any greater progress than this mule.”

Frederick the Great, “Thoughts on Tactics”

One problem here is that the unimportant becomes important, slowly and subtly. There is no IRS clock ticking on one’s wall, any more than there is a realtime display of one’s sockpile with defined red danger zones upon which one orders new socks.

For many things, there is never any hard deadline or scheduled event or reminder which would bring a need to mind. So necessary things suffer from what a computer scientist might call starvation: when a background task, like running a backup, which has a low priority (eg a backup can wait a few minutes without much risk), is continuously pushed out by higher-priority tasks and never gets to run; while it may not have been urgent that it run immediately, it is urgent that it run eventually. (Anyone who disagrees about backups not being important is free to implement that advice and see how it works for them in the long run.)

Starvation reflects bad planning: the priorities of starving tasks are not increased over time to reflect their urgency, or starving tasks may not be considered at all by a myopic planner. And for humans, ‘out of sight is out of mind’, so myopia is easy.
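Starvation, and its standard scheduling fix of ‘priority aging’, can be sketched in a few lines. This is an illustrative toy, not any real scheduler; the task names and priority numbers are made up:

```python
def run(tasks, steps, aging=0.0):
    """Toy priority scheduler: each step, run the task with the highest
    effective priority (base priority + aging * time spent waiting).
    With aging=0, a steady stream of high-priority work starves the
    low-priority background task forever."""
    waiting = {name: 0 for name in tasks}
    runs = {name: 0 for name in tasks}
    for _ in range(steps):
        chosen = max(tasks, key=lambda t: tasks[t] + aging * waiting[t])
        runs[chosen] += 1
        for t in waiting:
            waiting[t] = 0 if t == chosen else waiting[t] + 1
    return runs

tasks = {"answer-email": 10, "run-backup": 1}  # base priorities
print(run(tasks, 100, aging=0.0))  # {'answer-email': 100, 'run-backup': 0}
print(run(tasks, 100, aging=0.5))  # the backup now runs every ~20 steps
```

Calendar reminders and periodic reviews are the human equivalent of the aging term: saliency that grows with time since last attention, until the neglected task finally wins.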

Many human cognitive biases can be considered as reflections of a single ur-cognitive bias (Stanovich 2010, Decision Making and Rationality in the Modern World): a failure to activate difficult, deliberate, explicit System II thinking when appropriate, ‘waking up’ from the usual fast, frugal System I thinking, perhaps from time to time just to re-evaluate things. “Humans are not automatically strategic.” Instead, System I is always invoked, regardless of whether System II is needed, and the fast, frugal, cheap, reflexive thinking of System I takes over. When System I runs unimpeded, work tends to degenerate into what Google SRE terms “toil”; Beyer et al 2016:

…toil is the kind of work tied to running a production service that tends to be:

  • Manual
  • Repetitive
  • Automatable and not requiring human judgment
  • Interrupt-driven and reactive
  • Of no enduring value

One works hard, but that and a few bucks will get you a cup of coffee. Eliminating toil requires stepping back to take an outside view and possibly re-engineer things.

Of course System II can’t run all the time, any more than we can ponder every day whether today we should re-engineer our sock-buying system or buy more socks. We hardly ever do—but that’s not quite the same as never. It needs to run occasionally to check the fundamentals, to look for tasks starving in the background for lack of saliency, and to reflect on what is being done that ought not to be done at all, and consider entirely new alternatives.

I think apparent instances of ‘sunk cost’ are better described as thoughtlessness. To give an example: when chess players continue throwing pieces into a doomed position, is that because they explicitly realize it is doomed but feel they must persevere anyway, or is it due to the fact that chess amateurs commit more blunders than masters and don’t realize that the positions are in fact irretrievable? When one engages in spring-cleaning, one may wind up throwing or giving away a great many things which one has owned for months or years but had not disposed of before; is this an instance of sunk cost where you over-valued them simply because you have invested into holding onto them for X months, an instance of the endowment effect where something is more valuable because it’s yours (a bias which doesn’t change with additional investment)—or is this an instance of you simply never before devoting a few seconds to pondering whether you genuinely liked that checkered scarf, & if you haven’t worn it in years, how likely are you to ever wear it again? When we see an apparent sunk cost, might we not be seeing a well-developed habit which made sense when it was developed and perhaps has simply never been critically re-examined in the light of current circumstances? Habits are invaluable, but they are also invisible and indurate except at times of crisis when one is re-prioritizing things. Even in corporations, where sunk cost thinking is at its worst, many of the instances (eg the new CEO who radically overhauls the company by cutting products & divisions & employees) are often simply executing changes that the rest of the company knows are long overdue but could never quite rise to a priority without the Schelling point of a new CEO brought on to shake things up. (Or indeed, in general: “never let a crisis go to waste.”)

Few people persevere in a mistaken choice of college degree because they truly value the degree they have obtained irrationally more solely because they have already spent a lot of money on it, which is the classic ‘sunk cost bias’. Usually, it’s more that they are so busy with classes & student life & projects & hobbies that they don’t think about it; continuing with the original plan is the path of least reflection, the occasional stray thoughts of ‘maybe this is the wrong path’ are too painful to pursue more than briefly, and they have not sat down and pondered for even 5 minutes the costs/benefits or how well it’s been going, and seriously opened up internally to the possibility of quitting. One continues because one continues. Nor is there necessarily any point at which they will be forced to consider this before graduation, as college systems are geared to usher one from enrollment to graduation, and one doesn’t have to make an extraordinary effort at any point to continue on that path. (One does for graduate school, which is fortunate, considering how much student debt that can entail, but then the same dynamic will kick in once one is in grad school.) Or at what point does a commuter realize that the tradeoff isn’t that great? Any doubts may simply starve for lack of thought to feed them, until one day, one suddenly ‘wakes up’.

Finding New Socks

“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle—they are strictly limited in number, they require fresh horses, and must only be made at decisive moments…It is interesting to note how important for the development of science a modest-looking symbol may be.”

Alfred North Whitehead, An Introduction to Mathematics (1911)

Many of the best anti-bias mechanisms or ‘life hacks’ or ‘habits’ are about strategic application of our limited System II resources, often employing external systems to fight starvation.

The simplest wake-up mechanism is having a habit of occasionally reviewing the past, like reviewing one’s ledgers at the end of every month. The humble checklist, for example; or calendars, reminder or note-taking software, spreadsheets/double-entry ledgers, emails with timers, ‘lint’ tools, many ‘life hacks’ in general… I am heavily reliant on my calendar software to remind me to check in on various papers or people, do exports/backups which can’t be easily automated, update pages, and re-evaluate things periodically; in writing things, I have found it worthwhile to develop my own checklist, and am constantly expanding my writing linter, markdown-lint & my site build/sync script, with new errors to watch out for.

Such systems efficiently intervene only at critical moments, and systematically cover the available options to overcome System I inertia/forgetting: a checklist reminds one of every necessary step; poka-yoke error-proofing removes error cases or at least adds them to checklists; pointing-and-calling is a physical implementation of the mental process of checklisting; and time-based tools like calendars can be scheduled in advance to fire only at the critical moment, saving all the cognition from now to then. And sufficiently reliable automated tools can go one better and interrupt one, waking up System II, only if there is actually an error which needs to be fixed.
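The ‘interrupt only on error’ idea is trivial to mechanize. A sketch (the item names and red-line thresholds are invented for illustration): a periodically-run script scans an inventory and stays silent unless something has drifted below its red line, so deliberate attention is demanded only when there is actually something to review:

```python
def review(inventory, thresholds):
    """Return only the items below their red-line threshold;
    an empty result means nothing needs attention (and no interrupt)."""
    return {item: (count, thresholds[item])
            for item, count in inventory.items()
            if count < thresholds.get(item, 0)}

# Illustrative numbers: a sock stockpile needs replenishing long before zero.
inventory = {"socks (pairs)": 4, "undershirts": 9, "towels": 6}
thresholds = {"socks (pairs)": 8, "undershirts": 6, "towels": 4}

for item, (count, floor) in review(inventory, thresholds).items():
    print(f"LOW: {item}: {count} on hand, red line is {floor}")
```

Run under a scheduler (cron, a calendar reminder), this prints nothing in the common case; here only the socks trip their threshold, which is exactly the one alert worth waking up for.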


Underuse of System II particularly manifests as over-exploitation/under-exploration, where large potential improvements are foregone because of a lack of a habit or other systematic factor which would trigger exploration. (By exploration, I don’t mean spending hours reading reviews on Amazon or on social media, or reading yet another book on a topic, which is largely about feeding idle curiosity & is information super-stimuli, but actual experimentation and trying.)

One way to measure under-exploration is noting instances where exogenous randomization or destruction of the status quo option leads to permanent changes or net efficiency gains after the shock is removed, indicating learning, or that the status quo was suboptimal all along. (One area where under-exploration is especially rife is in randomized experiments in science, where what everyone ‘knows’ based on correlation frequently turns out to be wrong, yet despite the large implied regrets, it is still held to be ‘unethical’ to run more randomized experiments.)
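The cost of never exploring can be made concrete with a tiny explore-then-commit simulation. The option names and payoff numbers are invented; this sketches the explore/exploit tradeoff in general, not any particular product comparison:

```python
import random

random.seed(1)

TRUE_VALUE = {"default-brand": 1.0, "generic-A": 0.8, "generic-B": 1.5}

def payoff(option):
    """Noisy per-purchase utility centered on the option's true value."""
    return random.gauss(TRUE_VALUE[option], 0.25)

def habit(rounds):
    """Pure exploitation: buy the default forever, never sampling rivals."""
    return sum(payoff("default-brand") for _ in range(rounds))

def explore_first(rounds, samples=10):
    """Try every option a few times, then commit to the best observed."""
    total, scores = 0.0, {}
    for option in TRUE_VALUE:
        draws = [payoff(option) for _ in range(samples)]
        total += sum(draws)
        scores[option] = sum(draws) / samples
    best = max(scores, key=scores.get)
    total += sum(payoff(best) for _ in range(rounds - samples * len(TRUE_VALUE)))
    return total

# Over 1000 purchases, a 30-purchase experiment recoups its cost many
# times over whenever any alternative is meaningfully better.
print(habit(1000), explore_first(1000))
```

The downside of the experiment is bounded (a few below-par purchases up front), while the upside compounds over every subsequent purchase, which is the asymmetry Mullainathan describes below.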

Harvard economist Sendhil Mullainathan asks “Why Trying New Things Is So Hard to Do”, putting it well with a familiar example, grocery shopping:

I drink a lot of Diet Coke: two liters a day, almost six cans’ worth. I’m not proud of the habit, but I really like the taste of Diet Coke. As a frugal economist, I’m well aware that switching to a generic brand would save me money, not just once but daily, for weeks and years to come. Yet I only drink Diet Coke. I’ve never even sampled generic soda.

Why not? I’ve certainly thought about it. And I tell myself that the dollars involved are inconsequential, really, that I’m happy with what I’m already drinking and that I can afford to be passive about this little extravagance. Yet I’m clearly making an error, one that reveals a deeper decision-making bias whose cumulative cost is sizable: Like most people, I conduct relatively few experiments in my personal life, in both small and big things.

This is a pity because experimentation can produce outsize rewards. For example, I wouldn’t be risking much by trying a generic soda, and if I liked it enough to switch, the payout could be big: All my future sodas would be cheaper. When the same choice is made over and over again, the downside of trying something different is limited and fixed—that one soda is unappealing—while the potential gains are disproportionately large. One study estimated that 47% of human behaviors are of this habitual variety.

Yet many people persist in buying branded products even when equivalent generics are available. These choices are noteworthy for drugs, when generics and branded options are chemically equivalent. Why continue to buy a name-brand aspirin when the same chemical compound sits nearby at a cheaper price?

Grocery shopping is a great example because it is something everyone does, often, which represents a substantial portion of personal budgets, with clear & unambiguous costs, where the difficulty of experimentation is so minimal that it feels weird to even call activities like ‘compare prices & try different foods’ by a term as fancy as “experimentation”, and where the benefits of learning are large & can last decades. (Aldi isn’t going to suddenly become more expensive than Whole Foods, and the rank-ordering of prices remains relatively constant—that’s the whole point of having brands, after all.) Yet, we still don’t.

And the benefits are large. As Mullainathan notes, while the cost in a single instance may be small, the total loss (“regret”) is much larger because it is repeated across a lifetime. If you choose to drink Diet Coke and it costs +$0.25/can (let’s say the generic costs $0.75/can and Diet Coke $1/can, and if you dislike the generic you’ll throw it away), you haven’t lost $0.25, you have lost much more than that, because it is not a one-off decision about a single drink—you are buying information for all your future choices, and the “Value of Information” of the experiment is far higher than the trivial upfront cost.

Suppose you drink 1 Coke a day. The difference is $0.25/day, or $91 a year. The gain from switching does not stop after a year, it goes on indefinitely, so at a fairly psychologically normal discount rate of 5%, the net present value of the gain is $1,871. In order for your $0.75 experiment to not be profitable, you would have to assign a prior probability of <0.013% to the generic being as good (or better!) and you switching and reaping a gain of $1,871. Which would be crazy, because as Mullainathan also notes, everyone knows that often the generic version is fine, and indeed, frequently is literally the same as the brand name, either because they use the same manufacturers or because the seller is implementing price discrimination.
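These figures can be reproduced directly. The discounting convention that appears to match the numbers used here (and the -$983 bacon figure later) is annual savings divided by ln(1 + r), ie. treating the stream as a continuously-discounted perpetuity; a sketch:

```python
import math

def npv_perpetuity(saving_per_day, annual_rate=0.05):
    """NPV of a small daily saving continued indefinitely,
    discounted continuously: annual saving / ln(1 + r)."""
    return saving_per_day * 365.25 / math.log(1 + annual_rate)

npv = npv_perpetuity(0.25)   # the $0.25/can Diet Coke premium
print(npv)                   # ≈ 1871, the figure in the text

# Prior probability of 'generic works out' below which the one-can
# experiment fails to pay for itself in expectation:
breakeven = 0.25 / npv
print(breakeven)             # ≈ 0.00013, ie. the ~0.013% threshold
```

Using 365 instead of 365.25 days, or a slightly different rounding, moves the total by only a dollar or two; the conclusion that the experiment pays for itself under almost any sane prior is unaffected.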

And let’s not pretend that this is any great heroic effort, requiring advanced statistics or long-term experimentation or blinding. It takes a second to grab the generic soda from off the shelf next to the Diet Coke, and a few seconds later in the kitchen to try them side by side; are they about the same? Then great! You can enjoy the savings from buying generic thenceforth; otherwise, toss the generic soda. Either way, there’s no need to think about it further.

This applies as well to any other staples you might buy. Is King Arthur flour really worth paying twice as much as Gold Medal flour? (Not that I’ve ever noticed in my baking.) Perhaps if you tried all 6 kinds of applesauce you’d find one of the cheaper ones tastes better than the expensive ones. (I did. It doesn’t add sweetener, and I think most applesauces are oversweetened. I want it to taste like apples, not corn syrup.) Is ‘scrap bacon’ terrible in some way that makes costing half as much as regular bacon a lie? (Nope: tastes as delicious to me, and I can buy twice as much.) Can you tell the difference between the expensive imported Finnish/Irish butter and the generic Walmart butter? (I can… eating it straight while concentrating carefully. But I can’t on bread or anywhere I would use said butter.) And is Smucker’s “natural peanut butter” any better than your ordinary Jif or generic peanut butter? (Trick question—I actually think it tastes much better than regular peanut butter & that’s what I buy. But, I only know this because I tried them all; otherwise, I wouldn’t’ve bought something as weird-looking as peanut butter which still has its original peanut oil.)

Personally, I make a point of, whenever trying something new like a food, buying 1 of everything, to the extent possible, and simply trying them all. I am no longer surprised when I find that the generic is as good or better at a third or less of the cost (how on earth do brands maintain their profits when it’s so easy to compare?), or that I prefer something I didn’t expect to prefer. (Particularly in tea this has paid off, in learning about strange things like twig tea.) I think it’s crazy how people will buy the same thing forever and overspend on brand names, and, while they’re at it, never try another grocery store (switching to Walmart saved me >10%, and then switching to Aldi another >10%), and pass up bulk savings to buy the smallest possible quantities. And then they complain their monthly grocery bill is $400 and they wonder where all the money goes… It is wasteful to not be wasteful.

If we so often under-explore in groceries, we surely under-explore elsewhere too. What can help ameliorate this is deliberate forcing of exploration. With groceries, my rule of buying multiples the first time is a simple easily-implemented heuristic to force exploration of grocery options. With music, I try to avoid my tastes ‘freezing’ into whatever I listened to as a teenager by listening to large musical dumps rather than recommendations (eg compilations), and avoiding the bandwagon effects of popular media. With research, systematic reading of all papers on a given topic rather than the most-cited ones can lead to many interesting but still obscure papers.

We can try to compensate for our lack of mindfulness in other areas too. With socks, my new heuristic is to expand my annual photographic inventory of my personal possessions (making a record of everything I own in case of disaster) to include clothes too; in considering my clothes, I expect that I will notice when I get low on socks—or any other kind of clothing—and can take action before too many years pass and my sockpile becomes inadequate. I will surely discover other inadequacies in the future, but, if I am mindful of my limits, fewer and fewer, and they will get less in the way of more important things.

See Also


Grocery shopping advice

To expand on the topic of experimenting & grocery shopping, I would summarize good grocery shopping as involving (in descending order of marginal returns): advance planning to select efficient targets, selection of grocery stores by total cost (including travel time), selection of the cheapest version (experimentation up front, then selecting by unit cost), avoiding grocery store trickery like coupons, and using assistance like a standard grocery store shopping list to maintain correctness of decisions.

  1. plan recipes ahead to avoid impulse shopping and food wastage while puzzling over how to eat something. Resources on frugal cooking are everywhere and you can find tons of advice on eg cooking soup or stew. You should emphasize minimally processed goods which are commodities and so cheap, with fewer layers of bogus product differentiation/overhead/advertising.

    Frozen vegetables, for example, are underrated in terms of both convenience (why spend that time cleaning & chopping when industrial machines are so much more efficient at it?) and also cost, as they don’t go bad; and depending on the vegetable & time of year, can easily be better-quality as well. (Carrots are a standout case: you can go through the hassle of cleaning, peeling, and chopping a sad bowl of fresh carrots which you must eat before they go bad, or you can eat some delicious pre-chopped carrots whenever you feel like pulling them out of your freezer, possibly a year later.)

    (I wouldn’t take ‘health’ too seriously as a criterion. Diet/nutrition research is one of the worst fields in all of medicine. Don’t sacrifice your quality of life now for some small late-life QALYs which may not exist at all.)

  2. investigate all local groceries. The average price can differ considerably between stores.

    In my own feasible shopping area, I have Walmart, Target, Shoppers, Aldi, Giant and some others (BJ’s is the major alternative, but I’ve never been convinced I would be able to buy enough to benefit). When I switched from NEX to Walmart, I saved a good 10%; when I switched (most of) my shopping to Aldi, I saved another good 10%. (The cost savings had I started with Whole Foods or Harris Teeter hardly bear thinking on.) There are some disadvantages to shopping at Aldi (more restricted selection, disorganized store, having to remember to bring a quarter for the shopping carts) but saving $20 or $30 is a good salve for the annoyances. It may take some time to get familiar with a store (I take about an hour to thoroughly walk through a store, looking at where everything is and noting prices for things I often buy), but consider the Value of Information: if you spend 2 or 3 hours to find a new grocery store and save 10–20%, that’s a savings of easily $120+ a year, for a NPV of something like $2k. And there’s not that many to check.

  3. in choosing a grocery store and what to buy, remember also the costs of travel and time spent shopping.

    The goal is to get your groceries for a total cost which minimizes money, time, and effort. Every second spent shopping is a waste—certainly I don’t particularly enjoy it. The cost of driving to a store is somewhere around $0.10–$0.50 per mile, and then there is the risk of accidents and your own time; adding up the mileage and time, I get ~$15 per grocery trip. This is a substantial fraction of the total cost of my groceries, and so I keep that in mind when planning: I shop once a month, stocking up as much as possible. I’d much rather make one trip to buy a lot of food at $120+$15=$135 than two trips at $60+$60+$15+$15=$150! (In this respect, Aldi is a wash for me: I have to spend somewhat longer driving to it, but it’s so much more compact and tiny that I spend much less time walking around it and checking out.) Travel time is also why it makes a lot of sense to occasionally buy from the local dollar store about 3 minutes away—when a single trip costs $15, then even if a bottle of ketchup or whatever costs twice as much as at Aldi, it’s still a lot cheaper. (Although if you find yourself resorting to that too often, it suggests you are making mistakes further upstream.)

  4. in buying a specific ingredient, always start with the *unit* cost.

    Many foods keep a long time and you can easily make use of a larger quantity. It’s somewhat unusual for something to be too big to buy, a bad idea due to spoilage or opportunity cost: usually something either perishable, like fruit, or ridiculously long-lasting and more expensive in opportunity cost than up front. (A few months ago, I finished off a bottle of molasses which dated, as best as I could infer from the copyrights on the label, from ~1995; it would not be a good idea to buy a big bottle of molasses if you only use it once in a while like I do, for baking rye bread.)

  5. when buying a new ingredient, start with the generic.

  6. If you have doubts about buying generic, test it: require the much more expensive brand-name goods to justify their existence.

    My preference is to take into account Value of Information: by the same logic as choosing groceries, rejecting a cheap generic food in favor of an expensive one is an expensive mistake as you incur it indefinitely.

    One of my pet peeves is how much money people waste on brand-name goods rather than defaulting to generics or off-brands, when there is rarely a noticeable taste difference to me. So my suggestion is that whenever you try something new, buy 1 of everything and try them out side by side to see what you like and if the brand-name quality can possibly justify paying so much more. I’ve done this with butter, milk, applesauce, cereal, bacon, sausage, mustard, ice cream, etc. It baffles me how few people apparently take advantage of this—like at Walmart, the ‘irregular bacon’ tastes literally identical to the regular bacon and yet is always almost half-price per ounce! Half! If I spend $8 a month rather than $4 on bacon, that’s a NPV of −$983. Quite an expensive mistake to make over a lifetime.
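    The bacon arithmetic, using the same indefinite-stream NPV approximation (5% discount rate assumed):

```python
from math import log

extra_per_month = 8 - 4              # brand-name vs 'irregular' bacon
annual_extra = extra_per_month * 12  # $48/year thrown away
npv = annual_extra / log(1 + 0.05)   # indefinite-stream approximation
print(npv)  # ~983.8: the ~$983 lifetime cost quoted above
```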

    I don’t advise reinforcement learning-style approaches like Thompson sampling. Why? Because the VoI for testing all options is so high, you can sample them all simultaneously (making it more of a multiple-play MAB); there are large cognitive costs to maintaining options (the point is to get in and out as fast as possible, remember, to minimize time-cost), so each sample has a fixed cost (which is ignored in the usual MAB formulation, where it’s assumed you have to choose each round anyway); in my experience sampling foodstuffs, not many things are ‘acquired tastes’ where multiple tastes will yield a different result; and there is not that much noise in taste comparisons of this sort. Typically, I try something and I can immediately tell that the generics/brand-name are equivalent or which one is much superior. (And if the difference is subtle, then it doesn’t matter much, but typically the price difference is not subtle.) When there is no noise, the EV is highly positive, and you can take multiple actions simultaneously, a Thompson sampling or sequential testing approach merely incurs unnecessary regret and complexity compared to a single-trial decision approach.

    So it’s best to do a single precise test of all available contenders, and then buy the top-ranked item from then on without thinking about it further. Does the optimal buy change? Maybe, but food prices are fairly stable in a relative rank-order sense (eg when bacon spiked in price ~2017, all the bacons did simultaneously, so I wound up buying the exact same discount bacon, but less), so the decisions don’t seem to need to be revisited often. Even if the information decays, the tests are still worth running because aside from learning about the specific food type you’re testing, you benefit from getting an idea of the general range of variation in food taste/quality and how much a brand-name is worth (ie ‘little’).
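    The single-trial decision procedure is simple enough to write down; a toy sketch with hypothetical, noiseless taste scores and prices (all numbers invented for illustration):

```python
# Hypothetical, noiseless taste scores and unit prices (illustrative only):
tastes = {"generic": 7.0, "off-brand": 7.0, "name-brand": 7.2}
prices = {"generic": 2.00, "off-brand": 2.20, "name-brand": 4.00}

# Single-trial decision: buy one of each once, taste side by side,
# then commit to the best taste-per-dollar item from then on.
value = {item: tastes[item] / prices[item] for item in tastes}
winner = max(value, key=value.get)
print(winner)  # "generic": no repeated bandit-style sampling needed
```

    With zero observation noise, one side-by-side comparison fully reveals the ranking, so any further exploration is pure regret.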

  7. skip coupons and sales.

    They are negative-sum games once you account for the time wasted sorting through the gimmicks and all the options; they are intended to get you to buy things you didn’t want in the first place, even at the discounted price; even occasional mistakes will wipe out the savings; and they discourage experimentation and comparison (you wouldn’t want to buy the other applesauce which you don’t have a coupon for, would you? why, that would be a ripoff!). Worse, they are by definition ephemeral (making any effort spent on them “toil”), so your gained knowledge and effort immediately becomes worthless, as compared to stable long-term knowledge like which grocery store is cheapest, where all items are located, or which generics to buy. Like credit card churning or frequent-flier miles, they should be avoided as traps. Life is far too short.

  8. Grocery lists should be kept regularly and reused as templates to avoid forgetting about important things or indulging in impulse or spree purchasing.

  9. Tracking expenditures can be helpful in finding categories of food which have been getting imbalanced spending and reviewing enjoyment/$ tradeoffs.

  10. After evaluating stores, learning where items are, finishing taste comparisons, picking recipes, and making template lists, the whole process shouldn’t occupy more than an hour or so a month: you take your template, modify it slightly for current recipes, drive there, dash in for the prespecified items, buy those, and get out.

There are doubtless further optimizations which could be made, but by that point, I believe they truly are in the realm of ‘overthinking it’, and one should move one’s scarce deliberative capacity onto other topics (like career planning).

To recap:

  1. plan sensible cheap meals
  2. find the cheapest local grocery store
  3. buy as rarely as possible, in bulk, and generic (unless a brand-name food is proven in taste-testing to be superior); get in and out and don’t be tempted.

In terms of optimizing, keep in mind the Pareto principle: quantitatively, I think the biggest wins come in this order:

  1. choice of foods (<10x difference in cost)
  2. generic vs brand-name (1–3x)
  3. choice of grocery store (<=1.3x)
  4. buying bulk (1–1.5x)
  5. location and frequency of visits (1–1.1x)
  6. in-store shopping efficiency (1–1.05x)

  1. All the same type, of course, because who wants to spend time matching up socks? Have as few types as possible so you can throw them in a drawer or something.↩︎

  2. Where do all those missing socks go? Samsung’s sock survey/interviews suggest no particular reason:

    There are many practical reasons for sock loss rather than supernatural disappearances. Research interviews found the common causes included items falling behind radiators or under furniture without anyone realising, stray items being added to the wrong coloured wash and becoming separated from its matching sock, not being secured to a washing line securely so they fall off and blow away—or they are simply carelessly paired up.

    To which I would add: in multi-person households, socks have a tendency to migrate to other people’s rooms (accidentally or not), flowing along a sock gradient. (I lost a lot of socks to my brother. I know because we labeled them with markers and I’d regularly find them in his drawer.) Sometimes they get physically lost in the dryer. In cluttered households, it’s easy for a sock to fall out of the dryer or the basket when you’re moving a big load, or fall behind drawers/beds and get lost there. Pet animals can steal them: I’ve seen ferrets making off with socks to hide in corners (or behind the dryer), and cats supposedly often have a fondness just for socks & woolens (“wool sucking”). And in some cases, there may be things man was not meant to know.↩︎

  3. We can’t use Pearson’s r because these responses are categorical, not continuous numbers. (While most survey software supports free response or continuous numbers, they are typically a bad idea because people will take any chance they get to feed in bad data or wild responses.) To handle them properly, I use the polychoric correlation. Polychoric correlations handle ordinal data by assuming a latent normal variable which is discretized as the observed variable (similar to the liability-threshold model), and, to correlate 2 ordinal variables, ask what the correlation of the 2 latent normal variables is, which is then the same as the familiar r. (It’s more principled than the common approach of simply turning the ordinal scale into integers and using them as-is, anyway.)

    In this case, sock count and sock purchase frequency might not be normally distributed, but I’m happy to assume there’s a latent variable; the purchaser variable is more questionable, but I think the options can be ranked by a kind of ‘social distance’ and it’s reasonable to use the polychoric here as well.↩︎
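    To illustrate the latent-variable idea behind the polychoric correlation in footnote 3, a small simulation (the cutpoints and latent r are arbitrary choices): discretizing a latent bivariate normal into ordinal categories and naively correlating the integer codes attenuates the true correlation, which is what the polychoric estimator corrects for.

```python
import numpy as np

rng = np.random.default_rng(0)
r_latent = 0.5
n = 100_000

# Two latent normal variables with a known correlation:
cov = [[1, r_latent], [r_latent, 1]]
x, y = rng.multivariate_normal([0, 0], cov, size=n).T

# Discretize each into a 4-point ordinal scale, as a survey question would:
cuts = [-1.0, 0.0, 1.0]
ordinal_x, ordinal_y = np.digitize(x, cuts), np.digitize(y, cuts)

# Naively treating the ordinal codes as integers attenuates the correlation;
# a polychoric estimator instead recovers r_latent from the cross-tabulation.
naive_r = np.corrcoef(ordinal_x, ordinal_y)[0, 1]
print(naive_r)  # noticeably below the true 0.5
```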

  4. Realistically, how much could a top-notch pair of wool socks cost? $30 a pair? If you bought 5 pairs at $150 for a recipient who needs them, that would amaze them; on the other hand, if you bought them, say, a laptop at the same cost, it’s probably going to be a crummy laptop they don’t want to use and they’ll feel a mix of anger & guilt at being given a white elephant.↩︎

  5. I am thinking of examples like the items on the DOI/A-DMC scales, such as: locking keys in one’s car/throwing out groceries which went bad/never wearing purchased clothes/missing an airplane flight/getting an STD/overdrawing an account/declaring bankruptcy.↩︎

  6. Translated on pg47, “The Sovereign and the Study of War”, Frederick the Great on the Art of War, Jay Luvaas 1966/1999 ISBN 0-306-80908-7; Luvaas cites it to “Réflexions sur la tactique et sur quelques parties de la guerre, ou Réflexions sur quelques changements dans la façon de faire la guerre”, Oeuvres 28, pg153–154 [Réflexions sur la tactique], of Oeuvres de Frédéric le Grand (30 volumes, 1846–1856).↩︎

  7. But not all toil can or should be eliminated, as the attempt to do so can all too easily backfire & waste time on net, in a version of Brian Kernighan’s quote: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” Similarly, I avoid sock subscription services, because they are expensive and I expect that the hassle of dealing with them—changes in terms, regular deliveries, refunds or billing problems, services closing etc—exceeds any supposed convenience or time-savings.↩︎

  8. Not falsifying one’s own beliefs or moves is the natural habit of humans, and real effort is reserved for the beliefs of other people and especially enemies—simply imagining that one’s belief is held by an imaginary friend aids falsification! See Cowley & Byrne 2005, “When Falsification is the Only Path to Truth”; for extensive background on the theory that reason is principally about arguing & persuasion, see Mercier & Sperber 2010’s “Why do humans reason? Arguments for an argumentative theory”.↩︎

  9. An interesting example is afforded by the NY Times’s “How Companies Learn Your Secrets”:

    …two colleagues from the marketing department stopped by his desk to ask an odd question: “If we wanted to figure out if a customer is pregnant, even if she didn’t want us to know, can you do that?” As the marketers explained to Pole—and as Pole later explained to me, back when we were still speaking and before Target told him to stop—new parents are a retailer’s holy grail. Most shoppers don’t buy everything they need at one store. Instead, they buy groceries at the grocery store and toys at the toy store, and they visit Target only when they need certain items they associate with Target—cleaning supplies, say, or new socks or a six-month supply of toilet paper. But Target sells everything from milk to stuffed animals to lawn furniture to electronics, so one of the company’s primary goals is convincing customers that the only store they need is Target. But it’s a tough message to get across, even with the most ingenious ad campaigns, because once consumers’ shopping habits are ingrained, it’s incredibly difficult to change them. There are, however, some brief periods in a person’s life when old routines fall apart and buying habits are suddenly in flux. One of those moments—the moment, really—is right around the birth of a child, when parents are exhausted and overwhelmed and their shopping patterns and brand loyalties are up for grabs. But as Target’s marketers explained to Pole, timing is everything. Because birth records are usually public, the moment a couple have a new baby, they are almost instantaneously barraged with offers and incentives and advertisements from all sorts of companies. Which means that the key is to reach them earlier, before any other retailers know a baby is on the way.

    Specifically, the marketers said they wanted to send specially designed ads to women in their second trimester, which is when most expectant mothers begin buying all sorts of new things, like prenatal vitamins and maternity clothing. “Can you give us a list?” the marketers asked. “We knew that if we could identify them in their second trimester, there’s a good chance we could capture them for years,” Pole told me. “As soon as we get them buying diapers from us, they’re going to start buying everything else too. If you’re rushing through the store, looking for bottles, and you pass orange juice, you’ll grab a carton. Oh, and there’s that new DVD I want. Soon, you’ll be buying cereal and paper towels from us, and keep coming back.”…As the ability to analyze data has grown more and more fine-grained, the push to understand how daily habits influence our decisions has become one of the most exciting topics in clinical research, even though most of us are hardly aware those patterns exist. One study from Duke University estimated that habits, rather than conscious decision-making, shape 45% of the choices we make every day, and recent discoveries have begun to change everything from the way we think about dieting to how doctors conceive treatments for anxiety, depression and addictions.

    Habits are thoughtless:

    …The first time a rat was placed in the maze, it would usually wander slowly up and down the center aisle after the barrier slid away, sniffing in corners and scratching at walls. It appeared to smell the chocolate but couldn’t figure out how to find it. There was no discernible pattern in the rat’s meanderings and no indication it was working hard to find the treat. The probes in the rats’ heads, however, told a different story. While each animal wandered through the maze, its brain was working furiously. Every time a rat sniffed the air or scratched a wall, the neurosensors inside the animal’s head exploded with activity. As the scientists repeated the experiment, again and again, the rats eventually stopped sniffing corners and making wrong turns and began to zip through the maze with more and more speed. And within their brains, something unexpected occurred: as each rat learned how to complete the maze more quickly, its mental activity decreased. As the path became more and more automatic—as it became a habit—the rats started thinking less and less. This process, in which the brain converts a sequence of actions into an automatic routine, is called “chunking”. There are dozens, if not hundreds, of behavioral chunks we rely on every day. Some are simple: you automatically put toothpaste on your toothbrush before sticking it in your mouth. Some, like making the kids’ lunch, are a little more complex. Still others are so complicated that it’s remarkable to realize that a habit could have emerged at all…What Graybiel and her colleagues found was that, as the ability to navigate the maze became habitual, there were two spikes in the rats’ brain activity—once at the beginning of the maze, when the rat heard the click right before the barrier slid away, and once at the end, when the rat found the chocolate.

    Those spikes show when the rats’ brains were fully engaged, and the dip in neural activity between the spikes showed when the habit took over. From behind the partition, the rat wasn’t sure what waited on the other side, until it heard the click, which it had come to associate with the maze. Once it heard that sound, it knew to use the “maze habit”, and its brain activity decreased. Then at the end of the routine, when the reward appeared, the brain shook itself awake again and the chocolate signaled to the rat that this particular habit was worth remembering, and the neurological pathway was carved that much deeper.

  10. To balance review with execution, one could allocate a fixed percentage of time at multiple time scales for review & meta-review: at each level, allocate X%, which converges to a finite bounded total percentage across all meta levels. For example, if one allocates 5% for regular review, one would spend 1 ‘meta’ day for every 20 days of work; and 1 ‘meta-meta’ day for every 400 days of work; and 1 ‘meta-meta-meta’ day every 8,000 days… (Or: monthly, yearly, bidecennially.) This is loosely analogous to .↩︎
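    The meta-review budget in footnote 10 is just a geometric series; a quick check in Python:

```python
x = 0.05  # fraction of time spent on review at each meta level

# 1 'meta' day per 20 working days, 1 'meta-meta' day per 400, 1 per 8,000...
per_level = [x ** k for k in range(1, 6)]
total_overhead = x / (1 - x)  # geometric-series limit: X + X^2 + X^3 + ...
print(total_overhead)  # ~0.0526: all meta levels together cost ~5.3% of time
```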

  11. If life hacks like never reach fixation, perhaps that represents an inverse .↩︎

  12. Some interesting links on local optima/greediness/risk-aversion (eg the “” or the )

    An amusing fictional example might be the My Little Pony episode “Applejack’s ‘Day’ Off”. An interesting paper, although not clearly establishing suboptimal exploration, is , which uses an extremely large dataset of restaurant orders by individuals from ; also interesting is Alessandretti et al 2016, which finds that people visit few physical locations, and while they do continuously explore new locations & shift, there appears to be an equivalent of a , suggesting limits to human abilities to easily plan/remember activities covering more than ~25 locations regularly.↩︎

  13. Using the usual approximation for the NPV of an indefinite discounted income stream: NPV = c / ln(1 + r), where c is the annual amount and r the discount rate (5% here).↩︎

  14. As much fun as I find running blinded experiments like , I acknowledge it may not be many other people’s idea of fun.↩︎

  15. I am thinking particularly of the Yahoo experiments: Salganik et al 2006/Salganik & Watts 2009.↩︎

  16. Photographic inventories are sometimes suggested for renters as part of renters insurance, in order to claim reimbursement from the insurer, but I originally started doing photographic inventories as a disaster-preparedness thing. Where I live, hurricanes and flooding are serious concerns: a few years after I moved in, I had to install insulation under the floor of my bedroom because the previous insulation had been washed out by flooding from a hurricane several years previously. I also came uncomfortably close to being flooded out by high tide during another hurricane. In addition, the electrical wiring in this place was done by an amateur like 50 years ago. And I’m also in the evacuation zone of a nuclear power plant. All things considered, I decided it was a good idea to put together an evacuation kit with iodine pills, food bars etc, a fire safe, remote Internet backups, and take photos of everything else to assist in disaster recovery. Just in case.

    More immediately, I find it to be useful during spring cleaning. When you photograph everything, it forces you to say mentally ‘oh, this thing! when was the last time I used it, anyway? This turned out to be a waste; maybe someone else would find it more useful’ or reminds you to use it for something.↩︎