About Gwern

Who am I online & what have I done? Contact information; sites I use; computers and software tools; things I’ve worked on; psychological profiles
Haskell, personal, psychology, survey
2009-08-05–2020-06-14 · finished · certainty: highly likely · importance: 3

This page is about me; for information about Gwern.net, see .


“A transition from an author’s book to his conversation, is too often like an entrance into a large city, after a distant prospect. Remotely, we see nothing but spires of temples and turrets of palaces, and imagine it the residence of splendour, grandeur and magnificence; but when we have passed the gates, we find it perplexed with narrow passages, disgraced with despicable cottages, embarrassed with obstructions, and clouded with smoke.”

; , No. 14 (1750-05-05)1

“Behind a remarkable scholar one often finds a mediocre man, and behind a mediocre artist, often, a remarkable man.”

, §137,

“The reader lives faster than life, the writer lives slower.”

James Richardson,


I am a freelance American writer & researcher. (To make ends meet, I have a Patreon, benefit from Bitcoin appreciation thanks to some old coins, and live frugally.) I have worked for, published in, or consulted for: Wired (2015), 2 (2012–2013), CFAR (2012), (2017), the FBI (2016), Cool Tools (2013), Quantimodo (2013), New World Encyclopedia (2006), Bitcoin Weekly (2011), (2013–2014), Bellroy (2013–2014), Dominic Frisby (2014), and private clients (2009–); everything on Gwern.net should be considered my own viewpoint or writing unless otherwise specified by a representative or publication. I am currently not accepting new commissions.


“‘I don’t speak’, Bijaz said. ‘I operate a machine called language. It creaks and groans, but is mine own.’”


I have no connection to the French singer or with gwern.com, any locations in Wales, the gwern on MySpace, or either account on Pivory.com (which are connected to an ).


I have been active on the English Wikipedia and related projects since January 2004. Cumulatively6, I have over 90,000 edits and have written or worked on ; during my time as an English administrator, I performed thousands of administrative actions; I am an admin on the Haskell wiki, handling routine spam & vandalism:

I also ran a at “Wikipedia Reliable Sources for anime & manga”; this is a custom Google search with >4542 websites on its and s. (The source/lists are publicly available.) It returns much more useful7 results for topics in popular culture, and as the name suggests, anime & manga in particular.

Uses This

I’m sometimes asked about my tech “stack”, in the vein of “Uses This” or The Paris Review’s Writer At Work. I use FLOSS software with a text/CLI emphasis on a custom workstation designed for deep learning & reinforcement learning work, and an ergonomic home office with portrait-orientation monitor, Aeron chair, & trackball.


I run Ubuntu Linux with a tiling window manager & CLI-centric habits. (I prefer Debian, but NVIDIA driver support has been better with Ubuntu, so as long as I need GPU acceleration, I will be using Ubuntu.) I began using tiling window managers with and helped drive the initial development of and then (my config), which I still use in conjunction with , a fork of the last good GNOME desktop environment version before the crazy GNOME 3 ruined everything.

I spend most of my time in editing Markdown (my config), Firefox (extensions: plugin, HTTPS Everywhere, NoScript, uBlock Origin, LastPass, RECAP), or urxvt//. Most of my programming of R/Haskell/Python is done in a REPL+Emacs. (Friends don’t let friends use heroin or org-mode—are you ever really going to make back the time it takes to learn & customize org-mode?)

Miscellaneously: I use for , for RSS, Evernote/NixNote for clippings/notes, for downloads, / for media playing, for IRC, arbtt for time-tracking, ledger for finances & for scheduling/reminders, for screen tinting at night, and for backups.
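Since ledger comes up whenever anyone asks about the finances part of this stack: it is plain-text double-entry accounting, so the ‘database’ is just a journal file you edit by hand. A minimal sketch (the account names & amounts here are hypothetical, not my real books):

```ledger
; A tiny double-entry journal. Each transaction must balance,
; so the amount left off the last posting is inferred by ledger.
2020/06/01 Patreon payout
    Assets:Bank:Checking       $100.00
    Income:Patreon

2020/06/03 Groceries
    Expenses:Food:Groceries     $42.17
    Assets:Bank:Checking
```

`ledger -f journal.ledger balance` then prints per-account totals, with the elided amount in each balanced transaction filled in automatically.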


A photo of my workstation & window view as of 2018-07-29, showing the used Aeron chair, Dell monitor in portrait mode, and the workstation; sleeping in the cat tree is my cat.


As of June 2020, I use a workstation PC (which I built myself), a large Dell monitor mounted in portrait mode for reading8, a 200-foot Ethernet cable (which required I ), a Logitech thumb trackball, a generic keyboard (to be replaced by a Kinesis Advantage keyboard once I figure out the keymapping), and Bose noise-canceling earphones. In July 2020, I switched to the split Ergodox EZ ($325). The workstation is plugged into a 900W UPS for protection against the not-infrequent lightning storms here, and a 6TB external drive for daily incremental backups, supplemented by Backblaze B2 (~$4/month) & miscellaneous external drives. While traveling, I use my ThinkPad P70 laptop, which replaced an Acer Aspire V17 (which died in a most unfortunate way), which replaced a Dell Studio 17, which replaced a PC I built ~2008.

I designed the workstation to be useful for deep learning, reinforcement learning, and Bayesian statistics, settling on a +dual-GPU design (while not forgetting that IO is often a bottleneck); unfortunately those are fairly contradictory requirements (DRL wants RAM+CPU while DL wants just GPU), and the result wound up being much more expensive than I would’ve liked. (I went overboard on RAM in part because I was frustrated at how I kept hitting RAM limits while testing out various dynamic programming algorithms for the , and because that much RAM means that entire datasets can be cached or worked with in-memory in R/Python, saving the considerable complexity of out-of-core algorithms or optimizations.)

The workstation is a liquid-cooled AMD Threadripper CPU build on a Gigabyte X399 Designare EX motherboard, 2×1080ti NVIDIA GPUs, 110GB RAM (nominally 128GB, but the final stick is unusable due to apparent BIOS issues), a 1TB drive for OS/home, and an 8TB internal HDD for bulk storage, all in an (unnecessary but too fun to not have) tempered-glass case. The process of putting it together was difficult—motherboards/CPUs/GPUs have gotten more complex since I last built a PC back in 2008—and the first motherboard stubbornly refused to boot; after I RMA’d it to Newegg (at a cost of $36), the second one initially worked but then died overnight.9 After tinkering & procrastinating for months, I gave up on the Asus motherboard, checked what was using for their Threadripper builds (ThinkMate was still not offering any), and copied their choice of Gigabyte X399 Designare EX motherboards, reasoning that if they were shipping hundreds of such systems, it must be relatively reliable; that motherboard, plus much more forcefully inserting the Threadripper CPU, finally worked, and I was able to switch everything over in June 2018. While the final result was as powerful and useful as I hoped (especially for working with Danbooru2018, where the 16 cores + 2 GPUs allow me to create many different specialized datasets & experiment with many different GAN architectures), the experience of building it has soured me on building my own PCs in the future: I clearly no longer know enough about PC hardware to do a good job, and the more expensive the components, the less I enjoy the risk or fact of bricking them. In the future I will probably either rely more on cloud solutions or bite the bullet & buy prebuilt systems. The workstation parts list (PCPartPicker.com sketch):


For scanning books, I use a 12-inch guillotine paper cutter to debind books evenly (a big upgrade from using X-Acto knives with Fiskars curved blades), and an Epson sheet-fed scanner with imagescan for scanning & for post-processing.

My desk is an old desk made out of plywood & plumbing hardware by my great-grandfather for my aunt; I repurposed it when I realized it was the perfect size and height. In July 2020, because I failed to find any good standing desks I could buy used locally to test one out, I gave up and bought a 48×30 curved bamboo Jarvis standing desk ($609). I experimented with a but found it distracting, chronically unpleasant, and distressing to my cat. I put the desk in front of my bay window so I could enjoy the view and rest my eyes, while watching what happens on the river. The bay window unfortunately often has direct sunlight through it, so I added reflective sheeting, which greatly reduces the heat during the summer (at the cost of making it gloomier in winter, of course, but that is why I have bright LED bulbs). The chair is a used Aeron chair I bought off Craigslist for $225 in November 2016 (a bargain, although I doubt I would pay the list price). The sisal cat tree (Petco) provides an excellent perch for my cat, and I have added a pet flap with a cat window sill so he can more easily come & go, with acrylic sheeting to reduce air flow. (He turns out to greatly dislike soft surfaces, so half of the cat window sill was useless! I had to replace the foam padding & cover with a sheet of plywood I cut to fit.) The box fan by my feet (Walmart, $19) & the workstation both rest on rubber-cork anti-vibration pads. To reduce RSI, I keep a grip exerciser around to use during idle moments like watching videos. For making , I boil water in a simple adjustable electric tea kettle which I’ve made ‘programmable’ by drilling a hole into the clear plastic & inserting a meat thermometer (which combination is far cheaper than electronic kettles and more trustworthy); I then steep the tea in a Finum filter inside a big Colonial Williamsburg ceramic fox mug.

My cat would like to remind you to take a typing & computer break every hour.

Mailing lists






This section covers some of the most important things possible to know about me: my personality and mental description. No doubt some readers expected a carefully airbrushed & potted biography describing where & when I was raised, what my familial & tribal affiliations are, or what famous institutions I am affiliated with; but this information is almost entirely useless—what can one predict about me from knowing that I was born in Illinois and raised on Long Island, except (maybe) my accent and a general liberalism? The irony—that people most want the information they will learn least from—will not be lost on those familiar with . In contrast, standardized & validated psychometric instruments like the or really do have predictive validity for many life outcomes.

(Much of this data comes from YourMorals.org. I plan to retake the surveys, if possible, every decade; it will be interesting to see what changes.)


To describe my personality briefly: I am introverted, calm, neither particularly industrious nor lazy, contrary, and pathologically curious. I have made a copy of my 2011–2014 responses to the YourMorals.org corpus; discussed in more detail below. My scores on the “Big 5 Personality Inventory”, /long 1/2/3:

  1. Openness10: high (short) or 87/87th (long)
  2. Conscientiousness11: medium or 64/69th
  3. Extraversion12: low or 6/7th percentile
  4. Agreeableness13: medium-low or 3/3rd percentile
  5. Neuroticism14: medium-low or 16/13th percentile

For those who enjoy playing the game of ‘ad hominem via lay psychiatric diagnosis’, may I suggest not accusing me of —which is so overdone—but something more novel & scary-sounding like ?


The relevant results



  • Email: gwern@gwern.net; I do not use Skype.
  • Bitcoin: 1Gb89tyJq3P5K5M3GcpFvPrMsw33cik9wX (canonical address; used for #bitcoin-otc trading)
  • PGP key (mirror; fingerprint: 7DCEA38789C588CC; my old key, F7E5D682, is no longer usable)


Collaboration style

Once on #haskell, I was asked why I have no large programs to my credit; I replied, “My problem is that most programs I use already exist.”

I am not a bad Haskell programmer (although I am no guru like Simon Peyton-Jones, Apfelmus, or Don Stewart), but given how long I’ve been using Haskell, my contributions probably look pretty slim. This isn’t because I don’t like Haskell—I do; I find functional programming natural: defining transformation after transformation until the result is what I need. And of the functional languages, Haskell seems the best combination of power beyond basic arithmetic or list processing, one of the best ecosystems, and a good basic language. (Which is not to say it’s perfect: there are some sharp edges in the basic math which irritate me when I’m messing around in the REPL.)
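To illustrate that ‘transformation after transformation’ style with a toy sketch (not code from any project of mine): a word-frequency counter is just a composition of standard list functions, each reshaping the data one step closer to the answer.

```haskell
import Data.Char (isAlpha, toLower)
import Data.List (group, sort, sortOn)
import Data.Ord (Down(..))

-- Each stage is a pure transformation: normalize characters, split
-- into words, cluster duplicates, count clusters, rank by frequency.
wordFreqs :: String -> [(String, Int)]
wordFreqs = sortOn (Down . snd)
          . map (\ws -> (head ws, length ws))
          . group . sort
          . words
          . map (\c -> if isAlpha c then toLower c else ' ')
```

In GHCi, `wordFreqs "the cat saw the other cat"` evaluates to `[("cat",2),("the",2),("other",1),("saw",1)]` (sortOn is stable, so ties keep alphabetical order).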

This is partly because of my style of contribution. I’ve always preferred to work on existing applications and libraries rather than write my own. I’ve always preferred to take someone else’s work and bring it up to snuff rather than write a clean implementation of my own. I’ve always preferred prodding the author or maintainer to do the right thing rather than drop a large batch of patches onto them. Likewise, I view it as better to use Haskell standards like Cabal or Darcs than to use something like Autotools, even if the latter lets us manage just a little more automation. I view it as better to upload to Hackage than to use any fancy site like or .

It’s better to do yeoman’s work taking two similar modules in two applications and splitting them out into a library than to write even the fanciest using . Better to commit changes that reduce user configs by a line than to demonstrate once again the elegance of monads. Better by far to file a bug than to wank around in #haskell expressions.

It is much better to find some people who have tried in the past to solve a problem and bring them together to solve it than to solve it yourself—even if it means being a footnote (or less) in the announcement. What’s important is that it got done, and people will be using it. Not the credit. It is a high accomplishment indeed to factor out a bit of functionality into a library and make every possible user actually use it. Would that more Haskellers had this mindset! Indeed, would that more people in general had this mindset; as it is, people have bad habits of repeatedly failing when they think they have special information, are highly overconfident even in objective areas with quick feedback, and badly overestimate how many good ideas they can come up with15—indeed, most good ideas are . One should be able to draw upon the wisdom of others.

This is an ethos I learned working with the inclusionists of Wikipedia. No code is so bad that it contains no good; the most valuable code is that used by other code; credit is less important than work; a steady stream of small trivial improvements is better than occasional massive edits.

A leader is best when people barely know that he exists, not so good when people obey and acclaim him, worst when they despise him. Fail to honor people, They fail to honor you. But of a good leader, who talks little, when his work is done, his aims fulfilled, they will all say, ‘We did this ourselves.’16

This is not an ethos calculated to impress. Filing bug reports, helping newbies, commenting on articles and code, cabalizing & uploading code—these are things hard to evaluate or take credit for. They are useful, useful indeed (shepheb or, eg, myself never boast in #xmonad of having helped 5 newbies today, but over the months and years, this friendliness and ready aid is of greater value than any module in all of XMonadContrib), but they will never impress an interviewer or earn a fellowship. Is that too bad? Did I waste all my time?

I don’t think so. I value my contributions, and the Haskell community is better for them. It may have made my life a little more difficult—all that time spent on Haskell matters is time I did not devote to classes or jobs or what-have-you—but ultimately they did help somebody. One could do worse things with one’s time than that.

Coding contributions

I mostly contribute to projects in Haskell, my favorite language; I have contributed to non-Haskell projects such as , , 17 etc., but not in major ways, so I do not list them here. After starting this website, I wound down my regular coding activities in favor of my writings; when I code now, it tends to be tools documented or hosted on this website (eg , ) or integrated into writeups (eg ). For that code, you can browse by language tag.

Below is a more detailed list of my old Haskell contributions, most of which are now of only historical interest.


  • arbtt

    • wrote tutorial on configuring the time-tracker & defining rules: “Effective Use of arbtt”
    • documented dependencies, similar software, configuration syntax mode, CLI flag corrections
  • Darcs

    1. Switched from FastPackedStrings to ByteStrings
    2. Low-level C optimization
    3. Initiated Cabalization (my work initially appeared as darcs-cabalized and then was merged into HEAD and darcs-cabalized deprecated)
    4. Refactoring of shell tests
    5. Initiated switch from wiki to Gitit
    6. Identified performance issue & instigated addition of --max-count option for Filestore
  • XMonad

    1. regular XMonadContrib patch reviews

    2. Config archive downloader

    3. Contributed modules:

      1. XMonad.Util.Paste
      2. XMonad.Actions.Search
      3. XMonad.Actions.WindowGo
      4. XMonad.Util.XSelection
    4. Maintained previous18

  • Yi

    1. Contributed modules:

      1. Yi.IReader
      2. Yi.Mode.IReader
      3. Yi.Hoogle
    2. Improved Emacs keybindings

    3. Initiated ‘Unicodify’ or ‘Pretty Lambdas’ feature for Haskell syntax highlighting

    4. Added movement-related functions for improved incremental search

    5. Cleanup19

    6. Comment support to cabal-mode

  • Lambdabot

    1. (Re)Cabalized20

    2. Adapted to use Mueval

    3. Refactored out code into multiple packages:

      1. show
      2. lambdabot-utils
      3. brainfuck
      4. unlambda
    4. Implemented run-in-any-directory functionality (previously Lambdabot could only run in the repository directory)

    5. Cleanup

    6. Maintained it (with Cale Gibbard)

  • Gitit

    • Wrote Darcs backend (which was moved to the filestore package and became Data.FileStore.Darcs)
    • Did some optimization work (images, JavaScript & CSS minification, wrote gzip encoding & initiated expire headers, JS relocation, fewer calls to expensive filestore functions)
    • Wrote RSS support
    • Wrote Interwiki plugin
    • Wrote Date plugin
    • Wrote WebArchiver & WebArchiverBot plugins (see later standalone tool/library)
    • Wrote Unicode plugin
    • Wrote HCAR entry
    • Misc. bug reports & suggestions
    • Added PDF export functionality
    • Integrated jQuery-based floating footnotes
  • Filestore

    • Instigated its development/use in Gitit & Orchid
    • Maintained the Darcs backend (debug & optimize)
  • archiver: Wrote and maintain it (see release ANN)

  • Mueval: Wrote it

  • wp-archivebot: Wrote it (see release ANN)

  • Change-monger: Wrote it

  • Base

  • Unix: fixed a possible runtime crash in mkstemp; added mkstemp docs

  • Autoproc

    • Cleanup
    • Improved basic functionality
    • Implemented an XMonad-style reload system to allow actual customization
    • Maintained it
  • Frag

    • Updated for GHC 6.8 & 6.1021
    • Cleanup
    • Replaced the non-Free level data and graphics with Free ones
  • Hint

    • Improved examples, docs
    • Added UTF8 support
    • Made it use the ghc-paths library
    • Enabled QuickCheck support
    • Added GHC-options support
  • Hlint: added GHCi integration

  • Pugs

    • Cleaned up their third-party modules
    • Fixed up various Cabal issues
    • Helped maintain it
  • QuickCheck: Data.Complex in­stance

  • Tagsoup: replaced old custom HTTP download code with standard library functions

  • Hashell: Updated for 6.8’s GHC API; Cleanup; Cabalized


As part of my effort to help shift the Haskell community to the use of centralized packaging repositories pioneered by CPAN, which is a fundamental requirement for any modern language, I worked systematically to get all extant Haskell code into Cabal format & uploaded to Hackage—whether the original authors wanted it or not. (For all the ruffled feathers and continued infelicities of Haskell packaging, a decade later, no Haskeller would go back to the pre-Cabal/Hackage Autotools days.) I cabalized and/or uploaded (according to the 2013-05-10 Hackage upload log):

  1. This is a literary way of saying I am not as interesting as my writings, and in some respect, it should not matter who I am or what I have done, because argument screens off authority.↩︎

  2. When I say “research assistant”, I mean it in the older sense of someone who does detail work for another person’s original research—so I spent a lot of time reading up on specific areas and making notes about stuff my boss needed, and only occasionally did independent work. Not all my work can be made public, but some of it is. A partial list in rough chronological order:

  3. The following is a list of my submissions to LW I regard as substantive or particularly good, excluding content which can be found on Gwern.net, in chronological order with interesting ones highlighted:

  4. Of course, I don’t agree with every MIRI or LW position. The intellectual homogeneity has been much over-estimated by outsiders who have not bothered to look at the annual surveys, I think. Here are some major points for me:

    1. : I think that LWers who were persuaded by Eliezer’s MWI writings are wrong to do so, as they are unfamiliar with even the rudiments of any alternative interpretations and cannot judge in the matter; how many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? (“Collapse, MWI, uh…”), much less could discuss why they dislike or whatever. Lacking any real understanding, they ought to simply adopt the expert consensus, where MWI seems to have a plurality or bare majority of adherents (with the weak confidence that implies).

    2. Heuristics and cognitive biases: I am not much convinced that knowledge of heuristics & biases helps in ordinary life. Feedback & learning are powerful tools in eliminating error, calibrating predictions, and justifying committing what may look like the ; and feedback is what one gets in ordinary life.

      Per , where our knowledge of heuristics & biases will pay off most is in what Hanson would call scenarios: evolutionarily novel situations with few precedents and only costly or non-existent feedback. (For example, the question of whether artificial intelligence will be developed by 2040: it will only happen or not once, there are few comparable events, the consequences may be dramatic, and our ordinary lives offer no useful insights.) As it happens, this describes much of futurism & forecasting, but we cannot justify our futurism by claiming its techniques are incredibly valuable in ordinary life!

    3. Cryonics girl: The donations appall me, for reasons I lay out at length there—they are a complete abandonment of core ideas like utilitarianism & optimal philanthropy.

    4. Alicorn’s “Living Luminously” paradigm struck me as dubious, not backed by even token research, and likely idiosyncratic to her; I thought her Luminosity e-novel was merely OK despite the endless discussions on LW (rivaling those for Methods of Rationality itself), and that her followup, Radiance, was just terrible. Nevertheless, her novel career seems to continue.

  5. There is a moderately funny story about how Gerard came to write it, based on my musical incompetence.↩︎

  6. That is, summing up the (surviving) edits of my various accounts over the years: User:Gwern, User:Marudubshinki, & User:Rhwawn↩︎

  7. Compare the CSE results with the Google results for the anime . Which is more useful for an editor? For more details, see my release announcement.↩︎

  8. A trick I discovered when visiting FHI in 2015—I had used widescreen laptops for so long I had forgotten how nice portrait-orientation was for reading.↩︎

  9. My best guess is that my problem initially was that I seriously underestimated how much pressure it takes to insert a Threadripper CPU into its socket—it required a truly terrifying amount of force, and I only got it right after triple-checking online tutorials & videos & discussions—and that was why the first motherboard never worked at all, and the second one was killed by static electricity or a short.↩︎

  10. See also “Actively Open-Minded Thinking Scale”, “Clarity Scale”, “Engagement with Beauty”, & “a measure of what types of stories you enjoy”.↩︎

  11. See also “Zimbardo Time Perspective Inventory”. Brent W. Roberts criticizes these two inventories when used to measure Conscientiousness.↩︎

  12. See also “Relational Mobility scale”, “Empathizing and Systemizing scales” & “Rational vs Experiential Inventory”.↩︎

  13. See also “Self-Report Psychopathy Scale”.↩︎

  14. See also “Experience in Purchasing Behavior Scale” & “Kentucky Inventory of Mindfulness Skills”.↩︎

  15. For further reading on overconfidence, see all LW articles so tagged. I once read in a book of a study in which subjects were asked to generate ideas for, IIRC, putting out a fire, and to stop only when they were convinced they had thought up all the good ones; they usually stopped when they had thought up only a third. But I have been unable to refind it and would appreciate knowing details if this description rings any bells for a reader.↩︎

  16. Chapter 17, Tao Teh Ching↩︎

  17. For example, my clean-up and extension of the browse-url module was completely rewritten by ; so I can hardly take credit there.↩︎

  18. Henceforth, this implies I have a commit-bit (or equivalent) for that project.↩︎

  19. Henceforth, ‘cleanup’ should be taken as referring to extensive miscellaneous changes which include (in no particular order):

    • fixing GHC’s -Wall or hlint warnings

    • replacing OPTIONS pragmas with LANGUAGE pragmas

    • tracking down licensing information

    • switching from Haskell98 imports to the standard hierarchical module imports

      1. eg. import Char → import Data.Char; nontrivial in some cases where Haskell98 modules were dispersed over multiple base modules
    • reorganizing the file tree

    • improving the Cabalization

    • whitespace formatting, and so on.

  20. Henceforth, this typically implies that I uploaded it to Hackage as well↩︎

  21. Henceforth, this implies that I made whatever changes necessary to get it compiling on GHC 6.8.x and 6.10.x↩︎