Who am I online & what have I done? Contact information; sites I use; computers and software tools; things I’ve worked on; psychological profiles
Haskell, personal, psychology, survey
2009-08-05–2020-06-14 · finished · certainty: highly likely · importance: 3

This page is about me; for information about, see .


“A transition from an author’s book to his conversation, is too often like an entrance into a large city, after a distant prospect. Remotely, we see nothing but spires of temples and turrets of palaces, and imagine it the residence of splendour, grandeur and magnificence; but when we have passed the gates, we find it perplexed with narrow passages, disgraced with despicable cottages, embarrassed with obstructions, and clouded with smoke.”

Samuel Johnson, The Rambler, No. 14 (1750-05-05)1

“Behind a remarkable scholar one often finds a mediocre man, and behind a mediocre artist, often, a remarkable man.”

, §137,

“The reader lives faster than life, the writer lives slower.”

James Richardson,


I am a freelance American writer & researcher. (To make ends meet, I have a Patreon, benefit from Bitcoin appreciation thanks to some old coins, and live frugally.) I have worked for, published in, or consulted for: Wired (2015), 2 (2012–2013), CFAR (2012), (2017), the FBI (2016), Cool Tools (2013), Quantimodo (2013), New World Encyclopedia (2006), Bitcoin Weekly (2011), (2013–2014), Bellroy (2013–2014), Dominic Frisby (2014), and private clients (2009–); everything on should be considered my own viewpoint or writing unless otherwise specified by a representative or publication. I am currently not accepting new commissions.


“‘I don’t speak’, Bijaz said. ‘I operate a machine called language. It creaks and groans, but is mine own.’”


I have no connection to the French singer, or with any locations in Wales, the gwern on MySpace, or either account on (which are connected to an ).


I have been active on the English Wikipedia and related projects since January 2004. Cumulatively6, I have over 90,000 edits and have written or worked on ; during my time as an English administrator, I performed thousands of administrative actions. I am also an admin on the Haskell wiki, handling routine spam & vandalism.

I also ran a at “Wikipedia Reliable Sources for anime & manga”; this is a custom Google search with >4542 websites on its whitelist and blacklists. (The source/lists are publicly available.) It returns much more useful7 results for topics in popular culture, and, as the name suggests, anime & manga in particular.

Uses This

I’m sometimes asked about my tech “stack”, in the vein of “Uses This” or The Paris Review’s Writer At Work. I use FLOSS software with a text/CLI emphasis on a custom workstation designed for deep learning & reinforcement learning work, and an ergonomic home office with portrait-orientation monitor, Aeron chair, & trackball.


I run Ubuntu Linux with a tiling window manager & CLI-centric habits. (I prefer Debian, but support for NVIDIA drivers has been better with Ubuntu, so as long as I need GPU acceleration, I will be using Ubuntu.) I began using tiling window managers with and helped drive the initial development of and then (my config), which I still use in conjunction with , a fork of the last good GNOME desktop environment version before the crazy GNOME 3 ruined everything.

I spend most of my time in editing Markdown (my config), Firefox (extensions: plugin, HTTPS Everywhere, NoScript, uBlock Origin, LastPass, RECAP), or urxvt//. Most of my programming of R/Haskell/Python is done in a REPL+Emacs. (Friends don’t let friends use heroin or org-mode—are you ever really going to make back the time it takes to learn & customize org-mode?)

Miscellaneously: I use for , for RSS, Evernote/NixNote for clippings/notes, for downloads, / for media playing, for IRC, arbtt for time-tracking, ledger for finances & for scheduling/reminders, for screen tinting at night, and for backups.


A photo of my workstation & window view as of 2018-07-29, showing the used Aeron chair, Dell monitor in portrait mode, and the workstation; sleeping in the cat tree is my cat.


As of June 2020, I use a workstation PC (which I built myself), a large Dell monitor mounted in portrait mode for reading8, a 200-foot Ethernet cable (which required I ), a Logitech thumb trackball, a generic keyboard (to be replaced by a Kinesis Advantage keyboard once I figure out the keymapping), and Bose noise-canceling earphones. In July 2020, I switched to the split Ergodox EZ ($325). The workstation is plugged into a 900W UPS for protection against the not-infrequent lightning storms here, and a 6TB external drive for daily incremental backups, supplemented by Backblaze B2 (~$4/month) & miscellaneous external drives. While traveling, I use my ThinkPad P70 laptop, which replaced an Acer Aspire V17 (which died in a most unfortunate way), which replaced a Dell Studio 17, which replaced a PC I built ~2008.

I designed the workstation to be useful for deep learning, reinforcement learning, and Bayesian statistics, settling on a +dual-GPU design (while not forgetting that IO is often a bottleneck); unfortunately, those are fairly contradictory requirements (DRL wants RAM+CPU while DL wants just GPU), and the result wound up being much more expensive than I would have liked. (I went overboard on RAM in part because I was frustrated by how I kept hitting RAM limits while testing out various dynamic programming algorithms for the , and because that much RAM means that entire datasets can be cached or worked with in-memory in R/Python, saving the considerable complexity of out-of-core algorithms or optimizations.)

The workstation is a liquid-cooled AMD Threadripper CPU build on a Gigabyte X399 Designare EX motherboard, 2×1080ti NVIDIA GPUs, 110GB RAM (nominally 128GB, but the final stick is unusable due to apparent BIOS issues), a 1TB drive for OS/home, and an 8TB internal HDD for bulk storage, all in an (unnecessary but too fun to not have) tempered-glass case. The process of putting it together was difficult—motherboards/CPUs/GPUs have gotten more complex since I last built a PC back in 2008—and the first motherboard stubbornly refused to boot; after I RMA’d it to Newegg (at a cost of $36), the second one initially worked but then died overnight.9 After tinkering & procrastinating for months, I gave up on the Asus motherboard, checked what was using for their Threadripper builds (ThinkMate was still not offering any), and copied their choice of Gigabyte X399 Designare EX motherboards, reasoning that if they were shipping hundreds of such systems, it must be relatively reliable; that motherboard, plus much more forcefully inserting the Threadripper CPU, finally worked, and I was able to switch everything over in June 2018. While the final result was as powerful and useful as I hoped (especially for working with , where the 16 cores + 2 GPUs allow me to create many different specialized datasets & experiment with many different GAN architectures), the experience of building it has soured me on building my own PCs in the future: I clearly no longer know enough about PC hardware to do a good job, and the more expensive the components, the less I enjoy the risk or fact of bricking them. In the future I will probably either rely more on cloud solutions or bite the bullet & buy prebuilt systems. The workstation parts list ( sketch):


For scanning books, I use a 12-inch guillotine paper cutter to debind books evenly (a big upgrade from using X-Acto knives with Fiskars curved blades), and an Epson sheet-fed scanner with imagescan for scanning & for post-processing.

My desk is an old desk made out of plywood & plumbing hardware by my great-grandfather for my aunt; I repurposed it when I realized it was the perfect size and height. In July 2020, because I failed to find any good standing desks I could buy used locally to test one out, I gave up and bought a 48×30 curved bamboo Jarvis standing desk ($609). I experimented with a but found it distracting, chronically unpleasant, and distressing to my cat. I put the desk in front of my bay window so I could enjoy the view and rest my eyes, while watching what happens on the river. The bay window unfortunately often has direct sunlight through it, so I added reflective sheeting, which greatly reduces the heat during the summer (at the cost of making it gloomier in winter, of course, but that is why I have bright LED bulbs). The chair is a used Aeron chair I bought off Craigslist for $225 in November 2016 (a bargain, although I doubt I would pay the list price). The sisal cat tree (Petco) provides an excellent perch for my cat, and I have added a pet flap with a cat window sill so he can more easily come & go, with acrylic sheeting to reduce air flow. (He turns out to greatly dislike soft surfaces, so half of the cat window sill was useless! I had to replace the foam padding & cover with a sheet of plywood I cut to fit.) The box fan by my feet (Walmart, $19) & the workstation both rest on rubber-cork anti-vibration pads. To reduce RSI, I keep a grip exerciser around to use during idle moments like watching videos. For making , I boil water in a simple adjustable electric tea kettle which I’ve made ‘programmable’ by drilling a hole into the clear plastic & inserting a meat thermometer (which combination is far cheaper than electronic kettles and more trustworthy); I then steep the tea in a Finum filter inside a big Colonial Williamsburg ceramic fox mug.

My cat would like to remind you to take a typing & computer break every hour.

Mailing lists






This section covers some of the most important things possible to know about me: my personality and mental description. No doubt some readers expected a carefully airbrushed & potted biography describing where & when I was raised, what my familial & tribal affiliations are, or what famous institutions I am affiliated with; yet this information is almost entirely useless—what can one predict about me knowing that I was born in Illinois and raised on Long Island, beyond (maybe) my accent and a general liberalism? The irony—that people most want the information they will learn least from—will not be lost on those familiar with . In contrast, standardized & validated psychometric instruments like the or really do have predictive validity for many life outcomes.

(Much of this data comes from YourMorals.org. I plan to retake the surveys, if possible, every decade; it will be interesting to see what changes.)


To describe my personality briefly: I am introverted, calm, neither particularly industrious nor lazy, contrary, and pathologically curious. I have made a copy of my 2011–2014 responses to the YourMorals.org corpus, discussed in more detail below. My scores on the “Big 5 Personality Inventory”, short/long 1/2/3:

  1. 10: high (short) or 87/87th (long)
  2. 11: medium or 64/69th
  3. 12: low or 6/7th per­centile
  4. 13: medi­um-low or 3/3rd per­centile
  5. 14: medi­um-low or 16/13th per­centile

For those who enjoy playing the game of ‘ad hominem via lay psychiatric diagnosis’, may I suggest not accusing me of —which is so overdone—but something more novel & scary-sounding like ?


The relevant results



  • Email:; I do not use Skype.
  • Bit­coin: 1Gb89tyJq3P5K5M3GcpFvPrMsw33cik9wX (canon­i­cal address; used for #bitcoin-otc trad­ing)
  • PGP key (mirror; fingerprint: 7DCEA38789C588CC; my old key, F7E5D682, is no longer usable)


Collaboration style

Once on #haskell, I was asked why I have no large programs to my credit; I replied, “My problem is that most programs I use already exist.”

I am not a bad Haskell programmer (although I am no guru like Simon Peyton Jones, Apfelmus, or Don Stewart), but given how long I’ve been using Haskell, my contributions probably look pretty slim. This isn’t because I don’t like Haskell—I do; I find functional programming natural: defining transformation after transformation until the result is what I need. And of the functional languages, Haskell seems the best combination of power beyond basic arithmetic or list processing, one of the best ecosystems, and a good basic language. (Which is not to say it’s perfect: there are some sharp edges in the basic math which irritate me when I’m messing around in the REPL.)
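To make “defining transformation after transformation” concrete, here is a minimal toy sketch of my own devising (not code from any project mentioned on this page): a word-frequency counter built by composing small pure transformations with `(.)`, readable bottom-to-top as a pipeline.

```haskell
import Data.Char (isAlpha, toLower)
import Data.List (group, sort, sortBy)
import Data.Ord (Down (..), comparing)

-- Count word frequencies, most frequent first; each stage is one
-- small transformation composed onto the next.
wordFreqs :: String -> [(String, Int)]
wordFreqs = sortBy (comparing (Down . snd))            -- most frequent first
          . map (\ws -> (head ws, length ws))          -- each run to (word, count)
          . group . sort                               -- collect identical words
          . words                                      -- split on whitespace
          . map (\c -> if isAlpha c then toLower c else ' ')  -- normalize

main :: IO ()
main = print (wordFreqs "the cat sat on the mat the end")
```

Nothing here is clever; the point is the style: no loops, no mutable state, just one transformation after another until the result is what I need.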

This is partly because of my style of contribution. I’ve always preferred working on existing applications and libraries to writing my own; preferred taking someone else’s work and bringing it up to snuff to writing a clean implementation of my own; preferred prodding the author or maintainer to do the right thing to dropping a large batch of patches onto them. Likewise, I view it as better to use Haskell standards like Cabal or Darcs than something like Autotools, even if the latter lets us manage just a little more automation; better to upload to Hackage than to use any fancy site like or .

It’s better to do yeoman’s work taking two similar modules in two applications and splitting them out into a library than to write even the fanciest using . Better to commit changes that reduce user configs by a line than to demonstrate once again the elegance of monads. Better by far to file a bug than wank around in #haskell expressions.

It is much better to find some people who have tried in the past to solve a problem and bring them together to solve it than to solve it yourself—even if it means being a footnote (or less) in the announcement. What’s important is that it got done, and people will be using it. Not the credit. It is a high accomplishment indeed to factor out a bit of functionality into a library and make every possible user actually use it. Would that more Haskellers had this mindset! Indeed, would that more people in general had this mindset; as it is, people have bad habits of repeatedly failing when they think they have special information, are highly overconfident even in objective areas with quick feedback, and badly overestimate how many good ideas they can come up with15—indeed, most good ideas are . One should be able to draw upon the wisdom of others.

This is an ethos I learned working with the inclusionists of Wikipedia. No code is so bad that it contains no good; the most valuable code is that used by other code; credit is less important than work; a steady stream of small trivial improvements is better than occasional massive edits.

A leader is best when people barely know that he exists, not so good when people obey and acclaim him, worst when they despise him. Fail to honor people, they fail to honor you. But of a good leader, who talks little, when his work is done, his aims fulfilled, they will all say, ‘We did this ourselves.’16

This is not an ethos calculated to impress. Filing bug reports, helping newbies, commenting on articles and code, cabalizing & uploading code—these are things hard to evaluate or take credit for. They are useful, useful indeed (shepheb or, eg, myself never boast in #xmonad of having helped 5 newbies today, but over the months and years, this friendliness and ready aid is of greater value than any module in all of XMonadContrib), but they will never impress an interviewer or earn a fellowship. Is that too bad? Did I waste all my time?

I don’t think so. I value my contributions, and the Haskell community is better for it. It may have made my life a little more difficult—all that time spent on Haskell matters is time I did not devote to classes or jobs or what-have-you—but ultimately they did help somebody. One could do worse things with one’s time than that.

Coding contributions

I mostly contribute to projects in Haskell, my favorite language; I have contributed to non-Haskell projects such as , , 17 etc., but not in major ways, so I do not list them here. After starting this website, I wound down my regular coding activities in favor of my writings; when I code now, it tends to be tools documented or hosted on this website (eg , ) or integrated into writeups (eg ). For that code, you can browse by language tag.

Below is a more detailed list of my old Haskell contributions, most of which are now of only historical interest.


  • arbtt

    • wrote tutorial on configuring the time-tracker & defining rules: “Effective Use of arbtt”
    • documented dependencies, similar software, configuration syntax mode, CLI flag corrections
  • Darcs

    1. Switched from FastPackedStrings to ByteStrings
    2. Low-level C optimization
    3. Initiated Cabalization (my work initially appeared as darcs-cabalized and then was merged into HEAD and darcs-cabalized deprecated)
    4. Refactoring of shell tests
    5. Initiated switch from wiki to Gitit
    6. Identified performance issue & instigated addition of --max-count option for Filestore
  • XMonad

    1. regular XMonadContrib patch reviews

    2. Config archive downloader

    3. Contributed modules:

      1. XMonad.Util.Paste
      2. XMonad.Actions.Search
      3. XMonad.Actions.WindowGo
      4. XMonad.Util.XSelection
    4. Maintained previous18

  • Yi

    1. Contributed modules:

      1. Yi.IReader
      2. Yi.Mode.IReader
      3. Yi.Hoogle
    2. Improved Emacs keybindings

    3. Initiated ‘Unicodify’ or ‘Pretty Lambdas’ feature for Haskell syntax highlighting

    4. Added movement-related functions for improved incremental search

    5. Cleanup19

    6. Comment support to cabal-mode

  • Lambdabot

    1. (Re)Cabalized20

    2. Adapted to use Mueval

    3. Refactored out code into multiple packages:

      1. show
      2. lambdabot-utils
      3. brainfuck
      4. unlambda
    4. Implemented run-in-any-directory functionality (previously Lambdabot could only run in the repository directory)

    5. Cleanup

    6. Maintained it (with Cale Gibbard)

  • Gitit

    • Wrote Darcs backend (which was moved to the filestore package and became Data.FileStore.Darcs)
    • Did some optimization work (images, JavaScript & CSS minification, wrote gzip encoding & initiated expire headers, JS relocation, fewer calls to expensive filestore functions)
    • Wrote RSS support
    • Wrote Interwiki plugin
    • Wrote Date plugin
    • Wrote WebArchiver & WebArchiverBot plugins (see later standalone tool/library)
    • Wrote Unicode plugin
    • Wrote HCAR entry
    • Misc. bug reports & suggestions
    • Added PDF export functionality
    • Integrated jQuery-based floating footnotes
  • Filestore

    • Instigated its development/use in Gitit & Orchid
    • Maintained the Darcs backend (debug & optimize)
  • archiver: Wrote and maintain it (see release ANN)

  • Mueval: Wrote it

  • wp-archivebot: Wrote it (see release ANN)

  • Change-monger: Wrote it

  • Base

  • Unix: fixed a possible runtime crash in mkstemp; added mkstemp docs

  • Autoproc

    • Cleanup
    • Improved basic functionality
    • Implemented an XMonad-style reload system to allow actual customization
    • Maintained it
  • Frag

    • Updated for GHC 6.8 & 6.1021
    • Cleanup
    • Replaced the non-Free level data and graphics with Free ones
  • Hint

    • Improved examples, docs
    • Added UTF8 support
    • Made it use the ghc-paths library
    • Enabled QuickCheck support
    • Added GHC-options support
  • Hlint: added GHCi integration

  • Pugs

    • Cleaned up their third-party modules
    • Fixed up various Cabal issues
    • Helped maintain it
  • QuickCheck: Data.Complex instance

  • Tagsoup: replaced old custom HTTP download code with standard library functions

  • Hashell: Updated for 6.8’s GHC API; Cleanup; Cabalized

To help shift the Haskell community to the centralized packaging repositories pioneered by CPAN, a fundamental requirement for any modern language, I made a systematic effort to get all extant Haskell code into Cabal format & uploaded to Hackage—whether the original authors wanted it or not. (For all the ruffled feathers and continued infelicities of Haskell packaging, a decade later, no Haskeller would go back to the pre-Cabal/Hackage Autotools days.) I cabalized and/or uploaded (according to the 2013-05-10 Hackage upload log):
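For readers who never saw the Autotools era, the point of the Cabal format is that an entire package is described by one small declarative file. A minimal, purely hypothetical sketch of the sort of `.cabal` file one writes when cabalizing a package (the package name, version, and metadata here are invented for illustration; the real files varied per package):

```cabal
name:          hello-example        -- hypothetical package name
version:       0.1
cabal-version: >= 1.10
build-type:    Simple
license:       BSD3
synopsis:      A minimal example package description

executable hello-example
  main-is:          Main.hs
  build-depends:    base >= 4 && < 5
  default-language: Haskell2010
```

Given such a file, `cabal build` and `cabal sdist` handle compilation and packaging, and the resulting tarball can be uploaded to Hackage; no per-package `configure` scripts or Makefiles needed.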

  1. This is a literary way of saying I am not as interesting as my writings, and in some respect, it should not matter who I am or what I have done, because argument screens off authority.↩︎

  2. When I say “research assistant”, I mean it in the older sense of someone who does detail work for another person’s original research—so I spent a lot of time reading up on specific areas and making notes about stuff my boss needed, and only occasionally did independent work. Not all my work can be made public, but some of it is. A partial list in rough chronological order:

  3. The following is a list of my submissions to LW I regard as substantive or particularly good, excluding content which can be found on , in chronological order with interesting ones highlighted:

  4. Of course, I don’t agree with every MIRI or LW position. The intellectual homogeneity has been much over-estimated by outsiders who have not bothered to look at the annual surveys, I think. Here are some major points for me:

    1. : I think that LWers who were persuaded by Eliezer’s MWI writings are wrong to do so, as they are unfamiliar with even the rudiments of any alternative interpretations and cannot judge in the matter; how many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? (“Collapse, MWI, uh…”), much less could discuss why they dislike or whatever. Lacking any real understanding, they ought to simply adopt the expert consensus, where MWI seems to have a plurality or bare majority of adherents (with the weak confidence that implies).

    2. Heuristics and cognitive biases: I am not much convinced that knowledge of heuristics & biases helps in ordinary life. Feedback & learning are powerful tools for eliminating error, calibrating predictions, and justifying committing what may look like the ; and feedback is what one gets in ordinary life.

      Per , where our knowledge of heuristics & biases will pay off most is in what Hanson would call scenarios: evolutionarily novel situations with few precedents and only costly or non-existent feedback. (For example, the question of whether artificial intelligence will be developed by 2040: it will only happen or not once, there are few comparable events, the consequences may be dramatic, and our ordinary lives offer no useful insights.) As it happens, this describes much of futurism & forecasting, but we cannot justify our futurism by claiming its techniques are incredibly valuable in ordinary life!

    3. Cryonics girl: The donations appall me, for reasons I lay out at length there—they are a complete abandonment of core ideas like utilitarianism & optimal philanthropy.

    4. Alicorn’s “Living Luminously” paradigm struck me as dubious, not backed by even token research, and likely idiosyncratic to her; I thought her Luminosity e-novel was merely OK despite the endless discussions on LW (rivaling those for Methods of Rationality itself), and that her followup, Radiance, was just terrible. Nevertheless, her novel career seems to continue.

  5. There is a moderately funny story about how Gerard came to write it, based on my musical incompetence.↩︎

  6. That is, summing up the (surviving) edits of my various accounts over the years: User:Gwern, User:Marudubshinki, & User:Rhwawn↩︎

  7. Compare the CSE results with the Google results for the anime . Which is more useful for an editor? For more details, see my release announcement.↩︎

  8. A trick I discovered when visiting FHI in 2015—I had used widescreen laptops for so long I had forgotten how nice portrait-orientation was for reading.↩︎

  9. My best guess is that my problem initially was that I seriously underestimated how much pressure it takes to insert a Threadripper CPU into its socket—it required a truly terrifying amount of force and I only got it right after triple-checking online tutorials & videos & discussions—and that was why the first motherboard never worked at all, and the second one was killed by static electricity or a short.↩︎

  10. See also “Actively Open-Minded Thinking Scale”, “Clarity Scale”, “Engagement with Beauty”, & “a measure of what types of stories you enjoy”.↩︎

  11. See also “Zimbardo Time Perspective Inventory”. Brent W. Roberts criticizes these two inventories when used to measure Conscientiousness.↩︎

  12. See also “Relational Mobility scale”, “Empathizing and Systemizing scales” & “Rational vs Experiential Inventory”.↩︎

  13. See also “Self-Report Psychopathy Scale”.↩︎

  14. See also “Experience in Purchasing Behavior Scale” & “Kentucky Inventory of Mindfulness Skills”.↩︎

  15. For further reading on overconfidence, see all LW articles so tagged. I once read in a book of a study in which subjects were asked to generate ideas for, IIRC, putting out a fire, and to stop only when they were convinced they had thought up all the good ones; they usually stopped when they had thought up only a third. I have been unable to refind it and would appreciate knowing details if this description rings any bells for a reader.↩︎

  16. Chapter 17, Tao Teh Ching↩︎

  17. For example, my clean-up and extension of the browse-url module was completely rewritten by ; so I can hardly take credit there.↩︎

  18. Henceforth, this implies I have a commit-bit (or equivalent) for that project.↩︎

  19. Henceforth, ‘cleanup’ should be taken as referring to extensive miscellaneous changes which include (in no particular order):

    • fixing GHC’s -Wall or hlint warnings

    • replacing OPTIONS pragmas with LANGUAGE pragmas

    • tracking down licensing information

    • switching from Haskell98 imports to the standard hierarchical module imports

      1. eg. import Char → import Data.Char; nontrivial in some cases where Haskell98 modules were dispersed over multiple base modules
    • reorganizing the file tree

    • improving the Cabalization

    • whitespace formatting, and so on.

  20. Henceforth, this typically implies that I uploaded it to Hackage as well↩︎

  21. Henceforth, this implies that I made whatever changes were necessary to get it compiling on GHC 6.8.x and 6.10.x↩︎