
Links

Who am I online & what have I done? - Contact information; sites I use; things I've worked on (Haskell, personal, psychology, survey)
created: 05 Aug 2009; modified: 07 Dec 2018; status: finished; confidence: highly likely; importance: 3

This page is about me; for information about gwern.net, see About This Website.

Personal

A transition from an author’s book to his conversation, is too often like an entrance into a large city, after a distant prospect. Remotely, we see nothing but spires of temples and turrets of palaces, and imagine it the residence of splendour, grandeur and magnificence; but when we have passed the gates, we find it perplexed with narrow passages, disgraced with despicable cottages, embarrassed with obstructions, and clouded with smoke.1

Behind a remarkable scholar one often finds a mediocre man, and behind a mediocre artist, often, a remarkable man.2

The reader lives faster than life, the writer lives slower.3

Work

I am a freelance writer & researcher. (To make ends meet, I sell advertising on gwern.net, have a Patreon, benefit from Bitcoin appreciation thanks to some old coins, and live frugally.) I have worked for, published in, or consulted for: Wired (2015), MIRI/SIAI4 (2012-2013), CFAR (2012), GiveWell (2017), the FBI (2016), A Global Village (2013), Cool Tools (2013), Quantimodo (2013), New World Encyclopedia (2006), Bitcoin Weekly (2011), Mobify (2013-2014), Bellroy (2013-2014), Dominic Frisby (2014), and private clients (2009-); everything on gwern.net should be considered my own viewpoint or writing unless otherwise specified by a representative or publication. I am currently not accepting new commissions.

Websites

I have no connection to the French singer, to gwern.com, to any locations in Wales, to the gwern on MySpace, or to either account on Pivory.com (both of which are connected to an attempted extortion of me).

Wikis

I have been active on the English Wikipedia and related projects since January 2004. Cumulatively9, I have over 90,000 edits and have written or worked on hundreds of articles; during my time as an English Wikipedia administrator, I performed thousands of administrative actions; and I am an admin on the Haskell wiki, handling routine spam & vandalism.

I also ran a custom Google search tool, Wikipedia Reliable Sources for anime & manga: a custom search engine with >4542 websites on its blacklist & whitelist. (The source/lists are publicly available.) It returns much more useful10 results for topics in popular culture and, as the name suggests, anime & manga in particular.

Uses This

Software

I run Ubuntu Linux with a tiling window manager & CLI-centric habits. (I prefer Debian, but NVIDIA driver support has been better on Ubuntu, so as long as I need GPU acceleration, I will be using Ubuntu.) I began using tiling window managers with ratpoison and helped drive the initial development of StumpWM and then xmonad (my config), which I still use in conjunction with MATE, a fork of the last good GNOME desktop environment version before the crazy GNOME 3 ruined everything.
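
For concreteness, a minimal xmonad.hs in the spirit of this setup - a sketch only, not my actual config; the desktopConfig base and the particular settings are illustrative choices:

```haskell
-- Minimal xmonad.hs sketch: xmonad running inside a desktop environment such as MATE.
-- Illustrative only; not my actual configuration.
import XMonad
import XMonad.Config.Desktop (desktopConfig) -- cooperates with a DE's panel & session manager

main :: IO ()
main = xmonad desktopConfig
    { terminal = "urxvt"  -- the terminal mentioned below
    , modMask  = mod4Mask -- use the Super key rather than Alt
    }
```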

I spend most of my time in Emacs editing Markdown (my config), Firefox (extensions: Evernote plugin, HTTPS Everywhere, NoScript, uBlock Origin, LastPass, RECAP), or urxvt/Bash/screen. Most of my R/Haskell/Python programming is done in a REPL + Emacs. (Friends don’t let friends use heroin or org-mode.)

Miscellaneously: I use Mnemosyne for spaced repetition, Liferea for RSS, Evernote/NixNote for clippings/notes, rTorrent for downloads, mpv/Clementine for media playing, irssi for IRC, arbtt for time-tracking, ledger for finances, Google Calendar for scheduling/reminders, Redshift for tinting the screen at night to help with bedtimes, and duplicity for backups.

Oolong reminds you to take a typing & computer break every hour.

Hardware

A photo of my workstation & window view as of 29 July 2018, showing the used Aeron chair, Dell monitor in portrait mode, and the workstation; sleeping in the cat tree is my cat, Oolong.

As of November 2018, I use a workstation PC (which I built myself), a large Dell monitor mounted in portrait mode for reading11, a 200-foot Ethernet cable (which required digging a trench to the next house), a Logitech thumb trackball, a generic keyboard (to be replaced by a Kinesis Advantage keyboard once I figure out how to fix the keymapping), and Bose noise-canceling earphones. The workstation is plugged into a 900W UPS for protection against the not-infrequent lightning storms here, and backs up daily incrementals to a 6TB external drive, supplemented by Backblaze B2 (~$4/month) & miscellaneous external drives. While traveling, I use my ThinkPad P70 laptop, which replaced an Acer Aspire V17 (which died in a most unfortunate way), which replaced a Dell Studio 17, which replaced a PC I built ~2008.

I designed the workstation to be useful for deep learning, reinforcement learning, and Bayesian statistics; unfortunately, those are fairly contradictory requirements (DRL wants RAM+CPU while DL wants mostly GPU), so I settled on a Threadripper+dual-GPU design (while not forgetting that IO is often a bottleneck), and the result wound up being much more expensive than I would’ve liked. (I went overboard on RAM in part because I was frustrated by how I kept hitting RAM limits while testing out various dynamic programming algorithms for the Kelly coin-flip game, and because that much RAM means that entire datasets can be cached or worked with in-memory in R/Python, saving the considerable complexity of out-of-core algorithms or optimizations.)
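
To give a concrete sense of the kind of dynamic programming involved, here is a toy memoized value-iteration sketch for a Kelly-style coin-flip game; the parameters (a 60%-favorable coin, a $250 cap, whole-dollar bets) and the code itself are purely illustrative and not my actual analysis - the toy table is tiny, but finer discretizations of wealth or extra state dimensions multiply it quickly, which is presumably how one hits RAM limits:

```haskell
-- Toy value iteration for a capped coin-flip betting game, memoized with a lazy array.
-- Parameters are illustrative assumptions, not the exact game I analyzed.
import Data.Array

-- Expected final wealth, maximized over whole-dollar bets, starting from wealth w
-- with r rounds remaining; wealth is capped at `cap`.
value :: Int -> Int -> Array (Int, Int) Double
value cap rounds = table
  where
    table = array ((0, 0), (cap, rounds))
              [ ((w, r), go w r) | w <- [0 .. cap], r <- [0 .. rounds] ]
    go w 0 = fromIntegral w        -- no rounds left: keep what you have
    go 0 _ = 0                     -- broke: nothing more to win
    go w r = maximum [ 0.6 * at (min cap (w + b)) (r - 1)   -- win the bet
                     + 0.4 * at (w - b) (r - 1)             -- lose the bet
                     | b <- [0 .. w] ]                      -- bet any whole-dollar amount
    at w r = table ! (w, r)

-- e.g. value 250 300 ! (25, 300) is the expected payout from $25 with 300 bets left.
```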

The workstation is a liquid-cooled AMD Threadripper CPU build on a Gigabyte X399 Designare EX motherboard, with 2x1080ti NVIDIA GPUs, 110GB RAM (nominally 128GB, but the final stick is unusable due to apparent BIOS issues), a 1TB NVMe drive for OS/home, and an 8TB internal HDD for bulk storage, all in an (unnecessary but too fun not to have) tempered-glass case. The process of putting it together was difficult - motherboards/CPUs/GPUs have gotten more complex since I last built a PC back in 2008: the first motherboard stubbornly refused to boot, and after I RMA’d it to Newegg (at a cost of $36), the second one initially worked but then died overnight.12 After tinkering & procrastinating for months, I gave up on the Asus motherboard, checked what Puget Systems was using for their Threadripper builds (ThinkMate was still not offering any), and copied their choice of Gigabyte X399 Designare EX motherboards, reasoning that if they were shipping hundreds of such systems, it must be relatively reliable; that motherboard, plus much more forcefully inserting the Threadripper CPU, finally worked, and I was able to switch everything over in June 2018. While the final result was as powerful and useful as I hoped (especially for working with Danbooru2017, where the 16 cores + 2 GPUs allow me to create many different specialized datasets & experiment with many different GAN architectures), the experience of building it has soured me on building my own PCs in the future: I clearly no longer know enough about PC hardware to do a good job, and the more expensive the components, the less I enjoy the risk or fact of bricking them. In the future I will probably either rely more on cloud solutions or bite the bullet & buy prebuilt systems. The workstation parts list (PCPartPicker.com sketch):

For scanning books, I use a 12-inch guillotine paper cutter to debind books evenly (a big upgrade from using X-Acto knives with Fiskars curved blades), and an Epson sheet-fed scanner, with imagescan for scanning & gscan2pdf for post-processing.

My desk is an old desk made out of plywood & plumbing hardware by my great-grandfather for my aunt; I repurposed it when I realized it was the perfect size and height. I put the desk in front of my bay window so I could enjoy the view and rest my eyes, while watching what happens on the river. The bay window unfortunately often has direct sunlight through it, so I added reflective sheeting, which greatly reduces the heat during the summer (at the cost of making it gloomier in winter, of course, but that is why I have bright LED bulbs). The chair is a used Aeron chair I bought off Craigslist for $225 in November 2016 (a bargain, although I doubt I would pay a list price like $1200). The sisal cat tree (Petco) provides an excellent perch for my cat, Oolong, and I have added a pet flap with a cat window sill so he can more easily come & go, with acrylic sheeting to reduce air flow. (Oolong turns out to greatly dislike soft surfaces, so half of the cat window sill was useless! I had to replace the foam padding & cover with a sheet of plywood I cut to fit.) The box fan by my feet (Walmart, $19) & the workstation both rest on rubber-cork anti-vibration pads. To reduce RSI, I keep a grip exerciser around to use during idle moments like watching videos. For making tea, I boil water in a simple adjustable electric tea kettle which I’ve made programmable by drilling a hole into the clear plastic & inserting a meat thermometer (which combination is far cheaper than electronic kettles and more trustworthy); I then steep the tea in a Finum filter inside a big Colonial Williamsburg ceramic fox mug.

Mailing lists

MOOCs

Finished:

Incomplete:

Abandoned:

Profile

This section covers some of the most important things one can know about me: my personality and a mental description. No doubt some readers expected a carefully airbrushed & potted biography describing where & when I was raised, what my familial & tribal affiliations are, or what famous institutions I am affiliated with - even though this information is almost entirely useless: what can one predict about me from knowing that I was born in Illinois and raised on Long Island, besides (maybe) my accent and a general liberalism? The irony - that people most want the information they will learn the least from - will not be lost on those familiar with signaling. In contrast, standardized & validated psychometric instruments like the NEO-PI-R or RAPM really do have predictive validity for many life outcomes.

(Much of this data comes from YourMorals.org. I plan to retake the surveys, if possible, every decade; it will be interesting to see what changes.)

Personality

To describe my personality briefly: I am introverted, calm, neither particularly industrious nor lazy, contrary, and pathologically curious. I have made a copy of my 2011-2014 responses to the YourMorals.org corpus, discussed in more detail below. My scores on the Big 5 Personality Inventory, in its short & long (1/2/3) versions:

  1. Openness to Experience13: high (short) or 87/87th percentile (long)
  2. Conscientiousness14: medium or 64/69th
  3. Extraversion15: low or 6/7th percentile
  4. Agreeableness16: medium-low or 3/3rd percentile
  5. Neuroticism17: medium-low or 16/13th percentile

For those who enjoy playing the game of ad hominem via lay psychiatric diagnosis, may I suggest accusing me not of Asperger syndrome - which is so overdone - but of something more novel & scary-sounding, like schizoid personality disorder?

Philosophy/morals

The relevant results

Politics

IQ

At the risk of alienating readers even further, I will reveal that I have taken IQ tests 3 times that I know of:

  1. At some point in 3rd-5th grade, I took the Abbreviated Stanford-Binet and scored ~135. (I came across the report while cleaning up a room as a child and was not able to keep it.)
  2. In February 2009, for the purpose of a before-after dual n-back comparison, I took the Raven’s test at iqtest.dk and scored 115. (Others report that they too received low scores; it seems, based on emails, that the maintainer renormed it on the population of online test-takers but has failed to disclose this publicly, which means scores will be low by an unknown amount - possibly somewhere around 0.5 standard deviations - in addition to the usual large measurement error in any short single-form IQ test.)
  3. On 5 August 2011, I signed up for and took the entrance survey to the prediction contest Good Judgment Project; the survey included, among other things, a short Raven’s test. My survey results include the raw data but not any norm: of the 12 questions, I got 8 right, while the mean among participants was 8.81 and the SD 2.39 (a rough conversion is sketched below).
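
For what it’s worth, a back-of-the-envelope conversion of that raw score, assuming (purely for illustration) approximate normality and an SD-15 IQ-style scale, and remembering that the norm group is GJP participants rather than the general population:

```haskell
-- Rough conversion of the GJP Raven's raw score (8/12) to a z-score and an IQ-style figure.
-- Assumptions: approximate normality, SD-15 scale; the norm is the GJP participant pool,
-- not the general population.
ravenZ :: Double
ravenZ = (8 - 8.81) / 2.39      -- ~ -0.34 SDs below the participant mean

ravenIQ :: Double
ravenIQ = 100 + 15 * ravenZ     -- ~ 95, relative to GJP participants
```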

IQ can also be approximated by standardized tests which are heavily g-loaded; mine are broadly consistent with an IQ in the 130s:

  1. 2004 SAT: 800V/700M (conversion)
  2. 2004 ACT: 32
  3. 2009 GRE: 730V/680M/5.5W (conversion)

Contact

  • Email: [email protected]; I do not use Skype.
  • Bitcoin: 1Gb89tyJq3P5K5M3GcpFvPrMsw33cik9wX (canonical address; used for #bitcoin-otc trading)
  • PGP key (mirror; fingerprint: 89C588CC; my old key, F7E5D682, is no longer usable)
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFUXRioBEAC9iINWQuiLTVnA05ZwU8eMhcL19kiy8rWehWkpUHNMhLCw6ZCg
k9K91WW6WBEqEHPadkL6cSQjNlduyGsEui2OZYxpMSM+1ugzEI6eNaMH8fynHpyZ
Z+35nAl11kRM5/xZ3WoUtO/x6U7HR8zRt+A+gzpz1eDkT8gpjBFHFHFnCdNJQe+z
OvWMrcK0oZexdlAonV4xp3izj6dsLthoCE3nCtvLlWyZxW8JtAOD5a2SmLVxHXdM
/p11xQsYUrJAxgTkyQCRk182SstpynKuI0r+/H36Is61edSUM3nsXR6Xjb995bTS
9zcQgZvP35y5Z6Dt8oCjbK5dHJVHDPsM5c0Y71e09vRppPds/7bJO/hEq2jXYtj3
cYL7EJp3ZB/FRDshFpsrZ+Pw2TimNPFUHp+Z3mUoEqzL3nUvM/GuYx5o2bTpwT1K
g1stCHiynwHsyx60J7e5LiTeL0OtxevxfLlkQYyEbohiyJVWHxWyXCpDTVYdj3ya
fBiDbC+j8aI2XMBC0Uf7rOHyKkV1du6pjz4bDqguv4Pj9vIkI7v/VvNtiRvtEgkk
ae0Zbx0UYiso4Bjuy1uemjSK4XiUL8kqKS5XrJmYsMFm6WI/K8yllciZJZ0Oi/q+
utzwBnnl37ys/3bGKraEFyXzQ0Jh72u2suYyXXoN28APMIShppPDX86PhQARAQAB
tB9Hd2VybiBCcmFud2VuIDxnd2VybkBnd2Vybi5uZXQ+iQI+BBMBCgAoBQJVF0Yq
AhsDBQkFo5qABgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRB9zqOHicWIzIhe
EACLkkDxAHCql3LVWAONxx0DbnFAJ8k5TpYM5riDe9n8cjBJReGl5JZ4A52a5J70
GDFGDPP1v4WZ/tbi4vBIfSzEzox/6OI72QgV/JJm/ILloyNWjQWAWRnDQp6KjgxF
wCQ0E3Hn8/bNnOMlLSQEDBwG0/xONY1B2tR/v4tYuUhajTdor3LDrDN09wa31Vw6
5wcC1zuWbb10R89pNrFGSVl7dDZ0wneYG5sJlOjDvFXe9IkxjJemM+Z7sHb1UVg3
aBXdSAG/i0+E5tL0Bi4OymehTw5h6681GwO5SkmmAVfTiYqFEeQbBBPdMwe5YnKA
moos1PO9dhtJs5tYqKlxnxoZ2LkQqKkJsbOkDz3F0s2dZb0RnyTO9e2Et8Kiy0vT
exMtO+QYCy2RNVepgP3RGuTJA5R53KBPgV9l+Vm5uzQa9JYn02sw9W0FZaEIzmE9
2hHQ9XqwBwidaXqUQg+iwFYofhpbHMnjlPGYICDH3VdoGXg3HreYZaFWM/wOEHdj
POwyPNBGWg26uUsP8v1PYuW60jdiFvLYXGha/5jc3YUff37oAijvbWGnQGTb7w7W
IKN8PGuwQ0RanK3nMM5z92bpDJanKngAsQFMluQHr2Yc7E+sst3BorULiBLGtIef
cMQEy2jMStrfcjthUBi/jwqhDuhrVl55ddpaJzSDBxwIebkCDQRVF0YqARAA16rd
/VKV+TA918awlLym6q47SZPzuWjjwhJk4oq6vKhLeVIfONcts4UO0vL3jArb45TP
QmCdkJy+L+M7nudS6RrjDV1MRU5lxJD1Q8TQrkZJK/G1UGdnd7zPsIoEoPZ6jDm1
ddvmjA5XZ5Sg0SI3i1OgkpwNZ60M8tVrHLlb0nUiPmgZv0QDc/A435uv6YrlXMip
S98/I4NC0rRHg/mJbPw0noKzApYZh8OTvzQT0/rt4yql8LylJMdxeHZXpQY4UKOw
40bOT7PUfn/GOPVWEUI6rO6HrxyDoWXrct9Iu02wUXQsn/IIJzOXBLrKxDL2tHrS
tnWb71Jy3WnPW8y2PjsTuIvT7ExRj6mh1ZbHY8QjaBiBQD4nT1InoA9pdtt9SOa9
TOdVaAjQjNRm4Ng4OT7GV203TXODGHdeheqTmIxDXTrBRdgeViDxkTXMn2zneWeP
C5reBmBTjP/D3CZpDMTkGAYglvfnhPxCKPuLDkppPQbraHB5lFFJqpZZOF8D2yR+
vOrgZ07Ynxgs7PLFlLnx5PXxEIt5sVbiYNyCsY1PKELtHAPcGVcKNNlPJbl4EUeD
U83WAlMUEt/IxiCMK4d+bZbdRxggRGHm48QnIqA4LBm2Y+aAc3mxlkFtSqLAFTZ9
Jm2yT8tkVVCMnFaL8w/Jp1RuQpYlqjW3WyLa2JEAEQEAAYkCJQQYAQoADwUCVRdG
KgIbDAUJBaOagAAKCRB9zqOHicWIzPxpEAC2vayHYq+qVc9FRbWLx6C6lzOHn3Rh
W85KFoGIA1ofDM4N1V5MxkzwBbRxynlCST1/4MGOo0E8lFsfm8zeutf3W/NUuDsU
aX/9CA5FKQ6TAoE6IHp/JgQH50A8ghRLFS7kHxrtVieigQM0D5a1EWyRaql35obZ
7YjXD2siySLP90GwJfzFcizjAXUbIW6Ui8d4Ek6L3FFrhSjY4MQKHTON/5gKqgB9
oaJMl+Ohsh+Sx5r0YIcbLM8yTUyXo09FNV/YG9YTOXm2SZYE/92Hw+nkEB/MhOg9
xKgZlO+u17irr+hEEsHRSM8tRiRhTXUJCe1X1Li6mL5QEfZzviTIvIQsyUs1T9sJ
5viZw4wR+ZZJzp4r/XqMvI0w3bMGp0Ds0ZVOOAQJ8W489kj6t+PJMQvPUah+5i8d
ooAE81SDDUARIVYuSGuRl3nQ7eSc0xasJAtaLv20tD+2NyREENkA9JIVfmU6P9ls
YL7HhP4FhJN2LS1+YMPfwS6i7z4sgysiHlegLQXvdSTKgqRR5+Mk6Fb+XEmNPZCi
7ZSYeNkloEarLBvtitX0PWDdAVlOEXu9BKkL0L5uONXE8F/hU7Hfn2SekaCAkpKp
0mX4C3KyWYde+vej/gsKjMai+CrqtdhYSFMiirKn2LBOMgL6mKsVCiWDHlCdrmNA
L4v1JQSDigejQQ==
=joEZ
-----END PGP PUBLIC KEY BLOCK-----

Collaboration style

Once on #haskell, I was asked why I have no large programs to my credit; I replied, “My problem is that most programs I use already exist.”

I am not a bad Haskell programmer (although I am no guru like Simon Peyton-Jones, Apfelmus, or Don Stewart), but given how long I’ve been using Haskell, my contributions probably look pretty slim. This isn’t because I don’t like Haskell - I do; I find functional programming natural: defining transformation after transformation until the result is what I need. And of the functional languages, Haskell seems the best combination of power (beyond basic arithmetic or list processing), one of the best ecosystems, and a good base language. (Which is not to say it’s perfect: there are some sharp edges in the basic math which irritate me when I’m messing around in the REPL.)
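
As a toy illustration of what I mean by “transformation after transformation” (not drawn from any of my actual projects):

```haskell
-- A toy pipeline in the "transformation after transformation" style:
-- each stage is a small pure function, composed until the result is what I need.
import Data.List (sort)

topEvens :: [Int] -> String
topEvens = unwords . map show . take 3 . reverse . sort . filter even

-- e.g. topEvens [7,2,9,4,10,3,8] == "10 8 4"
```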

My contributions look slim partly because of my style of contribution. I’ve always preferred to work on existing applications and libraries rather than write my own. I’ve always preferred to take someone else’s work and bring it up to snuff rather than write a clean implementation of my own. I’ve always preferred to prod the author or maintainer into doing the right thing rather than drop a large batch of patches onto them. Likewise, I view it as better to use Haskell standards like Cabal or Darcs than something like Autotools, even if the latter lets us manage just a little more automation, and better to upload to Hackage than to use any fancy site like GitHub or SourceForge.

It’s better to do the yeoman’s work of taking two similar modules in two applications and splitting them out into a library than to write even the fanciest purely functional finger tree using monoids. Better to commit changes that reduce user configs by a line than to demonstrate once again the elegance of monads. Better by far to file a bug than to wank around in #haskell golfing expressions.

It is much better to find some people who have tried in the past to solve a problem and bring them together to solve it, than to solve it yourself - even if it means being a footnote (or less) in the announcement. What’s important is that it got done, and people will be using it. Not the credit. It is a high accomplishment indeed to factor out a bit of functionality into a library and make every possible user actually use it. Would that more Haskellers had this mindset! Indeed, would that more people in general had this mindset; as it is, people have bad habits of repeatedly failing when they think they have special information, are highly overconfident even in objective areas with quick feedback, and badly overestimate how many good ideas they can come up with18 - indeed, most good ideas are Not Invented Here. One should be able to draw upon the wisdom of others.

This is an ethos I learned working with the inclusionists of Wikipedia. No code is so bad that it contains no good; the most valuable code is that used by other code; credit is less important than work; a steady stream of small trivial improvements is better than occasional massive edits.

A leader is best when people barely know that he exists, not so good when people obey and acclaim him, worst when they despise him. Fail to honor people, They fail to honor you. But of a good leader, who talks little, when his work is done, his aims fulfilled, they will all say, We did this ourselves.19

This is not an ethos calculated to impress. Filing bug reports, helping newbies, commenting on articles and code, cabalizing & uploading code - these are things hard to evaluate or take credit for. They are useful - useful indeed (shepheb or, eg., I never boast in #xmonad of having helped 5 newbies today, but over the months and years this friendliness and ready aid is of greater value than any module in all of XMonadContrib) - but they will never impress an interviewer or earn a fellowship. Is that too bad? Did I waste all my time?

I don’t think so. I value my contributions, and the Haskell community is better for them. They may have made my life a little more difficult - all that time spent on Haskell matters is time I did not devote to classes or jobs or what-have-you - but ultimately they did help somebody. One could do worse things with one’s time than that.

Coding contributions

I mostly contribute to projects in Haskell, my favorite language; I have contributed to non-Haskell projects such as StumpWM, Mnemosyne, GNU Emacs20, etc., but not in major ways, so I do not list them here:

arbtt

  • wrote tutorial on configuring the time-tracker & defining rules: Effective Use of arbtt
  • documented dependencies, similar software, configuration syntax mode, CLI flag corrections

Darcs

  1. Switched from FastPackedStrings to ByteStrings
  2. Low-level C optimization
  3. Initiated Cabalization (my work initially appeared as darcs-cabalized, and then it was merged into HEAD and darcs-cabalized deprecated)
  4. Refactoring of shell tests
  5. Initiated switch from MoinMoin wiki to Gitit
  6. Identified performance issue & instigated addition of --max-count option for Filestore

XMonad

  1. regular XMonadContrib patch reviews
  2. Config archive downloader
  3. Contributed modules (a brief usage sketch follows this list):
    1. XMonad.Util.Paste
    2. XMonad.Actions.Search
    3. XMonad.Actions.WindowGo
    4. XMonad.Util.XSelection
  4. Maintain previous21
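
For a sense of what these modules do in practice, here is a hedged sketch of how they might be wired into an xmonad.hs; the keybindings are arbitrary choices of mine, and the modules’ Haddock documentation is the authoritative reference:

```haskell
-- Illustrative keybindings using the contributed modules listed above (not my real config).
import XMonad
import XMonad.Util.EZConfig (additionalKeysP)
import XMonad.Util.Paste (pasteSelection)
import XMonad.Actions.Search (promptSearch, selectSearch, google)
import XMonad.Actions.WindowGo (runOrRaise)

main :: IO ()
main = xmonad $ def `additionalKeysP`
    [ ("M-s",   promptSearch def google)                        -- prompt for a query, open it in the browser
    , ("M-S-s", selectSearch google)                            -- search for the current X selection
    , ("M-o",   runOrRaise "firefox" (className =? "Firefox"))  -- focus Firefox, launching it if needed
    , ("M-v",   pasteSelection)                                 -- type out the X selection as keystrokes
    ]
```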

Yi

  1. Contributed modules:
    1. Yi.IReader
    2. Yi.Mode.IReader
    3. Yi.Hoogle
  2. Improved Emacs keybindings
  3. Initiated Unicodify or Pretty Lambdas feature for Haskell syntax highlighting
  4. Added movement-related functions for improved incremental search
  5. Cleanup22
  6. Comment support to cabal-mode

Lambdabot

  1. (Re)Cabalized23
  2. Adapted to use Mueval
  3. Refactored out code in multiple packages:
    1. show
    2. lambdabot-utils
    3. brainfuck
    4. unlambda
  4. Implemented run-in-any-directory functionality (previously Lambdabot could only run in the repository directory)
  5. Cleanup
  6. Maintain it (with Cale Gibbard)

Gitit

  • Wrote Darcs backend (which was moved to the filestore package and became Data.FileStore.Darcs)
  • Did some optimization work (images, JavaScript & CSS minification, wrote gzip encoding & initiated expire headers, JS relocation, fewer calls to expensive filestore functions)
  • Wrote RSS support
  • Wrote Interwiki plugin
  • Wrote Date plugin
  • Wrote WebArchiver & WebArchiverBot plugins (see later archiver standalone tool/library)
  • Wrote Unicode plugin
  • Wrote HCAR entry
  • Misc. bug reports & suggestions
  • Added PDF export functionality
  • Integrated JQuery-based floating footnotes

Filestore

  • Instigated its development/use in Gitit & Orchid
  • Maintain the Darcs backend (debug & optimize)

Mueval

  • Wrote and maintain it

wp-archivebot

archiver

Change-monger

  • Wrote and maintain it

Base libraries

Base

Unix

  • fixed a possible runtime crash in mkstemp
  • added mkstemp docs

Autoproc

  • Cleanup
  • Improved basic functionality
  • Implemented an XMonad-style reload system to allow actual customization
  • Maintain it

Frag

  • Updated for GHC 6.8 & 6.1024
  • Cleanup
  • Replaced the non-Free level data and graphics with Free ones

HalFS

  • Updated
  • Improved cabalization

Shu-thing, Monadius

  • Linux portability fixes
  • Cabalized
  • Cleanup

Hint

  • Improved examples, docs
  • Added UTF8 support
  • Made use ghc-paths library
  • Enabled QuickCheck support
  • Added GHC-options support

Hlint

  • added GHCi integration

Pugs

  • Cleaned up their third-party modules
  • Fixed up various Cabal issues
  • Helped maintain it

ZFS

  • Cabalized
  • Cleanup

Greencard

  • Updated
  • Cabalized & did the package split

ArrayRef

  • Cabalized
  • Cleanup
  • Updated

Hashell

  • Updated for 6.8’s GHC API
  • Cleanup
  • Cabalized

QuickCheck

  • Prototyped the Data.Complex instance

GenI

  • Improved Cabalization

HArchive

  • Cabalized

HaLeX

  • Cabalized

HTF

  • Cabalized

PArrows

  • Cabalized

Baskell

  • Cabalized

Mage

  • Cabalized
  • Cleanup

Haskell In Space

  • Cabalized
  • Cleanup
  • Updated

Smallcheck

  • Cabalized

Topkata

  • Improved Cabalization

HsSyck

  • ByteString updates
  • Improved cabalization

HList

  • Updated
  • Cabalized

flow2dot

  • Updated

hinvaders

  • Cabalized
  • Updated

Whim

  • Cabalized whim

Tagsoup

  • replaced old custom HTTP download code with standard library functions

The others & the rest

I cabalized and/or uploaded (according to the 10 May 2013 Hackage upload log):


  1. Dr. Samuel Johnson; The Rambler, No. 14 (5 May 1750). This is a literary way of saying I am not as interesting as my writings, and in some respect, it should not matter who I am or what I have done because argument screens off authority.

  2. #137, Friedrich Nietzsche’s Beyond Good and Evil: Prelude to a Philosophy of the Future

  3. Even More Aphorisms and Ten-Second Essays from Vectors 3.0, James Richardson

  4. When I say research assistant, I mean it in the older sense of someone who does detail work for another person’s original research - so I spend a lot of time reading up on specific areas and making notes about stuff my boss needs, and only occasionally do independent work. Not all my work can be made public, but some of it is. A partial list in rough chronological order:

  5. Frank Herbert, Dune Messiah

  6. The following is a list of my submissions to LW I regard as substantive or particularly good, excluding content which can be found on gwern.net, in chronological order with interesting ones highlighted:

  7. Of course, I don’t agree with every SIAI or LW position. The intellectual homogeneity has been much over-estimated by outsiders who have not bothered to look at the annual surveys, I think. Here are some major points of disagreement for me:

    1. MWI: I think that LWers who were persuaded by Eliezer’s MWI writings are wrong to be, as they are unfamiliar with even the rudiments of any alternative interpretations and cannot judge the matter; how many LWers have ever seriously looked at all the competing theories, or could even name many alternatives (Collapse, MWI, uh…), much less discuss why they dislike pilot waves or whatever? Lacking any real understanding, they ought to simply adopt the expert consensus, where MWI seems to have a plurality or bare majority of adherents (with the weak confidence that implies).
    2. Heuristics and cognitive biases: I am not much convinced that knowledge of heuristics & biases helps in ordinary life. Feedback & learning are powerful tools for eliminating error and calibrating predictions, and they can justify committing what may look like the sunk cost fallacy; and feedback is what one gets in ordinary life.

      Per Moravec’s paradox, where our knowledge of heuristics & biases will pay off most is in what Hanson would call Far scenarios: evolutionarily novel situations with few precedents and only costly or non-existent feedback. (For example, the question of whether artificial intelligence will be developed by 2040: it will only happen or not once, there are few comparable events, the consequences may be dramatic, and our ordinary lives offer no useful insights.) As it happens, this describes much of futurism & forecasting - but we cannot justify our futurism by claiming its techniques are incredibly valuable in ordinary life!
    3. Cryonics girl: The donations appall me, for reasons I lay out at length there - they are a complete abandonment of core ideas like utilitarianism & optimal philanthropy.
    4. Alicorn’s Living Luminously paradigm struck me as dubious, not backed by even token research, and likely idiosyncratic to her; I thought her Luminosity e-novel was merely OK despite the endless discussions on LW (rivaling those for Methods of Rationality itself) and that her followup, Radiance, was just terrible. Nevertheless, her novel career seems to continue.

  8. There is a moderately funny story about how Gerard came to write it, based on my musical incompetence.

  9. That is, summing up the (surviving) edits of my various accounts over the years: User:Gwern, User:Marudubshinki, & User:Rhwawn

  10. Compare the CSE results with the Google Results for the anime Wings of Honnêamise. Which is more useful for an editor? For more details, see my release announcement.

  11. A trick I discovered when visiting FHI in 2015 - I had used widescreen laptops for so long I had forgotten how nice portrait-orientation was for reading.

  12. My best guess is that I initially underestimated, badly, how much pressure it takes to insert a Threadripper CPU into its socket - it required a truly terrifying amount of force, and I only got it right after triple-checking online tutorials & videos & discussions - and that this is why the first motherboard never worked at all, while the second one was killed by static electricity or a short.

  13. See also Actively Open-Minded Thinking Scale, Clarity Scale, Engagement with Beauty, & a measure of what types of stories you enjoy.

  14. See also Zimbardo Time Perspective Inventory. Brent W. Roberts criticizes these two inventories when used to measure Conscientiousness.

  15. See also Relational Mobility scale, Empathizing and Systemizing scales & Rational vs Experiential Inventory.

  16. See also Self-Report Psychopathy Scale.

  17. See also Experience in Purchasing Behavior Scale & Kentucky Inventory of Mindfulness Skills.

  18. For further reading on overconfidence, see all LW articles so tagged. I once read in a book about a study in which subjects were asked to generate ideas for (IIRC) putting out a fire, and to stop only when they were convinced they had thought of all the good ones; they usually stopped when they had thought of only a third. I have been unable to refind it and would appreciate knowing details if this description rings any bells for a reader.

  19. Chapter 17, Tao Teh Ching

  20. For example, my clean-up and extension of the browse-url module was completely rewritten by RMS; so I can hardly take credit there.

  21. Henceforth, this implies I have a commit-bit (or equivalent) for that project.

  22. Henceforth, cleanup should be taken as referring to extensive miscellaneous changes which include, in no particular order (a small before/after illustration follows the list):

    • fixing GHC’s -Wall or HLint warnings
    • replacing OPTIONS pragmas with LANGUAGE pragmas
    • tracking down licensing information
    • switching from Haskell98 imports to the standard hierarchical module imports
      1. eg. import Char -> import Data.Char; nontrivial in some cases where Haskell98 modules were dispersed
    • reorganizing the file tree
    • improving the Cabalization
    • whitespace formatting, and so on.
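
    As a tiny, hypothetical before/after illustration of these mechanical changes (the particular pragma and import are examples, not from any specific package):

    ```haskell
    -- Before: Haskell98-era style
    {-# OPTIONS_GHC -fglasgow-exts #-}
    import Char (toUpper)

    -- After: the kind of cleanup described above
    {-# LANGUAGE ScopedTypeVariables #-} -- whichever specific extensions the file actually needs
    import Data.Char (toUpper)
    ```
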
  23. Henceforth, this typically implies that I uploaded it to Hackage as well.

  24. Henceforth, this implies that I made whatever changes were necessary to get it compiling on GHC 6.8.x and 6.10.x.