Meta page describing gwern.net site ideals of stable long-term essays which improve over time; technical decisions using Markdown and static hosting; idea sources and writing methodology; metadata definitions; semi-annual web traffic statistics; copyright license
topics: personal, psychology, archiving, statistics, predictions, meta, shell, R, Bayes, Google
created: 01 Oct 2010; modified: 15 Aug 2019; status: finished; confidence: highly likely;

This page is about gwern.net; for information about me, see .

# The Content

“Ah! let not Censure term our fate our choice, / The stage but echoes back the public’s voice; / The drama’s laws the drama’s patrons give, / For we that live to please must please to live.”

The content here varies from to to / to to to to to investigations of or (or two topics at once: or or heck !). It is everything I felt worth writing for the past few years that didn’t fit somewhere like Wikipedia or was already written—“…I realised that I wanted to read about them what I myself knew. More than this—what only I knew. Deprived of this possibility, I decided to write about them. Hence this book.”1 I never expected to write so much, but I discovered that once I had a hammer, nails were everywhere, and that 2. I believe that someone who has been well-educated will think of something worth writing at least once a week; to a surprising extent, this has been true. (I added ~130 documents to this repository over the first 3 years.) There are many benefits to keeping notes as they allow one to accumulate confirming and especially contradictory evidence3, and even drafts can be useful so you or simply decently respect the opinions of mankind:

“Special knowledge can be a terrible disadvantage if it leads you too far along a path you cannot explain anymore.”

One of my personal interests is applying the idea of the . What and how do you write a personal site with the long-term in mind? We live most of our lives in the future, and the actuarial tables give me until the 2070–2080s, excluding any benefits from / or projects like . It is a common-place in science fiction4 that longevity would cause widespread risk aversion. But on the other hand, it could do the opposite: the longer you live, the more long-shots you can afford to invest in. Someone with a timespan of 70 years has reason to protect against black swans—but also time to look for them.5 It’s worth noting that old people make many short-term choices, as reflected in increased suicide rates and reduced investment in education or new hobbies, and this is not due solely to the ravages of age but the proximity of death—the HIV-infected (but otherwise in perfect health) act similarly short-term.6

What sort of writing could you create if you worked on it (be it ever so rarely) for the next 60 years? What could you do if you started now?7

"Of all the books I have delivered to the presses, none, I think, is as personal as the straggling collection mustered for this hodgepodge, precisely because it abounds in reflections and interpolations. Few things have happened to me, and I have read a great many. Or rather, few things have happened to me more worth remembering than Schopenhauer’s thought or the music of England’s words.

A man sets himself the task of portraying the world. Through the years he peoples a space with images of provinces, kingdoms, mountains, bays, ships, islands, fishes, rooms, instruments, stars, horses, and people. Shortly before his death, he discovers that that patient labyrinth of lines traces the image of his face."8

## Long Site

“The Internet is self destructing paper. A place where anything written is soon destroyed by rapacious competition and the only preservation is to forever copy writing from sheet to sheet faster than they can burn. If it’s worth writing, it’s worth keeping. If it can be kept, it might be worth writing…If you store your writing on a third party site like , or even on your own site, but in the complex format used by blog/wiki software du jour you will lose it forever as soon as hypersonic wings of Internet labor flows direct people’s energies elsewhere. For most information published on the Internet, perhaps that is not a moment too soon, but how can the muse of originality soar when immolating transience brushes every feather?”

(“Self destructing paper”, 5 December 2006)

Keeping the site running that long is a challenge, and leads to the recommendations for : 100% software9, for data, human-readability, avoiding external dependencies1011, and staticness12.

Preserving the content is another challenge. Keeping the content in a like protects against file corruption and makes it easier to mirror the content; regular backups13 help. I have taken additional measures: has archived most pages and almost all external links; the is also archiving pages & external links14. (For details, read .)

One could continue in this vein, devising ever more powerful & robust storage methods (perhaps combine the DVCS with through , a la ), but what is one to fill the storage with?

## Long Content

“What has been done, thought, written, or spoken is not culture; culture is only that fraction which is remembered.”

‘Blog posts’ might be the answer. But I have read blogs for many years and most blog posts are the triumph of the hare over the tortoise. They are meant to be read by a few people on a weekday in 2004 and never again, and are abandoned—and perhaps as Assange says, not a moment too soon. (But isn’t that sad? Isn’t it a terrible return for one’s time?) On the other hand, the best blogs always seem to be building something: they are rough drafts—works in progress16. So I did not wish to write a blog. Then what? More than just “evergreen content”, what would constitute Long Content as opposed to the existing culture of Short Content? How does one live in a Long Now sort of way?17

It’s shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad’Dib knew that every experience carries its lesson.18

My answer is that one uses such a framework to work on projects that are too big to work on normally or too tedious. (Conscientiousness is often lacking online or in volunteer communities19 and many useful things go undone.) Knowing your site will survive for decades to come gives you the mental wherewithal to tackle long-term tasks like gathering information for years, and such persistence can be useful20—if one holds onto every glimmer of genius for years, then even the dullest person may look a bit like a genius himself21. (Even experienced professionals can only write at their peak for a few hours a day—usually , it seems.) Half the challenge of fighting procrastination is the —I find when I actually get of working on even dull tasks, it’s not so bad. So this suggests a solution: never start. Merely have perpetual drafts, which one tweaks from time to time. And the rest takes care of itself. I have a few examples of this:

1. When I read in Wired in 2008 that the obscure working memory exercise called dual n-back (DNB) had been found to increase IQ substantially, I was shocked. IQ is one of the most stubborn properties of one’s mind, one of the most fragile22, the hardest to affect positively23, but also one of the most valuable traits one could have24; if the technique panned out, it would be huge. Unfortunately, DNB requires a major time investment (as in, half an hour daily), which would be a bargain—if it delivers. So, to do DNB or not?

Questions of great import like this are worth studying carefully. The wheels of academia grind exceeding slow, and only a fool expects unanimous answers from fields like psychology. Any attempt to answer the question ‘is DNB worthwhile?’ will require years and cover a breadth of material. This FAQ on DNB is my attempt to cover that breadth over those years.

2. I have been discussing since 2004. The task of interpreting Eva is very difficult; the source works themselves are a major time-sink25, and there are thousands of primary, secondary, and tertiary works to consider—personal essays, interviews, reviews, etc. The net effect is that many Eva fans ‘know’ certain things about Eva, such as Eva not being a grand ‘screw you’ statement by Hideaki Anno, or that the TV series was censored, but they no longer have proof. Because each fan remembers a different subset, they have irreconcilable interpretations. (Half the value of the page for me is having a place to store things I’ve said in countless fora which I can eventually turn into something more systematic.)

To compile claims from all those works, to dig up forgotten references, to scroll through microfilms, buy issues of defunct magazines—all this is enough work to shatter the resolve of the stoutest salaryman. Which is why I began years ago and expect not to finish for years to come. (Finishing by 2020 seems like a good .)

3. : Years ago I was reading the papers of the economist . I recommend his work highly; even if the papers are wrong, they are imaginative and some of the finest speculative fiction I have read. (Except they were non-fiction.) One night I had a dream in which I saw in a flash a medieval city run in part on Hansonian grounds; a version of his . A city must have another city as a rival, and soon I had remembered the strange ’90s idea of s, which was easily tweaked to work in a medieval setting. Finally, between them, was one of my favorite proposals, Buckminster Fuller’s megastructure.

I wrote several drafts but always lost them. Sad26 and discouraged, I abandoned it for years. This fear leads straight into the next example.

4. Once, I didn’t have to keep reading lists. I simply went to the school library shelf where I left off and grabbed the next book. But then I began reading harder books, and they would cite other books, and sometimes would even have horrifying lists of hundreds of other books I ought to read (‘bibliographies’). I tried remembering the most important ones but quickly forgot. So I began keeping a book list on paper. I thought I would throw it away in a few months when I read them all, but somehow it kept growing and growing. I didn’t trust computers to store it before27, but now I do, and it lives on in digital form (currently on —because they have export functionality). With it, I can track how my interests evolved over time28, and what I was reading at the time. I sometimes wonder if I will read them all even by 2070.

What is next? So far the pages will persist through time, and they will gradually improve over time. But a truly Long Now approach would be to make them be improved by time—make them more valuable the more time passes. ( remarks in that a group of monks carved thousands of scriptures into stone, hoping to preserve them for posterity—but posterity would value far more a carefully preserved collection of monk feces, which would tell us countless valuable things about important phenomena like global warming.)

One idea I am exploring is adding long-term predictions like the ones I make on . Many29 pages explicitly or implicitly make predictions about the future. As time passes, predictions would be validated or falsified, providing feedback on the ideas.30

For example, the Evangelion essay’s paradigm implies many things about the future movies in 31; is an extended prediction32 of future plot developments in series; has suggestions about what makes good projects, which could be turned into predictions by applying them to predict success or failure when the next Summer of Code choices are announced. And so on.

I don’t think “Long Content” is simply for working on things which are equivalent to a “” (a work which attempts to be an exhaustive exposition of all that is known—and what has been recently discovered—on a single topic), although monographs clearly would benefit from such an approach. If I wrote a short essay cynically remarking on, say, Al Gore, predicting he’d sell out, registered some predictions, and came back 20 years later to see how it worked out, I would consider this “Long Content” (it gets more interesting with time, as the predictions reach maturation); but one couldn’t consider it a “monograph” in any ordinary sense of the word.

One of the ironies of this approach is that as a , I assign non-trivial probability to the world undergoing massive change during the 21st century due to any of a number of technologies such as artificial intelligence (such as 33) or ; yet here I am, planning as if I and the world were immortal.

I personally believe that one should “think Less Wrong and act Long Now”, if you follow me. I diligently do my daily and n-backing; I carefully design my website and writings to last decades, actively think about how to write material that improves with time, and work on writings that will not be finished for years (if ever). It’s a bit schizophrenic since both are totalized worldviews with drastically conflicting recommendations about where to invest my time. It’s a case of high versus low discount rates; and one could fairly accuse me of committing the , but then, I’m not sure that (certainly, I have more to show for my wasted time than most people).

The Long Now views its proposals like the Clock and the Long Library and as insurance—in case the future turns out to be surprisingly unsurprising. I view these writings similarly. If ’s most ambitious predictions turn out right and the happens by 2050 or so, then much of my writings will be moot, but I will have all the benefits of said Singularity; if the Singularity never happens or ultimately pays off in a very disappointing way, then my writings will be valuable to me. By working on them, I hedge my bets.

## Finding my ideas

To the extent I personally have any method for ‘getting started’ on writing something, it’s to pay attention to anytime you find yourself thinking, “how irritating that there’s no good webpage/Wikipedia article on X” or “I wonder if Y” or “has anyone done Z” or “huh, I just realized that A!” or “this is the third time I’ve had to explain this, jeez.”

The DNB FAQ started because I was irritated people were repeating themselves on the dual n-back mailing list; the modafinil article started because it was a pain to figure out where one could order modafinil; the trio of Death Note articles (, , ) all started because I had an amusing thought about information theory; the page was commissioned after I groused about how deeply sensationalist & shallow & ill-informed all the mainstream media articles on the Silk Road drug marketplace were (similarly for ); my was based on thinking it was a pity that Arthur’s Guardian analysis was trivially & fatally flawed; and so on and so forth.

None of these seems special to me. Anyone could’ve compiled the DNB FAQ; anyone could’ve kept a list of online pharmacies where one could buy modafinil; someone tried something similar to my Google shutdown analysis before me (and the fancier statistics were all standard tools). If I have done anything meritorious with them, it was perhaps simply putting more work into them than someone else would have; to :

“I think you’ll see what I mean if I teach you a few principles magicians employ when they want to alter your perceptions…Make the secret a lot more trouble than the trick seems worth. You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest.”

“My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don’t hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can’t cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.”

Besides that, I think after a while writing/research can be a virtuous circle or autocatalytic. If you look at my repo statistics, you can see that I haven’t always written this much. What seems to happen is that as I write more:

eg. I learned basic in R to answer what all the positive & negative , but then I was able to use it for ; I learned linear models for but now I can use them anywhere I want to, like in my .

The “Feynman method” has been facetiously described as “find a problem; think very hard; write down the answer”, but gives the real one:

Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: “How did he do it? He must be a genius!”

• I internalize a habit of noticing interesting questions that flit across my brain

eg. in March 2013 while meditating: “I wonder if more doujin music gets released when unemployment goes up and people may have more spare time or fail to find jobs? Hey! That giant Touhou music torrent I downloaded, with its 45000 songs all tagged with release year, could probably answer that!” (One could argue that these questions probably should be ignored and not investigated in depth—Teller again—nevertheless, this is how things work for me.)

• if you aren’t writing, you’ll ignore useful links or quotes; but if you stick them in small asides or footnotes as you notice them, eventually you’ll have something bigger.

I grab things I see on Google Alerts & Scholar, Pubmed, Reddit, Hacker News, my RSS feeds, books I read, and note them somewhere until they amount to something. (An example would be my slowly accreting citations on .)

• people leave comments, ping me on IRC, send me emails, or leave anonymous messages, all of which help

Some examples of this come from my most popular page, on Silk Road 1:

1. an anonymous message led me to ; I wrote it up and gave my opinions, and thus I got another short essay to add to my SR page which I would not have had otherwise (and I think there’s a good chance that in a few years this will pay off and become a very interesting essay).
2. CMU’s Nicholas Christin, who had scraped SR for many months and compiled all sorts of overall statistics, emailed me to point out that I was citing inaccurate figures from the first version of his paper. I thanked him for the correction and, while replying, mentioned I had a hard time believing his paper’s claims about the extreme rarity of scams on SR as estimated through buyer feedback. After some back and forth and suggesting specific mechanisms by which the estimates could be positively biased, he was able to check his database and confirmed that there was at least one very large omission of scams in the scraped data and probably a general undersampling; so now I have a more accurate feedback estimate for my SR page (important for estimating the risk of ordering), and he said he’ll acknowledge me in the/a paper, which is nice.

## Information organizing

1. For quotes or facts which are very important, I employ spaced repetition by adding them to my Mnemosyne

2. I keep web clippings in Evernote; I also excerpt from research papers & books, and miscellaneous sources. This is useful for targeted searches when I remember a fact but not where I learned it, and for storing things which I don’t want to memorize but which have no logical home in my website or LW or elsewhere. It is also helpful for writing my and the , as I can read through my book excerpts to remind myself of the highlights, and at the end of the month review clippings from papers/webpages to find good things to reshare which I was too busy to share at the time or was unsure of their importance. I don’t make any use of more complex Evernote features.

I periodically back up my Evernote using the export feature. (I made sure there was a working export method before I began using Evernote, and use it only as long as Nixnote continues to work.)

My workflow for dealing with PDFs, as of late 2014, is:

1. if necessary, jailbreak the paper using Libgen or a university proxy, then upload a copy to Dropbox, named year-author.pdf
2. read the paper, making excerpts as I go
3. store the metadata & excerpts in Evernote, possibly copying over to my Google+ account if I think it’s good or otherwise interesting
4. if useful, integrate into gwern.net with its title/year/author metadata, adding a local fulltext copy if the paper had to be jailbroken, otherwise rely on my custom archiving setup to preserve the remote URL
5. hence, any future searches for the filename / title / key contents should result in hits either in my Evernote or gwern.net
3. Web pages are archived & backed up by . This is intended mostly for fixing dead links (eg to recover the fulltext of the original URL of an Evernote clipping).

4. I don’t have any special book reading techniques. For really good books I excerpt from each chapter and stick the quotes into Evernote.

5. I store insights and thoughts in various pages as parenthetical comments, footnotes, and appendices. If they don’t fit anywhere, I dump them in .

6. Larger masses of citations and quotes typically get turned into pages.

7. I make heavy use of RSS subscriptions for news. For that, I am currently using . (Not that I’m hugely thrilled about it. Google Reader was much better.)

8. For projects and followups, I use reminders in Google Calendar.

9. For recording personal data, I automate as much as possible (eg Zeo and ) and I make a habit of the rest—getting up in the morning is a great time to build a habit of recording data because it’s a time of habits like eating breakfast and getting dressed.

Hence, to refind information, I use a combination of Google, Evernote, grep (on the gwern.net files), occasionally Mnemosyne, and a good visual memory.
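The grep step, for instance, is nothing fancier than a recursive search over the site’s Markdown sources. A minimal sketch (the `refind` name, the `.page` extension, and the `~/wiki` path are illustrative assumptions, not my exact setup):

```shell
# Hypothetical sketch of the grep step: recursively search the site's Markdown
# source files for a half-remembered phrase, printing filenames & line numbers.
refind() {
    pattern="$1"
    dir="${2:-$HOME/wiki}"
    # -r: recurse; -i: case-insensitive; -n: line numbers;
    # --include limits the search to Markdown page sources.
    grep -r -i -n --include='*.page' -- "$pattern" "$dir"
}
```

Something like `refind 'dual n-back'` is usually enough to relocate where a fact was used, even years later.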

As far as writing goes, I do not use note-taking software or things like or —not that I think they are useless but I am worried about whether they would ever repay the large upfront investments of learning/tweaking or interfere with other things. Instead, I occasionally compile outlines of articles from comments on LW/Reddit/IRC, keep editing them with stuff as I remember them, search for relevant parts, allow little thoughts to bubble up while meditating, and pay attention to when I am irritated at people being wrong or annoyed that a particular topic hasn’t been written down yet.

## Confidence tags

Most of the metadata in each page is self-explanatory: the date is the last time the page was meaningfully modified34, the tags are categorization, etc. The “status” tag describes the state of completion: whether it’s a pile of links & snippets & “notes”, or whether it is a “draft” which at least has some structure and conveys a coherent thesis, or it’s a well-developed draft which could be described as “in progress”, and finally when a page is done—in lieu of additional material turning up—it is simply “finished”.

The “confidence” tag is a little more unusual. I stole the idea from tags; I use the same meaning for “log”, for collections of data or links (“log entries that simply describe what happened without any judgment or reflection”); personal or reflective writing can be tagged “emotional” (“some cluster of ideas that got itself entangled with a complex emotional state, and I needed to externalize it to even look at it; in no way endorsed, but occasionally necessary (similar to fiction)”); and “fiction” needs no explanation (every author has some reason for writing the story or poem they do, but not even they always know whether it is an expression of their deepest fears, desires, history, or simply random thoughts). I drop his other tags in favor of giving my subjective probability using the :

1. “certain”
2. “highly likely”
3. “likely”
4. “possible” (my preference over Kesselman’s “Chances a Little Better [or Less]”)
5. “unlikely”
6. “highly unlikely”
7. “remote”
8. “impossible”

These are used to express my feeling about how well-supported the essay is, or how likely it is the overall ideas are right. (Of course, an interesting idea may be worth writing about even if very wrong, and even a long shot may be profitable to examine if the potential payoff is large enough.)

## Importance tags

An additional useful bit of metadata would be a distinction between things which are trivial and things which are about more important topics that might change your life. Using , I’ve ranked pages in deciles from 0–10 on how important the topic is to myself, the intended reader, or the world. For example, topics like or are vastly more important and would be ranked 10, while some poems, a dream, or someone’s small nootropics self-experiment would be ranked 0–1.

## Writing checklist

It turns out that writing essays (technical or philosophical) is a lot like writing code—there are so many ways to err that you need a process with as much automation as possible. My current checklist for finishing an essay:

### Markdown checker

I’ve found that many errors in my writing can be caught by some simple scripts, which I’ve compiled into a shell script, markdown-lint.sh.

My linter:

1. checks for corrupted non-text binary files

2. checks a blacklist of domains which are either dead (eg Google+) or have a history of being unreliable (eg ResearchGate, NBER, PNAS); such links need35 to either be fixed, pre-emptively mirrored, or removed entirely.

• a special case is PDFs hosted on IA; the IA is reliable, but I try to rehost such PDFs so they’ll show up in Google/Google Scholar for everyone else.
3. Broken syntax: I’ve noticed that when I make Markdown syntax errors, they tend to be predictable and show up either in the original Markdown source, or in the rendered HTML. Two common source errors:

    "(www"
    ")www"

And the following should rarely show up in the final rendered HTML:

    "\frac"
    "\times"
    "(http"
    ")http"
    "[http"
    "]http"
    " _ "
    "[^"
    "^]"
    "<!--"
    "-->"
    "<-- "
    "<-"
    "->"
    "$title$"
    "$description$"
    "$author$"
    "$tags$"
    "$category$"

Similarly, I sometimes slip up in writing image/document links, so any link starting with https://www.gwern.net, ~/wiki/, or /home/gwern/ is probably wrong. There are a few Pandoc-specific issues that should be checked for too, like duplicate footnote names, images without separating newlines, and unescaped dollar signs (which can accidentally lead to sentences being rendered as TeX).

A final pass with finds many errors which slip through, like incorrectly-escaped URLs.
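The pattern checks above amount to a handful of grep calls. A minimal sketch (the function name and exact patterns are illustrative; the real markdown-lint.sh is more thorough):

```shell
# Simplified sketch of the pattern-based lint checks, in the spirit of
# markdown-lint.sh (illustrative only). Prints matches with line numbers;
# returns non-zero if any suspicious pattern was found.
lint_patterns() {
    file="$1"
    found=0
    # broken Markdown link syntax in the source:
    grep -n -F -e '(www' -e ')www' -- "$file" && found=1
    # TeX/link residue that should rarely survive into the final text:
    grep -n -F -e '\frac' -e '\times' -e '(http' -e ')http' -- "$file" && found=1
    # absolute or local paths which are probably mistakes:
    grep -n -E 'https://www\.gwern\.net|/home/gwern/|~/wiki/' -- "$file" && found=1
    # duplicate Pandoc footnote definitions like "[^name]:" appearing twice:
    grep -o -E '^\[\^[^]]+\]:' -- "$file" | sort | uniq -d | grep . && found=1
    [ "$found" -eq 0 ]
}
```

Because it exits non-zero on any hit, such a function can gate a site rebuild or a commit hook.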

4. Flag dangerous language: Imperial units are deprecated, but so too is the misleading language of NHST statistics (if one must talk of “significance”, I try to flag it as “statistically-significant” to warn the reader). I also avoid some other dangerous words like “obvious” (if it really is, why do I need to say it?).

• (with some checks disabled because they play badly with Markdown documents)
• Another static warning is checking for too-long lines (most common in code blocks, although sometimes broken indentation will cause this) which will cause browsers to use scrollbars, for which I’ve written a Pandoc script,
• one for a bad habit of mine—too-long footnotes
6. duplicate and hidden-PDF URLs: a URL being linked multiple times is sometimes an error (too much copy-paste or insufficiently edited sections); PDF URLs should receive a visual annotation warning the reader it’s a PDF, but the CSS rules, which catch cases like .pdf$, don’t cover cases where the host quietly serves a PDF anyway, so all URLs are checked. (A URL which is a PDF can be made to trigger the PDF rule by appending #pdf.)

7. broken links are detected with . The best time to fix broken links is when you’re already editing a page. While this throws many false positives, those are easy to ignore, and the script fights bad habits of mine while giving me much greater confidence that a page doesn’t have any merely technical issues that screw it up (without requiring me to constantly reread pages every time I modify them, lest an accidental typo while making an edit breaks everything).

### Anonymous feedback

Back in November 2011, lukeprog posted where he described his use of a Google Docs form for anonymous receipt of textual feedback or comments. Typically, most forms of communication are non-anonymous, or if they are anonymous, they’re public. One can set up pseudonyms and use those for private contact, but it’s not always that easy, and is definitely a series of (if anonymous feedback is not solicited, one has to feel it’s important enough to do and violate implicit norms against anonymous messages; one has to set up an identity; one has to compose and send off the message, etc). I thought it was a good idea to try out, and on 8 November 2011, I set up my own anonymous feedback form and stuck it in the footer of all pages on gwern.net, where it remains to this day. I did wonder if anyone would use the form, especially since I am easy to contact via email, use multiple sites like Reddit or LessWrong, and even my Disqus comments allow anonymous comments—so who, if anyone, would be using this form?
I scheduled a followup in 2 years, on 30 November 2013, to review how the form fared. 754 days, 2.884m page views, and 1.350m unique visitors later, I have received 116 pieces of feedback (mean of 24.8k visits per feedback). I categorize them as follows in descending order of frequency:

• Corrections, problems (technical or otherwise), suggested edits: 34
• Praise: 31
• Question/request (personal, tech support, etc): 22
• Misc (eg gibberish, socializing, Japanese): 13
• Criticism: 9
• News/suggestions: 5
• Feature request: 4
• Request for cybering: 1
• Extortion: 1 (see my dealing with the September 2013 incident)

Some submissions cover multiple angles (they can be quite long), sometimes people double-submitted or left it blank, etc, so the numbers won’t sum to 116. In general, a lot of the corrections were usable and fixed issues of varying importance, from typos to the entire site’s CSS being broken due to being uploaded with the wrong MIME type. One of the news/suggestion feedbacks was very valuable, as it led to writing the Silk Road mini-essay. A lot of the questions were a waste of my time; I’d say half related to Tor/Bitcoin/Silk-Road. (I also got an irritating number of emails from people asking me to, say, buy LSD or heroin off SR for them.) The feature requests were usually for a better RSS feed, which I tried to oblige by starting the page. The cybering and extortion were amusing, if nothing else. The praise was good for me mentally, as I don’t interact much with people.

I consider the anonymous feedback form to have been a success, I’m glad lukeprog brought it up on LW, and I plan to keep the feedback form indefinitely.

#### Feedback causes

One thing I wondered is whether feedback was purely a function of traffic (the more visits, the more people who could see the link in the footer and decide to leave a comment), or more related to time (perhaps people returning regularly and eventually being emboldened or noticing something to comment on).
So I compiled daily hits, combined them with the feedback dates, and looked at a graph of hits. The hits are heavily skewed by Hacker News & Reddit traffic spikes, and probably should be log-transformed. Then I did a logistic regression on hits, log hits, and a simple time index:

    feedback <- read.csv("https://www.gwern.net/docs/traffic/2013-gwernnet-anonymousfeedback.csv",
                         colClasses=c("Date","logical","integer"))
    plot(Visits ~ Day, data=feedback)
    feedback$Time <- 1:nrow(feedback)
    summary(step(glm(Feedback ~ log(Visits) + Visits + Time, family=binomial, data=feedback)))
    # ...
    # Coefficients:
    #              Estimate Std. Error z value Pr(>|z|)
    # (Intercept) -7.363507   1.311703   -5.61  2.0e-08
    # log(Visits)  0.749730   0.173846    4.31  1.6e-05
    # Time        -0.000881   0.000569   -1.55     0.12
    #
    # (Dispersion parameter for binomial family taken to be 1)
    #
    #     Null deviance: 578.78  on 753  degrees of freedom
    # Residual deviance: 559.94  on 751  degrees of freedom
    # AIC: 565.9

The logged hits work out better than the regular hits, and survive into the simplified model. And the influence of traffic seems much larger than that of the time variable (which is, curiously, negative).
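For a sense of scale, the log(Visits) coefficient can be converted into an odds ratio: a doubling of daily traffic multiplies the odds of receiving feedback that day by exp(0.7497 × ln 2) ≈ 1.68. A sketch of that arithmetic (in Python rather than R, purely as illustration):

```python
import math

# Coefficient on log(Visits) from the fitted logistic model above.
b_log_visits = 0.749730

# Doubling daily visits adds log(2) to log(Visits), so the daily odds of
# receiving a feedback submission are multiplied by exp(b * log(2)).
odds_multiplier = math.exp(b_log_visits * math.log(2))
print(round(odds_multiplier, 2))  # ~1.68
```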

# Technical aspects

## Popularity

### 2011

#### October 2010–February 2011

My editing activity, as generated by :

##### Traffic

“An audience, even an audience of one, is always to be treasured and respected.”

(, Patricia A. Jackson)

Popularity-wise, over the 150 days between 1 October 2010 and 28 February 2011, there were 4,346 page-views (average 30/day):

The most popular pages were36:

1. : 1,180
2. Modafinil: 644
3. : 241
4. : 108
5. : 104
6. : 101
7. : 96

The rankings are not as I would prefer (I imagine Internet archivist feels much the same way about ), but it’s pretty clear that people enjoy my more practical articles the most.

#### February 2011–July 2011

darcs-graph for this period:

##### Traffic

“Streaming in the wind / the smoke from Fuji / vanishes in the sky; / I know not where / these thoughts of mine go, either.”

the monk ( XVII: )

Google Analytics records that over the 124 days between 28 February 2011 and 2 July 2011, there were 42,410 page-views (average 342/day):

The most popular pages ranking changed considerably; while the DNB FAQ maintained its pre-eminent popularity, 3 new pages bumped out ‘Links’, ‘Spaced repetition’, ‘The Melancholy of Kyon’, and ‘Haskell Summer of Code’. I am a little surprised that my 2 Death Note essays seemed to’ve struck a chord, and even more surprised that my sloppy & random & un-rigorous notes about nootropics would be consistently popular:

1. DNB FAQ: 10,406
2. : 4,189
3. Modafinil: 3,231
4. : 2,633
5. : 1,779
6. : 1,366
7. Nootropics: 2,056
8. : 706

##### Promotion

“They accumulate / but there are none to buy them— / these leaves of words / piling up like wares for sale / beneath the Sumiyoshi Pine.”

; (#180, “Famous Market Town”, ; trans. Steven D. Carter)37

As a writer, I desire feedback. I also want to feel that my work has been of use to people. So while it would be nice if the world beat a path to my website, I recognize that I have to put some effort into marketing my work. I’ve tried a number of methods.

1. : I submitted any number of fairly popular articles, but my total Witcoin traffic over this period was 132 visits—a traffic total I could have gotten with one slightly popular link on Reddit or a few links in comments. While I didn’t lose any Bitcoins (because my registration was funded by Kiba’s donation of ₿1, and I actually profited ₿2.77), I spent at least 6 hours figuring out how to use Witcoin, submitting articles, responding to comments, and voting. Not the best use of time.
2. Google : initially disappointing; after 3,010 impressions, there were still no clicks! It was funded by the $100 coupon for signing up for Google’s Webmaster Tools. The interface is decent given the complexity of the task, but it was deeply frustrating to wait many weeks for the DNB FAQ and Modafinil ads to be approved or rejected. Finally, almost in June, the DNB FAQ ads were approved and the modafinil ads rejected. From 23 March to 2 July 2011, I paid $21.07 for 98,900 impressions yielding 63 click-throughs. (Those visitors only spent an average of <1.5 minutes on the site, too.) Again, not a great investment of time.
3. : with just 3 articles ‘stumbled’ (included in the database), specifically DNB FAQ, & Nootropics, StumbleUpon was responsible for 161 visits or 2.77% of all traffic in the period I looked at. How much traffic could I expect with 30 or 40 articles stumbled? Quite a bit. SU has no ‘front page’ like other social news aggregators, so traffic is more of a trickle than a flood; nevertheless, “Death Note Ending” clicked with SU readers and I got >500 readers out of it in a day or two. In total over this time, SU drove 2,257 visits. SU tended to give a pretty steady 30–50 visits a day, with rare spikes when an article clicked. The downside is that after looking at SU comments and at how much time they spend on pages38, I have to agree with Arvind Narayanan’s —SUers do not want quality content but quick content, for the dopamine boost.
4. : “Girl Scouts and good governance” made it to the front page, resulting in 1,727 visits & setting gwern.net traffic records (it is that giant spike in the traffic graph), but apparently minimal viewing of other pages. Further, while I seem to get a modest amount of Reddit traffic from even unsuccessful submissions, HN submissions will sink without a trace. Kiba calls Hacker News a ‘lottery’, but it seems to be one worth playing.
5. LessWrong is a natural place to . And perhaps unsurprisingly, LW is my second-largest source of traffic, coming in after SU with 1,857 visits. While few of my submissions get upvoted all that highly, most of them drove a fair amount of traffic even in the Discussion ghetto. (Linking in comments also drives a surprising amount of traffic over long periods to my practical articles like on n-back or melatonin.) At some point I hope to have a good Article and see how much of a disparity there is.

#### July 2011–December 2011

Res audita perit, litera scripta manet. (“A thing heard perishes; the written letter remains.”)

Proverb

darcs-graph for this period (including 1 January 2012):

I ran into a cool post by Christopher Done on a tool that does detailed analysis of patch patterns in a Git repository, , and this spurred me to create a Git mirror of gwern.net using . GitStats produces a whole bundle of graphs and figures, some of which I found surprising. (I did not expect to see a large spike on Wednesday and relatively few patches on Saturday, or a spike around 5 PM as opposed to the early morning.) I think I will update the GitStats output with each update, as a (large) adjunct to the darcs-graph plots.

##### Traffic

“…prompt no more the follies you decry, / As tyrants doom their tools of guilt to die; / ’Tis yours this night to bid the reign commence / Of rescu’d Nature, and reviving Sense; / …Bid scenic Virtue form the rising age, / And Truth diffuse her radiance from the stage.”

Samuel Johnson (“Prologue”)

Google Analytics records that over the 185 days between 2 July 2011 and 2 January 2012, there were 191,015 page-views (average 1,032/day) by 79,346 visitors, for a total of 115,585 visits (average 624/day). This is better than I expected and makes me wonder about for <2,000 average daily visits by 2013 (but it still seems unlikely traffic will triple over the next year).

The main change in page popularity did not surprise me; when I was writing “Silk Road 1”, I knew it would almost certainly be popular given how very popular the Gawker article was but also how lacking in practical details it was, and I also suspected that “Bitcoin is Worse is Better” would be fairly popular, as it argued an interesting and controversial thesis (the original and promoted version is hosted on BitcoinWeekly.com, so its hit-count here ought to be low). I’m surprised at how many hits the gwern.net front page still gets; a surprising number of people must either visit the main page after reading another article or click through from my various blog comments.

2. DNB FAQ: 25,541
3. home/main page: 16,967
4. Modafinil: 10,437
5. Nootropics: 10,219
6. Spaced repetition: 6,570
7. Bitcoin is Worse is Better: 4,297

More interesting are the other signals of popularity: gave me a free set of headbands (worth ~$50) because they liked my , a software engineer/manager contacted me to see about recruiting me, ThinkGum offered me some of their eponymous product for my Nootropics page, and my request for Bitcoin donations has paid off a little, with a few donations of ₿0.1–₿1 and one generous donation of ₿20 (worth a bit upwards of $100 at the time; I spent it on modafinil). (This is all intrinsically helpful, but I value it mostly because money speaks louder than words.)

##### Promotion

I’ve done relatively little in this period compared with the previous period:

1. I abandoned Witcoin not long after my experiment with it; and now Witcoin is dead, pending a possible open-sourcing of the codebase.
2. My AdWords credit is mostly expired. For some reason, my click-through rates kept dropping.
3. StumbleUpon remains a good traffic source (1,430 visits). I continue to ‘stumble’ my new articles when I remember to do so.
4. Hacker News was responsible for a great deal of my traffic in this period (4,175 visits). Most of it was not my doing, however—whenever I submit links, they do poorly.
5. LessWrong remains a major traffic driver (6,961 visits); I continue to see a lot of referrals from old posts and comments. Nor do all the links seem to be perceived negatively or as self-promotion by the LW community: I describing site updates and the article was received well to my surprise, eliciting very favorable reviews of my writings in general. That was nice.
6. For this period, I did spend a little more effort submitting stuff to Reddit, and I was handsomely rewarded: the Silk Road submission skyrocketed, becoming one of the all-time most popular articles in the Bitcoin subreddit. Between that and my nootropics articles, Reddit sent me 20,842 visits.

The largest traffic sources are Google, at 36,625 visits, and direct/no-referral, at 24,118 visits. As I have no idea how to improve these two figures, I ignore them. I write good content, submit it places, supply metadata, and abide by my hackerly principles; I hope that that is all the SEO I need.

### 2012

#### January 2012–July 2012

“Uproot your questions from their ground and the dangling roots will be seen. More questions!”

Frank Herbert (“Mentat Zensufi admonition”, )

darcs-graph for this period (3 January 2012–2 July 2012):

##### Traffic

“If it were not for the intellectual snobs who pay—in solid cash—the tribute which philistinism owes to culture, the arts would perish with their starving practitioners. Let us thank heaven for hypocrisy.”

Google Analytics records traffic over those 182 days as substantially increased: 570,997 page-views (average 3,137/day), almost 5 times the previous six-month period, by 268,031 unique visitors in 366,301 visits (average 2,012/day). If this rate continues, I will likely lose my traffic prediction (which doesn’t bother me very much). In particular, the lifetime total page-view count is now at 809,000, which would seem to imply that I will pass the million mark by the next update! I would be very pleased by that milestone.

The popularity ranking remains mainly the same. 2 differences from the past stand out: the sudden popularity of the Zeo and “Death Note Anonymity” articles. Both owe their inclusion to 2 successful front-page appearances on Hacker News. DNA was submitted by someone I don’t know, but I submitted the Zeo page at the conclusion of my first , where I concluded that Vitamin D consumed in the evening did indeed damage my sleep (I then followed up with , which found that Vitamin D consumed in the morning did not damage my sleep). I am proud of these two experiments, and so I was gratified that Hacker News found them worthwhile too.

2. DNB FAQ: 36,488
3. home/main page: 32,259
4. Modafinil: 23,801
5. Nootropics: 21,970
6. : 15,860
7. : 15,841
8. Spaced repetition: 8,628

Donation-wise, I received ~₿2, and a number of Paypal donations: $5, $4, $5, and $100 from a particularly generous LessWronger who wished me to backup my files more securely & remotely. Alexandra Carmichael was impressed by my Zeo experiments and asked me to write an ebook on sleep for an upcoming Quantified Self series of O’Reilly ebooks; we will see how that goes.39

##### Promotion

Hacker News, Reddit, and LessWrong remain major referral drivers. For recent new pages, I’ve been trying a checklist which includes submission to SU, Google+, Hacker News, Reddit, and LessWrong as appropriate; I am not sure how well it is working since popularity seems very random.

#### July 2012–January 2013

“All I say is by way of discourse, and nothing by way of advice. I should not speak so boldly if it were my due to be believed.”

(, Essays)

darcs-graph for this period (3 July 2012–2 January 2013):

##### Traffic

Google Analytics records 758,843 page-views by 366,028 unique visitors over the 184 days, for a daily average of 4,124 page-views, double the previous half-year average of 2,012 daily page-views; traffic growth is clearly slowing, though, since the previous half-year had quadruple the traffic of its predecessor. My prediction of breaking the million page-view mark came true, by a very large margin: the lifetime total is now 1,568,957 page-views.

Popularity rankings have changed a bit: the Death Note essay and my sleep experiments have fallen out of the top 10 (the former because not many people are still linking it, and the latter probably because my latest experiments were relatively boring), replaced by a sidebar link and one of my terrorism-related essays:

2. home/main page: 37,853
3. Modafinil: 31,047
4. DNB FAQ: 27,433
5. Nootropics: 22,015
6. Drug heuristics/Algernon’s Law: 18,991
7. Spaced repetition: 11,693
10. : 7,369

I am quite surprised that my essay does not even make the top 50 pages, given that it deals with a novel thesis about which there are many interesting things to think, and which is easily misunderstood.

Donations: the ebook fell through when O’Reilly decided to cancel the entire series, which was a disappointment; Carmichael had finished her book on mood and is self-publishing, but I haven’t seen tremendous interest in sleep and will probably just roll my draft material into the existing Zeo page. Paypal donations performed outstandingly: I received $10, $25, $10, $15, $200, & $10 ($270). Bitcoiners were not so generous: ₿0.25, ₿0.32, & ₿1 (₿1.57, or $21 at the 2 January 2013 Mt.Gox exchange rate).

##### Promotion

To Hacker News, Reddit, and LessWrong, I can add as a major referrer Wikipedia—primarily to the Silk Road article, but also to a few Evangelion-related pages. StumbleUpon has declined to the 10th largest referrer:

1. HN: 16,886
2. Reddit: 15,219
3. Wikipedia: 7,531
4. LessWrong: 6,332
6. brainworkshop.sourceforge.net: 3,733
7. google.com: 2,649
8. youtube.com: 1,798
9. mainstreamlos.tumblr.com: 1,694
10. stumbleupon.com: 1,239

I haven’t spent much time promoting my content, instead improving the site with metadata & writing new content: for example, I tripled the size of my database.

### 2013

#### January 2013–July 2013

“I was decimated. To program any more would be pointless. My programs would never live as long as . A computer will never live as long as The Trial. …What if was only written for ?”

_why the lucky stiff ()

darcs-graph for this period (3 January 2013–2 July 2013):

##### Traffic

Google Analytics records 832,415 page-views by 422,285 unique visitors over 181 days, or 4,598 page-views per day. (The lifetime total page-views has thus reached 2,403,807.) This represents ~11% growth in traffic compared to the previous 6-month period’s 4,124 daily page-views, continuing the slowdown trend—possibly the next half-year won’t see any growth, or a decline.

Popularity rankings have changed: Silk Road lost a lot of Wikipedia traffic when, during an editing dispute, an administrator deleted it from their article on Silk Road, and it was only restored on 2 July. 2 new statistical essays ( & ) enter the top 10, but neither was able to dethrone Silk Road as that darknet market continued to thrive & receive media coverage:

2. Modafinil: 58,856
4. DNB FAQ: 25,979
5. Death Note script: 25,792
6. Nootropics: 22,702
7. home/main page: 20,051
10. Spaced repetition: 11,711

Financially, this was a remarkable period. My contracting work dried up (largely my own fault) and, pressed for money, I began exploring alternatives:

1. the easiest strategy was to turn all my Amazon links for books & nootropics & miscellaneous into affiliate links; the links are already there, do not affect readers, and coding it up was as simple as appending a string to links in hakyll.hs, a simple modification of my existing code for easily linking to Wikipedia. This worked as far as it went, which was not very far ($47 in Q1 2013 and $130 in Q2 2013).
2. more painfully, I decided to try Google’s . While AdWords hadn’t been the most pleasant experience in the world, it was fairly decent, and Google’s ads always seemed reasonable to me in the search engine (before I learned of AdBlock). AdSense has scary language in its terms of service forbidding detailed discussion of revenue, but I should be safe when I say that the mean CTR was 0.17% & the mean CPC was $0.57, ~40% of visitors did not have ad filtering enabled, and so over the 80 days AdSense was enabled, I earned ~$260. Helpful, but not rent-paying.
3. donations made the difference. After “Google shutdowns” of Hacker News, an acquaintance . $408 in 14 Paypal donations; through Bitcoin, ₿1.62 (then $189) in 4 donations (). I am grateful to all the HNers who donated.
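The affiliate-link rewrite in item 1 is a few lines of Haskell in hakyll.hs; the gist can be sketched in Python (the tag value here is a made-up placeholder, not a real Associates ID, and the domain check is deliberately simplistic):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Hypothetical Associates tag, purely for illustration.
AFFILIATE_TAG = "examplesite-20"

def add_affiliate_tag(url: str) -> str:
    """Append an affiliate tag to Amazon links, leaving all other URLs alone."""
    parts = urlparse(url)
    if not parts.netloc.endswith("amazon.com"):
        return url
    query = dict(parse_qsl(parts.query))
    query["tag"] = AFFILIATE_TAG
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_affiliate_tag("https://www.amazon.com/dp/0465026567"))
print(add_affiliate_tag("https://example.com/other"))
```

Because the tag rides along in the query string, the reader sees the same page either way, which is what makes this the lowest-effort monetization option.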

##### Promotion

As before, I did minimal active promotion of my content.

1. news.ycombinator.com: 51,256
2. reddit.com: 25,479
3. Twitter (t.co): 7,748
4. Slate Star Codex: 3,166
6. lesswrong.com: 3,035
7. Disqus: 2,042
8. Bullet Proof Exec: 1,931
9. Longecity: 1,887
10. Wired: 1,854

(I have no idea why Facebook traffic increased so much, since the referrers are useless.)

My monthly mailing list has reached 857 subscribers.

### 2016

#### January 2016–July 2016

“It is easy enough to say that man is immortal simply because he will endure: that when the last ding-dong of doom has clanged and faded from the last worthless rock hanging tideless in the last red and dying evening, that even then there will still be one more sound: that of his puny inexhaustible voice, still talking. I refuse to accept this. I believe that man will not merely endure: he will prevail.”

William Faulkner ()

Activity graph for this period (4 January 2016–3 July 2016):

##### Traffic

Google Analytics records 294,009 page-views by 141,795 unique visitors over the 182 days, or 1,615 page-views per day, a daily decrease of 937 compared to the previous six months. (Lifetime total: 5,036,804 page-views by 2,329,713 unique visitors.)

Page rankings:

1. homepage/index: 35,673
2. Modafinil: 33,928
3. LSD microdosing: 21,712
4. DNM arrests: 14,322
5. Spaced repetition: 14,044
6. Nootropics: 12,117
7. DNB FAQ: 10,983

continues to be a success.

##### Promotion

As before, I did minimal active promotion of my content.

1. Reddit: 17,792
4. HN: 2,518
5. Slate Star Codex: 2,467
6. LessWrong: 2,135
7. Longecity: 1,555
8. : 1,440
9. brainworkshop.sourceforge.net: 1,035
10. MetaFilter: 1,018

My monthly mailing list has reached 1053 subscribers.

#### July 2016–January 2017

“With the people of the past / How I wish I could share the beauty / Of these cherry blossoms. // Have each of us across the years / Left behind this very thought?”

(“Shogaku Hyakushu”, 1181 AD, trans. )

Activity graph for this period (4 July 2016–3 January 2017):

##### Traffic

Google Analytics records 339,529 page-views by 172,203 unique visitors over the 184 days, or 1,845 page-views per day, a daily increase of 230 compared to the previous six months. (Lifetime total: 5,376,333 page-views by 2,499,949 unique visitors.)

Page rankings:

1. Nootropics: 43,266
2. index/main page: 39,161
3. Spaced repetition: 26,965
4. Modafinil: 26,849
5. DNM arrests: 16,279
6. LSD microdosing: 12,943
7. DNB FAQ: 10,501
8. : 8,026
9. In Defense of Inclusionism: 7,037

##### Promotion

As before, I did minimal active promotion of my content. Referrals:

1. HN: 10,881 (“Story Of Your Life” was an unexpected hit)
2. Reddit: 9,538
5. LessWrong: 1,568
6. Slate Star Codex: 1,462
7. rs.io: 1,315
8. Longecity: 1,071
9. Brain Workshop: 861
10. DuckDuckGo: 820

My monthly mailing list has reached 1419 subscribers, up ~207.

#### July 2017–January 2018

“It is only the attempt to write down your ideas that enables them to develop.”

Wittgenstein to Drury (, Rhees 1984)

Activity graph for this period (4 July 2017–3 January 2018):

##### Traffic

Google Analytics records 326,852 page-views by 155,532 unique visitors over the 184 days, or 1,776 page-views per day, a daily increase of 513 page-views compared to the previous six months. (Lifetime total: 5,931,837 page-views by 2,763,402 unique visitors.)

(somewhat scrambled due to the big renaming and GA not merging analytics across URLs):

1. index: 43,863
2. : 20,013
3. Modafinil: 14,340
4. DNM arrests: 14,115
5. : 13,808
6. DNB FAQ: 13,650
7. : 11,401
8. Spaced repetition: 12,428
9. : 11,150
10. Nootropics: 7,615

My A/B test of banner ads turned in an unexpected result of substantial harm, which was of considerable general interest (especially to anyone who runs an ad-supported website), so it is no surprise that it had a lot of traffic. I wrote off the obscure neural-net/tank story years ago after some cursory investigation showed it had all the classic traits of an urban legend; but it kept coming up, and at Anders Sandberg’s request, I wrote it up in considerably more detail, and was surprised that people took such an interest in it and even defended it as probably genuine, in the absence of all evidence—I guess people like debunkings? The final oddity was the resurgence of my old essay on an extremely obscure bit of US historical trivia, the US’s attempt to acquire Greenland in the early Cold War, where I noted that Denmark’s refusal was a costly one; I don’t know why anyone was reading it, but I was annoyed at the idiotic objections people made, typically either pure fantasy, already refuted (like the many suggestions that Greenland oil would yet pay off), or illogical virtue signaling—ironically demonstrating my point. (Considerable effort invested in defining hundreds of redirects for 404 errors didn’t pay off, as people kept discovering ever-new ways to misspell URLs, but it did allow me to rename pages without destroying traffic by breaking most incoming links, switching from spaces to hyphens in order to idiot-proof them.)
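The space-to-hyphen renaming amounts to a simple normalization rule; a hypothetical Python sketch (the actual mechanism was hundreds of explicit redirects, not a function like this):

```python
import re
from urllib.parse import unquote

def canonical_path(requested: str) -> str:
    """Map a legacy or mistyped URL path onto the post-renaming convention:
    percent-encoded or literal spaces become hyphens."""
    path = unquote(requested)
    return re.sub(r"\s+", "-", path.strip())

print(canonical_path("/Spaced%20repetition"))
```

The appeal of hyphens is exactly that this mapping is deterministic: there is no ambiguity about whether a space was typed, encoded as `%20`, or encoded as `+`.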

##### Promotion

Referrals:

1. HN: 13,533
2. Reddit: 6,686
4. Slate Star Codex: 6,246
6. adme.ru: 2,414
8. The Browser: 562
9. 4chan: 523
10. Duck Duck Go: 521

My monthly mailing list has reached 2183 subscribers, up ~307. By crossing the 2000-subscriber mark, I triggered MailChimp’s limit: it disabled my account and began charging ~$30 a month to send out a single newsletter; as it’s a little hard for me to justify spending anywhere up to $400/year on my newsletter, which I don’t directly make any money on, I had to stop using MailChimp. After looking around some, I switched to , which oddly enough is also run by MailChimp but doesn’t have that arbitrary subscriber limit (though it lacks many features of MailChimp proper).

#### July 2018–January 2019

“Is not a patron, my lord, one who looks with unconcern on a man struggling for life in the water, and when he has reached ground, encumbers him with help?”

Samuel Johnson (, 1755)

Activity graph for this period (4 July 2018–3 January 2019):

##### Traffic

Google Analytics records 407,395 page-views by 185,141 unique visitors over the 184 days, or 2,214 page-views per day, an increase of 396 page-views a day compared to the previous six months. (Lifetime total: 6,638,722 page-views by 3,113,460 unique visitors.)

1. index: 50,849
2. : 27,241
3. Turing-complete: 26,662
4. Notes: (for /): 17,706
5. Spaced repetition: 14,471
6. Modafinil: 13,958
7. “Bitcoin Is Worse Is Better”: 13,169
8. DNB FAQ: 10,344
10. Drug heuristics/Algernon’s Law: 9,724
And honorable mentions for two new pages which almost made the top 10: , at #13 despite being released midway through December 2018, and  (on cat psychology, genetics, domestication, & dysgenics) at #21.

I was disappointed that two of my new pages ( & ) attracted little or no interest, and that Danbooru2017, despite its use in the impressive , saw little uptake. Maybe Danbooru2018 will do better.

##### Promotion

Referrals changed dramatically, driven mostly by TWDNE:

1. thiswaifudoesnotexist.net: 75,707
2. news.ycombinator.com: 34,761
3. t.co: 12,744
4. reddit.com: 7,993
5. slatestarcodex.com: 4,754
6. m.facebook.com: 2,476
7. link.zhihu.com: 1,886
8. github.com: 1,746
9. mail01.tinyletterapp.com: 1,342
10. old.reddit.com: 1,164

My monthly mailing list has reached 3559 subscribers, up ~806.

## Colophon

### Hosting

gwern.net is served by through the . (Amazon charges less for bandwidth and disk space than NFSN, although one loses all the capabilities offered by Apache’s , and compression is difficult, so it must be handled by CloudFlare; total costs may turn out to be a wash, and I will consider the switch to Amazon S3 a success if it can bring my monthly bill to <$10, or <$120 a year.)
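Unlike Apache, S3 serves exactly the Content-Type a file was given at upload time (a wrong one once broke the entire site's CSS, as recounted in the feedback section), so the upload step has to supply it; a minimal Python sketch of that guess:

```python
import mimetypes

def content_type_for(filename: str) -> str:
    """Guess the Content-Type to set when uploading a file to S3.

    S3 stores and serves the header verbatim, so a wrong guess at upload
    time (eg. CSS sent as text/plain) breaks rendering site-wide."""
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"

print(content_type_for("css/default.css"))
```

Any real sync script would apply this per-file before handing the bytes to S3, rather than relying on a tool's default.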

From October 2010 to June 2012, the site was hosted on , an old hosting company; its specific niche is controversial material and activist-friendly pricing. Its libertarian owners cast a jaundiced eye on s, and pricing is . I like the former aspect, but the latter sold me on NFSN. Before I stumbled on NFSN (someone mentioned it offhandedly while chatting), I was getting ready to pay $10–15 a month ($120 yearly) to . Linode’s offerings are overkill, since I do not run dynamic websites or something like  (with wikis and mailing lists and repositories), but I didn’t know a good alternative. NFSN’s pricing meant that I paid for usage rather than large flat fees. I put in $32 to cover registering gwern.net until 2014, and then another $10 to cover bandwidth & storage. DNS aside, I was billed $8.27 for October–December 2010; DNS included, January–April 2011 cost $10.09. $10 covered months of gwern.net for what I would have paid Linode in 1 month! In total, my 2010 costs were $39.44 (bill archive); my 2011 costs were $118.32 ($9.86 a month; archive); and my 2012 costs through June were $112.54 ($21 a month; archive); sum total: $270.30.

The switch to Amazon S3 hosting is complicated by my simultaneous addition of CloudFlare as a CDN; my total June 2012 Amazon bill is $1.62, with $0.19 of that for storage. CloudFlare claims it covered 17.5GB of the 24.9GB total bandwidth, so the remaining $1.41 represents ~30% of my total bandwidth; multiplying $1.41 by 3 gives ~$4.23, so my hypothetical non-CloudFlare S3 bill is ~$4.5. Even at $10, this would be well below the $21 monthly cost at NFSN. (The traffic graph indicates that June 2012 was a relatively quiet period, but I don’t think this eliminates the factor of 5.)
From July 2012 to June 2013, my Amazon bills totaled $60, which is reasonable except for the steady increase ($1.62/$3.27/$2.43/$2.45/$2.88/$3.43/$4.12/$5.36/$5.65/$5.49/$4.88/$8.48/$9.26), driven primarily by out-bound bandwidth (in June 2013, the $9.26 was largely due to the 75GB transferred—and that was after CloudFlare dealt with 82GB); $9.26 is much higher than I would prefer, since that would be >$110 annually. This was probably due to all the graphics I included in the “Google shutdowns” analysis, since it returned to a more reasonable $5.14 on 42GB of traffic in August. September, October, November, and December 2013 saw high levels maintained at $7.63/$12.11/$5.49/$8.75, so it’s probably a new normal. 2014 entailed new costs related to EC2 instances & S3 bandwidth spikes due to hosting a multi-gigabyte scientific dataset, so bills ran $8.51/$7.40/$7.32/$9.15/$26.63/$14.75/$7.79/$7.98/$8.98/$7.71/$7/$5.94. 2015 & 2016 were similar: $5.94/$7.30/$8.21/$9.00/$8.00/$8.30/$10.00/$9.68/$14.74/$7.10/$7.39/$8.03/$8.20/$8.31/$8.25/$9.04/$7.60/$7.93/$7.96/$9.98/$9.22/$11.80/$9.01/$8.87. 2017 saw costs increase due to one of my side-projects, aggressively increasing the fulltexting of gwern.net by providing more papers & scanning cited books, only partially offset by changes like lossy optimization of images & converting GIFs to WebMs: $12.49/$10.68/$11.02/$12.53/$11.05/$10.63/$9.04/$11.03/$14.67/$15.52/$13.12/$12.23 (total: $144.01). In 2018, I continued fulltexting: $13.08/$14.85/$14.14/$18.73/$18.88/$15.92/$15.64/$15.27/$16.66/$22.56/$23.59/$25.91 (total: $213).

### Source

The revision history is kept in git; individual page sources can be read by appending .page to their URL.

#### Size

As of 4 January 2019, the source of gwern.net is composed of >321 text files with >3,208,000 words or 23MB; this includes my writings & documents I have transcribed into Markdown, but excludes images, PDFs, archives, infrastructure, and the revision history.
With those included and everything compiled to the static42 HTML, the site is >14.2GB. The source repository contains >11,801 patches (this is an under-count, as the creation of the repository on 26 September 2008 included already-written material).

### Design

“People who are really serious about software should make their own hardware.”

The of web design & typography is that how you present your pages can matter only a little. A page can be terribly designed and render as typewriter text in 80-column ASCII monospace, and readers will still read it, even if they complain about it. And the most tastefully-designed page, with true smallcaps and correct use of em-dashes vs en-dashes vs hyphens vs minuses and all, which loads in a fraction of a second and is SEO-optimized, is of little avail if the page has nothing worth reading; no amount of typography can rescue a page of dreck. Perhaps 1% of readers could even name any of these details, much less recognize them. If we added up all the small touches, they surely make a difference to the reader’s happiness, but it would have to be a small one—say, 5%.43 It’s hardly worth it if you are writing just a few things.

But the great joy of web design & typography is that its presentation can matter a little to all your pages. Writing is hard work, and any new piece of writing generally adds to the pile of existing ones, rather than multiplying it all. Good design improvements benefit one’s entire website & all readers, and so at a certain scale can be quite useful. I feel I’ve reached that point.

There are 3 design goals:

1. Aesthetically-pleasing minimalism

    The design esthetic is minimalist. I believe that helps one focus on the content. Anything besides the content is . ‘Attention!’, as would say44. The palette is deliberately kept to grayscale as an experiment in consistency, and in whether this constraint permits a readable, aesthetically-pleasing website.
    Various classic typographical tools, like  and , are used for emphasis.

2. Accessibility & semantic markup

    Semantic markup is used where Markdown permits. JavaScript is not required for the core reading experience, only for optional features: comments, table-sorting, sidenotes, and so on. Pages can even be read without much problem on a smartphone or in a text browser like elinks.

3. Speed & efficiency

    On an increasingly-bloated Internet, a website which is anywhere remotely as fast as it could be is a breath of fresh air. Readers deserve better. gwern.net uses a number of tricks to offer nice features like sidenotes or LaTeX math at minimal cost.

Features:

- sidenotes using both margins, with fallback to floating footnotes
- code folding (collapsible sections/code blocks/tables)
- JS-free LaTeX math rendering
- Wikipedia popup summaries
- Disqus comments
- click-to-zoom images & slideshows
- sortable tables
- interwiki link syntax
- automatic inflation-adjustment of dollar amounts
- full-width tables/images
- link filetype/domain icons
- lightweight drop caps/initials
- epigraphs
- quote syntax highlighting

Much of the gwern.net design and JS/CSS was developed by , 2017–2019. Some inspiration has come from  & Matthew Butterick’s .

### Tools

Software tools & libraries used in the site as a whole:

- the source files are written in  (Pandoc: John MacFarlane et al; GPL) (source files: Gwern Branwen, CC-0)
- math is written in , which compiles to , rendered by MathJax (Apache)
- the site is compiled with the  v4+ static site generator, written in  (Jasper Van der Jeugt et al; BSD); for the gory details, see hakyll.hs, which implements the compilation, RSS feed generation, & parsing of interwiki links. This just generates the basic website; I do many additional optimizations/tests before & after uploading, which are handled by sync-gwern.net.sh (Gwern Branwen, CC-0)

My preferred method of use is to browse & edit locally using Emacs, and then distribute using Hakyll.
The simplest way to use Hakyll is to cd into your repository and run runhaskell hakyll.hs build (with hakyll.hs having whatever options you like). Hakyll will build a static HTML/CSS hierarchy inside _site/; you can then do something like firefox _site/index. (Because HTML extensions are not specified in the interest of clean URLs, you cannot use the Hakyll watch webserver as of January 2014.) Hakyll’s main advantage for me is relatively straightforward integration with the Pandoc Markdown libraries; Hakyll is not that easy to use, and so I do not recommend Hakyll as a general static site generator unless one is already adept with Haskell. • the CSS is borrowed from a motley of sources and has been heavily modified, but its origin was the default Gitit & Hakyll stylesheets; for specifics, see default.css • Markdown extensions: • I implemented a Pandoc Markdown plugin for a custom syntax for interwiki links in Gitit, and then ported it to Hakyll (defined in hakyll.hs); it allows linking to the English Wikipedia (among others) with syntax like [malefits](!Wiktionary) or [antonym of 'benefits'](!Wiktionary "Malefits"). CC-0. • inflation adjustment: a Pandoc Markdown plugin allows automatic inflation adjusting of dollar amounts, presenting the nominal amount & a current real amount, with a syntax like [$5]($1980). • Book affiliate links are through an Amazon affiliate tag appended in the hakyll.hs • image dimensions are looked up at compilation time & inserted into <img> tags as browser hints • JavaScript: • Comments are outsourced to Disqus (since I am not interested in writing a dynamic system to do it, and their anti-spam techniques are much better than mine).
• the floating footnotes are via a script by Lukas Mathis (PD); when the browser window is wide enough, the floating footnotes are instead replaced with marginal notes/sidenotes45 using a custom library, sidenotes.js (Said Achmiz, MIT) • the HTML tables are sortable via tablesorter (Christian Bach; MIT/GPL) • the MathML is rendered using MathJax • analytics are handled by Google Analytics • A/B testing is done using ABalytics (Daniele Mazzini; MIT), which hooks into Google Analytics for individual-level testing; when doing site-level long-term testing, I simply write the JS manually. • Wikipedia popups load introductions/summaries of WP articles when one mouses-over a WP link (Said Achmiz; MIT) • a generalized popups library extends the Wikipedia tooltip popups to load introductions/summaries of all links when one mouses-over a link; it reads statically-generated annotations automatically populated from many sources, with special handling of YouTube videos (Said Achmiz, Shawn Presser; MIT) • image size: full-scale images (figures) can be clicked on to zoom into them with slideshow mode—useful for figures or graphs which do not comfortably fit into the narrow body—using another custom library, image-focus.js (Said Achmiz; GPL) • error checking: problems such as broken links are checked in 3 phases. #### Implementation Details There are a number of little tricks or details that web designers might find interesting. Efficiency: • fonts: • Adobe Source Serif/Sans/Code Pro: originally gwern.net used Baskerville, but system Baskerville fonts don’t have adequate small caps. Adobe’s open-source “Source” font family, however, is high quality and comes with good small caps, multiple sets of numerals (old-style numerals for the body text and lining numerals for tables), and looks particularly nice on Macs. It is also subsetted to cut down the load time. • efficient drop caps by subsetting: 1 drop cap is used on every page, but a typical drop cap font will slowly download as much as a megabyte in order to render 1 single letter.
CSS font loads avoid downloading font files which are entirely unused, but they must download the entire font file if anything in it is used, so it doesn’t matter that only one letter gets used. To avoid this, we split each drop cap font up into per-letter font files and use CSS unicode-range declarations to load them all; since only 1 font file is actually used, only 1 gets downloaded, and it will be ~4kb rather than 168kb. This has been done for all the drop cap fonts used, and the necessary CSS can be seen in fonts.css. To specify the drop cap for each page, a Hakyll metadata field is used to pick the class and substituted into the HTML template. • lazy JavaScript loading by IntersectionObserver: several JS features are used rarely or not at all on many pages, but are responsible for much network activity. For example, most pages have no tables but tablesorter must be loaded anyway, and many readers will never get all the way to the Disqus comments at the bottom of each page, but Disqus will load anyway, causing much network activity and disturbing the reader because the page is not ‘finished loading’ yet. To avoid this, IntersectionObserver can be used to write a small JS function which fires only when particular page elements are visible to the reader. The JS then loads the library, which does its thing. So an IntersectionObserver can be defined to fire only when an actual <table> element becomes visible, and on pages with no tables, this never happens. Similarly for Disqus and image-focus.js. This trick is a little dangerous if a library depends on another library, because the loading might cause race conditions; fortunately, only 1 library, tablesorter, has a prerequisite (jQuery), so I simply prepend jQuery to tablesorter and lazy-load the combined file.
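The IntersectionObserver trick can be sketched in a few lines of JavaScript. This is an illustrative reconstruction, not gwern.net’s actual loader; the function name and selector are hypothetical, and an eager-load fallback stands in for browsers without IntersectionObserver:

```javascript
// Run `load` exactly once, the first time an element matching `selector`
// becomes visible; never run it if the page has no such element.
// (Hypothetical sketch; gwern.net's real loader differs.)
function lazyLoadWhenVisible(selector, load) {
    let fired = false;
    const fire = () => { if (!fired) { fired = true; load(); } };

    // Fallback for environments without IntersectionObserver: load eagerly.
    if (typeof IntersectionObserver === "undefined") { fire(); return fire; }

    const el = document.querySelector(selector);
    if (!el) return fire; // no matching element: the library is never fetched

    const observer = new IntersectionObserver((entries) => {
        if (entries.some((e) => e.isIntersecting)) {
            observer.disconnect(); // one-shot: stop watching after the first hit
            fire();
        }
    });
    observer.observe(el);
    return fire;
}
```

Usage would be something like lazyLoadWhenVisible("table", loadTablesorter): on a page with no <table>, the observer never fires and tablesorter costs nothing; on a page with tables, the download begins only once a table scrolls into view.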
(Other libraries, like sidenotes or WP popups, aren’t lazy-loaded: sidenotes need to be rendered as fast as possible or the page will jump around & be laggy, and WP links are so universal that it’s a waste of time making them lazy, since they will be in the first screen on every page & loaded immediately anyway; so they are simply loaded asynchronously with the defer JS keyword.) • image optimization: PNGs are optimized by pngnq/advpng, JPEGs with mozjpeg, SVGs are minified, and PDFs are compressed. (GIFs are not used at all in favor of WebM/MP4 <video>s.) • JS/CSS minification: because Cloudflare does Brotli compression, minification of JS/CSS has little advantage and makes development harder, so no minification is done; the font files don’t need any special compression either. • MathJax: getting well-rendered mathematical equations requires MathJax or a similar heavyweight JS library; worse, even after disabling features, the load & render time is extremely high—a page which is both large & has a lot of equations can visibly take >5s (as a progress bar that helpfully pops up informs the reader). The solution here is to prerender the math at compile time, using a local tool to load the final HTML files, parse each page to find all the math, compile the expressions, define the necessary CSS, and write the HTML back out. Pages still need to download the fonts, but the overall speed goes from >5s to <0.5s, and JS is not necessary at all. • collapsible sections: managing the complexity of pages is a balancing act. It is good to provide all necessary code to reproduce results, but does the reader really want to look at a big block of code? Sometimes all readers will; sometimes only the few interested in the gory details will want to read the code. Similarly, a section might go into detail on a tangential topic or provide additional justification, which most readers don’t want to plow through to continue with the main theme. Should the code or section be deleted? No.
But relegating it to an appendix, or another page entirely, is not satisfactory either—for code blocks particularly, one loses the literate programming aspect if code blocks are being shuffled around out of order. A nice solution is to use a little JS to implement a code-folding approach where sections or code blocks can be visually shrunk or collapsed, and expanded on demand by a mouse click. Collapsed sections are specified by a HTML class (eg <div class="collapse"></div>), and summaries of a collapsed section can be displayed, defined by another class (<div class="collapseSummary">). This allows code blocks to be collapsed by default where they are lengthy or distracting, and entire regions to be collapsed & summarized, without resorting to many appendices or forcing the reader to an entirely separate page. • sidenotes: one might wonder why sidenotes.js is necessary when most sidenote uses, like Tufte-CSS, take a static HTML/CSS approach, which avoids a JS library entirely and visibly repainting the page after load. The problem is that Tufte-CSS-style sidenotes do not reflow and are solely on the right margin (wasting the considerable whitespace on the left), and depending on the implementation, may overlap, be pushed far down the page away from their referents, break when the browser window is too narrow, or not work on smartphones/tablets at all. The JS library is able to handle all these cases, even the most difficult ones. (Marginal notes, however, pose no such problems, and we take the same approach of defining a HTML class & styling with CSS.) • Link icons: icons are defined for all filetypes used in gwern.net and many commonly-linked websites such as Wikipedia, gwern.net itself (within-page section links and between-page links get ‘§’ & logo icons respectively), or YouTube; all are inlined into default.css as data URIs; the SVGs are so small it would be absurd to have them be separate files. • Redirects: AWS S3 does not support a .htaccess-like mechanism for rewriting URLs.
To allow moving pages & fixing broken links, I wrote a script for generating simple HTML pages which simply redirect from URL 1 to URL 2. In addition to page renames, I monitor 404 hits in Google Analytics to fix errors where possible. There are an astonishing number of ways to misspell gwern.net URLs, it turns out, and I have defined >4k redirects so far. ##### Benford’s law Does gwern.net follow the famous Benford’s law? A quick analysis suggests that it sort of does, except for the digit 2, probably due to the many citations to research from the past 2 decades (>2000 AD). In March 2013 I wondered, upon seeing a mention of Benford’s law: “if I extracted all the numbers from everything I’ve written on gwern.net, would it satisfy Benford’s law?” It seems the answer is… almost. I generate the list of numbers by running a Haskell program to parse digits, commas, and periods, and then I process it with shell utilities.46 This can then be read into R to run a chi-squared test confirming lack of fit (p≈0) and generate this comparison of the data & Benford’s law47: There’s a clear resemblance for everything but the digit ‘2’, which then blows the fit to heck. I have no idea why 2 is overrepresented—it may be due to all the citations to recent academic papers which would involve numbers starting with ‘2’ (2002, 2010, 2013…) and cause a double-count in both the citation and filename, since if I look in the docs/ fulltext folder, I see 160 files starting with ‘1’ but 326 starting with ‘2’. But this can’t be the entire explanation, since ‘2’ has 20.3k entries while to fit Benford, it needs just 11.5k—leaving a gap of ~10k numbers unexplained. A mystery. ### License This site is licensed under the CC-0 license. I believe the public domain license reduces transaction costs & deadweight loss48, encourages copying, gives back (however little) to free culture/open source, and costs me nothing49. 1. Genna Sosonko, pg 19 of Russian Silhouettes, on why he wrote his book of biographical sketches of great Soviet chess players. (As Richardson asks (Vectors 1.0, 2001): “25.
Why would we write if we’d already heard what we wanted to hear?”)↩︎ 2. “It is only the attempt to write down your ideas that enables them to develop.” –Wittgenstein (pg 109); “I thought a little [while in the isolation tank], and then I stopped thinking altogether…incredible how idleness of body leads to idleness of mind. After 2 days, I’d turned into an idiot. That’s the reason why, during a flight, astronauts are always kept busy.” –Oriana Fallaci, quoted in Rocket Men by Craig Nelson.↩︎ 3. One danger of such an approach is that you will simply engage in confirmation bias, and build up an impressive-looking wall of citations that is completely wrong but effective in brainwashing yourself. The only solution is to diligently include criticism—so even if you do not escape brainwashing, at least your readers have a chance. Charles Darwin, Autobiography, 1902: I had, also, during many years followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones. Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer. ↩︎ 4. Such as Larry Niven’s Known Space universe; consider the introduction to the chronologically last story in that setting, “Safe at Any Speed” (Tales of Known Space).↩︎ 5. “If the individual lived five hundred or one thousand years, this clash (between his interests and those of society) might not exist or at least might be considerably reduced. He then might live and harvest with joy what he sowed in sorrow; the suffering of one historical period which will bear fruit in the next one could bear fruit for him too.” ↩︎ 6.
One way to distinguish empirically between aging effects and proximity-to-death effects would be to compare, with respect to choice of occupation, investment, education, leisure activities, and other activities, elderly people on the one hand with young or middle-aged people who have truncated life expectancies but are in apparent good health, on the other. For example, a person newly infected with the AIDS virus (HIV) has roughly the same life expectancy as a 65-year-old and is unlikely to have, as yet, [major] symptoms. The conventional human-capital model implies that, after correction for differences in income and for other differences between such persons and elderly persons who have the same life expectancy (a big difference is that the former will not have pension entitlements to fall back upon), the behavior of the two groups will be similar. It does appear to be similar, so far as investing in human capital is concerned; the truncation of the payback period causes disinvestment. And there is a high suicide rate among HIV-infected persons (even before they have reached the point in the progression of the disease at which they are classified as persons with AIDS), just as there is, as we shall see in chapter 6, among elderly persons. ↩︎ 7. John F. Kennedy, 1962: I am reminded of the story of the great French Marshal Lyautey, who once asked his gardener to plant a tree. The gardener objected that the tree was slow-growing and would not reach maturity for a hundred years. The Marshal replied, “In that case, there is no time to lose, plant it this afternoon.” ↩︎ 8. In the long run, the utility of all non-Free software approaches zero. All non-Free software is a dead end. ↩︎ 9. These dependencies can be subtle. Computer archivist Jason Scott of Archive Team warns that: URL shorteners may be one of the worst ideas, one of the most backward ideas, to come out of the last five years.
In very recent times, per-site shorteners, where a website registers a smaller version of its hostname and provides a single small link for a more complicated piece of content within it… those are fine. But these general-purpose URL shorteners, with their shady or fragile setups and utter dependence upon them, well. If we lose TinyURL or bit.ly, millions of weblogs, essays, and non-archived tweets lose their meaning. Instantly. To someone in the future, it’ll be like everyone from a certain era of history, say ten years of the 18th century, started speaking in a one-time pad of cryptographic pass phrases. We’re doing our best to stop it. Some of the shorteners have been helpful, others have been hostile. A number have died. We’re going to release torrents on a regular basis of these spreadsheets, these code breaking spreadsheets, and we hope others do too. ↩︎ 10. Joshua Schachter (and the comments provide even more examples) further on URL shorteners: But the biggest burden falls on the clicker, the person who follows the links. The extra layer of indirection slows down browsing with additional DNS lookups and server hits. A new and potentially unreliable middleman now sits between the link and its destination. And the long-term archivability of the hyperlink now depends on the health of a third party. The shortener may decide a link is a Terms Of Service violation and delete it. If the shortener accidentally erases a database, forgets to renew its domain, or just disappears, the link will break. If a top-level domain changes its policies, the link will break. If the shortener gets hacked, every link becomes a potential phishing attack. ↩︎ 11. A static text-source site has many advantages for Long Content that I consider using one almost a no-brainer.
• By nature, they compile most content down to flat standalone textual files, which allow recovery of content even if the original site software has bit-rotted or the source files have been lost or the compiled versions cannot be directly used in new site software: one can parse them with XML tools or with quick hacks or by eye. • Site compilers generally require dependencies to be declared up front; the approach makes explicitness & static content easy but dynamic interdependent components difficult, all of which discourages creeping complexity and hidden state. • A static site can be archived into a tarball of files which will be readable as long as web browsers exist (or afterwards if the HTML is reasonably clean), but it could be difficult to archive a CMS like WordPress or Blogspot (the latter doesn’t even provide the content in HTML—it only provides a rat’s-nest of inscrutable JavaScript files which then download the content from somewhere and display it somehow; indeed, I’m not sure how I would automate archiving of such a site if I had to; I would need some sort of headless browser to run the JS and serialize the final resulting DOM, possibly with some scripting of mouse/keyboard actions). • With a CMS, by contrast, the content is often not available locally, or is stored in opaque binary formats rather than text (if one is lucky, it will at least be a database), both of which make it difficult to port content to other website software; you won’t have the necessary pieces, or they will be in wildly incompatible formats. • Static sites are usually written in a reasonably standardized markup language such as Markdown or LaTeX, in distinction to blogs which force one through WYSIWYG editors or invent their own markup conventions, which is yet another barrier: parsing a possibly ill-defined language. • The lowered sysadmin efforts (who wants to be constantly cleaning up spam or hacks on their WordPress blog?)
are a final advantage: lower running costs make it more likely that a site will stay up rather than cease to be worth the hassle. Static sites are not appropriate for many kinds of websites, but they are appropriate for websites which are content-oriented, do not need interactivity, expect to migrate website software several times over coming decades, want to enable archiving by oneself or third parties (“lots of copies keeps stuff safe”), and to gracefully degrade after loss or bitrot.↩︎ 12. Such as burning the occasional copy onto read-only media like DVDs.↩︎ 13. One can’t be sure; the IA is fed by Alexa’s crawls, and Alexa doesn’t guarantee pages will be crawled & preserved even if one goes through their request form.↩︎ 14. I am diligent in backing up my files, in periodically copying my content from the , and in preserving viewed Internet content; why do I do all this? Because I want to believe that my memories are precious, that the things I saw and said are valuable; “I want to meet them again, because I believe my feelings at that time were real.” My past is not trash to me, used up & discarded.↩︎ 15. Examples of such blogs: 1. Eliezer Yudkowsky’s contributions to Overcoming Bias were the rough draft of a philosophy book (or two) 2. John Robb’s blogging led to his book 3. other blogs have similarly been turned into books. An example of how not to do it would be Robin Hanson’s blog; it is stuffed with fascinating citations & sketches of ideas, but they never go anywhere, with the exception of his mind emulation economy posts which were eventually published in 2016 as The Age of Em. Just his posts on any one of his recurring themes would make a fascinating essay or just a list—but he has never made one. (A wiki would be a natural home for many of his posts’ contents, but one will never be made.)↩︎ 16. Kevin Kelly, interviewed 6 September 2011: [Question:] “One purpose of the Clock is to encourage long-term thinking.
Aside from the Clock, though, what do you think people can do in their everyday lives to adopt or promote long-term thinking?” Kevin Kelly: “The 10,000-year Clock we are building is meant to remind us to think long-term, but learning how to do that as an individual is difficult. Part of the difficulty is that as individuals we are constrained to short lives, and are inherently not long-term. So part of the skill in thinking long-term is to place our values and energies in ways that transcend the individual—either in generational projects, or in social enterprises.” “As a start I recommend engaging in a project that will not be complete in your lifetime. Another way is to require that your current projects exhibit some payoff that is not immediate; perhaps some small portion of it pays off in the future. A third way is to create things that get better, or run up in time, rather than one that decays and runs down in time. For instance a seedling grows into a tree, which has seedlings of its own. A program like which gives breeding pairs of animals to poor farmers, who in turn must give one breeding pair away themselves, is an exotropic scheme, growing up over time.” ↩︎ 17. ‘Princess Irulan’, Dune ↩︎ 18. GiveWell reports that of volunteers motivated enough to email them asking to help, something like <20% will complete the GiveWell test assignment and render meaningful help. Such persons would have been well-advised to have simply donated some money. I have long noted that many of the most popular pages on gwern.net could have been written by anyone and drew on no unique talents of mine; I have on several occasions received offers to help with the DNB FAQ—none of which have resulted in actual help.↩︎ 19. An old sentiment; consider “A drop hollows out the stone” (Ovid, Epistles) or Thomas Carlyle’s “The weakest living creature, by concentrating his powers on a single object, can accomplish something. The strongest, by dispersing his over many, may fail to accomplish anything.
The drop, by continually falling, bores its passage through the hardest rock. The hasty torrent rushes over it with hideous uproar, and leaves no trace behind.” (The life of Friedrich Schiller, 1825)↩︎ 20. Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: ‘How did he do it? He must be a genius!’ ↩︎ 21. IQ is sometimes used as a proxy for health, like height, because it sometimes seems like any health problem will damage IQ. Didn’t get much protein as a kid? Congratulations, your nerves will lack myelin and you will literally think slower. Missing some iodine? Say goodbye to <10 points! If you’re anemic or iron-deficient, that might increase to <15 points. Have tapeworms? There go some more points, and maybe centimeters off your adult height, thanks to the worms stealing nutrients from you. Have a rough birth and suffer a spot of anoxia before you began breathing on your own? Tough luck, old bean. It is very easy to lower IQ; you can do it with a baseball bat. It’s the other way around that’s nearly impossible.↩︎ 22. And America has tried pretty hard over the past 60 years to affect IQ. The whole nature/nurture debate would be moot if there were some nutrient or educational system which could add even 10 points on average, because then we would use it on all the blacks. But it seems that I’m constantly reading about programs like Head Start which boost IQ for a little while… and do nothing in the long run.↩︎ 23. For details on the many valuable correlates of the Conscientiousness personality factor, see elsewhere on this site.↩︎ 24. 25 episodes, 6 movies, >11 manga volumes—just to stick to the core works.↩︎ 25.
More than my life What I most regret Is A dream unfinished And awakening. ↩︎ 26. As with Cloud Nine; I accidentally erased everything on a routine basis while messing around with Windows.↩︎ 27. For example, I notice I am no longer deeply interested in the occult. Hopefully this is because I have grown mentally and recognize it as rubbish; I would be embarrassed if when I died it turned out my youthful self had a better grasp on the real world.↩︎ 28. Some pages don’t have any connection to predictions. It’s possible to make predictions for some border cases like the terrorism essays (death tolls, achievements of particular groups’ policy goals), but what about the short stories or poems? My imagination fails there.↩︎ 29. Thinking of predictions is good mental discipline; we should always be able to cash out our beliefs in terms of the real world, or know why we cannot. Unfortunately, humans being humans, we need to actually track our predictions—lest our predicting degenerate into mere entertainment like political punditry.↩︎ 30. Dozens of theories have been put forth. I have been collecting & making predictions, and am up to 219. It will be interesting to see how the movies turn out.↩︎ 31. I have 2 predictions about the thesis registered on PredictionBook.com.↩︎ 32. See Robin Hanson. ↩︎ 33. I originally used last file modification time but this turned out to be confusing to readers, because I so regularly add or update links or add new formatting features that the file modification time was usually quite recent, and so it was meaningless.↩︎ 34. Reactive archiving is inadequate because such links may die before my crawler gets to them, may not be archivable, or will just expose readers to dead links for an unacceptably long time before I’d normally get around to them.↩︎ 35. Anecdotally, the rankings seem correct. When I went to a meetup in California, many knew of or had read the DNB FAQ, some had read or used my modafinil price-chart, and very few remembered reading anything else.↩︎ 36.
I am sometimes reminded of another waka, by Ikkyu: To write something and leave it behind us, Is but a dream. When we awake we know There is not even anyone to read it. ↩︎ 37. They spend an average of 27 seconds; in comparison, my second largest source of traffic, LessWrongers, average 3 minutes and 29 seconds; even my third largest traffic source, Redditors, manage almost 2 minutes. Even random people coming from Google manage to spend 44 seconds on their visit!↩︎ 38. It ultimately fell through as O’Reilly decided the QS moment had passed, and killed the series.↩︎ 39. Since I converted the repo to use git, darcs-graph was no longer available, so I wrote my own script in R using splines rather than a 7-day moving average:

start <- as.Date("2014-01-02")
end <- as.Date("2014-07-02")

patches <- as.Date(system(paste0("git log --after=", start, " --before=", end,
                                 " --format='%ad' --date=short master"), intern=TRUE))
patchFrame <- aggregate( patches , by = list(patches) , length )

days <- seq(start, end, "day")
dayFrame <- aggregate( days , by = list(days) , length )

bar <- rbind(dayFrame, patchFrame)
all <- aggregate(bar$x, by=list(bar$Group.1), FUN=function (x) { sum(as.vector(x));})

plot(x ~ Group.1, data=all, xlab="Date (Jan--July 2014)", ylab="gwern.net daily patches")
library(splines); lines(predict(interpSpline(days, all$x)))
↩︎
40. I rewrote the script a little to use the nicer plots of ggplot2 and switch to LOESS for the smoothed curve:

start <- as.Date("2014-07-03")
end <- as.Date("2015-01-02")

patches <- as.Date(system(paste0("git --git-dir=/home/gwern/wiki/.git/ log --after=",
start, " --before=", end, " --format='%ad' --date=short master"), intern=TRUE))
patchFrame <- aggregate( patches , by = list(patches) , length )

days <- seq(start, end, "day")
dayFrame <- aggregate( days , by = list(days) , length )

bar <- rbind(dayFrame, patchFrame)

all <- aggregate(bar$x, by=list(bar$Group.1), FUN=function (x) { sum(as.vector(x));})

library(ggplot2)
qplot(Group.1, x, data=all) +
theme_bw() +
xlab("Date (2014)") + ylab("Patch count") +
theme(legend.title=element_blank()) +
stat_smooth()
↩︎
41. I like the static site approach to things; it tends to be harder to use and more restrictive, but in exchange it yields a simpler & faster site and leads to fewer unpleasant surprises.↩︎

42. Rutter argues for this point in Web Typography, which is consistent with my own A/B tests, where even lousy changes are difficult to distinguish from zero effect despite large n, and with the general shambolic state of the Internet. If loading times of multiple seconds cause only relatively modest traffic reductions, things like aligning columns properly or using section signs or sidenotes must have effects on behavior so close to zero as to be unobservable.↩︎

43. Paraphrased from Dialogues of the Zen Masters as quoted on pg 11 of the Editor’s Introduction:

One day a man of the people said to Master Ikkyu: “Master, will you please write for me maxims of the highest wisdom?” Ikkyu immediately brushed out the word ‘Attention’. “Is that all? Will you not write some more?” Ikkyu then brushed out twice: ‘Attention. Attention.’ The man remarked irritably that there wasn’t much depth or subtlety to that. Then Ikkyu wrote the same word 3 times running: ‘Attention. Attention. Attention.’ Half-angered, the man demanded: “What does ‘Attention’ mean anyway?” And Ikkyu answered gently: “Attention means attention.”

↩︎
44. Sidenotes have long been used as a typographic solution to densely-annotated texts such as the Talmud, but have not shown up much online yet.↩︎

45. We write a short Haskell program as part of a pipeline:

echo '{-# LANGUAGE OverloadedStrings #-};
import Data.Text as T;
main = interact (T.unpack . T.unlines . Prelude.filter (/="") .
T.split (not . (`elem` ("0123456789,." :: String))) . T.pack)' > ~/number.hs &&
find ~/wiki/ -type f -name "*.page" -exec cat "{}" \; | runhaskell ~/number.hs |
sort | tr -d ',' | tr -d '.' | cut -c 1 | sed -e 's/0$//' -e '/^$/d' > ~/number.txt
↩︎
46. Graph then test:

numbers <- read.table("number.txt")
ta <- table(numbers$V1); ta

#     1     2     3     4     5     6     7     8     9
# 20550 20356  7087  5655  3900  2508  2075  2349  2068
## cribbing exact R code from http://www.math.utah.edu/~treiberg/M3074BenfordEg.pdf
sta <- sum(ta)
pb <- sapply(1:9, function(x) log10(1+1/x)); pb
m <- cbind(ta/sta,pb)
colnames(m)<- c("Observed Prop.", "Theoretical Prop.")
barplot( rbind(ta/sta,pb/sum(pb)), beside = T, col = rainbow(7)[c(2,5)],
xlab = "First Digit")
title("Benford's Law Compared to Writing Data")
legend(16,.28, legend = c("From Page Data", "Theoretical"),
fill = rainbow(7)[c(2,5)],bg="white")
chisq.test(ta,p=pb)
#
#     Chi-squared test for given probabilities
#
# data:  ta
# X-squared = 9331, df = 8, p-value < 2.2e-16
↩︎
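As a cross-check on the R session above, the chi-squared statistic can be recomputed directly from the printed digit counts. A small JavaScript sketch (illustrative only); it reproduces the X-squared ≈ 9331 figure and shows that the digit ‘2’ contributes the bulk of the misfit:

```javascript
// Chi-squared goodness-of-fit of the observed first-digit counts
// (from the R table() output above) against Benford's law.
const observed = [20550, 20356, 7087, 5655, 3900, 2508, 2075, 2349, 2068];
const total = observed.reduce((a, b) => a + b, 0);

// Benford's law: P(first digit = d) = log10(1 + 1/d); these sum to exactly 1.
const benford = Array.from({ length: 9 }, (_, i) => Math.log10(1 + 1 / (i + 1)));

const contributions = observed.map((obs, i) => {
    const expected = total * benford[i];
    return (obs - expected) ** 2 / expected;
});
const chisq = contributions.reduce((a, b) => a + b, 0);
// chisq ≈ 9331, matching R's chisq.test; the digit-'2' term alone is ~6400.
```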
47. PD increases economic efficiency through—if nothing else—making works easier to find. If that is so, then difficulty of finding works reduces the welfare of artists and consumers alike, because both forgo a beneficial trade (the artist loses any revenue and the consumer loses any enjoyment). Even small increases in inconvenience make a real difference.↩︎

48. Not that I could sell anything on this wiki; and if I could, I would polish it as much as possible, giving me fresh copyright.↩︎