# Design Of This Website

Meta page describing Gwern.net site implementation and experiments for better ‘structural reading’ of hypertext; technical decisions using Markdown and static hosting.

2010-10-01–2022-04-07 finished
certainty: highly likely

Gwern.net is implemented as a static website compiled via Hakyll from Pandoc Markdown and hosted on a dedicated server (due to expensive cloud bandwidth).

It stands out from your standard Markdown static website by aiming at good typography, fast performance, and advanced hypertext browsing features (at the cost of great implementation complexity); the 4 design principles are: aesthetically-pleasing minimalism, accessibility/​progressive-enhancement, speed, and a ‘structural reading’ approach to hypertext use.

Unusual features include the monochrome esthetics, sidenotes instead of footnotes on wide windows, efficient drop caps/​smallcaps, collapsible sections, automatic inflation-adjusted currency, Wikipedia-style link icons & infoboxes, custom syntax highlighting⁠, extensive local archives to fight linkrot, and an ecosystem of “popup”/​“popin” annotations & previews of links for frictionless browsing—the net effect of hierarchical structures with collapsing and instant popup access to excerpts enables iceberg-like pages where most information is hidden but the reader can easily drill down as deep as they wish. (For a demo of all features & stress-test page, see Lorem Ipsum⁠.)

Also discussed are the many failed experiments/changes made along the way.

What does it take to present, for the long-term, complex, highly-referenced, link-intensive, long-form text online as effectively as possible, while conserving the reader’s time & attention?

# Benefit

“People who are really serious about software should make their own hardware.”

Alan Kay⁠, “Creative Think” 1982

The sorrow of web design & typography is that presentation can matter only a little to any given page. A page can be terribly designed and render as typewriter text in 80-column ASCII monospace, and readers will still read it, even if they complain about it. And the most tastefully-designed page, with true smallcaps and correct use of em-dashes vs en-dashes vs hyphens vs minuses and all, which loads in a fraction of a second and is SEO-optimized, is of little avail if the page has nothing worth reading; no amount of typography can rescue a page of dreck. Perhaps 1% of readers could even name any of these details, much less recognize them. If we added up all the small touches, they surely make a difference to the reader’s happiness, but it would have to be a small one—say, 5%.1 It’s hardly worth it if one writes only a few things.

But the joy of web design & typography is that just its presentation can matter a little to all your pages. Writing is hard work, and any new piece of writing will generally add to the pile of existing ones, rather than multiplying it all; it’s an enormous amount of work to go through all one’s existing writings and improve them somehow, so it usually doesn’t happen. Design improvements, on the other hand, benefit one’s entire website & all future readers, and so at a certain scale, can be quite useful. I feel I’ve reached the point where it’s worth sweating the small stuff, typographically.

# Principles

There are 4 design principles:

1. Aesthetically-pleasing Minimalism

The design esthetic is minimalist, with a dash of Art Nouveau⁠. I believe that minimalism helps one focus on the content. Anything besides the content is distraction and not design⁠. ‘Attention!’, as Ikkyu would say.2

The palette is deliberately kept to grayscale as an experiment in consistency and whether this constraint permits a readable, aesthetically-pleasing website. Various classic typographical tools, like drop caps and small caps, are used for theming or emphasis.3

This does not mean lacking features; many ‘minimalist’ designs proud of their simplicity are merely simple-minded.4

2. Accessibility & Progressive Enhancement

Semantic markup is used where Markdown permits. JavaScript is not required for the core reading experience, only for optional features: comments, table-sorting, sidenotes⁠, and so on. Pages can even be read without much problem on a smartphone or in a text browser like elinks.

3. Speed & Efficiency

On an increasingly-bloated Internet, a website which is anywhere remotely as fast as it could be is a breath of fresh air. Readers deserve better. Gwern.net uses many tricks to offer nice features like sidenotes or LaTeX math at minimal cost.

How should we present texts online? A web page, unlike many media such as print magazines, lets us provide an unlimited amount of text. We need not limit ourselves to overly concise constructions, which countenance contemplation but not conviction.

A web page, however, can be dynamic. The solution to the length problem is to progressively expose more beyond the default as the user requests it, and make requesting as easy as possible. For lack of a well-known term and by analogy to code folding in structural editors⁠/​outliners⁠, I call this structural reading: the hierarchy is made visible & malleable to allow reading at multiple levels of the structure.

A Gwern.net page can be read at multiple structural levels, high to low: title, metadata block, abstracts, section headers, margin notes, emphasized keywords in list items, footnotes/​sidenotes, collapsed sections or paragraphs, internal cross-referencing links to other sections (such as appendices) which popup for immediate reading, and fulltext links or internal links to other pages (also popping up).

Miscellaneous principles:

• visual differences should be semantic differences

• UI elements that can react should change on hover

• all UI elements should have tooltips/​summaries

• hierarchies & progressions should come in cycles of 3 (eg. bold > smallcaps > italics)

• all numbers should be 0, 1, or ∞

• function > form

• more > less

• self-contained > fragmented

• convention (linters/​checkers) > constraints

• hypertext is a great idea, we should try that!

• local > remote—every link dies someday

• archives are expensive short-term but cheap long-term

• speed is the 2nd-most important feature after correctness

• always bet on text

• if you go overboard on minimalism, you may barely be mediocre

• “users won’t tell you when it’s broken”

• UI consistency is underrated

• when in doubt, copy Wikipedia

• if you find yourself doing something 3 times⁠, fix it.

# Features

Notable features (compared to standard Markdown static site):

• Link popup annotations (‘popin’ on small screens or mobile):

Annotations are automatically extracted from Arxiv⁠/​BioRxiv/​MedRxiv/​Crossref or written by hand (formatting is kept consistent by an extensive series of rewrite rules & checks); popups can be recursive, and can be manipulated in many ways—moved, fullscreened, ‘stickied’ (anchored in place), etc. Wikipedia pages are specially-supported, enabling them to be recursively navigated as well. Local Gwern.net pages & whitelisted domains can be popped up and viewed in full; PDFs can be read inside a PDF viewer; and supported source code formats will pop up a syntax-highlighted version if available (eg).

• sidenotes using both margins, fallback to floating footnotes

• source code syntax highlighting (custom monochrome theme)

• code folding (collapsible sections/​code blocks/​tables)

• JS-free LaTeX math rendering (but where possible, HTML+Unicode is used instead, as it is much more efficient & natural-looking)

• dark mode (with a theme switcher)

• click-to-zoom images & slideshows; width-full tables/​images

• sortable tables; tables of various sizes

• automatic inflation-adjustment of dollar amounts & exchange-rate conversion of Bitcoin amounts

• “admonitions” infoboxes (Wikipedia-like by way of Markdeep)

• lightweight drop caps

• epigraphs

• automatic smallcaps typesetting

• multi-column lists

Much of Gwern.net design and JS/​CSS was developed by Said Achmiz⁠, 2017–2020. Some inspiration has come from Tufte CSS & Matthew Butterick’s Practical Typography⁠.

# Abandoned

Often the most interesting parts of any design are the ones that are invisible—what was tried but did not work.

Some post-mortems of things I tried but abandoned (in chronological order).

See Gwern.net Design Graveyard⁠.

# Tools

Software tools & libraries used in the site as a whole:

• The source files are written in Pandoc Markdown (Pandoc: John MacFarlane et al; GPL) (source files: Gwern Branwen, CC-0). The Pandoc Markdown uses a number of extensions; pipe tables are preferred for anything but the simplest tables; and I use semantic linefeeds (also called “semantic line breaks” or “ventilated prose”) formatting.

• math is written in LaTeX which compiles to MathML⁠, rendered by MathJax (Apache)

• syntax highlighting: we originally used Pandoc’s builtin Kate-derived themes, but most clashed with the overall appearance; after looking through all the existing themes, we took inspiration from Pygments’s algol_nu (BSD) based on the original ALGOL report, and typeset it in the IBM Plex Mono font6

• the site is compiled with the Hakyll v4+ static site generator, written in Haskell (Jasper Van der Jeugt et al; BSD); for the gory details, see hakyll.hs which implements the compilation, RSS feed generation, & parsing of interwiki links as well. This just generates the basic website; I do many additional optimizations/​tests before & after uploading, which is handled by sync-gwern.net.sh (Gwern Branwen, CC-0)

My preferred method of use is to browse & edit locally using Emacs⁠, and then distribute using Hakyll. The simplest way to use Hakyll is to cd into your repository and run runghc hakyll.hs build (with hakyll.hs having whatever options you like). Hakyll will build a static HTML/​CSS hierarchy inside _site/; you can then do something like firefox _site/index.html. (Because HTML extensions are not specified in the interest of cool URIs⁠, you cannot use the Hakyll watch webserver as of January 2014.) Hakyll’s main advantage for me is relatively straightforward integration with the Pandoc Markdown libraries; Hakyll is not that easy to use, and so I do not recommend it as a general static site generator unless one is already adept with Haskell.

• the CSS is borrowed from a motley of sources and has been heavily modified, but its origin was the Hakyll homepage & Gitit⁠; for specifics, see default.css

• Markdown syntax extensions:

• I implemented a Pandoc Markdown plugin for a custom syntax for interwiki links in Gitit, and then ported it to Hakyll (defined in hakyll.hs); it allows linking to the English Wikipedia (among others) with syntax like [malefits](!Wiktionary) or [antonym of 'benefits'](!Wiktionary "Malefits"). CC-0.
• inflation adjustment: Inflation.hs provides a Pandoc Markdown plugin which allows automatic inflation adjusting of dollar amounts, presenting the nominal amount & a current real amount, with a syntax like [$5]($1980).
• Book affiliate links are through an Amazon Affiliates tag appended in the hakyll.hs
• image dimensions are looked up at compilation time & inserted into <img> tags as browser hints
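The inflation-adjustment rewrite is easy to sketch outside Haskell. The following is a hypothetical simplified JavaScript version of what Inflation.hs does to the [$5]($1980) syntax; the CPI values and output format here are illustrative placeholders, not the plugin’s real data or exact presentation:

```javascript
// Hypothetical sketch of the [$AMOUNT]($YEAR) inflation-adjustment rewrite.
// CPI values are illustrative placeholders only.
const CPI = { 1980: 82.4, 2020: 258.8 };

function inflationAdjust(text, currentYear) {
  // match the [$5]($1980) Markdown syntax described above
  return text.replace(/\[\$([\d.]+)\]\(\$(\d{4})\)/g, (_, amount, year) => {
    // scale the nominal amount by the ratio of CPI levels
    const real = Number(amount) * CPI[currentYear] / CPI[Number(year)];
    return `$${amount} [${year}] ($${real.toFixed(2)} in ${currentYear})`;
  });
}
```

So a reader sees both the nominal amount and a current real amount, without the author ever updating old pages by hand.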
• JavaScript:

• the HTML tables are sortable via tablesorter (Christian Bach; MIT/​GPL)

• the MathML is rendered using MathJax

• analytics are handled by Google Analytics

• A/B testing is done using ABalytics (Daniele Mazzini; MIT) which hooks into Google Analytics (see testing notes) for individual-level testing; when doing site-level long-term testing like in the advertising A/B tests⁠, I simply write the JS manually.

• Generalized tooltip popups for loading introductions/​summaries/​previews of all links when one mouses-over a link; reads annotations, which are manually written & automatically populated from many sources (Wikipedia, Pubmed, BioRxiv, Arxiv, hand-written…), with special handling of YouTube videos (Said Achmiz, Shawn Presser⁠; MIT).

Note that ‘links’ here is interpreted broadly: almost everything can be ‘popped up’. This includes links to sections (or div IDs) on the current or other pages, PDFs (often page-linked using the obscure but handy #page=N feature), source code files (which are syntax-highlighted by Pandoc), locally-mirrored web pages, footnotes/​sidenotes, any such links within the popups themselves recursively…

• the floating footnotes are handled by the generalized tooltip popups (they were originally implemented via footnotes.js); when the browser window is wide enough, the floating footnotes are instead replaced with marginal notes/​sidenotes7 using a custom library, sidenotes.js (Said Achmiz, MIT)

• image size: full-scale images (figures) can be clicked on to zoom into them with slideshow mode—useful for figures or graphs which do not comfortably fit into the narrow body—using another custom library, image-focus.js (Said Achmiz; GPL)

• error checking: problems such as broken links are checked in 3 phases:

## Implementation Details

There are a number of little tricks or details that web designers might find interesting.

Efficiency:

• fonts:

• Adobe Source Serif⁠/​Sans⁠/​Code Pro: originally Gwern.net used Baskerville⁠, but system fonts often don’t have true small caps (requiring browsers to synthesize ugly downscaled capitals as ‘small caps’). Some Baskerville digital fonts do, but if one is going to bite the bullet of needing to download a font, one might as well consider other options.

Adobe’s open-source “Source” font family of screen serifs⁠, however, is high quality and actively-maintained, comes with good small caps and multiple sets of numerals (‘old-style’ numbers for the body text and different numbers for tables), and looks particularly nice on Macs. (It is also subsetted to cut down the load time.) Small caps CSS is automatically added to abbreviations/​acronyms/​initials by a Hakyll/​Pandoc plugin, to avoid manual annotation.
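The automatic small-caps pass is a Hakyll/Pandoc plugin in Haskell; a much-simplified hypothetical JavaScript sketch of the idea (the real plugin operates on Pandoc AST text nodes, so it never touches markup, and it handles many more exceptions):

```javascript
// Hypothetical simplification: wrap all-caps acronyms (2+ capitals, optional
// trailing digits) in a span which the site CSS renders with true small caps.
function smallcapsify(text) {
  return text.replace(/\b([A-Z]{2,}[0-9]*)\b/g, '<span class="smallcaps">$1</span>');
}
```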

• efficient drop caps by subsetting: 1 drop cap is used on every page, but a typical drop cap font will slowly download as much as a megabyte in order to render 1 single letter.

CSS font loads avoid downloading font files which are entirely unused, but they must download the entire font file if anything in it is used, so it doesn’t matter that only one letter gets used. To avoid this, we split each drop cap font up into a single font file per letter and use CSS to load all the font files; since only 1 font file is used at all, only 1 gets downloaded, and it will be ~4kb rather than 168kb. This has been done for all the drop cap fonts used (yinit⁠, Cheshire Initials⁠, Deutsche Zierschrift⁠, Goudy Initialen⁠, Kanzlei Initialen), and the necessary CSS can be seen in fonts.css⁠. To specify the drop cap for each page, a Hakyll metadata field is used to pick the class and substituted into the HTML template.
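The per-letter trick works because the unicode-range descriptor makes a @font-face declaration lazy: the browser fetches a file only if a glyph in its declared range is actually used. A sketch of the pattern (file paths here are assumed for illustration; see fonts.css for the real declarations):

```css
/* One @font-face per letter; only the file whose letter appears gets downloaded. */
@font-face {
  font-family: "yinit";
  src: url("/static/font/dropcap/yinit/yinit-A.ttf") format("truetype");
  unicode-range: U+0041; /* 'A' only: ~4kb instead of the full ~168kb font */
}
@font-face {
  font-family: "yinit";
  src: url("/static/font/dropcap/yinit/yinit-B.ttf") format("truetype");
  unicode-range: U+0042; /* 'B' */
}
/* …and so on through 'Z', for each drop cap font. */
```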

• lazy JavaScript loading by IntersectionObserver: several JS features are used rarely or not at all on many pages, but are responsible for much network activity. For example, most pages have no tables but tablesorter must be loaded anyway.

To avoid this, IntersectionObserver can be used to write a small JS function which fires only when particular page elements are visible to the reader. The JS then loads the library, which does its thing. So an IntersectionObserver can be defined to fire only when an actual <table> element becomes visible, and on pages with no tables, this never happens. Similarly for image-focus.js. This trick is a little dangerous if a library depends on another library, because the loading might cause race conditions; fortunately, only 1 library (tablesorter) has a prerequisite (jQuery), so I simply prepend jQuery to tablesorter and load tablesorter.

Other libraries, like sidenotes or WP popups, aren’t lazy-loaded because sidenotes need to be rendered as fast as possible or the page will jump around & be laggy, and WP links are so universal it’s a waste of time making them lazy since they will be in the first screen on every page & be loaded immediately anyway, so they are simply loaded asynchronously with the async/​defer JS keywords.
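A minimal sketch of the lazy-loading pattern (browser-only; the function name and the loader callback standing in for the actual script injection are assumptions, not Gwern.net’s real code):

```javascript
// Fire a one-shot loader when any element matching `selector` scrolls into view.
// On pages with no matching elements (e.g. no <table>), nothing is ever loaded.
function lazyLoadOnVisible(selector, loadLibrary) {
  const targets = document.querySelectorAll(selector);
  if (targets.length === 0) return; // fast path: page never needs the library
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((e) => e.isIntersecting)) {
      observer.disconnect(); // load at most once
      loadLibrary();         // e.g. inject the jQuery+tablesorter bundle
    }
  });
  targets.forEach((t) => observer.observe(t));
}
```

Usage would look like `lazyLoadOnVisible("table", loadTablesorter)`, with one call per lazily-loaded feature.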

• image optimization: PNGs are optimized by pngnq/​advpng, JPEGs with mozjpeg, SVGs are minified, PDFs are compressed with ocrmypdf’s JBIG2 support. (GIFs are not used at all in favor of WebM/​MP4 <video>s.)

• JS/​CSS minification: because Cloudflare does Brotli compression, minification of JS/​CSS has little advantage and makes development harder, so no minification is done; the font files don’t need any special compression either.

• MathJax: getting well-rendered mathematical equations requires MathJax or a similar heavyweight JS library; worse, even after disabling features, the load & render time is extremely high—a page like the embryo selection page which is both large & has a lot of equations can visibly take >5s (as a progress bar that helpfully pops up informs the reader).

The solution here is to prerender MathJax locally after Hakyll compilation⁠, using the local tool mathjax-node-page to load the final HTML files, parse the page to find all the math, compile the expressions, define the necessary CSS, and write the HTML back out. Pages still need to download the fonts but the overall speed goes from >5s to <0.5s, and JS is not necessary at all.

• Automatic Link-Ification Regexps: I wrote LinkAuto.hs⁠, a Pandoc library for automatically turning user-defined regexp-matching strings into links, to automatically turn all the scientific jargon into Wikipedia or paper links. (There are too many to annotate by hand, especially as new terms are added to the list or abstracts are generated for popups.)

“Test all strings against a list of regexps and rewrite if they match” may sound simple and easy, but the naive approach is costly: n strings, r regexps tested on each, so 𝑂(nr) matches total. With >600 regexps initially & millions of words on Gwern.net… Regexp matching is fast, but it’s not that fast. Getting this into the range of ‘acceptable’ (~3× slowdown) required a few tricks.

The major trick is that each document is converted to a simple plain text format, and the regexps are run against the entire document; in the average case (think of short pages or popup annotations), there will be zero matches, and the document can be skipped entirely. Only the matching regexps get used in the full-strength AST traversal. While it is expensive to check a regexp against an entire document, it is an order of magnitude or two less expensive than checking that regexp against every string node inside that document!
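A toy JavaScript version of the two-phase trick (LinkAuto.hs itself is Haskell and rewrites Pandoc AST string nodes; the names and rules here are hypothetical, and the real pass also skips text that is already inside a link):

```javascript
// Phase 1: test every rule once against the whole plain-text document; in the
// common case (short pages, popup annotations) nothing matches and the
// expensive rewrite pass is skipped entirely.
// Phase 2: apply only the surviving rules.
function linkify(docText, rules) { // rules: [{ pattern, url }]
  const live = rules.filter((r) => new RegExp(r.pattern).test(docText));
  if (live.length === 0) return docText; // fast path: document untouched
  let out = docText;
  for (const r of live) {
    out = out.replace(new RegExp(r.pattern, "g"), (m) => `[${m}](${r.url})`);
  }
  return out;
}
```

Checking one regexp against the whole document is expensive, but far cheaper than checking it against every string node, which is what makes the pre-filter pay for itself.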

Correctness:

• Dark mode: our dark mode is custom, and tries to make dark mode a first-class citizen.

1. Avoiding Flashing & Laggy Scrolling: it is implemented in the standard best-practice way: two color palettes are defined (a set of color variables for every element for light mode, with the dark-mode colors automatically generated from them by inverting & gamma-correcting), and JS toggles which palette’s media-query applies, enabling the new colors instantly.

This avoids the ‘flash of white’ on page loads which regular JS-based approaches incur (because the CSS media-queries can only implement auto-dark-mode, and the dark mode widget requires JS; however, the JS, when it decides to inject dark mode CSS into the page, is too late and that CSS will be rendered last after the reader has already been exposed to the flash). The separate color palette approach also avoids the lag & jank of using invert CSS filters (one would think that invert(100%) would be free from a performance standpoint—but it is not).

2. Native Dark Mode Color Scheme: we modify the color scheme as necessary: some blue breaks the monochrome coloring, and the source code syntax highlighting is tweaked for dark mode visibility.

Because of the changes in contrast, inverting the color scheme only mostly works. In particular, inline code & code blocks tend to disappear.

3. Inverted Images: color images are desaturated & grayscaled by default to reduce their brightness; grayscale/​monochrome images are automatically inverted by an ImageMagick heuristic, falling back to manual annotations.

This avoids the common failure mode where a blog uses a dark mode library which implements the class approach correctly… but then all of their images still have blinding bright white backgrounds or overall coloration, defeating the point! However, one cannot just blindly invert images because many images, photographs of people especially, are garbage as ‘photo-negatives’.

4. Three-Way Dark Mode Toggle: Many dark modes are implemented with a simple binary on/​off logic stored in a cookie, ignoring browser/​OS preferences, or simply defining ‘dark mode’ as the negation of the current browser/​OS preference.

This is incorrect, and leads to odd situations like a website enabling dark mode during the day, and then light mode during the night! Using an auto/​dark/​light three-way toggle means that users can force dark/​light mode but also leave it on ‘auto’ to follow the browser/​OS preference over the course of the day. This requires a UI widget and it still incurs some of the problems of an auto-only dark mode⁠, but overall strikes the best balance between enabling dark mode unasked, user control/​confusion, and avoiding dark mode at the wrong time.
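The decision logic of the three-way toggle is simple; a sketch (the function name is assumed, not Gwern.net’s actual code):

```javascript
// stored: "auto" | "light" | "dark" (the user's widget setting);
// osPrefersDark: e.g. matchMedia("(prefers-color-scheme: dark)").matches
function resolveColorMode(stored, osPrefersDark) {
  if (stored === "light" || stored === "dark") return stored; // explicit override
  return osPrefersDark ? "dark" : "light"; // "auto": follow the OS over the day
}
```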

• collapsible sections: managing complexity of pages is a balancing act. It is good to provide all necessary code to reproduce results, but does the reader really want to look at a big block of code? Sometimes they always would, sometimes only a few readers interested in the gory details will want to read the code. Similarly, a section might go into detail on a tangential topic or provide additional justification, which most readers don’t want to plow through to continue with the main theme. Should the code or section be deleted? No. But relegating it to an appendix, or another page entirely is not satisfactory either—for code blocks particularly, one loses the literate programming aspect if code blocks are being shuffled around out of order.

A nice solution is to use a little JS to implement a code-folding approach where sections or code blocks can be visually shrunk or collapsed, and expanded on demand by a mouse click. Collapsed sections are specified by an HTML class (eg. <div class="collapse"></div>), and summaries of a collapsed section can be displayed, defined by another class (<div class="abstract-collapse">). This allows code blocks to be collapsed by default where they are lengthy or distracting, and entire regions to be collapsed & summarized, without resorting to many appendices or forcing the reader to an entirely separate page.
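The markup is just the classes named above; a sketch of a collapsed section with a visible summary (the inner content is illustrative):

```html
<!-- Collapsed by default; the abstract stays visible, the rest expands on click. -->
<div class="collapse">
  <div class="abstract-collapse">
    One-paragraph summary shown while the section is collapsed.
  </div>
  <p>Lengthy tangential discussion, or a big code block, hidden until requested…</p>
</div>
```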

• Sidenotes: why is sidenotes.js necessary, when most sidenote implementations, like Tufte-CSS, take a static HTML/CSS approach which avoids a JS library entirely and any visible repainting of the page after load?

The problem is that Tufte-CSS-style sidenotes do not reflow and are solely on the right margin (wasting the considerable whitespace on the left), and depending on the implementation, may overlap, be pushed far down the page away from their referents, break when the browser window is too narrow, or not work on smartphones/​tablets at all. (This is fixable⁠; Tufte-CSS’s maintainers just haven’t done so.) The JS library is able to handle all these cases, including the most difficult ones like my annotated edition of Radiance⁠. (Tufte-CSS-style epigraphs⁠, however, pose no such problems, and we take the same approach of defining an HTML class & styling with CSS.)

• Link icons: icons are defined for all filetypes used in Gwern.net and many commonly-linked websites such as Wikipedia, Gwern.net (within-page section links and between-page get ‘§’ & logo icons respectively), or YouTube; all are inlined into default.css as data URIs⁠; the SVGs are so small it would be absurd to have them be files.

• Redirects: static sites have trouble with redirects, as they are just static files. AWS S3 does not support a .htaccess-like mechanism for rewriting URLs. To allow moving pages & fixing broken links, I wrote Hakyll.Web.Redirect for generating simple HTML pages with redirect metadata+JS, which simply redirect from URL 1 to URL 2. After moving to Nginx hosting, I converted all the redirects to regular Nginx rewrite rules.

In addition to page renames, I monitor 404 hits in Google Analytics to fix errors where possible, and Nginx logs. There are an astonishing number of ways to misspell Gwern.net URLs, it turns out, and I have defined >10k redirects so far (in addition to generic regexp rewrites to fix patterns of errors).
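After the move to Nginx, the redirects are plain rewrite rules. Hypothetical examples of the two kinds described above (these specific paths are invented for illustration, not taken from the real config):

```nginx
# exact rule for a single renamed page
rewrite ^/docs/design$ /design permanent;

# generic regexp fixing a recurring pattern of errors,
# e.g. an accidentally-doubled extension
rewrite ^(/.+)\.html\.html$ $1 permanent;
```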

# Appendix

## Returns To Design?

What is the ‘shape’ of returns on investment in industrial design, UI/​UX, typography etc? Is it a sigmoid with a golden mean of effort vs return… or a parabola with an unhappy valley of mediocrity?

My experience with Gwern.net design improvements is that readers appreciated changes moderately early on in making its content more pleasant to read (if only by comparison to the rest of the Internet!), but after a certain point, it all ‘came together’, in some sense, and readers started raving over the design and pointing to Gwern.net’s design rather than its content. This is inconsistent with the default, intuitive model of ‘diminishing returns’, where each successive design tweak should be worth less than the previous one.

Is there a ‘perfection premium’ (perhaps as a signal of underlying unobservable quality⁠, or perhaps user interaction is like an O-ring process)?

Particularly with typography, there seems to be an infinite number of finicky details one could spend time on (much of which appears to be for novelty’s sake⁠, while vastly more important things like advertising harms go ignored by so-called designers). One’s initial guess is that it’d be diminishing returns like most things: it’d look something like a log curve, where every additional tweak costs more effort as one approaches the Platonic ideal. A more sophisticated guess would be that it’d look like a sigmoid: at first, something is so awful that any fixes are irrelevant to the user because that just means they suffer from a different problem (it doesn’t matter much if a website doesn’t render because of a JS bug if the text when it does render is so light-shaded that one can’t read it); then each improvement makes a difference to some users as it approaches a respectable mediocrity; and after that, it’s back to diminishing returns.

My experience with improving the design of Gwern.net & reading about design has made me wonder if either of those is right. The shape may resemble more of a parabola: the sigmoid, at some point, spikes up and returns increase rather than diminish?

Similarly, the LW team put an unusual amount of effort into designing a 2018 essay compilation⁠, making it stylish (even redrawing all the images to match the color themes), and they were surprised by how unusually large the preorders were: not a few percentage points larger, but many times larger. (There are many books on data visualization, but I suspect Edward Tufte’s books outsell them, even the best, by similar magnitudes.) And what should we make of Apple & design⁠, whose devices & software have glaring flaws and yet, by making more of an attempt, command a premium and are regarded well by the public? Or Stripe?8

If the sigmoid were right, just how much more effort would be necessary to elicit such jumps? Orders of magnitude more? I & Said have invested effort, certainly, but there are countless sites (even confining the comparison to just personal websites and excluding sites with professional full-time developers/​designers), whose creators have surely invested more time; millions of books are self-published every year; and Apple is certainly not the only tech company which tries to design things well.

What might be going on is related to the “aesthetic-usability effect”: at a certain level, the design itself becomes noticeable to the user for its esthetic effect and the esthetics itself becomes a feature adding to the experience. That is, at the bottom of the sigmoid, on a website strewn with typos and broken links and confusing colors, the user thinks “this website sucks!”, while in the middle, the user ceases to think of the website at all and just gets on with using it, only occasionally irritated by design flaws; finally, at a certain level, when all the flaws have been removed and the site itself is genuinely unironically beautiful, both the beauty & absence of flaws themselves become noticeable, and the reader thinks, “this website, it is—pretty awesome!” The spike is where suddenly the design itself is perceived as a distinct thing, not merely how the thing happens to be. Designers often aspire to an end-state of sprezzatura or the “crystal goblet”, where they do their job so well the user doesn’t realize there was a job to be done at all—but in this fallen world, where excellence seems so rare, the better one does the job, the more the contrast with all the botched jobs inevitably draws attention.

It is difficult for even the reader least interested in the topic to open a Tufte book, or walk into an Apple store, and not be struck by first impressions of elegance and careful design—which is not necessarily a good thing if that cannot be lived up to. (Any person struck by this must also realize that other people will be similarly impressed, using their own response as a proxy for the general reaction9⁠, and will take it as a model for aspiration; liking Apple or Tufte signals your good taste, and that makes them luxury products as much as anything.)

This suggests a dangerous idea (dangerous because a good excuse for complacency & mediocrity, especially for those who do not manage even mediocrity but believe otherwise): if you are going to invest in design, half-measures yield less than half-results. If the design is terrible, then one should continue; but if the design is already reasonable, then instead of there being substantial returns, the diminishing returns have already set in, and it may be a too-long slog from where you are to the point where people are impressed enough by the design for the esthetic effect to kick in. Those moderate improvements may not be worthwhile if one can only modestly improve on mediocrity; and a sufficiently-flawed design may not be able to reach the esthetic level at all, requiring a radical new design.

1. Rutter argues for this point in Web Typography⁠, which is consistent with my own A/B tests where even lousy changes are difficult to distinguish from zero effect despite large n, and with the general shambolic state of the Internet (eg. as reviewed in the 2019 Web Almanac). If users will not install adblock and loading times of multiple seconds have relatively modest traffic reductions, things like aligning columns properly or using section signs or sidenotes must have effects on behavior so close to zero as to be unobservable.↩︎

2. Paraphrased from Dialogues of the Zen Masters as quoted in pg 11 of the Editor’s Introduction to Three Pillars of Zen:

One day a man of the people said to Master Ikkyu: “Master, will you please write for me maxims of the highest wisdom?” Ikkyu immediately brushed out the word ‘Attention’. “Is that all? Will you not write some more?” Ikkyu then brushed out twice: ‘Attention. Attention.’ The man remarked irritably that there wasn’t much depth or subtlety to that. Then Ikkyu wrote the same word 3 times running: ‘Attention. Attention. Attention.’ Half-angered, the man demanded: “What does ‘Attention’ mean anyway?” And Ikkyu answered gently: “Attention means attention.”

↩︎
3. And also, admittedly, for esthetic value. One earns the right to add ‘extraneous’ details by first putting in the hard work of removing the actual extraneous details; only after the ground has been cleared—the ‘data-ink ratio’ maximized, the ‘chartjunk’ removed—can one see what is actually beautiful to add.↩︎

4. Good design may be “as little design as possible” which gets the job done, to paraphrase Dieter Rams⁠; the problem comes when designers focus on the first part, and forget the second part. If a minimalist design cannot handle more content than a few paragraphs of text & a generic ‘hero image’, then it has not solved the design problem, and is merely a sub-genre of illustration. (Like photographs of elegant minimalist Scandinavian or Japanese architecture which leave one wondering whether any human could live inside them, and how those buildings would learn⁠.) And if a minimalist website cannot even present some text well, you can be sure they have not solved any of the hard problems of web design like link rot or cross-referencing!↩︎

5. The default presentation of separate pages means that an entire page may contain only a single paragraph or sentence. The HTML versions of many technical manuals (typically compiled from LaTeX, DocBook, or GNU Info) are even worse, because they fail to exploit prefetching & are slower than local documentation, and take away all of the useful keybindings which makes navigating info manuals fast & convenient. Reading such documentation in a web browser is Chinese water torture. (That, decades later, the GNU project keeps generating documentation in that format, rather than at least as large single-page manuals with hyperlinked table-of-contents, is a good example of how bad they are at UI/​UX design.) And it’s not clear that it’s that much worse than the other extreme, the monolithic man page which includes every detail under the sun and is impossible to navigate without one’s eyes glazing over even using incremental search to navigate through dozens of irrelevant hits—every single time!↩︎

6. An unusual choice, as one does not associate IBM with font design excellence, but nevertheless, it was our choice after blind comparison of ~20 code fonts with variant zeroes (which we consider a requirement for code). An appealing newer alternative is JetBrains Mono (which doesn’t work as well with Gwern.net’s style, but may suit other websites).↩︎

7. Sidenotes have long been used as a typographic solution to densely-annotated texts such as the Geneva Bible (first 2 pages), but have not shown up much online yet.

An early & inspiring use of margin/​side notes.↩︎

8. Perhaps the returns to design are also going up with time as Internet designers increasingly get all the rope they need to hang themselves? What browser devs & Moore’s Law giveth, semi-malicious web designers take away⁠. Every year, the range of worst to best website gets broader, as ever new ways to degrade the browsing experience—not 1 but 100 trackers! newsletter popups! support chat! Taboola chumbox! ‘browser notifications requested’! 50MB of hero images! layout shifts right as you click on something!—are invented. 80-column ASCII text files on BBSes offer little design greatness, but they are also hard to screw up. To make an outstandingly bad website requires the latest CMSes, A/​B testing infrastructure to Schlitz your way to profitability, CDNs, ad network auctioning technology, and high-paid web designers using only Apple laptops. (A 2021 satire⁠; note that you need to disable adblock.) Given the subtlety of this creep towards degradation & short-term profits and the relatively weak correlation with fitness/​profitability, we can’t expect any rapid evolution towards better design, unfortunately, but there is an opportunity for those businesses with taste.↩︎

9. Which might account for why improvements in Gwern.net design also seem to correlate with more comments where the commenter appears infuriated by the design—that’s cheating!↩︎