design directory

Links

“Why Do Hipsters Steal Stuff?”, Branwen 2022

LARPing: “Why Do Hipsters Steal Stuff?”, Gwern Branwen (2022-04-29):

Many fashions and artworks originate as copies of practical objects. Why? Because any form of optimized design is intrinsically esthetically-pleasing, and a great starting point.

Countless genres of art start in appropriating objects long incubated in subcultures for originally practical purposes, often becoming fashionable and collectible because no longer practically relevant, such as fancy watches. This seems a little odd, and leads to weird economic situations where brands bend over backwards to try to maintain ‘authenticity’ by, say, showing that some $5,000 pair of sneakers sold to collectors has some connection to a real athlete.

With an infinite design-universe to explore, why does this keep happening, and why does anyone care so much? Why, indeed, is l’art pour l’art not enough, and why do people insist on the art being for something else, even when it blatantly is not?

Because humans respond esthetically not simply to complexity or ornamentation, but to the optimal combination of them in pursuit of some comprehensible goal, which yields constraint, uniqueness, and comprehensibility. A functional goal keeps artists honest, and drives the best design, furnishing an archive of designs that can be mined for other purposes like fashion.

For that reason, the choice of a goal or requirement can, even if completely irrelevant or useless, be a useful design tool by fighting laziness and mediocrity.


“Fake Journal Club: Teaching Critical Reading”, Branwen 2022

Fake-Journal-Club: “Fake Journal Club: Teaching Critical Reading”, Gwern Branwen (2022-03-07):

Discussion of how to teach active reading and questioning of scientific research. Partially fake research papers may teach a critical attitude. Various ideas for games reviewed.

How do researchers transition from uncritically absorbing research papers or arguments to actively grappling with them and questioning them? Most learn this meta-cognitive skill informally or by ad hoc mechanisms like being tutored by a mentor, or watching others critique papers at a ‘journal club’. This patchwork may not always work or be the best approach, as it is slow and largely implicit; as with calibration training in statistical forecasting, targeted training may be able to teach the skill rapidly.

To teach this active reading attitude of not believing everything you read, I borrow the pedagogical strategy of deliberately inserting errors which the student must detect, proposing fake research articles which could be read in a ‘fake journal club’.

Faking entire articles is a lot of work and so I look at variations on it. I suggest that NN language models like GPT-3 have gotten good enough to, for short passages, provide a challenge for human readers, and that one could create a fake journal club by having a language model repeatedly complete short passages of research articles (possibly entirely fictional ones).

This would provide difficult criticism problems with rapid feedback, scalability to arbitrarily many users, and great flexibility in content.

“That’s Nothing Compared to Japanese Consumers.”, tzs 2022

“That’s nothing compared to Japanese consumers.”, tzs (2022-01-28):

We were a US company working with a Japanese software distributor to do Japanese versions of our products. Occasionally on some Japanese non-IBM compatible PCs we were seeing a lockup during installation.

It was the kind of lockup where CTRL-ALT-DEL does nothing, the CAPS LOCK light no longer toggles, and if you have a GUI, the mouse pointer no longer moves. There’s usually pretty much nothing to do at that point except hit the reset button or toggle power.

It was quite rare, giving us little to work with. Our Japanese partners decided it was rare enough to go ahead and ship, handling the (hopefully) handful of people that hit it via tech support.

So we shipped. And they got something like 100 support calls—but the callers were not upset. In fact, they were happy with the product except that they wanted to suggest that the installer should be made faster or should run in the background so they could use the computer while the install takes place. The reports said that the install took something like 20–30 hours.

“Fooled by Beautiful Data: Visualization Aesthetics Bias Trust in Science, News, and Social Media”, Lin & Thornton 2022

“Fooled by beautiful data: Visualization aesthetics bias trust in science, news, and social media”, Chujun Lin, Mark Allen Thornton (2022-01-04):

4 preregistered studies show that beauty increases trust in graphs from scientific papers, news, and social media.

Scientists, policymakers, and the public increasingly rely on data visualizations—such as COVID tracking charts, weather forecast maps, and political polling graphs—to inform important decisions. The aesthetic decisions of graph-makers may produce graphs of varying visual appeal, independent of data quality.

Here we tested whether the beauty of a graph influences how much people trust it. Across 3 studies, we sampled graphs from social media, news reports, and scientific publications, and consistently found that graph beauty predicted trust. In a 4th study, we manipulated both the graph beauty and misleadingness.

We found that beauty, but not actual misleadingness, causally affected trust.

These findings reveal a source of bias in the interpretation of quantitative data and indicate the importance of promoting data literacy in education. [Particularly worrisome given how often effective statistical-graphics design is ignored by designers optimizing only for beauty⁠.]

[Keywords: aesthetics, beauty-is-good stereotype/​halo effect⁠, causal effects, data visualizations, publication bias, public trust]

…Here we test the hypothesis that the beauty of data visualizations influences how much people trust them. We first examined the correlation between perceived beauty and trust in graphs. To maximize the generalizability and external validity of our findings, we systematically sampled graphs (Figure 1) of diverse types and topics (Figure 2) from the real world. These graphs spanned a wide range of domains, including social media (Study 1), news reports (Study 2), and scientific publications (Study 3). We asked participants how beautiful they thought the graphs looked and how much they trusted the graphs. We also measured how much participants found the graphs interesting, understandable, surprising, and negative, to control for potential confounds (Figure 3A). In addition to predicting trust ratings, we also examined whether participants’ beauty ratings predicted real-world impact. We measured impact using indices including the number of comments the graphs received on social media, and the number of citations the graphs’ associated papers had. Finally, we tested the causal effect of graph beauty on trust by generating graphs using arbitrary data (Study 4). We orthogonally manipulated both the beauty and the actual misleadingness of these graphs and measured how these manipulations affected trust.

Results: Beauty correlates with trust across domains. We found that participants’ trust in graphs was associated with how beautiful participants thought the graphs looked across all 3 domains (Figure 3B): social media posts on Reddit (Pearson’s r = 0.45, p = 4.15×10^−127 in Study 1a; r = 0.41, p = 3.28×10^−231 in Study 1b), news reports (r = 0.43, p = 1.14×10^−278 in Study 2), and scientific papers (r = 0.41, p = 6×10^−234 in Study 3). These findings indicate that, across diverse contents and sources of the graphs, perceived beauty and trust in graphs are reliably correlated in the minds of perceivers. The association between beauty and trust remained robust when controlling for factors that might influence both perceived beauty and trust, including how much participants thought the graphs were interesting, understandable, surprising, and negative (linear mixed modeling: b = 0.19, standardized 𝛽 = 0.22, p = 1.05×10^−30 in Study 1a; b = 0.14, 𝛽 = 0.16, p = 8.81×10^−46 in Study 1b; b = 0.14, 𝛽 = 0.15, p = 5.35×10^−35 in Study 2; b = 0.10, 𝛽 = 0.12, p = 1.85×10^−25 in Study 3; see Figure 1 for the coefficients of covariates). These findings indicate that beautiful visualizations predict increased trust even when controlling for the effects of interesting topics, understandable presentation, confirmation bias⁠, and negativity bias⁠.
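[To make the kind of analysis described above concrete, here is a minimal, hypothetical sketch of such a linear mixed model in Python using the statsmodels formula API. The data, variable names, and effect size below are invented stand-ins, and the authors’ own toolchain is not specified here; the paper also models random effects of graphs, omitted for brevity:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in data: graph ratings nested within participants.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "participant": rng.integers(0, 50, n),   # 50 hypothetical raters
        "beauty": rng.integers(1, 8, n),         # 7-point Likert ratings
        "interesting": rng.integers(1, 8, n),
        "understandable": rng.integers(1, 8, n),
        "surprising": rng.integers(1, 8, n),
        "negative": rng.integers(1, 8, n),
    })
    df["trust"] = 2 + 0.2 * df["beauty"] + rng.normal(0, 1, n)  # built-in beauty effect

    # Trust regressed on beauty plus covariates, random intercepts per participant.
    m = smf.mixedlm("trust ~ beauty + interesting + understandable + surprising + negative",
                    df, groups=df["participant"]).fit()
    print(m.summary())
]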

Figure 3: Correlations between beauty and trust in Studies 1–3. (A) Participants viewed each graph (top; an example from Study 3) and rated each graph on 6 aspects (bottom; the order was randomized). (B) The frequency of ratings (colored; presented with 2D kernel density) on the beauty and trust of the graphs in Studies 1a, 1b, 2, and 3 (from top to bottom), and univariate correlations between the 2 variables (line for linear regression, text for Pearson’s correlation, asterisks indicate statistical-significance: ✱✱✱ for p < 0.001; n = 2,681 in Study 1a; n = 5,780 in Study 1b; n = 6,204 in Study 2; n = 6,030 in Study 3).

Beauty predicts real-world popularity: We found that the real-world popularity of the graphs was associated with how beautiful participants thought they were. The more beautiful graphs from Reddit were associated with higher numbers of comments in both Study 1a (b = 0.04, 𝛽 = 0.04, p = 0.011) and Study 1b (b = 0.11, 𝛽 = 0.12, p = 2.84×10^−22). The more beautiful graphs from scientific journals were associated with papers that had higher numbers of citations in Study 3 (b = 0.07, 𝛽 = 0.05, p = 0.001; but not higher numbers of views, b = 0.03, 𝛽 = 0.02, p = 0.264). The association between the perceived beauty of a paper’s graphs and the paper’s number of citations remained robust when controlling for the paper’s publication date and how much participants thought the graphs were interesting, understandable, surprising, and negative (b = 0.05, 𝛽 = 0.04, p = 0.005). These findings suggest that people’s bias in favor of trusting beautiful graphs has real-world consequences.

Figure 4: Causal effects of beauty on trust in Study 4. (A) Manipulations of an example graph of a specific type and topic in 4 experimental conditions. (B) Manipulation check of beauty. Linear mixed model regression of beauty ratings (7-point Likert scale) on beauty manipulations (binary), while controlling for the manipulations of misleadingness and the random effects of participants, graph types, and graph topics (n = 2,574 observations). (C) Causal effects of beauty and misleadingness. Linear mixed model regression of trust ratings (7-point Likert scale) on beauty and misleadingness manipulations (binary), while controlling for the random effects of participants, graph types, and graph topics (n = 2,574 observations).

Discussion: …A second, non-mutually exclusive, explanation suggests that this apparent bias may be rooted in rational thinking. More beautiful graphs may indicate that the data is of higher quality and that the graph maker is more skillful [Steele & Iliinsky 2010, Beautiful Visualization: Looking at Data through the Eyes of Experts]. However, our results suggest that this reasoning may not be accurate. It does not require sophisticated techniques to make beautiful graphs: we reliably made graphs look more beautiful simply by increasing their resolution and color saturation, and using a legible, professional font (Figure 4A–B). Findings from the real-world graphs (Studies 1–3) also suggest that one could make a very basic graph such as a bar plot look very beautiful (Figure S2F). Visual inspection of the more and less beautiful real-world graphs suggests that people perceive graphs with more colors (eg. rainbow colors), shapes (eg. cartoons, abstract shapes), and meaningful text (eg. a title explaining the meaning of the graph) as more beautiful. Nor does it require high-quality data to make a beautiful graph: we generated graphs that were perceived as beautiful using arbitrary data (Figure 4B). Therefore, our findings highlight that the beauty of a graph may not be an informative cue for its quality. Even if beauty were correlated with actual data quality in the real world, it would be a dangerous and fallible heuristic to rely upon for evaluating research and media.

“The CEO Beauty Premium: Founder CEO Attractiveness and Firm Valuation in Initial Coin Offerings”, Colombo et al 2021

“The CEO beauty premium: Founder CEO attractiveness and firm valuation in initial coin offerings”, Massimo G. Colombo, Christian Fisch, Paul P. Momtaz, Silvio Vismara (2021-12-22):

ICOs allow ventures to collect funding from investors using blockchain technology. We leverage this novel funding context, in which information on the ventures and their future prospects is scarce, to empirically investigate whether the founder CEOs’ physical attractiveness is associated with increased funding (ie. amount raised) and post-funding performance (ie. buy-and-hold returns). We find that ventures with more attractive founder CEOs outperform ventures with less attractive CEOs in both dimensions. For ICO investors, this suggests that ICOs of firms with more attractive founder CEOs are more appealing investment targets. Our findings are also interesting for startups seeking external finance in uncertain contexts, such as ICOs. If startups can appoint attractive leaders, they may have better access to growth capital.


We apply insights from research in social psychology and labor economics to the domain of entrepreneurial finance and investigate how founder chief executive officers’ (founder CEOs’) facial attractiveness influences firm valuation.

Leveraging the novel context of initial coin offerings (ICOs), we document a pronounced founder CEO beauty premium, with a positive relationship between founder CEO attractiveness and firm valuation.

We find only very limited evidence of stereotype-based evaluations, through the association of founder CEO attractiveness with latent traits such as competence, intelligence, likeability, or trustworthiness. Rather, attractiveness seems to bear economic value per se, especially in a context in which investors base their decisions on a limited information set. Indeed, attractiveness has a sustainable effect on post-ICO performance.

“An Unsupervised Font Style Transfer Model Based on Generative Adversarial Networks”, Zeng & Pan 2021

2021-zeng.pdf: “An unsupervised font style transfer model based on generative adversarial networks”, Sihan Zeng, Zhongliang Pan (2021-12-15):

Because of their complex structure and sheer number, Chinese characters make designing a complete character set extremely time-consuming. As a result, the dramatic growth of characters used in various fields such as culture and business has created a strong supply-demand mismatch in Chinese font design. Although most existing Chinese character transformation models greatly alleviate this demand, the semantics of the generated characters cannot be guaranteed and generation efficiency is low. At the same time, these models require large amounts of paired training data, which demands substantial sample-processing time.

To address the problems of existing methods, this paper proposes an unsupervised Chinese character generation method based on generative adversarial networks, which fuses a Style-Attentional Network into a skip-connected U-Net as the GAN generator network architecture. It effectively and flexibly integrates local style patterns based on the semantic spatial distribution of content images while retaining feature information at different scales. After training, our model generates fonts that retain the content features of the source domain and the style features of the target domain. The addition of a style specification module and a classification discriminator allows the model to generate typefaces in multiple styles.

The generation results show that the model proposed in this paper can perform the task of Chinese character style-transfer well. The model generates high-quality images of Chinese characters and generates Chinese characters with complete structures and natural strokes.

In both quantitative and qualitative comparison experiments, our model achieves better visual quality and image performance indexes than existing models. In sample-size experiments, it still generates clearly structured fonts, demonstrating substantial robustness.

At the same time, the training conditions of our model are easy to meet and facilitate generalization to real applications.

[Keywords: Chinese characters, style transfer, generative adversarial networks, unsupervised learning, style-attentional networks]

“The Science of Visual Data Communication: What Works”, Franconeri et al 2021

2021-franconeri.pdf: “The Science of Visual Data Communication: What Works”, Steven L. Franconeri, Lace M. Padilla, Priti Shah, Jeffrey M. Zacks, Jessica Hullman (2021-12-15):

Effectively designed data visualizations allow viewers to use their powerful visual systems to understand patterns in data across science, education, health, and public policy. But ineffectively designed visualizations can cause confusion, misunderstanding, or even distrust—especially among viewers with low graphical literacy.

We review research-backed guidelines for creating effective and intuitive visualizations oriented toward communicating data to students, coworkers, and the general public. We describe how the visual system can quickly extract broad statistics from a display, whereas poorly designed displays can lead to misperceptions and illusions. Extracting global statistics is fast, but comparing between subsets of values is slow. Effective graphics avoid taxing working memory, guide attention, and respect familiar conventions.

Data visualizations can play a critical role in teaching and communication, provided that designers tailor those visualizations to their audience.

“𝜇NCA: Texture Generation With Ultra-Compact Neural Cellular Automata”, Mordvintsev & Niklasson 2021

“𝜇NCA: Texture Generation with Ultra-Compact Neural Cellular Automata”, Alexander Mordvintsev, Eyvind Niklasson (2021-11-26):

We study the problem of example-based procedural texture synthesis using highly compact models. Given a sample image, we use differentiable programming to train a generative process, parameterised by a recurrent Neural Cellular Automata (NCA) rule.

Contrary to the common belief that neural networks should be highly over-parameterised, we demonstrate that our model architecture and training procedure allows for representing complex texture patterns using just a few hundred learned parameters, making their expressivity comparable to hand-engineered procedural texture generating programs. The smallest models from the proposed 𝜇NCA family scale down to 68 parameters. When using quantisation to one byte per parameter, proposed models can be shrunk to a size range between 588 and 68 bytes.

Implementation of a texture generator that uses these parameters to produce images is possible with just a few lines of GLSL or C code.
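[To give a sense of how small such a model is, here is a minimal, hypothetical sketch of one NCA update step in Python/PyTorch. The channel count, perception filters, and single-layer update rule below are illustrative assumptions in the spirit of the paper, not the authors’ exact 𝜇NCA:

    import torch
    import torch.nn.functional as F

    C = 4                                    # cell-state channels (assumption)
    state = torch.rand(1, C, 64, 64)         # grid of cell states

    # Perception: fixed identity + Sobel filters applied to every channel.
    ident = torch.tensor([[0., 0, 0], [0, 1, 0], [0, 0, 0]])
    sobel_x = torch.tensor([[-1., 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8
    filters = torch.stack([ident, sobel_x, sobel_x.T])        # 3 filters
    kernel = filters.repeat(C, 1, 1).unsqueeze(1)             # (3C, 1, 3, 3)

    # All learned parameters: one tiny per-cell linear map (C*3C = 48 weights here).
    w = torch.randn(C, 3 * C, 1, 1) * 0.1

    for _ in range(32):                      # iterate the recurrent CA rule
        perception = F.conv2d(state, kernel, padding=1, groups=C)  # (1, 3C, 64, 64)
        state = state + F.conv2d(perception, w)                    # residual update
]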

“HTCN: Harmonious Text Colorization Network for Visual-Textual Presentation Design”, Yang et al 2021

2021-yang.pdf: “HTCN: Harmonious Text Colorization Network for Visual-Textual Presentation Design”, Xuyong Yang, Xiaobin Xu, Yaohong Huang, Nenghai Yu (2021-10-22):

The selection of text color is a time-consuming and important aspect in the designing of visual-textual presentation layout.

In this paper, we propose a novel deep neural network architecture for predicting text color in the designing of visual-textual presentation layout. The proposed architecture consists of a text colorization network, a color harmony scoring network, and a text readability scoring network. The color harmony scoring network is learned by training with color theme data with aesthetic scores. The text readability scoring network is learned by training with design works. Finally, the text colorization network is designed to predict text colors by maximizing both color harmony and text readability, as well as learning from designer’s choice of color.

In addition, this paper conducts a comparison with other methods based on random generation, color theory rules or similar features search.

Both quantitative and qualitative evaluation results demonstrate that the proposed method has better performance.

[Keywords: text colorization, color harmonization, text readability, visual-textual presentation design]

4.1 Datasets:

  1. Color Combination Aesthetics Score Dataset: We obtained the public Mechanical Turk dataset from [14], which consists of 10,743 carefully selected color themes created by users on Adobe Kuler, covering a wide range of highly and poorly rated color themes, each rated by at least 3 random users on a scale of 1 to 5. The Mechanical Turk dataset used Amazon Mechanical Turk to collect more user ratings for the selected themes, so that each theme was rated by 40 users. Finally, the average score for each theme was taken as its final score.
  2. Visual-Textual Design Works Dataset: We constructed a visual-textual design dataset called VTDSet (Visual-Textual Design Set) where 10 designers selected text colors in 5 to 7 areas on each of the total 1,226 images, resulting in 77,038 designed text colors and their corresponding information. We randomly selected 10,000 design results associated with 1,000 background images from the dataset as the training dataset, and 2,260 design results associated with the remaining 226 background images as the testing dataset.

4.4 Comparison with Other Methods: We compare the text colorization network HTCN proposed in this paper with the following 3 approaches:

  1. Random Text Colorization (“Random”). A random value is selected in the RGB color space, and this baseline is used to check whether the color design of the text in the generation of the visual-textual presentation layout is arbitrary.
  2. Text Colorization Based on Matsuda Color Wheel Theory (“Matsuda CW”). This text colorization method is based on color wheel theory, which is also adopted in the work of Yang et al [18]. We reproduce the method by first performing principal component analysis on the image to obtain the color theme, taking the color with the largest proportion as the base color Cd of the image, and then calculating the minimum harmonic color-wheel distance between the base color Cd and the aesthetic template color set according to the constraint defined by Matsuda, to obtain the optimal hue value of the text color Cr. Finally, the color mean μh,s,v of the image covered by the text area is calculated, and the optimal text color is obtained by reasonably maximizing the distance between μh,s,v and Cr in the (s, v) saturation-luminance space.
  3. Text Colorization Based on Image Feature Retrieval (“Retrieval”). A retrieval-based strategy is frequently used in design, ie. seeking references among solutions to similar problems. For the text colorization problem, the original designer’s color can become the recommended color when the background image and the text area are similar. As a result, we concatenate the global features of the image and the local image features of the text-covered region to obtain the K nearest-neighbor recommendations for the current text coloring by cosine distance (sketched in code below). We used the VGG-16 network [15] pretrained on the ImageNet dataset, and selected the output of the fc6 layer as the image features. The combined feature of the text region image I_text on the global image I is f = ⟨VGG(I), VGG(I_text)⟩. The text color corresponding to the most similar feature in the design library is selected for colorization.
Figure 3: Comparison of the actual effect of text colorization under various algorithms: (a) random generation of text colors, (b) method based on the Matsuda color wheel theory, (c) retrieval-based method that directly obtains corresponding color recommendations from historically similar design examples, (d) the HTCN network proposed in this paper, and (e) is the designer’s original work.
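[As a concrete illustration of the retrieval baseline, here is a short, hypothetical Python sketch of cosine-similarity retrieval over precomputed features; the function and variable names are invented, and it assumes VGG-16 fc6 features have already been extracted as described above:

    import numpy as np

    def recommend_text_colors(feat_global, feat_region, library_feats, library_colors, k=5):
        """Return the text colors of the k most similar designs in the library."""
        query = np.concatenate([feat_global, feat_region])   # f = <VGG(I), VGG(I_text)>
        lib = np.asarray(library_feats)                      # (N, 2*4096) feature matrix
        sims = lib @ query / (np.linalg.norm(lib, axis=1) * np.linalg.norm(query) + 1e-12)
        top = np.argsort(-sims)[:k]                          # k nearest by cosine similarity
        return [library_colors[i] for i in top]
]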

“Who Buys Fonts?”, Branwen 2021

Fonts: “Who Buys Fonts?”, Gwern Branwen (2021-04-21):

Fonts are durable, highly-reusable, compact, & high-quality software products which do not ‘bitrot’. Nevertheless, hundreds or thousands of new ones come out every year despite enormous duplication; why? I speculate that the answer is designer boredom: designers crave novelty.

Fonts are a rare highlight in software design—stable, with well-defined uses, highly-compatible software stacks, and long-lived. Unsurprisingly, there is a back-catalogue of tens or hundreds of thousands of digital fonts out there, many nigh-indistinguishable from the next in both form and function.

Why, then, do they all cost so much? Who is paying for them all, and even going around commissioning more fonts?

The casualness of the highly marked-up prices & the language around commissioned fonts strongly points to designers spending client money, largely for the sake of novelty & boredom, functioning as a cross-subsidy from large corporations to the art of typography. The surplus of fonts then benefits everyone else—as long as they can sort through all the choices!

“Experiences of Ugliness in Nature and Urban Environments”, Felisberti 2021

“Experiences of Ugliness in Nature and Urban Environments”, Fatima M. Felisberti (2021-03-17):

In folk psychology experiences of ugliness are associated with the negation of beauty and disorder, but empirical evidence is remarkably rare.

Here, participants (called ‘informed’) took 102 photographs of ugly landscapes and urban scenes and reflected on their experiences. Later, participants naïve to the intentional ugliness of the photographs rated the landscapes higher than the informed participants did. Ratings for urban scenes were similar in the 2 cohorts.

Reflective notes revealed that emotional experiences with visual ugliness could overlap (eg. decay), but ugliness was associated more frequently with fear and death in landscapes, and with sadness and disgust in urban scenes. The findings uncovered a complex layer of associations.

Experiences triggered by perceived ugliness were contingent on a composite of socio-cultural, emotional, and evolutionary factors. Rather than being the endpoint on an aesthetic scale culminating with beauty, ugliness seems to be experienced as an independent aesthetic experience with its own processing streams.

[Keywords: ugliness, emotion, nature, urban, environment, beauty]

“Entropy Trade-offs in Artistic Design: A Case Study of Tamil kolam”, Tran et al 2021

“Entropy trade-offs in artistic design: A case study of Tamil kolam”, N.-Han Tran, Timothy Waring, Silke Atmaca, Bret A. Beheim (2021-03-01):

From an evolutionary perspective, art presents many puzzles. Humans invest substantial effort in generating apparently useless displays that include artworks. These vary greatly from ordinary to intricate. From the perspective of signalling theory, these investments in highly complex artistic designs can reflect information about individuals and their social standing.

Using a large corpus of kolam art from South India (n = 3,139 kolam from 192 women), we test a number of hypotheses about the ways in which social stratification and individual differences affect the complexity of artistic designs.

Consistent with evolutionary signalling theories of constrained optimisation, we find that kolam art tends to occupy a ‘sweet spot’ at which artistic complexity, as measured by Shannon information entropy⁠, remains relatively constant from small to large drawings. This stability is maintained through an observable, apparently unconscious trade-off between 2 standard information-theoretic measures: richness and evenness⁠.

Although these drawings arise in a highly stratified, caste-based society, we do not find strong evidence that artistic complexity is influenced by the caste boundaries of Indian society. Rather, the trade-off is likely due to individual-level aesthetic preferences and differences in skill, dedication and time, as well as the fundamental constraints of human cognition and memory.

[Keywords: art, signalling, entropy⁠, skill, material culture, Bayesian inference]

Kolam drawings are geometric art practised by women in the Kodaikanal region of Tamil Nadu⁠, southern India (Layard 1937). A kolam consists of one or more loops drawn around a grid of dots (in Tamil called pulli). On a typical morning, a Tamil woman will prepare a grid of dots on the threshold of her home, and then draw a kolam with rice powder or chalk. During the day the drawing weathers away, and a new kolam is created the next day. Kolam drawings are historically traditions of matrilines, but more recently are also a topic of cultural education in Tamil schools. Girls in Tamil Nadu begin practising kolam-making from an early age, and competency in this art is considered necessary for the transition into womanhood (Nagarajan 2018, Feeding a thousand souls: Women, ritual, and ecology in India—An exploration of the kolam). Although the primary medium is the threshold of the home, women practice kolam-making in notebooks, and it is common for artists to share, copy and embellish each other’s kolam designs. Such unrestrained artistic exchange is fostered by the fact that kolam designs are not considered to belong to any one person, but rather to be a type of community knowledge (Nagarajan 2018). However, the ability to successfully draw aesthetically pleasing (ie. diverse, complex, large) kolam drawings is said to reflect certain qualities of a woman (eg. her degree of traditionalness or patience), and as such her capacity to run a household and become a good wife and mother (Laine 2013; Nagarajan 2018).

…Here we study the ner pulli nelevu or sikku kolam family because of its unique form. Because sikku kolam drawings represent an unusually strict system of artistic expression, they can be mapped onto a small identifiable set of gestures and are therefore well suited to systematic, quantitative analysis as a naturalistic model system of cultural evolution. A given kolam’s gesture sequence can be characterised by a number of informative summary statistics which capture aspects of the kolam itself: the sequence length (ie. the total number of gestures), the discrete canvas size (measured by the grid of dots, or pulli), the gesture density per unit canvas area, and gesture diversity as measured by evenness (here, the Gini index), richness, and Shannon information entropy.
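[For concreteness, the diversity statistics named above can be computed from a gesture sequence in a few lines; this is an illustrative Python sketch using one standard definition of each measure, not the authors’ code:

    import math
    from collections import Counter

    def kolam_stats(gestures):
        """Richness, Shannon entropy, and Gini evenness of a gesture sequence."""
        counts = Counter(gestures)
        n = len(gestures)
        probs = [c / n for c in counts.values()]
        richness = len(counts)                            # number of distinct gestures
        entropy = -sum(p * math.log2(p) for p in probs)   # Shannon entropy, in bits
        # Gini index over gesture counts (0 = perfectly even usage).
        xs, k = sorted(counts.values()), len(counts)
        gini = sum((2 * i - k - 1) * x for i, x in enumerate(xs, 1)) / (k * sum(xs))
        return richness, entropy, gini

    print(kolam_stats("ABABCCADAB"))  # (4, ~1.85, 0.25)
]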

Figure 3: Trade-off between evenness and richness. The grey lines measure maximum entropy isoclines. The raw kolam data are jittered and illustrated in blue (light blue = low density, dark blue = high density). The (90, 75, 50%) kernel density of the average richness and evenness for each canvas size of the data are depicted in the orange area (light orange to dark orange).

“How Is Science Clicked on Twitter? Click Metrics for Bitly Short Links to Scientific Publications”, Fang et al 2021

“How is science clicked on Twitter? Click metrics for Bitly short links to scientific publications”, Zhichao Fang, Rodrigo Costas, Wencan Tian, Xianwen Wang, Paul Wouters (2021-01-23):

To provide some context for the potential engagement behavior of Twitter users around science, this article investigates how Bitly short links to scientific publications embedded in scholarly Twitter mentions are clicked on Twitter.

Based on the click metrics of over 1.1 million Bitly short links referring to Web of Science (WoS) publications, our results show that around 49.5% of them were not clicked by Twitter users.

For those Bitly short links with clicks from Twitter, the majority of their Twitter clicks accumulated within a short period of time after they were first tweeted. Bitly short links to publications in the Social Sciences and Humanities tend to attract more clicks from Twitter than links in other subject fields. This article also assesses the extent to which Twitter clicks are correlated with other impact indicators. Twitter clicks are weakly correlated with scholarly impact indicators (WoS citations and Mendeley readers), but moderately correlated with other Twitter engagement indicators (total retweets and total likes).

In light of these results, we highlight the importance of paying more attention to the click metrics of URLs in scholarly Twitter mentions, to improve our understanding about the more effective dissemination and reception of science information on Twitter.

“Time Travel: A Live Demo of the Intermedia Hypertext System—Circa 1989”, Meyrowitz 2020

2020-meyrowitz.pdf: “Time Travel: A Live Demo of the Intermedia Hypertext System—Circa 1989”, Norman K. Meyrowitz (2020-11-06):

[Talk by the author of Intermedia; blog: “Hypertext tools from the 80s”.] In the late 1980s, before the WWW came to be, hypertext was a hot new field. Brown University’s Institute for Research in Information and Scholarship (IRIS) developed Intermedia⁠, a networked, multiuser, multi-application hypermedia system that was well-known and oft-demoed at conferences (and used by the speaker for his keynote at Hypertext ’89). Its most lasting contribution has been the speaker’s coining of the word “anchor” to represent the “sticky selection” that is the source or destination of a link within documents. Anchors generalized these link endpoints to include any media type.

Intermedia’s development began in 1985. Its paradigm was the integration of bi-directional hypermedia links between different applications in what was then the graphical desktop interface introduced by Apple only a year earlier.

Intermedia had many features, some of which have since become mainstream—anchors (links to a span of text or a set of objects, rather than just a point), full-text indexing, dictionary lookup, links in different media types—and some still not common in web browser-based systems—such as bi-directional links, integrated annotation capabilities, tracking of anchors in edited documents, and simultaneous linking by multiple individuals across the network.

Two years ago, the Computer History Museum asked if the speaker could resurrect Intermedia to show at the celebration of the 50th anniversary of Doug Engelbart’s Mother-of-All-Demos⁠. It was believed that all the backup disks and tapes had deteriorated, but through the intervention of the hypertext gods, a disk was found that worked and had a full installation of Intermedia, along with demo files—including the Hypertext ’89 keynote content.

The speaker procured some Macintosh IIci machines, monitors, mice, and keyboards on eBay and amazingly, Intermedia ran.

In this presentation, you will see a fully-operational hypermedia system running quite nicely on a computer that is 250,000× slower than today’s high-end PCs.

“Footnote 36: Redisturbed: In This Issue We're Focusing on the Redisturbed Typeface For The New Decade [Redisturbed Is a Fresh Look at Our Original Disturbance Typeface from 1993. Looking Deeper at the Concept of an Unicase Alphabet and Designing It for Expanded Use Today. More Weights, Optical Sizes, Language Support and OpenType Features.]”, Tankard 2020

2020-jeremytankard-footnote-36-redisturbed.pdf: “Footnote 36: Redisturbed: In This Issue We're Focusing on the Redisturbed Typeface For The New Decade [Redisturbed is a fresh look at our original Disturbance typeface from 1993. Looking deeper at the concept of an unicase alphabet and designing it for expanded use today. More weights, optical sizes, language support and OpenType features.]”⁠, Jeremy Tankard (2020-10-05)

“Lorem Ipsum”, Branwen 2020

Lorem: “Lorem Ipsum”, Gwern Branwen (2020-09-27):

Systems stress-test page for Gwern.net functionality, exercising Markdown/​HTML/​CSS/​JS features at scale to check that they render correctly in mobile/​desktop.

Abstract of article summarizing the page. For design philosophy, see About⁠. This is a test page which exercises all standard functionality and features of Gwern.net, from standard Pandoc Markdown like blockquotes/​headers/​tables/​images, to custom features like sidenotes, margin notes, left/​right-floated and width-full images, columns, epigraphs, admonitions, small/​wide tables, smallcaps, collapse sections, link annotations, link icons.

User-visible bugs which may appear on this page: zooms into small rather than large original image (mobile).

“Singular: Possible Futures of the Singularity”, Yu & GPT-3 2020

“Singular: Possible futures of the singularity”, James Yu, GPT-3 (2020-08-20):

[Fiction writing exercise by James Yu⁠, using OpenAI GPT-3 via Sudowrite as a coauthor and interlocutor, to write a SF story about AIs and the Singularity⁠. Rather than edit GPT-3 output, Yu writes most passages and alternates with GPT-3 completions. Particularly striking for the use of meta-fictional discussion, presented in sidenotes⁠, where Yu and GPT-3 debate the events of the story: “I allowed GPT-3 to write crucial passages, and each time, I chatted with it ‘in character’, prompting it to role-play.”]

In each of these stories, colored text indicates a passage written by GPT-3. I used the Sudowrite app to generate a set of possibilities, primed with the story’s premise and a few paragraphs.

I chatted with GPT-3 about the passage, prompting it to roleplay as the superintelligent AI character in each story. I question the AI’s intent, leading to a meta-exchange where we both discover and create the fictional narrative in parallel. This kind of interaction—where an author can spontaneously talk to their characters—can be an effective tool for creative writing. And at times, it can be quite unsettling.

Can GPT-3 hold beliefs? Probably not, since it is simply a pile of word vectors. However, these transcripts could easily fool me into believing that it does.

“Sidenotes In Web Design”, Branwen 2020

Sidenotes: “Sidenotes In Web Design”, Gwern Branwen (2020-08-06):

In typography/​design, ‘sidenotes’ place footnotes/​endnotes in the margins for easier reading. I discuss design choices, HTML implementations and their pros/​cons.

Sidenotes/​margin notes are a typographic convention which improves on footnotes & endnotes by instead putting the notes in the page margin to let the reader instantly read them without needing to refer back and forth to the end of the document (endnotes) or successive pages (footnotes spilling over).

They are particularly useful for web pages, where ‘footnotes’ are de facto endnotes, and clicking back and forth to endnotes is a pain for readers. (Footnote variants, like “floating footnotes” which pop up on mouse hover, reduce the reader’s effort but don’t eliminate it.)

However, they are not commonly used, perhaps because web browsers until relatively recently made it hard to implement sidenotes easily & reliably. Tufte-CSS has popularized the idea and since then, there has been a proliferation of slightly variant approaches. I review some of the available implementations.

For general users, I recommend Tufte-CSS: it is fast & simple (using only compile-time generation of sidenotes, rendered by static HTML/​CSS), popular, and easy to integrate into most website workflows. For heavy footnote users, or those who want a drop-in solution, runtime JavaScript-based implementations like sidenotes.js may be more useful.

“Technology Holy Wars Are Coordination Problems”, Branwen 2020

Holy-wars: “Technology Holy Wars are Coordination Problems”, Gwern Branwen (2020-06-15):

Flamewars over platforms & upgrades are so bitter not because people are jerks but because the choice will influence entire ecosystems, benefiting one platform through network effects & avoiding ‘bitrot’ while subtly sabotaging the rest through ‘bitcreep’.

The enduring phenomenon of ‘holy wars’ in computing, such as the bitterness around the prolonged Python 2 to Python 3 migration, is not due to mere pettiness or love of conflict, but because they are a coordination problem: dominant platforms enjoy strong network effects, such as reduced ‘bitrot’ as it is regularly used & maintained by many users, and can inflict a mirror-image ‘bitcreep’ on other platforms which gradually are neglected and begin to bitrot because of the dominant platform.

The outright negative effect of bitcreep means that holdouts do not just cost early adopters the possible network effects; they also greatly reduce the value of a given thing, and may cause the early adopters to be actually worse off and more miserable on a daily basis. Given the extent to which holdouts have benefited from the community, holdout behavior is perceived as parasitic and immoral behavior by adopters, while holdouts in turn deny any moral obligation and resent the methods that adopters use to increase adoption (such as, in the absence of formal controls, informal ones like bullying).

This desperate need for there to be a victor, and the large technical benefits/​costs to those who choose the winning/​losing side, explain the (only apparently) disproportionate energy, venom, and intractability of holy wars⁠.

Perhaps if we explicitly understand holy wars as coordination problems, we can avoid the worst excesses and tap into knowledge about the topic to better manage things like language migrations.

“Found: A Greasy Leftover Snack Inside a Rare Book—Whether a Cookie or a Fruit Bun, the 'offending Object' Has Been Discarded”, Taub 2020

“Found: A Greasy Leftover Snack Inside a Rare Book—Whether a cookie or a fruit bun, the 'offending object' has been discarded”, Matthew Taub (2020-03-11):

Emily Dourish, deputy keeper of Rare Books and Early Manuscripts at the Cambridge University Library, was recently making rounds through the collection when she made a most unusual discovery. Wedged inside a Renaissance-era volume of Saint Augustine’s complete works sat a flat, decaying, dry, partially eaten snack—likely a cookie, or “some kind of fruit bun”, though Dourish admits that the treat was well past easy identification.

…It’s not the first time that Dourish or her colleagues have found foreign objects inside their rare books. Over the years, they’ve encountered flower petals, unexpected annotations, bits of medieval manuscripts within actual book bindings, and even an unknown poem by the Dutch scholar Erasmus. One particularly notable example was a key found by Dourish’s colleague in a medieval manuscript, which left a rusty impression even after its removal…Sometimes, you find a plant inside a 15th-century German Bible…or wax drippings in 16th-century Spanish prayer books.

“Collections/Images: Cosmography Manuscript (12th Century)”, Review 2020

“Collections/Images: Cosmography Manuscript (12th Century)”, The Public Domain Review (2020-02-07):

This wonderful series of medieval cosmographic diagrams and schemas are sourced from a late 12th-century manuscript created in England. Coming to only 9 folios, the manuscript is essentially a scientific textbook for monks, bringing together cosmographical knowledge from a range of early Christian writers such as Bede and Isidore⁠, who themselves based their ideas on such classical sources as Pliny the Elder⁠, though adapting them for their new Christian context. As for the intriguing diagrams themselves, The Walters Art Museum⁠, which holds the manuscript and offers up excellent commentary on its contents, provides the following description:

The twenty complex diagrams that accompany the texts in this pamphlet help illustrate [the ideas], and include visualizations of the heavens and earth, seasons, winds, tides, and the zodiac, as well as demonstrations of how these things relate to man.

Most of the diagrams are rotae, or wheel-shaped schemata, favored throughout the Middle Ages for the presentation of scientific and cosmological ideas because they organized complex information in a clear, orderly fashion, making this material easier to apprehend, learn, and remember. Moreover, the circle, considered the most perfect shape and a symbol of God, was seen as conveying the cyclical nature of time and the Creation as well as the logic, order, and harmony of the created universe.

“HTML Living Standard: Text-level Semantics: 4.5.10: The ruby Element”, WhatWG 2020

“HTML Living Standard: Text-level semantics: 4.5.10: The ruby element”, WhatWG (2020-01-29):

The ruby element allows one or more spans of phrasing content to be marked with ruby annotations. Ruby annotations are short runs of text presented alongside base text, primarily used in East Asian typography as a guide for pronunciation or to include other annotations. In Japanese, this form of typography is also known as furigana…The ruby and rt elements can be used for a variety of kinds of annotations, including in particular (though by no means limited to) those described below. For more details on Japanese Ruby in particular, and how to render Ruby for Japanese, see Requirements for Japanese Text Layout.

Note: At the time of writing, CSS does not yet provide a way to fully control the rendering of the HTML ruby element. It is hoped that CSS will be extended to support the styles described below in due course.

Example: Mono-ruby for individual base characters in Japanese: One or more hiragana or katakana characters (the ruby annotation) are placed with each ideographic character (the base text). This is used to provide readings of kanji characters:

<ruby>B<rt>annotation</ruby>
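
For instance, an illustrative rendering following the pattern above (not copied from the spec), giving each kanji of 漢字 its own reading:

<ruby>漢<rt>かん</rt>字<rt>じ</rt></ruby>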

“Having Had No Predecessor to Imitate, He Had No Successor Capable of Imitating Him”, Menard 2020

“Having Had No Predecessor to Imitate, He Had No Successor Capable of Imitating Him”, Alvaro de Menard (2020-01-17):

[Summary of the Homeric Question that gripped Western classical literary scholarship for centuries: who wrote the Iliad/​Odyssey, when, and how? They appear in Greek history out of nowhere: 2 enormously lengthy, sophisticated, beautiful, canonical, unified works that would dominate Western literature for millennia, and yet, appeared to draw on no earlier tradition nor did Homer have any earlier (non-spurious) works. How was this possible?

The iconoclastic Analysts proposed it was a fraud, and the works were pieced together later out of scraps from many earlier poets. The Unitarians pointed to the overall quality; the complex (apparently planned) structure; the disagreements of Analysts on what parts were what pieces; and the Analysts’ inability to explain many anomalies in Homer: there are passages splicing together Greek dialects, passages which were metrical only given long-obsolete Greek letters/​pronunciations, and even individual words which mixed up Greek dialects! (Not that these anomalies were all that much easier to explain by the Unitarian hypothesis of a single author).

The eventual resolution relied on an old hypothesis: that Homer was in fact the product of a lost oral tradition⁠. There was, unfortunately, no particular evidence for it, and so it never made any headway against the Analysts or Unitarians—until Milman Parry found a living oral tradition of epic poetry in the Balkans, and discovered in it all the signs of the Homeric poems, from repetitive epithets to a patchwork of dialects, and thus empirical examples of how long oral traditions could produce a work like Homer if one of them happened to get written down at some point.]

“What Does BERT Dream Of? A Visual Investigation of Nightmares in Sesame Street”, Bäuerle & Wexler 2020

“What does BERT dream of? A visual investigation of nightmares in Sesame Street”, Alex Bäuerle, James Wexler (2020-01-13):

BERT⁠, a neural network published by Google in 2018, excels in natural language understanding. It can be used for multiple different tasks, such as sentiment analysis or next sentence prediction, and has recently been integrated into Google Search. This novel model has brought a big change to language modeling as it outperformed all its predecessors on multiple different tasks. Whenever such breakthroughs in deep learning happen, people wonder how the network manages to achieve such impressive results, and what it actually learned. A common way of looking into neural networks is feature visualization. The ideas of feature visualization are borrowed from Deep Dream, where we can obtain inputs that excite the network by maximizing the activation of neurons, channels, or layers of the network. This way, we get an idea about which part of the network is looking for what kind of input.

In Deep Dream, inputs are changed through gradient descent to maximize activation values. This can be thought of as similar to the initial training process, where through many iterations we try to optimize a mathematical equation; but instead of updating network parameters, Deep Dream updates the input sample. This leads to somewhat psychedelic but very interesting images that can reveal what kind of input these neurons react to. [Figure: examples of Deep Dream processes, with images from the original Deep Dream blogpost: a randomly initialized image is transformed by maximizing the activation of the corresponding output neuron, showing what a network has learned about different classes or individual neurons.]

Feature visualization works well for image-based models, but has not yet been widely explored for language models. This blogpost will guide you through experiments we conducted with feature visualization for BERT. We show how we tried to get BERT to dream of highly activating inputs, provide visual insights into why this did not work out as well as we hoped, and publish tools to explore this research direction further. When dreaming for images, the input to the model is gradually changed. Language, however, is made of discrete structures, ie. tokens, which represent words or word-pieces. Thus, there is no such gradual change to be made…Looking at a single pixel in an input image, such a change could be gradually going from green to red. The green value would slowly go down, while the red value would increase. In language, however, we cannot slowly go from the word “green” to the word “red”, as everything in between does not make sense. To still be able to use Deep Dream, we have to utilize the so-called Gumbel-Softmax trick, which has already been employed in a paper by Poerner et al 2018⁠. This trick was introduced by Jang et al and Maddison et al. It allows us to soften the requirement for discrete inputs, and instead use a linear combination of tokens as input to the model. To ensure that we do not end up with something crazy, it uses two mechanisms. First, it constrains this linear combination so that the linear weights sum up to one. This, however, still leaves the problem that we can end up with any linear combination of such tokens, including ones that are not close to real tokens in the embedding space. Therefore, we also make use of a temperature parameter, which controls the sparsity of this linear combination. By slowly decreasing this temperature value, we can make the model first explore different linear combinations of tokens, before deciding on one token.
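[The Gumbel-Softmax relaxation itself is short; here is a minimal, hypothetical PyTorch sketch of producing such a ‘soft token’ input. The dimensions are BERT-base values, and the embedding matrix is a random stand-in rather than BERT’s real one:

    import torch

    def gumbel_softmax(logits, temperature):
        # Sample Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
        u = torch.rand_like(logits)
        g = -torch.log(-torch.log(u + 1e-20) + 1e-20)
        # Lower temperature -> the weights approach a one-hot token choice.
        return torch.softmax((logits + g) / temperature, dim=-1)

    vocab_size, embed_dim = 30522, 768                    # BERT-base dimensions
    logits = torch.zeros(vocab_size, requires_grad=True)  # optimized by gradient descent
    embeddings = torch.randn(vocab_size, embed_dim)       # stand-in embedding matrix

    weights = gumbel_softmax(logits, temperature=2.0)     # linear weights, summing to 1
    soft_token = weights @ embeddings                     # linear combination of tokens
]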

…The lack of success in dreaming words to highly activate specific neurons was surprising to us. This method uses gradient descent and seemed to work for other models (see Poerner et al 2018). However, BERT is a complex model, arguably much more complex than the models that have been previously investigated with this method.

“Subscripting Typographic Convention For Citations/Dates/Sources/Evidentials: A Proposal”, Branwen 2020

“Subscripting Typographic Convention For Citations/Dates/Sources/Evidentials: A Proposal”, Gwern Branwen (2020-01-08):

Reviving an old General Semantics proposal: borrowing from scientific notation and using subscripts like ‘Gwern2020’ for denoting sources (like citation, timing, or medium) might be a useful trick for clearer writing, compared to omitting such information or using standard cumbersome circumlocutions.

“Subscripts For Citations”, Branwen 2020

Subscripts: “Subscripts For Citations”, Gwern Branwen (2020-01-08):

A typographic proposal: replace cumbersome inline citation formats like ‘Foo et al. (2010)’ with subscripted dates/​sources like ‘Foo2020’. Intuitive, easily implemented, consistent, compact, and can be used for evidentials in general.

I propose reviving an old General Semantics notation: borrow from scientific notation and use subscripts like ‘Gwern2020’ for denoting sources (like citation, timing, or medium). Using subscript indices is flexible, compact, universally technically supported, and intuitive. This convention can go beyond formal academic citation and be extended further to ‘evidentials’ in general, indicating the source & date of statements. While (currently) unusual, subscripting might be a useful trick for clearer writing, compared to omitting such information or using standard cumbersome circumlocutions.
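
In HTML, such a subscripted evidential needs no special machinery; a minimal illustration (hypothetical markup, not a prescribed implementation) is:

Foo<sub>2020</sub>

which renders the date as a subscript attached to the name.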

“Host: Deep into the Mercenary World of Take-no-prisoners Political Talk Radio [footnote Redesign]”, Achmiz 2020

2005-wallace-redesign.pdf: “Host: Deep into the mercenary world of take-no-prisoners political talk radio [footnote redesign]”, Said Achmiz (2020-01-08)

“Visual Model Fit Estimation in Scatterplots and Distribution of Attention: Influence of Slope and Noise Level”, Reimann et al 2020

2020-reimann.pdf: “Visual model fit estimation in scatterplots and distribution of attention: Influence of slope and noise level”, Daniel Reimann, Christine Blech, Robert Gaschler (2020):

Scatterplots are ubiquitous data graphs and can be used to depict how well data fit to a quantitative theory. We investigated which information is used for such estimates.

In Experiment 1 (n = 25), we tested the influence of slope and noise on perceived fit between a linear model and data points. Additionally, eye tracking was used to analyze the deployment of attention. Visual fit estimation might mimic one or the other statistical estimate: If participants were influenced by noise only, this would suggest that their subjective judgment was similar to root mean square error⁠. If slope was relevant, subjective estimation would mimic variance explained. While the influence of noise on estimated fit was stronger, we also found an influence of slope.

As most of the fixations fell into the center of the scatterplot, in Experiment 2 (n = 51), we tested whether location of noise affects judgment. Indeed, high noise influenced the judgment of fit more strongly if it was located in the middle of the scatterplot.

Visual fit estimates seem to be driven by the center of the scatterplot and to mimic variance explained.

“Free Movie Of the Week”, Things 2020

“Free Movie Of the Week”, Oh You Pretty Things (2020):

Filmmaker Gary Hustwit is streaming his documentaries free worldwide during the global COVID-19 crisis. Each Tuesday we’ll be posting another film here. We hope you enjoy them, and please stay strong.

March 14 to 21: Helvetica
March 24 to 31: Objectified
March 31 to April 7: Urbanized
April 7 to 14: Rams

April 14 to 21: Workplace (2016, 64 minutes) is a film about the past, present, and future of the office…

April 21 to 28: TBA
April 28 to May 5: TBA

[Oh You Pretty Things is a web shop run by a collective of filmmakers and visual artists based in Brooklyn NY. We make films, art, books, posters, photographs, clothing, and other fine stuff.]

“What’s in a Font?: Ideological Perceptions of Typography”, Haenschen & Tamul 2019

“What’s in a Font?: Ideological Perceptions of Typography”, Katherine Haenschen, Daniel J. Tamul (2019-12-20):

Although extensive political communication research considers the content of candidate messages, scholars have largely ignored how those words are rendered—specifically, the typefaces in which they are set. If typefaces are found to have political attributes, that may impact how voters receive campaign messages. Our paper reports the results of two survey experiments demonstrating that individuals perceive typefaces, type families, and type styles to have ideological qualities. Furthermore, partisanship moderates subjects’ perceptions of typefaces: Republicans generally view typefaces as more conservative than Independents and Democrats do. We also find evidence of affective polarization, in that individuals rate typefaces more favorably when perceived as sharing their ideological orientation. Results broaden our understanding of how meaning is conveyed in political communication, laying the groundwork for future research into the functions of typography and graphic design in contemporary political campaigns. Implications for political practitioners are also discussed.

[Keywords: political communication, ideology, partisanship, typeface, graphic design]

[Ranking: Blackletter, Times New Roman, Jubilat, Gill Sans, Birds of Paradise, Century Gothic, Sunrise.]

“How Machine Learning Can Help Unlock the World of Ancient Japan”, Lamb 2019

“How Machine Learning Can Help Unlock the World of Ancient Japan”⁠, Alex Lamb (2019-11-17; backlinks; similar):

Humanity’s rich history has left behind an enormous number of historical documents and artifacts. However, virtually none of these documents, containing stories and recorded experiences essential to our cultural heritage, can be understood by non-experts due to language and writing changes over time…This is a global problem, yet one of the most striking examples is the case of Japan. From 800 until 1900 CE, Japan used a writing system called Kuzushiji, which was removed from the curriculum in 1900 when elementary-school education was reformed. Currently, the overwhelming majority of Japanese speakers cannot read texts which are more than 150 years old. The volume of these texts—comprising over three million books in storage but only readable by a handful of specially-trained scholars—is staggering. One library alone has digitized 20 million pages from such documents. The total number of documents—including, but not limited to, letters and personal diaries—is estimated to be over one billion. Given that very few people can understand these texts, mostly those with PhDs in classical Japanese literature and Japanese history, it would be very expensive and time-consuming to pay scholars to convert these documents to modern Japanese. This has motivated the use of machine learning to automatically understand these texts.

…Given its importance to Japanese culture, the problem of using computers to help with Kuzushiji recognition has been explored extensively through a variety of methods in deep learning and computer vision. However, these models were unable to achieve strong performance on Kuzushiji recognition, due to inadequate understanding of Japanese historical literature in the optical character recognition (OCR) community and the lack of high-quality standardized datasets. To address this, the National Institute of Japanese Literature (NIJL) created and released a Kuzushiji dataset, curated by the Center for Open Data in the Humanities (CODH). The dataset currently has over 4000 character classes and a million character images.

KuroNet: KuroNet is a Kuzushiji transcription model that I developed with my collaborators, Tarin Clanuwat and Asanobu Kitamoto from the ROIS-DS Center for Open Data in the Humanities at the National Institute of Informatics in Japan. The KuroNet method is motivated by the idea of processing an entire page of text together, with the goal of capturing both long-range and local dependencies. KuroNet passes images containing an entire page of text through a residual U-Net architecture (FusionNet) in order to obtain a feature representation…For more information about KuroNet, please check out our paper “KuroNet: Pre-Modern Japanese Kuzushiji Character Recognition with Deep Learning”⁠, which was accepted to the 2019 International Conference on Document Analysis and Recognition (ICDAR)

Kaggle Kuzushiji Recognition Competition: While KuroNet achieved state-of-the-art results at the time of its development and was published in a top-tier conference on document analysis and recognition, we wanted to open this research up to the broader community. We did this partially to stimulate further research on Kuzushiji and to discover ways in which KuroNet may be deficient. Ultimately, after 3 months of competition, which saw 293 teams, 338 competitors, and 2652 submissions, the winner achieved an F1 score of 0.950. When we evaluated KuroNet on the same setup, we found that it achieved an F1 score of 0.902, which would have put it in 12th place—which, although acceptable, remains well below the best performing solutions.
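[For reference, the competition’s F1 metric is the harmonic mean of precision and recall over detected characters; a minimal sketch, with hypothetical counts:]

```python
# F1 = harmonic mean of precision & recall (counts below are hypothetical).
def f1(true_pos: int, false_pos: int, false_neg: int) -> float:
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

print(round(f1(902, 50, 58), 3))  # ≈0.944 with these made-up counts;
                                  # KuroNet scored 0.902, the winner 0.950
```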

Future Research: The work done by CODH has already led to substantial progress in transcribing Kuzushiji documents; however, the overall problem of unlocking the knowledge of historical documents is far from solved.

“They Might Never Tell You It’s Broken”, Chevalier-Boisvert 2019

“They Might Never Tell You It’s Broken”⁠, Maxime Chevalier-Boisvert (2019-11-02; ; backlinks; similar):

As part of my PhD, I developed Higgs, an experimental JIT compiler…I developed it on GitHub, completely in the open, and wrote about my progress on this blog. Pretty soon, the project had 300 stars on GitHub⁠, a handful of open source contributors, and I was receiving some nice feedback.

…One day, someone I had been exchanging with on the chat room for two weeks reached out to me to signal a strange bug. They couldn’t get the tests to pass and were getting a segmentation fault. I was puzzled. They asked me if Higgs had MacOS support. I explained that I’d never tested it on MacOS myself, but I couldn’t see any reason why it wouldn’t work. I told this person that the problem was surely on their end. Higgs had been open source for over a year. It was a pretty niche project, but I knew for a fact that at least 40–60 people must have tried it, and at least 50% of these people must have been running MacOS. I assumed that surely, if Higgs didn’t run on MacOS at all, someone would have opened a GitHub issue by now. Again, I was wrong.

…It’s a horrifying thought, but it could be that for every one person who opens an issue on GitHub, 100 or more people have already tried your project, run into that same bug, and simply moved on.

[One developer observes that “Despite having just 5.8% sales, over 38% of bug reports come from the Linux community”⁠, of which only 3⁄400 were Linux-specific—affected users on other platforms simply didn’t report them. Firefox likewise. Data science example. Scott Hanselman listed 19 categories of issues in his previous week (iTunes/​iPhoto simply being categories of their own: ‘everything’). Soren Bjornstad nearly failed an online exam due to a cascade of 15+ problems & attempted fixes, fixes which he estimates required knowledge of at least 12 different pieces of tech esoterica that a user shouldn’t have to know. tzs notes Japanese users are so inured to bad software they’ll let something install for 30 hours. Dan Luu chronicles a week of bugs he observed, large & small (so many he couldn’t report them all). I’ve observed quite a few bugs just from cats walking on my keyboard⁠. (An IRL version of fuzz testing⁠, itself notorious for finding endless bugs in any software it’s used on.) My experience reporting website bugs in particular has been that many of them were unknown. Gwern.net examples of this include: 400,000+ Chinese visitors to This Waifu Does Not Exist not mentioning that the mobile version was horribly broken; Apple users not mentioning that 80% of Gwern.net videos didn’t play for them or that Apple won’t support OGG music; the Anime Faces page loading 500MB+ of files on each page load… Another fun example: popups on all Wikipedias worldwide could, for ~5 months (September 2020–January 2021), be disabled but not re-enabled (affecting ~24 billion page views per month or ~120 billion page views total); no one mentioned it until we happened to investigate the feature while cloning it for Gwern.net.

How can devs miss so many bugs, and so many be unreported by users? Many reasons. Users don’t know it’s supposed to not hurt, and have low expectations; they also develop horrifying workflows (obligatory XKCD), (ab)using features in ways designers never thought of. Developers (particularly ones with mechanical sympathy) undergo decades of operant conditioning which subconsciously teaches them to use software in the safest possible way (eg. not typing or mousing when software lags), fall prey to illusions of transparency about how clear something is to use, and suffer from a curse of expertise in knowing how their software should work, which they must unsee in order to break it. (Testing is a completely different mindset; for an amusing fictional demonstration, read “Stargate Physics 101”⁠.) Alan Kay: “…in many ways one of the most difficult things in programming is to find end user sensibilities. I think the general programmer personality is somebody who has, above all other things, learned how to cope with an enormous number of disagreeable circumstances.”]

Neon Genesis Evangelion: Graphic Designer Peiran Tan Plumbs the Typographic Psyche of the Celebrated Anime Franchise”, Tan 2019

Neon Genesis Evangelion: Graphic designer Peiran Tan plumbs the typographic psyche of the celebrated anime franchise”⁠, Peiran Tan (2019-10-17; ; backlinks; similar):

[A look into the signature typefaces of Evangelion: Matisse EB, mechanical compression for distorted resizing, and title cards⁠. Covered typefaces: Matisse/​Helvetica/​Neue Helvetica/​Times/​Helvetica Condensed/​Chicago/​Cataneo/​Futura/​Eurostile/​ITC Avant Garde Gothic/​Gill Sans.]

Evangelion was among the first anime to create a consistent typographic identity across its visual universe, from title cards to NERV’s user interfaces. Subcontractors usually painted anything type-related in an anime by hand, so it was a novel idea at the time for a director to use desktop typesetting to exert typographic control. Although sci-fi anime tended to use either sans serifs or hand lettering that mimicked sans serifs in 1995, Anno decided to buck that trend, choosing a display serif for stronger visual impact. After flipping through iFontworks’ specimen catalog, he personally selected the extra-bold (EB) weight of Matisse (マティス), a Mincho-style serif family…A combination of haste and inexperience gave Matisse a plain look and feel, which turned out to make sense for Evangelion. The conservative skeletal construction restrained the characters’ personality so it wouldn’t compete with the animation; the extreme stroke contrast delivered the desired visual punch. Despite the fact that Matisse was drawn on the computer, many of its stroke corners were rounded, giving it a hand-drawn, fin-de-siècle quality.

…In addition to a thorough graphic identity, Evangelion also pioneered a deep integration of typography as a part of animated storytelling—a technique soon to be imitated by later anime. Prime examples are the show’s title cards and flashing type-only frames mixed in with the animation. The title cards contain nothing but crude, black-and-white Matisse EB, and are often mechanically compressed to fit into interlocking compositions. This brutal treatment started as a hidden homage to the title cards in old Toho movies from the sixties and seventies, but soon became visually synonymous with Evangelion after the show first aired. Innovating on the media of animated storytelling, Evangelion also integrates type-only flashes. Back then, these black-and-white, split-second frames were Anno’s attempt at imprinting subliminal messages onto the viewer, but have since become Easter eggs for die-hard Evangelion fans as well as motion signatures for the entire franchise.

…Established in title cards, this combination of Matisse EB and all-caps Helvetica soon bled into various aspects of Evangelion, most notably the HUD user interfaces in NERV. Although it would be possible to attribute the mechanical compression to technical limitations or typographic ignorance, its ubiquitous occurrence did evoke haste and, at times, despair—an emotional motif perfectly suited to a post-apocalyptic story with existentialist themes.

“Genome-wide Association Studies in Ancestrally Diverse Populations: Opportunities, Methods, Pitfalls, and Recommendations”, Peterson et al 2019

2019-peterson.pdf: “Genome-wide Association Studies in Ancestrally Diverse Populations: Opportunities, Methods, Pitfalls, and Recommendations”⁠, Roseann E. Peterson, Karoline Kuchenbaecker, Raymond K. Walters, Chia-Yen Chen, Alice B. Popejoy, Sathish Periyasamy et al (2019-10-17; ; backlinks)

“How Can We Develop Transformative Tools For Thought?”, Matuschak & Nielsen 2019

“How Can We Develop Transformative Tools For Thought?”⁠, Andy Matuschak, Michael Nielsen (2019-10; ; backlinks; similar):

[Long writeup by Andy Matuschak and Michael Nielsen on an experiment in integrating spaced-repetition systems with a tutorial on quantum computing, Quantum Country: Quantum Computing For The Very Curious. By combining explanation with spaced testing, a notoriously thorny subject may be learned more easily and then actually remembered—such a system demonstrating a possible ‘tool for thought’. Early results indicate users do indeed remember the quiz answers, and feedback has been positive.]
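[A minimal sketch of the ‘spaced testing’ idea, assuming the usual expanding-interval schedule of spaced-repetition systems; Quantum Country’s actual scheduler may differ:]

```python
# Expanding-interval review schedule (generic illustration, not
# Quantum Country's actual algorithm): each successful review
# multiplies the delay until the next, so upkeep per question falls.
def review_days(first_interval: float = 1.0, multiplier: float = 2.0,
                reviews: int = 8) -> list[float]:
    days, t, interval = [], 0.0, first_interval
    for _ in range(reviews):
        t += interval
        days.append(t)
        interval *= multiplier
    return days

print(review_days())  # days 1, 3, 7, 15, 31, 63, 127, 255: 8 reviews span ~8 months
```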

Part I: Memory systems

  • Introducing the mnemonic medium
  • The early impact of the prototype mnemonic medium
  • Expanding the scope of memory systems: what types of understanding can they be used for?
  • Improving the mnemonic medium: making better cards
  • Two cheers for mnemonic techniques
  • How important is memory, anyway?
  • How to invent Hindu-Arabic numerals?

Part II: Exploring tools for thought more broadly:

  • Mnemonic video

  • Why isn’t there more work on tools for thought today?

  • Questioning our basic premises

    • What if the best tools for thought have already been discovered?
    • Isn’t this what the tech industry does? Isn’t there a lot of ongoing progress on tools for thought?
    • Why not work on AGI or BCI instead?
  • Executable books

    • Serious work and the aspiration to canonical content
    • Stronger emotional connection through an inverted writing structure

Summary and Conclusion

… in Quantum Country an expert writes the cards, an expert who is skilled not only in the subject matter of the essay, but also in strategies which can be used to encode abstract, conceptual knowledge. And so Quantum Country provides a much more scalable approach to using memory systems to do abstract, conceptual learning. In some sense, Quantum Country aims to expand the range of subjects users can comprehend at all. In that, it has very different aspirations to all prior memory systems.

More generally, we believe memory systems are a far richer space than has previously been realized. Existing memory systems barely scratch the surface of what is possible. We’ve taken to thinking of Quantum Country as a memory laboratory. That is, it’s a system which can be used both to better understand how memory works, and also to develop new kinds of memory system. We’d like to answer questions such as:

  • What are new ways memory systems can be applied, beyond the simple, declarative knowledge of past systems?
  • How deep can the understanding developed through a memory system be? What patterns will help users deepen their understanding as much as possible?
  • How far can we raise the human capacity for memory? And with how much ease? What are the benefits and drawbacks?
  • Might it be that one day most human beings will have a regular memory practice, as part of their everyday lives? Can we make it so memory becomes a choice; is it possible to in some sense solve the problem of memory?

“CRISPR-Edited Stem Cells in a Patient With HIV and Acute Lymphocytic Leukemia”, Xu et al 2019

2019-xu.pdf: “CRISPR-Edited Stem Cells in a Patient with HIV and Acute Lymphocytic Leukemia”⁠, Lei Xu, Jun Wang, Yulin Liu, Liangfu Xie, Bin Su, Danlei Mou, Longteng Wang, Tingting Liu, Xiaobao Wang et al (2019-09-11; ; backlinks)

popups.js”, Achmiz 2019

popups.js: popups.js⁠, Said Achmiz (2019-08-21; ⁠, ; backlinks; similar):

popups.js: standalone Javascript library for creating ‘popups’ which display link metadata (typically, title/​author/​date/​summary), for extremely convenient reference/​abstract reading, with mobile and YouTube support. Whenever any such link is mouse-overed by the user, popups.js will pop up a large tooltip-like square with the contents of the attributes. This is particularly intended for references, where it is extremely convenient to autopopulate links such as to Arxiv.org/​Biorxiv.org/​Pubmed/​PLOS/​gwern.net/​Wikipedia with the link’s title/​author/​date/​abstract, so the reader can see it instantly.

popups.js parses an HTML document and looks for <a> links which have the link-annotated attribute class, and the attributes data-popup-title, data-popup-author, data-popup-date, data-popup-doi, data-popup-abstract. (These attributes are expected to be populated already by the HTML document’s compiler; however, they can also be populated dynamically. See wikipedia-popups.js for an example of a library which does Wikipedia-only popups dynamically on page loads.)

For an example of a Hakyll library which generates annotations for Wikipedia/​Biorxiv/​Arxiv⁠/​PDFs/​arbitrarily-defined links, see LinkMetadata.hs⁠.

“Rubrication Design Examples”, Branwen 2019

Red: “Rubrication Design Examples”⁠, Gwern Branwen (2019-05-30; ⁠, ⁠, ; backlinks; similar):

A gallery of typographic and graphics design examples of rubrication, a classic pattern of using red versus black for emphasis.

Dating back to medieval manuscripts, text has often been highlighted using a particular distinct bright red. The contrast of black and red on a white background is highly visible and striking, and this has been reused many times, in a way which I have not noticed for other colors. I call these uses rubrication and collate examples I have noticed from many time periods. This design pattern does not seem to have a widely-accepted name or be commonly discussed, so I propose extending the term “rubrication” to all instances of this pattern, not merely religious texts.

Why this rubrication design pattern? Why red, specifically, and not, say, orange or purple? Is it just a historical accident? Cross-cultural research suggests that for humans, red may be intrinsically more noticeable & has a higher contrast with black, explaining its perennial appeal as a design pattern.

Regardless, it is a beautiful design pattern which has been used in many interesting ways over the millennia, and perhaps may inspire the reader.

Note: to hide apparatus like the links, you can use reader-mode.

“How Low Can You Go? Detecting Style in Extremely Low Resolution Images”, Searston et al 2019

2019-searston.pdf: “How low can you go? Detecting style in extremely low resolution images”⁠, Rachel A. Searston, Matthew B. Thompson, John R. Vokey, Luke A. French, Jason M. Tangen (2019-04-04; ; similar):

Accurate recognition and discrimination of complex visual stimuli is critical to human decision making in medicine, forensic science, aviation, security, and defense. This study highlights the sufficiency of redundant low-spatial and low-dimensional information for visual recognition and visual discrimination of 3 large-scale natural image sets.


Humans can see through the complexity of scenes, faces, and objects by quickly extracting their redundant low-spatial and low-dimensional global properties, or their style. It remains unclear, however, whether semantic coding is necessary, or whether visual stylistic information is sufficient, for people to recognize and discriminate complex images and categories.

In 2 experiments, we systematically reduce the resolution of hundreds of unique paintings, birds, and faces, and test people’s ability to discriminate and recognize them.

We show that the stylistic information retained at extremely low image resolutions is sufficient for visual recognition of images and visual discrimination of categories. Averaging over the 3 domains, people were able to reliably recognize images reduced down to a single pixel, with large differences from chance discriminability across 8 different image resolutions. People were also able to discriminate categories substantially above chance with an image resolution as low as 2×2 pixels.

We situate our findings in the context of contemporary computational accounts of visual recognition and contend that explicit encoding of the local features in the image, or knowledge of the semantic category, is not necessary for recognizing and distinguishing complex visual stimuli.

[Keywords: visual recognition, visual discrimination, ensemble, gist, perceptual expertise]
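[A sketch of the resolution-reduction manipulation using Pillow; the resampling choices here are assumptions for illustration, not the authors’ exact pipeline:]

```python
# Reduce an image to an extremely low resolution (down to 1×1), then
# blow it back up for display. Illustrative only; the paper's exact
# resampling pipeline may differ. Requires Pillow.
from PIL import Image

def reduce_resolution(path: str, n: int, display: int = 256) -> Image.Image:
    img = Image.open(path).convert("RGB")
    tiny = img.resize((n, n), resample=Image.Resampling.BOX)   # average down to n×n
    return tiny.resize((display, display),
                       resample=Image.Resampling.NEAREST)      # upscale, no smoothing

# reduce_resolution("painting.jpg", 1) -> a single mean-color square;
# reduce_resolution("painting.jpg", 2) -> a 2×2 "style gist".
```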

Figure 2: Panels A, B, and C depict participants’ mean discriminability (A), response bias (B), and rate correct scores (in seconds; C) in the recognition memory task as a function of image resolution (x-axes), along with their polynomial trend over pixels at the top of the 3 panels. All plots represent the 50 participants’ responses, collapsing over the 3 domains: paintings, birds, and faces. Panel D shows the receiver operating characteristic curves for the 8 image resolutions, overlaid with the “best-fitting” curve assuming binomial distributions (the dotted line indicates chance performance). Finally, the raincloud plots in Panel E depict a half violin plot of participants’ mean proportion correct scores across the 8 image resolutions overlaid with jittered data points from each individual participant, the mean proportion correct per resolution (the black dot), and standard error of the mean per resolution.

“Local-first Software: You Own Your Data, in spite of the Cloud [web]”, Kleppmann et al 2019

“Local-first software: You own your data, in spite of the cloud [web]”⁠, Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, Mark McGranaghan (Ink & Switch) (2019-04; ; backlinks; similar):

[PDF version]

Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.

In this article we propose “local-first software”: a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.

We survey existing approaches to data storage and sharing, ranging from email attachments to web apps to Firebase-backed mobile apps, and we examine the trade-offs of each. We look at Conflict-free Replicated Data Types (CRDTs): data structures that are multi-user from the ground up while also being fundamentally local and private. CRDTs have the potential to be a foundational technology for realizing local-first software.
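[As a concrete illustration of the CRDT idea (a minimal sketch, not Ink & Switch’s or any production implementation): a grow-only counter gives each device its own slot, and merging replicas takes element-wise maximums, so syncs in any order converge to the same state.]

```python
# G-Counter, the simplest CRDT: merge is element-wise max, which is
# commutative, associative, and idempotent, so replicas converge
# regardless of sync order. Illustrative sketch only.
class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

# Two devices edit offline, then sync in either order:
laptop, phone = GCounter("laptop"), GCounter("phone")
laptop.increment(3); phone.increment(2)
laptop.merge(phone); phone.merge(laptop)
assert laptop.value() == phone.value() == 5
```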

We share some of our findings from developing local-first software prototypes at Ink & Switch over the course of several years. These experiments test the viability of CRDTs in practice, and explore the user interface challenges for this new data model. Lastly, we suggest some next steps for moving towards local-first software: for researchers, for app developers, and a startup opportunity for entrepreneurs.

…in the cloud, ownership of data is vested in the servers, not the users, and so we became borrowers of our own data. The documents created in cloud apps are destined to disappear when the creators of those services cease to maintain them. Cloud services defy long-term preservation. No Wayback Machine can restore a sunsetted web application. The Internet Archive cannot preserve your Google Docs.

In this article we explored a new way forward for software of the future. We have shown that it is possible for users to retain ownership and control of their data, while also benefiting from the features we associate with the cloud: seamless collaboration and access from anywhere. It is possible to get the best of both worlds.

But more work is needed to realize the local-first approach in practice. Application developers can take incremental steps, such as improving offline support and making better use of on-device storage. Researchers can continue improving the algorithms, programming models, and user interfaces for local-first software. Entrepreneurs can develop foundational technologies such as CRDTs and peer-to-peer networking into mature products able to power the next generation of applications.

  • Motivation: collaboration and ownership

  • Seven ideals for local-first software

    • No spinners: your work at your fingertips
    • Your work is not trapped on one device
    • The network is optional
    • Seamless collaboration with your colleagues
    • The Long Now
    • Security and privacy by default
    • You retain ultimate ownership and control
  • Existing data storage and sharing models

    • How application architecture affects user experience
      • Files and email attachments
      • Web apps: Google Docs, Trello, Figma
      • Dropbox, Google Drive, Box, OneDrive, etc.
      • Git and GitHub
    • Developer infrastructure for building apps
      • Web app (thin client)
      • Mobile app with local storage (thick client)
      • Backend-as-a-Service: Firebase, CloudKit, Realm
      • CouchDB
  • Towards a better future

    • CRDTs as a foundational technology
    • Ink & Switch prototypes
      • Trello clone
      • Collaborative drawing
      • Media canvas
      • Findings
    • How you can help
      • For distributed systems and programming languages researchers
      • For Human-Computer Interaction (HCI) researchers
      • For practitioners
      • Call for startups
  • Conclusions

“InflationAdjuster”, Branwen 2019

Inflation.hs: “InflationAdjuster”⁠, Gwern Branwen (2019-03-27; ; backlinks; similar):

Experimental Pandoc module for implementing automatic inflation adjustment of nominal date-stamped dollar or Bitcoin amounts to provide real prices; Bitcoin’s exchange rate has moved by multiple orders of magnitude over its early years (rendering nominal amounts deeply unintuitive), and this is particularly critical in any economics or technology discussion where a nominal price from 1950 corresponds to a real 2019 price 11× higher!

Years/​dates are specified in a variant of my interwiki link syntax; for example: $50 or [₿0.5]​(₿2017-01-01), giving link adjustments which compile to something like <span class="inflationAdjusted" data-originalYear="2017-01-01" data-originalAmount="50.50" data-currentYear="2019" data-currentAmount="50,500">₿50.50<span class="math inline"><sub>2017</sub><sup>$50,500</sup></span></span>.

Dollar amounts use year, and Bitcoins use full dates, as the greater temporal resolution is necessary. Inflation rates/​exchange rates are specified as constants and need to be manually updated every once in a while; if out of date, the last available rate is carried forward for future adjustments.
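[The underlying arithmetic is just rescaling by the ratio of price indexes; a minimal Python sketch of the idea (the module itself is Haskell, and these CPI constants are illustrative stand-ins):]

```python
# Nominal -> real adjustment by price-index ratio. Index values are
# illustrative placeholders for the constants the module hardcodes.
CPI = {1950: 24.1, 2019: 255.7}

def adjust(amount: float, year: int, target_year: int = 2019) -> float:
    """Rescale a nominal amount into target_year real dollars; for a
    missing year, carry the last available index forward (as the module does)."""
    def index(y: int) -> float:
        return CPI[y] if y in CPI else CPI[max(k for k in CPI if k < y)]
    return amount * index(target_year) / index(year)

print(round(adjust(50, 1950)))  # -> 530: $50 in 1950 ≈ $530 in 2019 dollars (~11×)
```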

“Fancy Euclid’s Elements in TeX”, Slyusarev 2019

“Fancy Euclid’s Elements in TeX”⁠, Sergey Slyusarev (2019-03-19; backlinks; similar):

The most obvious option—to draw all the illustrations in Illustrator and compose the whole thing in InDesign—was promptly rejected. Geometrical constructions are not exactly the easiest thing to do in Illustrator, and no obvious way to automatically connect the main image to miniatures came to my mind. As for InDesign, although it’s very good at dealing with such visually rich layouts, it promised to scare the hell out of me with its overcrowded “Links” panel. So, without thinking twice, I decided to use other tools that I was familiar with—MetaPost, which made it relatively easy to deal with geometry, and LaTeX, which I knew could do the job. Due to some problems with MetaPost libs for LaTeX, I replaced the latter with ConTeXt, which enjoys an out-of-the-box merry relationship with MetaPost.

Converting a Byrne Euclid diagram to ConTeXt vector graphics

… There are also initials and vignettes in the original edition. On one hand, they were reasonably easy to recreate (at least, it wouldn’t take a lot of thought to do this), but I decided to go with a more interesting (albeit hopeless) option—automatically generating the initials and vignettes with a random ornament. Not only is it fun, but also, the Russian translation would require adapting the style of the original initials to the Cyrillic script, which was not something I’d prefer to do. So, long story short, when you compile the book, a list of initial letters is written to the disk, and a separate MetaPost script can process it (very slowly) to produce the initials and vignettes. No two of them have the exact same ornament.

“A to Z of Modern Living: Future-proof Design at Furniture Manufacturer Vitsœ's Headquarters in Leamington Spa”, House 2019

“A to Z of Modern Living: future-proof design at furniture manufacturer Vitsœ's headquarters in Leamington Spa”⁠, The Modern House (2019-01-06; backlinks; similar):

Vitsœ was founded in 1959 to manufacture the designs of Dieter Rams, of Braun’s golden years’ fame, a luminary designer who’s championed functional, considered design for well over 60 years. The company is best known for its production of Rams’ 606 Universal Shelving System, a do-it-all, have-forever modular system that can take the form of a few shelves or host an entire inventory of a university library. “I don’t regard this as a piece of architecture. I regard it as a way of thinking”, says Mark Adams, Vitsœ’s managing director, as he shows us around the firm’s Leamington Spa headquarters, which the company moved into in late 2017. “We developed the design with academics for years before building anything”, he says, explaining that the plan was essentially finished before it was handed to architects only at the delivery stage.

…At their new headquarters, Mark is fastidiously explaining elements of the building’s construction and evidence of those decades of work becomes apparent. With restrained enthusiasm he reels off details about the beech laminate veneer used for the building’s frame that he found in a German factory six years ago; about not building to conventional sustainable building standards, which he calls “box ticking exercises”; and he later gently explains how buildings are designed the wrong way around when it comes to thermal insulation. “Ours is designed a bit like if it had a Gore-Tex jacket on: it can release moisture, but it stays insulated.” This, Mark says, is better for people’s wellbeing: “Being hotter in the summer and cooler in the winter is better for your immune system.” That expenditure of time and consideration has resulted in a building in which not a single artificial light needs to be turned on during the day—the building’s party trick, if indeed it has one. Inside, daylight is utilised and amplified, pouring in through the overhead skylights in the sawtooth roof, illuminating the beech frame in splendid fashion.

The building, which amorphously combines manufacturing and office space, along with apartments for internationally-visiting staff, and a restaurant-quality canteen, is truly a mixed-use space. Looking down to the far end, it’s not uncommon for a member of Motionhouse contemporary dance troupe to launch into view above the workstations. “I think it’s completely logical that arts and commerce should be totally interwoven”, proclaims Mark.

…Mid-way into lunch, Mark interjects, inviting us to see how many phones we can spot. We look around and see no vacant faces staring at screens, but rather groups of people chatting and eating at communal tables, while outside a game of pétanque gets underway.

“Making of Byrne’s Euclid”, Rougeux 2018

“Making of Byrne’s Euclid”⁠, Nicholas Rougeux (2018-12-16; backlinks; similar):

Creating a faithful online reproduction of a book considered one of the most beautiful and unusual publications ever published is a daunting task. Byrne’s Euclid is my tribute to Oliver Byrne’s most celebrated publication from 1847 that illustrated the geometric principles established in Euclid’s original Elements from 300 BC.

In 1847, Irish mathematics professor Oliver Byrne worked closely with publisher William Pickering in London to publish his unique edition titled The First Six Books of the Elements of Euclid in which Coloured Diagrams and Symbols are Used Instead of Letters for the Greater Ease of Learners—or more simply, Byrne’s Euclid. Byrne’s edition was one of the first multicolor printed books and is known for its unique take on Euclid’s original work using colorful illustrations rather than letters when referring to diagrams. The precise use of colors and diagrams meant that the book was very challenging and expensive to reproduce. Little is known about why Byrne only designed 6 of the 13 books, but it could have been due to the time and cost involved…I knew of other projects like Sergey Slyusarev’s ConTeXt rendition and Kronecker Wallis’ modern redesign but I hadn’t seen anyone reproduce the 1847 edition online in its entirety and with a design true to the original. This was my goal and I knew it was going to be a fun challenge.

Diagrams from Book 1

[Detailed discussion of how to use Adobe Illustrator to redraw the modernist art-like primary-color diagrams from Byrne in scalable vector graphics (SVG) for use in interactive HTML pages, creation of a custom drop-caps/​initials font to replicate Byrne, his (questionable) efforts to use the ‘long s’ for greater authenticity, rendering the math using MathJax, and creating posters demonstrating all diagrams from the project for offline viewing.]

“How AI Training Scales”, McCandlish et al 2018

“How AI Training Scales”⁠, Sam McCandlish, Jared Kaplan, Dario Amodei (2018-12-14; ⁠, ; backlinks; similar):

We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized.

In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However, the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST⁠, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.

The gradient noise scale (appropriately averaged over training) explains the vast majority (R² = 80%) of the variation in critical batch size over a range of tasks spanning six orders of magnitude. Batch sizes are measured in either number of images, tokens (for language models), or observations (for games).

…We have found that by measuring the gradient noise scale, a simple statistic that quantifies the signal-to-noise ratio of the network gradients, we can approximately predict the maximum useful batch size. Heuristically, the noise scale measures the variation in the data as seen by the model (at a given stage in training). When the noise scale is small, looking at a lot of data in parallel quickly becomes redundant, whereas when it is large, we can still learn a lot from huge batches of data…We’ve found it helpful to visualize the results of these experiments in terms of a tradeoff between wall time for training and total bulk compute that we use to do the training (proportional to dollar cost). At very small batch sizes, doubling the batch allows us to train in half the time without using extra compute (we run twice as many chips for half as long). At very large batch sizes, more parallelization doesn’t lead to faster training. There is a “bend” in the curve in the middle, and the gradient noise scale predicts where that bend occurs.
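[A sketch of the “simple” noise-scale statistic, B ≈ tr(Σ)⁄|G|², estimated from per-example gradients; the paper’s practical estimator adds bias corrections that this NumPy stand-in omits:]

```python
# Gradient noise scale B_simple = tr(Σ) / |G|²: per-example gradient
# variance relative to the squared mean gradient. The paper's practical
# estimator includes corrections omitted in this sketch.
import numpy as np

def noise_scale(per_example_grads: np.ndarray) -> float:
    """per_example_grads: (n_examples, n_params) array."""
    g_mean = per_example_grads.mean(axis=0)                          # estimate of G
    tr_cov = ((per_example_grads - g_mean) ** 2).sum(axis=1).mean()  # estimate of tr(Σ)
    return tr_cov / (g_mean ** 2).sum()

rng = np.random.default_rng(0)
grads = rng.normal(loc=0.05, scale=1.0, size=(4096, 1000))  # mostly-noise gradients
print(noise_scale(grads))  # large B -> large batches remain useful
```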

Increasing parallelism makes it possible to train more complex models in a reasonable amount of time. We find that a Pareto frontier chart is the most intuitive way to visualize comparisons between algorithms and scales.

…more powerful models have a higher gradient noise scale, but only because they achieve a lower loss. Thus, there’s some evidence that the increasing noise scale over training isn’t just an artifact of convergence, but occurs because the model gets better. If this is true, then we expect future, more powerful models to have higher noise scale and therefore be more parallelizable. Second, tasks that are subjectively more difficult are also more amenable to parallelization…we have evidence that more difficult tasks and more powerful models on the same task will allow for more radical data-parallelism than we have seen to date, providing a key driver for the continued fast exponential growth in training compute.

“Open Questions”, Branwen 2018

Questions: “Open Questions”⁠, Gwern Branwen (2018-10-17; ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

Some anomalies/​questions which are not necessarily important, but do puzzle me or where I find existing explanations to be unsatisfying.


A list of some questions which are not necessarily important, but do puzzle me or where I find existing ‘answers’ to be unsatisfying, categorized by subject (along the lines of Patrick Collison’s list & Alex Guzey⁠; see also my list of project ideas).

“Structural Typography: Type As Both Language and Composition”, Heck 2018

2018-10-09-heck-structuraltypography.html: “Structural Typography: Type as both language and composition”⁠, Bethany Heck (2018-10-09; similar):

Words matter (or so I’m told). Some of my favorite typographic pieces are the ones that use typography not only to deliver a message but to serve as the compositional foundation that a design centers around. Letterforms are just as valuable as graphic elements as they are representations of language, and asking type to serve multiple roles in a composition is a reliable way to elevate the quality of your work…I’ve pulled out a few of my favorite designs that use type in this way and grouped them into shared themes so we can analyze the range of techniques different designers have used to let typography guide their work. Let’s dive in!…

  • Type Informing Grid: Using one typographic element to influence other pieces of the design
  • Type as Representation: Rendering type as a manifestation of an object or ideal
  • Reinforcing Imagery: Type can extend the impact of imagery in a design
  • Large Type Does Not Mean Structural Type: Big type can be lazy type (Lastly, I wanted to show a few examples that aren’t good examples of type as structure…)

…There’s something freeing about starting a design with a commitment to only using type and words to communicate effectively. I hope this essay demystifies some of the thought processes that can go into improving how you handle type in a variety of situations and leaves you with a different perspective on the pieces discussed, as well as a new toolkit of process-starters for your design work going forward.

“Why Scatter Plots Suggest Causality, and What We Can Do about It”, Bergstrom & West 2018

“Why scatter plots suggest causality, and what we can do about it”⁠, Carl T. Bergstrom, Jevin D. West (2018-09-25; ; similar):

Scatter plots carry an implicit if subtle message about causality. Whether we look at functions of one variable in pure mathematics, plots of experimental measurements as a function of the experimental conditions, or scatter plots of predictor and response variables, the value plotted on the vertical axis is by convention assumed to be determined or influenced by the value on the horizontal axis. This is a problem for the public understanding of scientific results and perhaps also for professional scientists’ interpretations of scatter plots. To avoid suggesting a causal relationship between the x and y values in a scatter plot, we propose a new type of data visualization, the diamond plot. Diamond plots are essentially 45 degree rotations of ordinary scatter plots; by visually jarring the viewer they clearly indicate that she should not draw the usual distinction between independent/​predictor variable and dependent/​response variable. Instead, she should see the relationship as purely correlative.
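[A minimal matplotlib sketch of the idea (my rendering, not the authors’ code): rotate the point cloud 45° and drop the conventional axes so neither variable occupies the ‘causal’ horizontal position.]

```python
# Diamond plot sketch: a scatter plot rotated 45 degrees, removing the
# implicit x-causes-y reading. Illustrative rendering of the proposal.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.8, size=200)     # correlated toy data

theta = np.pi / 4                                 # rotate coordinates by 45°
u = x * np.cos(theta) - y * np.sin(theta)
v = x * np.sin(theta) + y * np.cos(theta)

fig, ax = plt.subplots()
ax.scatter(u, v, s=10, alpha=0.6)
ax.set_aspect("equal")                            # keep the diamond undistorted
ax.set_axis_off()                                 # no horizontal "predictor" axis
ax.annotate("x →", xy=(0.8, 0.9), xycoords="axes fraction")   # axes now run diagonally
ax.annotate("← y", xy=(0.05, 0.9), xycoords="axes fraction")
plt.show()
```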

“Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course”, Patterson 2018

2018-patterson.pdf: “Can behavioral tools improve online student outcomes? Experimental evidence from a massive open online course”⁠, Richard W. Patterson (2018-09-01; ⁠, ; backlinks):

  • I design 3 behaviorally motivated software tools for students in an online course.
  • Tools include (1) a commitment device⁠, (2) an alert, and (3) a distraction-blocking tool.
  • I test these tools in a randomized controlled trial in a massive open online course.
  • The commitment device increased both effort and performance in the course.
  • Neither the alert nor distraction-blocking tools led to different outcomes from control.

In order to address poor outcomes for online students, I leverage insights from behavioral economics to design 3 software tools including (1) a commitment device, (2) an alert tool, and (3) a distraction-blocking tool. I test the impact of these tools in a massive open online course (MOOC).

Relative to students in the control group, students in the commitment device treatment spend 24% more time working on the course, receive course grades that are 0.29 standard deviations higher, and are 40% more likely to complete the course. In contrast, outcomes for students in the alert and distraction-blocking treatments are statistically indistinguishable from the control.

[Keywords: education, self control, commitment devices, reminders]

“Raincloud Plots: a Multi-platform Tool for Robust Data Visualization”, Allen et al 2018

“Raincloud plots: a multi-platform tool for robust data visualization”⁠, Micah Allen, Davide Poggiali, Kirstie Whitaker, Tom R. Marshall, Rogier Kievit (2018-08-23; ; backlinks; similar):

Across scientific disciplines, there is a rapidly growing recognition of the need for more statistically robust, transparent approaches to data visualization. Complementary to this, many scientists have realized the need for plotting tools that accurately and transparently convey key aspects of statistical effects and raw data with minimal distortion.

Previously common approaches, such as plotting conditional mean or median barplots together with error-bars, have been criticized for distorting effect size⁠, hiding underlying patterns in the raw data, and obscuring the assumptions upon which the most commonly used statistical tests are based.

Here we describe a data visualization approach which overcomes these issues, providing maximal statistical information while preserving the desired ‘inference at a glance’ nature of barplots and other similar visualization devices. These “raincloud plots” [scatterplots + smoothed histograms⁠/​density plot + box plots] can visualize raw data, probability density, and key summary statistics such as median, mean, and relevant confidence intervals in an appealing and flexible format with minimal redundancy.

In this tutorial paper we provide basic demonstrations of the strength of raincloud plots and similar approaches, outline potential modifications for their optimal use, and provide open-source code for their streamlined implementation in R, Python and Matlab⁠. Readers can investigate the R and Python tutorials interactively in the browser using Binder by Project Jupyter⁠.

Figure 3: Example Raincloud plot. The raincloud plot combines an illustration of data distribution (the ‘cloud’), with jittered raw data (the ‘rain’). This can further be supplemented by adding box plots or other standard measures of central tendency and error.—See figure3.Rmd for code to generate this figure.

…To remedy these shortcomings, a variety of visualization approaches have been proposed, illustrated in Figure 2, below. One simple improvement is to overlay individual observations (datapoints) beside the standard bar-plot format, typically with some degree of randomized jitter to improve visibility (Figure 2A). Complementary to this approach, others have advocated for more statistically robust illustrations such as box plots (Tukey 1970), which display sample median alongside interquartile range. Dot plots can be used to combine a histogram-like display of distribution with individual data observations (Figure 2B). In many cases, particularly when parametric statistics are used, it is desirable to plot the distribution of observations. This can reveal valuable information about how eg. some condition may increase the skewness or change the overall shape of a distribution. In this case, the ‘violin plot’ (Figure 2C) which displays a probability density function of the data mirrored about the uninformative axis is often preferred (Hintze & Nelson 1998). With the advent of increasingly flexible and modular plotting tools such as ggplot2 (Wickham 2010; Wickham & Chang 2008), all of the aforementioned techniques can be combined in a complementary fashion…Indeed, this combined approach is typically desirable as each of these visualization techniques has various trade-offs.

…On the other hand, the interpretation of dot plots depends heavily on the choice of dot-bin and dot-size, and these plots can also become extremely difficult to read when there are many observations. The violin plot, in which the probability density function (PDF) of the observations is mirrored, combined with overlaid box plots, has recently become a popular alternative. This provides both an assessment of the data distribution and statistical inference at a glance (SIG) via overlaid box plots. However, there is nothing to be gained, statistically speaking, by mirroring the PDF in the violin plot, and therefore they are violating the philosophy of minimizing the “data-ink ratio” (Tufte 1983).

To overcome these issues, we propose the use of the ‘raincloud plot’ (Neuroconscience 2018), illustrated in Figure 3: The raincloud plot combines a wide range of visualization suggestions, and similar precursors have been used in various publications (eg. Ellison 1993, Figure 2.4; Wilson et al 2018). The plot attempts to address the aforementioned limitations in an intuitive, modular, and statistically robust format. In essence, raincloud plots combine a ‘split-half violin’ (an un-mirrored PDF plotted against the redundant data axis), raw jittered data points, and a standard visualization of central tendency (ie. mean or median) and error, such as a boxplot. As such the raincloud plot builds on code elements from multiple developers and scientific programming languages (Hintze & Nelson 1998; Patil 2018; Wickham & Chang 2008; Wilke 2017).
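[A compact matplotlib approximation of the three layers (half-violin ‘cloud’, jittered ‘rain’, boxplot); the paper itself provides polished R/Python/Matlab implementations:]

```python
# Raincloud sketch: half-violin ("cloud") + jittered raw data ("rain")
# + boxplot. Rough matplotlib approximation of the layered design.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.0, size=150)       # skewed toy sample

fig, ax = plt.subplots()

parts = ax.violinplot(data, positions=[0], vert=False, showextrema=False)
for body in parts["bodies"]:                           # clip violin to its top half
    verts = body.get_paths()[0].vertices
    verts[:, 1] = np.clip(verts[:, 1], 0, None)
    body.set_alpha(0.6)

rain_y = -0.15 + rng.uniform(-0.05, 0.05, size=data.size)
ax.scatter(data, rain_y, s=6, alpha=0.5)               # the "rain" beneath the cloud

ax.boxplot(data, positions=[-0.3], vert=False, widths=0.1, showfliers=False)
ax.set_yticks([]); ax.set_xlabel("value")
plt.show()
```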

“The Simple but Ingenious System Taiwan Uses to Crowdsource Its Laws: VTaiwan Is a Promising Experiment in Participatory Governance. But Politics Is Blocking It from Getting Greater Traction”, Horton 2018

“The simple but ingenious system Taiwan uses to crowdsource its laws: vTaiwan is a promising experiment in participatory governance. But politics is blocking it from getting greater traction”⁠, Chris Horton (2018-08-21; ⁠, ⁠, ; backlinks; similar):

[Paper: Small et al 2021] That was when a group of government officials and activists decided to take the question to a new online discussion platform called vTaiwan. Starting in early March 2016, about 450 citizens went to vtaiwan.tw, proposed solutions, and voted on them…Three years after its founding, vTaiwan hasn’t exactly taken Taiwanese politics by storm. It has been used to debate only a couple of dozen bills, and the government isn’t required to heed the outcomes of those debates (though it may be if a new law passes later this year). But the system has proved useful in finding consensus on deadlocked issues such as the alcohol sales law, and its methods are now being applied to a larger consultation platform, called Join, that’s being tried out in some local government settings.

…vTaiwan relies on a hodgepodge of open-source tools for soliciting proposals, sharing information, and holding polls, but one of the key parts is Pol.is⁠, created by Megill and a couple of friends in Seattle after the events of Occupy Wall Street and the Arab Spring in 2011. On Pol.is, a topic is put up for debate. Anyone who creates an account can post comments on the topic, and can also upvote or downvote other people’s comments.

That may sound much like any other online forum, but 2 things make Pol.is unusual. The first is that you cannot reply to comments. “If people can propose their ideas and comments but they cannot reply to each other, then it drastically reduces the motivation for trolls to troll”, Tang says. “The opposing sides had never had a chance to actually interact with each other’s ideas.”

The second is that it uses the upvotes and downvotes to generate a kind of map [using PCA⁠/​UMAP for dimensionality reduction and clustering] of all the participants in the debate, clustering together people who have voted similarly. Although there may be hundreds or thousands of separate comments, like-minded groups rapidly emerge in this voting map, showing where there are divides and where there is consensus. People then naturally try to draft comments that will win votes from both sides of a divide, gradually eliminating the gaps.
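[A sketch of that opinion-mapping step, assuming nothing fancier than PCA on the participant × comment vote matrix followed by k-means; illustrative only, not Pol.is’s actual pipeline:]

```python
# Pol.is-style opinion map (illustrative): rows = participants,
# columns = comments, entries = +1 agree / -1 disagree. PCA projects
# voters to 2D; clustering reveals the camps; comments approved by
# *every* camp are consensus candidates.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
camp = rng.integers(0, 2, size=200)                       # two synthetic camps
votes = np.repeat(np.where(camp[:, None] == 1, 1, -1), 20, axis=1)
votes[:, 18:] = 1                                         # 2 comments everyone likes
flip = rng.random(votes.shape) < 0.1                      # ~10% noisy votes
votes = np.where(flip, -votes, votes)

coords = PCA(n_components=2).fit_transform(votes)         # the "face of the crowd"
groups = KMeans(n_clusters=2, n_init=10).fit_predict(coords)

for j in range(votes.shape[1]):                           # find cross-camp consensus
    if all(votes[groups == g, j].mean() > 0.5 for g in (0, 1)):
        print(f"comment {j}: consensus across both groups")
```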

“The visualization is very, very helpful”, Tang says. “If you show people the face of the crowd, and if you take away the reply button, then people stop wasting time on the divisive statements.”

In one of the platform’s early successes, for example, the topic at issue was how to regulate the ride-hailing company Uber⁠, which had—as in many places around the world—run into fierce opposition from local taxi drivers. As new people joined the online debate, they were shown and asked to vote on comments that ranged from calls to ban Uber or subject it to strict regulation, to calls to let the market decide, to more general statements such as “I think that Uber is a business model that can create flexible jobs.”

Within a few days, the voting had coalesced to define 2 groups, one pro-Uber and one, about twice as large, anti-Uber. But then the magic happened: as the groups sought to attract more supporters, their members started posting comments on matters that everyone could agree were important, such as rider safety and liability insurance. Gradually, they refined them to garner more votes. The end result was a set of 7 comments that enjoyed almost universal approval, containing such recommendations as “The government should set up a fair regulatory regime”, “Private passenger vehicles should be registered”, and “It should be permissible for a for-hire driver to join multiple fleets and platforms.” The divide between pro-Uber and anti-Uber camps had been replaced by consensus on how to create a level playing field for Uber and the taxi firms, protect consumers, and create more competition. Tang herself took those suggestions into face-to-face talks with Uber, the taxi drivers, and experts, which led the government to adopt new regulations along the lines vTaiwan had produced.

Jason Hsu, a former activist, and now an opposition legislator, helped bring the vTaiwan platform into being. He says its big flaw is that the government is not required to heed the discussions taking place there. vTaiwan’s website boasts that as of August 2018, it had been used in 26 cases, with 80% resulting in “decisive government action.” As well as inspiring regulations for Uber and for online alcohol sales, it has led to an act that creates a “fintech sandbox”, a space for small-scale technological experiments within Taiwan’s otherwise tightly regulated financial system.

“It’s all solving the same problem: essentially saying, ‘What if we’re talking about things that are emergent, [for which] there are only a handful of early adopters?’” Tang says. “That’s the basic problem we were solving at the very beginning with vTaiwan.”

“SMPY Bibliography”, Branwen 2018

SMPY: “SMPY Bibliography”⁠, Gwern Branwen (2018-07-28; ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

An annotated fulltext bibliography of publications on the Study of Mathematically Precocious Youth (SMPY), a longitudinal study of high-IQ youth.

SMPY (Study of Mathematically Precocious Youth) is a long-running longitudinal survey of extremely mathematically-talented or intelligent youth, which has been following high-IQ cohorts since the 1970s. It has provided the largest and most concrete findings about the correlates and predictive power of screening extremely intelligent children, and revolutionized gifted & talented educational practices.

Because it has been running for over 40 years, SMPY-related publications are difficult to find; many early papers were published only in long-out-of-print books and are not available in any other way. Others are digitized and more accessible, but one must already know they exist. Between these barriers, SMPY information is less widely available & used than it should be given its importance.

To fix this, I have been gradually going through all SMPY citations and making fulltext copies available online with occasional commentary.

“Wiktionary: et Alii”, Wiktionary 2018

“Wiktionary: et alii⁠, Wiktionary (2018-06-18; backlinks; similar):

Etymology: From Latin et (“and”) + alii (“others”)

Phrase: et alii

  1. And others; used of men or boys, or groups of mixed gender; masculine plural

Usage notes: In some academic contexts, it may be appropriate to use the specific Latin form that would be used in Latin text, selecting the appropriate grammatical case. The abbreviation “et al” finesses the need for such fastidiousness.

“Exquisite Rot: Spalted Wood and the Lost Art of Intarsia”, Elkind 2018

“Exquisite Rot: Spalted Wood and the Lost Art of Intarsia”⁠, Daniel Elkind (2018-05-16; ⁠, ; similar):

The technique of intarsia—the fitting together of pieces of intricately cut wood to make often complex images—has produced some of the most awe-inspiring pieces of Renaissance craftsmanship. Daniel Elkind explores the history of this masterful art, and how an added dash of colour arose from a most unlikely source: lumber ridden with fungus…painting in wood is in many ways more complicated than painting on wood. Rather than fabricating objects from a single source, the art of intarsia is the art of mosaic, of picking the right tone, of sourcing only properly seasoned lumber from mature trees and adapting materials intended for one context to another. Painting obscures the origins of a given material, whereas intarsia retains the original character of the wood grain—whose knots and whorls are as individual as the islands and deltas of friction ridges that constitute the topography of a fingerprint—while forming a new image. From a distance, the whole appears greater than the sum of its parts; up close, one can appreciate the heterogeneity of the components…

Inspired by the New Testament and uninhibited by Mosaic proscription, craftsmen in the city of Siena began to introduce flora and fauna into their compositions in the 14th century. Figures and faces became common by the late 15th century and, by the early 16th century, intarsiatori in Florence were making use of a wide variety of dyes in addition to natural hardwoods to mimic the full spectrum from the lightest (spindlewood) to medium (walnut) and dark (bog oak)—with the tantalizing exception of an aquamarine color somewhere between green and blue which required treating wood with “copper acetate (verdigris) and copper sulfate (vitriol).”

…Furnishings that featured slivers of Grünfäule or “green oak” were especially prized by master cabinetmakers like Bartholomew Weisshaupt and coveted by the elite of the Holy Roman Empire. Breaking open rotting hardwood logs to reveal delicate veins of turquoise and aquamarine, craftsmen discovered that the green in green oak was the result of colonization by the green elf-cup fungus, Chlorociboria aeruginascens, whose tiny teal fruiting bodies grow on felled, barkless conifers and hardwoods like oak and beech across much of Europe, Asia, and North America. Fungal rot usually devalues wood, but green oak happened to fill a lucrative niche in a burgeoning luxury trade, and that made it, for a time at least, as precious as some rare metals. During the reign of Charles V, when the Hapsburgs ruled both Spain and Germany, a lively trade in these intarsia pieces sprang up between the two countries.

“Good Sound, Good Research: How Audio Quality Influences Perceptions of the Research and Researcher”, Newman & Schwarz 2018

2018-newman.pdf: “Good Sound, Good Research: How Audio Quality Influences Perceptions of the Research and Researcher”⁠, Eryn J. Newman, Norbert Schwarz (2018-03-20; similar):

Increasingly, scientific communications are recorded and made available online. While researchers carefully draft the words they use, the quality of the recording is at the mercy of technical staff. Does it make a difference?

We presented identical conference talks (Experiment 1) [n = 97 / k = 2] and radio interviews from NPR’s Science Friday (Experiment 2) [n = 99 / k = 2] in high or low audio quality and asked people to evaluate the researcher and the research they presented.

Despite identical content, people evaluated the research and researcher less favorably when the audio quality was low, suggesting that audio quality can influence impressions of science.

[Keywords: fluency, science communication, audio quality, truth]

Figure 1: The top panel displays mean ratings of the talk and researcher by audio quality (High vs. Low). The lower panel displays these same means split by video. This lower panel is a between-subject comparison; participants either saw the High Quality Audio Physics Talk + Low Quality Audio Engineering Talk or the Low Quality Audio Physics Talk + High Quality Audio Engineering Talk. Note that error bars represent 1 SE.
Figure 2: The top panel displays mean ratings of the research and researcher by audio quality (High Quality vs. Low Quality). The lower panel displays these same means split by interview. This lower panel is a between-subjects comparison; participants either saw the High Quality Audio Physics Interview + Low Quality Audio Genetics Interview or the Low Quality Audio Physics Interview + High Quality Audio Genetics Interview. Note that error bars represent 1 SE.

“Laws of Tech: Commoditize Your Complement”, Branwen 2018

Complement: “Laws of Tech: Commoditize Your Complement”⁠, Gwern Branwen (2018-03-17; ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

A classic pattern in technology economics, identified by Joel Spolsky, is layers of the stack attempting to become monopolies while turning other layers into perfectly-competitive markets which are commoditized, in order to harvest most of the consumer surplus; discussion and examples.

Joel Spolsky in 2002 identified a major pattern in technology business & economics: the pattern of “commoditizing your complement”, an alternative to vertical integration, where companies seek to secure a chokepoint or quasi-monopoly in products composed of many necessary & sufficient layers by dominating one layer while fostering so much competition in another layer above or below its layer that no competing monopolist can emerge, prices are driven down to marginal costs elsewhere in the stack, total price drops & increases demand, and the majority of the consumer surplus of the final product can be diverted to the quasi-monopolist. No matter how valuable the original may be and how much one could charge for it, it can be more valuable to make it free if it increases profits elsewhere. A classic example is the commodification of PC hardware by the Microsoft OS monopoly, to the detriment of IBM & benefit of MS.

This pattern explains many otherwise odd or apparently self-sabotaging ventures by large tech companies into apparently irrelevant fields, such as the high rate of releasing open-source contributions by many Internet companies or the intrusion of advertising companies into smartphone manufacturing & web browser development & statistical software & fiber-optic networks & municipal WiFi & radio spectrum auctions & DNS (Google): they are pre-emptive attempts to commodify another company elsewhere in the stack, or defenses against it being done to them.

“Community Interaction and Conflict on the Web”, Kumar et al 2018

“Community Interaction and Conflict on the Web”⁠, Srijan Kumar, William L. Hamilton, Jure Leskovec, Dan Jurafsky (2018-03-09; ⁠, ⁠, ⁠, ; similar):

Users organize themselves into communities on web platforms. These communities can interact with one another, often leading to conflicts and toxic interactions. However, little is known about the mechanisms of interactions between communities and how they impact users.

Here we study inter-community interactions across 36,000 communities on Reddit, examining cases where users of one community are mobilized by negative sentiment to comment in another community. We show that such conflicts tend to be initiated by a handful of communities—less than 1% of communities start 74% of conflicts. While conflicts tend to be initiated by highly active community members, they are carried out by statistically-significantly less active members. We find that conflicts are marked by formation of echo chambers, where users primarily talk to other users from their own community. In the long-term, conflicts have adverse effects and reduce the overall activity of users in the targeted communities.

Our analysis of user interactions also suggests strategies for mitigating the negative impact of conflicts—such as increasing direct engagement between attackers and defenders. Further, we accurately predict whether a conflict will occur by creating a novel LSTM model that combines graph embeddings, user, community, and text features. This model can be used to create early-warning systems for community moderators to prevent conflicts. Altogether, this work presents a data-driven view of community interactions and conflict, and paves the way towards healthier online communities.
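As a rough illustration of the kind of predictive model described (this is not the authors’ published architecture; all layer sizes and feature names below are hypothetical), the text of early comments can be run through an LSTM and its final hidden state concatenated with precomputed graph/user/community embedding features before a binary conflict classifier:

```python
# Hypothetical sketch of an LSTM conflict predictor combining comment text
# with side features (graph/user/community embeddings), loosely following
# the paper's description; dimensions and names are illustrative only.
import torch
import torch.nn as nn

class ConflictPredictor(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, side_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + side_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, token_ids, side_features):
        _, (h_n, _) = self.lstm(self.embed(token_ids))    # final hidden state
        combined = torch.cat([h_n[-1], side_features], dim=1)
        return torch.sigmoid(self.classifier(combined)).squeeze(1)

model = ConflictPredictor(vocab_size=50_000)
p_conflict = model(torch.randint(0, 50_000, (8, 120)),   # 8 comment sequences
                   torch.randn(8, 64))                    # 8 side-feature vectors
```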

“Rams”, Hustwit 2018

“Rams”⁠, Gary Hustwit (2018; backlinks; similar):

Rams is a documentary portrait of Dieter Rams, one of the most influential designers alive, and a rumination on consumerism, sustainability, and the future of design…In 2008, Gary interviewed Dieter for his documentary Objectified, but was only able to share a small piece of his story in that film. Dieter, who is now 86, is a very private person; however, Gary was granted unprecedented access to create the first feature-length documentary about his life and work.

Rams includes in-depth conversations with Dieter, and deep dives into his philosophy, his process, and his inspirations. But one of the most interesting parts of Dieter’s story is that he now looks back on his career with some regret. “If I had to do it over again, I would not want to be a designer”, he’s said. “There are too many unnecessary products in this world.” Dieter has long been an advocate for the ideas of environmental consciousness and long-lasting products. He’s dismayed by today’s unsustainable world of over-consumption, where “design” has been reduced to a meaningless marketing buzzword.

Rams is a design documentary, but it’s also a rumination on consumerism, materialism, and sustainability. Dieter’s philosophy is about more than just design; it’s about a way to live. It’s about getting rid of distractions and visual clutter, and just living with what you need. The film features original music by pioneering musician and producer Brian Eno.

“Computer Latency: 1977–2017”, Luu 2017

“Computer latency: 1977–2017”⁠, Dan Luu (2017-12; ⁠, ; backlinks; similar):

I’ve had this nagging feeling that the computers I use today feel slower than the computers I used as a kid. As a rule, I don’t trust this kind of feeling because human perception has been shown to be unreliable in empirical studies, so I carried around a high-speed camera and measured the response latency of devices I’ve run into in the past few months. These are tests of the latency between a keypress and the display of a character in a terminal (see appendix for more details)…If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty-to-forty-year-old machines.

…Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the iPad Pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it’s a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the Apple 2 keyboard, we saw that using a relatively powerful and expensive general-purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would be both simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

“On Having Enough Socks”, Branwen 2017

Socks: “On Having Enough Socks”⁠, Gwern Branwen (2017-11-22; ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

Personal experience and surveys on running out of socks; discussion of socks as small example of human procrastination and irrationality, caused by lack of explicit deliberative thought where no natural triggers or habits exist.

After running out of socks one day, I reflected on how ordinary tasks get neglected. Anecdotally and in 3 online surveys, people report often not having enough socks, a problem which correlates with rarity of sock purchases and demographic variables, consistent with a neglect/​procrastination interpretation: because there is no specific time or triggering factor to replenish a shrinking sock stockpile, it is easy to run out.

This reminds me of akrasia on minor tasks, ‘yak shaving’, and the nature of disaster in complex systems: lack of hard rules lets errors accumulate, without any ‘global’ understanding of the drift into disaster (or at least inefficiency). Humans on a smaller scale also ‘drift’ when they engage in System I reactive thinking & action for too long, resulting in cognitive biases⁠. An example of drift is the generalized human failure to explore/​experiment adequately, resulting in overly greedy exploitative behavior of the current local optimum. Grocery shopping provides a case study: despite large gains, most people do not explore, perhaps because there is no established routine or practice involving experimentation. Fixes for these things can be seen as ensuring that System II deliberative cognition is periodically invoked to review things at a global level, such as developing a habit of maximum exploration at first purchase of a food product, or annually reviewing possessions to note problems like a lack of socks.

While socks may be small things, they may reflect big things.

“Threading Is Sticky: How Threaded Conversations Promote Comment System User Retention”, Budak et al 2017

2017-budak.pdf: “Threading is Sticky: How Threaded Conversations Promote Comment System User Retention”⁠, Ceren Budak, R. Kelly Garrett, Paul Resnick, Julia Kamin (2017-11-01; ; similar):

Figure 2: Article-level repeated participation over time, all sections.

The Guardian—the fifth most widely read online newspaper in the world as of 2014—changed conversations on its commenting platform by altering its design from non-threaded to single-level threaded in 2012.

We studied this naturally occurring experiment to investigate the impact of conversation threading on user retention as mediated by several potential changes in conversation structure and style.

Our analysis shows that the design change made new users statistically-significantly more likely to comment a second time, and that this increased stickiness is due in part to a higher fraction of comments receiving responses after the design change. In mediation analysis, other anticipated mechanisms such as reciprocal exchanges and comment civility did not help to explain users’ decision to return to the commenting system; indeed, civility did not increase after the design change and reciprocity declined.

These analyses show that even simple design choices can have a substantial impact on news forums’ stickiness. Further, they suggest that this influence is more powerfully shaped by affordances—the new system made responding easier—than by changes in users’ attention to social norms of reciprocity or civility. This has an array of implications for designers.

[Keywords: commenting systems, interrupted time series design⁠, mediation analysis, design principles, stickiness]
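For readers unfamiliar with the interrupted time series design named in the keywords, a minimal segmented-regression sketch on entirely synthetic weekly data might look like the following (the study’s actual analysis also involved mediation models):

```python
# Minimal interrupted-time-series sketch: regress a retention outcome on
# time, a post-change indicator, and a post-change slope term. Data are
# synthetic; variable names are illustrative, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = np.arange(104)                       # two years of weekly data
post = (weeks >= 52).astype(float)           # threading launched at week 52
retention = 0.20 + 0.0002 * weeks + 0.03 * post + rng.normal(0, 0.01, 104)

X = sm.add_constant(np.column_stack([weeks, post, (weeks - 52) * post]))
fit = sm.OLS(retention, X).fit()
print(fit.params)   # [baseline, pre-trend, level shift at launch, slope change]
```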

“Keyboard Latency”, Luu 2017

“Keyboard latency”⁠, Dan Luu (2017-10-16; ⁠, ; backlinks; similar):

[Dan Luu continues his investigation of why computers feel so laggy and have such high latency compared to old computers (total computer latency⁠, terminal latency⁠, web bloat⁠, cf. Pavel Fatin’s “Typing with pleasure” text editor analysis).

He measures 21 keyboard latencies using a logic analyzer, finding a range of 15–60ms (!), representing a waste of a large fraction of the available ~100–200ms latency budget before a user notices and is irritated (“the median keyboard today adds as much latency as the entire end-to-end pipeline of a fast machine from the 70s.”). The latency estimates are surprising, and do not correlate with advertised traits. They simply have to be measured empirically.]

We can see that, even with the limited set of keyboards tested, there can be as much as a 45ms difference in latency between keyboards. Moreover, a modern computer with one of the slower keyboards attached can’t possibly be as responsive as a quick machine from the 70s or 80s because the keyboard alone is slower than the entire response pipeline of some older computers. That establishes the fact that modern keyboards contribute to the latency bloat we’ve seen over the past forty years…Most keyboards add enough latency to make the user experience noticeably worse, and keyboards that advertise speed aren’t necessarily faster. The two gaming keyboards we measured weren’t faster than non-gaming keyboards, and the fastest keyboard measured was a minimalist keyboard from Apple that’s marketed more on design than speed.

“NIMA: Neural Image Assessment”, Talebi & Milanfar 2017

“NIMA: Neural Image Assessment”⁠, Hossein Talebi, Peyman Milanfar (2017-09-15; ; backlinks; similar):

Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications such as evaluating image capture pipelines, storage techniques and sharing media. Despite the subjective nature of this problem, most existing methods only predict the mean opinion score provided by datasets such as AVA [1] and TID2013 [2].

Our approach differs from others in that we predict the distribution of human opinion scores using a convolutional neural network. Our architecture also has the advantage of being substantially simpler than other methods with comparable performance. Our proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks.

Our resulting network can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing/​enhancement algorithms in a photographic pipeline. All this is done without need for a “golden” reference image, consequently allowing for single-image, semantic-aware and perceptually-aware, no-reference quality assessment.
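Concretely, the network outputs a distribution over the 10 discrete rating buckets used by datasets like AVA rather than a single number; a quality score and its uncertainty then fall out as the distribution’s mean and standard deviation, and training minimizes an earth-mover’s-distance-style loss. A small numpy sketch (the example distribution is invented):

```python
# Turning a predicted 10-bucket score distribution into a scalar quality
# score, plus a normalized earth mover's distance between two score
# distributions (NIMA trains with a squared-EMD-style loss).
import numpy as np

def mean_and_std(p, scores=np.arange(1, 11)):
    mu = (p * scores).sum()
    return mu, np.sqrt((p * (scores - mu) ** 2).sum())

def emd(p, q, r=2):
    cdf_diff = np.cumsum(p) - np.cumsum(q)
    return np.mean(np.abs(cdf_diff) ** r) ** (1 / r)

p_pred = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.05, 0.02])
print(mean_and_std(p_pred))   # ≈ (5.95, 1.73)
```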

“Modernity, Method and Minimal Means: Typewriters, Typing Manuals and Document Design”, Walker 2017

2017-walker.pdf: “Modernity, Method and Minimal Means: Typewriters, Typing Manuals and Document Design”⁠, Sue Walker (2017-06-06; backlinks; similar):

This essay is about the contribution that typing manuals and typists have made to the history of graphic language and communication design, and that typewriter composition has played in typographic education and design practice, particularly in the 1960s and 1970s. The limited technical capabilities of typewriters are discussed in relation to the rules in typing manuals for articulating and organizing the structure of text. Such manuals were used to train typists who went on to produce documents of considerable complexity within what typographers would consider to be minimal means in terms of flexibility in the use of letterforms and space.

…Typing manuals and the relentless repetition of typing exercises in class formed the basis of this training, and generations of office workers acquired considerable knowledge about the visual organization of often complex documents. In the context of the history of typography, typewriter operators (typists, as they became known) were designing within ‘minimal means’. They worked with a restricted range of letterforms and character sets, and with limited flexibility for manipulating vertical and horizontal space. The documents they made—in their material form—were true to the limitations of the machines that made them. Designers and educators also exploited the characteristics (and limitations) of typewriters in their work; in the 1950s and 1960s especially, typewriters were regarded by designers as one of the tools of the trade, though perhaps, as Ken Garland has noted, ‘a design tool that is not usually regarded as such’.2 Design educators such as Norman Potter and Michael Twyman used the limitations of typewriter composition to good effect in teaching typography. And because typing manuals were concerned with the kind of document that Herbert Spencer, in 1952, called ‘utility’ printing (‘technical catalogues, handbooks, timetables, stationery and forms, the primary purpose of which is to inform’3), the typewriter as the means of production for such documents has a place in the history of document design and, by inference, of information design.4 ‘Typewriter composition’ was prevalent in the printing trade in the 1960s and 1970s and many typists who trained on mechanical typewriters went on to become ‘compositors’, working with electric machines such as the IBM 72, the IBM Executive, the Justowriter and later models of the Varityper.5 In this context typists assumed the role of compositor, applying rules acquired through typing training to typesetting in books

…Typewritten material, on the whole, was monochrome, but some document types typically required the use of a second colour to fulfil a particular function. Typing in colours other than black involved either the use of coloured carbon paper, special 2-colour or 3-colour attachments, or a bi-chrome or tri-chrome ribbon. Red, the preferred second colour, was recommended for emphasis and particular words in a text, and was referred to in Pitman’s Typewriter Manual: A practical guide to all classes of typewriting work in 1897 [1909 edition] as ‘variegated typewriting’.46 In the typing of plays, for example, underlining in red was prescribed to denote non-spoken elements, such as stage directions, as shown in Figure 6. However, as affirmed in Pitman’s Typewriter Manual,47 in recognition of the fact that it was time-consuming to do, typists were encouraged to do the red ruling with a pen or pencil—a pragmatic solution. Later typing manuals proposed that when a typewriter was fitted with a red-black bi-chrome ribbon, the non-speaking parts in a play should be typed in red (with no underlining)—an example of simplicity of operation changing conventional practice.

Figure 6: Detail from plate XIV showing use of red underscoring to denote non-spoken parts in a play. (Pitman 1897)

“What Makes a Good Image? Airbnb Demand Analytics Leveraging Interpretable Image Features”, Zhang et al 2017

“What Makes a Good Image? Airbnb Demand Analytics Leveraging Interpretable Image Features”⁠, Shunyuan Zhang, Dokyun Lee, Param Vir Singh, Kannan Srinivasan (2017-05-25; ⁠, ; similar):

[see also NIMA⁠, Murray & Gordo 2017⁠, Porzi et al 2015⁠/​Dubey et al 2016⁠/​Fu et al 2018⁠, CLIP prompts] We study how Airbnb property demand changed after the acquisition of verified images (taken by Airbnb’s photographers) and explore what makes a good image for an Airbnb property.

Using deep learning and difference-in-difference analyses on an Airbnb panel dataset spanning 7,423 properties over 16 months, we find that properties with verified images had 8.98% higher occupancy than properties without verified images (images taken by the host).

To explore what constitutes a good image for an Airbnb property, we quantify 12 human-interpretable image attributes that pertain to 3 artistic aspects—composition, color, and the figure-ground relationship—and we find systematic differences between the verified and unverified images. We also predict the relationship between each of the 12 attributes and property demand, and we find that most of the correlations are statistically-significant and in the theorized direction.

Our results provide actionable insights for both Airbnb photographers and amateur host photographers who wish to optimize their images. Our findings contribute to and bridge the literature on photography and marketing (eg. staging), which often either ignores the demand side (photography) or does not systematically characterize the images (marketing).

[Keywords: sharing economy, Airbnb, property demand, computer vision, deep learning, image feature extraction, content engineering]

…One of our key objectives is to determine what makes a good image for an Airbnb property. Our CNN model is highly accurate at predicting image quality, but the CNN-extracted features are uninterpretable. To provide better guidance for managers, we use the photography literature to identify 12 human-interpretable image attributes that are relevant to image quality in the real estate context. We theorize the relationship between each of the 12 image attributes and property demand. The 12 attributes fall under 3 key artistic aspects: composition, color, and the figure-ground relationship. Composition is the arrangement of visual elements in the photograph; ideally, the composition leads the viewer’s eyes to the center of focus (Freeman 2007). We capture composition with 4 attributes: diagonal dominance, the rule of thirds⁠, visual balance of color, and visual balance of intensity. Color can affect the viewer’s emotional arousal. The marketing literature has studied the impact of color on consumer behavior particularly in the context of web design, product packaging design, and advertisement design (Gorn et al 1997, Gorn et al 2004; Miller & Kahn 2005). We include 5 aspects related to color: warm hue, saturation, brightness, contrast of brightness⁠, and image clarity. The principle of the figure-ground relationship is one of the most basic laws of perception and is used extensively by expert photographers to plan their photographs. In visual art, the figure refers to the key region (ie. foreground), and the ground refers to the background; photographs in which the figure is inseparable from the ground do not retain the viewer’s attention. We include 3 attributes: the area difference, texture difference, and color difference between the figure and ground.

…Of the 12 image attributes, the visual balance of color is most strongly related to property demand, followed by image clarity and the contrast of brightness. The visual balance of color refers to color symmetry, which can be affected by both the property itself and the position from which the image is captured. Image clarity refers to the extent to which the image conveys visual information. The unverified low-quality images scored poorly on image clarity; the verified photos scored almost twice as high. Even without employing a professional photographer, hosts can improve image clarity through the effective use of lighting and access to a good camera. Finally, the contrast of brightness captures the difference in illumination between the brightest and dimmest points in the image; a low contrast of brightness indicates that illumination is relatively even across the image. The verified photos have a substantially lower contrast of brightness than unverified high-quality images. Interestingly, several hosts on the Airbnb community forums complained that the contrast of brightness is so low in the verified photos that they appear washed out, but we find the predicted negative relationship between the contrast of brightness and property demand. In other words, consumers seem to prefer the low contrast of brightness that appears in verified photos.

  1. Diagonal Dominance
  2. Rule of Thirds
  3. Visual Balance of Intensity
  4. Visual Balance of Color
  5. Warm Hue
  6. Saturation
  7. Brightness
  8. Contrast of Brightness
  9. Image Clarity
  10. Area Difference
  11. Color Difference
  12. Texture Difference
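To make a few of the color attributes above concrete, here is a rough numpy sketch of brightness, saturation, and contrast of brightness computed from an RGB image. These are standard HSV-style approximations; the paper’s exact operationalizations are not given in this abstract and may differ.

```python
# Rough approximations of 3 of the 12 attributes (brightness, saturation,
# contrast of brightness) for an H×W×3 RGB float image in [0, 1]. Not the
# paper's exact formulas.
import numpy as np

def color_attributes(img):
    mx, mn = img.max(axis=2), img.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0)
    return {
        "brightness": mx.mean(),                       # mean HSV "value"
        "saturation": saturation.mean(),
        # spread between the dimmest & brightest regions of the image:
        "contrast_of_brightness": np.percentile(mx, 99) - np.percentile(mx, 1),
    }

print(color_attributes(np.random.rand(64, 64, 3)))     # stand-in listing photo
```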

“Implementing Recommendations From Web Accessibility Guidelines: A Comparative Study of Nondisabled Users and Users With Visual Impairments”, Schmutz et al 2017

2017-schmutz.pdf: “Implementing Recommendations From Web Accessibility Guidelines: A Comparative Study of Nondisabled Users and Users With Visual Impairments”⁠, Sven Schmutz, Andreas Sonderegger, Juergen Sauer (2017-05-03; backlinks; similar):

[previously: Schmutz et al 2016] Objective: The present study examined whether implementing recommendations of Web accessibility guidelines would have different effects on nondisabled users than on users with visual impairments.

Background: The predominant approach for making Web sites accessible for users with disabilities is to apply accessibility guidelines. However, it has been hardly examined whether this approach has side effects for nondisabled users. A comparison of the effects on both user groups would contribute to a better understanding of possible advantages and drawbacks of applying accessibility guidelines.

Method: Participants from 2 matched samples, comprising 55 participants with visual impairments and 55 without impairments, took part in a synchronous remote testing of a Web site. Each participant was randomly assigned to one of 3 Web sites, which differed in the level of accessibility (very low, low, and high) according to recommendations of the well-established Web Content Accessibility Guidelines 2.0 (WCAG 2.0). Performance (ie. task completion rate and task completion time) and a range of subjective variables (ie. perceived usability, positive affect, negative affect, perceived aesthetics, perceived workload, and user experience) were measured.

Results: Higher conformance to Web accessibility guidelines resulted in increased performance and more positive user ratings (eg. perceived usability or aesthetics) for both user groups. There was no interaction between user group and accessibility level.

Conclusion: Higher conformance to WCAG 2.0 may result in benefits for nondisabled users and users with visual impairments alike.

Application: Practitioners may use the present findings as a basis for deciding whether and how best to implement accessibility.

[Keywords: Web accessibility, visual impairments, nondisabled users, WCAG 2.0]

“The Aesthetic-Usability Effect”, Moran 2017

“The Aesthetic-Usability Effect”⁠, Kate Moran (2017-01-29; ; backlinks; similar):

Users are more tolerant of minor usability issues when they find an interface visually appealing. This aesthetic-usability effect can mask UI problems and can prevent issue discovery during usability testing. Identify instances of the aesthetic-usability effect in your user research by watching what your users do, as well as listening to what they say.

It’s a familiar frustration to usability-test moderators: You watch a user struggle through a suboptimal UI, encountering many errors and obstacles. Then, when you ask the user to comment on her experience, all she can talk about is the site’s great color scheme:

During usability testing, one user encountered many issues while shopping on the FitBit site, ranging from minor annoyances in the interaction design to serious flaws in the navigation. She was able to complete her task, but with difficulty. However, in a post-task questionnaire, she rated the site very highly in ease of use. “It’s the colors they used”, she said. “Looks like the ocean, it’s calm. Very good photographs.” The positive emotional response caused by the aesthetic appeal of the site helped mask its usability issues.

Instances like this are often the result of the aesthetic-usability effect.

Definition: The aesthetic-usability effect refers to users’ tendency to perceive attractive products as more usable. People tend to believe that things that look better will work better—even if they aren’t actually more effective or efficient.

“On the Existence of Powerful Natural Languages”, Branwen 2016

Language: “On the Existence of Powerful Natural Languages”⁠, Gwern Branwen (2016-12-18; ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

A common dream in philosophy and politics and religion is the idea of languages superior to evolved demotics, whether Latin or Lojban, which grant speakers greater insight into reality and rationality, analogous to the well-known efficacy of mathematical sub-languages in solving problems. This dream fails because such languages gain power inherently from specialization.

Designed formal notations & distinct vocabularies are often employed in STEM fields, and these specialized languages are credited with greatly enhancing research & communication. Many philosophers and other thinkers have attempted to create more generally-applicable designed languages for use outside of specific technical fields to enhance human thinking, but the empirical track record is poor and no such designed language has demonstrated substantial improvements to human cognition such as resisting cognitive biases or logical fallacies. I suggest that the success of specialized languages in fields is inherently due to encoding large amounts of previously-discovered information specific to those fields, and this explains their inability to boost human cognition across a wide variety of domains.

“Visions of Algae in 18th-Century Botany”, Feigenbaum 2016

“Visions of Algae in 18th-Century Botany”⁠, Ryan Feigenbaum (2016-09-07; ⁠, ; similar):

Although not normally considered the most glamorous of Mother Nature’s offerings, algae has found itself at the heart of many a key moment in the last few hundred years of botanical science. Ryan Feigenbaum traces the surprising history of one particular species—Conferva fontinalis—from the vials of Joseph Priestley’s laboratory to its possible role as inspiration for Shelley’s Frankenstein.

“‘A Poster Has to Be Joyous’. The Energy and Enthusiasm of Willem Sandberg”, Martin 2016

“‘A poster has to be joyous’. The energy and enthusiasm of Willem Sandberg”⁠, Will Martin (2016-07-11; backlinks; similar):

Born in 1897, Sandberg studied art in Amsterdam before travelling around Europe where he met and learned from printers, artists and teachers, including Johannes Itten, Naum Gabo and Otto Neurath. Upon returning to Amsterdam he became involved with the Stedelijk Museum, initially as a designer and later as curator of modern art from 1937 to 1941. It is after this period that the Second World War became a defining factor in his life. I have, in previous drafts of this piece, tried to summarise his involvement in the conflict, but he did more than is possible to do justice to here. Suffice to say, many items in the Stedelijk collection, not to mention Rembrandt’s The Night Watch and the collection of Van Gogh’s heirs, probably owe their survival to his resistance efforts. Others, such as Simon Garfield, have written about his wartime achievements. I recommend this piece by Mafalda Spencer, my old tutor and daughter of Herbert Spencer, who was one of Sandberg’s pen pals. (Their correspondence, which Mafalda has inherited, is featured in this exhibition.)

After the war Sandberg was made director of the Stedelijk and oversaw hundreds of exhibitions during his 18 years in the role. Throughout this period he carried on designing the catalogues and posters that feature in this exhibition…Among Sandberg’s wartime experience was the period he spent on the run from the Nazis, from 1943 until the end of the war. While in hiding, Sandberg wanted to occupy himself and decided to create a series of small booklets, each ranging from 20 to 60 pages. It is in making these that he seems to have refined what would later be the style he used for the majority of his design work at the Stedelijk. The booklets, which he called experimenta typographica, were filled with illustrations of inspirational quotes, which Sandberg took from great thinkers and other designers…The posters don’t really establish any sense of a coherent identity in the way that a modern designer might be driven to do these days. There isn’t really any consistency in layout, the typefaces chosen to spell out the Stedelijk’s name vary widely and while the use of red in each poster is a constant, it’s not always the same shade. But they do fulfil the criteria for Stedelijk posters of the time that Sandberg himself drew up:

  1. a poster has to be joyous
  2. red has to be in every poster
  3. a poster has to provoke a closer look, otherwise it doesn’t endure
  4. with a respect for society, designer and director both are responsible for the street scene, a poster does not only have to revive the street, it also has to be human
  5. every poster has to be an artwork

“Blade Runner (Typeset In The Future)”, Addey 2016

“Blade Runner (Typeset In The Future)”⁠, Dave Addey (2016-06-19; ⁠, ; similar):

[Discussion with screenshots of the classic Ridley Scott SF movie Blade Runner, which employs typography to disconcert the viewer, with unexpected choices, random capitalization and small caps, corporate branding/​advertising, and the mashed-up creole multilingual landscape of noir cyberpunk LA (plus discussion of the buildings and sets, and details such as call costs being correctly inflation-adjusted).]

“Review: Belladonna of Sadness”, Elkins 2016

“Review: Belladonna of Sadness”⁠, Gabriella Elkins (2016-06-17; ⁠, ; backlinks; similar):

Summary: Medieval peasants Jean and Jeanne are idyllic newlyweds. Their happiness vanishes, however, when Jeanne is raped by the local lord in a legally sanctioned deflowering ritual. Afterwards, while the couple tries to resume their life together, Jeanne starts receiving visions from a demon. It comforts her in her sadness, but it also encourages her to act out against the lord. Jeanne resists at first, but as her fortunes continue to wane, she’s thrown further into the demon’s embrace. As time goes on, Jeanne is drawn into an experience that radically reconfigures her sense of herself, the world, and the course of history itself.

Review: An X-rated anime classic newly remastered for the screen, Belladonna of Sadness is one of animation’s premier psychedelic experiences, brought to North America essentially for the first time in 2016. Its history has been covered by us before, but here’s a quick refresher: Belladonna of Sadness is a legendarily low-budget, sexual, and psychedelic anime film from the 1970s. Poorly received at the time of its release, it accrued a cult audience over the next few decades. Recently, its reputation has been rehabilitated to the point where it’s considered an overlooked classic. Still, wider appreciation of the film was hampered by the lack of an English release and the poor quality of existing prints. That changed in 2014, when the high-end distribution company Cinelicious chose it as their first candidate for an in-house 4k restoration and re-release. This May, the completed film began screening in theaters across the United States and Canada, and will continue to do so until September. I attended one of these screenings at International House theater in Philadelphia. This was my first time seeing the film, and I left very much impressed by both its artistry and storytelling.

…Fair warning, though—it’s not an exaggeration that this film is touted as ultra-sexual. I’d say most of the film’s runtime is made up of sex scenes, some of them violent and disturbing. It literally opens with a rape. These scenes are appropriate to the story, and gorgeous in their artistry, but they are unpleasant. Otherwise, the sexual imagery is largely abstract. Flowers become vaginas, figures in cloaks become disembodied penises, and Jeanne’s rape is depicted as her being bisected from the groin upwards. Some psychedelic sequences also contain intense strobe lighting, so epileptics be warned. As for the visuals themselves, expect watercolors, morphing lineart, and little in terms of actual animation. There are no lush Kyoto Animation frame counts here. Much of the film’s motion consists of pans and zooms across static illustrations. There aren’t even any lip flaps. The studio went under while making this film, so this was a method of cutting costs. However, the results are memorable and even contribute to the film’s power. (There’s a great analysis to be written about its use of vertical versus horizontal space.) Despite these limitations, Belladonna of Sadness is, on a purely aesthetic level, almost unbelievably beautiful. I’d hang any given frame of it up on my wall. Even if you don’t care about its message, this film is still worth watching as a work of altered-state eroticism.

Overall, viewers who can handle the content will probably be entertained by this gorgeous and trippy movie. However, I especially recommend Belladonna of Sadness to anyone interested in the history of anime.

Belladonna of Sadness is the culmination of a rare attempt to make blatantly un-commercial, artistically challenging anime. At the cost of bankruptcy, Mushi Productions made a masterpiece that wouldn’t be fully appreciated for 40 years. Now hindsight allows us to see the breadth of its influence and depth of its daring. Get in on this experience while you have the chance.

“Candy Japan’s New Box A/B Test”, Branwen 2016

Candy-Japan: “Candy Japan’s new box A/B test”⁠, Gwern Branwen (2016-05-06; ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

Bayesian decision-theoretic analysis of the effect of fancier packaging on subscription cancellations & optimal experiment design.

I analyze an A/​B test from a mail-order company of two different kinds of box packaging from a Bayesian decision-theory perspective, balancing posterior probability of improvements & greater profit against the cost of packaging & risk of worse results, finding that as the company’s analysis suggested, the new box is unlikely to be sufficiently better than the old. Calculating expected values of information shows that it is not worth experimenting on further, and that such fixed-sample trials are unlikely to ever be cost-effective for packaging improvements. However, adaptive experiments may be worthwhile.
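A minimal sketch of the Beta-Binomial core of such an analysis, with invented counts (the full analysis layers costs, profits, and expected value of information on top of these posteriors):

```python
# Posterior probability that the new box lowers the cancellation rate,
# via Monte Carlo draws from Beta-Binomial posteriors. Counts are invented.
import numpy as np

rng = np.random.default_rng(0)
old_cancels, old_n = 30, 200        # hypothetical: cancellations / subscribers
new_cancels, new_n = 22, 200

# Beta(1, 1) priors updated with the observed counts
old_post = rng.beta(1 + old_cancels, 1 + old_n - old_cancels, 100_000)
new_post = rng.beta(1 + new_cancels, 1 + new_n - new_cancels, 100_000)

p_better = (new_post < old_post).mean()   # lower cancellation rate is better
print(f"P(new box reduces cancellations) ≈ {p_better:.2f}")
```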

“The First Roman Fonts”, Boardley 2016

“The First Roman Fonts”⁠, John Boardley (2016-04-18; backlinks; similar):

[Where did our fonts come from? Your standard Latin alphabet can be written in many styles, so where did the regular upright sort of font (which you are reading right now) come from? Boardley traces the evolution of the Roman font from its origins in Imperial Roman styles through to the Renaissance, where it was perfectly placed for the print revolution and canonization as the Western font. Early printers, working in a difficult business, would invent the new typefaces they needed, modeled on humanist scribes’ Roman script, refining the letters into what we know today, including such variants as the lowercase ‘g’ (which looks so different from the handwritten letter).]

The Renaissance effected change in every sphere of life, but perhaps one of its most enduring legacies is the letterforms it bequeathed to us. But their heritage reaches far beyond the Italian Renaissance to antiquity. In ancient Rome, the Republican and Imperial capitals were joined by rustic capitals, square capitals (Imperial Roman capitals written with a brush), uncials, and half-uncials, in addition to a more rapidly penned cursive for everyday use. From those uncial and half-uncial forms evolved a new formal book-hand practiced in France that spread rapidly throughout medieval Europe.

…From the second quarter of the 16th century, roman types, hitherto reserved almost exclusively for classical and humanist literature, began to make inroads into those genres that had traditionally been printed in gothic types. Especially from the 1520s in Paris, we witness books of hours and even Psalters set in roman types.

Two Latin alphabets inspired by both antique and medieval antecedents. Majuscules first incised in stone more than two millennia ago, married to minuscule letterforms that evolved from manuscript hands of the eighth and ninth centuries. The Carolingian or Caroline minuscule joined forces with antique Roman square capitals at the very beginning of the 15th century—a conjunction willed by the great Florentine humanists; their forms first wrought in metal by two German immigrants at Subiaco and Rome, honed by a Frenchman, and consummated at the hands of Griffo of Bologna and Aldus the Venetian. A thousand years after the fall of the Roman Empire, the romans returned and re-conquered—yet another thing the Romans have done for us.

“Implementing Recommendations From Web Accessibility Guidelines: Would They Also Provide Benefits to Nondisabled Users”, Schmutz et al 2016

2016-schmutz.pdf: “Implementing Recommendations From Web Accessibility Guidelines: Would They Also Provide Benefits to Nondisabled Users”⁠, Sven Schmutz, Andreas Sonderegger, Juergen Sauer (2016-04-04; backlinks; similar):

[followup: Schmutz et al 2017⁠; do these show benefits of accessibility guidelines, or just good design with checklists as reminders?] Objective: We examined the consequences of implementing Web accessibility guidelines for nondisabled users.

Background: Although there are Web accessibility guidelines for people with disabilities available, they are rarely used in practice, partly due to the fact that practitioners believe that such guidelines provide no benefits, or even have negative consequences, for nondisabled people, who represent the main user group of Web sites. Despite these concerns, there is a lack of empirical research on the effects of current Web accessibility guidelines on nondisabled users.

Method: 61 nondisabled participants used one of 3 Web sites differing in levels of accessibility (high, low, and very low). Accessibility levels were determined by following established Web accessibility guidelines (WCAG 2.0). A broad methodological approach was used, including performance measures (eg. task completion time) and user ratings (eg. perceived usability).

Results: A high level of Web accessibility led to better performance (ie. task completion time and task completion rate) than low or very low accessibility. Likewise, high Web accessibility improved user ratings (ie. perceived usability, aesthetics, workload, and trustworthiness) compared to low or very low Web accessibility. There was no difference between the very low and low Web accessibility conditions for any of the outcome measures.

Conclusion: Contrary to some concerns in the literature and among practitioners, high conformance with Web accessibility guidelines may provide benefits to users without disabilities.

Application: The findings may encourage more practitioners to implement WCAG 2.0 for the benefit of users with disabilities and nondisabled users.

…[We tested] contrast, text alignment, precision of link description, appropriateness of headings, focus visibility, number of section headings, and consistency in link style…precision of form description, focus order, and error identification…A further reason for choosing these criteria was that most of the criteria were of general relevance because it has been shown that they also provide benefits to other user groups, such as older users.

[Keywords: Web accessibility, nondisabled users, WCAG 2.0, performance, usability]

“Visions of the Future: 14 Space Travel Posters of Colorful, Exotic Space Settings Are Now Available Free for Downloading and Printing”, Goods et al 2016

“Visions of the Future: 14 space travel posters of colorful, exotic space settings are now available free for downloading and printing”⁠, Dan Goods, David Delgado, Liz Barrios De La Torre, Stefan Bucher, Invisible Creature (Don Clark & Ryan Clark) et al (2016-02; ; similar):

[JPL-sponsored Art Deco/​WPA poster series with the concept of advertising travel in the Solar System & to exoplanets; public domain & free to download/​print.]

A creative team of visual strategists at JPL, known as “The Studio”, created the poster series, which is titled “Visions of the Future.” Nine artists, designers, and illustrators were involved in designing the 14 posters, which are the result of many brainstorming sessions with JPL scientists, engineers, and expert communicators. Each poster went through a number of concepts and revisions, and each was made better with feedback from the JPL experts.

David Delgado, creative strategy: “The posters began as a series about exoplanets—planets orbiting other stars—to celebrate NASA’s study of them. (The NASA program that focuses on finding and studying exoplanets is managed by JPL.) Later, the director of JPL was on vacation at the Grand Canyon with his wife, and they saw a similarly styled poster that reminded them of the exoplanet posters. They suggested it might be wonderful to give a similar treatment to the amazing destinations in our solar system that JPL is currently exploring as part of NASA. And they were right! The point was to share a sense of things on the edge of possibility that are closely tied to the work our people are doing today. The JPL director has called our people “architects of the future.” As for the style, we gravitated to the style of the old posters the WPA created for the national parks. There’s a nostalgia for that era that just feels good.”

Joby Harris, illustrator: “The old WPA posters did a really great job delivering a feeling about a far-off destination. They were created at a time when color photography was not very advanced, in order to capture the beauty of the national parks from a human perspective. These posters show places in our solar system (and beyond) that likewise haven’t been photographed on a human scale yet—or in the case of the exoplanets might never be, at least not for a long time. It seemed a perfect way to help people imagine these strange, new worlds.”

David Delgado: “The WPA poster style is beloved, and other artists have embraced it before us. Our unique take was to take one specific thing about the place and focus on the science of it. We chose exoplanets that had really interesting, strange qualities, and everything about the poster was designed to amplify the concept. The same model guided us for the posters that focus on destinations in the solar system.”

Lois Kim, typography: “We worked hard to get the typography right, since that was a very distinctive element in creating the character of those old posters. We wanted to create a retro-future feel, so we didn’t adhere exactly to the period styles, but they definitely informed the design. The Venus poster has a very curvy, flowy font, for example, to evoke a sense of the clouds.”

“The Annals of the Parrigues”, Short 2015

2015-short-theannalsoftheparrigues.pdf: “The Annals of the Parrigues”⁠, Emily Short (2015-12-18; ; backlinks)

“Repeatability of Fractional Flow Reserve Despite Variations in Systemic and Coronary Hemodynamics”, Johnson et al 2015

“Repeatability of Fractional Flow Reserve Despite Variations in Systemic and Coronary Hemodynamics”⁠, Nils P. Johnson, Daniel T. Johnson, Richard L. Kirkeeide, Colin Berry, Bernard De Bruyne, William F. Fearon et al (2015-07; backlinks; similar):

Objectives: This study classified and quantified the variation in fractional flow reserve (FFR) due to fluctuations in systemic and coronary hemodynamics during intravenous adenosine infusion.

Background: Although FFR has become a key invasive tool to guide treatment, questions remain regarding its repeatability and stability during intravenous adenosine infusion because of systemic effects that can alter driving pressure and heart rate.

Methods: We reanalyzed data from the VERIFY (VERification of Instantaneous Wave-Free Ratio and Fractional Flow Reserve for the Assessment of Coronary Artery Stenosis Severity in EverydaY Practice) study, which enrolled consecutive patients who were infused with intravenous adenosine at 140 μg/​kg/​min and measured FFR twice. Raw phasic pressure tracings from the aorta (Pa) and distal coronary artery (Pd) were transformed into moving averages of Pd/​Pa. Visual analysis grouped Pd/​Pa curves into patterns of similar response. Quantitative analysis of the Pd/​Pa curves identified the “smart minimum” FFR using a novel algorithm, which was compared with human core laboratory analysis.

Results: A total of 190 complete pairs came from 206 patients after exclusions. Visual analysis revealed 3 Pd/​Pa patterns: “classic” (sigmoid) in 57%, “humped” (sigmoid with superimposed bumps of varying height) in 39%, and “unusual” (no pattern) in 4%. The Pd/​Pa pattern repeated itself in 67% of patient pairs. Despite variability of Pd/​Pa during the hyperemic period, the “smart minimum” FFR demonstrated excellent repeatability (bias −0.001, SD 0.018, paired p = 0.93, r2 = 98.2%, coefficient of variation = 2.5%). Our algorithm produced FFR values not statistically-significantly different from human core laboratory analysis (paired p = 0.43 vs. VERIFY; p = 0.34 vs. RESOLVE).

Conclusions: Intravenous adenosine produced 3 general patterns of Pd/​Pa response, with associated variability in aortic and coronary pressure and heart rate during the hyperemic period. Nevertheless, FFR—when chosen appropriately—proved to be a highly reproducible value. Therefore, operators can confidently select the “smart minimum” FFR for patient care. Our results suggest that this selection process can be automated, yet comparable to human core laboratory analysis.
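The abstract does not reproduce the published “smart minimum” algorithm, but the recipe it describes—transform raw Pd/Pa into a moving average, then select a minimum robust to transient artifacts—can be sketched as follows; the window length and robustness rule here are guesses, not the authors’ parameters.

```python
# Sketch of a "smart minimum" FFR: smooth the beat-to-beat Pd/Pa ratio,
# then take a low quantile of the smoothed curve rather than the absolute
# minimum, so isolated artifactual dips are ignored. Parameters are guesses.
import numpy as np

def smart_minimum_ffr(pd_samples, pa_samples, window=200, q=0.02):
    ratio = pd_samples / pa_samples                  # distal / aortic pressure
    smoothed = np.convolve(ratio, np.ones(window) / window, mode="valid")
    return np.quantile(smoothed, q)
```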

“Elephas Anthropogenus”, Westphal 2015

2015-westphal.pdf: “Elephas anthropogenus”⁠, Uli Westphal (2015-05-01; ⁠, ; similar):

This paper and its accompanying artwork examine the history of our perception of nature based on the example of elephants (Elephas maximus, Loxodonta africana, Loxodonta cyclotis).

From the fall of the Roman Empire up until the late Middle Ages, elephants virtually disappeared from Western Europe. Since there was no real knowledge of how these animals actually looked, illustrators had to rely on oral, pictorial and written transmissions to morphologically reconstruct an elephant, thus reinventing the image of an actually existing creature. This led, in most cases, to illustrations in which the most characteristic features of elephants—such as trunk and tusks—are still visible, but that otherwise completely deviate from the real appearance and physique of these animals. In this process, zoological knowledge about elephants was overwritten by their cultural importance.

Based on a collection of these images I have reconstructed the evolution of the ‘Elephas anthropogenus’, the man-made elephant.

[Keywords: elephants, art, taxonomy⁠, history, evolution, illustration, Physiologus⁠, morphology]

“Markdeep”, McGuire 2015

“Markdeep”⁠, Morgan McGuire (2015; similar):

[Markdeep is a single-file JavaScript Markdown → HTML compiler: it can be inserted into a Markdown file, which then automatically renders itself in any visiting web browser. It is highly opinionated and featureful, including a wide variety of automatic symbol replacements, ‘admonitions’, embedded ASCII diagrams, calendars, todo task lists, multi-columns, etc.]

Markdeep is a technology for writing plain text documents that will look good in any web browser, whether local or remote. It supports diagrams, calendars, equations, and other features as extensions of Markdown syntax. Markdeep is free and easy to use. It doesn’t require a plugin or Internet connection. Your document never leaves your machine and there’s nothing to install. Just start writing in your favorite text editor. You don’t have to export, compile, or otherwise process your document. Here’s an example of a text editor and a browser viewing the same file simultaneously:…Markdeep is ideal for design documents, specifications, README files, code documentation, lab reports, blogs, and technical web pages. Because the source is plain text, Markdeep works well with software development toolchains.

Markdeep was created by Morgan McGuire (Casual Effects) with inspiration from John Gruber’s Markdown and Donald Knuth’s and Leslie Lamport’s LaTeX. Unique features:

Diagrams · Insert documents into one another · LaTeX equation typesetting and numbering · Table of contents · Reference images and embedded images · Document title and subtitle formatting · Schedules and calendars · Section numbering and references · Figure, listing, and table numbering and references · Smart quotes · Embedded video · CSS stylesheets · Page breaks · En dash, em dash, ×, minus, and degrees · Attributes on links · Unindexed sections · Works in any browser by adding one line to the bottom of a text document · Fallback to ASCII in a browser if you have neither the local file nor Internet access · Optionally process server-side with node.js · Optionally batch process to PDF with headless browser flags · HTML export to static content using ?export in the URL or “Rasterizer”
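In practice, “adding one line to the bottom of a text document” means appending Markdeep’s self-rendering footer, which at the time looked approximately like the following (consult the Markdeep homepage for the current canonical line):

```html
<!-- Markdeep footer (approximate): hides the raw text, then loads the
     compiler from a local copy or the CDN and lets it rewrite the page. -->
<style class="fallback">body{visibility:hidden;white-space:pre;font:14px monospace}</style>
<script src="markdeep.min.js" charset="utf-8"></script>
<script src="https://morgan3d.github.io/markdeep/latest/markdeep.min.js" charset="utf-8"></script>
<script>window.alreadyProcessedMarkdeep || (document.body.style.visibility = "visible")</script>
```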

“Reflections on How Designers Design With Data”, Bigelow et al 2014

2014-bigelow.pdf: “Reflections on How Designers Design with Data”⁠, Alex Bigelow, Steven Mark Drucker, Danyel Fisher, Miriah D. Meyer (2014-05-27; ; backlinks; similar):

In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers’ challenges and perspectives.

We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild.

A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.

Patterns: In our observational studies we observed all of the designers initially sketching visual representations of data on paper, on a whiteboard, or in Illustrator. In these sketches, the designers would first draw high-level elements of their design such as the layout and axes, followed by a sketching in of data points based on their perceived ideas of data behavior (P1). An example is shown in Figure 3. The designers often relied on their understanding of the semantics of data to infer how the data might look, such as F1 anticipating that Fitbit data about walking would occur in short spurts over time while sleep data would span longer stretches. However, the designers’ inferences about data behavior were often inaccurate (P2). This tendency was acknowledged by most of the designers: after her inference from data semantics, F1 indicated that to work effectively, she would need “a better idea of the behavior of each attribute.” Similarly, B1 did not anticipate patterns in how software bugs are closed, prompting a reinterpretation and redesign of her team’s visualization much later in the design process once data behavior was explicitly explored. In the time travel studies, T3 misinterpreted one trip that later caused a complete redesign.

Furthermore, the designers’ inferences about data structure were often separated from the actual data (P3). In brainstorming sessions at the hackathon, the designers described data that would be extremely difficult or impossible to gather or derive. In working with the HBO dataset, H1 experienced frustration after he spent time writing a formula in Excel only to realize that he was recreating data he had already seen in the aggregate table…Not surprisingly, the amount of data exploration and manipulation was related to the level of a designer’s experience working with data (P4).

“Transistor Radios Around the World: 1958 Braun T3”, Davidson 2014

“Transistor Radios Around the World: 1958 Braun T3”⁠, Robert Davidson (2014; backlinks; similar):

Micro-table / coat pocket radio, thermoplastic cabinet; 5 15⁄16 × 3 5⁄16 × 1 5⁄8 inches / 151×84×41 mm; 2-band MW/LW radio; six transistors (OC44, 2× OC45, OC75, 2× OC72) + OA70 diode; superheterodyne circuit; four 1.5-volt “AA” cells.

Braun’s first pocket transistor radio, designed by Dieter Rams and produced in 1958. An identical-looking model, the T31, was introduced in 1960 and employed seven transistors.

Much has been made in recent years of the Braun T3 having been the design inspiration for the original Apple iPod—that’s pretty clear by now: Apple’s chief industrial designer Jony Ive is well known for his love of Dieter Rams’s designs, and a number of his Apple product designs bear unmistakable direct influences from classic Braun product designs.

1958 Braun T3

“Radiance: A Novel”, Scholz et al 2013

2002-scholz-radiance: “Radiance: A Novel”⁠, Carter Scholz, Gregory Benford, Hugh Gusterson, Sam Cohen, Curtis LeMay (2013-07-06; ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

E-book edition of the 2002 Carter Scholz novel of post-Cold War science/​technology, extensively annotated with references and related texts.

Radiance: A Novel is SF author Carter Scholz’s second literary novel. It is a roman à clef of the 1990s set at the Lawrence Livermore National Laboratory⁠, centering on two nuclear physicists entangled in corruption, mid-life crises, institutional incentives, technological inevitability, the end of the Cold War & start of the Dotcom Bubble, nuclear bombs & Star Wars missile defense program, existential risks⁠, accelerationism, and the great scientific project of mankind. (For relevant historical background, see the excerpts in the appendices⁠.)

I provide a HTML transcript prepared from the novel, with extensive annotations of all references and allusions, along with extracts from related works, and a comparison with the novella version.

Note: to hide apparatus like the links, you can use reader-mode.

“The Third User, Or, Exactly Why Apple Keeps Doing Foolish Things”, Tognazzini 2013

“The Third User, or, Exactly Why Apple Keeps Doing Foolish Things”⁠, Bruce Tognazzini (2013-03-06; ; backlinks; similar):

Apple keeps doing things in the Mac OS that leave the user-experience (UX) community scratching its collective head, things like hiding the scroll bars and placing invisible controls inside the content region of windows on computers.

Apple’s mobile devices are even worse: It can take users upwards of 5 seconds to accurately drop the text pointer where they need it, but Apple refuses to add the arrow keys that have belonged on the keyboard from day-one.

Apple’s strategy is exactly right—up to a point:

Apple’s decisions may look foolish to those schooled in UX, but balance that against the fact that Apple consistently makes more money than the next several leaders in the industry combined.

While it’s true Apple is missing something—arrow keys—we in the UX community are missing something, too: Apple’s razor-sharp focus on a user many of us often fail to even consider: the potential user, the buyer. During the first Jobsian era at Apple, I used to joke that Steve Jobs cared deeply about Apple customers from the moment they first considered purchasing an Apple computer right up until the time their check cleared the bank.

…What do most buyers not want? They don’t want to see all kinds of scary-looking controls surrounding a media player. They don’t want to see a whole bunch of buttons they don’t understand. They don’t want to see scroll bars. They do want to see clean screens with smooth lines. Buyers want to buy Ferraris, not tractors, and that’s exactly what Apple is selling.

… Let me offer two examples of Apple objects that aid in selling products, but make life difficult for users thereafter.

  1. The Apple Dock: The Apple Dock is a superb device for selling computers for pretty much the same reasons that it fails miserably as a day-to-day device: A single glance at the Dock lets the potential buyer know that this is a computer that is beautiful, fun, approachable, easy to conquer, and you don’t have to do a lot of reading. Of course, not one of these attributes is literally true, at least not if the user ends up exploiting even a fraction of the machine’s potential, but such is the nature of merchandizing, and the Mac is certainly easier than the competition.

    The real problem with the Dock is that Apple simultaneously stripped out functionality that was far superior, though less flashy, when they put the Dock in.

  2. Invisible Scroll Bars:

    “Gee, the screen looks so clean! This computer must be easy to use!” So goes the thinking of the buyer when seeing a document open in an Apple store, exactly the message Apple intends to impart. The problem right now is that Apple’s means of delivering that message is actually making the computer less easy to use!

    …the scroll bar has become a vital status device as well, letting you know at a glance the size of and your current position within a document…Hiding the scroll bar, from a user’s perspective, is madness. If the user wants to actually scroll, it’s bad enough: He or she is now forced to use a thumbwheel or gesture to invoke scrolling, as the scroll bar is no longer even present. However, if the user simply wants to see their place within the document, things can quickly spiral out of control: The only way to get the scroll bar to appear is to initiate scrolling, so the only way to see where you are right now in a document is to scroll to a different part of the document! It may only require scrolling a line or two, but it is still crazy on the face of it! And many windows contain panels with their own scroll bars as well, so trying to trick the correct one into turning on, if you can do so at all (good luck with Safari!), can be quite a challenge…(The scroll bars, even when turned on, are hard to see, with their latest mandatory drab gray replacing bright blue, and are now so thin they take around twice as long to target as earlier scroll bars. When a company ships products either before user testing or after ignoring the results of that testing, both their product and their users suffer.)

Industrial design: Borrow the aesthetic, ignore the limitation

While Apple has copied over the aesthetics of industrial design into the software world, they have also copied over its limitation: Whether it be a tractor, Ferrari, or electric toaster, that piece of hardware, in the absence of upgradeable software, will look and act the same the first time you use it as the thousandth time. Software doesn’t share that natural physical limitation, and Apple must stop acting as though it does.

“The Olivetti Valentine Typewriter”, Hill 2012

“The Olivetti Valentine typewriter”⁠, Cate St Hill (2012-11-29; ; backlinks; similar):

“Dear Valentine, this is to tell you that you are my friend as well as my Valentine, and that I intend to write you lots of letters”, says the user guide of the familiar red typewriter. This purposefully heartwarming greeting sets the tone for Ettore Sottsass’ typewriter. The blood-red Valentine was a fun, light-hearted and smooth-operating symbol of the 1960s Pop era, and its use of bright, playful casing for a piece of traditional office equipment was arguably a precursor to Apple’s 1998 Bondi Blue iMac. “When I was young, all we ever heard about was functionalism, functionalism, functionalism”, said Sottsass. “It’s not enough. Design should also be sensual and exciting.”

The Valentine—created for the Italian brand Olivetti—was designed in collaboration with the British designer Perry King and entered production in 1969. It was not a commercial success. The Valentine was technically mediocre, expensive and failed to sell to a mass audience, yet still became a design classic. Valentines can be found in the permanent collections of London’s Design Museum and MoMA, the typewriter being accepted into the latter just two years after its launch. The product’s critical success was unhindered by its functional limitations because its design focused as much on its emotional connection to users as it did on practical ease of use.

Sottsass set out his stall early on. One of the initial advertising campaigns for the design featured posters by the graphic designer and founder of New York magazine, Milton Glaser. Glaser used a detail of Piero di Cosimo’s renaissance painting, Satyr Mourning over Nymph. In the poster, the Valentine typewriter is placed next to a red setter, an elegant, rambunctious dog; man’s best friend. The suggestion was that Sottsass’ portable accessory could be just as loyal and convivial. How the product performed was arguably irrelevant. It was about how it made you feel.

The Valentine was available in white, green and blue, but its most famous form was red: lipstick-bright ABS plastic casing, with black plastic keys and white lettering. “Every colour has a history”, said Sottsass, “Red is the color of the Communist flag, the colour that makes a surgeon move faster and the color of passion.”

The distinctive colour was calculated to bring vibrancy and fun into the office world of the 1960s. Sottsass said that the Valentine “was invented for use any place except in an office, so as not to remind anyone of monotonous working hours, but rather to keep amateur poets company on quiet Sundays in the country or to provide a highly coloured object on a table in a studio apartment.” The ideas that later manifested themselves in Sottsass’ 1970s Memphis movement—the Milan design group known for its brightly coloured postmodern furniture—were already evident in the Valentine typewriter. Sottsass gave a standardised piece of office equipment personality.

Although the designer would later dismiss the Valentine—comparing it to “a girl wearing a very short skirt and too much make-up”—its design was an elegant summation of his belief that successful, long-lasting product design was not solely connected to performance, but rather owed as much to the emotional force of a design.

The Olivetti Valentine typewriter (Design Museum)

“When Graphics Improve Liking but Not Learning from Online Lessons”, Sung & Mayer 2012

2012-sung.pdf: “When graphics improve liking but not learning from online lessons”⁠, Eunmo Sung, Richard E. Mayer (2012-09-01; ; similar):

  • Added instructive, decorative, seductive photos or none to an online lesson.
  • Higher satisfaction ratings for all 3 kinds of photos.
  • Higher recall test scores for instructive photos only.
  • Adding relevant photos helps learning, but adding irrelevant photos does not.

The multimedia principle states that adding graphics to text can improve student learning (Mayer 2009), but all graphics are not equally effective.

In the present study, students studied a short online lesson on distance education that contained instructive graphics (ie. directly relevant to the instructional goal), seductive graphics (ie. highly interesting but not directly relevant to the instructional goal), decorative graphics (ie. neutral but not directly relevant to the instructional goal), or no graphics.

After instruction, students who received any kind of graphic produced statistically-significantly higher satisfaction ratings than the no graphics group, indicating that adding any kind of graphic greatly improves positive feelings. However, on a recall posttest, students who received instructive graphics performed statistically-significantly better than the other 3 groups, indicating that the relevance of graphics affects learning outcomes. The 3 kinds of graphics had similar effects on affective measures but different effects on cognitive measures.

Thus, the multimedia effect is qualified by a version of the coherence principle: Adding relevant graphics to words helps learning but adding irrelevant graphics does not.

[Keywords: graphics, seductive details, e-Learning, web-based learning, multimedia effect, multimedia learning]

“A/B Testing Long-form Readability on Gwern.net”, Branwen 2012

AB-testing: “A/B testing long-form readability on Gwern.net”⁠, Gwern Branwen (2012-06-16; ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

A log of experiments done on the site design, intended to render pages more readable, focusing on the challenge of testing a static site, page width, fonts, plugins, and effects of advertising.

To gain some statistical & web development experience and to improve my readers’ experiences, I have been running a series of CSS A/​B tests since June 2012. As expected, most do not show any meaningful difference.
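
For a sense of what “no meaningful difference” means in practice, here is a minimal sketch (not the site’s actual analysis code; the conversion event and counts below are invented for illustration) of the standard two-proportion z-test one would run on a CSS A/B test:

    # Sketch of a two-proportion z-test for an A/B test of two CSS variants,
    # scored by some conversion event (e.g., whether a reader stays on the
    # page). Not Gwern.net's actual analysis; the counts are invented.
    from math import erf, sqrt

    def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
        """Return (z, two-sided p) for H0: variants A and B convert at the same rate."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2*(1 - Phi(|z|))
        return z, p_two_sided

    z, p = two_proportion_ztest(conv_a=412, n_a=5000, conv_b=430, n_b=5000)
    print(f"z = {z:+.2f}, p = {p:.3f}")  # here p is about 0.5: no detectable difference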

“STEPS Toward Expressive Programming Systems: "A Science Experiment"”, Ohshima et al 2012-page-2

“STEPS Toward Expressive Programming Systems: "A Science Experiment"”⁠, Yoshiki Ohshima, Dan Amelang, Ted Kaehler, Bert Freudenberg, Aran Lunzer, Alan Kay⁠, Ian Piumarta, Takashi Yamamiya et al (2012; ⁠, ; backlinks; similar):

[Technical report from a research project aiming at writing a GUI OS in 20k LoC; tricks include ASCII art networking DSLs & generic optimization for text layout⁠, which lets them implement a full OS, sound, GUI desktops, Internet networking & web browsers, a text/​document editor etc, all in fewer lines of code than most OSes need for small parts of any of those.]

…Many software systems today are made from millions to hundreds of millions of lines of program code that is too large, complex and fragile to be improved, fixed, or integrated. (One hundred million lines of code at 50 lines per page is 5000 books of 400 pages each! This is beyond human scale.) What if this could be made literally 1000× smaller—or more? And made more powerful, clear, simple and robust?…

STEPS Aims At ‘Personal Computing’—STEPS takes as its prime focus the dynamic modeling of ‘personal computing’ as most people think of it…word processor, spreadsheet, Internet browser, other productivity SW; User Interface and Command Listeners: windows, menus, alerts, scroll bars and other controls, etc.; Graphics and Sound Engine: physical display, sprites, fonts, compositing, rendering, sampling, playing; Systems Services: development system, database query languages, etc.; Systems Utilities: file copy, desk accessories, control panels, etc.; Logical Level of OS: eg. file management, Internet, and networking facilities, etc.; Hardware Level of OS: eg. memory manager, process manager, device drivers, etc.

“Wikimedia UK Board Meeting, London”, Gardner 2011

“Wikimedia UK Board Meeting, London”⁠, Sue Gardner (2011-11-19; ; similar):

It’s getting harder for new people to join our projects. Newbies are making up a smaller percentage of editors overall than ever before, and the absolute number of newbies is dropping as well. Wikimedia needs to attract and retain more new and diverse editors, and to retain our experienced editors. A stable editing community is critical to the long-term sustainability and quality of both our current projects and our movement. We consider meeting this challenge our top priority.

“The Biological Basis of a Universal Constraint on Color Naming: Cone Contrasts and the Two-Way Categorization of Colors”, Xiao et al 2011

“The Biological Basis of a Universal Constraint on Color Naming: Cone Contrasts and the Two-Way Categorization of Colors”⁠, Youping Xiao, Christopher Kavanau, Lauren Bertin, Ehud Kaplan (2011-08-22; backlinks; similar):

Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of “warm” and “cool”. This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L-cone versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.
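
A toy formalization of the paper’s two-way criterion as I read the abstract (the notation, Weber-contrast definition, and threshold are mine, not the authors’):

    # Toy reading of the abstract's criterion (notation mine, not the
    # authors'): classify a color as 'warm-like' or 'cool-like' by the sign
    # of its L-cone vs. M-cone contrast against the adapting background.
    def lm_contrast_sign(stim_lms, bg_lms):
        l_c = (stim_lms[0] - bg_lms[0]) / bg_lms[0]  # Weber contrast, L cone
        m_c = (stim_lms[1] - bg_lms[1]) / bg_lms[1]  # Weber contrast, M cone
        return "warm-like" if l_c > m_c else "cool-like"

    # A reddish stimulus raises L more than M relative to a gray background:
    print(lm_contrast_sign(stim_lms=(1.2, 1.0, 1.0), bg_lms=(1.0, 1.0, 1.0)))  # warm-like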

“Bitcoin Is Worse Is Better”, Branwen 2011

Bitcoin-is-Worse-is-Better: “Bitcoin Is Worse Is Better”⁠, Gwern Branwen (2011-05-27; ⁠, ⁠, ; backlinks; similar):

2011 essay on how Bitcoin’s long gestation and early opposition indicates it is an example of the ‘Worse is Better’ paradigm in which an ugly complex design with few attractive theoretical properties compared to purer competitors nevertheless successfully takes over a niche, survives, and becomes gradually refined.

The genius of Bitcoin⁠, in inventing a digital currency successful in the real world, is not in creating any new abstruse mathematics or cryptographic breakthrough, but in putting together decades-old pieces in a semi-novel but extremely unpopular way. Everything Bitcoin needed was available for many years, including the key ideas.

The sacrifice Bitcoin makes to achieve decentralization is—however practical—a profoundly ugly one. Early reactions to Bitcoin by even friendly cryptographers & digital currency enthusiasts were almost uniformly extremely negative, and emphasized the (perceived) inefficiency & (relative to most cryptography) weak security guarantees. Critics let ‘perfect be the enemy of better’ and did not perceive Bitcoin’s potential.

However, in an example of ‘Worse is Better’, the ugly inefficient prototype of Bitcoin successfully created a secure decentralized digital currency, which can wait indefinitely for success, and this was enough to eventually lead to adoption, improvement, and growth into a secure global digital currency.

“The Pilcrow, Part 2 of 3”, Houston 2011

“The Pilcrow, part 2 of 3”⁠, Keith Houston (2011-03-06; backlinks; similar):

Just as kaput stood for a section or a paragraph, so its diminutive capitulum, or ‘little head’, denoted a chapter. The general Roman preference for the letter ‘C’ had all but seen off the older Etruscan ‘K’ by 300 BC, but ‘K’ for kaput persisted some time longer in written documents. By the 12th century, though, ‘C’ for capitulum had overtaken ‘K’ in this capacity as well. The use of capitulum in the sense of a chapter of a written work was so closely identified with ecclesiastical documents that it came to be used in church terminology in a bewildering number of ways: monks went ad capitulum, ‘to the chapter (meeting)’, to hear a chapter from the book of their religious orders, or ‘chapter-book’, read out in the ‘chapter room’.

Monastic scriptoria worked on the same principle as factory production lines, with each stage of book production delegated to a specialist. A scribe would copy out the body of the text, leaving spaces for a ‘rubricator’ to later embellish the text by adding versals (large, elaborate initial letters), headings and other section marks as required. Taken from the Latin rubrico, ‘to colour red’, rubricators often worked in contrasting red ink, which not only added a decorative flourish but also guided the eye to important divisions in the text. In the hands of the rubricators, ‘C’ for capitulum came to be accessorized by a vertical bar, as were other litterae notabiliores [notable letters: “enlarged letter within a text, designed to clarify the syntax of a passage”] in the fashion of the time; later, the resultant bowl was filled in and so ‘¢’ for capitulum became the familiar reversed-P of the pilcrow.

‘C’ for capitulum in De Gestis Regum Anglorum, William of Malmesbury’s 1125 text detailing “deeds of the English kings”. (Image courtesy of Bibliothèque nationale de France.)

As the capitulum’s appearance changed, so too did its usage. At first used only to mark chapters, it started to pepper texts as a paragraph or even sentence marker so that it broke up a block of running text into meaningful sections as the writer saw fit. ¶ This style of usage yielded very compact text, harking back, perhaps, to the still-recent practice of scriptio continua [un-punctuated spaceless writing]. Ultimately, though, the concept of the paragraph overrode the need for efficiency and became so important as to warrant a new line—prefixed with a pilcrow, of course, to introduce it.

“The Snowflake Man of Vermont”, Heidorn 2011

“The Snowflake Man of Vermont”⁠, Keith C. Heidorn (2011-02-14; ⁠, ):

Keith C. Heidorn takes a look at the life and work of Wilson Bentley, a self-educated farmer from a small American town who, by combining a bellows camera with a microscope, managed to photograph the dizzyingly intricate and diverse structures of the snow crystal.

“Design Graveyard”, Branwen 2010

Design-graveyard: “Design Graveyard”⁠, Gwern Branwen (2010-10-01; ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

Meta page describing Gwern.net website design experiments and post-mortem analyses.

Often the most interesting part of any design are the parts that are invisible—what was tried but did not work. Sometimes they were unnecessary, other times users didn’t understand them because it was too idiosyncratic, and sometimes we just can’t have nice things.

Some post-mortems of things I tried on Gwern.net but abandoned (in chronological order).

“Design Of This Website”, Branwen 2010

Design: “Design Of This Website”⁠, Gwern Branwen (2010-10-01; ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

Meta page describing Gwern.net site implementation and experiments for better ‘structural reading’ of hypertext; technical decisions using Markdown and static hosting.

Gwern.net is implemented as a static website compiled via Hakyll from Pandoc Markdown and hosted on a dedicated server (due to expensive cloud bandwidth).

It stands out from your standard Markdown static website by aiming at good typography, fast performance, and advanced hypertext browsing features (at the cost of great implementation complexity); the 4 design principles are: aesthetically-pleasing minimalism, accessibility/​progressive-enhancement, speed, and a ‘structural reading’ approach to hypertext use.

Unusual features include the monochrome esthetics, sidenotes instead of footnotes on wide windows, efficient drop caps/​smallcaps, collapsible sections, automatic inflation-adjusted currency, Wikipedia-style link icons & infoboxes, custom syntax highlighting⁠, extensive local archives to fight linkrot, and an ecosystem of “popup”/​“popin” annotations & previews of links for frictionless browsing—the net effect of hierarchical structures with collapsing and instant popup access to excerpts enables iceberg-like pages where most information is hidden but the reader can easily drill down as deep as they wish. (For a demo of all features & stress-test page, see Lorem Ipsum⁠.)

Also discussed are the many failed experiments /  ​ changes made along the way.
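
The site’s compiler is Hakyll, in Haskell; purely as a language-neutral sketch of the same architecture (a batch “compile” of Markdown into standalone HTML via Pandoc; the directory layout here is invented), the core build step looks like:

    # Sketch of the static-site compile step (the real site uses Hakyll, not
    # this script; directory names are invented). Requires pandoc on $PATH.
    import pathlib
    import subprocess

    def build_site(src_dir="pages", out_dir="_site"):
        out = pathlib.Path(out_dir)
        out.mkdir(exist_ok=True)
        for md in sorted(pathlib.Path(src_dir).glob("*.md")):
            html = out / (md.stem + ".html")
            # --standalone emits a complete HTML document rather than a fragment
            subprocess.run(["pandoc", str(md), "--standalone", "-o", str(html)],
                           check=True)

    if __name__ == "__main__":
        build_site()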

“About This Website”, Branwen 2010

About: “About This Website”⁠, Gwern Branwen (2010-10-01; ⁠, ⁠, ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

Meta page describing Gwern.net site ideals of stable long-term essays which improve over time; idea sources and writing methodology; metadata definitions; site statistics; copyright license.

This page is about Gwern.net content; for the details of its implementation & design like the popup paradigm, see Design⁠; and for information about me, see Links⁠.

“P22 Civilite Type Specimen”, Foundry 2010

2010-p22-civilite-typespecimen.pdf: “P22 Civilite Type Specimen”⁠, P22 Type Foundry (2010-01-01; backlinks)

“Beware Trivial Inconveniences”, Alexander 2009

“Beware Trivial Inconveniences”⁠, Scott Alexander (2009-05-06; ⁠, ; backlinks; similar):

The Great Firewall of China. A massive system of centralized censorship purging the Chinese version of the Internet of all potentially subversive content. Generally agreed to be a great technical achievement and political success even by the vast majority of people who find it morally abhorrent. I spent a few days in China. I got around it at the Internet cafe by using a free online proxy. Actual Chinese people have dozens of ways of getting around it with a minimum of technical knowledge or just the ability to read some instructions.

The Chinese government isn’t losing any sleep over this (although they also don’t lose any sleep over murdering political dissidents, so maybe they’re just very sound sleepers). Their theory is that by making it a little inconvenient and time-consuming to view subversive sites, they will discourage casual exploration. No one will bother to circumvent it unless they already seriously distrust the Chinese government and are specifically looking for foreign websites, and these people probably know what the foreign websites are going to say anyway.

Think about this for a second. The human longing for freedom of information is a terrible and wonderful thing. It delineates a pivotal difference between mental emancipation and slavery. It has launched protests, rebellions, and revolutions. Thousands have devoted their lives to it, thousands of others have even died for it. And it can be stopped dead in its tracks by requiring people to search for “how to set up proxy” before viewing their anti-government website.

…But these trivial inconveniences have major policy implications. Countries like China that want to oppress their citizens are already using “soft” oppression to make it annoyingly difficult to access subversive information. But there are also benefits for governments that want to help their citizens.

“Three Doors to Other Worlds”, Crompton 2008

2008-crompton.pdf: “Three Doors to Other Worlds”⁠, Andrew Crompton (2008-10-20; ; similar):

Architecture that is hard to describe by being immaterial, irrelevant, and unintended may engage us in a narrative rather than a visual sense.

3 examples of anonymous architecture are presented where stories regarding interfaces between existence and nonexistence emerge. They are all places where people can vanish, and taken together they tell stories of death, hell, and heaven. In these unexpected places, the deeper issues of life may be obliquely and ironically experienced.

[Geoff Manaugh: While the entirety of the paper is worth reading, I want to highlight a specific moment, wherein Crompton introduces us to the colossal western bellmouth drain of the Ladybower reservoir in Derbyshire, England.

His description of this “inverted infrastructural monument”, as InfraNet Lab described it in their own post about Crompton’s paper—adding that spillways like this “maintain 2 states: (1) in use they disappear and are minimally obscured by flowing water, (2) not in use they are sculptural oddities hovering ambiguously above the water line”—is spine-tingling.

“What is down that hole is a deep mystery”, Crompton begins, and the ensuing passage deserves quoting in full:

Not even Google Earth can help you since its depths are in shadow when photographed from above. To see for yourself means going down the steps as far as you dare and then leaning out to take a look. Before attempting a descent, you might think it prudent to walk around the hole looking for the easiest way down. The search will reveal that the workmanship is superb and that there is no weakness to exploit, nowhere to tie a rope and not so much as a pebble to throw down the hole unless you brought it with you in the boat. The steps of this circular waterfall are all 18 inches high. This is an awkward height to descend, and most people, one imagines, would soon turn their back on the hole and face the stone like a climber. How far would you be willing to go before the steps became too small to continue? With proper boots, it is possible to stand on a sharp edge as narrow as a quarter of an inch wide; in such a position, you will risk your life twisting your cheek away from the stone to look downward because that movement will shift your center of gravity from a position above your feet, causing you to pivot away from the wall with only friction at your fingertips to hold you in place. Sooner or later, either your nerves or your grip will fail while diminishing steps accumulate below preventing a vertical view. In short, as if you were performing a ritual, this structure will first make you walk in circles, then make you turn your back on the thing you fear, then give you a severe fright, and then deny you the answer to a question any bird could solve in a moment. When you do fall, you will hit the sides before hitting the bottom. Death with time to think about it arriving awaits anyone who peers too far into that hole.

“What we have here”, he adds, “is a geometrical oddity: an edge over which it is impossible to look. Because you can see the endless walls of the abyss both below you and facing you, nothing is hidden except what is down the hole. Standing on the rim, you are very close to a mystery: a space receiving the light of the sun into which we cannot see.”]

“Verbal Probability Expressions In National Intelligence Estimates: A Comprehensive Analysis Of Trends From The Fifties Through Post-9/11”, Kesselman 2008

2008-kesselman.pdf: “Verbal Probability Expressions In National Intelligence Estimates: A Comprehensive Analysis Of Trends From The Fifties Through Post-9/11”⁠, Rachel F. Kesselman (2008-05; ⁠, ; backlinks; similar):

This research presents the findings of a study that analyzed words of estimative probability in the key judgments of National Intelligence Estimates from the 1950s through the 2000s. The research found that of the 50 words examined, only 13 were statistically-significant. Furthermore, interesting trends have emerged when the words are broken down into English modals, terminology that conveys analytical assessments and words employed by the National Intelligence Council as of 2006. One of the more intriguing findings is that use of the word will has by far been the most popular for analysts, registering over 700 occurrences throughout the decades; however, a word of such certainty is problematic in the sense that intelligence should never deal with 100% certitude. The relatively low occurrence and wide variety of word usage across the decades demonstrates a real lack of consistency in the way analysts have been conveying assessments over the past 58 years. Finally, the researcher suggests the Kesselman List of Estimative Words for use in the IC. The word list takes into account the literature review findings as well as the results of this study in equating odds with verbal probabilities.

[Rachel’s lit review, for example, makes for very interesting reading. She has done a thorough search of not only the intelligence but also the business, linguistics and other literatures in order to find out how other disciplines have dealt with the problem of “What do we mean when we say something is ‘likely’…” She uncovered, for example, that, in medicine, words of estimative probability such as “likely”, “remote” and “probably” have taken on more or less fixed meanings due primarily to outside intervention or, as she put it, “legal ramifications”. Her comparative analysis of the results and approaches taken by these other disciplines is required reading for anyone in the Intelligence Community trying to understand how verbal expressions of probability are actually interpreted. The NIC’s list only became final in the last several years, so it is arguable whether this list of nine words really captures the breadth of estimative word usage across the decades. Rather, it would be arguable if this chart didn’t make it crystal clear that the Intelligence Community has really relied on just two words, “probably” and “likely” to express its estimates of probabilities for the last 60 years. All other words are used rarely or not at all.

Based on her research of what works and what doesn’t and which words seem to have the most consistent meanings to users, Rachel even offers her own list of estimative words along with their associated probabilities:

  1. Almost certain: 86–99%
  2. Highly likely: 71–85%
  3. Likely: 56–70%
  4. Chances a little better [or less] than even: 46–55%
  5. Unlikely: 31–45%
  6. Highly unlikely: 16–30%
  7. Remote: 1–15%

]
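
Kesselman’s list is essentially a lookup table; encoded directly (the ranges are the ones listed above, converted to decimals; the helper function is mine), it maps a numeric probability back to the recommended word:

    # The Kesselman List above as a lookup table (ranges as given; helper mine).
    KESSELMAN_LIST = [
        ("Almost certain",                              0.86, 0.99),
        ("Highly likely",                               0.71, 0.85),
        ("Likely",                                      0.56, 0.70),
        ("Chances a little better [or less] than even", 0.46, 0.55),
        ("Unlikely",                                    0.31, 0.45),
        ("Highly unlikely",                             0.16, 0.30),
        ("Remote",                                      0.01, 0.15),
    ]

    def estimative_word(p):
        """Map a probability (0-1) to Kesselman's recommended estimative word."""
        for word, lo, hi in KESSELMAN_LIST:
            if lo <= p <= hi:
                return word
        raise ValueError(f"p={p} lies outside the defined 1%-99% bands")

    print(estimative_word(0.80))  # -> 'Highly likely'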

[See also “Decision by sampling”⁠, Stewart et al 2006; “Processing Linguistic Probabilities: General Principles and Empirical Evidence”⁠, Budescu & Wallsten 1995.]

“The Monotype 4-Line System for Setting Mathematics”, Rhatigan 2007

2007-rhatigan.pdf: “The Monotype 4-Line System for Setting Mathematics”⁠, Daniel Rhatigan (2007-08-13; ; similar):

[Author blog⁠. Description of the most advanced mechanical typesetting system for the challenging task of typesetting mathematics (whose high-quality typography is what Knuth aimed to recreate).

To provide the typographic quality of hand-set math but at an affordable cost, the Monotype corporation made a huge investment post-WWII into enhancing its mechanical hot metal typesetting system into one which would encode every mathematical equation into symbols placed on a vertical grid of 4 horizontal ‘lines’, into which could be slotted entries from a vast new family of fonts & symbols, all tweaked to fit in various positions, which would then be spat out by the machine into a single solid lead piece which could be combined with the rest to form a single page.

This allowed a skilled operator to rapidly ‘type’ his way through a page of math to yield a beautiful custom output without endlessly tedious hand-arranging lots of little metal bits.]

“The Architectural Relevance of Gordon Pask: Usman Haque Reviews the Contribution of Gordon Pask, the Resident Cybernetician on Cedric Price’s Fun Palace. He Describes Why in the Twenty First Century the Work of This Early Proponent and Practitioner of Cybernetics Has Continued to Grow in Pertinence for Architects and Designers Interested in Interactivity”, Bettella 2007

2007-haque.pdf: “The Architectural Relevance of Gordon Pask: Usman Haque reviews the contribution of Gordon Pask, the resident cybernetician on Cedric Price’s Fun Palace. He describes why in the Twenty First century the work of this early proponent and practitioner of cybernetics has continued to grow in pertinence for architects and designers interested in interactivity”⁠, Andrea Bettella (2007-01-01)

“Host: Deep into the Mercenary World of Take-no-prisoners Political Talk Radio”, Wallace 2005

2005-wallace.pdf: “Host: Deep into the mercenary world of take-no-prisoners political talk radio”⁠, David Foster Wallace (2005-04-01; ; backlinks)

“Mark Lombardi: Global Networks”, Hobbs 2003

2003-hobbs-marklombardiglobalnetworks.pdf: “Mark Lombardi: Global Networks”⁠, Robert Hobbs (2003-01-01; ; backlinks)

“Type & Typography: Highlights from Matrix, the Review for Printers and Bibliophiles”, Archive 2003

2003-matrix-typeandtypography.pdf: “Type & Typography: Highlights from Matrix, the review for printers and bibliophiles”⁠, Internet Archive (2003-01-01)

“Naked Objects: a Technique for Designing More Expressive Systems”, Pawson & Matthews 2001

2001-pawson.pdf: “Naked objects: a technique for designing more expressive systems”⁠, Richard Pawson, Robert Matthews (2001-12-01; ; similar):

Naked objects is an approach to systems design in which core business objects show directly through to the user interface, and in which all interaction consists of invoking methods on those objects in the noun-verb style.

One advantage of this approach is that it results in systems that are more expressive from the viewpoint of the user: they treat the user like a problem solver, not as merely a process-follower. Another advantage is that the 1:1 mapping between the user’s representation and the underlying model means that it is possible to auto-generate the former from the latter, which yields benefits to the development process.

The authors have designed a Java-based, open source toolkit called Naked Objects which facilitates this style of development. This paper describes the design and operation of the toolkit and its application to the prototyping of a core business system.

Some initial feedback from the project is provided, together with a list of future research directions both for the toolkit and for a methodology to apply the naked objects approach.
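
The toolkit itself is Java; as a toy analogue of the core idea (class and method names invented), here is the reflection trick in Python, where the noun-verb “menu” is derived mechanically from the domain object’s public methods so that UI and model cannot drift apart:

    # Toy Python analogue of the naked-objects idea (the real toolkit is
    # Java; names here are invented): the UI menu for an object is
    # auto-generated from its public methods by reflection.
    import inspect

    class Customer:                                      # a 'core business object'
        def __init__(self, name): self.name = name
        def place_order(self, product): print(f"{self.name} orders {product}")
        def close_account(self): print(f"{self.name}'s account closed")

    def menu_for(obj):
        """Noun-verb menu: one entry per public method of the object."""
        return [name for name, _ in inspect.getmembers(obj, inspect.ismethod)
                if not name.startswith("_")]

    alice = Customer("Alice")
    print(menu_for(alice))                   # ['close_account', 'place_order']
    getattr(alice, "place_order")("widget")  # user picks the noun, then the verb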

“Joachim of Fiore and Apocalyptic Immanence”, Ziolo 2001

2001-ziolo.pdf: “Joachim of Fiore and apocalyptic immanence”⁠, Paul Ziolo (2001-09; ; backlinks; similar):

Apocalyptic envisionings of the historical process, whether philosophical, pseudo-scientific or incarnate as chiliastic movements have always been, and in all likelihood will continue to be, an integral dimension in the unfolding of the Euroamerican cultural chreod. This paper begins with some general observations on the genesis and character of apocalyptic movements, then proceeds to trace the psychological roots of Euroamerican apocalyptic thought as expressed in the Trinitarian-dualist formulations of Christian dogma, showing how the writings of the medieval Calabrian mystic Joachim of Fiore (c.1135–1202) created a synthesis of dynamic Trinitarianism and existential dualism within a framework of historical immanence. The resulting Joachimite ‘program’ later underwent further dissemination and distortion within the context of psychospeciation and finally led to the great totalitarian systems of the 20th century, thereby indirectly exercising an influence on the development of psychohistory itself as an independent discipline.

“Micro-typographic Extensions to the TeX Typesetting System”, Thành 2000

2000-thanh.pdf: “Micro-typographic extensions to the TeX typesetting system”⁠, Hàn Thế Thành (2000-10-01; ; backlinks; similar):

This thesis investigates the possibility of improving the quality of text composition. 2 typographic extensions were examined: margin kerning and composing with font expansion.

Margin kerning is the adjustment of the characters at the margins of a typeset text. A simplified employment of margin kerning is hanging punctuation⁠. Margin kerning is needed for optical alignment of the margins of a typeset text, because mechanical justification of the margins makes them look rather ragged. Some characters can make a line appear shorter to the human eye than others. Shifting such characters by an appropriate amount into the margins would greatly improve the appearance of a typeset text.

Composing with font expansion is the method to use a wider or narrower variant of a font to make interword spacing more even. A font in a loose line can be substituted by a wider variant so the interword spaces are stretched by a smaller amount. Similarly, a font in a tight line can be replaced by a narrower variant to reduce the amount that the interword spaces are shrunk by. There is certainly potential danger of font distortion when using such manipulations, thus they must be used with extreme care. The potential to adjust a line width by font expansion can be taken into consideration while a paragraph is being broken into lines, in order to choose better breakpoints.

These typographic extensions were implemented in pdfTeX⁠, a derivation of TeX⁠.

Many experiments have been done to examine the influence of the extensions on the quality of typesetting. The extensions turned out to noticeably improve the appearance of a typeset text. A number of ‘real-world’ documents have been typeset using these typographic extensions, including this thesis.
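
A minimal sketch of how font expansion enters the line-breaking decision (structure and numbers are illustrative assumptions, not pdfTeX’s actual algorithm): a line is feasible if its deviation from the measure can be absorbed by interword glue plus a small expansion or condensation of the glyphs:

    # Illustrative sketch (not pdfTeX's real code) of composing with font
    # expansion: a line fits the measure if the leftover width is within the
    # interword glue's flexibility plus a small (here +/-2%) font expansion.
    def line_fits(natural_width, target_width, glue_stretch, glue_shrink,
                  glyph_width, expand_limit=0.02):
        slack = target_width - natural_width
        if slack >= 0:   # loose line: stretch the spaces, or widen the font
            return slack <= glue_stretch + glyph_width * expand_limit
        else:            # tight line: shrink the spaces, or narrow the font
            return -slack <= glue_shrink + glyph_width * expand_limit

    # 4pt short of the measure with only 2pt of glue stretch: expansion over
    # 200pt of glyphs contributes up to 4pt more, so the line is saved.
    print(line_fits(natural_width=296, target_width=300,
                    glue_stretch=2, glue_shrink=1, glyph_width=200))  # True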

“Visual Explanations: Images and Quantities, Evidence and Narrative, Tufte 1997”, Tufte 1997

“Visual Explanations: Images and Quantities, Evidence and Narrative, Tufte 1997”⁠, Edward Tufte (1997; backlinks; similar):

Visual Explanations: Images and Quantities, Evidence and Narrative [Tufte #3] is about pictures of verbs, the representation of mechanism and motion, process and dynamics, causes and effects, explanation and narrative. Practical applications and examples include statistical graphics, charts for making important decisions in engineering and medicine, technical manuals, diagrams, design of computer interfaces and websites and on-line manuals, animations and scientific visualizations, techniques for talks, and design strategies for enhancing the rate of information transfer in print, presentations, and computer screens. The use of visual evidence in deciding to launch the space shuttle Challenger is discussed in careful detail. Video snapshots show redesigns of a supercomputer animation of a thunderstorm. The book is designed and printed to the highest standards, with luscious color throughout and four built-in flaps for showing motion and before/​after effects.

158 pages; ISBN 1930824157

Cover of Visual Explanations

“Tokyo: A Certain Style”, Tsuzuki 1997

1997-tsuzuki-tokyoacertainstyle.pdf: Tokyo: A Certain Style⁠, Kyoichi Tsuzuki (1997; ⁠, ; backlinks; similar):

Writer-photographer Kyoichi Tsuzuki visited a hundred apartments, condos, and houses, documenting what he saw in more than 400 color photos that show the real Tokyo style—a far cry from the serene gardens, shoji screens, and Zen minimalism usually associated with Japanese dwellings.

In this Tokyo, necessities such as beds, bathrooms, and kitchens vie for space with electronic gadgets, musical instruments, clothes, books, records, and kitschy collectibles. Candid photos vividly capture the dizzying “cockpit effect” of living in a snug space crammed floor to ceiling with stuff. And it’s not just bohemian types and students who must fit their lives and work into tight quarters, but professionals and families with children, too. In descriptive captions, the inhabitants discuss the ingenious ways they’ve adapted their home environments to suit their diverse lifestyles.

“Role of Color in Perception of Attractiveness”, Radeloff 1990

1990-radeloff.pdf: “Role of Color in Perception of Attractiveness”⁠, Deanna J. Radeloff (1990-08-01; ; similar):

In this color study females reported a favorite color statistically-significantly more often than males. Males preferred bright colors statistically-significantly more than females, with a converse finding for preference for soft colors. The 276 subjects, when asked to evaluate the attractiveness of stimulus models in photographs, gave color as the reason statistically-significantly more often than style of clothing or facial expression. Subjects significantly concurred with expert choices of recommended and nonrecommended colors in five of the six sets of photographs. This study lends credence to the idea that wearing recommended colors makes a difference in judgments of what looks best, at least among subjects over the age of 12.

“Envisioning Information: Chapter 5, ‘Color and Information’, Pg83-86 [on Oliver Byrne's Color Diagram Version of Euclid's Elements]”, Tufte 1990

1990-tufte-envisioninginformation-ch5-byrneseuclid.pdf: “Envisioning Information: chapter 5, ‘Color and Information’, pg83-86 [on Oliver Byrne's color diagram version of Euclid's Elements]”⁠, Edward Tufte (1990; ; backlinks; similar):

[Extracts from Tufte textbook on graphing information and visual design, where he revives & popularizes Oliver Byrne’s obscure edition of Euclid⁠.

Tufte notes how effectively Byrne converts lengthy formal text proofs (intended for recitation) into short sequences of cleanly-designed diagrams exploiting primary colors for legibility, and the curious anticipation of modernist design movements like De Stijl⁠.

This inspired 2 digital recreations by Slyusarev Sergey & Nicholas Rougeux⁠.]

“Hypertext '87: Keynote Address”, Dam 1988

1988-vandam.pdf: “Hypertext '87: keynote address”⁠, Andries van Dam (1988-07-01; )

“Hypertext and the Oxford English Dictionary”, Raymond & Tompa 1988

1988-raymond.pdf: “Hypertext and the Oxford English dictionary”⁠, Darrel R. Raymond, Frank William Tompa (1988-07-01; ):

Hypertext databases can be produced by converting existing text documents to electronic form. The basic task in conversion is identification of fragments. We illustrate that this is not always a straightforward process with an analysis of the Oxford English Dictionary.

“The Printing of Mathematics”, Wishart 1988

1988-wishart.pdf: “The Printing of Mathematics”⁠, David Wishart (1988-01-01)

“Atlas Of Oblique Maps: A Collection Of Landform Portrayals Of Selected Areas Of The World”, Alpha et al 1988

1988-alpha-atlasofobliquemaps.pdf: Atlas Of Oblique Maps: A Collection Of Landform Portrayals Of Selected Areas Of The World⁠, Tau Rho Alpha, Janis S. Detterman, Jim Morley (1988; similar):

Pp. 137, 200+ maps and geological sections (some in color and some color-tinted). Publisher’s 2-color printed wrappers, large folio (20.5×16 inches).

This folio comprises scale-accurate, obliquely viewed maps compiled from 1961 to 1986 that portray the physiography of selected areas of the ocean floor and continents. The life’s work of Tau Rho Alpha…the maps are all oblique aerials, and range from 1961 to 1986, so are pre-digital.

The ability to represent complex geographic and topographic features enlivens many maps of this sort, and the techniques used to create them make for a fascinating read.

…Some of the benefits of this type of map are discussed, including greater realism, easier comprehension, and the ability to maintain scale. Disadvantages include displacement of features, hiding of key elements, and a relative inexactness of elevation and location.

“The Little Can That Could”, Daniel 1987

“The Little Can That Could”⁠, Richard M. Daniel (1987-02; ⁠, ; backlinks):

…Hitler knew this. He perceived early on that the weakest link in his plans for blitzkrieg using his panzer divisions was fuel supply. He ordered his staff to design a fuel container that would minimize gasoline losses under combat conditions. As a result the German army had thousands of jerrycans⁠, as they came to be called, stored and ready when hostilities began in 1939.

The jerrycan had been developed under the strictest secrecy, and its unique features were many. It was flat-sided and rectangular in shape, consisting of 2 halves welded together as in a typical automobile gasoline tank. It had 3 handles, enabling one man to carry 2 cans and pass one to another man in bucket-brigade fashion. Its capacity was ~5 U.S. gallons; its weight filled, 45 pounds. Thanks to an air chamber at the top, it would float on water if dropped overboard or from a plane. Its short spout was secured with a snap closure that could be propped open for pouring, making unnecessary any funnel or opener. A gasket made the mouth leakproof. An air-breathing tube from the spout to the air space kept the pouring smooth. And most important, the can’s inside was lined with an impervious plastic material developed for the insides of steel beer barrels. This enabled the jerrycan to be used alternately for gasoline and water.

Early in the summer of 1939, this secret weapon began a roundabout odyssey into American hands…Back in the United States, Pleiss told military officials about the container, but without a sample can he could stir no interest, even though the war was now well under way…Pleiss immediately sent one of the cans to Washington. The War Department looked at it but unwisely decided that an updated version of their World War I container would be good enough. That was a cylindrical ten-gallon can with 2 screw closures. It required a wrench and a funnel for pouring. That one jerrycan in the Army’s possession was later sent to Camp Holabird⁠, in Maryland. There it was poorly redesigned; the only features retained were the size, shape, and handles. The welded circumferential joint was replaced with rolled seams around the bottom and one side. Both a wrench and a funnel were required for its use. And it now had no lining. As any petroleum engineer knows, it is unsafe to store gasoline in a container with rolled seams. This ersatz can did not win wide acceptance.

The British first encountered the jerrycan during the German invasion of Norway⁠, in 1940, and gave it its English name (the Germans were, of course, the “Jerries”). Later that year Pleiss was in London and was asked by British officers if he knew anything about the can’s design and manufacture. He ordered the second of his 3 jerrycans flown to London. Steps were taken to manufacture exact duplicates of it. 2 years later the United States was still oblivious of the can.

…The British historian Desmond Young later confirmed the great importance of oil cans in the early African part of the war. “No one who did not serve in the desert”, he wrote, “can realise to what extent the difference between complete and partial success rested on the simplest item of our equipment—and the worst. Whoever sent our troops into desert warfare with the [five-gallon] petrol tin has much to answer for. General Auchinleck estimates that this ‘flimsy and ill-constructed container’ led to the loss of 30% of petrol between base and consumer. … The overall loss was almost incalculable. To calculate the tanks destroyed, the number of men who were killed or went into captivity because of shortage of petrol at some crucial moment, the ships and merchant seamen lost in carrying it, would be quite impossible.”

After my colleague and I made our report, a new 5-gallon container under consideration in Washington was canceled. Meanwhile the British were finally gearing up for mass production. 2 million British jerrycans were sent to North Africa in early 1943, and by early 1944 they were being manufactured in the Middle East. Since the British had such a head start, the Allies agreed to let them produce all the cans needed for the invasion of Europe. Millions were ready by D-day⁠. By V-E day some 21 million Allied jerrycans had been scattered all over Europe. President Roosevelt observed in November 1944, “Without these cans it would have been impossible for our armies to cut their way across France at a lightning pace which exceeded the German Blitz of 1940.”

“Embedded Menus: Selecting Items in Context”, Koved & Shneiderman 1986

1986-koved.pdf: “Embedded menus: selecting items in context”⁠, Larry Koved, Ben Shneiderman (1986-04-01; ):

In many situations, embedded menus represent an attractive alternative to the more traditional explicit menus, particularly in touchtext, spelling checkers, language-based program editors, and graphics-based systems.

“Contrasting Concepts of Harmony in Architecture”, Alexander & Eisenman 1982

“Contrasting Concepts of Harmony in Architecture”⁠, Christopher Alexander, Peter Eisenman (1982-11-17; backlinks):

The 1982 Debate Between Christopher Alexander and Peter Eisenman: An Early Discussion of the “New Sciences” of Organised Complexity in Architecture: This legendary debate took place at the Graduate School of Design⁠, Harvard University, on November 17th 1982. Not long before it, Alexander had given a talk on The Nature of Order⁠, which was to become the subject of his magnum opus of architectural philosophy. The original version he envisaged was less than half the size of the final 4-volume work as it now stands, but its main ideas were already formulated.


CA: Now then, I look at the buildings which purport to come from a point of view similar to the one I’ve expressed, and the main thing I recognize is, that whatever the words are—the intellectual argument behind that stuff—the actual buildings are totally different. Diametrically opposed. Dealing with entirely different matters. Actually, I don’t even know what that work is dealing with, but I do know that it is not dealing with feelings. And in that sense those buildings are very similar to the alienated series of constructions that preceded them since 1930…I really cannot conceive of a properly formed attitude towards buildings, as an artist or a builder, or in any way, if it doesn’t ultimately confront the fact that buildings work in the realm of feeling…Now, I will pick a building, let’s take Chartres for example. We probably don’t disagree that it’s a great building.

PE: Well, we do actually, I think it is a boring building. Chartres, for me, is one of the least interesting cathedrals. In fact, I have gone to Chartres a number of times to eat in the restaurant across the street—had a 1934 red Meursault wine⁠, which was exquisite—I never went into the cathedral. The cathedral was done en passant. Once you’ve seen one Gothic cathedral, you have seen them all…Let’s pick something that we can agree on—Palladio’s Palazzo Chiericati⁠. For me, one of the things that qualifies it in an incredible way, is precisely because it is more intellectual and less emotional. It makes me feel high in my mind, not in my gut. Things that make me feel high in my gut are very suspicious, because that is my psychological problem. So I keep it in the mind, because I’m happier with that.

You see, the Mies and Chiericati thing was far greater than Moore and Chiericati⁠, because Moore is just a pasticheur. We agree on that. But Mies and Chiericati is a very interesting example, and I find much of what is in Palladio—that is the contamination of wholeness—also in Mies [a reference to Mies’s treatment of corners?]… Now the space between is not part of classical unity, wholeness, completeness; it is another typology. It is not a typology of sameness or wholeness; it’s a typology of differences. It is a typology which transgresses wholeness and contaminates it.

CA: I don’t fully follow what you’re saying. It never occurred to me that someone could so explicitly reject the core experience of something like Chartres. It’s very interesting to have this conversation. If this weren’t a public situation, I’d be tempted to get into this on a psychiatric level. I’m actually quite serious about this. What I’m saying is that I understand how one could be very panicked by these kinds of feelings. Actually, it’s been my impression that a large part of the history of modern architecture has been a kind of panicked withdrawal from these kinds of feelings, which have governed the formation of buildings over the last 2,000 years or so.

Why that panicked withdrawal occurred, I’m still trying to find out. It’s not clear to me. But I’ve never heard somebody, until a few moments ago, say explicitly: “Yes, I find that stuff freaky. I don’t like to deal with feelings. I like to deal with ideas.” Then, of course, what follows is very clear. You would like the Palladio building; you would not be particularly happy with Chartres, and so forth. And Mies …

PE: The panicked withdrawal of the alienated self was dealt with in Modernism—which was concerned with the alienation of the self from the collective.

CA: …I will give you another example, a slightly absurd example. A group of students under my direction was designing houses for about a dozen people, each student doing one house. In order to speed things up (we only had a few weeks to do this project), I said: “We are going to concentrate on the layout and cooperation of these buildings, so the building system is not going to be under discussion.”

So I gave them the building system, and it happened to include pitched roofs, fairly steep pitched roofs. The following week, after people had looked at the notes I handed out about the building system, somebody raised his hand and said: “Look, you know everything is going along fine, but could we discuss the roofs?” So I said: “Yes, what would you like to discuss about the roofs?” And the person said: “Could we make the roofs a little different?” I had told them to make just ordinary pitched roofs. I asked, “What’s the issue about the roofs?” And the person responded: “Well, I don’t know, it’s just kind of funny.” Then that conversation died down a bit. 5 minutes later, somebody else popped up his hand and said: “Look, I feel fine about the building system, except the roofs. Could we discuss the roofs?” I said: “What’s the matter with the roofs?” He said, “Well, I have been talking to my wife about the roofs, and she likes the roofs”—and then he sniggered…The simplest explanation is that you have to do these others to prove your membership in the fraternity of modern architecture. You have to do something more far out, otherwise people will think you are a simpleton. But I do not think that is the whole story. I think the more crucial explanation—very strongly related to what I was talking about last night—is that the pitched roof contains a very, very primitive power of feeling. Not a low pitched, tract house roof, but a beautifully shaped, fully pitched roof. That kind of roof has a very primitive essence as a shape, which reaches into a very vulnerable part of you. But the version that is okay among the architectural fraternity is the one which does not have the feeling: the weird angle, the butterfly, the asymmetrically steep shed, etc.—all the shapes which look interesting but which lack feeling altogether.

PE: This is a wonderful coincidence, because I too am concerned with the subject of roofs. Let me answer it in a very deep way. I would argue that the pitched roof is—as Gaston Bachelard points out—one of the essential characteristics of “houseness”. It was the extension of the vertebrate structure which sheltered and enclosed man…That distance, which you call alienation or lack of feeling, may have been merely a natural product of this new cosmology…Last night, you gave 2 examples of structural relationships that evoke feelings of wholeness—of an arcade around a court, which was too large, and of a window frame which is also too large. Le Corbusier once defined architecture as having to do with a window which is either too large or too small, but never the right size. Once it was the right size, it was no longer functioning. When it is the right size, that building is merely a building. The only way one senses, in the presence of architecture, that feeling, that need for something other, is when the window is either too large or too small.

I was reminded of this when I went to Spain this summer to see the town hall at Logrono by Rafael Moneo⁠. He made an arcade where the columns were too thin. It was profoundly disturbing to me when I first saw photographs of the building. The columns seemed too thin for an arcade around the court of a public space. And then, when I went to see the building, I realized what he was doing. He was taking away from something that was too large, achieving an effect that expresses the separation and fragility that man feels today in relationship to the technological scale of life, to machines, and the car-dominated environment we live in. I had a feeling with that attenuated colonnade of precisely what I think you are talking about.

CA: …The thing that strikes me about your friend’s building—if I understood you correctly—is that somehow in some intentional way it is not harmonious. That is, Moneo intentionally wants to produce an effect of disharmony. Maybe even of incongruity.

PE: That is correct.

CA: I find that incomprehensible. I find it very irresponsible. I find it nutty. I feel sorry for the man. I also feel incredibly angry because he is f—king up the world.

Audience: (Applause)

PE: Precisely the reaction that you elicited from the group. That is, they feel comfortable clapping. The need to clap worries me because it means that mass psychology is taking over…If you repress the destructive nature, it is going to come out in some way. If you are only searching for harmony, the disharmonies and incongruities which define harmony and make it understandable will never be seen. A world of total harmony is no harmony at all. Because I exist, you can go along and understand your need for harmony, but do not say that I am being irresponsible or make a moral judgement that I am screwing up the world, because I would not want to have to defend myself as a moral imperative for you.

[‘Mass psychology’ here, used by a Jewish-American architect working post-WWII in NYC, alludes to Adorno⁠/​Frankfurt School Marxist criticism of American society & projects like the pseudoscience of The Authoritarian Personality⁠; Modernist architecture is implied here to be anti-fascist, in opposition to the mass appeal of more neo-classical or folk Nazi architecture⁠.]

CA: Good God!

PE: Nor should you feel angry. I think you should just feel this harmony is something that the majority of the people need and want. But equally there must be people out there like myself who feel the need for incongruity, disharmony, etc.

CA: If you were an unimportant person, I would feel quite comfortable letting you go your own way. But the fact is that people who believe as you do are really f—king up the whole profession of architecture right now by propagating these beliefs. Excuse me, I’m sorry, but I feel very, very strongly about this. It’s all very well to say: “Look, harmony here, disharmony there, harmony here—it’s all fine”. But the fact is that we as architects are entrusted with the creation of that harmony in the world. And if a group of very powerful people, yourself and others …

PE: I am not preaching disharmony. I am suggesting that disharmony might be part of the cosmology that we exist in. I am not saying right or wrong. My children live with an unconscious fear that they may not live out their natural lives. [see previous note, Woody Allen etc] I am not saying that fear is good. I am trying to find a way to deal with that anxiety. An architecture that puts its head in the sand and goes back to neoclassicism⁠, and Schinkel⁠, Lutyens⁠, and Ledoux⁠, does not seem to be a way of dealing with the present anxiety. Most of what my colleagues are doing today does not seem to be the way to go. Equally, I do not believe that the way to go, as you suggest, is to put up structures to make people feel comfortable, to preclude that anxiety. What is a person to do if he cannot react against anxiety or see it pictured in his life? After all, that is what all those evil Struwwelpeter characters are for in German fairy tales.

CA: Don’t you think there is enough anxiety at present? Do you really think we need to manufacture more anxiety in the form of buildings?

PE: Let me see if I can get it to you another way. Tolstoy wrote about the man in Russia who had so many modern conveniences that, what with adjusting the chair and the furniture and so on, everything was so comfortable and so nice and so pleasant that he lost all control of his physical and mental reality. There was nothing. What I’m suggesting is that if we make people so comfortable in these nice little structures of yours, we might lull them into thinking that everything’s all right, Jack, which it isn’t. And so the role of art or architecture might be just to remind people that everything wasn’t all right. And I’m not convinced, by the way, that it is all right.

“Epigrams on Programming”, Perlis 1982

1982-perlis.pdf: “Epigrams on Programming”⁠, Alan J. Perlis (1982-09; ⁠, ⁠, ; backlinks; similar):

[130 epigrams on computer science & technology, compiled for ACM’s SIGPLAN journal, by noted computer scientist and programming language researcher Alan Perlis⁠. The epigrams are a series of short, programming-language-neutral, humorous statements about computers and programming, distilling lessons he had learned over his career, which are widely quoted.]

8. A programming language is low level when its programs require attention to the irrelevant….19. A language that doesn’t affect the way you think about programming, is not worth knowing….54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.

15. Everything should be built top-down, except the first time….30. In programming, everything we do is a special case of something more general—and often we know it too quickly….31. Simplicity does not precede complexity, but follows it….58. Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it….65. Make no mistake about it: Computers process numbers—not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity….56. Software is under a constant tension. Being symbolic it is arbitrarily perfectible; but also it is arbitrarily changeable.

1. One man’s constant is another man’s variable. 34. The string is a stark data structure and everywhere it is passed there is much duplication of process. It is a perfect vehicle for hiding information.

36. The use of a program to prove the 4-color theorem will not change mathematics—it merely demonstrates that the theorem, a challenge for a century, is probably not important to mathematics.

39. Re graphics: A picture is worth 10K words—but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures.

48. The best book on programming for the layman is Alice in Wonderland; but that’s because it’s the best book on anything for the layman.

77. The cybernetic exchange between man, computer and algorithm is like a game of musical chairs: The frantic search for balance always leaves one of the 3 standing ill at ease….79. A year spent in artificial intelligence is enough to make one believe in God….84. Motto for a research laboratory: What we work on today, others will first think of tomorrow.

91. The computer reminds one of Lon Chaney—it is the machine of a thousand faces.

7. It is easier to write an incorrect program than understand a correct one….93. When someone says “I want a programming language in which I need only say what I wish done”, give him a lollipop….102. One can’t proceed from the informal to the formal by formal means.

100. We will never run out of things to program as long as there is a single program around.

108. Whenever 2 programmers meet to criticize their programs, both are silent….112. Computer Science is embarrassed by the computer….115. Most people find the concept of programming obvious, but the doing impossible. 116. You think you know when you can learn, are more sure when you can write, even more when you can teach, but certain when you can program. 117. It goes against the grain of modern education to teach children to program. What fun is there in making plans, acquiring discipline in organizing thoughts, devoting attention to detail and learning to be self-critical?

[Warning: There is an HTML version which is more commonly linked; however, it appears to omit a few epigrams, and to misspell others in harmful ways.]

“The Concept of a Meta-Font”, Knuth 1982

1982-knuth.pdf: “The Concept of a Meta-Font”⁠, Donald E. Knuth (1982-01; backlinks; similar):

A single drawing of a single letter reveals only a small part of what was in the designer’s mind when that letter was drawn.

But when precise instructions are given about how to make such a drawing, the intelligence of that letter can be captured in a way that permits us to obtain an infinite variety of related letters from the same specification. Instead of merely describing a single letter, such instructions explain how that letter would change its shape if other parameters of the design were changed. Thus an entire font of letters and other symbols can be specified so that each character adapts itself to varying conditions in an appropriate way.

Initial experiments with a precise language for pen motions suggest strongly that the font designer of the future should not simply design isolated alphabets; the challenge will be to explain exactly how each design should adapt itself gracefully to a wide range of changes in the specification.

This paper gives examples of a meta-font and explains the changeable parameters in its design.
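As a toy illustration of the idea (one parameterized program yielding an unbounded family of related shapes), here is a minimal sketch in Python; the glyph, parameters, and construction are invented for illustration and are not METAFONT’s actual pen-stroke language:

```python
import math

# Toy "meta-glyph": a single specification for the letter 'O' whose
# proportions and stroke weight are free parameters, so one program
# yields a whole family of related letterforms.
def letter_O(width=1.0, height=1.4, pen=0.12, steps=64):
    """Return (outer, inner) outlines of an 'O' as lists of (x, y)."""
    outer, inner = [], []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        outer.append((width / 2 * math.cos(t), height / 2 * math.sin(t)))
        # the inner contour is inset by the pen thickness
        inner.append(((width / 2 - pen) * math.cos(t),
                      (height / 2 - pen) * math.sin(t)))
    return outer, inner

# The same specification, different parameters: a light condensed 'O'
# and a bold extended one, with no new drawing work.
light = letter_O(width=0.7, pen=0.05)
bold  = letter_O(width=1.3, pen=0.25)
```

The point is Knuth’s: the program, not any single drawing, is the design.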

“Breaking Paragraphs into Lines”, Knuth & Plass 1981

1981-knuth.pdf: “Breaking paragraphs into lines”⁠, Donald E. Knuth, Michael F. Plass (1981-11; ; backlinks; similar):

This paper discusses a new approach to the problem of dividing the text of a paragraph into lines of ~equal length.

Instead of simply making decisions one line at a time, the method considers the paragraph as a whole, so that the final appearance of a given line might be influenced by the text on succeeding lines.

A system based on three simple primitive concepts called ‘boxes’, ‘glue’, and ‘penalties’ provides the ability to deal satisfactorily with a wide variety of typesetting problems in a unified framework, using a single algorithm that determines optimum breakpoints. The algorithm avoids backtracking by a judicious use of the techniques of dynamic programming⁠.

Extensive computational experience confirms that the approach is both efficient and effective in producing high-quality output. The paper concludes with a brief history of line-breaking methods, and an appendix presents a simplified algorithm that requires comparatively few resources.
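To make the single-algorithm idea concrete, here is a heavily simplified dynamic-programming sketch (fixed-width text and a squared-slack badness; the paper’s actual algorithm additionally handles stretchable glue, penalties, and hyphenation):

```python
# Globally-optimal line breaking in the spirit of Knuth-Plass:
# minimize the total squared leftover space over all lines, rather
# than greedily filling one line at a time. Assumes every word fits
# on a line by itself.
def break_lines(words, line_width):
    n = len(words)
    best = [0.0] + [float("inf")] * n   # best[j]: min cost of words[:j]
    prev = [0] * (n + 1)                # backpointers for reconstruction
    for j in range(1, n + 1):
        length = -1                     # length of words[i:j] incl. spaces
        for i in range(j - 1, -1, -1):
            length += len(words[i]) + 1
            if length > line_width:     # smaller i only makes it longer
                break
            cost = 0.0 if j == n else (line_width - length) ** 2
            if best[i] + cost < best[j]:
                best[j], prev[j] = best[i] + cost, i
    lines, j = [], n
    while j > 0:                        # walk the backpointers
        i = prev[j]
        lines.append(" ".join(words[i:j]))
        j = i
    return list(reversed(lines))

print("\n".join(break_lines(
    "the quick brown fox jumps over the lazy dog".split(), 12)))
```

Because the cost of each break is chosen against the optimum for the whole paragraph rather than the current line alone, text on later lines can pull an earlier break, which is exactly the behavior the abstract describes.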

“The Letter S”, Knuth 1980

1980-knuth.pdf: “The Letter S”⁠, Donald E. Knuth (1980-09-01):

This expository paper explains how the problem of drawing the letter ‘S’ leads to interesting problems in elementary calculus and analytic geometry.

It also gives a brief introduction to the author’s METAFONT language for alphabet design.
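For a taste of the geometry involved (a generic two-arc construction, not necessarily the one in the paper): the crudest ‘S’ is two circular arcs joined so the curve stays smooth, which already forces a small derivation. Arcs of radii $r_1, r_2$ about centers $C_1, C_2$ join smoothly with an inflection at $P$ exactly when the circles are externally tangent there:

$$|C_1 - C_2| = r_1 + r_2, \qquad P = C_1 + \frac{r_1}{r_1 + r_2}\,(C_2 - C_1),$$

so the tangent at $P$ is perpendicular to the line of centers, and the curvature flips sign across the joint, giving the S its inflection.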

The Printing of Mathematics: Aids for Authors and Editors and Rules for Compositors and Readers at the University Press, Oxford”, Chaundy et al 1954

1954-chaundy-theprintingofmathematics.pdf: The Printing of Mathematics: Aids for Authors and Editors and Rules for Compositors and Readers at the University Press, Oxford⁠, T. W. Chaundy, P. R. Barrett, Charles Batey (1954; similar):

Although mechanical composition had become firmly established in printing-houses long before 1930, no substantial attempt had been made before that time to develop the resources of the machine, or adapt the technique of the machine compositor, to the exacting demands of mathematical printing. In that year the first serious approach to the problem was made at the University Press in Oxford. The early experiments were made in collaboration with Professor G. H. Hardy and Professor R. H. Fowler, and the editors of the Quarterly Journal of Mathematics (for which these first essays were designed) and with the Monotype Corporation. Much adaptation and recutting of type faces was necessary before the new system could be brought into use. These joint preparations included the drafting of an entirely new code of ‘Rules for the Composition of Mathematics’ which has been reserved hitherto for the use of compositors at the Press and those authors and editors whose work was produced under the Press imprints. It is now felt that these rules should have a wider circulation since, in the twenty years which have intervened, they have acquired a greater importance.

…The original ‘Rules’, themselves amended by continuous trial and rich experience, are here preceded by two new chapters. The first chapter is a simple explanation of the technique of printing and is addressed to those authors who are curious to know how their writings are transformed to the orderliness of the printed page; the second chapter, begun as the offering of a mathematical author and editor to his fellow-workers in this field, culled from notes gathered over many years, has ended in closest collaboration with the reader who for as many years has reconciled the demands of author, editor, and printer; the third chapter is the aforesaid collection of ‘Rules’ and is intended for compositors, readers, authors, and editors. Appendixes follow on Handwriting, Types available, and Abbreviations. It is not expected that anyone will read this book from cover to cover, but it is hoped that both author and printer will find it an acceptable and ready work of reference.

List Of Illustrations · I. The Mechanics Of Mathematical Printing · II. Recommendations To Mathematical Authors · 1. Introduction · 2. Fractions · 3. Surds · 4. Superiors And Inferiors · 5. Brackets · 6. Embellished Characters · 7. Displayed Formulae · 8. Notation (Miscellaneous) · 9. Headings And Numbering · 10. Footnotes And References · 11. Varieties Of Type · 12. Punctuation · 13. Wording · 14. Preparing Copy · 15. Corrections Of Proofs · 16. Final Queries And Offprints · III. Rules For The Composition Of Mathematics At The University Press, Oxford · Appendixes: · A. Legible Handwriting · B. Type Specimens And List Of Special Sorts · C. Abbreviations · Index

“The Translators of The Thousand and One Nights”, Borges 1936

1936-borges-thetranslatorsofthethousandandonenights.pdf: “The Translators of The Thousand and One Nights⁠, Jorge Luis Borges (1936; ; backlinks; similar):

[18pg Borges essay on translations of the collection of Arab fairytales The Thousand and One Nights: each translator—Galland⁠, Lane⁠, Burton⁠, Littmann⁠, Mardrus—criticized the previous translator by creation.]

At Trieste, in 1872, in a palace with damp statues and deficient hygienic facilities, a gentleman on whose face an African scar told its tale—Captain Richard Francis Burton, the English consul—embarked on a famous translation of the Quitab alif laila ua laila, which the roumis know by the title The Thousand and One Nights. One of the secret aims of his work was the annihilation of another gentleman (also weather-beaten, and with a dark and Moorish beard) who was compiling a vast dictionary in England and who died long before he was annihilated by Burton. That gentleman was Edward Lane, the Orientalist, author of a highly scrupulous version of The Thousand and One Nights that had supplanted a version by Galland. Lane translated against Galland, Burton against Lane; to understand Burton we must understand this hostile dynasty.

“The Art of Spacing: A Treatise on the Proper Distribution of White Space in Typography”, Bartels 1926

1926-bartels-theartofspacing.pdf: “The Art of Spacing: A Treatise on the Proper Distribution of White Space in Typography”⁠, Samuel A. Bartels (1926-01-01; backlinks)

“Ornament and Crime”, Loos 1910

1910-loos.pdf: “Ornament and Crime”⁠, Adolf Loos (1910-01-01)

“Olivetti Valentine”, Soul 2022

“Olivetti Valentine”⁠, Mass Made Soul (; backlinks; similar):

It came with a slide-on case that ingeniously fastens to the back plate of the typewriter with rubber straps. Unfortunately, over time these would often dry out, crack, and break off. This example still has them intact, but given its age, it’s not a good idea to rely on them to carry it around!

The body is made largely of shiny ABS plastic, while the case has a heavy matte texture, and some key structural pieces, such as the ends of the platen, are of painted metal. The bright orange caps of the ribbon reels perk up the actual mechanism, something which in other typewriters is typically hidden from view…The large fold-out handle on the back of the machine (what becomes the top when carrying it in its case) overtly invites picking up the Valentine and taking it along for a joy ride, much as the handle on the first Mac signified the same intent. The case itself was custom-designed to match the aesthetic, unlike most typewriter cases of the day, which were nondescript black or gray plastic, or perhaps semi-soft vinyl. This is another example of Sottsass’ thinking about the whole user experience (as we would call it today).

…The Valentine was conceived as a competitor to the inexpensive units coming onto the market from Japan. Sottsass had some interesting ideas about how to simplify and lower the cost of the machine, such as not having lower-case letters (EVERYTHING WOULD BE SHOUTING IN UPPER CASE!), removing the bell that went “ding” at the end of the line, and using an inexpensive plastic for the case. Olivetti rejected all these as too radical, and used the higher-quality ABS plastic for the case, which pushed the price up higher than Sottsass had wanted.

“[F]or use in any place except in an office, so as not to remind anyone of the monotonous working hours, but rather to keep amateur poets company on quiet Sundays in the country or to provide a highly colored object on a table in a studio apartment. An anti-machine machine, built around the commonest mass-produced mechanism, the works inside any typewriter, it may also seem to be an unpretentious toy.”

Ettore Sottsass

“Kicks Condor”, Condor 2022

“Kicks Condor”⁠, Kicks Condor ():

[Homepage of programmer Kicks Condor; hypertext-oriented link compilation and experimental design blog.]

“The Model Book of Calligraphy (1561–1596) [image Gallery]”, Review 2022

“The Model Book of Calligraphy (1561–1596) [image gallery]”⁠, The Public Domain Review (; similar):

Pages from a remarkable book entitled Mira calligraphiae monumenta (The Model Book of Calligraphy), the result of a collaboration across many decades between a master scribe, the Croatian-born Georg Bocskay, and Flemish artist Joris Hoefnagel. In the early 1560s, while secretary to the Holy Roman Emperor Ferdinand I, Bocskay produced his Model Book of Calligraphy, showing off the wonderful range of writing styles in his repertoire. Some 30 years later (and 15 years after the death of Bocskay), Ferdinand’s grandson, who had inherited the book, commissioned Hoefnagel to add his delightful illustrations of flowers, fruits, and insects. It would prove to be, as The Getty, who now own the manuscript, comment, “one of the most unusual collaborations between scribe and painter in the history of manuscript illumination”. In addition to the amendments to Bocskay’s pages shown here, Hoefnagel also added an elaborately illustrated section on constructing the letters of the alphabet, which we featured on the site a while back.

“Alexander Graham Bell’s Tetrahedral Kites (1903–9) [image Gallery]”, Review 2022

“Alexander Graham Bell’s Tetrahedral Kites (1903–9) [image gallery]”⁠, The Public Domain Review (⁠, ; similar):

The wonderful imagery documenting Alexander Graham Bell’s experiments with tetrahedral box kites…the Scottish-born inventor Alexander Graham Bell is also noted for his work in aerodynamics, a rather more photogenic endeavour perhaps, as evidenced by the wonderful imagery documenting his experiments with tetrahedral kites.

The series of photographs depict Bell and his colleagues demonstrating and testing out a number of different kite designs, all based upon the tetrahedral structure, to whose pyramid-shaped cells Bell was drawn as they could share joints and spars and so crucially lessen the weight-to-surface area ratio.

…Bell began his experiments with tetrahedral box kites in 1898, eventually developing elaborate structures comprised of multiple compound tetrahedral kites covered in maroon silk, constructed with the aim of carrying a human through the air⁠. Named Cygnet I, II, and III (for they took off from water) [cf. AEA Cygnet], these enormous tetrahedral beings were flown both unmanned and manned during a 5-year period from 1907 until 1912.

“Oliver Byrne’s Edition of Euclid [Scans]”, Casselman 2022

“Oliver Byrne’s edition of Euclid [Scans]”⁠, Bill Casselman (; backlinks):

Online scanned edition; part of a set of Euclid editions.

TI-83

Wikipedia

Sea Ranch, California

Wikipedia

Roblox

Wikipedia

Repl.it

Wikipedia

Rams (2018 film)

Wikipedia

RPG Maker

Wikipedia

Olivetti

Wikipedia

Muji

Wikipedia

Minecraft

Wikipedia

List of lists of lists

Wikipedia

Jerrycan

Wikipedia

Industrial design

Wikipedia

HyperCard

Wikipedia

Helvetica (film)

Wikipedia

Helvetica

Wikipedia

Gary Hustwit

Wikipedia

Dieter Rams § "Good design" principles

Wikipedia

Dieter Rams

Wikipedia

Braun (company)

Wikipedia

“Tufte-CSS: Sidenotes: Footnotes and Marginal Notes”, Liepmann 2022

“Tufte-CSS: Sidenotes: Footnotes and Marginal Notes”⁠, Dave Liepmann (; backlinks; similar):

One of the most distinctive features of Tufte’s style is his extensive use of sidenotes. Sidenotes are like footnotes, except they don’t force the reader to jump their eye to the bottom of the page, but instead display off to the side in the margin. Perhaps you have noticed their use in this document already. You are very astute.

Sidenotes are a great example of the web not being like print. On sufficiently large viewports, Tufte CSS uses the margin for sidenotes, margin notes, and small figures. On smaller viewports, elements that would go in the margin are hidden until the user toggles them into view. The goal is to present related but not necessary information such as asides or citations as close as possible to the text that references them. At the same time, this secondary information should stay out of the way of the eye, not interfering with the progression of ideas in the main text.

…If you want a sidenote without footnote-style numbering, then you want a margin note. Notice there isn’t a number preceding the note. On large screens, a margin note is just a sidenote that omits the reference number. This lessens the distracting effect, taking less away from the flow of the main text, but can increase the cognitive load of matching a margin note to its referent text.
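The underlying CSS trick is roughly the following (a minimal sketch with assumed class names and magic numbers, not the verbatim tufte-css rules):

```css
/* The body column leaves a wide right margin free ... */
article { max-width: 55%; }

/* ... and notes are floated out into it with a negative margin. */
.sidenote, .marginnote {
  float: right;
  clear: right;
  margin-right: -60%;   /* push the note past the text column */
  width: 50%;
  font-size: 0.8rem;
}

/* On narrow viewports there is no margin, so notes are hidden
   until the user toggles them (checkbox hack or JavaScript). */
@media (max-width: 760px) {
  .sidenote, .marginnote { display: none; }
}
```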

“Markdeep Features: Multiple Columns”, McGuire 2022

“Markdeep Features: Multiple Columns”⁠, Morgan McGuire:

You can use the CSS columns style to make an HTML multicolumn block. Then, just use regular Markdeep within it and the browser will automatically apply your multicolumn layout… multi-column only works well if you know that you have very short sections (as in this example), or if you were planning on printing to separate pages when done.
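Concretely, the pattern is just this (property values assumed for illustration):

```html
<div style="columns: 3; column-gap: 2em; column-rule: 1px solid #ccc">

Ordinary **Markdeep** goes here; the browser flows the rendered
result into the three columns automatically.

</div>
```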

“Markdeep Features: Admonitions”, McGuire 2022

“Markdeep Features: Admonitions”⁠, Morgan McGuire (; backlinks; similar):

Admonitions are small break-out boxes with notes, tips, warnings, etc. for the reader. They begin with a title line of a pattern of three exclamation marks, an optional CSS class, and an optional title. All following lines that are indented at least three spaces are included in the body, which may include multiple paragraphs. The default stylesheet provides classes for “note” (default), “tip”, “warning”, and “error”.
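Following that description, an admonition would look something like this (class and wording invented for illustration):

```
!!! warning Here Be Dragons
   This line is indented at least three spaces, so it belongs to the
   admonition body.

   So does this second indented paragraph.
```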

Miscellaneous