[Fiction writing exercise by James Yu, using OpenAI GPT-3 via Sudowrite as a coauthor and interlocutor, to write a SF story about AIs and the Singularity. Rather than edit GPT-3 output, Yu writes most passages and alternates with GPT-3 completions. Particularly striking for the use of meta-fictional discussion, presented in sidenotes, where Yu and GPT-3 debate the events of the story: “I allowed GPT-3 to write crucial passages, and each time, I chatted with it ‘in character’, prompting it to role-play.”]
In each of these stories, colored text indicates a passage written by GPT-3. I used the Sudowrite app to generate a set of possibilities, primed with the story’s premise and a few paragraphs.
I chatted with GPT-3 about the passage, prompting it to roleplay as the superintelligent AI character in each story. I question the AI’s intent, leading to a meta-exchange where we both discover and create the fictional narrative in parallel. This kind of interaction—where an author can spontaneously talk to their characters—can be an effective tool for creative writing. And at times, it can be quite unsettling.
Can GPT-3 hold beliefs? Probably not, since it is simply a pile of word vectors. However, these transcripts could easily fool me into believing that it does.
Emily Dourish, deputy keeper of Rare Books and Early Manuscripts at the Cambridge University Library, was recently making rounds through the collection when she made a most unusual discovery. Wedged inside a Renaissance-era volume of Saint Augustine’s complete works sat a flat, decaying, dry, partially eaten snack—likely a cookie, or “some kind of fruit bun”, though Dourish admits that the treat was well past easy identification.
…It’s not the first time that Dourish or her colleagues have found foreign objects inside their rare books. Over the years, they’ve encountered flower petals, unexpected annotations, bits of medieval manuscripts within actual book bindings, and even an unknown poem by the Dutch scholar Erasmus. One particularly notable example was a key found by Dourish’s colleague in a medieval manuscript, which left a rusty impression even after its removal…Sometimes, you find a plant inside a 15th-century German Bible…or wax drippings in 16th-century Spanish prayer books.
This wonderful series of medieval cosmographic diagrams and schemas is sourced from a late 12th-century manuscript created in England. Coming to only nine folios, the manuscript is essentially a scientific textbook for monks, bringing together cosmographical knowledge from a range of early Christian writers such as Bede and Isidore, who themselves based their ideas on such classical sources as Pliny the Elder, though adapting them for their new Christian context. As for the intriguing diagrams themselves, The Walters Art Museum, which holds the manuscript and offers up excellent commentary on its contents, provides the following description:
The twenty complex diagrams that accompany the texts in this pamphlet help illustrate [the ideas], and include visualizations of the heavens and earth, seasons, winds, tides, and the zodiac, as well as demonstrations of how these things relate to man. Most of the diagrams are rotae, or wheel-shaped schemata, favored throughout the Middle Ages for the presentation of scientific and cosmological ideas because they organized complex information in a clear, orderly fashion, making this material easier to apprehend, learn, and remember. Moreover, the circle, considered the most perfect shape and a symbol of God, was seen as conveying the cyclical nature of time and the Creation as well as the logic, order, and harmony of the created universe.
[Summary of the Homeric Question that gripped Western classical literary scholarship for centuries: who wrote the Iliad/Odyssey, when, and how? They appear in Greek history out of nowhere: 2 enormously lengthy, sophisticated, beautiful, canonical, unified works that would dominate Western literature for millennia, and yet, appeared to draw on no earlier tradition nor did Homer have any earlier (non-spurious) works. How was this possible?
The iconoclastic Analysts proposed it was a fraud, and the works were pieced together later out of scraps from many earlier poets. The Unitarians pointed to the overall quality; the complex (apparently planned) structure; the disagreements of Analysts on what parts were what pieces; and the Analysts’ inability to explain many anomalies in Homer: there are passages splicing together Greek dialects, passages which were metrical only given long-obsolete Greek letters/pronunciations, and even individual words which mixed up Greek dialects! (Not that these anomalies were all that much easier to explain by the Unitarian hypothesis of a single author).
The eventual resolution relied on an old hypothesis: that Homer was in fact the product of a lost oral tradition. There was, unfortunately, no particular evidence for it, and so it never made any headway against the Analysts or Unitarians—until Milman Parry found a living oral tradition of epic poetry in the Balkans, and discovered in it all the signs of the Homeric poems, from repetitive epithets to a patchwork of dialects, and thus empirical examples of how long oral traditions could produce a work like Homer if one of them happened to get written down at some point.]
BERT, a neural network published by Google in 2018, excels in natural language understanding. It can be used for multiple different tasks, such as sentiment analysis or next sentence prediction, and has recently been integrated into Google Search. This novel model has brought a big change to language modeling as it outperformed all its predecessors on multiple different tasks. Whenever such breakthroughs in deep learning happen, people wonder how the network manages to achieve such impressive results, and what it actually learned. A common way of looking into neural networks is feature visualization. The ideas of feature visualization are borrowed from Deep Dream, where we can obtain inputs that excite the network by maximizing the activation of neurons, channels, or layers of the network. This way, we get an idea about which part of the network is looking for what kind of input.
In Deep Dream, inputs are changed through gradient descent to maximize activation values. This can be thought of as similar to the initial training process, where through many iterations, we try to optimize a mathematical equation. But instead of updating network parameters, Deep Dream updates the input sample. This leads to somewhat psychedelic but very interesting images that can reveal what kind of input these neurons react to. The original Deep Dream blogpost gives examples of this process: there, they take a randomly initialized image and use Deep Dream to transform it by maximizing the activation of the corresponding output neuron. This can show what a network has learned about different classes or for individual neurons.
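To make the mechanics concrete, here is a minimal sketch of that dreaming loop in PyTorch (not the original Deep Dream code; the model, target class, and hyperparameters are placeholders):

```python
# Minimal activation-maximization ("dreaming") sketch in PyTorch.
# Assumptions: `model` is any image classifier; `target_class` is the output
# neuron we want to excite; step count, learning rate, and image size are placeholders.
import torch

def dream(model, target_class, steps=200, lr=0.05, size=(1, 3, 224, 224)):
    model.eval()
    # Start from a random image and optimize the *input*, not the weights.
    img = torch.randn(size, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(img)
        # Gradient descent on the negative activation == maximizing it.
        loss = -logits[0, target_class]
        loss.backward()
        optimizer.step()
    return img.detach()
```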
Feature visualization works well for image-based models, but has not yet been widely explored for language models. This blogpost will guide you through experiments we conducted with feature visualization for BERT. We show how we tried to get BERT to dream of highly activating inputs, provide visual insights into why this did not work out as well as we hoped, and publish tools to explore this research direction further. When dreaming for images, the input to the model is gradually changed. Language, however, is made of discrete structures, i.e., tokens, which represent words or word-pieces. Thus, there is no such gradual change to be made…Looking at a single pixel in an input image, such a change could be gradually going from green to red. The green value would slowly go down, while the red value would increase. In language, however, we cannot slowly go from the word “green” to the word “red”, as everything in between does not make sense. To still be able to use Deep Dream, we have to utilize the so-called Gumbel-Softmax trick, which has already been employed in a paper by Poerner et al 2018. This trick was introduced by Jang et al and Maddison et al. It allows us to soften the requirement for discrete inputs, and instead use a linear combination of tokens as input to the model. To ensure that we do not end up with something crazy, it uses two mechanisms. First, it constrains this linear combination so that the linear weights sum up to one. This, however, still leaves the problem that we can end up with any linear combination of such tokens, including ones that are not close to real tokens in the embedding space. Therefore, we also make use of a temperature parameter, which controls the sparsity of this linear combination. By slowly decreasing this temperature value, we can make the model first explore different linear combinations of tokens, before deciding on one token.
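As a rough illustration of that relaxation, here is a sketch using a temperature-annealed softmax over the vocabulary (a simplification of the full Gumbel-Softmax, which also adds sampled Gumbel noise); `neuron_activation` and `embedding_matrix` are stand-ins, not BERT’s actual API:

```python
# Sketch of "dreaming" a token with a temperature-annealed softmax relaxation.
# Assumptions: `embedding_matrix` is (vocab_size x dim); `neuron_activation`
# maps a soft input embedding to the scalar activation we want to maximize.
import torch
import torch.nn.functional as F

def dream_token(neuron_activation, embedding_matrix, steps=500,
                start_temp=2.0, end_temp=0.1, lr=0.1):
    vocab_size, _ = embedding_matrix.shape
    # Unconstrained logits; the softmax guarantees the mixture weights sum to 1.
    logits = torch.zeros(vocab_size, requires_grad=True)
    optimizer = torch.optim.Adam([logits], lr=lr)
    for step in range(steps):
        # Anneal the temperature: explore soft mixtures first, then sharpen.
        t = start_temp + (end_temp - start_temp) * step / (steps - 1)
        weights = F.softmax(logits / t, dim=-1)        # soft "one-hot" over tokens
        soft_embedding = weights @ embedding_matrix    # linear combination of token embeddings
        loss = -neuron_activation(soft_embedding)      # maximize the activation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return F.softmax(logits / end_temp, dim=-1).argmax()  # the winning token id
```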
…The lack of success in dreaming words to highly activate specific neurons was surprising to us. This method uses gradient descent and seemed to work for other models (see Poerner et al 2018). However, BERT is a complex model, arguably much more complex than the models that have been previously investigated with this method.
Reviving an old General Semantics proposal: borrowing from scientific notation and using subscripts like ‘Gwern2020’ for denoting sources (like citation, timing, or medium) might be a useful trick for clearer writing, compared to omitting such information or using standard cumbersome circumlocutions.
Humanity’s rich history has left behind an enormous number of historical documents and artifacts. However, virtually none of these documents, containing stories and recorded experiences essential to our cultural heritage, can be understood by non-experts due to language and writing changes over time…This is a global problem, yet one of the most striking examples is the case of Japan. From 800 until 1900 CE, Japan used a writing system called Kuzushiji, which was removed from the curriculum in 1900 when elementary school education was reformed. Currently, the overwhelming majority of Japanese speakers cannot read texts which are more than 150 years old. The volume of these texts—comprised of over three million books in storage but only readable by a handful of specially-trained scholars—is staggering. One library alone has digitized 20 million pages from such documents. The total number of documents—including, but not limited to, letters and personal diaries—is estimated to be over one billion. Given that very few people can understand these texts, mostly those with PhDs in classical Japanese literature and Japanese history, it would be very expensive and time-consuming to finance scholars to convert these documents to modern Japanese. This has motivated the use of machine learning to automatically understand these texts.
…Given its importance to Japanese culture, the problem of using computers to help with Kuzushiji recognition has been explored extensively through the use of various methods in deep learning and computer vision. However, these models were unable to achieve strong performance on Kuzushiji recognition. This was due to inadequate understanding of Japanese historical literature in the optical character recognition (OCR) community and the lack of high-quality standardized datasets. To address this, the National Institute of Japanese Literature (NIJL) created and released a Kuzushiji dataset, curated by the Center for Open Data in the Humanities (CODH). The dataset currently has over 4000 character classes and a million character images.
KuroNet: KuroNet is a Kuzushiji transcription model that I developed with my collaborators, Tarin Clanuwat and Asanobu Kitamoto from the ROIS-DS Center for Open Data in the Humanities at the National Institute of Informatics in Japan. The KuroNet method is motivated by the idea of processing an entire page of text together, with the goal of capturing both long-range and local dependencies. KuroNet passes images containing an entire page of text through a residual U-Net architecture (FusionNet) in order to obtain a feature representation…For more information about KuroNet, please check out our paper “KuroNet: Pre-Modern Japanese Kuzushiji Character Recognition with Deep Learning”, which was accepted to the 2019 International Conference on Document Analysis and Recognition (ICDAR).
…Kaggle Kuzushiji Recognition Competition: While KuroNet achieved state-of-the-art results at the time of its development and was published in the top-tier conference on document analysis and recognition, we wanted to open this research up to the broader community. We did this partially to stimulate further research on Kuzushiji and to discover ways in which KuroNet may be deficient. Ultimately, after 3 months of competition, which saw 293 teams, 338 competitors, and 2652 submissions, the winner achieved an F1 score of 0.950. When we evaluated KuroNet on the same setup, we found that it achieved an F1 score of 0.902, which would have put it in 12th place—which, although acceptable, remains well below the best-performing solutions.
…Future Research: The work done by CODH has already led to substantial progress in transcribing Kuzushiji documents, however, the overall problem of unlocking the knowledge of historical documents is far from solved.
[A look into the signature typefaces of Evangelion: Matisse EB, mechanical compression for distorted resizing, and title cards. Covered typefaces: Matisse/Helvetica/Neue Helvetica/Times/Helvetica Condensed/Chicago/Cataneo/Futura/Eurostile/ITC Avant Garde Gothic/Gill Sans.]
Evangelion was among the first anime to create a consistent typographic identity across its visual universe, from title cards to NERV’s user interfaces. Subcontractors usually painted anything type-related in an anime by hand, so it was a novel idea at the time for a director to use desktop typesetting to exert typographic control. Although sci-fi anime tended to use either sans serifs or hand lettering that mimicked sans serifs in 1995, Anno decided to buck that trend, choosing a display serif for stronger visual impact. After flipping through iFontworks’ specimen catalog, he personally selected the extra-bold (EB) weight of Matisse (マティス), a Mincho-style serif family…A combination of haste and inexperience gave Matisse a plain look and feel, which turned out to make sense for Evangelion. The conservative skeletal construction restrained the characters’ personality so it wouldn’t compete with the animation; the extreme stroke contrast delivered the desired visual punch. Despite the fact that Matisse was drawn on the computer, many of its stroke corners were rounded, giving it a hand-drawn, fin-de-siècle quality.
…In addition to a thorough graphic identity, Evangelion also pioneered a deep integration of typography as a part of animated storytelling—a technique soon to be imitated by later anime. Prime examples are the show’s title cards and flashing type-only frames mixed in with the animation. The title cards contain nothing but crude, black-and-white Matisse EB, and are often mechanically compressed to fit into interlocking compositions. This brutal treatment started as a hidden homage to the title cards in old Toho movies from the sixties and seventies, but soon became visually synonymous with Evangelion after the show first aired. Innovating on the media of animated storytelling, Evangelion also integrates type-only flashes. Back then, these black-and-white, split-second frames were Anno’s attempt at imprinting subliminal messages onto the viewer, but have since become Easter eggs for die-hard Evangelion fans as well as motion signatures for the entire franchise.
…Established in title cards, this combination of Matisse EB and all-caps Helvetica soon bled into various aspects of Evangelion, most notably the HUD user interfaces in NERV. Although it would be possible to attribute the mechanical compression to technical limitations or typographic ignorance, its ubiquitous occurrence did evoke haste and, at times, despair—an emotional motif perfectly suited to a post-apocalyptic story with existentialist themes.
Expanding the scope of memory systems: what types of understanding can they be used for?
Improving the mnemonic medium: making better cards
Two cheers for mnemonic techniques
How important is memory, anyway?
How to invent Hindu-Arabic numerals?
Part II: Exploring tools for thought more broadly:
Why isn’t there more work on tools for thought today?
Questioning our basic premises
What if the best tools for thought have already been discovered?
Isn’t this what the tech industry does? Isn’t there a lot of ongoing progress on tools for thought?
Why not work on AGI or BCI instead?
Serious work and the aspiration to canonical content
Stronger emotional connection through an inverted writing structure
Summary and Conclusion
… in Quantum Country an expert writes the cards, an expert who is skilled not only in the subject matter of the essay, but also in strategies which can be used to encode abstract, conceptual knowledge. And so Quantum Country provides a much more scalable approach to using memory systems to do abstract, conceptual learning. In some sense, Quantum Country aims to expand the range of subjects users can comprehend at all. In that, it has very different aspirations to all prior memory systems.
More generally, we believe memory systems are a far richer space than has previously been realized. Existing memory systems barely scratch the surface of what is possible. We’ve taken to thinking of Quantum Country as a memory laboratory. That is, it’s a system which can be used both to better understand how memory works, and also to develop new kinds of memory system. We’d like to answer questions such as:
What are new ways memory systems can be applied, beyond the simple, declarative knowledge of past systems?
How deep can the understanding developed through a memory system be? What patterns will help users deepen their understanding as much as possible?
How far can we raise the human capacity for memory? And with how much ease? What are the benefits and drawbacks?
Might it be that one day most human beings will have a regular memory practice, as part of their everyday lives? Can we make it so memory becomes a choice; is it possible to in some sense solve the problem of memory?
popups.js parses an HTML document and looks for <a> links which have the docMetadata class, and the attributes data-popup-title, data-popup-author, data-popup-date, data-popup-doi, data-popup-abstract. (These attributes are expected to be populated already by the HTML document’s compiler; however, they can also be filled in dynamically. See wikipedia-popups.js for an example of a library which does this for Wikipedia links only, dynamically on page load.)
For an example of a Hakyll library which generates annotations for Wikipedia/Biorxiv/Arxiv/PDFs/arbitrarily-defined links, see LinkMetadata.hs.
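popups.js itself is JavaScript; purely as an illustration of the markup it expects, here is a Python sketch (with hypothetical title/author/date values) that pulls out the same attributes:

```python
# Illustration only: shows the kind of annotated-link markup popups.js looks
# for and how the metadata could be read. The example metadata values are hypothetical.
from bs4 import BeautifulSoup

html = '''<a href="/docs/example.pdf" class="docMetadata"
             data-popup-title="An Example Paper" data-popup-author="Doe et al"
             data-popup-date="2020" data-popup-doi="10.0000/example"
             data-popup-abstract="One-sentence abstract shown in the popup.">link</a>'''

soup = BeautifulSoup(html, "html.parser")
for a in soup.select("a.docMetadata"):
    metadata = {k: v for k, v in a.attrs.items() if k.startswith("data-popup-")}
    print(a["href"], metadata)
```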
Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.
In this article we propose “local-first software”: a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.
We survey existing approaches to data storage and sharing, ranging from email attachments to web apps to Firebase-backed mobile apps, and we examine the trade-offs of each. We look at Conflict-free Replicated Data Types (CRDTs): data structures that are multi-user from the ground up while also being fundamentally local and private. CRDTs have the potential to be a foundational technology for realizing local-first software.
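To give a flavor of the conflict-free merging idea, here is a minimal sketch of one of the simplest CRDTs, a grow-only counter; it is an illustration only, not the document-oriented CRDTs used in the authors’ prototypes:

```python
# Grow-only counter (G-Counter) CRDT sketch: each replica increments only its
# own slot, and merging takes the element-wise maximum, so replicas can sync
# in any order and still converge to the same value.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two devices edit offline, then sync; the order of merges doesn't matter.
a, b = GCounter("laptop"), GCounter("phone")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
```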
We share some of our findings from developing local-first software prototypes at Ink & Switch over the course of several years. These experiments test the viability of CRDTs in practice, and explore the user interface challenges for this new data model. Lastly, we suggest some next steps for moving towards local-first software: for researchers, for app developers, and a startup opportunity for entrepreneurs.
…in the cloud, ownership of data is vested in the servers, not the users, and so we became borrowers of our own data. The documents created in cloud apps are destined to disappear when the creators of those services cease to maintain them. Cloud services defy long-term preservation. No Wayback Machine can restore a sunsetted web application. The Internet Archive cannot preserve your Google Docs.
In this article we explored a new way forward for software of the future. We have shown that it is possible for users to retain ownership and control of their data, while also benefiting from the features we associate with the cloud: seamless collaboration and access from anywhere. It is possible to get the best of both worlds.
But more work is needed to realize the local-first approach in practice. Application developers can take incremental steps, such as improving offline support and making better use of on-device storage. Researchers can continue improving the algorithms, programming models, and user interfaces for local-first software. Entrepreneurs can develop foundational technologies such as CRDTs and peer-to-peer networking into mature products able to power the next generation of applications.
Motivation: collaboration and ownership
Seven ideals for local-first software
No spinners: your work at your fingertips
Your work is not trapped on one device
The network is optional
Seamless collaboration with your colleagues
The Long Now
Security and privacy by default
You retain ultimate ownership and control
Existing data storage and sharing models
How application architecture affects user experience
Files and email attachments
Web apps: Google Docs, Trello, Figma
Dropbox, Google Drive, Box, OneDrive, etc.
Git and GitHub
Developer infrastructure for building apps
Web app (thin client)
Mobile app with local storage (thick client)
Backend-as-a-Service: Firebase, CloudKit, Realm
Towards a better future
CRDTs as a foundational technology
Ink & Switch prototypes
How you can help
For distributed systems and programming languages researchers
Experimental Pandoc module for implementing automatic inflation adjustment of nominal date-stamped dollar or Bitcoin amounts to provide real prices; Bitcoin’s exchange rate has moved by multiple orders of magnitude over its early years (rendering nominal amounts deeply unintuitive), and this is particularly critical in any economics or technology discussion, where a nominal price from 1950 corresponds to a real 2019 price roughly 11× higher!
Years/dates are specified in a variant of my interwiki link syntax; for example: $50 or [₿0.5](₿2017-01-01), giving link adjustments which compile to something like <span class="inflationAdjusted" data-originalYear="2017-01-01" data-originalAmount="50.50" data-currentYear="2019" data-currentAmount="50,500">₿50.50<span class="math inline"><sub>2017</sub><sup>$50,500</sup></span></span>.
Dollar amounts use year, and Bitcoins use full dates, as the greater temporal resolution is necessary. Inflation rates/exchange rates are specified as constants and need to be manually updated every once in a while; if out of date, the last available rate is carried forward for future adjustments.
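A sketch of the underlying adjustment logic (not the actual Pandoc module; the constants below are placeholders standing in for the manually-maintained inflation/exchange-rate tables):

```python
# Placeholder inflation factors to 2019 dollars; the real module keeps these
# constants up to date by hand. Values here are illustrative only.
CPI_FACTOR_TO_2019 = {1950: 10.7, 2000: 1.49, 2017: 1.04, 2019: 1.00}

def adjust_to_2019(amount, year):
    """Convert a nominal dollar amount from `year` into approximate 2019 dollars."""
    years = sorted(CPI_FACTOR_TO_2019)
    # Use the nearest earlier year; a year past the table carries the last rate forward.
    earlier = [y for y in years if y <= year]
    nearest = earlier[-1] if earlier else years[0]
    return amount * CPI_FACTOR_TO_2019[nearest]

print(f"$1.00 in 1950 ≈ ${adjust_to_2019(1.00, 1950):.2f} in 2019 dollars")
```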
The most obvious option—to draw all the illustrations in Illustrator and compose the whole thing in InDesign—was promptly rejected. Geometrical constructions are not exactly the easiest thing to do in Illustrator, and no obvious way to automatically connect the main image to miniatures came to my mind. As for InDesign, although it’s very good at dealing with such visually rich layouts, it promised to scare the hell out of me with its overcrowded “Links” panel. So, without thinking twice, I decided to use other tools that I was familiar with—MetaPost, which made it relatively easy to deal with geometry, and LaTeX, which I knew could do the job. Due to some problems with MetaPost libraries for LaTeX, I replaced the latter with ConTeXt, which enjoys an out-of-the-box merry relationship with MetaPost.
… There are also initials and vignettes in the original edition. On one hand, they were reasonably easy to recreate (at least, it wouldn’t take a lot of thought to do this), but I decided to go with a more interesting (albeit hopeless) option—automatically generating the initials and vignettes with a random ornament. Not only is it fun, but also, the Russian translation would require adapting the style of the original initials to the Cyrillic script, which was not something I’d prefer to do. So, long story short, when you compile the book, a list of initial letters is written to the disk, and a separate MetaPost script can process it (very slowly) to produce the initials and vignettes. No two of them have the exact same ornament.
Vitsœ was founded in 1959 to manufacture the designs of Dieter Rams, of Braun’s golden years’ fame, a luminary designer who’s championed functional, considered design for well over 60 years. The company is best known for its production of Rams’ 606 Universal Shelving System, a do-it-all, have-forever modular system that can take the form of a few shelves or host an entire inventory of a university library. “I don’t regard this as a piece of architecture. I regard it as a way of thinking”, says Mark Adams, Vitsœ’s managing director, as he shows us around the firm’s Leamington Spa headquarters, which the company moved into in late 2017. “We developed the design with academics for years before building anything”, he says, explaining that the plan was essentially finished before it was handed to architects only at the delivery stage.
…At their new headquarters, as Mark fastidiously explains elements of the building’s construction, evidence of those decades of work becomes apparent. With restrained enthusiasm he reels off details about the beech laminate veneer used for the building’s frame that he found in a German factory six years ago; about not building to conventional sustainable building standards, which he calls “box ticking exercises”; and he later gently explains how buildings are designed the wrong way around when it comes to thermal insulation. “Ours is designed a bit like if it had a Gore-Tex jacket on: it can release moisture, but it stays insulated.” This, Mark says, is better for people’s wellbeing: “Being hotter in the summer and cooler in the winter is better for your immune system.” That expenditure of time and consideration has resulted in a building in which not a single artificial light needs to be turned on during the day—the building’s party trick, if indeed it has one. Inside, daylight is utilised and amplified, pouring in through the overhead skylights in the sawtooth roof, illuminating the beech frame in splendid fashion.
The building, which amorphously combines manufacturing and office space, along with apartments for internationally-visiting staff, and a restaurant-quality canteen, is truly a mixed-use space. Looking down to the far end, it’s not uncommon for a member of Motionhouse contemporary dance troupe to launch into view above the workstations. “I think it’s completely logical that arts and commerce should be totally interwoven”, proclaims Mark.
…Mid-way into lunch, Mark interjects, inviting us to see how many phones we can spot. We look around and see no vacant faces staring at screens, but rather groups of people chatting and eating at communal tables, while outside a game of pétanque gets underway.
Creating a faithful online reproduction of a book considered one of the most beautiful and unusual publications ever published is a daunting task. Byrne’s Euclid is my tribute to Oliver Byrne’s most celebrated publication from 1847 that illustrated the geometric principles established in Euclid’s original Elements from 300 BC.
In 1847, Irish mathematics professor Oliver Byrne worked closely with publisher William Pickering in London to publish his unique edition titled The First Six Books of the Elements of Euclid in which Coloured Diagrams and Symbols are Used Instead of Letters for the Greater Ease of Learners—or more simply, Byrne’s Euclid. Byrne’s edition was one of the first multicolor printed books and is known for its unique take on Euclid’s original work using colorful illustrations rather than letters when referring to diagrams. The precise use of colors and diagrams meant that the book was very challenging and expensive to reproduce. Little is known about why Byrne only designed 6 of the 13 books, but it could have been due to the time and cost involved…I knew of other projects like Sergey Slyusarev’s ConTeXt rendition and Kronecker Wallis’ modern redesign but I hadn’t seen anyone reproduce the 1847 edition online in its entirety and with a design true to the original. This was my goal and I knew it was going to be a fun challenge.
[Detailed discussion of how to use Adobe Illustrator to redraw the modernist art-like primary color diagrams from Byrne in scalable vector graphics (SVG) for use in interactive HTML pages, creation of a custom drop caps / initials font to replicate Byrne, his (questionable) efforts to use the ‘long s’ for greater authenticity, rendering the math using MathJax, and creating posters demonstrating all diagrams from the project for offline viewing.]
We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized.
In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST,SVHN,CIFAR-10,ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.
…We have found that by measuring the gradient noise scale, a simple statistic that quantifies the signal-to-noise ratio of the network gradients, we can approximately predict the maximum useful batch size. Heuristically, the noise scale measures the variation in the data as seen by the model (at a given stage in training). When the noise scale is small, looking at a lot of data in parallel quickly becomes redundant, whereas when it is large, we can still learn a lot from huge batches of data…We’ve found it helpful to visualize the results of these experiments in terms of a tradeoff between wall time for training and total bulk compute that we use to do the training (proportional to dollar cost). At very small batch sizes, doubling the batch allows us to train in half the time without using extra compute (we run twice as many chips for half as long). At very large batch sizes, more parallelization doesn’t lead to faster training. There is a “bend” in the curve in the middle, and the gradient noise scale predicts where that bend occurs.
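For concreteness, here is a sketch of how the “simple” noise scale can be estimated from gradient norms measured at two batch sizes, following the estimator described in the accompanying paper’s appendix; the variable names and batch sizes are placeholders:

```python
# Estimate B_simple = tr(Σ) / |G|² from flattened gradient vectors computed
# at a small and a large batch size. `grad_small`/`grad_big` are assumed to be
# gradients you have already computed and flattened into single vectors.
import torch

def simple_noise_scale(grad_small, b_small, grad_big, b_big):
    g_small_sq = grad_small.pow(2).sum()
    g_big_sq = grad_big.pow(2).sum()
    # Unbiased estimates of the true gradient norm squared and the noise trace.
    g_true_sq = (b_big * g_big_sq - b_small * g_small_sq) / (b_big - b_small)
    trace_sigma = (g_small_sq - g_big_sq) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g_true_sq  # ≈ the largest useful batch size
```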
…more powerful models have a higher gradient noise scale, but only because they achieve a lower loss. Thus, there’s some evidence that the increasing noise scale over training isn’t just an artifact of convergence, but occurs because the model gets better. If this is true, then we expect future, more powerful models to have higher noise scale and therefore be more parallelizable. Second, tasks that are subjectively more difficult are also more amenable to parallelization…we have evidence that more difficult tasks and more powerful models on the same task will allow for more radical data-parallelism than we have seen to date, providing a key driver for the continued fast exponential growth in training compute.
Words matter (or so I’m told). Some of my favorite typographic pieces are the ones that use typography not only to deliver a message but to serve as the compositional foundation that a design centers around. Letterforms are just as valuable as graphic elements as they are representations of language, and asking type to serve multiple roles in a composition is a reliable way to elevate the quality of your work…I’ve pulled out a few of my favorite designs that use type in this way and grouped them into shared themes so we can analyze the range of techniques different designers have used to let typography guide their work. Let’s dive in!…
Type Informing Grid: Using one typographic element to influence other pieces of the design
Type as Representation: Rendering type as a manifestation of an object or ideal
Reinforcing Imagery: Type can extend the impact of imagery in a design
Large Type Does Not Mean Structural Type: Big type can be lazy type (Lastly, I wanted to show a few examples that aren’t good examples of type as structure…)
…There’s something freeing about starting a design with a commitment to only using type and words to communicate effectively. I hope this essay demystifies some of the thought processes that can go into improving how you handle type in a variety of situations and leaves you with a different perspective on the pieces discussed, as well as a new toolkit of process-starters for your design work going forward.
Across scientific disciplines, there is a rapidly growing recognition of the need for more statistically robust, transparent approaches to data visualization. Complementary to this, many scientists have realized the need for plotting tools that accurately and transparently convey key aspects of statistical effects and raw data with minimal distortion.
Previously common approaches, such as plotting conditional mean or median barplots together with error-bars, have been criticized for distorting effect size, hiding underlying patterns in the raw data, and obscuring the assumptions upon which the most commonly used statistical tests are based.
Here we describe a data visualization approach which overcomes these issues, providing maximal statistical information while preserving the desired ‘inference at a glance’ nature of barplots and other similar visualization devices. These “raincloud plots” [scatterplots + smoothed histograms/density plot + box plots] can visualize raw data, probability density, and key summary statistics such as median, mean, and relevant confidence intervals in an appealing and flexible format with minimal redundancy.
In this tutorial paper we provide basic demonstrations of the strength of raincloud plots and similar approaches, outline potential modifications for their optimal use, and provide open-source code for their streamlined implementation in R, Python and Matlab. Readers can investigate the R and Python tutorials interactively in the browser using Binder by Project Jupyter.
…To remedy these shortcomings, a variety of visualization approaches have been proposed, illustrated in Figure 2, below. One simple improvement is to overlay individual observations (datapoints) beside the standard bar-plot format, typically with some degree of randomized jitter to improve visibility (Figure 2A). Complementary to this approach, others have advocated for more statistically robust illustrations such as box plots (Tukey 1970), which display the sample median alongside the interquartile range. Dot plots can be used to combine a histogram-like display of distribution with individual data observations (Figure 2B). In many cases, particularly when parametric statistics are used, it is desirable to plot the distribution of observations. This can reveal valuable information about how, e.g., some condition may alter the skewness or overall shape of a distribution. In this case, the ‘violin plot’ (Figure 2C), which displays a probability density function of the data mirrored about the uninformative axis, is often preferred (Hintze & Nelson 1998). With the advent of increasingly flexible and modular plotting tools such as ggplot2 (Wickham 2010; Wickham & Chang 2008), all of the aforementioned techniques can be combined in a complementary fashion…Indeed, this combined approach is typically desirable as each of these visualization techniques has various trade-offs.
…On the other hand, the interpretation of dot plots depends heavily on the choice of dot-bin and dot-size, and these plots can also become extremely difficult to read when there are many observations. The violin plot, in which the probability density function (PDF) of the observations is mirrored, combined with overlaid box plots, has recently become a popular alternative. This provides both an assessment of the data distribution and statistical inference at a glance (SIG) via the overlaid box plots. However, there is nothing to be gained, statistically speaking, by mirroring the PDF in the violin plot, so the redundant half violates the “data-ink ratio” principle (Tufte 1983).
To overcome these issues, we propose the use of the ‘raincloud plot’ (Neuroconscience 2018), illustrated in Figure 3: The raincloud plot combines a wide range of visualization suggestions, and similar precursors have been used in various publications (e.g., Ellison 1993, Figure 2.4; Wilson et al 2018). The plot attempts to address the aforementioned limitations in an intuitive, modular, and statistically robust format. In essence, raincloud plots combine a ‘split-half violin’ (an un-mirrored PDF plotted against the redundant data axis), raw jittered data points, and a standard visualization of central tendency (i.e., mean or median) and error, such as a boxplot. As such the raincloud plot builds on code elements from multiple developers and scientific programming languages (Hintze & Nelson 1998; Patil 2018; Wickham & Chang 2008; Wilke 2017).
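A minimal raincloud-style sketch using plain matplotlib (not the paper’s R/Python/Matlab tutorial code; the data and styling are placeholders):

```python
# Raincloud-style plot: a half-violin ("cloud"), jittered raw points ("rain"),
# and a slim boxplot for the summary statistics.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=200)  # a skewed example sample

fig, ax = plt.subplots(figsize=(6, 3))

# Cloud: an un-mirrored density, drawn on one side only.
parts = ax.violinplot(data, positions=[0], vert=False, showextrema=False)
for body in parts['bodies']:
    verts = body.get_paths()[0].vertices
    verts[:, 1] = np.clip(verts[:, 1], 0, None)  # keep only the upper half

# Rain: jittered raw observations below the cloud.
ax.scatter(data, -0.05 + rng.uniform(-0.03, 0.03, size=data.size), s=5, alpha=0.5)

# Summary: a slim horizontal boxplot between the two.
ax.boxplot(data, positions=[-0.15], vert=False, widths=0.06, showfliers=False)

ax.set_yticks([])
ax.set_xlabel("value")
plt.tight_layout()
plt.show()
```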
Etymology: From Latin et (“and”) + alii (“others”)
Phrase: et alii
And others; used of men or boys, or groups of mixed gender; masculine plural
Usage notes: In some academic contexts, it may be appropriate to use the specific Latin form that would be used in Latin text, selecting the appropriate grammatical case. The abbreviation “et al.” finesses the need for such fastidiousness.
Increasingly, scientific communications are recorded and made available online. While researchers carefully draft the words they use, the quality of the recording is at the mercy of technical staff. Does it make a difference?
We presented identical conference talks (Experiment 1) [n = 97 / k = 2] and radio interviews from NPR’s Science Friday (Experiment 2) [n = 99 / k = 2] in high or low audio quality and asked people to evaluate the researcher and the research they presented.
Despite identical content, people evaluated the research and researcher less favorably when the audio quality was low, suggesting that audio quality can influence impressions of science.
Rams is a documentary portrait of Dieter Rams, one of the most influential designers alive, and a rumination on consumerism, sustainability, and the future of design…In 2008, Gary interviewed Dieter for his documentary Objectified, but was only able to share a small piece of his story in that film. Dieter, who is now 86, is a very private person; however, Gary was granted unprecedented access to create the first feature-length documentary about his life and work.
Rams includes in-depth conversations with Dieter, and deep dives into his philosophy, his process, and his inspirations. But one of the most interesting parts of Dieter’s story is that he now looks back on his career with some regret. “If I had to do it over again, I would not want to be a designer”, he’s said. “There are too many unnecessary products in this world.” Dieter has long been an advocate for the ideas of environmental consciousness and long-lasting products. He’s dismayed by today’s unsustainable world of over-consumption, where “design” has been reduced to a meaningless marketing buzzword.
Rams is a design documentary, but it’s also a rumination on consumerism, materialism, and sustainability. Dieter’s philosophy is about more than just design, it’s about a way to live. It’s about getting rid of distractions and visual clutter, and just living with what you need. The film features original music by pioneering musician and producer Brian Eno.
[He measures 21 keyboard latencies using a logic analyzer, finding a range of 15–60ms (!), representing a waste of a large fraction of the available ~100–200ms latency budget before a user notices and is irritated (“the median keyboard today adds as much latency as the entire end-to-end pipeline of a fast machine from the 70s.”). The latency estimates are surprising, and do not correlate with advertised traits. They simply have to be measured empirically.]
We can see that, even with the limited set of keyboards tested, there can be as much as a 45ms difference in latency between keyboards. Moreover, a modern computer with one of the slower keyboards attached can’t possibly be as responsive as a quick machine from the 70s or 80s because the keyboard alone is slower than the entire response pipeline of some older computers. That establishes the fact that modern keyboards contribute to the latency bloat we’ve seen over the past forty years…Most keyboards add enough latency to make the user experience noticeably worse, and keyboards that advertise speed aren’t necessarily faster. The two gaming keyboards we measured weren’t faster than non-gaming keyboards, and the fastest keyboard measured was a minimalist keyboard from Apple that’s marketed more on design than speed.
This essay is about the contribution that typing manuals and typists have made to the history of graphic language and communication design, and the role that typewriter composition has played in typographic education and design practice, particularly in the 1960s and 1970s. The limited technical capabilities of typewriters are discussed in relation to the rules in typing manuals for articulating and organizing the structure of text. Such manuals were used to train typists who went on to produce documents of considerable complexity within what typographers would consider to be minimal means in terms of flexibility in the use of letterforms and space.
…Typing manuals and the relentless repetition of typing exercises in class formed the basis of this training, and generations of office workers acquired considerable knowledge about the visual organization of often complex documents. In the context of the history of typography, typewriter operators (typists, as they became known) were designing within ‘minimal means’. They worked with a restricted range of letterforms and character sets, and with limited flexibility for manipulating vertical and horizontal space. The documents they made—in their material form—were true to the limitations of the machines that made them. Designers and educators also exploited the characteristics (and limitations) of typewriters in their work; in the 1950s and 1960s especially, typewriters were regarded by designers as one of the tools of the trade, though perhaps, as Ken Garland has noted, ‘a design tool that is not usually regarded as such’. Design educators such as Norman Potter and Michael Twyman used the limitations of typewriter composition to good effect in teaching typography. And because typing manuals were concerned with the kind of document that Herbert Spencer, in 1952, called ‘utility’ printing (‘technical catalogues, handbooks, timetables, stationery and forms, the primary purpose of which is to inform’), the typewriter as the means of production for such documents has a place in the history of document design and, by inference, of information design. ‘Typewriter composition’ was prevalent in the printing trade in the 1960s and 1970s, and many typists who trained on mechanical typewriters went on to become ‘compositors’, working with electric machines such as the IBM 72, the IBM Executive, the Justowriter and later models of the Varityper. In this context typists assumed the role of compositor, applying rules acquired through typing training to typesetting in books.
…Typewritten material, on the whole, was monochrome, but some document types typically required the use of a second colour to fulfil a particular function. Typing in colours other than black involved either the use of coloured carbon paper, special 2-colour or 3-colour attachments, or a bi-chrome or tri-chrome ribbon. Red, the preferred second colour, was recommended for emphasis and particular words in a text, and was referred to in Pitman’s Typewriter Manual: A practical guide to all classes of typewriting work in 1897 [1909 edition] as ‘variegated typewriting’. In the typing of plays, for example, underlining in red was prescribed to denote non-spoken elements, such as stage directions, as shown in the illustration Figure 6. However, as affirmed in Pitman’s Typewriter Manual, in recognition of the fact it was time-consuming to do, typists were encouraged to do the red ruling with a pen or pencil—a pragmatic solution. Later typing manuals proposed that when a typewriter was fitted with a red-black bi-chrome ribbon, the non-speaking parts in a play should be typed in red (with no underlining)—an example of simplicity of operation changing conventional practice.
Users are more tolerant of minor usability issues when they find an interface visually appealing. This aesthetic-usability effect can mask UI problems and can prevent issue discovery during usability testing. Identify instances of the aesthetic-usability effect in your user research by watching what your users do, as well as listening to what they say.
It’s a familiar frustration to usability-test moderators: You watch a user struggle through a suboptimal UI, encountering many errors and obstacles. Then, when you ask the user to comment on her experience, all she can talk about is the site’s great color scheme:
During usability testing, one user encountered many issues while shopping on the FitBit site, ranging from minor annoyances in the interaction design to serious flaws in the navigation. She was able to complete her task, but with difficulty. However, in a post-task questionnaire, she rated the site very highly in ease of use. “It’s the colors they used”, she said. “Looks like the ocean, it’s calm. Very good photographs.” The positive emotional response caused by the aesthetic appeal of the site helped mask its usability issues.
Instances like this are often the result of the aesthetic-usability effect.
Definition: The aesthetic-usability effect refers to users’ tendency to perceive attractive products as more usable. People tend to believe that things that look better will work better—even if they aren’t actually more effective or efficient.
Photosynthesis is the basis of primary productivity on the planet. Crop breeding has sustained steady improvements in yield to keep pace with population growth. Yet these advances have not resulted from improving the photosynthetic process per se but rather from altering the way carbon is partitioned within the plant. Mounting evidence suggests that the rate at which crop yields can be boosted by traditional plant breeding approaches is wavering, and they may reach a “yield ceiling” in the foreseeable future. Further increases in yield will likely depend on the targeted manipulation of plant metabolism. Improving photosynthesis poses one such route, with simulations indicating it could have a significant transformative influence on enhancing crop productivity. Here, we summarize recent advances in alternative approaches for the manipulation and enhancement of photosynthesis and their possible application for crop improvement.
Born in 1897, Sandberg studied art in Amsterdam before travelling around Europe where he met and learned from printers, artists and teachers, including Johannes Itten, Naum Gabo and Otto Neurath. Upon returning to Amsterdam he became involved with the Stedelijk Museum, initially as a designer and later as curator of modern art from 1937 to 1941. It is after this period that the Second World War became a defining factor in his life. I have, in previous drafts of this piece, tried to summarise his involvement in the conflict, but he did more than is possible to do justice to here. Suffice to say, many items in the Stedelijk collection, not to mention Rembrandt’s The Night Watch and the collection of Van Gogh’s heirs, probably owe their survival to his resistance efforts. Others, such as Simon Garfield, have written about his wartime achievements. I recommend this piece by Mafalda Spencer, my old tutor and daughter of Herbert Spencer, who was one of Sandberg’s pen pals. (Their correspondence, which Mafalda has inherited, is featured in this exhibition.)
After the war Sandberg was made director of the Stedelijk and oversaw hundreds of exhibitions during his 18 years in the role. Throughout this period he carried on designing the catalogues and posters that feature in this exhibition…Among Sandberg’s wartime experiences was the period he spent on the run from the Nazis, from 1943 until the end of the war. While in hiding, Sandberg wanted to occupy himself and decided to create a series of small booklets, each ranging from 20 to 60 pages. It is in making these that he seems to have refined what would later be the style he used for the majority of his design work at the Stedelijk. The booklets, which he called experimenta typographica, were filled with illustrations of inspirational quotes, which Sandberg took from great thinkers and other designers…The posters don’t really establish any sense of a coherent identity in the way that a modern designer might be driven to do these days. There isn’t really any consistency in layout; the typefaces chosen to spell out the Stedelijk’s name vary widely; and while the use of red in each poster is a constant, it’s not always the same shade. But they do fulfil the criteria for Stedelijk posters of the time that Sandberg himself drew up:
a poster has to be joyous
red has to be in every poster
a poster has to provoke a closer look, otherwise it doesn’t endure
with a respect for society, designer and director both are responsible for the street scene, a poster does not only have to revive the street, it also has to be human
[Discussion with screenshots of the classic Ridley Scott SF movie Blade Runner, which employs typography to disconcert the viewer, with unexpected choices, random capitalization and small caps, corporate branding/advertising, and the mashed-up creole multilingual landscape of noir cyberpunk LA (plus discussion of the buildings and sets, and details such as call costs being correctly inflation-adjusted).]
[Where did our fonts come from? Your standard Latin alphabet can be written in many styles, so where did the regular upright sort of font (which you are reading right now) come from? Boardley traces the evolution of the Roman font from its origins in Imperial Roman styles through to the Renaissance, where it was perfectly placed for the print revolution and canonization as the Western font. Early printers, working in a difficult business, would invent the new typefaces they needed, modeled on humanist scribes’ Roman script, refining the letters into what we know today, including such variants as the lowercase ‘g’ (which looks so different from the handwritten letter).]
The Renaissance effected change in every sphere of life, but perhaps one of its most enduring legacies is the letterforms it bequeathed to us. But their heritage reaches far beyond the Italian Renaissance to antiquity. In ancient Rome, the Republican and Imperial capitals were joined by rustic capitals, square capitals (Imperial Roman capitals written with a brush), uncials, and half-uncials, in addition to a more rapidly penned cursive for everyday use. From those uncial and half-uncial forms evolved a new formal book-hand practiced in France, which spread rapidly throughout medieval Europe.
…From the second quarter of the sixteenth century, roman types, hitherto reserved almost exclusively for classical and humanist literature, began to make inroads into those genres that had traditionally been printed in gothic types. Especially from the 1520s in Paris, we witness books of hours and even Psalters set in roman types.
Two Latin alphabets inspired by both antique and medieval antecedents. Majuscules first incised in stone more than two millennia ago, married to minuscule letterforms that evolved from manuscript hands of the eighth and ninth centuries. The Carolingian or Caroline minuscule joined forces with antique Roman square capitals at the very beginning of the fifteenth century—a conjunction willed by the great Florentine humanists; their forms first wrought in metal by two German immigrants at Subiaco and Rome, honed by a Frenchman, and consummated at the hands of Griffo of Bologna and Aldus the Venetian. A thousand years after the fall of the Roman Empire, the romans returned and re-conquered—yet another thing the Romans have done for us.
[JPL-sponsored Art Deco/WPA poster series with the concept of advertising travel in the Solar System & to exoplanets; public domain & free to download/print.]
A creative team of visual strategists at JPL, known as “The Studio”, created the poster series, which is titled “Visions of the Future.” Nine artists, designers, and illustrators were involved in designing the 14 posters, which are the result of many brainstorming sessions with JPL scientists, engineers, and expert communicators. Each poster went through a number of concepts and revisions, and each was made better with feedback from the JPL experts.
David Delgado, creative strategy: “The posters began as a series about exoplanets—planets orbiting other stars—to celebrate NASA’s study of them. (The NASA program that focuses on finding and studying exoplanets is managed by JPL.) Later, the director of JPL was on vacation at the Grand Canyon with his wife, and they saw a similarly styled poster that reminded them of the exoplanet posters. They suggested it might be wonderful to give a similar treatment to the amazing destinations in our solar system that JPL is currently exploring as part of NASA. And they were right! The point was to share a sense of things on the edge of possibility that are closely tied to the work our people are doing today. The JPL director has called our people ‘architects of the future’. As for the style, we gravitated to the style of the old posters the WPA created for the national parks. There’s a nostalgia for that era that just feels good.”
Joby Harris, illustrator: “The old WPA posters did a really great job delivering a feeling about a far-off destination. They were created at a time when color photography was not very advanced, in order to capture the beauty of the national parks from a human perspective. These posters show places in our solar system (and beyond) that likewise haven’t been photographed on a human scale yet—or in the case of the exoplanets might never be, at least not for a long time. It seemed a perfect way to help people imagine these strange, new worlds.”
David Delgado: “The WPA poster style is beloved, and other artists have embraced it before us. Our unique take was to take one specific thing about the place and focus on the science of it. We chose exoplanets that had really interesting, strange qualities, and everything about the poster was designed to amplify the concept. The same model guided us for the posters that focus on destinations in the solar system.”
Lois Kim, typography: “We worked hard to get the typography right, since that was a very distinctive element in creating the character of those old posters. We wanted to create a retro-future feel, so we didn’t adhere exactly to the period styles, but they definitely informed the design. The Venus poster has a very curvy, flowy font, for example, to evoke a sense of the clouds.”
Objectives: This study classified and quantified the variation in fractional flow reserve (FFR) due to fluctuations in systemic and coronary hemodynamics during intravenous adenosine infusion.
Background: Although FFR has become a key invasive tool to guide treatment, questions remain regarding its repeatability and stability during intravenous adenosine infusion because of systemic effects that can alter driving pressure and heart rate.
Methods: We reanalyzed data from the VERIFY (VERification of Instantaneous Wave-Free Ratio and Fractional Flow Reserve for the Assessment of Coronary Artery Stenosis Severity in EverydaY Practice) study, which enrolled consecutive patients who were infused with intravenous adenosine at 140 μg/kg/min and measured FFR twice. Raw phasic pressure tracings from the aorta (Pa) and distal coronary artery (Pd) were transformed into moving averages of Pd/Pa. Visual analysis grouped Pd/Pa curves into patterns of similar response. Quantitative analysis of the Pd/Pa curves identified the “smart minimum” FFR using a novel algorithm, which was compared with human core laboratory analysis.
Results: A total of 190 complete pairs came from 206 patients after exclusions. Visual analysis revealed 3 Pd/Pa patterns: “classic” (sigmoid) in 57%, “humped” (sigmoid with superimposed bumps of varying height) in 39%, and “unusual” (no pattern) in 4%. The Pd/Pa pattern repeated itself in 67% of patient pairs. Despite variability of Pd/Pa during the hyperemic period, the “smart minimum” FFR demonstrated excellent repeatability (bias −0.001, SD 0.018, paired p = 0.93, r² = 98.2%, coefficient of variation = 2.5%). Our algorithm produced FFR values not statistically significantly different from human core laboratory analysis (paired p = 0.43 vs. VERIFY; p = 0.34 vs. RESOLVE).
Conclusions: Intravenous adenosine produced 3 general patterns of Pd/Pa response, with associated variability in aortic and coronary pressure and heart rate during the hyperemic period. Nevertheless, FFR—when chosen appropriately—proved to be a highly reproducible value. Therefore, operators can confidently select the “smart minimum” FFR for patient care. Our results suggest that this selection process can be automated while remaining comparable to human core laboratory analysis.
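The abstract names but does not define the “smart minimum” algorithm. Purely as a minimal sketch of the general idea (smooth the beat-to-beat Pd/Pa ratio, then take the minimum of the smoothed curve rather than the raw instantaneous minimum), with a hypothetical function name, sampling rate, and window length not taken from the paper:

```python
import numpy as np

# Minimal sketch only: the paper's actual "smart minimum" algorithm is
# not given in the abstract. Idea illustrated: smooth the Pd/Pa ratio
# with a moving average, then take the minimum of the smoothed curve
# instead of the noisy instantaneous minimum. fs and window_s are
# hypothetical parameters, not values from the study.
def smart_minimum_ffr(pa, pd, fs=100, window_s=2.0):
    """pa, pd: sampled aortic / distal coronary pressures (mmHg)."""
    ratio = np.asarray(pd, dtype=float) / np.asarray(pa, dtype=float)
    n = max(1, int(window_s * fs))   # samples per averaging window
    smoothed = np.convolve(ratio, np.ones(n) / n, mode="valid")
    return float(smoothed.min())
```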
Statistical methodology has played a key role in scientific animal breeding. Approximately one hundred years of statistical developments in animal breeding are reviewed. Some of the scientific foundations of the field are discussed, and many milestones are examined from historical and critical perspectives. The review concludes with a discussion of some future challenges and opportunities arising from the massive amount of data generated by livestock, plant, and human genome projects.
In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers’ challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.
…Patterns: In our observational studies we observed all of the designers initially sketching visual representations of data on paper, on a whiteboard, or in Illustrator. In these sketches, the designers would first draw high-level elements of their design such as the layout and axes, followed by a sketching in of data points based on their perceived ideas of data behavior (P1). An example is shown in Figure 3. The designers often relied on their understanding of the semantics of data to infer how the data might look, such as F1 anticipating that Fitbit data about walking would occur in short spurts over time while sleep data would span longer stretches. However, the designers’ inferences about data behavior were often inaccurate (P2). This tendency was acknowledged by most of the designers: after her inference from data semantics, F1 indicated that to work effectively, she would need “a better idea of the behavior of each attribute.” Similarly, B1 did not anticipate patterns in how software bugs are closed, prompting a reinterpretation and redesign of her team’s visualization much later in the design process once data behavior was explicitly explored. In the time travel studies, T3 misinterpreted one trip, and the misinterpretation later caused a complete redesign.
Furthermore, the designers’ inferences about data structure were often separated from the actual data (P3). In brainstorming sessions at the hackathon, the designers described data that would be extremely difficult or impossible to gather or derive. In working with the HBO dataset, H1 experienced frustration after he spent time writing a formula in Excel only to realize that he was recreating data he had already seen in the aggregate table…Not surprisingly, the amount of data exploration and manipulation was related to the level of a designer’s experience working with data (P4).
Braun’s first pocket transistor radio, designed by Dieter Rams and produced in 1958. An identical-looking model, the T31, was introduced in 1960 and employed seven transistors.
Much has been made in recent years about the Braun T3 having been the design inspiration for the original Apple iPod—that’s pretty clear by now: Apple’s chief industrial designer Jony Ive is well known for his love of Dieter Rams’s designs, and a number of his Apple product designs bear unmistakable direct influences from classic Braun product designs.
Apple keeps doing things in the Mac OS that leave the user-experience (UX) community scratching its collective head, things like hiding the scroll bars and placing invisible controls inside the content region of windows on computers.
Apple’s mobile devices are even worse: It can take users upwards of 5 seconds to accurately drop the text pointer where they need it, but Apple refuses to add the arrow keys that have belonged on the keyboard from day one.
Apple’s strategy is exactly right—up to a point:
Apple’s decisions may look foolish to those schooled in UX, but balance that against the fact that Apple consistently makes more money than the next several leaders in the industry combined.
While it’s true Apple is missing something—arrow keys—we in the UX community are missing something, too: Apple’s razor-sharp focus on a user many of us often fail to even consider: The potential user, the buyer. During the first Jobsian era at Apple, I used to joke that Steve Jobs cared deeply about Apple customers from the moment they first considered purchasing an Apple computer right up until the time their check cleared the bank.
…What do most buyers not want? They don’t want to see all kinds of scary-looking controls surrounding a media player. They don’t want to see a whole bunch of buttons they don’t understand. They don’t want to see scroll bars. They do want to see clean screens with smooth lines. Buyers want to buy Ferraris, not tractors, and that’s exactly what Apple is selling.
… Let me offer two examples of Apple objects that aid in selling products, but make life difficult for users thereafter.
The Apple Dock: The Apple Dock is a superb device for selling computers for pretty much the same reasons that it fails miserably as a day-to-day device: A single glance at the Dock lets the potential buyer know that this is a computer that is beautiful, fun, approachable, easy to conquer, and you don’t have to do a lot of reading. Of course, not one of these attributes is literally true, at least not if the user ends up exploiting even a fraction of the machine’s potential, but such is the nature of merchandising, and the Mac is certainly easier than the competition.
The real problem with the Dock is that Apple simultaneously stripped out functionality that was far superior, though less flashy, when they put the Dock in.
Invisible Scroll Bars:
“Gee, the screen looks so clean! This computer must be easy to use!” So goes the thinking of the buyer when seeing a document open in an Apple store, exactly the message Apple intends to impart. The problem right now is that Apple’s means of delivering that message is actually making the computer less easy to use!
…the scroll bar has become a vital status device as well, letting you know at a glance the size of and your current position within a document…Hiding the scroll bar, from a user’s perspective, is madness. If the user wants to actually scroll, it’s bad enough: He or she is now forced to use a thumbwheel or gesture to invoke scrolling, as the scroll bar is no longer even present. However, if the user simply wants to see their place within the document, things can quickly spiral out of control: The only way to get the scroll bar to appear is to initiate scrolling, so the only way to see where you are right now in a document is to scroll to a different part of the document! It may only require scrolling a line or two, but it is still crazy on the face of it! And many windows contain panels with their own scroll bars as well, so trying to trick the correct one into turning on, if you can do so at all (good luck with Safari!), can be quite a challenge…(The scroll bars, even when turned on, are hard to see, with their latest mandatory drab gray replacing bright blue, and are now so thin they take around twice as long to target as earlier scroll bars. When a company ships products either before user testing or after ignoring the results of that testing, both their product and their users suffer.)
…Industrial design: Borrow the aesthetic, ignore the limitation
While Apple has copied over the aesthetics of industrial design into the software world, they have also copied over its limitation: Whether it be a tractor, Ferrari, or electric toaster, that piece of hardware, in the absence of upgradeable software, will look and act the same the first time you use it as the thousandth time. Software doesn’t share that natural physical limitation, and Apple must stop acting as though it does.
“Dear Valentine, this is to tell you that you are my friend as well as my Valentine, and that I intend to write you lots of letters”, says the user guide of the familiar red typewriter. This purposefully heartwarming greeting sets the tone for Ettore Sottsass’ typewriter. The blood-red Valentine was a fun, light-hearted and smooth-operating symbol of the 1960s Pop era, and its use of bright, playful casing for a piece of traditional office equipment was arguably a precursor to Apple’s 1998 Bondi Blue iMac. “When I was young, all we ever heard about was functionalism, functionalism, functionalism”, said Sottsass. “It’s not enough. Design should also be sensual and exciting.”
The Valentine—created for the Italian brand Olivetti—was designed in collaboration with the British designer Perry King and entered production in 1969. It was not a commercial success. The Valentine was technically mediocre, expensive and failed to sell to a mass audience, yet still became a design classic. Valentines can be found in the permanent collections of London’s Design Museum and MoMA, the typewriter being accepted into the latter just two years after its launch. The product’s critical success was unhindered by its functional limitations because its design focused as much on its emotional connection to users as it did on practical ease of use.
Sottsass set out his stall early on. One of the initial advertising campaigns for the design featured posters by the graphic designer and founder of New York magazine, Milton Glaser. Glaser used a detail of Piero di Cosimo’s Renaissance painting A Satyr Mourning over a Nymph. In the poster, the Valentine typewriter is placed next to a red setter, an elegant, rambunctious dog; man’s best friend. The suggestion was that Sottsass’ portable accessory could be just as loyal and convivial. How the product performed was arguably irrelevant. It was about how it made you feel.
The Valentine was available in white, green and blue, but its most famous form was red: lipstick-bright ABS plastic casing, with black plastic keys and white lettering. “Every colour has a history”, said Sottsass. “Red is the colour of the Communist flag, the colour that makes a surgeon move faster and the colour of passion.”
The distinctive colour was calculated to bring vibrancy and fun into the office world of the 1960s. Sottsass said that the Valentine “was invented for use any place except in an office, so as not to remind anyone of monotonous working hours, but rather to keep amateur poets company on quiet Sundays in the country or to provide a highly coloured object on a table in a studio apartment.” The ideas that later manifested themselves in Sottsass’ 1970s Memphis movement—the Milan design group known for its brightly coloured postmodern furniture—were already evident in the Valentine typewriter. Sottsass gave a standardised piece of office equipment personality.
Although the designer would later dismiss the Valentine—comparing it to “a girl wearing a very short skirt and too much make-up”—its design was an elegant summation of his belief that successful, long-lasting product design was not solely connected to performance, but rather owed as much to the emotional force of a design.
Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of “warm” and “cool”. This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L-cone versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.
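The abstract does not give the paper’s exact definition of cone contrast; assuming the common Weber form (contrast relative to the adapting background), the two-way split along the fault line can be written as a sketch:

```latex
% Hedged sketch: Weber cone contrast is assumed here; the paper's
% exact normalization is not given in the abstract (amsmath needed).
\[
  C_{L-M} \;=\; \frac{\Delta L}{L_{\mathrm{bg}}} - \frac{\Delta M}{M_{\mathrm{bg}}},
  \qquad
  \operatorname{sign}(C_{L-M}) =
  \begin{cases}
    +1 & \text{``warm'' side of the fault line}\\
    -1 & \text{``cool'' side}
  \end{cases}
\]
```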
Just as kaput stood for a section or a paragraph, so its diminutive capitulum, or ‘little head’, denoted a chapter. The general Roman preference for the letter ‘C’ had all but seen off the older Etruscan ‘K’ by 300 BC,15 but ‘K’ for kaput persisted some time longer in written documents. By the 12th century, though, ‘C’ for capitulum had overtaken ‘K’ in this capacity as well.16 The use of capitulum in the sense of a chapter of a written work was so closely identified with ecclesiastical documents that it came to be used in church terminology in a bewildering number of ways: monks went ad capitulum, ‘to the chapter (meeting)’, to hear a chapter from the book of their religious orders, or ‘chapter-book’, read out in the ‘chapter room’.17
Monastic scriptoria worked on the same principle as factory production lines, with each stage of book production delegated to a specialist. A scribe would copy out the body of the text, leaving spaces for a ‘rubricator’ to later embellish the text by adding versals (large, elaborate initial letters), headings and other section marks as required. Taken from the Latin rubrico, ‘to colour red’, rubricators often worked in contrasting red ink, which not only added a decorative flourish but also guided the eye to important divisions in the text.18 In the hands of the rubricators, ‘C’ for capitulum came to be accessorized by a vertical bar, as were other litterae notabiliores [notable letters: “enlarged letter within a text, designed to clarify the syntax of a passage”] in the fashion of the time; later, the resultant bowl was filled in and so ‘¢’ for capitulum became the familiar reversed-P of the pilcrow.16
As the capitulum’s appearance changed, so too did its usage. At first used only to mark chapters, it started to pepper texts as a paragraph or even sentence marker so that it broke up a block of running text into meaningful sections as the writer saw fit. ¶ This style of usage yielded very compact text,19 harking back, perhaps, to the still-recent practice of scriptio continua [un-punctuated spaceless writing]. Ultimately, though, the concept of the paragraph overrode the need for efficiency and became so important as to warrant a new line—prefixed with a pilcrow, of course, to introduce it.20
The Great Firewall of China. A massive system of centralized censorship purging the Chinese version of the Internet of all potentially subversive content. Generally agreed to be a great technical achievement and political success even by the vast majority of people who find it morally abhorrent. I spent a few days in China. I got around it at the Internet cafe by using a free online proxy. Actual Chinese people have dozens of ways of getting around it with a minimum of technical knowledge or just the ability to read some instructions.
The Chinese government isn’t losing any sleep over this (although they also don’t lose any sleep over murdering political dissidents, so maybe they’re just very sound sleepers). Their theory is that by making it a little inconvenient and time-consuming to view subversive sites, they will discourage casual exploration. No one will bother to circumvent it unless they already seriously distrust the Chinese government and are specifically looking for foreign websites, and these people probably know what the foreign websites are going to say anyway.
Think about this for a second. The human longing for freedom of information is a terrible and wonderful thing. It delineates a pivotal difference between mental emancipation and slavery. It has launched protests, rebellions, and revolutions. Thousands have devoted their lives to it, thousands of others have even died for it. And it can be stopped dead in its tracks by requiring people to search for “how to set up proxy” before viewing their anti-government website.
…But these trivial inconveniences have major policy implications. Countries like China that want to oppress their citizens are already using “soft” oppression to make it annoyingly difficult to access subversive information. But there are also benefits for governments that want to help their citizens.
This research presents the findings of a study that analyzed words of estimative probability in the key judgments of National Intelligence Estimates from the 1950s through the 2000s. The research found that of the 50 words examined, only 13 were statistically significant. Furthermore, interesting trends emerge when the words are broken down into English modals, terminology that conveys analytical assessments, and words employed by the National Intelligence Council as of 2006. One of the more intriguing findings is that the word “will” has been by far the most popular with analysts, registering over 700 occurrences throughout the decades; a word of such certainty is problematic, however, since intelligence should never deal in 100% certitude. The relatively low occurrence and wide variety of word usage across the decades demonstrates a real lack of consistency in the way analysts have conveyed assessments over the past 58 years. Finally, the researcher suggests the Kesselman List of Estimative Words for use in the IC. The word list takes into account the literature-review findings as well as the results of this study in equating odds with verbal probabilities.
[Rachel’s lit review, for example, makes for very interesting reading. She has done a thorough search of not only the intelligence but also the business, linguistics and other literatures in order to find out how other disciplines have dealt with the problem of “What do we mean when we say something is ‘likely’…” She uncovered, for example, that, in medicine, words of estimative probability such as “likely”, “remote” and “probably” have taken on more or less fixed meanings due primarily to outside intervention or, as she put it, “legal ramifications”. Her comparative analysis of the results and approaches taken by these other disciplines is required reading for anyone in the Intelligence Community trying to understand how verbal expressions of probability are actually interpreted. The NIC’s list only became final in the last several years, so it is arguable whether this list of nine words really captures the breadth of estimative word usage across the decades. Or rather, it would be arguable, if this chart didn’t make it crystal clear that the Intelligence Community has really relied on just two words, “probably” and “likely”, to express its estimates of probabilities for the last 60 years. All other words are used rarely or not at all.
Based on her research of what works and what doesn’t and which words seem to have the most consistent meanings to users, Rachel even offers her own list of estimative words along with their associated probabilities:
Almost certain: 86–99%
Highly likely: 71–85%
Chances a little better [or less] than even: 46–55%
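For concreteness, the three ranges quoted above can be written as a small lookup table; this sketch covers only the fragment of the Kesselman List reproduced here, not the full list:

```python
# Only the three entries quoted above are included; the full Kesselman
# List contains further words and ranges not reproduced here.
KESSELMAN_PARTIAL = {
    "almost certain": (0.86, 0.99),
    "highly likely": (0.71, 0.85),
    "chances a little better [or less] than even": (0.46, 0.55),
}

def word_for_probability(p: float):
    """Map a numeric probability to its estimative word, if the quoted
    (partial) ranges cover it; otherwise return None."""
    for word, (lo, hi) in KESSELMAN_PARTIAL.items():
        if lo <= p <= hi:
            return word
    return None

assert word_for_probability(0.9) == "almost certain"
```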
[Description of the most advanced mechanical typesetting system for the challenging task of typesetting mathematics. To provide the typographic quality of hand-set math but at an affordable cost, the Monotype corporation made a huge investment post-WWII into a mechanical typesetting system which would encode every mathematical equation into 4 horizontal ‘lines’ into which could be slotted entries from a vast new family of fonts & symbols, all tweaked to fit in various positions, which would then be spat out by the machine into a single solid lead piece which could be combined with the rest to form a single page, allowing a skilled operator to rapidly ‘type’ his way through a page of math to yield a beautiful custom output without endlessly tedious hand-arranging lots of little metal bits.]
[Originally a draft of the sparkline chapter (“Intense, Simple, Word-Sized Graphics”) of Edward Tufte’s Beautiful Evidence (2005), this page is a compilation of sparkline examples, links to sparkline software tools, and debates over how best to use sparklines to graph statistical data.]
Apocalyptic envisionings of the historical process, whether philosophical, pseudo-scientific or incarnate as chiliastic movements have always been, and in all likelihood will continue to be, an integral dimension in the unfolding of the Euroamerican cultural chreod. This paper begins with some general observations on the genesis and character of apocalyptic movements, then proceeds to trace the psychological roots of Euroamerican apocalyptic thought as expressed in the Trinitarian-dualist formulations of Christian dogma, showing how the writings of the medieval Calabrian mystic Joachim of Fiore (c.1135–1202) created a synthesis of dynamic Trinitarianism and existential dualism within a framework of historical immanence. The resulting Joachimite ‘program’ later underwent further dissemination and distortion within the context of psychospeciation and finally led to the great totalitarian systems of the 20th century, thereby indirectly exercising an influence on the development of psychohistory itself as an independent discipline.
This thesis investigates the possibility of improving the quality of text composition. Two typographic extensions were examined: margin kerning and composing with font expansion.
Margin kerning is the adjustment of the characters at the margins of a typeset text. A simplified form of margin kerning is hanging punctuation. Margin kerning is needed for optical alignment of the margins of a typeset text, because mechanical justification of the margins makes them look rather ragged. Some characters can make a line appear shorter to the human eye than others. Shifting such characters by an appropriate amount into the margins greatly improves the appearance of a typeset text.
Composing with font expansion is the method of using a wider or narrower variant of a font to make interword spacing more even. A font on a loose line can be substituted by a wider variant so the interword spaces are stretched by a smaller amount. Similarly, a font on a tight line can be replaced by a narrower variant to reduce the amount by which the interword spaces are shrunk. There is a potential danger of font distortion when using such manipulations, so they must be used with extreme care. The possibility of adjusting a line’s width by font expansion can also be taken into consideration while a paragraph is being broken into lines, in order to choose better breakpoints.
These typographic extensions were implemented in pdfTeX, a derivative of TeX. Extensive experiments were done to examine the influence of the extensions on the quality of typesetting. The extensions turned out to noticeably improve the appearance of typeset text. A number of ‘real-world’ documents have been typeset using these typographic extensions, including this thesis.
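Both extensions live on in pdfTeX and are exposed to LaTeX users through the microtype package; the following minimal example switches both on (the package options shown are the modern interface, not code from the thesis itself):

```latex
% Minimal sketch: enabling margin kerning (protrusion) and font
% expansion in a pdfLaTeX document via the microtype package.
\documentclass{article}
\usepackage[protrusion=true, % margin kerning / character protrusion
            expansion=true   % stretch/shrink fonts for even spacing
           ]{microtype}
\begin{document}
Justified text now benefits from hanging punctuation and font expansion.
\end{document}
```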
Visual Explanations: Images and Quantities, Evidence and Narrative [Tufte #3] is about pictures of verbs, the representation of mechanism and motion, process and dynamics, causes and effects, explanation and narrative. Practical applications and examples include statistical graphics, charts for making important decisions in engineering and medicine, technical manuals, diagrams, design of computer interfaces and websites and on-line manuals, animations and scientific visualizations, techniques for talks, and design strategies for enhancing the rate of information transfer in print, presentations, and computer screens. The use of visual evidence in deciding to launch the space shuttle Challenger is discussed in careful detail. Video snapshots show redesigns of a supercomputer animation of a thunderstorm. The book is designed and printed to the highest standards, with luscious color throughout and four built-in flaps for showing motion and before/after effects.
[Extracts from Tufte textbook on graphing information and visual design, where he revives & popularizes Oliver Byrne’s obscure Euclid edition, noting how effectively Byrne converts lengthy proofs into short sequences of cleanly-designed diagrams exploiting primary colors for legibility, and the curious anticipation of modernist design movements like De Stijl.]
Pp. 137, 200+ maps and geological sections (some in color and some color-tinted). Publisher’s two-color printed wrappers, lg folio (20.5×16 inches). This folio comprises scale-accurate, obliquely viewed maps compiled from 1961 to 1986 that portray the physiography of selected areas of the ocean floor and continents. The life’s work of Tau Rho Alpha…the maps are all oblique aerials, and range from 1961 to 1986, so are pre-digital. The ability to represent complex geographic and topographic features enlivens many maps of this sort, and the techniques used to create them make for a fascinating read…Some of the benefits of this type of map are discussed, including greater realism, easier comprehension, and the ability to maintain scale. Disadvantages include displacement of features, hiding of key elements, and a relative inexactness of elevation and location.
[130 epigrams on computer science and technology, published in 1982, for ACM’s SIGPLAN journal, by noted computer scientist and programming language researcher Alan Perlis. The epigrams are a series of short, programming-language-neutral, humorous statements about computers and programming, distilling lessons he had learned over his career, which are widely quoted.]
8. A programming language is low level when its programs require attention to the irrelevant….19. A language that doesn’t affect the way you think about programming, is not worth knowing….54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
15. Everything should be built top-down, except the first time….30. In programming, everything we do is a special case of something more general—and often we know it too quickly….31. Simplicity does not precede complexity, but follows it….58. Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it….65. Make no mistake about it: Computers process numbers—not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity….56. Software is under a constant tension. Being symbolic it is arbitrarily perfectible; but also it is arbitrarily changeable.
1. One man’s constant is another man’s variable. 34. The string is a stark data structure and everywhere it is passed there is much duplication of process. It is a perfect vehicle for hiding information.
36. The use of a program to prove the 4-color theorem will not change mathematics—it merely demonstrates that the theorem, a challenge for a century, is probably not important to mathematics.
39. Re graphics: A picture is worth 10K words—but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures.
48. The best book on programming for the layman is Alice in Wonderland; but that’s because it’s the best book on anything for the layman.
77. The cybernetic exchange between man, computer and algorithm is like a game of musical chairs: The frantic search for balance always leaves one of the 3 standing ill at ease….79. A year spent in artificial intelligence is enough to make one believe in God….84. Motto for a research laboratory: What we work on today, others will first think of tomorrow.
91. The computer reminds one of Lon Chaney—it is the machine of a thousand faces.
7. It is easier to write an incorrect program than understand a correct one….93. When someone says “I want a programming language in which I need only say what I wish done”, give him a lollipop….102. One can’t proceed from the informal to the formal by formal means.
100. We will never run out of things to program as long as there is a single program around.
108. Whenever 2 programmers meet to criticize their programs, both are silent….112. Computer Science is embarrassed by the computer….115. Most people find the concept of programming obvious, but the doing impossible. 116. You think you know when you can learn, are more sure when you can write, even more when you can teach, but certain when you can program. 117. It goes against the grain of modern education to teach children to program. What fun is there in making plans, acquiring discipline in organizing thoughts, devoting attention to detail and learning to be self-critical?
A single drawing of a single letter reveals only a small part of what was in the designer’s mind when that letter was drawn. But when precise instructions are given about how to make such a drawing, the intelligence of that letter can be captured in a way that permits us to obtain an infinite variety of related letters from the same specification. Instead of merely describing a single letter, such instructions explain how that letter would change its shape if other parameters of the design were changed. Thus an entire font of letters and other symbols can be specified so that each character adapts itself to varying conditions in an appropriate way. Initial experiments with a precise language for pen motions suggest strongly that the font designer of the future should not simply design isolated alphabets; the challenge will be to explain exactly how each design should adapt itself gracefully to a wide range of changes in the specification. This paper gives examples of a meta-font and explains the changeable parameters in its design.
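The paper’s central point (one specification, many letters) can be suggested with a toy in ordinary code; METAFONT itself describes pen strokes with equations rather than polygons, and every name and default below is invented for illustration:

```python
# Toy "meta-letter" sketch: one parameterized specification yielding a
# family of related capital I's. All names and defaults are invented;
# METAFONT works with pens and equations, not this simple outline.
def letter_I(height=100.0, stem=12.0, serif=30.0, serif_h=8.0):
    """Return (x, y) outline points for a serifed capital 'I'."""
    h, s, w, t = height, stem, serif, serif_h
    cx = w / 2.0                      # stem centered over the serifs
    return [
        (0, 0), (w, 0), (w, t),                  # bottom serif
        (cx + s / 2, t), (cx + s / 2, h - t),    # up the right stem edge
        (w, h - t), (w, h), (0, h), (0, h - t),  # top serif
        (cx - s / 2, h - t), (cx - s / 2, t),    # down the left stem edge
        (0, t),                                  # close the outline
    ]

light     = letter_I(stem=8)               # same spec, lighter weight
bold      = letter_I(stem=20)              # heavier weight
condensed = letter_I(serif=20, height=90)  # narrower, shorter variant
```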
This paper discusses a new approach to the problem of dividing the text of a paragraph into lines of approximately equal length. Instead of simply making decisions one line at a time, the method considers the paragraph as a whole, so that the final appearance of a given line might be influenced by the text on succeeding lines.
A system based on three simple primitive concepts called ‘boxes’, ‘glue’, and ‘penalties’ provides the ability to deal satisfactorily with a wide variety of typesetting problems in a unified framework, using a single algorithm that determines optimum breakpoints. The algorithm avoids backtracking by a judicious use of the techniques of dynamic programming.
Extensive computational experience confirms that the approach is both efficient and effective in producing high-quality output. The paper concludes with a brief history of line-breaking methods, and an appendix presents a simplified algorithm that requires comparatively few resources.
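As a heavily simplified sketch of the optimum-fit idea only (the paper’s actual algorithm models stretchable and shrinkable glue, hyphenation, and penalties, and prunes the search far more cleverly), a dynamic program over breakpoints might look like this:

```python
# Simplified sketch, not Knuth & Plass's algorithm: words are boxes,
# single spaces are glue, and each line is charged a cost based on its
# leftover slack; dynamic programming picks globally optimal breaks.
INF = float("inf")

def break_paragraph(words, width):
    n = len(words)

    def line_cost(i, j):
        """Cost of setting words[i:j] on one line of the given width."""
        length = sum(len(w) for w in words[i:j]) + (j - i - 1)
        if length > width:
            return INF                 # overfull line: forbidden
        if j == n:
            return 0                   # last line: slack is free
        return (width - length) ** 3   # cubed slack, echoing TeX badness

    best = [INF] * (n + 1)             # best[j] = min cost of words[:j]
    best[0] = 0
    back = [0] * (n + 1)               # back[j] = start of line ending at j
    for j in range(1, n + 1):
        for i in range(j):
            c = line_cost(i, j)
            if best[i] + c < best[j]:
                best[j], back[j] = best[i] + c, i

    lines, j = [], n                   # recover the chosen breakpoints
    while j > 0:
        i = back[j]
        lines.append(" ".join(words[i:j]))
        j = i
    return lines[::-1]

print("\n".join(break_paragraph(
    "the quick brown fox jumps over the lazy dog".split(), 12)))
```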
This expository paper explains how the problem of drawing the letter ‘S’ leads to interesting problems in elementary calculus and analytic geometry. It also gives a brief introduction to the author’s METAFONT language for alphabet design.
Although mechanical composition had become firmly established in printing-houses long before 1930, no substantial attempt had been made before that time to develop the resources of the machine, or adapt the technique of the machine compositor, to the exacting demands of mathematical printing. In that year the first serious approach to the problem was made at the University Press in Oxford. The early experiments were made in collaboration with Professor G. H. Hardy and Professor R. H. Fowler, and the editors of the Quarterly Journal of Mathematics (for which these first essays were designed) and with the Monotype Corporation. Much adaptation and recutting of type faces was necessary before the new system could be brought into use. These joint preparations included the drafting of an entirely new code of ‘Rules for the Composition of Mathematics’ which has been reserved hitherto for the use of compositors at the Press and those authors and editors whose work was produced under the Press imprints. It is now felt that these rules should have a wider circulation since, in the twenty years which have intervened, they have acquired a greater importance.
…The original ‘Rules’, themselves amended by continuous trial and rich experience, are here preceded by two new chapters. The first chapter is a simple explanation of the technique of printing and is addressed to those authors who are curious to know how their writings are transformed to the orderliness of the printed page; the second chapter, begun as the offering of a mathematical author and editor to his fellow-workers in this field, culled from notes gathered over many years, has ended in closest collaboration with the reader who for as many years has reconciled the demands of author, editor, and printer; the third chapter is the aforesaid collection of ‘Rules’ and is intended for compositors, readers, authors, and editors. Appendixes follow on Handwriting, Types available, and Abbreviations. It is not expected that anyone will read this book from cover to cover, but it is hoped that both author and printer will find it an acceptable and ready work of reference.
List Of Illustrations · I. The Mechanics Of Mathematical Printing · II. Recommendations To Mathematical Authors · 1. Introduction · 2. Fractions · 3. Surds · 4. Superiors And Inferiors · 5. Brackets · 6. Embellished Characters · 7. Displayed Formulae · 8. Notation (Miscellaneous) · 9. Headings And Numbering · 10. Footnotes And References · 11. Varieties Of Type · 12. Punctuation · 13. Wording · 14. Preparing Copy · 15. Corrections Of Proofs · 16. Final Queries And Offprints · III. Rules For The Composition Of Mathematics At The University Press, Oxford · Appendixes: · A. Legible Handwriting · B. Type Specimens And List Of Special Sorts · C. Abbreviations · Index
At Trieste, in 1872, in a palace with damp statues and deficient hygienic facilities, a gentleman on whose face an African scar told its tale—Captain Richard Francis Burton, the English consul—embarked on a famous translation of the Quitab alif laila ua laila, which the roumis know by the title The Thousand and One Nights. One of the secret aims of his work was the annihilation of another gentleman (also weather-beaten, and with a dark and Moorish beard) who was compiling a vast dictionary in England and who died long before he was annihilated by Burton. That gentleman was Edward Lane, the Orientalist, author of a highly scrupulous version of The Thousand and One Nights that had supplanted a version by Galland. Lane translated against Galland, Burton against Lane; to understand Burton we must understand this hostile dynasty.
It came with a slide-on case that ingeniously fastens to the back plate of the typewriter with rubber straps. Unfortunately, over time these would often dry out, crack, and break off. This example still has them intact, but given its age, it’s not a good idea to rely on them to carry it around!
The body is made largely of shiny ABS plastic, while the case has a heavy matte texture, and some key structural pieces, such as the ends of the platen, are of painted metal. The bright orange caps of the ribbon reels perk up the actual mechanism, something which in other typewriters is typically hidden from view…The large fold-out handle on the back of the machine (what becomes the top when carrying it in its case) overtly invites picking up the Valentine and taking it along for a joy ride, much as the handle on the first Mac signified the same intent. The case itself was custom-designed to match the aesthetic, unlike most typewriter cases of the day, which were nondescript black or gray plastic, or perhaps semi-soft vinyl. This is another example of Sottsass’ thinking about the whole user experience (as we would call it today).
…The Valentine was conceived as a competitor to the inexpensive units coming on to the market from Japan. Sottsass had some interesting ideas about how to simplify and lower the cost of the machine, such as not having lower-case letters (EVERYTHING WOULD BE SHOUTING IN UPPERCASE!), removing the bell that went “ding” at the end of the line, and using an inexpensive plastic for the case. Olivetti rejected all these as too radical, and used the higher-quality ABS plastic for the case, which pushed the price up higher than Sottsass had wanted.
“[F]or use in any place except in an office, so as not to remind anyone of the monotonous working hours, but rather to keep amateur poets company on quiet Sundays in the country or to provide a highly colored object on a table in a studio apartment. An anti-machine machine, built around the commonest mass-produced mechanism, the works inside any typewriter, it may also seem to be an unpretentious toy.”
One of the most distinctive features of Tufte’s style is his extensive use of sidenotes.3 Sidenotes are like footnotes, except they don’t force the reader to jump their eye to the bottom of the page, but instead display off to the side in the margin. Perhaps you have noticed their use in this document already. You are very astute.
Sidenotes are a great example of the web not being like print. On sufficiently large viewports, Tufte CSS uses the margin for sidenotes, margin notes, and small figures. On smaller viewports, elements that would go in the margin are hidden until the user toggles them into view. The goal is to present related but not necessary information such as asides or citations as close as possible to the text that references them. At the same time, this secondary information should stay out of the way of the eye, not interfering with the progression of ideas in the main text.
…If you want a sidenote without footnote-style numbering, then you want a margin note. Notice there isn’t a number preceding the note. On large screens, a margin note is just a sidenote that omits the reference number. This lessens the distraction pulling the eye away from the flow of the main text, but can increase the cognitive load of matching a margin note to its referent text.