# math/humor directory

## “Rare Greek Variables”, Bayer 2021

“Rare Greek Variables”⁠, (2021-11-28; backlinks; similar):

One of John Nash’s first papers deliberately used every Greek letter.

For the film *A Beautiful Mind*, I used this paper for the writing on his dorm-room window. As luck would have it, a widely circulated publicity still showed Russell Crowe intent behind “0 < π < 1”, taken straight from that paper.

Suffice it to say, this was divisive within the math community. Half of us can’t imagine π meaning anything besides, um, π. The other half didn’t even blink.

Someone shared with me a hilarious email exchange within the Berkeley math department, wondering if the math consultant was deliberately trying to make Russell Crowe look bad.

I got the chance to edit an interview with John Nash for the DVD extras, where he bragged to Ron Howard about using every Greek letter. I left that in.

## “Rare Greek Variables”, Branwen 2021

“Rare Greek Variables”⁠, (2021-04-08; backlinks; similar):

I scrape Arxiv to find underused Greek variables which can add some diversity to math; the top 10 underused letters are ϰ, ς, υ, ϖ, Υ, Ξ, ι, ϱ, ϑ, & Π. Avoid overused letters like λ, and spice up your next paper with some memorable variables!

Some Greek alphabet variables are just plain overused. It seems like no paper is complete without a bunch of ϵ or μ or α variables splattered across it—and they all mean different things in different papers, and that’s when they don’t mean different things in the same paper! In the spirit of offering constructive criticism, might I suggest that, based on Arxiv frequency of usage, you experiment with more recherché, even outré, variables?

Instead of reaching for that exhausted π, why not use… ϰ (variant kappa)? (It looks like a Hebrew escapee…) Or how about ς (variant sigma), which is calculated to get your reader’s attention by making them go “ςςς” and exclaim “these letters are Greek to me!”

The top 10 least-used Greek variables on Arxiv⁠, rarest to most common:

1. \varkappa (ϰ)
2. \varsigma (ς)
3. \upsilon (υ)
4. \varpi (ϖ)
5. \Upsilon (Υ)
6. \Xi (Ξ)
7. \iota (ι)
8. \varrho (ϱ)
9. \vartheta (ϑ)
10. \Pi (Π)
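The ranking above comes from counting macro frequencies across scraped Arxiv TeX sources. A minimal sketch of that counting step, on a two-document toy corpus invented here for illustration (the macro list and corpus are not from the original scrape):

```python
import re
from collections import Counter

# Hypothetical subset of Greek-letter macros to tally.
GREEK = [r"\alpha", r"\mu", r"\lambda", r"\pi", r"\varkappa", r"\varsigma"]

def greek_counts(tex_sources):
    """Count how often each tracked Greek macro appears in a list of TeX strings."""
    counts = Counter({m: 0 for m in GREEK})
    control_word = re.compile(r"\\[A-Za-z]+")  # any TeX control word, e.g. \varkappa
    for tex in tex_sources:
        for macro in control_word.findall(tex):
            if macro in counts:
                counts[macro] += 1
    return counts

# Made-up stand-ins for two papers' TeX source:
corpus = [
    r"Let $\alpha, \mu$ be rates and $\lambda$ the decay; set $0 < \pi < 1$.",
    r"We write $\varkappa$ for curvature and reuse $\alpha$ twice: $\alpha$.",
]
counts = greek_counts(corpus)
rarest_first = sorted(GREEK, key=lambda m: counts[m])  # unused macros sort first
```

Sorting macros by ascending count then yields a “rarest to most common” list like the one above.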

## “Single Headed Attention RNN: Stop Thinking With Your Head”, Merity 2019

“Single Headed Attention RNN: Stop Thinking With Your Head”⁠, Stephen Merity (2019-11-26; similar):

The leading approaches in language modeling are all obsessed with TV shows of my youth—namely Transformers and Sesame Street. Transformers this, Transformers that, and over here a bonfire worth of GPU-TPU-neuromorphic wafer scale silicon. We opt for the lazy path of old and proven techniques with a fancy crypto inspired acronym: the Single Headed Attention RNN (SHA-RNN). The author’s lone goal is to show that the entire field might have evolved a different direction if we had instead been obsessed with a slightly different acronym and slightly different result.

We take a previously strong language model based only on boring LSTMs and get it to within a stone’s throw of a stone’s throw of state-of-the-art byte level language model results on Enwik8. This work has undergone no intensive hyperparameter optimization and lived entirely on a commodity desktop machine that made the author’s small studio apartment far too warm in the midst of a San Franciscan summer. The final results are achievable in plus or minus 24 hours on a single GPU as the author is impatient. The attention mechanism is also readily extended to large contexts with minimal computation.

Take that Sesame Street.

## “Real Numbers, Data Science and Chaos: How to Fit Any Dataset With a Single Parameter”, Boué 2019

“Real numbers, data science and chaos: How to fit any dataset with a single parameter”⁠, (2019-04-28; similar):

We show how any dataset of any modality (time-series, images, sound…) can be approximated by a well-behaved (continuous, differentiable…) scalar function with a single real-valued parameter. Building upon elementary concepts from chaos theory, we adopt a pedagogical approach demonstrating how to adjust this parameter in order to achieve arbitrary precision fit to all samples of the data. Targeting an audience of data scientists with a taste for the curious and unusual, the results presented here expand on previous similar observations regarding expressiveness power and generalization of machine learning models.
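The paper’s construction can be sketched concretely: the decoder is f<sub>α</sub>(x) = sin²(2^(xτ)·arcsin √α), and encoding packs each sample’s τ-bit quantization into the binary expansion of a number b, with α = sin²(πb). A minimal sketch under that reading (the function names and the tiny τ are illustrative choices, not the paper’s reference code; floating point limits this demo to a few samples):

```python
import math

TAU = 12  # bits of precision per sample (small demo value)

def encode(samples):
    """Pack samples strictly inside (0, 1) into a single parameter alpha."""
    b_int = 0
    for y in samples:
        u = math.asin(math.sqrt(y)) / math.pi      # u in [0, 1/2)
        b_int = (b_int << TAU) | int(u * 2**TAU)   # append tau bits of u
    b = b_int / 2 ** (TAU * len(samples))          # b = 0.u1u2u3... in binary
    return math.sin(math.pi * b) ** 2              # alpha = sin^2(pi*b)

def decode(alpha, k):
    """f_alpha(k) = sin^2(2^(k*tau) * arcsin(sqrt(alpha))) recovers sample k."""
    return math.sin(2 ** (k * TAU) * math.asin(math.sqrt(alpha))) ** 2

samples = [0.25, 0.5, 0.75]
alpha = encode(samples)
recovered = [decode(alpha, k) for k in range(len(samples))]
```

Each extra sample costs τ more bits of precision in α, which is why “arbitrary precision fit” requires treating α as an arbitrary-precision real rather than a float.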

## “Mathematicians Who Never Were”, Pieronkiewicz 2018

2018-pieronkiewicz.pdf: “Mathematicians Who Never Were”⁠, Barbara Pieronkiewicz (2018-05-21; backlinks)

## “Optimal Tip-to-Tip Efficiency: a Model for Male Audience Stimulation”, Chugtai & Gilfoyle 2014

2014-chugtai.pdf: “Optimal Tip-to-Tip Efficiency: a model for male audience stimulation”⁠, (2014-05-29; similar):

A probabilistic model is introduced for the problem of stimulating a large male audience.

Double jerking is considered, in which 2 shafts may be stimulated with a single hand. Both tip-to-tip and shaft-to-shaft configurations of audience members are analyzed.

We demonstrate that pre-sorting members of the audience according to both shaft girth and leg length allows for more efficient stimulation. Simulations establish steady rates of stimulation even as the variance of certain parameters is allowed to grow, whereas naive unsorted schemes have increasingly flaccid performance.

[Analysis for S01E08 of Silicon Valley⁠; additional analysis]

## “Two Curious Integrals and a Graphic Proof”, Schmid 2014

2014-schmid.pdf: “Two curious integrals and a graphic proof”⁠, Hanspeter Schmid (2014-01-01; backlinks)

## “COM3200: Programming Language Semantics: Chapter 5. Induction Techniques. 5.5. Backward Induction and Petard's BGH Theorem”, Birtwistle 2009

2009-birtwistle-com3200programminglanguagesemantics-ch5.5-backwardinductionandpetardsbghtheorem.pdf: “COM3200: Programming Language Semantics: Chapter 5. Induction Techniques. 5.5. Backward Induction and Petard's BGH Theorem”⁠, Graham Birtwistle (2009-01-01; backlinks)

## “Big Game Hunting for Graduate Students in Mathematics”, Athreya & Khare 2009

2009-athreya.pdf: “Big Game Hunting for Graduate Students in Mathematics”⁠, Jayadev Athreya, Apoorva Khare (2009-01-01; backlinks)

## “Serge Lang, 1927–2005: Part 1: Paul Vojta, University of California, Berkeley”, Jorgenson & Krantz 2006-page-12

“Serge Lang, 1927–2005: Part 1: Paul Vojta, University of California, Berkeley”⁠, (2006; backlinks; similar):

…During my time at Yale, I gave 2 or 3 graduate courses. Serge always sat in the front row, paying close attention, to the point of interrupting me mid-sentence: “The notation should be functorial with respect to the ideas!” or “This notation sucks!” But after class he complimented me highly on the lecture.

While on sabbatical at Harvard, he sat in on a course Mazur was giving and often criticized the notation. Eventually they decided to give him a T-shirt which said, “This notation sucks” on it. So one day Barry intentionally tried to get him to say it. He introduced a complex variable Ξ⁠, took its complex conjugate⁠, and divided by the original Ξ. This was written as a vertical fraction, so it looked like 8 horizontal lines on the blackboard. He then did a few other similar things, but Serge kept quiet—apparently he didn’t criticize notation unless he knew what the underlying mathematics was about. Eventually Barry had to give up and just present him with the T-shirt.

Once, close to the end of my stay at Yale, I was in his office discussing some mathematics with him. He was yelling at me and I was yelling back. At the end of the discussion, he said that he’d miss me (when I left Yale). Now that he has left, I will miss him, too.

## “John W. Tukey: His Life and Professional Contributions”, Brillinger 2002-page-5

2002-brillinger.pdf#page=5: “John W. Tukey: His Life and Professional Contributions”⁠, David R. Brillinger (2002-12-01; backlinks)

## “Some Remarkable Properties of Sinc and Related Integrals”, Borwein & Borwein 2001

2001-borwein.pdf: “Some Remarkable Properties of Sinc and Related Integrals”⁠, (2001-03; backlinks; similar):

Using Fourier transform techniques, we establish inequalities for integrals of the form

∫₀^∞ ∏ₖ₌₀ⁿ sinc(aₖ·x) dx, where sinc(x) = sin(x)⁄x.

We then give quite striking closed-form evaluations of such integrals and finish by discussing various extensions and applications.

[Keywords: sinc integrals, Fourier transforms, convolution, Parseval’s theorem]
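The “striking evaluations” referred to are the now-famous Borwein integrals with aₖ = 1⁄(2k+1): the integral equals π⁄2 for n = 0 through 6, then falls just short at n = 7. A rough numerical check of the n = 2 case (truncated trapezoid rule; the cutoff and step count here are arbitrary demo choices, not from the paper):

```python
import math

def sinc(x):
    """sin(x)/x with the removable singularity at 0 filled in."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def borwein_integral(n, upper=200.0, steps=400_000):
    """Trapezoid-rule estimate of the integral of prod_{k=0}^{n} sinc(x/(2k+1))
    over [0, upper]; the product decays like x^-(n+1), so truncation is mild."""
    def f(x):
        prod = 1.0
        for k in range(n + 1):
            prod *= sinc(x / (2 * k + 1))
        return prod
    h = upper / steps
    total = 0.5 * (f(0.0) + f(upper))
    for i in range(1, steps):
        total += f(i * h)
    return total * h

approx = borwein_integral(2)  # integrand: sinc(x)·sinc(x/3)·sinc(x/5)
```

The pattern persists exactly while 1⁄3 + 1⁄5 + … + 1⁄(2n+1) < 1, which first fails at n = 7.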

## “Letters [Mathematical Intelligencer, Volume 4, Issue 1, March 1982]”, Neumann et al 1982

1982-pondiczery.pdf: “Letters [Mathematical Intelligencer, Volume 4, issue 1, March 1982]”⁠, Peter M. Neumann, E. S. Pondiczery, Guy Boillat, W. Nowacki, editors (1982-03-01; backlinks)

## “Seven Years of Manifold: 1968–1980”, Stewart & Jaworski 1981

1981-stewart-thebestofmanifold19681980.pdf: “Seven Years of Manifold: 1968–1980”⁠, Ian Stewart, John Jaworski (1981-01-01; backlinks)

## “A Rebuke of A. B. Smith’s Paper, ‘A Note on Piffles’”, Farlow 1980

1980-farlow.pdf: “A rebuke of A. B. Smith’s paper, ‘A Note on Piffles’”⁠, S. J. Farlow (1980-01-01)

## “15 New Ways To Catch A Lion”, Stewart 1976

1976-barrington.pdf: “15 New Ways To Catch A Lion”⁠, Ian Stewart (1976-01-01; backlinks)

## “Further Techniques in the Theory of Big Game Hunting”, Dudley et al 1968

1968-dudley.pdf: “Further Techniques in the Theory of Big Game Hunting”⁠, Patricia L. Dudley, G. T. Evans, K. D. Hansen, I. D. Richardson (1968-10-01; backlinks)

## “Some Modern Mathematical Methods in the Theory of Lion Hunting”, Morphy 1968

1968-morphy.pdf: “Some Modern Mathematical Methods in the Theory of Lion Hunting”⁠, Otto Morphy (1968-01-01; backlinks)

## “A Note On Piffles, By A. B. Smith”, Austin 1967

1967-austin.pdf: “A Note On Piffles, By A. B. Smith”⁠, A. K. Austin (1967-05-01)

## “On a Theorem of H. Pétard”, Roselius 1967

1967-roselius.pdf: “On a Theorem of H. Pétard”⁠, Christian Roselius (1967-01-01; backlinks)

## “A New Method of Catching a Lion”, Good 1965

1965-good.pdf: “A New Method of Catching a Lion”⁠, I. J. Good (1965-01-01; backlinks)
