1. `Leprechauns`

2. `#diaconis-mosteller-1989`

3. `https://archive.org/details/mathematiciansmi033496mbp/page/n111`

4. `https://archive.org/details/littlewoodsmisce0000litt/page/n5`

5. `https://archive.org/details/debunked00geor`

6. `https://www.amazon.com/Scientist-Rebel-Review-Books-Paperback/dp/1590172949`

7. `Littlewood`

8. `Fashion`

9. `https://www.et.byu.edu/~wheeler/benchtop/flight.php`

10. `1986-boolos.pdf`: “Review of Yu. I. Manin, _A Course in Mathematical Logic_”, George Boolos

11. `1989-diaconis.pdf#page=6`: “Methods for Studying Coincidences”, Persi Diaconis & Frederick Mosteller (1989-01-01; statistics/bias):

The New Word: …Because of our different reading habits, we readers are exposed to the same words at different observed rates, even when the long-run rates are the same. Some words will appear relatively early in your experience, some relatively late. More than half will appear before their expected time of appearance, probably more than 60% of them if we use the exponential model, so the appearance of new words is like a […]. On the other hand, some words will take more than twice the average time to appear, about 1⁄7 of them (1⁄e²) in the exponential model. They will look rarer than they actually are. Furthermore, their average time to reappearance is less than half that of their observed first appearance, and about 10% of those that took at least twice as long as they should have to occur will appear in less than 1⁄20 of the time they originally took to appear. The model we are using supposes an exponential waiting time to first occurrence of events. The phenomenon that accounts for part of this variable behavior of the words is of course the regression effect.

…We now extend the model. Suppose that we are somewhat more complicated creatures, that we require k exposures to notice a word for the first time, and that k is itself a Poisson random variable…Then, the mean time until the word is noticed is (𝜆 + 1)T, where T is the average time between actual occurrences of the word. The variance of the time is (2𝜆 + 1)T². Suppose T = 1 year and 𝜆 = 4. Then, as an approximation, 5% of the words will take at least time [𝜆 + 1 + 1.65(2𝜆 + 1)^(1⁄2)]T or about 10 years to be detected the first time. Assume further that, now that you are sensitized, you will detect the word the next time it appears. On the average it will be a year, but about 3% of these words that were so slow to be detected the first time will appear within a month by natural variation alone. So what took 10 years to happen once happens again within a month. No wonder we are astonished. One of our graduate students learned the word on a Friday and read part of this manuscript the next Sunday, two days later, illustrating the effect and providing an anecdote. Here, sensitizing the individual, the regression effect, and the recall of notable events and the non-recall of humdrum events produce a situation where coincidences are noted with much higher frequency than expected. This model can explain vast numbers of seeming coincidences. [See also the […]; Brockman’s law.]
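The numbers quoted in the excerpt are easy to check. A minimal Python sketch (not from the paper; the variable names and Monte Carlo setup are my own): it verifies the exponential-model fractions analytically, then simulates the Poisson-exposure model with 𝜆 = 4 and T = 1 year, where a word is noticed only on its (K + 1)-th occurrence, K ~ Poisson(4), and occurrences are separated by Exponential(1⁄T) gaps.

```python
import math
import random

random.seed(42)

# --- Exponential model (mean waiting time T = 1) ---
# Fraction of words appearing before their expected time: 1 - 1/e ~ 63%.
before_expected = 1 - math.exp(-1)
# Fraction taking more than twice the average time: e^-2 ~ 1/7.
slower_than_2x = math.exp(-2)
# Memorylessness: a word whose first wait was t = 2 (twice the mean)
# reappears within t/20 with probability 1 - e^(-0.1) ~ 10%.
reappear_fast = 1 - math.exp(-2 / 20)

# --- Poisson-exposure model: noticing requires K + 1 exposures, K ~ Poisson(4) ---
def poisson(lam):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

lam, T, n = 4.0, 1.0, 200_000
times = []
for _ in range(n):
    exposures = poisson(lam) + 1          # occurrences needed before noticing
    times.append(sum(random.expovariate(1 / T) for _ in range(exposures)))

mean_time = sum(times) / n                # theory: (lam + 1) * T = 5 years
frac_over_10 = sum(t >= 10 for t in times) / n

print(f"mean time to notice: {mean_time:.2f} years (theory: 5)")
print(f"fraction taking >= 10 years: {frac_over_10:.3f}")
```

The simulated mean lands near (𝜆 + 1)T = 5 years, as the excerpt states. The ≥ 10-year tail comes out somewhat above the paper's 5% normal approximation (roughly 6–7% in this simulation), since the compound waiting-time distribution is right-skewed; the qualitative point, that a non-negligible fraction of words take about twice their mean time to be noticed, holds either way.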

12. `Regression`

13. `http://www.msri.org/workshops/220`