Changing beliefs with failure in searching a set of possibilities. (psychology, statistics)
created: 01 Jul 2012; modified: 15 Feb 2017; status: finished; belief: log

This transcript has been prepared from a scan of chapter 15, "The ups and downs of the hope function in a fruitless search" by R. Falk, A. Lipson & C. Konold, pages 353-377 of Subjective Probability (1994), edited by G. Wright & P. Ayton. All links are my own insertion; references have been inserted as footnotes at the first citation.

See also

  • Waiting for the bus: When base-rates refuse to be neglected, Teigen & Keren 2007 (a worked calculation of the bus problem appears after this list):

    The paper reports the results from 16 versions of a simple probability estimation task, where probability estimates derived from base-rate information have to be modified by case knowledge. In the bus problem [adapted from Falk, R., Lipson, A., & Konold, C. (1994). The ups and downs of the hope function in a fruitless search. In G. Wright & P. Ayton (Eds.), Subjective probability (pp. 353-377). Chichester, UK: Wiley], a passenger waits for a bus that departs before schedule in 10% of the cases, and is more than 10 min delayed in another 10%. What are Fred’s chances of catching the bus on a day when he arrives on time and waits for 10 min? Most respondents think his probability is 10%, or 90%, instead of 50%, which is the correct answer. The experiments demonstrate the difficulties people have in replacing the original three-category 1/8/1 partitioning with a normalized, binary partitioning, where the middle category is discarded. In contrast with typical studies of base-rate neglect, or under-weighing of base-rates, this task demonstrates a reversed base-rate fallacy, where frequentistic information is overextended and case information ignored. Possible explanations for this robust phenomenon are briefly discussed.

  • The allure of equality: Uniformity in probabilistic and statistical judgment, Falk & Lann 2008:

    Uniformity, that is, equiprobability of all available options is central as a theoretical presupposition and as a computational tool in probability theory. It is justified only when applied to an appropriate sample space. In five studies, we posed diversified problems that called for unequal probabilities or weights to be assigned to the given units. The predominant response was choice of equal probabilities and weights. Many participants failed the task of partitioning the possibilities into elements that justify uniformity. The uniformity fallacy proved compelling and robust across varied content areas, tasks, and cases in which the correct weights should either have been directly or inversely proportional to their respective values. Debiasing measures included presenting individualized and visual data and asking for extreme comparisons. The preference of uniformity obtains across several contexts. It seems to serve as an anchor also in mathematical and social judgments. People’s pervasive partiality for uniformity is explained as a quest for fairness and symmetry, and possibly in terms of expediency.

  • The Hope Function, Sam Alexander
  • applying the Hope Function to technological forecasting (but it could also be applied to politics)

    1. of Artificial Intelligence; this initial analysis loosely inspired material in Intelligence Explosion: Evidence and Import (Muehlhauser & Salamon 2012), pg. 5 (a numerical sketch of the coin-toss model in its footnote appears after this list):

      How, then, might we predict when AI will be created? We consider several strategies below. By considering the time since Dartmouth. We have now seen more than 50 years of work toward machine intelligence since the seminal Dartmouth conference on AI, but AI has not yet arrived. This seems, intuitively, like strong evidence that AI won’t arrive in the next minute, good evidence it won’t arrive in the next year, and significant but far from airtight evidence that it won’t arrive in the next few decades. Such intuitions can be formalized into models that, while simplistic, can form a useful starting point for estimating the time to machine intelligence.8

      8: We can make a simple formal model of this evidence by assuming (with much simplification) that every year a coin is tossed to determine whether we will get AI that year, and that we are initially unsure of the weighting on that coin. We have observed more than 50 years of no AI since the first time serious scientists believed AI might be around the corner. This 56 years of no AI observation would be highly unlikely under models where the coin comes up AI on 90% of years (the probability of our observations would be 10^-56), or even models where it comes up AI in 10% of all years (probability 0.3%), whereas it’s the expected case if the coin comes up AI in, say, 1% of all years, or for that matter in 0.0001% of all years. Thus, in this toy model, our no AI for 56 years observation should update us strongly against coin weightings in which AI would be likely in the next minute, or even year, while leaving the relative probabilities of AI expected in 200 years and AI expected in 2 million years more or less untouched. (These updated probabilities are robust to choice of the time interval between coin flips; it matters little whether the coin is tossed once per decade, or once per millisecond, or whether one takes a limit as the time interval goes to zero.) Of course, one gets a different result if a different starting point is chosen, e.g. Alan Turing’s seminal paper on machine intelligence (Turing 1950) or the inaugural conference on artificial general intelligence (Wang, Goertzel, and Franklin 2008). For more on this approach and Laplace’s rule of succession, see Jaynes (2003), chapter 18. We suggest this approach only as a way of generating a prior probability distribution over AI timelines, from which one can then update upon encountering additional evidence.

    2. of Folding@home producing practical results
  • Discrete Sequential Search, Black 1965
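To make the bus problem's arithmetic concrete, here is a minimal sketch in Python (my own illustration, not from either paper) of the renormalization the abstract describes:

```python
# Bus problem (Teigen & Keren 2007, adapted from Falk, Lipson & Konold 1994).
# Base rates: the bus departs before schedule in 10% of cases, arrives within
# 10 minutes of schedule in 80%, and is more than 10 min delayed in 10%.
p_early, p_window, p_late = 0.10, 0.80, 0.10

# Fred arrived on time and has waited 10 minutes with no bus. The wait rules
# out the middle category: an on-schedule bus would have come by now.
p_no_bus_yet = p_early + p_late

# He still catches the bus only if it is merely delayed, not already gone.
p_catch = p_late / p_no_bus_yet
print(f"P(catch | waited 10 min) = {p_catch:.2f}")  # 0.50
```

The popular answers of 10% and 90% read the base rates straight off the original 1/8/1 partition; the correct 50% comes from discarding the middle category and renormalizing over the two possibilities that survive the 10-minute wait.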
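The toy model in Muehlhauser & Salamon's footnote 8 is easy to check numerically. A minimal sketch, assuming a grid of candidate annual AI probabilities chosen for illustration (the specific rates are mine, not the paper's):

```python
# Toy model from footnote 8: each year a coin with unknown weight
# p = P(AI created that year) is tossed; we have observed 56
# consecutive years of no AI.
years = 56

for p in [0.9, 0.1, 0.01, 1e-6]:
    likelihood = (1 - p) ** years  # P(56 straight years of no AI | p)
    print(f"p = {p:<8} -> P(no AI for 56 years) = {likelihood:.3g}")

# p = 0.9  -> ~1e-56  (decisively ruled out, as the footnote says)
# p = 0.1  -> ~0.0027 (~0.3%, matching the footnote)
# p = 0.01 -> ~0.57   (barely dented)
# p = 1e-6 -> ~1      (untouched; hence "AI in 200 years" vs. "AI in
#                      2 million years" stay in roughly their prior ratio)

# Laplace's rule of succession (0 successes in 56 trials, uniform prior on p):
# posterior mean = (0 + 1) / (56 + 2) = 1/58, i.e. ~1.7% per year.
print(f"Laplace estimate: {1 / (years + 2):.4f} per year")
```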

