/docs/statistics/decision/ Directory Listing

Directories

Files

  • 2020-chamberlain.pdf: ⁠, Gary Chamberlain (2020-08-01):

    This review uses the empirical analysis of portfolio choice to illustrate econometric issues that arise in decision problems. Subjective expected utility (SEU) can provide normative guidance to an investor making a portfolio choice. The investor, however, may have doubts on the specification of the distribution and may seek a decision theory that is less sensitive to the specification. I consider three such theories: maxmin expected utility, variational preferences (including multiplier and divergence preferences and the associated constraint preferences), and smooth ambiguity preferences. I use a simple two-period model to illustrate their application. Normative empirical work on portfolio choice is mainly in the SEU framework, and bringing in ideas from robust decision theory may be fruitful.
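
    A minimal numerical sketch of the maxmin-expected-utility idea for a two-asset portfolio, under assumptions of my own (CRRA utility and a hypothetical three-element set of candidate return distributions), not the paper's model: instead of maximizing expected utility under a single estimated distribution, the investor maximizes the worst-case expected utility over the candidate set.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def crra_utility(wealth, gamma=3.0):
        """Constant-relative-risk-aversion utility of terminal wealth."""
        w = np.clip(wealth, 1e-6, None)          # guard against rare non-positive draws
        return w ** (1 - gamma) / (1 - gamma)

    # Hypothetical set of candidate (mean, sd) return distributions for the risky asset;
    # the investor doubts which specification is correct.
    candidate_models = [(0.06, 0.15), (0.04, 0.20), (0.02, 0.25)]
    risk_free = 0.01

    def expected_utility(alpha, mu, sigma, n_draws=100_000):
        """Monte Carlo expected utility of putting fraction `alpha` in the risky asset under one model."""
        r = rng.normal(mu, sigma, n_draws)
        return crra_utility(1 + alpha * r + (1 - alpha) * risk_free).mean()

    grid = np.linspace(0, 1, 101)
    # SEU: maximize expected utility under a single point-estimate model (the first one).
    seu_alpha = max(grid, key=lambda a: expected_utility(a, *candidate_models[0]))
    # Maxmin EU: maximize the worst-case expected utility over the whole candidate set.
    maxmin_alpha = max(grid, key=lambda a: min(expected_utility(a, m, s) for m, s in candidate_models))

    print(f"SEU allocation to the risky asset:    {seu_alpha:.2f}")
    print(f"maxmin allocation to the risky asset: {maxmin_alpha:.2f}")
    ```

    The maxmin allocation is typically more conservative, since it is driven by the worst distribution in the candidate set rather than the point estimate.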

  • 2019-azevedo.pdf: “A/B Testing with Fat Tails”⁠, Eduardo M. Azevedo, Alex Deng, José Luis Montiel Olea, Justin M. Rao, E. Glen Weyl

  • 2019-kamenica.pdf: ⁠, Emir Kamenica (2019-08-01; backlinks):

    A school may improve its students’ job outcomes if it issues only coarse grades. Google can reduce congestion on roads by giving drivers noisy information about the state of traffic. A social planner might raise everyone’s welfare by providing only partial information about solvency of banks. All of this can happen even when everyone is fully rational and understands the data-generating process. Each of these examples raises questions of what is the (socially or privately) optimal information that should be revealed. In this article, I review the literature that answers such questions.

  • 2019-isakov.pdf: ⁠, Leah Isakov, Andrew W. Lo, Vahid Montazerhodjat (2019-01-04; backlinks):

    Implicit in the drug-approval process is a host of decisions—target patient population, control group, primary endpoint, sample size, follow-up period, etc.—all of which determine the trade-off between Type I and Type II error. We explore the application of Bayesian decision analysis (BDA) to minimize the expected cost of drug approval, where the relative costs of the two types of errors are calibrated using U.S. Burden of Disease Study 2010 data. The results for conventional fixed-sample randomized clinical-trial designs suggest that for terminal illnesses with no existing therapies such as pancreatic cancer, the standard threshold of 2.5% is substantially more conservative than the BDA-optimal threshold of 23.9% to 27.8%. For relatively less deadly conditions such as prostate cancer, 2.5% is more risk-tolerant or aggressive than the BDA-optimal threshold of 1.2% to 1.5%. We compute BDA-optimal sizes for 25 of the most lethal diseases and show how a BDA-informed approval process can incorporate all stakeholders’ views in a systematic, transparent, internally consistent, and repeatable manner.
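
    A hedged sketch of the Bayesian-decision-analysis idea (illustrative costs, prior, and power model; not the paper's Burden-of-Disease calibration): choose the significance threshold that minimizes expected cost rather than fixing it at 2.5%.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical calibration: pick the one-sided significance level alpha that minimizes
    # expected cost, trading the cost of approving an ineffective drug (Type I error)
    # against the cost of rejecting an effective one (Type II error).
    effect = 0.3          # standardized true effect if the drug works
    n_per_arm = 200
    p_effective = 0.5     # prior probability that the drug is effective
    cost_type_i = 1.0     # relative cost of a false approval
    cost_type_ii = 5.0    # relative cost of a false rejection (severe disease, no therapy)

    def expected_cost(alpha):
        z_alpha = norm.ppf(1 - alpha)
        se = np.sqrt(2 / n_per_arm)                     # SE of the standardized mean difference
        power = 1 - norm.cdf(z_alpha - effect / se)     # P(reject H0 | drug effective)
        return ((1 - p_effective) * alpha * cost_type_i
                + p_effective * (1 - power) * cost_type_ii)

    alphas = np.linspace(0.001, 0.30, 600)
    best_alpha = alphas[np.argmin([expected_cost(a) for a in alphas])]
    print(f"cost-minimizing one-sided alpha: {best_alpha:.3f} (conventional threshold: 0.025)")
    ```

    With Type II errors weighted heavily, the cost-minimizing threshold lands well above 2.5%, which is the qualitative pattern the paper reports for terminal illnesses.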

  • 2019-orr.pdf: “Using the Results from Rigorous Multisite Evaluations to Inform Local Policy Decisions”⁠, Larry L. Orr, Robert B. Olsen, Stephen H. Bell, Ian Schmid, Azim Shivji, Elizabeth A. Stuart

  • 2018-daye.pdf: ⁠, Christian Dayé (2018-01-01):

    Delphi is a procedure that produces forecasts on technological and social developments. This article traces the history of Delphi’s development to the early 1950s, when a group of logicians and mathematicians working at the RAND Corporation carried out experiments to assess the predictive capacities of groups of experts. While Delphi now has a rather stable methodological shape, this was not so in its early years. The vision that Delphi’s creators had for their brainchild changed considerably. While they had initially seen it as a technique, a few years later they reconfigured it as a scientific method. After some more years, however, they conceived of Delphi as a tool. This turbulent youth of Delphi can be explained by parallel changes in the fields that were deemed relevant audiences for the technique, operations research and the policy sciences. While changing the shape of Delphi led to some success, it had severe, yet unrecognized methodological consequences. The core assumption of Delphi, that the convergence of expert opinions observed over the iterative stages of the procedure can be interpreted as consensus, appears not to be justified for the third shape of Delphi as a tool, which continues to be the most prominent one.

  • 2018-berman.pdf: “p-Hacking and False Discovery in A/B Testing”⁠, Ron Berman, Leonid Pekelis, Aisling Scott, Christophe Van den Bulte (backlinks)

  • 2017-shenhav.pdf: ⁠, Amitai Shenhav, Sebastian Musslick, Falk Lieder, Wouter Kool, Thomas L. Griffiths, Jonathan D. Cohen, Matthew M. Botvinick (2017-07-01; backlinks):

    In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.

    [Keywords: motivation, cognitive control, decision making, reward, prefrontal cortex, executive function]

  • 2017-pedroni.pdf: “The risk elicitation puzzle”⁠, Andreas Pedroni, Renato Frey, Adrian Bruhin, Gilles Dutilh, Ralph Hertwig, Jörg Rieskamp

  • 2017-nohdurft.pdf: “Was Angelina Jolie Right? Optimizing Cancer Prevention Strategies Among BRCA Mutation Carriers”⁠, Eike Nohdurft, Elisa Long, Stefan Spinler

  • 2015-mi.pdf: “Selectiongain: an R package for optimizing multi-stage selection”⁠, Xuefei Mi, H. Friedrich Utz, Albrecht E. Melchinger (backlinks)

  • 2012-morgan.pdf: ⁠, Kari Lock Morgan, Donald B. Rubin (2012-07-18; backlinks):

    Randomized experiments are the “gold standard” for estimating causal effects, yet often in practice, chance imbalances exist in covariate distributions between treatment groups. If covariate data are available before units are exposed to treatments, these chance imbalances can be mitigated by first checking covariate balance before the physical experiment takes place. Provided a precise definition of imbalance has been specified in advance, unbalanced randomizations can be discarded, followed by a rerandomization, and this process can continue until a randomization yielding balance according to the definition is achieved. By improving covariate balance, rerandomization provides more precise and trustworthy estimates of treatment effects.

    [Keywords: randomization, treatment allocation, experimental design, clinical trial, causal effect, Mahalanobis distance, Hotelling’s T2]
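
    A minimal sketch of the rerandomization loop described above, assuming a Mahalanobis-distance balance criterion with a made-up acceptance threshold and covariates:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def mahalanobis_balance(X, assign):
        """Mahalanobis distance between treatment- and control-group covariate means."""
        diff = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
        cov = np.cov(X, rowvar=False)
        return float(diff @ np.linalg.inv(cov) @ diff)

    def rerandomize(X, threshold=0.1, max_tries=10_000):
        """Redraw a 50/50 allocation until covariate balance meets the pre-specified criterion."""
        n = len(X)
        base = np.array([0] * (n // 2) + [1] * (n - n // 2))
        for _ in range(max_tries):
            assign = rng.permutation(base)
            if mahalanobis_balance(X, assign) <= threshold:
                return assign
        raise RuntimeError("no acceptable randomization found; loosen the threshold")

    X = rng.normal(size=(100, 3))          # hypothetical baseline covariates for 100 units
    assignment = rerandomize(X)
    print("balance of accepted allocation:", round(mahalanobis_balance(X, assignment), 4))
    ```

    As the paper emphasizes, the acceptance criterion must be fixed before any outcomes are seen, so that the subsequent analysis can account for the restricted randomization distribution.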

  • 2011-meyers.pdf: “Improving vineyard sampling efficiency via dynamic spatially explicit optimisation”⁠, J. M. Meyers, G. L. Sacks, H. M. Van Es, J. E. Vanden Heuvel

  • 2010-paul.pdf: ⁠, Steven M. Paul, Daniel S. Mytelka, Christopher T. Dunwiddie, Charles C. Persinger, Bernard H. Munos, Stacy R. Lindborg, Aaron L. Schacht (2010-02-19; backlinks):

    • The biopharmaceutical industry is facing unprecedented challenges to its fundamental business model and currently cannot sustain sufficient innovation to replace its products and revenues lost due to patent expirations.
    • The number of truly innovative new medicines approved by regulatory agencies such as the US Food and Drug Administration has declined substantially despite continued increases in R&D spending, raising the current cost of each new molecular entity (NME) to approximately US$1.8 billion (2010 dollars; ~US$2.36 billion inflation-adjusted).
    • Declining R&D productivity is arguably the most important challenge the industry faces and thus improving R&D productivity is its most important priority.
    • A detailed analysis of the key elements that determine overall R&D productivity and the cost to successfully develop an NME reveals exactly where (and to what degree) R&D productivity can (and must) be improved.
    • Reducing late-stage (Phase II and III) attrition rates and cycle times during drug development are among the key requirements for improving R&D productivity.
    • To achieve the necessary increase in R&D productivity, R&D investments, both financial and intellectual, must be focused on the ‘sweet spot’ of drug discovery and early clinical development, from target selection to clinical proof-of-concept.
    • The transformation from a traditional biopharmaceutical FIPCo (fully integrated pharmaceutical company) to a FIPNet (fully integrated pharmaceutical network) should allow a given R&D organization to ‘play bigger than its size’ and to more affordably fund the necessary number and quality of pipeline assets.

    The pharmaceutical industry is under growing pressure from a range of environmental issues, including major losses of revenue owing to patent expirations, increasingly cost-constrained healthcare systems and more demanding regulatory requirements. In our view, the key to tackling the challenges such issues pose to both the future viability of the pharmaceutical industry and advances in healthcare is to substantially increase the number and quality of innovative, cost-effective new medicines, without incurring unsustainable R&D costs. However, it is widely acknowledged that trends in industry R&D productivity have been moving in the opposite direction for a number of years.

    Here, we present a detailed analysis based on comprehensive, recent, industry-wide data to identify the relative contributions of each of the steps in the drug discovery and development process to overall R&D productivity. We then propose specific strategies that could have the most substantial impact in improving R&D productivity.
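
    A rough illustration (placeholder numbers, not the article's estimates) of how phase costs and attrition compound into the cost per new molecular entity: each phase's out-of-pocket cost is divided by the probability that a molecule entering that phase eventually reaches launch, which is why late-stage (Phase II/III) attrition dominates.

    ```python
    # Hypothetical per-phase out-of-pocket costs ($M per molecule entering the phase)
    # and per-phase probabilities of advancing to the next phase.
    phases = [
        ("Target-to-hit",  1,  0.80),
        ("Lead opt.",      3,  0.75),
        ("Preclinical",    5,  0.69),
        ("Phase I",       15,  0.54),
        ("Phase II",      23,  0.34),
        ("Phase III",    150,  0.70),
        ("Submission",    40,  0.91),
    ]

    cost_per_nme = 0.0
    p_to_launch = 1.0
    # Work backwards: p_to_launch is the probability that a molecule entering a phase
    # eventually launches, so cost / p_to_launch is that phase's expected spend per launch.
    for name, cost, p_advance in reversed(phases):
        p_to_launch *= p_advance
        contribution = cost / p_to_launch
        cost_per_nme += contribution
        print(f"{name:13s}: ${contribution:7.0f}M per launch")

    print(f"Total out-of-pocket cost per NME: ~${cost_per_nme:,.0f}M (before capitalization)")
    ```

    Even with made-up inputs, the structure shows why reducing Phase II/III attrition has an outsized effect on cost per launch.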

  • 2010-nutt.pdf: “Drug harms in the UK: a multicriteria decision analysis”⁠, David J. Nutt, Leslie A. King, Lawrence D. Phillips, on behalf of the Independent Scientific Committee on Drugs (backlinks)

  • 2009-insua.pdf: ⁠, David Rios Insua, Jesus Rios, David Banks (2009):

    Applications in counterterrorism and corporate competition have led to the development of new methods for the analysis of decision making when there are intelligent opponents and uncertain outcomes.

    This field represents a combination of statistical risk analysis and game theory, and is sometimes called adversarial risk analysis.

    In this article, we describe several formulations of adversarial risk problems, and provide a framework that extends traditional risk analysis tools, such as influence diagrams and probabilistic reasoning, to adversarial problems.

    We also discuss the research challenges that arise when dealing with these models, illustrate the ideas with examples from business, and point out relevance to national defense. [keywords: auctions, decision theory, game theory, influence diagrams]

  • 2008-ziliak.pdf: ⁠, Stephen T. Ziliak (2008-09; backlinks):

    In economics and other sciences, “statistical-significance” is by custom, habit, and education a necessary and sufficient condition for proving an empirical result (Ziliak and McCloskey, 2008; McCloskey and Ziliak, 1996). The canonical routine is to calculate what’s called a t-statistic and then to compare its estimated value against a theoretically expected value of it, which is found in “Student’s” t table. A result yielding a t-value greater than or equal to about 2.0 is said to be “statistically-significant at the 95 percent level.” Alternatively, a regression coefficient is said to be “statistically-significantly different from the null, p < 0.05.” Canonically speaking, if a coefficient clears the 95 percent hurdle, it warrants additional scientific attention. If not, not. The first presentation of “Student’s” test of statistical-significance came a century ago, in “The Probable Error of a Mean” (1908b), published by an anonymous “Student.” The author’s commercial employer required that his identity be shielded from competitors, but we have known for some decades that the article was written by William Sealy Gosset (1876–1937), whose entire career was spent at Guinness’s brewery in Dublin, where Gosset was a master brewer and experimental scientist (E. S. Pearson, 1937). Perhaps surprisingly, the ingenious “Student” did not give a hoot for a single finding of “statistical”-significance, even at the 95 percent level of statistical-significance as established by his own tables. Beginning in 1904, “Student,” who was a businessman besides a scientist, took an economic approach to the logic of uncertainty, arguing finally that statistical-significance is “nearly valueless” in itself.
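
    The “canonical routine” described above takes only a few lines; Gosset's complaint was that clearing the threshold says nothing about the economic size or value of the effect. A toy sketch with made-up yield data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Made-up barley yields (bushels/acre) for 30 paired plots.
    a = rng.normal(60.0, 8.0, size=30)
    b = a + rng.normal(0.5, 8.0, size=30)    # tiny true difference, large variability

    t, p = stats.ttest_rel(b, a)             # "Student's" paired t-test
    print(f"t = {t:.2f}, p = {p:.3f}, 'statistically-significant' = {abs(t) >= 2.0}")
    # Gosset's point: the decision should weigh the size of the difference (here ~0.5
    # bushels/acre) against its economic value and the cost of acting on it,
    # not just whether t clears an arbitrary threshold.
    print(f"mean difference = {np.mean(b - a):.2f} bushels/acre")
    ```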

  • 2007-nice-guidelines-ch8.pdf: “The guidelines manual - Chapter 8: Incorporating health economics in guidelines and assessing resource impact”⁠, NICE (backlinks)

  • 2007-lensberg.pdf: “On the Evolution of Investment Strategies and the Kelly Rule—A Darwinian Approach”⁠, Terje Lensberg, Klaus Reiner Schenk-Hoppé (backlinks)

  • 2006-stewart.pdf: ⁠, Neil Stewart, Nick Chater, Gordon D. A. Brown (2006-08-01; backlinks):

    We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales.

    Instead, we assume that an attribute’s subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision’s context and also the background, real-world distribution of attribute values.

    DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities.

    [Keywords: judgment, decision making, sampling, memory, utility, gains and losses, temporal discounting, subjective probability]
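
    A toy illustration (my own, not the authors' code) of the core DbS computation: an attribute's subjective value is its relative rank within a sample of comparison values drawn from the immediate context and from memory, using only binary ordinal comparisons.

    ```python
    import random

    def dbs_subjective_value(target, context_sample, memory_sample, k=20, rng=random.Random(0)):
        """Decision-by-sampling sketch: subjective value = relative rank of `target`
        among up to k comparison values drawn from the context and from memory."""
        pool = context_sample + memory_sample
        comparisons = rng.sample(pool, k=min(k, len(pool)))
        wins = sum(target > c for c in comparisons)        # binary, ordinal comparisons only
        return wins / len(comparisons)                     # rank within the sample, in [0, 1]

    # Hypothetical: how gains of 300 and 600 feel against small everyday amounts in memory.
    memory = [5, 10, 20, 40, 60, 100, 150, 250, 400, 800, 1500]
    print(dbs_subjective_value(300, context_sample=[280, 320], memory_sample=memory))
    print(dbs_subjective_value(600, context_sample=[280, 320], memory_sample=memory))
    # Because small amounts dominate memory, subjective value is concave in money,
    # one of the regularities the paper derives from this mechanism.
    ```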

  • 2006-smith.pdf: ⁠, James E. Smith, Robert L. Winkler (2006-03-01; backlinks):

    Decision analysis produces measures of value such as expected net present values or expected utilities and ranks alternatives by these value estimates. Other optimization-based processes operate in a similar manner. With uncertainty and limited resources, an analysis is never perfect, so these value estimates are subject to error. We show that if we take these value estimates at face value and select accordingly, we should expect the value of the chosen alternative to be less than its estimate, even if the value estimates are unbiased. Thus, when comparing actual outcomes to value estimates, we should expect to be disappointed on average, not because of any inherent bias in the estimates themselves, but because of the optimization-based selection process. We call this phenomenon the optimizer’s curse and argue that it is not well understood or appreciated in the decision analysis and management science communities. This curse may be a factor in creating skepticism in decision makers who review the results of an analysis. In this paper, we study the optimizer’s curse and show that the resulting expected disappointment may be substantial. We then propose the use of Bayesian methods to adjust value estimates. These Bayesian methods can be viewed as disciplined skepticism and provide a method for avoiding this postdecision disappointment.
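
    A quick simulation (hypothetical numbers) of the optimizer's curse described above: each alternative's value estimate is unbiased, yet the estimate of the alternative we select overstates its true value on average, and shrinking estimates toward a prior mean before selecting removes the expected disappointment.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sims, n_alts = 20_000, 10
    prior_mean, prior_sd, noise_sd = 0.0, 1.0, 1.0

    true_values = rng.normal(prior_mean, prior_sd, (n_sims, n_alts))
    estimates = true_values + rng.normal(0, noise_sd, (n_sims, n_alts))   # unbiased estimates

    # Naive: choose the alternative with the highest raw estimate.
    rows = np.arange(n_sims)
    pick = estimates.argmax(axis=1)
    disappointment = estimates[rows, pick] - true_values[rows, pick]
    print(f"naive expected disappointment:    {disappointment.mean():+.3f}")

    # Bayesian fix: shrink each estimate toward the prior mean before choosing.
    shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    posterior = prior_mean + shrink * (estimates - prior_mean)
    pick_b = posterior.argmax(axis=1)
    disappointment_b = posterior[rows, pick_b] - true_values[rows, pick_b]
    print(f"Bayesian expected disappointment: {disappointment_b.mean():+.3f}")
    ```

    The naive rule shows a large positive average disappointment; the shrunken (posterior-mean) estimates are calibrated even after selection, which is the paper's proposed remedy.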

  • 2006-thorp.pdf: ⁠, Edward O. Thorp (2006; backlinks):

    The central problem for gamblers is to find positive expectation bets. But the gambler also needs to know how to manage his money, i.e., how much to bet. In the stock market (more inclusively, the securities markets) the problem is similar but more complex. The gambler, who is now an “investor”, looks for “excess risk adjusted return”.

    In both these settings, we explore the use of the Kelly criterion, which is to maximize the expected value of the logarithm of wealth (“maximize expected logarithmic utility”). The criterion is known to economists and financial theorists by names such as the “geometric mean maximizing portfolio strategy”, maximizing logarithmic utility, the growth-optimal strategy, the capital growth criterion, etc.

    The author initiated the practical application of the Kelly criterion by using it for card counting in blackjack. We will present some useful formulas and methods to answer various natural questions about it that arise in blackjack and other gambling games. Then we illustrate its recent use in a successful casino sports betting system. Finally, we discuss its application to the securities markets where it has helped the author to make a 30 year total of 80 billion dollars worth of “bets”.

    [Keywords: Kelly criterion, betting, long run investing, portfolio allocation, logarithmic utility, capital growth]

    1. Abstract

    2. Introduction

    3. Coin tossing

    4. Optimal growth: Kelly criterion formulas for practitioners

      1. The probability of reaching a fixed goal on or before n trials
      2. The probability of ever being reduced to a fraction x of this initial bankroll
      3. The probability of being at or above a specified value at the end of a specified number of trials
      4. Continuous approximation of expected time to reach a goal
      5. Comparing fixed fraction strategies: the probability that one strategy leads another after n trials
    5. The long run: when will the Kelly strategy “dominate”?

    6. Blackjack

    7. Sports betting

    8. Wall Street: the biggest game

      1. Continuous approximation
      2. The (almost) real world
      3. The case for “fractional Kelly”
      4. A remarkable formula
    9. A case study

      1. The constraints
      2. The analysis and results
      3. The recommendation and the result
      4. The theory for a portfolio of securities
    10. My experience with the Kelly approach

    11. Conclusion

    12. Acknowledgments

    13. Appendix A: Integrals for deriving moments of E

    14. Appendix B: Derivation of formula (3.1)

    15. Appendix C: Expected time to reach goal

    16. References
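
    For the coin-tossing setting of section 3 above, the Kelly fraction has a closed form: with win probability p, loss probability q = 1 - p, and net odds b, the growth-optimal fraction is f* = (bp - q)/b. A minimal sketch (illustrative numbers, not from the paper):

    ```python
    import numpy as np

    def kelly_fraction(p, b):
        """Kelly fraction for a bet that wins net odds b with probability p, else loses the stake."""
        return (b * p - (1 - p)) / b

    def expected_log_growth(f, p, b):
        """Expected log growth per bet when wagering fraction f of wealth."""
        return p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)

    p, b = 0.52, 1.0                     # e.g. a card counter's slight edge at even odds
    f_star = kelly_fraction(p, b)        # = 0.04 here
    print(f"Kelly fraction: {f_star:.3f}")
    for f in (0.5 * f_star, f_star, 2 * f_star, 4 * f_star):
        print(f"  f = {f:.3f}: expected log growth per bet = {expected_log_growth(f, p, b):+.6f}")
    # Growth peaks at f*, is roughly zero at 2x Kelly, and over-betting (4x Kelly here)
    # turns growth negative, which is part of the case for 'fractional Kelly' in the paper.
    ```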

  • 2006-drescher-goodandreal.pdf: ⁠, Gary Drescher (2006; backlinks):

    In Good and Real⁠, a tour-de-force of metaphysical naturalism, computer scientist Gary Drescher examines a series of provocative paradoxes about consciousness, choice, ethics, quantum mechanics, and other topics, in an effort to reconcile a purely mechanical view of the universe with key aspects of our subjective impressions of our own existence.

    Many scientists suspect that the universe can ultimately be described by a simple (perhaps even deterministic) formalism; all that is real unfolds mechanically according to that formalism. But how, then, is it possible for us to be conscious, or to make genuine choices? And how can there be an ethical dimension to such choices? Drescher sketches computational models of consciousness, choice, and subjunctive reasoning—what would happen if this or that were to occur?—to show how such phenomena are compatible with a mechanical, even deterministic universe.

    Analyses of Newcomb’s Problem (a paradox about choice) and the Prisoner’s Dilemma (a paradox about self-interest vs altruism, arguably reducible to Newcomb’s Problem) help bring the problems and proposed solutions into focus. Regarding quantum mechanics, Drescher builds on Hugh Everett’s interpretation—but presents a simplified formalism, accessible to laypersons—to argue that, contrary to some popular impressions, quantum mechanics is compatible with an objective, deterministic physical reality, and that there is no special connection between quantum phenomena and consciousness.

    In each of several disparate but intertwined topics ranging from physics to ethics, Drescher argues that a missing technical linchpin can make the quest for objectivity seem impossible, until the elusive technical fix is at hand:

    • Chapter 2 explores how inanimate, mechanical matter could be conscious, just by virtue of being organized to perform the right kind of computation.
    • Chapter 3 explains why conscious beings would experience an apparent inexorable forward flow of time, even in a universe whose physical principles are time-symmetric and have no such flow, with everything sitting statically in spacetime.
    • Chapter 4, following [Hugh] Everett, looks closely at the paradoxes of quantum mechanics, showing how some theorists came to conclude—mistakenly, I argue—that consciousness is part of the story of quantum phenomena, or vice versa. Chapter 4 also shows how quantum phenomena are consistent with determinism (even though so-called disproofs of quantum determinism are provably wrong).
    • Chapter 5 examines in detail how it can be that we make genuine choices in a mechanical, deterministic universe.
    • Chapter 6 analyzes Newcomb’s Problem, a startling paradox that elicits some counterintuitive conclusions about choice and causality.
    • Chapter 7 considers how our choices can have a moral component—that is, how even a mechanical, deterministic universe can provide a basis for distinguishing right from wrong.
    • Chapter 8 wraps up the presentation and touches briefly on some concluding metaphysical questions.

  • 2001-garille.pdf: “Stigler’s Diet Problem Revisited”⁠, Susan Garner Garille, Saul I. Gass (backlinks)

  • 2000-gelman.pdf: ⁠, Andrew Gelman (2000-03):

    It is well known that, for estimating a linear treatment effect with constant variance, the optimal design divides the units equally between the 2 extremes of the design space. If the dose-response relation may be nonlinear, however, intermediate measurements may be useful in order to estimate the effects of partial treatments.

    We consider the decision of whether to gather data at an intermediate design point: do the gains from learning about nonlinearity outweigh the loss in efficiency in estimating the linear effect?

    Under reasonable assumptions about nonlinearity, we find that, unless sample size is very large, the design with no interior measurements is best, because with moderate total sample sizes, any nonlinearity in the dose-response will be difficult to detect.

    We discuss in the context of a simplified version of the problem that motivated this work—a study of pest-control treatments intended to reduce asthma symptoms in children.

    [Keywords: asthma, Bayesian inference, dose-response, pest control, statistical-significance]

    [See also: the “bet on sparsity principle”⁠.]

    Figure 2: Mean squared error (as a multiple of σ²/n) for 4 combinations of θ_0.5 as a function of |δ|, the relative magnitude of nonlinearity of the dose-response. The plots show T = 4 and T = 8, which correspond to a treatment effect that is 2 or 4 standard deviations away from zero. The design w = 0 (all the data collected at the 2 extreme points) dominates unless both |δ| and T are large. When the design w = 1⁄3 (data evenly divided between the 3 design points) is chosen, the Bayes estimate has the lowest mean squared error for the range of δ and T considered here.
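
    A small simulation in the spirit of the paper's comparison (a toy parameterization, not the paper's model): estimate the half-dose effect θ_0.5 from the two-extremes design (w = 0) versus the three-point design (w = 1⁄3) when the true dose-response has a bend of size δ, and compare mean squared errors.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def mse_half_dose_effect(use_midpoint, delta, n=30, sigma=1.0, effect=1.0, n_sims=5000):
        """MSE for estimating theta_0.5 = E[y|x=0.5] - E[y|x=0] when the true dose-response
        is effect*x + delta*4*x*(1-x) (a bend that vanishes at the endpoints)."""
        true_theta = 0.5 * effect + delta           # the bend contributes delta at x = 0.5
        errs = []
        for _ in range(n_sims):
            if use_midpoint:
                # w = 1/3 design: n/3 units at each of x = 0, 0.5, 1; estimate theta_0.5 directly.
                y0  = rng.normal(0.0,                  sigma, n // 3).mean()
                y05 = rng.normal(0.5 * effect + delta, sigma, n // 3).mean()
                est = y05 - y0
            else:
                # w = 0 design: n/2 units at each extreme; interpolate linearly to x = 0.5.
                y0 = rng.normal(0.0,    sigma, n // 2).mean()
                y1 = rng.normal(effect, sigma, n // 2).mean()
                est = 0.5 * (y1 - y0)
            errs.append((est - true_theta) ** 2)
        return np.mean(errs)

    for delta in (0.0, 0.3, 0.8):
        print(f"delta={delta}: MSE(w=0)={mse_half_dose_effect(False, delta):.3f}, "
              f"MSE(w=1/3)={mse_half_dose_effect(True, delta):.3f}")
    ```

    The w = 0 design wins unless the nonlinearity δ is large, mirroring the paper's conclusion that interior measurements rarely pay off at moderate sample sizes.
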
  • 1999-adams.pdf: ⁠, N. M. Adams, D. J. Hand (1999-07-01):

    Receiver Operating Characteristic (ROC) curves are popular ways of summarising the performance of two-class classification rules.

    In fact, however, they are extremely inconvenient. If the relative severity of the two different kinds of misclassification is known, then an awkward projection operation is required to deduce the overall loss. At the other extreme, when the relative severity is unknown, the area under an ROC curve is often used as an index of performance. However, this essentially assumes that nothing whatsoever is known about the relative severity—a situation which is very rare in real problems.

    We present an alternative plot which is more revealing than an ROC plot, and we describe a comparative index which allows one to take advantage of anything that may be known about the relative severity of the two kinds of misclassification.

    [Keywords: ROC curve, error rate, loss function, misclassification costs, classification rule, supervised classification]
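
    A hedged sketch of the alternative being advocated: when the relative severity of the two error types is even roughly known, compare classifiers by minimum expected loss at the cost-optimal threshold rather than by area under the ROC curve. Scores, prevalence, and costs below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy scores for two classifiers on the same 1,000 cases (prevalence of positives: 20%).
    y = rng.random(1000) < 0.2
    score_a = np.where(y, rng.normal(1.2, 1.0, 1000), rng.normal(0, 1.0, 1000))
    score_b = np.where(y, rng.normal(2.0, 2.0, 1000), rng.normal(0, 2.0, 1000))

    def expected_loss(scores, y, cost_fn, cost_fp):
        """Minimum expected loss per case over all thresholds, given misclassification costs."""
        thresholds = np.unique(scores)
        losses = [(cost_fn * np.sum(y & (scores < t)) + cost_fp * np.sum(~y & (scores >= t))) / len(y)
                  for t in thresholds]
        return min(losses)

    # Suppose a missed positive is believed to be roughly 10x (or 2x) as costly as a false alarm.
    for cost_fn, cost_fp in [(10, 1), (2, 1)]:
        la = expected_loss(score_a, y, cost_fn, cost_fp)
        lb = expected_loss(score_b, y, cost_fn, cost_fp)
        print(f"costs (FN={cost_fn}, FP={cost_fp}): classifier A loss={la:.3f}, B loss={lb:.3f}")
    ```

    Unlike the AUC, this comparison uses whatever is known about the cost ratio, which is the paper's central point.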

  • 1999-ross.pdf: “Adding Risks: Samuelson's Fallacy of Large Numbers Revisited”⁠, Stephen A. Ross (backlinks)

  • 1995-bohn.pdf: Roger E. Bohn (Management Science, 1995-01)

  • 1995-budescu.pdf: ⁠, David V. Budescu, Thomas S. Wallsten (1995; backlinks):

    This chapter discusses the practical issues that arise because weighty decisions often depend on forecasts and opinions communicated from one person or set of individuals to another.

    The standard wisdom has been that numerical communication is better than linguistic, and therefore, especially in important contexts, it is to be preferred. A good deal of evidence suggests that this advice is not uniformly correct and is inconsistent with strongly held preferences. A theoretical understanding of the preceding questions is an important step toward the development of means for improving communication, judgment, and decision making under uncertainty. The theoretical issues concern how individuals interpret imprecise linguistic terms, what factors affect their interpretations, and how they combine those terms with other information for the purpose of taking action. The chapter reviews the relevant literature in order to develop a theory of how linguistic information about imprecise continuous quantities is processed in the service of decision making, judgment, and communication.

    It presents the current view, which has evolved inductively, substantiates it where the data allow, and suggests where additional research is needed. It also summarizes the research on meanings of qualitative probability expressions and compares judgments and decisions made on the basis of vague and precise probabilities.

    Figure 2: First, second, and third quartiles over subjects of the upper and lower probability limits for each phrase in Experiment 1 of Wallsten et al 1986.

  • 1994-benter.pdf: “Computer Based Horse Race Handicapping and Wagering Systems: A Report”⁠, Donald B. Hausch, Victor SY Lo, William T. Ziemba

  • 1993-kristensen.pdf: ⁠, Anders Ringgaard Kristensen (1993-06-01):

    The observed level of milk yield of a dairy cow or the litter size of a sow is only partially the result of a permanent characteristic of the animal; temporary effects are also involved. Thus, we face a problem concerning the proper definition and measurement of the traits in order to give the best possible prediction of the future revenues from an animal considered for replacement. A trait model describing the underlying effects is built into a model combining a Bayesian approach with a hierarchic Markov process in order to be able to calculate optimal replacement policies under various conditions.
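
    A minimal sketch (a plain Markov decision process, not the paper's Bayesian hierarchic Markov model) of the underlying replacement logic: value iteration decides, state by state, whether keeping the current animal or replacing it maximizes long-run discounted revenue. States, transitions, and revenues are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical states: 0 = low-yield cow, 1 = average, 2 = high-yield.
    rewards = np.array([0.5, 1.0, 1.6])            # net revenue per lactation in each state
    P_keep = np.array([[0.7, 0.3, 0.0],            # yield tends to drift downward over time
                       [0.3, 0.5, 0.2],
                       [0.1, 0.4, 0.5]])
    replacement_cost = 1.2
    heifer_dist = np.array([0.2, 0.6, 0.2])        # state distribution of a replacement heifer
    discount = 0.9

    V = np.zeros(3)
    for _ in range(500):                           # value iteration
        keep = rewards + discount * P_keep @ V
        replace = -replacement_cost + heifer_dist @ (rewards + discount * P_keep @ V)
        V = np.maximum(keep, replace)

    policy = np.where(rewards + discount * P_keep @ V
                      >= -replacement_cost + heifer_dist @ (rewards + discount * P_keep @ V),
                      "keep", "replace")
    print(dict(zip(["low", "average", "high"], policy)))
    ```

    The paper's contribution is to feed Bayesian estimates of the animal's permanent trait (filtered from noisy, temporary effects) into this kind of replacement optimization.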

  • 1991-tsevat.pdf: “PII: S0022-3476(05)83375-X”

  • 1991-meyer.pdf: ⁠, Margaret A. Meyer (1991):

    An organization’s promotion decision between 2 workers is modelled as a problem of boundedly-rational learning about ability. The decision-maker can bias noisy rank-order contests sequentially, thereby changing the information they convey.

    The optimal final-period bias favours the “leader”, reinforcing his likely ability advantage. When optimally biased rank-order information is a sufficient statistic for cardinal information, the leader is favoured in every period. In other environments, bias in early periods may (1) favour the early loser, (2) be optimal even when the workers are equally rated, and (3) reduce the favoured worker’s promotion chances.

  • 1990-ramsey.pdf: “Weight or the Value of Knowledge”⁠, Frank P. Ramsey (backlinks)

  • 1990-pearson-studentastatisticalbiographyofwilliamsealygosset.pdf: “'Student': A Statistical Biography of William Sealy Gosset”⁠, Egon S. Pearson, R. L. Plackett, G. A. Barnard

  • 1990-mellor-frankramseyphilosophicalpapers.pdf: “F. P. Ramsey: Philosophical Papers”⁠, F. P. Ramsey, D. H. Mellor (backlinks)

  • 1989-graves.pdf: ⁠, Paul R. Graves (1989-06):

    L. J. Savage and I. J. Good have each demonstrated that the expected utility of free information [Value of Information] is never negative for a decision maker who updates her degrees of belief by conditionalization on propositions learned for certain. In this paper Good’s argument is generalized to show the same result for a decision maker who updates her degrees of belief on the basis of uncertain information by Richard Jeffrey’s probability kinematics. The Savage/Good result is shown to be a special case of the more general result.
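
    A numerical illustration (my own toy decision problem) of the Savage/Good result that the paper generalizes: for an agent who updates by conditionalization and then acts optimally, the expected value of a free observation is never negative.

    ```python
    import numpy as np

    # Hypothetical decision: act 0 or act 1, payoff depends on which of two states holds.
    payoff = np.array([[10.0, 0.0],     # payoff[act, state]
                       [4.0, 6.0]])
    prior = np.array([0.4, 0.6])
    likelihood = np.array([[0.8, 0.3],  # P(signal | state) for signals 0 and 1
                           [0.2, 0.7]])

    def best_expected_payoff(belief):
        return max(payoff @ belief)

    value_without_info = best_expected_payoff(prior)

    value_with_info = 0.0
    for signal in (0, 1):
        p_signal = likelihood[signal] @ prior
        posterior = likelihood[signal] * prior / p_signal       # conditionalization
        value_with_info += p_signal * best_expected_payoff(posterior)

    print(f"expected payoff without the observation: {value_without_info:.3f}")
    print(f"expected payoff with the free observation: {value_with_info:.3f}")
    print(f"value of information (never negative):     {value_with_info - value_without_info:+.3f}")
    ```

    Graves's generalization replaces the conditionalization step with Jeffrey's probability kinematics for uncertain evidence, and the non-negativity still holds.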

  • 1988-fishburn-nonlinearpreferencesandutilitytheory.pdf: “Nonlinear Preference and Utility Theory”⁠, Peter C. Fishburn

  • 1987-jonker.pdf: “A shortest augmenting path algorithm for dense and sparse linear assignment problems”

  • 1986-wallsten.pdf: ⁠, Thomas S. Wallsten, David V. Budescu, Amnon Rapoport, Rami Zwick, Barbara Forsyth (1986-12-01; backlinks):

    Can the vague meanings of probability terms such as doubtful, probable, or likely be expressed as membership functions over the [0, 1] probability interval? A function for a given term would assign a membership value of 0 to probabilities not at all in the vague concept represented by the term, a membership value of 1 to probabilities definitely in the concept, and intermediate membership values to probabilities represented by the term to some degree.

    A modified pair-comparison procedure was used in 2 experiments to empirically establish and assess membership functions for several probability terms. Subjects performed 2 tasks in both experiments: They judged (1) to what degree one probability rather than another was better described by a given probability term, and (2) to what degree one term rather than another better described a specified probability. Probabilities were displayed as relative areas on spinners.

    Task 1 data were analyzed from the perspective of conjoint-measurement theory, and membership function values were obtained for each term according to various scaling models. The conjoint-measurement axioms were well satisfied and goodness-of-fit measures for the scaling procedures were high. Individual differences were large but stable. Furthermore, the derived membership function values satisfactorily predicted the judgments independently obtained in task 2.

    The results support the claim that the scaled values represented the vague meanings of the terms to the individual subjects in the present experimental context. Methodological implications are discussed, as are substantive issues raised by the data regarding the vague meanings of probability terms.

    Figure 2: First, second, and third quartiles over subjects of the upper and lower probability limits for each phrase in Experiment 1 of Wallsten et al 1986.

    Assessed membership functions over the [0,1] probability interval for several vague meanings of probability terms (e.g., doubtful, probable, likely), using a modified pair-comparison procedure in 2 experiments with 20 and 8 graduate business students, respectively. Subjects performed 2 tasks in both experiments: They judged (A) to what degree one probability rather than another was better described by a given probability term and (B) to what degree one term rather than another better described a specified probability. Probabilities were displayed as relative areas on spinners. Task A data were analyzed from the perspective of conjoint-measurement theory, and membership function values were obtained for each term according to various scaling models. Findings show that the conjoint-measurement axioms were well satisfied and goodness-of-fit measures for the scaling procedures were high. Individual differences were large but stable, and the derived membership function values satisfactorily predicted the judgments independently obtained in Task B. Results indicated that the scaled values represented the vague meanings of the terms to the individual Ss in the present experimental context.

  • 1985-reilly.pdf: ⁠, Richard R. Reilly, James W. Smither (1985-11-01):

    Two methods for estimating dollar standard deviations were investigated in a simulated environment. 19 graduate students with management experience managed a simulated pharmaceutical firm for 4 quarters. Ss were given information describing the performance of sales representatives on 3 job components. Estimates derived using the method developed by Schmidt et al (see record 1981-02231-001) were relatively accurate with objective sales data that could be directly translated to dollars, but resulted in overestimates of means and standard deviations when data were less directly translatable to dollars and involved variable costs. An additional problem with the Schmidt et al procedure involved the presence of outliers, possibly caused by differing interpretations of instructions. The Cascio-Ramos estimate of performance in dollars (CREPID) technique, proposed by W. F. Cascio (1982), yielded smaller dollar standard deviations, but Ss could reliably discriminate among job components in terms of importance and could accurately evaluate employee performance on those components. Problems with the CREPID method included the underlying scale used to obtain performance ratings and a dependency on job component intercorrelations.

  • 1985-aumann.pdf: “Game theoretic analysis of a bankruptcy problem from the Talmud”⁠, Robert J. Aumann, Michael Maschler

  • 2005-howard.pdf: “Influence Diagrams”⁠, Ronald A. Howard, James E. Matheson

  • 1983-howard-readingsondecisionanalysis-v2.pdf: “Readings on the Principles and Applications of Decision Analysis: Volume 2: Professional Collection”⁠, Ronald A. Howard, James E. Matheson

  • 1983-howard-readingsondecisionanalysis-v1.pdf: “Readings on the Principles and Applications of Decision Analysis: Volume 1: General Collection”⁠, Ronald A. Howard, James E. Matheson

  • 1981-frey.pdf: ⁠, Dieter Frey (1981-12-01; backlinks):

    The present experiment determined whether preference for consonant or dissonant information differs when (a) decisions are reversible instead of irreversible, and (b) when different amounts of dissonance are induced. Dissonance was manipulated by having subjects make decisions between alternatives with varying degrees of similarity in attractiveness. Subjects’ preference for consonant information was generally stronger after making irreversible decisions than after making reversible ones. When decisions were irreversible, the relative preference for consonant over dissonant information increased with the similarity in attractiveness of the decision alternatives. When decisions were reversible, the relative preference for consonant information decreased with the similarity in attractiveness of the alternatives. In accordance with earlier investigations on selective exposure, the experimental manipulation did not affect the avoidance of dissonant information. The results are interpreted in terms of both dissonance theory and choice certainty theory.

  • 1981-weerahandi.pdf: “Multi-Bayesian Statistical Decision Theory”⁠, S. Weerahandi, J. V. Zidek

  • 1979-schmidt.pdf: ⁠, Frank L. Schmidt, J. E. Hunter, R. C. McKenzie, T. W. Muldrow (1979-01-01; backlinks):

    Used decision theoretic equations to estimate the impact of the Programmer Aptitude Test (PAT) on productivity if used to select new computer programmers for 1 yr in the federal government and the national economy. A newly developed technique was used to estimate the standard deviation of the dollar value of employee job performance, which in the past has been the most difficult and expensive item of required information. For the federal government and the US economy separately, results are presented for different selection ratios and for different assumed values for the validity of previously used selection procedures. The impact of the PAT on programmer productivity was substantial for all combinations of assumptions. Results support the conclusion that hundreds of millions of dollars in increased productivity could be realized by increasing the validity of selection decisions in this occupation. Similarities between computer programmers and other occupations are discussed. It is concluded that the impact of valid selection procedures on work-force productivity is considerably greater than most personnel psychologists have believed.
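
    The “decision theoretic equations” are essentially the standard Brogden-Cronbach-Gleser selection-utility formula; the sketch below uses that formula with inputs loosely inspired by the programmer example (the numbers are placeholders, not the paper's estimates).

    ```python
    from scipy.stats import norm

    def selection_utility(n_selected, validity_new, validity_old, sd_dollars, selection_ratio,
                          years_on_job=1.0, cost_per_applicant=0.0):
        """Brogden-Cronbach-Gleser utility estimate of a selection procedure (per hiring cohort):
        gain = N * years * (r_new - r_old) * SD_y * mean standardized predictor score of selectees,
        minus testing costs."""
        z_cut = norm.ppf(1 - selection_ratio)
        mean_z_of_selected = norm.pdf(z_cut) / selection_ratio      # normal ordinate / selection ratio
        n_applicants = n_selected / selection_ratio
        gain = (n_selected * years_on_job * (validity_new - validity_old)
                * sd_dollars * mean_z_of_selected)
        return gain - cost_per_applicant * n_applicants

    # Hypothetical inputs: 600 programmers hired per year, SD of yearly performance in dollars
    # of $10,000, test validity 0.76 vs 0.30 for the previous procedure, 1-in-2 selection ratio.
    print(f"${selection_utility(600, 0.76, 0.30, 10_000, 0.5, cost_per_applicant=10):,.0f} gain per year")
    ```

    The paper's innovation was a practical way to estimate the hardest input, the dollar standard deviation of job performance (SD_y).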

  • 1976-box.pdf: ⁠, George E. P. Box (1976-12-01):

    Aspects of scientific method are discussed: In particular, its representation as a motivated iteration in which, in succession, practice confronts theory, and theory, practice. Rapid progress requires sufficient flexibility to profit from such confrontations, and the ability to devise parsimonious but effective models, to worry selectively about model inadequacies and to employ mathematics skillfully but appropriately. The development of statistical methods at Rothamsted Experimental Station by Sir Ronald Fisher is used to illustrate these themes.

    …Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad… In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world.

    It follows that, although rigorous derivation of logical consequences is of great importance to statistics, such derivations are necessarily encapsulated in the knowledge that premise, and hence consequence, do not describe natural truth. It follows that we cannot know that any statistical technique we develop is useful unless we use it. Major advances in science and in the science of statistics in particular, usually occur, therefore, as the result of the theory-practice iteration.

  • 1976-tribe-whenvaluesconflict.pdf: ⁠, Laurence Henry Tribe, Corinne S. Schelling, John Voss (1976):

    When Values Conflict: Essays on Environmental Analysis, Discourse, and Decision is a collection of essays each of which addresses the issue of value conflicts in environmental disputes. These authors discuss the need to integrate such “fragile” values as beauty and naturalness with “hard” values such as economic efficiency in the decision making process. When Values Conflict: Essays on Environmental Analysis, Discourse, and Decision will be of interest to those who seek to include environmentalist values in public policy debates. This work is comprised of seven essays.

    1. In the first chapter, Robert Socolow discusses obstacles to the integration of environmental values into natural resource policy. Technical studies often fail to resolve conflicts, because such conflicts rest on the parties’ very different goals and values. Nonetheless, agreement on the technical analysis may serve as a platform from which to more clearly articulate value differences.
    2. Irene Thomson draws on the case of the Tocks Island Dam controversy to explore environmental decision making processes. She describes the impact the various parties’ interests and values have on their analyses, and argues that the fragmentation of responsibility among institutional actors contributes to the production of inadequate analyses.
    3. Tribe’s essay suggests that a natural environment has intrinsic value, a value that cannot be reduced to human interests. This recognition may serve as the first step in developing an environmental ethic.
    4. Charles Frankel explores the idea that nature has rights. He first explores the meaning of nature, by contrast to the supernatural, technological and cultural. He suggests that appeals to nature’s rights serve as an appeal for “institutional protection against being carried away by temporary enthusiasms.”
    5. In Chapter Five, Harvey Brooks describes three main functions which analysis serves in the environmental decision-making process: they ground conclusions in neutral, generally accepted principles, they separate means from ends, and they legitimate the final policy decision. If environmental values such as beauty, naturalness and uniqueness are to be incorporated into systems analysis, this must be done in such a way as to preserve the basic function of analysis.
    6. Henry Rowen discusses the use of policy analysis as an aid to making environmental decisions. He describes the characteristics of a good analysis, and argues that good analysis can help clarify the issues, and assist in “the design and invention of objectives and alternatives.” Rowen concludes by suggesting ways of improving the field of policy analysis.
    7. Robert Dorfman provides the Afterword for this collection. This essay distinguishes between value and price, and explores the import of this distinction for cost-benefit analysis. The author concludes that there can be no “formula for measuring a project’s contribution to humane values.” Environmental decisions will always require the use of human judgement and wisdom.

    When Values Conflict: Essays on Environmental Analysis, Discourse, and Decision offers a series of thoughtful essays on the nature and weight of environmentalist values. The essays range from a philosophic investigation of natural value to a more concrete evaluation of the elements of good policy analysis.

  • 1976-feiveson-boundariesofanalysis.pdf: ⁠, Harold A. Feiveson, Frank W. Sinden, Robert Harry Socolow (1976):

    This is a study of what happens to technical analyses in the real world of politics. The Tocks Island Dam project proposed construction of a dam on the Delaware River at Tocks Island, five miles north of the Delaware Water Gap. Planned and developed in the early 1960’s, it was initially considered a model of water resource planning. But it soon became the target of an extended controversy involving a tangle of interconnected concerns—floods and droughts, energy, growth, congestion, recreation, and the uprooting of people and communities. Numerous participants—economists, scientists, planners, technologists, bureaucrats and environmentalists—measured, modeled and studied the Tocks Island proposal. The results were a weighty legacy of technical and economic analyses—and a decade of political stalemate regarding the fate of the dam. These analyses, to a substantial degree, masked the value conflicts at stake in the controversy; they concealed the real political and human issues of who would win and who would lose if the Tocks Island project were undertaken. And, the studies were infected by rigid categories of thought and divisions of bureaucratic responsibilities. This collection of original essays tells the story of the Tocks Island controversy, with a fresh perspective on the environmental issues at stake. Its contributors consider the political decision-making process throughout the controversy and show how economic and technological analyses affected those decisions. Viewed as a whole, the essays show that systematic analysis and an explicit concern for human values need not be mutually exclusive pursuits.

  • 1975-thorp.pdf: ⁠, Edward O. Thorp (1975):

    This chapter focuses on the Kelly criterion for long-term capital growth.

    The Kelly (-Bernoulli-Latané or capital growth) criterion is to maximize the expected value E log X of the logarithm of the random variable X, representing wealth. The chapter presents a treatment of the Kelly criterion and Breiman’s results.

    Breiman’s results can be extended to cover many if not most of the more complicated situations which arise in real-world portfolios. Specifically, the number and distribution of investments can vary with the time period, the random variables need not be finite or even discrete, and a certain amount of dependence can be introduced between the investment universes for different time periods. The chapter also discusses a few relationships between the max expected log approach and mean-variance analysis.

    It highlights a few misconceptions concerning the Kelly criterion, the most notable being the fact that decisions that maximize the expected log of wealth do not necessarily maximize expected utility of terminal wealth for arbitrarily large time horizons.

  • 1973-fishburn-theoryofsocialchoice.pdf: “The Theory of Social Choice”⁠, Peter C. Fishburn

  • 1970-samuelson.pdf: “What Makes for a Beautiful Problem in Science?”⁠, Paul A. Samuelson (backlinks)

  • 1967-samuelson.pdf: “General Proof that Diversification Pays”⁠, Paul Samuelson (backlinks)

  • 1966-giaever.pdf: “Optimal Dairy Cow Replacement Policies”⁠, Harald Birger Giaever

  • 1965-black.pdf: “PII: S0019-9958(65)90052-5” (backlinks)

  • 1963-colton.pdf: ⁠, Theodore Colton (1963):

    A simple cost function approach is proposed for designing an optimal clinical trial when a total of n patients with a disease are to be treated with one of two medical treatments.

    The cost function is constructed with but one cost, the consequences of treating a patient with the superior or inferior of the two treatments. Fixed sample size and sequential trials are considered. Minimax, maximin, and Bayesian approaches are used for determining the optimal size of a fixed sample trial and the optimal position of the boundaries of a sequential trial.

    Comparisons of the different approaches are made as well as comparisons of the results for the fixed and sequential plans.
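
    A simplified sketch of the cost-function idea (invented numbers, and a known treatment difference rather than Colton's minimax or Bayesian treatment of the unknown difference): of N patients, 2n enter the trial and the remainder receive whichever treatment looks better, so the expected number given the inferior treatment can be minimized over n.

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_inferior_treatments(n_per_arm, N_total, delta, sigma=1.0):
        """Expected number of patients receiving the inferior treatment when 2n are randomized
        and the remaining N - 2n all get the treatment with the better observed mean."""
        se = sigma * np.sqrt(2.0 / n_per_arm)
        p_wrong_choice = norm.cdf(-delta / se)              # chance the inferior arm looks better
        in_trial = n_per_arm                                # half the trial gets the inferior arm
        after_trial = (N_total - 2 * n_per_arm) * p_wrong_choice
        return in_trial + after_trial

    N, delta = 10_000, 0.2                                  # patient horizon and true effect (in SDs)
    ns = np.arange(5, N // 2, 5)
    costs = [expected_inferior_treatments(n, N, delta) for n in ns]
    best_n = ns[int(np.argmin(costs))]
    print(f"cost-minimizing fixed sample size per arm: {best_n} of N = {N} patients")
    ```

    The optimum balances the patients deliberately exposed to the inferior arm inside the trial against the risk of treating everyone else with it afterwards.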

  • 1962-blackett-studiesofwarnuclearandconventional.pdf: “Studies of War, Nuclear and Conventional”⁠, Patrick Maynard Stuart Blackett

  • 1961-raiffa-appliedstatisticaldecisiontheory.pdf: “Applied Statistical Decision Theory”⁠, Howard Raiffa, Robert Schlaifer (backlinks)

  • 1960-kelley.pdf: ⁠, Henry J. Kelley (1960-10-01; backlinks):

    An analytical development of flight performance optimization according to the method of gradients or ‘method of steepest descent’ is presented. Construction of a minimizing sequence of flight paths by a stepwise process of descent along the local gradient direction is described as a computational scheme. Numerical application of the technique is illustrated in a simple example of orbital transfer via solar sail propulsion. Successive approximations to minimum time planar flight paths from Earth’s orbit to the orbit of Mars are presented for cases corresponding to free and fixed boundary conditions on terminal velocity components.
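
    In its simplest finite-dimensional form, the method of steepest descent is just repeated stepping against the local gradient; a minimal sketch on a toy cost function (not an orbital-transfer problem):

    ```python
    import numpy as np

    def objective(x):
        """Toy cost function standing in for a flight-performance index."""
        return (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2

    def gradient(x):
        return np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])

    x = np.array([0.0, 0.0])
    step = 0.04
    for _ in range(200):                       # descend along the local gradient direction
        x = x - step * gradient(x)
    print(f"minimum found near {x.round(4)}, cost {objective(x):.2e}")
    ```

    Kelley's contribution was extending this stepwise descent to function spaces of flight paths subject to boundary conditions, which is far less trivial than the finite-dimensional version above.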

  • 1960-jewell.pdf: “Letter to the Editor—A Classroom Example of Linear Programming (Lesson Number 2)”⁠, William S. Jewell

  • 1959-lehmann-testingstatisticalhypotheses.pdf: “Testing Statistical Hypotheses (First Edition)”⁠, E. L. Lehmann (backlinks)

  • 1959-schlaifer-probabilitystatisticsbusinessdecisions.pdf: ⁠, Robert Schlaifer (1959; backlinks):

    This book is a non-mathematical introduction to the logical analysis of practical business problems in which a decision must be reached under uncertainty. The analysis which it recommends is based on the modern theory of utility and what has come to be known as the “personal” definition of probability; the author believes, in other words, that when the consequences of various possible courses of action depend on some unpredictable event, the practical way of choosing the “best” act is to assign values to consequences and probabilities to events and then to select the act with the highest expected value. In the author’s experience, thoughtful businessmen intuitively apply exactly this kind of analysis in problems which are simple enough to allow of purely intuitive analysis; and he believes that they will readily accept its formalization once the essential logic of this formalization is presented in a way which can be comprehended by an intelligent layman. Excellent books on the pure mathematical theory of decision under uncertainty already exist; the present text is an endeavor to show how formal analysis of practical decision problems can be made to pay its way.

    From the point of view taken in this book, there is no real difference between a “statistical” decision problem in which a part of the available evidence happens to come from a “sample” and a problem in which all the evidence is of a less formal nature. Both kinds of problems are analyzed by use of the same basic principles; and one of the resulting advantages is that it becomes possible to avoid having to assert that nothing useful can be said about a sample which contains an unknown amount of bias while at the same time having to admit that in most practical situations it is totally impossible to draw a sample which does not contain an unknown amount of bias. In the same way and for the same reason there is no real difference between a decision problem in which the long-run-average demand for some commodity is known with certainty and one in which it is not; and not the least of the advantages which result from recognizing this fact is that it becomes possible to analyze a problem of inventory control without having to pretend that a finite amount of experience can ever give anyone perfect knowledge of long-run-average demand. The author is quite ready to admit that in some situations it may be difficult for the businessman to assess the numerical probabilities and utilities which are required for the kind of analysis recommended in this book, but he is confident that the businessman who really tries to make a reasoned analysis of a difficult decision problem will find it far easier to do this than to make a direct determination of, say, the correct risk premium to add to the pure cost of capital or of the correct level at which to conduct a test of statistical-significance.

    In sum, the author believes that the modern theories of utility and personal probability have at last made it possible to develop a really complete theory to guide the making of managerial decisions—a theory into which the traditional disciplines of statistics and economics under certainty and the collection of miscellaneous techniques taught under the name of operations research will all enter as constituent parts. He hopes, therefore, that the present book will be of interest and value not only to students and practitioners of inventory control, quality control, marketing research, and other specific business functions but also to students of business and businessmen who are interested in the basic principles of managerial economics and to students of economics who are interested in the theory of the firm. Even the teacher of a course in mathematical decision theory who wishes to include applications as well as complete-class and existence theory may find the book useful as a source of examples of the practical decision problems which do arise in the real world.
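
    The decision rule the book formalizes (assign values to consequences and personal probabilities to events, then choose the act with the highest expected value) fits in a few lines; the stocking numbers below are invented.

    ```python
    # Hypothetical stocking decision: profit of each act under each level of demand.
    acts = {
        "stock 100": {"low demand": 200, "high demand": 500},
        "stock 200": {"low demand": -100, "high demand": 900},
    }
    probabilities = {"low demand": 0.6, "high demand": 0.4}   # the businessman's personal probabilities

    def expected_value(consequences):
        return sum(probabilities[event] * value for event, value in consequences.items())

    best_act = max(acts, key=lambda a: expected_value(acts[a]))
    for act, consequences in acts.items():
        print(f"{act}: expected value = {expected_value(consequences):.0f}")
    print("choose:", best_act)
    ```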

  • 1954-tukey.pdf: “Unsolved Problems of Experimental Statistics”⁠, John W. Tukey (backlinks)

  • 1940-preinreich.pdf: “The Economic Life of Industrial Equipment”⁠, Gabriel A. D. Preinreich

  • 1939-pearson.pdf: ⁠, E. S. Pearson (1939-01; backlinks):

    [Egon Pearson describes Student, or Gosset, as a statistician: Student corresponded widely with young statisticians/mathematicians, encouraging them, and having an outsized influence not reflected in his publications. Student’s preferred statistical tools were remarkably simple, focused on correlations and standard deviations, but wielded effectively in the analysis and efficient design of experiments (particularly agricultural experiments), and he was an early decision-theorist, focused on practical problems connected to his Guinness Brewery job—which detachment from academia partially explains why he didn’t publish methods or results immediately or often. The need to handle small n of the brewery led to his work on small-sample approximations rather than, like Pearson et al in the Galton biometric tradition, relying on collecting large datasets and using asymptotic methods, and Student carried out one of the first Monte Carlo simulations.]

  • 1938-fisher.pdf: “Presidential address to the first Indian statistical congress”⁠, R. A. Fisher (backlinks)

  • 1933-elderton.pdf: “The Lanarkshire Milk Experiment”⁠, Ethel M. Elderton (backlinks)

  • 1931-fisher.pdf: “Pasteurised and Raw Milk”⁠, R. A. Fisher, S. Bartlett (backlinks)

  • 1923-student.pdf: “On Testing Varieties of Cereals”⁠, Student (William Sealy Gosset) (backlinks)

  • 2020-sharot.pdf

  • 2018-cohen.pdf

  • 2018-aguilar.pdf

  • 2017-pal.pdf

  • 2016-08-20-candyjapan-decisiontree-n9.csv (backlinks)

  • 2013-djulbegovic.pdf

  • 2013-06-24-schou-devilgametheory.html (backlinks)

  • 2011-ioannidis.pdf (backlinks)

  • 2009-baumann.pdf

  • 2004-parmigiani.pdf

  • 2004-ades.pdf (backlinks)

  • 1997-mcclelland-optimalexperimentdesign.pdf (backlinks)

  • 1996-puskin.pdf

  • 1995-tengs.pdf (backlinks)

  • 1995-pratt-introductionstatisticaldecisiontheory.epub (backlinks)

  • 1995-bohn-2.pdf

  • 1990-rosenthal.pdf

  • 1990-dantzig.pdf (backlinks)

  • 1990-dantzig-dietproblem.pdf

  • 1987-macgregor.pdf (backlinks)

  • 1987-fishburn-interprofileconditionsimpossibility.pdf

  • 1986-stephens-foragingtheory.pdf (backlinks)

  • 1986-lehmann-testingstatisticalhypotheses.pdf (backlinks)

  • 1986-bolton.pdf

  • 1984-tidman-theoperationsevaluationgroup.pdf (backlinks)

  • 1984-thorp-themathematicsofgambling-ch4.pdf (backlinks)

  • 1984-frey.pdf (backlinks)

  • 1983-hauer.pdf

  • 1982-sobel.pdf

  • 1974-balch-essayseconomicbehavioruncertainty.pdf

  • 1972-savage-foundationsofstatistics.pdf (backlinks)

  • 1968-cohen.pdf (backlinks)

  • 1963-anscombe.pdf

  • 1962-dreyfus.pdf

  • 1960-howard-dynamicprogrammingmarkovprocesses.pdf

  • 1957-savage.pdf (backlinks)

  • 1957-luce-gamesanddecisions.pdf

  • 1957-box.pdf

  • 1957-bellman-dynamicprogramming.pdf (backlinks)

  • 1954-hodges.pdf (backlinks)

  • 1952-yates.pdf

  • 1951-bechtoldt.pdf

  • 1950-wald-statisticaldecisionfunctions.pdf

  • 1947-wald-sequentialanalysis.epub

  • 1945-stigler.pdf (backlinks)

  • 1939-taylor.pdf (backlinks)

  • 1937-fisher-thedesignofexperiments.pdf

  • 1935-thompson.pdf

  • 1933-thompson.pdf

  • 1931-student.pdf (backlinks)

  • 1930-leighton-lanarkshiremilkreport.pdf (backlinks)