Leprechauns (Link Bibliography)

“Leprechauns” links:

  1. Search

  2. https://www.wired.com/story/the-teeny-tiny-scientific-screwup-that-helped-covid-kill/

  3. https://www.amazon.com/Leprechauns-Software-Engineering-Laurent-Bossavit/dp/2954745509

  4. Replication

  5. Littlewood

  6. Hydrocephalus

  7. http://journalofpositivesexuality.org/wp-content/uploads/2018/08/Failure-of-Academic-Quality-Control-Technology-of-Orgasm-Lieberman-Schatzberg.pdf

  8. https://madamescientist.com/2014/04/11/on-the-shoulders-of-giants/#comment-6577

  9. https://en.wikiquote.org/wiki/Talk:William_Thomson#.22X-rays_will_prove_to_be_a_hoax.22

  10. http://sss.sagepub.com/content/44/4/638.long

  11. ⁠, T. J. Hamblin (1981-12-19):

    [4pg discussion of fraud and malpractice in science: the spinach iron leprechaun, Blondlot’s N-rays, plagiarism and data fabrication (A. K. Alsabti, Vijay Soman, John C. Long, William Summerlin, S. Krogh Derr, Robert Gullis, Marc J. Straus, Mark Spector, Gregor Mendel, Ernst Haeckel, Piltdown Man).]

  12. https://www.erwinmayer.com/wp-content/uploads/2010/10/Sutton_Spinach_Iron_and_Popeye_March_2010.pdf

  13. https://dysology.blogspot.com/2017/12/the-spinach-popeye-iron-decimal-error.html

  14. https://journals.sagepub.com/doi/full/10.1177/0306312714535679

  15. https://historiesofecology.blogspot.com/2015/10/the-real-decimal-point-error-that.html

  16. http://patrickmatthew.com/Book%20Reviews.html

  17. https://old.reddit.com/r/wikipedia/comments/57lqr7/of_30_leading_scientists_whose_views_he_sought_29/d8td27r

  18. https://en.wikiquote.org/wiki/Talk:Edsger_W._Dijkstra#Telescope

  19. https://en.wikiquote.org/w/index.php?title=Oliver_Heaviside&type=revision&diff=2949443&oldid=2746598

  20. 1964-goldman.pdf

  21. Tanks

  22. Ads#discussion

  23. 2014-mccaleb

  24. Complement

  25. https://forum.evageeks.org/post/483397/NYT-review-of-_20_/#483397

  26. https://arstechnica.com/tech-policy/2012/06/fbi-halted-one-child-porn-inquiry-because-tor-got-in-the-way/

  27. Littlewood-origin

  28. https://www.lesswrong.com/posts/6vSJe9WXCNvy3Wpoh/the-decline-effect-and-the-scientific-method-link?commentId=CioABB2nPf4goMJ4F

  29. #leprechaun-hunting-and-historical-context

  30. #gwern-littlewood

  31. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C21&q=author%3A%22DC+McClelland%22&btnG=

  32. https://hollisarchives.lib.harvard.edu/repositories/4/archival_objects/1033884

  33. 1973-mcclelland.pdf: ⁠, David C. McClelland (1973-01; iq):

    Argues that while traditional intelligence tests have been validated almost entirely against school performance, the evidence that they measure abilities which are essential to performing well in various life outcomes is weak. Most of the validity studies are correlational in nature and fail to control for the fact that social class might be a 3rd variable accounting for positive correlations between test scores and occupational success, and between level of schooling achieved and occupational success. It is suggested that better measures of competence might be derived by analysis of successful life outcomes and the competencies involved in them, criterion sampling, and assessment of communication skills.

  34. 1980-mcclelland.pdf: ⁠, David C. McClelland, Richard E. Boyatzis (1980-01; iq):

    Innovations in testing emerging from the competency assessment movement offer counselors new capabilities in helping their clients to understand aspects of themselves and their problems, as well as to establish directions for development and improvement efforts. New types of tests and measures sample actual behavior more closely than testing instruments previously used: The characteristics they examine are closely linked to performance in a wide variety of jobs, and therefore provide increased focus of assessment on life outcomes. With this new degree of specificity and criterion referencing, implications for counseling, training, and development efforts emerge more clearly than with other forms of testing.

  35. 1994-mcclelland.pdf: ⁠, David C. McClelland (1994; iq):

    Responds to the criticisms of Barrett and Depinet regarding the author’s (1973) article on competence testing. D. C. McClelland disagrees with Barrett and Depinet’s dismissal of competency testing as a poor alternative to ability testing. McClelland holds that well-designed competency-based tests could make an important contribution to selecting people who are better suited for certain jobs, but that these tests will not be developed until there is a strong commitment by psychologists to develop them and the necessary financial support is available.

  36. 1991-barrett.pdf: ⁠, Gerald V. Barrett, Robert L. Depinet (1991; iq):

    David C. McClelland’s 1973 article has deeply influenced both professional and public opinion. In it, he presented 5 major themes: (1) Grades in school did not predict occupational success, (2) intelligence tests and aptitude tests did not predict occupational success or other important life outcomes, (3) tests and academic performance only predicted job performance because of an underlying relationship with social status, (4) such tests were unfair to minorities, and (5) “competencies” would be better able to predict important behaviors than would more traditional tests. Despite the pervasive influence of these assertions, this review of the literature showed only limited support for these claims.

  37. 1994-barrett.pdf: ⁠, Gerald V. Barrett (1994-01; iq):

    Comments that after considering the responses of R. E. Boyatzis (see record 1994-27864-001) and D. C. McClelland (see record 1994-27871-001) and reviewing additional reports by these authors, the conclusions drawn by G. V. Barrett and R. L. Depinet’s (see record 1992-03797-001) article on competence testing are reinforced. If McClelland’s concept of competencies is to make a contribution to psychology, he must present empirical data to support his contention. Three sets of data are presented to illustrate this point.

    [Barrett points out that according to McClelland’s own analyses, his proposed screening methods barely predict job performance, are usually not even ⁠, would violate employment/​​​​discrimination law, and that McClelland’s claim that his methods don’t work because of the is an excuse.]

  38. 2003-barrett.pdf: ⁠, Gerald V. Barrett, Alissa J. Kramen, Sarah B. Lueke (2003; iq):

    In the 1920s and 1930s basic theories of intellectual ability were developed, along with operational tests which proved effective in predicting job performance (Spearman 1927; Thorndike 1936). In a series of studies and meta-analyses throughout the 1970s and 1980s, Schmidt and Hunter showed that cognitive ability was the best overall predictor of job performance (Hunter & Hunter 1984; Hunter 1986; Schmidt & Hunter 1981). Partially in reaction to the meta-analytic findings, research to expand the definitions of competencies continued. The development of competencies by McClelland (1973) was followed by discussions of tacit knowledge (Wagner & Sternberg 1985), practical intelligence (Sternberg & Wagner 1986), and multiple intelligences (Gardner 1999). In the 1990s, emotional intelligence became the intelligence of interest (Feist & Barron 1996; Goleman 1995, 1998a, 1998b; Graves 1999; Mayer et al 1990).

    All these new theories and proposed measurement instruments pose a challenge to traditional cognitive ability tests, since it is claimed that these tests are more valid and have lower adverse impact. It is our contention that many of these tests are nothing more than pop psychology. It is distressing to see such books (eg. Goleman 1998b) quoted as if they had some merit. We will review the themes present throughout all of these “creative” concepts and examine whether they have practical implications and can withstand legal scrutiny in the public and private sector.

  39. http://tefkos.comminfo.rutgers.edu/Courses/e530/Readings/Nicolaisen%20citation%20analysis%20ARIST%202008.pdf#page=3

  40. #moed-vriens-1989

  41. #broadus-1983

  42. #simkin-roychowdhury-2002-2

  43. 1983-broadus.pdf: “An investigation of the validity of bibliographic citations”⁠, Robert N. Broadus

  44. 1989-moed.pdf: ⁠, H. F. Moed, M. Vriens (1989-04-01; statistics  /​ ​​ ​bias):

    Citation analysis of scientific articles constitutes an important tool in quantitative studies of science and technology. Moreover, citation indexes are used frequently in searches for relevant scientific documents. In this article we focus on the issue of the reliability of citation analysis. How accurate are citation counts to individual scientific articles? What pitfalls might occur in the process of data collection? To what extent do ‘random’ or ‘systematic’ errors affect the results of the citation analysis?

    We present a detailed analysis of discrepancies between target articles and cited references with respect to author names, publication year, volume number, and starting page number. Our data consist of some 4,500 target articles published in five scientific journals, and 25,000 citations to these articles. Both target and citation data were obtained from the Science Citation Index, produced by the Institute for Scientific Information.

    It appears that in many cases a specific error in a citation to a particular target article occurs in more than one citing publication. We present evidence that authors, in compiling reference lists, may copy references from reference lists in other articles, and that this may be one of the mechanisms underlying this phenomenon of ‘multiple’ variations/errors.

  45. ⁠, M. V. Simkin, V. P. Roychowdhury (2002-12-03):

    We report a method of estimating what percentage of people who cited a paper had actually read it. The method is based on stochastic modeling of the citation process that explains empirical studies of misprint distributions in citations (which we show follow a Zipf law). We estimate that only about 20% of citers read the original.
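    To make the inference concrete, here is a minimal Python sketch of a read-or-copy citation model in the spirit of Simkin & Roychowdhury (the parameters and estimator are simplifying assumptions for illustration, not their exact formulation): a citer who reads the original types the reference afresh, occasionally introducing a brand-new misprint, while a citer who copies propagates an earlier citation verbatim. Since every repeated misprint betrays a copier, the share of distinct misprints among all misprints estimates the share of readers.

        import random

        def simulate_citations(n_citers=10_000, p_read=0.2, p_misprint=0.05, seed=0):
            """Read-or-copy citation model (illustrative sketch only)."""
            rng = random.Random(seed)
            citations = []  # one entry per citing paper: 0 = correct, k > 0 = misprint id
            fresh = 0       # counter of distinct, freshly-introduced misprints
            for _ in range(n_citers):
                if not citations or rng.random() < p_read:
                    # reader: types the reference afresh
                    if rng.random() < p_misprint:
                        fresh += 1
                        citations.append(fresh)              # new, distinct misprint
                    else:
                        citations.append(0)                  # correct reference
                else:
                    # copier: lifts an earlier citation verbatim, misprint and all
                    citations.append(rng.choice(citations))
            return citations

        cites = simulate_citations()
        misprints = [c for c in cites if c]
        distinct = len(set(misprints))
        # distinct/total misprints estimates the fraction of citers who read the original
        print(f"{len(misprints)} misprinted citations, {distinct} distinct; "
              f"estimated fraction of readers ~ {distinct / len(misprints):.2f}")

    Running this with `p_read = 0.2` yields an estimate near 20%, reproducing the logic (though not the data) behind their claim.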

  46. https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2006.00202.x

  47. ⁠, M. V. Simkin, V. P. Roychowdhury (2004-01-27):

    We present empirical data on the frequency and pattern of misprints in citations to twelve high-profile papers. We find that the distribution of misprints, ranked by frequency of their repetition, follows Zipf’s law. We propose a stochastic model of the citation process which explains these findings and leads to the conclusion that 70–90% of scientific citations are copied from the lists of references used in other papers.

  48. ⁠, M. V. Simkin, V. P. Roychowdhury (2003-05-08):

    Recently we discovered (cond-mat/0212043) that the majority of scientific citations are copied from the lists of references used in other papers. Here we show that a model in which a scientist picks three random papers, cites them, and also copies a quarter of their references accounts quantitatively for the empirically observed citation distribution. Simple mathematical probability, not genius, can explain why some papers are cited far more than others.
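    A toy simulation of this copying model (a sketch with assumed parameters, not the authors’ code) shows how the copy step alone, with no notion of paper quality, generates a heavy-tailed, rich-get-richer citation distribution:

        import random
        from collections import Counter

        def grow_citation_network(n_papers=20_000, n_picked=3, copy_frac=0.25, seed=0):
            """Each new paper cites n_picked randomly-chosen earlier papers
            and also copies a fraction copy_frac of each one's references."""
            rng = random.Random(seed)
            refs = [[] for _ in range(n_papers)]  # refs[i] = reference list of paper i
            counts = Counter()                    # citations received per paper
            for i in range(1, n_papers):
                picked = [rng.randrange(i) for _ in range(n_picked)]
                cited = set(picked)
                for p in picked:  # copy some of the picked papers' own references
                    cited.update(r for r in refs[p] if rng.random() < copy_frac)
                refs[i] = sorted(cited)
                counts.update(cited)
            return counts

        counts = grow_citation_network()
        hist = sorted(counts.values(), reverse=True)
        print("top 5 citation counts:", hist[:5])
        print("median count among cited papers:", hist[len(hist) // 2])

    Papers that happen to be copied early keep getting re-copied, so a handful accumulate orders of magnitude more citations than the median paper, by chance alone.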

  49. 2007-simkin.pdf: ⁠, Mikhail V. Simkin, Vwani P. Roychowdhury (2007-07-13; statistics  /​ ​​ ​bias):

    Recently we proposed a model in which when a scientist writes a manuscript, he picks up several random papers, cites them, and also copies a fraction of their references. The model was stimulated by our finding that a majority of scientific citations are copied from the lists of references used in other papers. It accounted quantitatively for several properties of the empirically observed distribution of citations; however, important features, such as the distribution of citations to papers published during the same year and the fact that the average rate of citing decreases with the aging of a paper, were not accounted for by that model. Here, we propose a modified model: when a scientist writes a manuscript, he picks up several random recent papers, cites them, and also copies some of their references. The difference from the original model is the word recent. We solve the model using the theory of branching processes, and find that it can explain the aforementioned features of the citation distribution, which our original model could not account for. The model can also explain “sleeping beauties in science”; that is, papers that are little cited for a decade or so and later “awaken” and receive many citations. Although much can be understood from purely random models, we find that to obtain good quantitative agreement with empirical citation data, one must introduce a Darwinian fitness parameter for the papers.

  50. ⁠, M. V. Simkin, V. P. Roychowdhury (2007-01-03):

    Statistical analysis of repeat misprints in scientific citations leads to the conclusion that about 80% of scientific citations are copied from the lists of references used in other papers. Based on this finding, a mathematical theory of citing is constructed. It leads to the conclusion that a large number of citations does not have to be the result of a paper’s extraordinary qualities, but can be explained by the ordinary law of chances.

  51. https://link.springer.com/chapter/10.1007/978-1-4614-0754-6_16

  52. 2017-sigut.pdf: ⁠, Martin Šigut, Hana Šigutová, Petr Pyszko, Aleš Dolný, Michaela Drozdová, Pavel Drozd (2017-04-24; statistics / bias):

    The Shannon-Wiener index is a popular nonparametric metric widely used in ecological research as a measure of species diversity. We used the Web of Science database to examine cases where papers published from 1990 to 2015 mislabeled this index. We provide detailed insights into causes potentially affecting use of the wrong name ‘Weaver’ instead of the correct ‘Wiener’. Basic science serves as a fundamental information source for applied research, so we emphasize the effect of the type of research (applied or basic) on the incidence of the error. Biological research, especially applied studies, increasingly uses indices, even though some researchers have strongly criticized their use. Applied research papers had a higher frequency of the wrong index name than did basic research papers. The mislabeling frequency decreased in both categories over the 25-year period, although the decrease lagged in applied research. Moreover, the index use and mistake proportion differed by region and authors’ countries of origin. Our study also provides insight into citation culture, and results suggest that almost 50% of authors have not actually read their cited sources. Applied research scientists in particular should be more cautious during manuscript preparation, carefully select sources from basic research, and read theoretical background articles before they apply the theories to their research. Moreover, theoretical ecologists should liaise with applied researchers and present their research for the broader scientific community. Researchers should point out known, often-repeated errors and phenomena not only in specialized books and journals but also in widely used and fundamental literature.
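    For reference, the index whose name is at issue (however it is labeled) is the standard Shannon entropy applied to proportional species abundances; a minimal Python version (an illustration, not code from the paper):

        from math import log

        def shannon_wiener(abundances):
            """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i),
            where p_i is the proportional abundance of species i."""
            total = sum(abundances)
            return -sum((n / total) * log(n / total) for n in abundances if n)

        print(shannon_wiener([10, 10, 10, 10]))  # 4 equally-abundant species: ln 4 ~ 1.386
        print(shannon_wiener([37, 1, 1, 1]))     # one dominant species: ~ 0.349, lower diversity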

  53. ⁠, G. de Lacey, C. Record, J. Wade (1985):

    The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors (that is, they prevented immediate identification of the source of the reference). Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become “accepted fact.” Suggestions for reducing these high levels of inaccuracy include returning papers scheduled for publication with citation errors to the author for complete checking, and inserting a permanent column for misquotations into the journal.

  54. 1987-eichorn.pdf: ⁠, Philip Eichorn, Alfred Yankauer (1987-08-01; statistics  /​ ​​ ​bias):

    We verified a random sample of 50 references in the May 1986 issue of each of 3 public health journals.

    31% of the 150 references had citation errors, 1 out of 10 being a major error (reference not locatable). 30% of the references differed from the authors’ use of them, half of these being major errors (cited paper not related to the author’s contention).

  55. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/E5089C474E3DD9ED63267B88C2547468/S0955603600101266a.pdf/div-class-title-accuracy-of-references-in-psychiatric-literature-a-survey-of-three-journals-div.pdf

  56. https://www.researchgate.net/profile/Carole_Nowicke/publication/236212779_Secondary_and_Tertiary_Citing_A_Study_of_Referencing_Behavior_in_the_Literature_of_Citation_Analysis_Deriving_from_the_%27Ortega_Hypothesis%27_of_Cole_and_Cole/links/571e631208aead26e71a88a5.pdf#page=2

  57. ⁠, Steven A. Greenberg (2009-07-21):

    Objective: To understand belief in a specific scientific claim by studying the pattern of citations among papers stating it.

    Design: A complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that β amyloid, a protein accumulated in the brain in Alzheimer’s disease, is produced by and injures skeletal muscle of patients with inclusion body myositis. Social network theory and graph theory were used to analyse this network.

    Main outcome measures: Citation bias, amplification, and invention, and their effects on determining authority.

    Results: The network contained 242 papers and 675 citations addressing the belief, with 220,553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health, obtained through the Freedom of Information Act, showed the same phenomena present and sometimes used to justify requests for funding.

    Conclusion: Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims.
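    To see how 242 papers and 675 citations can support 220,553 citation paths, note that path counts multiply along citation chains; a toy sketch over a hypothetical mini-network (invented for illustration, not Greenberg’s data):

        from functools import cache

        # Edges point from a citing paper to the papers it cites;
        # "primary" holds the papers containing the original supportive data.
        cites = {
            "paper_1":  [],
            "paper_2":  ["paper_1"],
            "review_A": ["paper_1", "paper_2"],
            "review_B": ["review_A", "paper_2"],
            "paper_3":  ["review_A", "review_B"],
        }
        primary = {"paper_1"}

        @cache
        def count_paths(paper):
            """Number of citation paths from `paper` back to a primary source."""
            base = 1 if paper in primary else 0
            return base + sum(count_paths(c) for c in cites[paper])

        for p in cites:
            print(p, count_paths(p))  # 1, 1, 2, 3, 5: counts grow Fibonacci-fashion here

    Each new review multiplies the path counts of everything it cites, so a few hundred citations can compound into hundreds of thousands of paths; this is also why citation bias at a few heavily-cited nodes can distort the apparent support for an entire belief system.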

  58. ⁠, V. Pavlovic, T. Weissgerber, D. Stanisavljevic, T. Pekmezovic, V. Garovic, N. Milic, CITE Investigators (2020-12-10):

    Citations are an important, but often overlooked, part of every scientific paper. They allow the reader to trace the flow of evidence, serving as a gateway to relevant literature. Most scientists are aware of citation errors, but few appreciate the prevalence or consequences of these problems. The purpose of this study was to examine how often frequently-cited papers in the biomedical literature are cited inaccurately. The study included the active participation of the first authors of frequently-cited papers, to verify citation accuracy first-hand; the approach was to determine the most-cited original articles and their parent authors whom we could access, and to identify, collect, and review all citations of their original work.

    Findings from a feasibility study, in which we collected and reviewed 1,540 articles containing 2,526 citations of the 14 most-cited articles whose first authors were affiliated with the Faculty of Medicine, University of Belgrade, were further evaluated for external confirmation in an independent verification set of articles. The verification set included 4,912 citations identified in 2,995 articles citing the 13 most-cited articles published by authors affiliated with the Mayo Clinic Division of Nephrology and Hypertension (Rochester, Minnesota, USA), whose research focus is hypertension and peripheral vascular disease. The most-cited articles and their citations were identified by searching the SCOPUS database. A citation was defined as accurate if the cited article supported, or was in accordance with, the statement made by the citing authors. A multilevel regression model for binary data was used to determine predictors of inaccurate citations.

    At least one inaccurate citation was found in 11% and 15% of articles in the feasibility study and verification set respectively, suggesting that inaccurate citations are common in the biomedical literature. The main findings were similar in both sets. The most common problem was the citation of nonexistent findings (38.4%), followed by incorrect interpretation of findings (15.4%). One fifth of inaccurate citations were due to “chains of inaccurate citations”, in which inaccurate citations appeared to have been copied from previous papers. Reviews, longer time elapsed from publication to citation, and multiple citations were associated with a higher chance of a citation being inaccurate. Based on these findings, several actions that authors, mentors, and journals can take to reduce citation inaccuracies and maintain the integrity of the scientific literature are proposed.