[4pg discussion of fraud and malpractice in science: the spinach iron leprechaun, Blondlot’s N-rays, plagiarism and data fabrication (A. K. Alsabti, Vijay Soman, John C. Long, William Summerlin, S. Krogh Derr, Robert Gullis, Marc J. Straus, Mark Spector, Gregor Mendel, Ernst Haeckel, Piltdown Man).]
1973-mcclelland.pdf: “Testing for Competence Rather Than for 'Intelligence'”, (1973-01; ):
Argues that while traditional intelligence tests have been validated almost entirely against school performance, the evidence that they measure abilities which are essential to performing well in various life outcomes is weak. Most of the validity studies are correlational in nature and fail to control for the fact that social class might be a 3rd variable accounting for positive correlations between test scores and occupational success, and between level of schooling achieved and occupational success. It is suggested that better measures of competence might be derived by analysis of successful life outcomes and the competencies involved in them, criterion sampling, and assessment of communication skills.
1980-mcclelland.pdf: “Opportunities for Counselors from the Competency Assessment Movement”, (1980-01; ):
Innovations in testing emerging from the competency assessment movement offer counselors new capabilities in helping their clients to understand aspects of themselves and their problems, as well as to establish directions for development and improvement efforts. New types of tests and measures sample actual behavior more closely than testing instruments previously used: The characteristics they examine are closely linked to performance in a wide variety of jobs, and therefore provide increased focus of assessment on life outcomes. With this new degree of specificity and criterion referencing, implications for counseling, training, and development efforts emerge more clearly than with other forms of testing.
1994-mcclelland.pdf: “The knowledge-testing-educational complex strikes back”, (1994; ):
Responds to the criticisms of G. V. Barrett and R. L. Depinet (see record 1992-03797-001) regarding the author’s (1973) article on competence testing. D. C. McClelland disagrees with Barrett and Depinet’s dismissal of competency testing as a poor alternative to ability testing. McClelland holds that well-designed competency-based tests could make an important contribution to selecting people who are better suited for certain jobs, but that these tests will not be developed until there is a strong commitment by psychologists to develop them and the necessary financial support is available.
1991-barrett.pdf: “A reconsideration of testing for competence rather than for intelligence”, (1991; ):
David C. McClelland’s 1973 article has deeply influenced both professional and public opinion. In it, he presented 5 major themes: (1) Grades in school did not predict occupational success, (2) intelligence tests and aptitude tests did not predict occupational success or other important life outcomes, (3) tests and academic performance only predicted job performance because of an underlying relationship with social status, (4) such tests were unfair to minorities, and (5) “competencies” would be better able to predict important behaviors than would more traditional tests. Despite the pervasive influence of these assertions, this review of the literature showed only limited support for these claims.
1994-barrett.pdf: “Empirical data say it all”, (1994-01; ):
Comments that after considering the responses of R. E. Boyatzis (see record 1994-27864-001) and D. C. McClelland (see record 1994-27871-001) and reviewing additional reports by these authors, the conclusions drawn by G. V. Barrett and R. L. Depinet’s (see record 1992-03797-001) article on competence testing are reinforced. If McClelland’s concept of competencies is to make a contribution to psychology, he must present empirical data to support his contention. Three sets of data are presented to illustrate this point.
[Barrett points out that according to McClelland’s own analyses, his proposed screening methods barely predict job performance, are usually not even statistically significant, would violate employment/discrimination law, and that McClelland’s claim that his methods don’t work because of the “knowledge-testing-educational complex” is an excuse.]
2003-barrett.pdf: “New Concepts of Intelligence: Their Practical and Legal Implications for Employee Selection”, (2003; ):
In the 1920s and 1930s basic theories of intellectual ability were developed along with operational tests which proved effective in predicting job performance (Spearman 1927; Thorndike 1936). In a series of studies and meta-analyses throughout the 1970s and 1980s, Schmidt and Hunter showed that cognitive ability was the best overall predictor of job performance (Hunter & Hunter 1984; Hunter 1986; Schmidt & Hunter 1981). Partially in reaction to the meta-analytic findings, research to expand on the definitions of competencies continued. The development of competencies by McClelland (1973) was followed by a discussion of tacit knowledge (Wagner & Sternberg 1985), practical intelligence (Sternberg & Wagner 1986), and multiple intelligence (Gardner 1999). In the 1990s, emotional intelligence became the intelligence of interest (Feist & Barron 1996; Goleman 1995, 1998a, 1998b; Graves 1999; Mayer et al 1990).
All these new theories and proposed measurement instruments pose a challenge to traditional cognitive ability tests since it is claimed that these tests are more valid and have lower adverse impact. It is our contention that many of these tests are nothing more than pop psychology. It is distressing to see such books (i.e. Goleman 1998b) quoted as if they had some merit. We will review the themes present throughout all of these “creative” concepts and examine whether they have practical implications and can withstand legal scrutiny in the public and private sector.
1983-broadus.pdf: “An investigation of the validity of bibliographic citations”, Robert N. Broadus
1989-moed.pdf: “Possible inaccuracies occurring in citation analysis”, (1989-04-01; ):
Citation analysis of scientific articles constitutes an important tool in quantitative studies of science and technology. Moreover, citation indexes are used frequently in searches for relevant scientific documents. In this article we focus on the issue of reliability of citation analysis. How accurate are citation counts to individual scientific articles? What pitfalls might occur in the process of data collection? To what extent do ‘random’ or ‘systematic’ errors affect the results of the citation analysis?
We present a detailed analysis of discrepancies between target articles and cited references with respect to author names, publication year, volume number, and starting page number. Our data consist of some 4,500 target articles published in five scientific journals, and 25,000 citations to these articles. Both target and citation data were obtained from the Science Citation Index, produced by the Institute for Scientific Information.
It appears that in many cases a specific error in a citation to a particular target article occurs in more than one citing publication. We present evidence that authors, in compiling reference lists, may copy references from the reference lists in other articles, and that this may be one of the mechanisms underlying this phenomenon of ‘multiple’ variations/errors.
“Read before you cite!”, (2002-12-03):
We report a method of estimating what percentage of people who cited a paper had actually read it. The method is based on a stochastic modeling of the citation process that explains empirical studies of misprint distributions in citations (which we show follow a Zipf law). Our estimate is that only about 20% of citers read the original.
“Stochastic modeling of citation slips”, (2004-01-27):
We present empirical data on frequency and pattern of misprints in citations to twelve high-profile papers. We find that the distribution of misprints, ranked by frequency of their repetition, follows Zipf’s law. We propose a stochastic model of citation process, which explains these findings, and leads to the conclusion that 70–90% of scientific citations are copied from the lists of references used in other papers.
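The estimation trick behind these two papers can be illustrated with a toy simulation (a minimal sketch, not the authors’ actual model or code; the function name, parameters, and default values are invented for illustration): each new citer either consults the original, occasionally introducing a fresh misprint, or copies a reference, misprint and all, from a randomly chosen earlier citing paper. Repeated misprints can then only arise through copying, which is what lets the misprint distribution bound the fraction of citers who read the source.

```python
import random
from collections import Counter

def simulate_citations(n_citers, p_copy=0.8, p_misprint=0.05, seed=0):
    """Toy citation-copying process: each citer either cites the original
    directly (introducing a fresh misprint with probability p_misprint)
    or copies the reference verbatim from an earlier citation."""
    rng = random.Random(seed)
    citations = []
    fresh = 0  # counter used to label distinct new misprints
    for _ in range(n_citers):
        if citations and rng.random() < p_copy:
            citations.append(rng.choice(citations))  # propagate, errors and all
        elif rng.random() < p_misprint:
            fresh += 1
            citations.append(f"misprint-{fresh}")
        else:
            citations.append("correct")
    return citations

refs = simulate_citations(10_000)
misprint_counts = Counter(r for r in refs if r != "correct")
# Ranking misprint_counts by frequency of repetition gives a heavy-tailed
# (Zipf-like) distribution: a few copied misprints recur many times.
```

With copying switched off (`p_copy=0`), every misprint is unique; the excess of repeated misprints over unique ones is the signal these papers exploit to estimate the copied fraction.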
“Copied citations create renowned papers?”, (2003-05-08):
Recently we discovered (cond-mat/0212043) that the majority of scientific citations are copied from the lists of references used in other papers. Here we show that a model in which a scientist picks three random papers, cites them, and also copies a quarter of their references accounts quantitatively for the empirically observed citation distribution. Simple mathematical probability, not genius, can explain why some papers are cited a lot more than others.
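The model described in this abstract is simple enough to simulate directly. Here is a minimal sketch (the function name and return type are invented; the defaults mirror the paper’s “three random papers, a quarter of their references” rule, but nothing here reproduces the authors’ fitting procedure):

```python
import random
from collections import Counter

def copying_model(n_papers=5000, picks=3, copy_frac=0.25, seed=1):
    """Each new paper picks `picks` random earlier papers, cites them, and
    copies each entry of their reference lists with probability copy_frac.
    Returns a Counter mapping paper index -> citations received."""
    rng = random.Random(seed)
    refs = [[] for _ in range(n_papers)]  # refs[i] = papers cited by paper i
    for i in range(1, n_papers):
        cited = set()
        for _ in range(min(picks, i)):
            j = rng.randrange(i)       # cite a random earlier paper...
            cited.add(j)
            for r in refs[j]:          # ...and copy some of its references
                if rng.random() < copy_frac:
                    cited.add(r)
        refs[i] = sorted(cited)
    return Counter(r for ref_list in refs for r in ref_list)

citation_counts = copying_model()
# Copying produces rich-get-richer feedback: already-cited papers are more
# likely to be copied again, yielding a heavy-tailed citation distribution
# with no quality differences among papers at all.
```

Sorting `citation_counts` shows a handful of early papers accumulating hundreds of citations while most receive a few, purely by chance.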
2007-simkin.pdf: “A mathematical theory of citing”, (2007-07-13; ):
Recently we proposed a model in which when a scientist writes a manuscript, he picks up several random papers, cites them, and also copies a fraction of their references. The model was stimulated by our finding that a majority of scientific citations are copied from the lists of references used in other papers. It accounted quantitatively for several properties of the empirically observed distribution of citations; however, important features such as power-law distributions of citations to papers published during the same year and the fact that the average rate of citing decreases with the aging of a paper were not accounted for by that model. Here, we propose a modified model: when a scientist writes a manuscript, he picks up several random recent papers, cites them, and also copies some of their references. The difference with the original model is the word recent. We solve the model using methods of the theory of branching processes, and find that it can explain the aforementioned features of citation distribution, which our original model could not account for. The model also can explain “sleeping beauties in science”; that is, papers that are little cited for a decade or so and later “awaken” and get many citations. Although much can be understood from purely random models, we find that to obtain a good quantitative agreement with empirical citation data, one must introduce a Darwinian fitness parameter for the papers.
“An introduction to the theory of citing”, (2007-01-03):
Statistical analysis of repeat misprints in scientific citations leads to the conclusion that about 80% of scientific citations are copied from the lists of references used in other papers. Based on this finding, a mathematical theory of citing is constructed. It leads to the conclusion that a large number of citations does not have to be a result of a paper’s extraordinary qualities, but can be explained by the ordinary laws of chance.
2017-sigut.pdf: “Avoiding erroneous citations in ecological research: read before you apply”, (2017-04-24; ):
The Shannon-Wiener index is a popular nonparametric metric widely used in ecological research as a measure of species diversity. We used the Web of Science database to examine cases where papers published from 1990 to 2015 mislabeled this index. We provide detailed insights into causes potentially affecting use of the wrong name ‘Weaver’ instead of the correct ‘Wiener’. Basic science serves as a fundamental information source for applied research, so we emphasize the effect of the type of research (applied or basic) on the incidence of the error. Biological research, especially applied studies, increasingly uses indices, even though some researchers have strongly criticized their use. Applied research papers had a higher frequency of the wrong index name than did basic research papers. The mislabeling frequency decreased in both categories over the 25-year period, although the decrease lagged in applied research. Moreover, the index use and mistake proportion differed by region and authors’ countries of origin. Our study also provides insight into citation culture, and results suggest that almost 50% of authors have not actually read their cited sources. Applied research scientists in particular should be more cautious during manuscript preparation, carefully select sources from basic research, and read theoretical background articles before they apply the theories to their research. Moreover, theoretical ecologists should liaise with applied researchers and present their research for the broader scientific community. Researchers should point out known, often-repeated errors and phenomena not only in specialized books and journals but also in widely used and fundamental literature.
The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors, that is, errors that prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become “accepted fact”. Two suggestions for reducing these high levels of inaccuracy: papers scheduled for publication that contain citation errors should be returned to the author and checked completely, and a permanent column specifically for misquotations could be inserted into the journal.
1987-eichorn.pdf: “Do authors check their references? A survey of accuracy of references in 3 public health journals”, (1987-08-01; ):
We verified a random sample of 50 references in the May 1986 issue of each of 3 public health journals.
31% of the 150 references had citation errors, one out of 10 being a major error (reference not locatable). 30% of the references differed from the authors’ use of them, half of those being major errors (cited paper not related to the author’s contention).
Objective: To understand belief in a specific scientific claim by studying the pattern of citations among papers stating it.
Design: A complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that β amyloid, a protein accumulated in the brain in Alzheimer’s disease, is produced by and injures skeletal muscle of patients with inclusion body myositis. Social network theory and graph theory were used to analyse this network.
Main outcome measures: Citation bias, amplification, and invention, and their effects on determining authority.
Results: The network contained 242 papers and 675 citations addressing the belief, with 220 553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding.
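The “citation paths” counted in these results multiply combinatorially through intermediate papers, which a memoized depth-first count over the citation graph makes concrete. The miniature network below is hypothetical (invented paper labels), not data from the study:

```python
def count_paths(cites, start, end):
    """Number of distinct directed citation paths from `start` to `end`,
    where cites[p] lists the papers that p cites (an acyclic graph, since
    papers can only cite earlier work)."""
    memo = {}
    def walk(p):
        if p == end:
            return 1
        if p not in memo:
            memo[p] = sum(walk(q) for q in cites.get(p, ()))
        return memo[p]
    return walk(start)

# Hypothetical miniature network: "A" is a primary-data paper, "C1" cites it,
# and reviews "R1" and "R2" cite both the primary paper and each other.
cites = {
    "R2": ["R1", "C1"],
    "R1": ["C1", "A"],
    "C1": ["A"],
}
print(count_paths(cites, "R2", "A"))  # 3 paths: R2→R1→A, R2→R1→C1→A, R2→C1→A
```

Because each review inherits all the paths of the papers it cites, a belief network of only a few hundred papers can contain hundreds of thousands of supporting citation paths, as in the study above.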
Citations are an important, but often overlooked, part of every scientific paper. They allow the reader to trace the flow of evidence, serving as a gateway to relevant literature. Most scientists are aware of citation errors, but few appreciate the prevalence or consequences of these problems. The purpose of this study was to examine how often frequently cited papers in the biomedical scientific literature are cited inaccurately. The study included the active participation of the first authors of frequently cited papers, to verify citation accuracy first-hand. The approach was to determine the most-cited original articles and those parent authors we could access, and then to identify, collect, and review all citations of their original work. Findings from a feasibility study, in which we collected and reviewed 1,540 articles containing 2,526 citations of the 14 most-cited articles whose first authors were affiliated with the Faculty of Medicine, University of Belgrade, were further evaluated for external confirmation in an independent verification set of articles. The verification set included 4,912 citations identified in 2,995 articles that cited the 13 most-cited articles published by authors affiliated with the Mayo Clinic Division of Nephrology and Hypertension (Rochester, Minnesota, USA), whose research focus is hypertension and peripheral vascular disease. The most-cited articles and their citations were determined from a SCOPUS database search. A citation was defined as accurate if the cited article supported, or was in accordance with, the statement by the citing authors. A multilevel regression model for binary data was used to determine predictors of inaccurate citations. At least one inaccurate citation was found in 11% and 15% of articles in the feasibility study and verification set, respectively, suggesting that inaccurate citations are common in the biomedical literature. The main findings were similar in both sets.
The most common problem was the citation of nonexistent findings (38.4%), followed by incorrect interpretation of findings (15.4%). One fifth of inaccurate citations were due to “chains of inaccurate citations”, in which inaccurate citations appeared to have been copied from previous papers. Reviews, longer time elapsed from publication to citation, and multiple citations of the same article were associated with a higher chance of a citation being inaccurate. Based on these findings, several actions that authors, mentors, and journals can take to reduce citation inaccuracies and maintain the integrity of the scientific literature are proposed.