There is a fundamental difference between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. For example, the form of a chair is its physical structure—four legs, a seat, and a back. The meaning of a chair is that it is something you can sit on.
This distinction is important when considering whether or not an AI system can be trained to learn semantic meaning. AI systems are capable of learning and understanding the form of data, but they are not able to attach meaning to that data. In other words, AI systems can learn to identify patterns, but they cannot understand the concepts behind those patterns.
For example, an AI system might be able to learn that a certain type of data is typically associated with the concept of “chair.” However, the AI system would not be able to understand what a chair is or why it is used. In this way, we can see that an AI system trained on form can never learn semantic meaning.
—GPT-3, when I gave it the prompt “Write an essay proving that an AI system trained on form can never learn semantic meaning” 😃
Mitchell Porter:
I just tried the same prompt and it said:
There is a fundamental distinction between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. This distinction is important when considering the capacity of an AI system to learn.
An AI system can be trained to recognize patterns in data. It can learn to identify the form of an object, for example. However, it is not possible for an AI system to learn the meaning of an object. The reason for this is that meaning is a mental construct that exists only in the mind of the observer. It is not something that can be observed or measured in the physical world.
In order for an AI system to learn the meaning of an object, it would need to be able to understand the thoughts and experiences of the people who interact with that object. This is something that AI systems are not currently capable of doing. As such, it is not possible for an AI system to learn semantic meaning.
Biological essentialism, the belief that human attributes are determined by biology, is a core component of essentialist thinking. Previous studies have shown that individual differences in essentialist thinking are associated with heuristic thinking, cognitive ability and style, conservative values, and prejudice. None, however, have examined whether biological essentialism is itself heritable, or the extent to which familial aggregation explains associations with core correlates.
In order to do this, we analyzed data from a genetically informative sample of families with twins in Australia (n = 2,103), as well as general population samples from the UK (n = 501) and the US (n = 500).
Genetic factors had little influence on individual differences in biological essentialism or on its relationship with heuristic thinking. Conservative values were genetically correlated with cognitive styles (ie. need for closure and heuristic thinking).
These findings support a larger role for genes in explaining the relationship of cognitive processes to moral reasoning and ideology than in explaining their association with essentialist thinking.
[Keywords: biological essentialism, motivated cognition, heuristics, moral foundations, need for closure, twin study]
…Biological basis of behavior scale: Endorsement of biological essentialism was assessed using the biological basis of behavior scale (Bastian & Haslam 2006). The scale is composed of 8 items (eg. “The kind of person someone is can be largely attributed to their genetic inheritance”), evaluated on a 6-point Likert scale ranging from Strongly disagree to Strongly agree with no neutral option…The 2 factors extracted for the ‘biological basis of behavior scale’ separated positively-worded items (4 indicators) and negatively-worded items (4 indicators). We interpret the first factor as ‘degree to which someone believes that there are kinds of people and these are genetically determined’ and we will refer to this factor as Genetic Essentialism. Items in the second factor referred to the ‘degree to which someone rejects the idea of any genetic influence on any human trait’ and we will refer to this factor as Genetic Indeterminism.
…Twin Models: At the univariate level, the 7 constructs showed some degree of genetic influence, with cognitive reflection test scores showing the strongest genetic influence (VA = 59%) and genetic essentialism showing the weakest (VA = 17%). The low genetic influence on genetic essentialism along with no influence of shared-environment suggests that individual differences in these beliefs (within the current population) are mostly due to the participants’ unique experiences.
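[To make the twin-model figures concrete: below is a minimal Python sketch of the classical twin-design logic behind estimates like “VA = 59%”, using Falconer’s approximation rather than the structural-equation models actually fitted in the paper; the twin correlations are hypothetical, chosen only to reproduce the reported ballpark figures.]

```python
# Falconer's approximation: decompose trait variance into additive-genetic (A),
# shared-environment (C), and unique-environment (E) components from
# monozygotic (MZ) and dizygotic (DZ) twin correlations.
# Illustrative only -- not the authors' actual models; correlations are made up.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    a = 2 * (r_mz - r_dz)  # MZ twins share ~100% of segregating genes, DZ ~50%
    c = r_mz - a           # environment shared by both twins
    e = 1 - r_mz           # unique experiences (plus measurement error)
    return {"A": a, "C": c, "E": e}

print(falconer_ace(r_mz=0.59, r_dz=0.295))  # A ≈ 0.59: like the CRT scores
print(falconer_ace(r_mz=0.17, r_dz=0.085))  # A ≈ 0.17, C ≈ 0, E ≈ 0.83:
                                            # like Genetic Essentialism above
```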
In general, the genetic correlations were moderate to high while the environmental correlations were low, except Need For Predictability & Need For Order (see Table 4). We found statistically-significant genetic and environmental correlations between Genetic Essentialism and (1) the need for predictability; (2) conservative values; and (3) prejudice, which points towards shared sources of genetic and environmental variation and could also be consistent with a causal relationship between a genetic tendency towards essentialist thinking and those variables (De Moor et al 2008). These correlations were positive, indicating that factors associated with higher endorsement of Genetic Essentialism are associated with more conservative values and prejudice in the context of mental health genetics.
We also found statistically-significant genetic and environmental correlations between conservative values and Need For Predictability, and a statistically-significant genetic correlation between conservative values and cognitive reflection test scores. This correlation was negative, indicating that a tendency to commit heuristic errors and conservative values might share genetic pathways, and that the factors associated with committing fewer heuristic errors would be associated with endorsing less conservative values. Genetic bivariate models showed good fit: RMSEA values were between 0.00 and 0.01, TLI values between 0.97 and 1, and the LRT did not show differences between the saturated model and the fitted model. Variance components, correlations, and percentage of variance explained by each component for the 9 bivariate comparisons are also presented in Table 4.
Why did people across the world and throughout history believe that dreams can foretell what will occur in the future? In this paper, I attempt to answer this question within a cultural evolutionary framework by emphasizing the cognitive aspect of dream interpretation; namely, the fact that dreams were often viewed as meaningful and interpretable has to do with various psychological and social factors that influence how people obtain and process information regarding the validity of dream interpretation as a technique.
Through a comprehensive analysis of a large dataset of dream occurrences in the official Chinese historical records [and dream encyclopedias], I argue that the ubiquity and persistence of dream interpretation have a strong empirical component (predictively accurate dream cases), which is particularly vulnerable to transmission errors and biases. The overwhelmingly successful records of dream prediction in transmitted texts, I suggest, are largely due to the fabrication and retrospective inference of past dreams, as well as the under-reporting of predictive failures [selection bias, post hoc confirmation bias/publication bias]. These “positive data” then reinforce individuals’ confidence in the predictive power of dreams.
Finally, I show a potential decline in the popularity of dream interpretation in traditional China and offer a few suggestive explanations, drawing on the unique characteristics of oneiromancy compared to other divination techniques.
[Keywords: cultural evolution, divination, oneiromancy, China, dream]
…Since this paper focuses on why people believe in the validity of oneiromancy, I propose to classify dreams by their epistemological status. Specifically, dreams as signs that usually need to be interpreted (often with professional expertise) and dreams as messages transmitted by other humans or human-like agents. This distinction is useful because it highlights how the perceived plausibility of the 2 kinds of dreams may be affected by one’s larger theoretical commitment. The famous Eastern Han skeptical thinker, Wang Chong (27–97 CE), for example, denies the possibility of message dreams but would entertain the possibility of certain sign dreams (He 2011).
…2.3. The cultural transmission of oneiromancy instructions and cases: Because of the indispensability of interpretation in sign dreams, there is often an interest in and demand for instructions on how to correctly interpret the content of dreams. In ancient China, there was a rich tradition of collecting and compiling dreams and their associated meanings (Fu 2017; Liu 1989), and some of the most popular compilations, such as The Duke of Zhou’s Explanations of Dreams, can still be purchased in bookstores today (Yun 2013). As mentioned, the other aspect of cultural transmission of oneiromancy, the transmission of actual oneiromancy cases and the associated predictive outcomes (whether the prediction was successful or not), is also important; intuitively, one would not take dreams very seriously if all she hears about oneiromancy are failed predictions.
In China, oneiromancy cases were recorded in historical records, philosophical writings, and a wide range of literary forms (fiction, drama, poetry, etc.) (Liu 1989). During later dynasties, compilations of oneiromancy cases in the form of encyclopedias became popular with improved printing technology and the expansion of book publishing and distribution (Vance 2012, “Textualizing dreams in a Late Ming dream encyclopedia”). These encyclopedias often contained both dream prognostic instructions and actual cases; in an extensive analysis of an oneiromancy encyclopedia, Forest of Dreams compiled in 1636 CE, for example, Vance 2012 shows that it contained not only instructions on how to interpret dreams but also many case descriptions of predictive dreams.
…In total, I collected 793 dream occurrences and recorded information regarding the type of dreams, the dreamer, the interpreter, the interpretation of the dream, and the predictive accuracy of the dream interpretation whenever possible (see Supplementary Material for details)…Figure 3 shows the relative proportion of dreams in terms of their predictive accuracy over historical time, and what is immediately obvious is that most dream occurrences are prophetic and have an associated confirmatory outcome. That is, whenever dreams are mentioned in these official historical records, the readers can expect that they are predictive of some later outcome, which is usually verified.
Figure 3: Relative proportion of dreams of different accuracy types as recorded in official dynastic records by chronological order.
…To what extent were these stories believed? Historical texts do not offer straightforward answers, but we can, nonetheless, get some indirect clues. The famous skeptic during the Eastern Han dynasty, Wang Chong (27–97 CE) made the following comment on the story about how the mother of the first Han emperor Liu Ao dreamed of a dragon which presumably induced the pregnancy:
“From the chronicle of Gaozu (the later founding emperor of the Han dynasty) we learn that dame Liu (mother of Gaozu) was reposing on the banks of a large lake. In her dream she met with a spirit. At the time there was a tempest with thunder and lightning and a great darkness. Taigong (Gaozu’s father) went near, and perceived a dragon above her. She became enceinte and was delivered of Gaozu. These instances of the supernatural action of spirits are not only narrated, but also written down, and all the savants of the day swear by them.” (Lun Heng, Chapter 26 [?], Forke 1907’s translation)
Thus, the story goes that Gaozu’s mother met with a spirit (and presumably had sexual intercourse with it) whose earthly manifestation was a dragon. According to Wang Chong, all the savants believed the veracity of the story, and he felt compelled to make a case against it. Of course, we do not know for sure whether the savants at the time genuinely believed in it or were merely pretending for political reasons. I suggest that some, perhaps many, of them were genuine believers; even Wang Chong himself, who argued against this kind of supernatural pregnancy, believed that when great men are born, there will be signs occurring either in reality or dreams; he just does not believe that nonhuman species, such as dragons, can have sexual intercourse with humans.
…To get a better sense of the number of such “political justification” dreams, I computed the percentage of such dreams out of the total number of dreams in different historical periods (Table 1).
From Table 1, we can clearly see that in all 3 historical periods (the reason for using the Southern-Northern Dynasties as the dividing period will be made clear in Section 3.4), there is a nontrivial proportion of recorded dreams of such type. The percentage of dreams that could be used to justify political power is slightly higher in the pre-Southern-Northern Dynasties period and remains roughly constant in the later 2 periods.
In addition to intentional fabrication, some dreams may be “false memories”; that is, individuals may falsely remember and report dreams that they never experienced if these dreams were expected in the community. Recent psychological research on dreams has suggested that the encoding of memories of dreams may share the same neurocognitive basis as autobiographical memory and thus be subject to false memory (Beaulieu-Prévost & Zadra 2015). Psychologists have long known that subjective dream reports are often unreliable (Schwitzgebel 2011, Perplexities of consciousness), and both theoretical accounts and empirical studies (Beaulieu-Prévost & Zadra 2015) have suggested that false memories may occur quite often in dreams (Rosen 2013). In particular, Rosen 2013 points out there is often substantial memory loss in dream recall, which may lead to a “fill in the blanks” process.
While the dreamer may fabricate or falsely remember their dreams, the observer can also infer dreams retrospectively. Historians in ancient China often had an “if there is an outcome, then there must be a sign” mentality (Zheng 2014) when recording events that were supposed to be predicted by divination. Similarly, Vance 2012, in her extensive treatment of dream interpretation in the Ming dynasty, argues that written and transmitted dreams often reveal not what the dreamers actually dreamed of but what the recorder believed about the dreams. In my dataset, a substantial proportion of the dreams (11%) were described in a retrospective and explanatory manner, marked by the phrase “in the beginning” (chu). This way of writing gives the impression that the authors were trying to find signs that had already foretold the fate of individuals in order to create a coherent narrative.
Therefore, it is likely that the retelling and recording of dreams involved an imaginative and inferential process. Li 1999 points out that in early Chinese historical writing, authors may present cases where multiple individuals shared the same dream to prove its objective veracity. In my dataset, 1.3% of total dreams were reported to have multiple dreamers, and in the most extreme case, hundreds of people were said to have dreamed of the same thing. Although this is not statistically impossible, we can safely conclude (unless we seriously entertain the possibility of ghosts and spirits sending dream messages to multiple individuals simultaneously) that there was either some serious fabrication or false inference.
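[How safe is that conclusion? A back-of-envelope Python sketch; the per-person probability below is a made-up assumption for illustration, not a figure from the paper.]

```python
import math

# Generous assumption (hypothetical): any one person has a 1-in-1,000 chance of
# dreaming some given specific motif on a given night.
p_one = 1e-3
n_dreamers = 200  # "hundreds of people" dreaming the same thing

# Probability that all of them independently have that dream on the same night.
# Done in log space, since 1e-3 ** 200 underflows an ordinary float to 0.
log10_p = n_dreamers * math.log10(p_one)
print(f"~1e{log10_p:.0f}")  # ~1e-600: not logically impossible, but effectively so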
3.3. Under-reporting of failed dream predictions/wrong dream interpretations: In addition to the fabrication/retrospective inference of oneiromancy cases, under-reporting of failed predictions very likely existed to a substantial extent. The Song historian and philosopher Lü Zuqian (1137–1181 CE) made the following statement when commenting on the Confucian text Zuo Zhuan (~500 BCE) regarding the accuracy of divination predictions:
“Some people ask: ‘Zuo’s record of crackmaking and milfoil divination cases were so amazing and spectacular; given such predictive accuracy, why are there so few [records] of them?’ The answer: ‘from the Lord Yin [Duke Yin of Lu] till Lord Ai was a total of 222 years. Kings, lords, dukes, the literati and the commoner perhaps made tens of thousands of divinations, and only tens of the efficacious cases were recorded in Zuo’s book. These tens of cases were collected in Zuo’s book and therefore feel like a lot; if they were dispersed into the 222 years it would feel extremely rare. If divination cases were of deceptive nature or had failed predictions, they would not have been transmitted during their time and not be recorded in the book. I do not know how many tens of thousands of them were missed. If we had all of them [recorded], they would not be so rare.’” (Donglai Zuoshi Boyi)
The early Qing scholar Xiong Bolong (1616–1669 CE) commented more specifically on using dream signs to predict the sex of the fetus:
It is not the case that all pregnant women have the same type of dreams, and it is not the case that if [she] dreams of certain signs she must give birth to a son or a daughter. There are also instances where one dreams of a bear yet gives birth to a daughter, and instances where one dreams of a snake and gives birth to a son. The poets [diviners] tell of the cases where their predictions are fulfilled and do not talk about the cases where their predictions fail. (Wuhe Ji)
…my own fieldwork in southwest China among the Yi shows that many people are unwilling to reveal the divination or healing ritual failures of local shamans because these shamans are often friends and neighbors of the clients and there is the concern that spreading “accidental” failures may taint their reputation (Hong, submitted).
…As we have argued elsewhere, under-reporting of failed predictions may be a prevalent feature of divination in ancient societies (Anonymized, forthcoming). By selectively omitting failed predictions, these transmitted texts give a false impression that dream interpretations are overwhelmingly accurate, which, along with fabrication and ad hoc inference of predictive dreams, serves as a powerful mechanism to empirically sustain the validity of oneiromancy.
We estimated the degree to which language used in the high-profile medical/public health/epidemiology literature implied causality using language linking exposures to outcomes and action recommendations; examined disconnects between language and recommendations; identified the most common linking phrases; and estimated how strongly linking phrases imply causality.
We searched for and screened 1,170 articles from 18 high-profile journals (65 per journal) published 2010–2019. Based on written framing and systematic guidance, three reviewers rated the degree of causality implied in abstracts and full text for exposure/outcome linking language and action recommendations.
Reviewers rated the causal implication of exposure/outcome linking language as None (no causal implication) in 13.8%, Weak in 34.2%, Moderate in 33.2%, and Strong in 18.7% of abstracts. The implied causality of action recommendations was higher than the implied causality of linking sentences for 44.5% of articles, or commensurate for 40.3%. The most common linking word in abstracts was “associate” (45.7%). Reviewers’ ratings of linking word roots were highly heterogeneous; over half of reviewers rated “association” as having at least some causal implication.
This research undercuts the assumption that avoiding “causal” words leads to clarity of interpretation in medical research.
Figure 5: Strength of causal implication ratings for the most common root linking words. This chart shows the distribution of ratings given by reviewers during the root word rating exercise. On the left side, they are sorted by median rating + the number of reviewers who would have to change their ratings in order for the rating to change. On the right, the chart is sorted alphabetically.
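[One plausible reading of that left-hand ordering, sketched in Python: sort root words by median rating, tie-broken by how resistant the median is to individual reviewers changing their ratings. Both the ratings and the resistance proxy here are hypothetical, not data from the paper.]

```python
import statistics

# Hypothetical reviewer ratings on a 0 (None) to 3 (Strong) causal-implication scale.
ratings = {
    "associate": [0, 1, 1, 1, 2],
    "predict":   [1, 1, 2, 2, 2],
    "cause":     [3, 3, 3, 3, 3],
}

def sort_key(word: str) -> tuple:
    rs = ratings[word]
    med = statistics.median(rs)
    # Proxy for stability: how many ratings sit exactly at the median
    # (the more there are, the more reviewers must move before it changes).
    resistance = sum(1 for r in rs if r == med)
    return (med, resistance)

for word in sorted(ratings, key=sort_key):
    print(word, sort_key(word))
# associate (1, 3) < predict (2, 3) < cause (3, 5)
```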
Children are not reliably accurate in identifying the origins of common foods.
41% of children claimed that bacon came from a plant.
Children do not judge animals to be appropriate food sources.
Most 6–7-year-olds classified chicken, cows, and pigs as not OK to eat.
Children’s food concepts may help to normalize environmentally-responsible diets.
Eating a plant-based diet is one of the most effective ways people can reduce their carbon footprint. However, global consumption of meat and other animal products is increasing. Studying children’s beliefs about food may shed light on the relationship between eating behaviors and climate change. Here, we examined children’s knowledge of the plant and animal origins of foods, as well as children’s judgments of what can be eaten, using 2 dichotomous sorting tasks. The sample consisted of 4–7-year-old children from the United States. We found pervasive errors in their basic food knowledge. Foods derived from animals—especially, but not exclusively, meats—were among those that children understood the least well. We suggest that the results may reveal a fundamental misunderstanding in children’s knowledge of animal-based foods, and we discuss reasons why the origins of meat may represent a particularly challenging concept for children to grasp. We end by considering the role that children may play as agents of environmental protection.
[Keywords: sustainable diets, animals, meat eating, meat paradox [Omnivores eat foods that entail animal suffering and death while at the same time endorsing the compassionate treatment of animals—a phenomenon referred to as the meat paradox], climate change, children]
15 years ago, Sharon Street and Richard Joyce advanced evolutionary debunking arguments against moral realism, which purported to show that the evolutionary history of our moral beliefs makes moral realism untenable. These arguments have since given rise to a flurry of objections; the epistemic principles Street and Joyce relied upon, in particular, have come in for a number of serious challenges. My goal in this paper is to develop a new account of evolutionary debunking which avoids the pitfalls Street and Joyce encountered and responds to the most pressing objections they faced. I begin by presenting a striking thought experiment to serve as an analogy for the evolution of morality; I then show why calibrationist views of higher-order evidence are crucial to the evolutionary debunking project; I outline a new rationale for why finding out that morality was selected to promote cooperation suggests that our moral judgments are unreliable; and I explain why evolutionary debunking arguments do not depend on our having a dedicated faculty for moral cognition. All things considered, I argue, evolutionary debunking arguments against moral realism are on relatively secure footing—provided, at least, that we accept a calibrationist account of higher-order evidence. [Keywords: evolutionary debunking, moral realism, meta-ethics, evolution of morality, higher-order evidence, calibrationism]
…Sharon Street (2006, 2015) and Richard Joyce (2006, 2013, 2016) have both advanced evolutionary debunking arguments which purport to show that, if moral realism is true, our moral beliefs are systematically unjustified. These arguments are motivated by recent empirical work on the evolution of morality, work which suggests that the human moral sense was selected chiefly to promote cooperation among small tribes of hunter-gatherers in our distant evolutionary past. If, however, our moral sense evolved due to the positive contribution that cooperation made to our ancestors’ reproductive fitness, it becomes something of a mystery how it could also succeed in tapping into a well of mind-independent moral truths. It seems like it would be an extraordinary coincidence—in Street’s words, nothing short of a miracle—if evolutionary forces indifferent to the moral truth somehow shaped our faculties to be appropriately sensitive to it.
…This concludes my case for the thesis that evolutionary debunking arguments, properly formulated, present a powerful challenge to moral realism. I have said little, however, about how I think we should conceive of morality, if it is not to be construed realistically. Although I do not have space to discuss my own views at any length here, I suggest we should take seriously the proposal that morality is an adaptive illusion, one built into our minds by natural selection in order to facilitate cooperation among our hunter-gatherer ancestors. Thus, the metaphysics of morality is the metaphysics of illusions, the epistemology of morality is the epistemology of illusions, and the semantics of morality is the semantics of illusions. I do not believe morality is unique in this respect—I follow Daniel Dennett (1991, 2013, 2016, 2017) in thinking that much of our conscious interface with the world, what he calls the manifest image, is an illusion created by selection to aid us in navigating our physical and social environment. It takes only a little reflection on the aim and workings of natural selection to convince yourself that this might be so. Selection’s focus on survival and reproduction is single-minded and absolute; it has no special love for truth, and it will eagerly pack our minds with illusions, other evolutionary constraints permitting, whenever doing so contributes to our reproductive fitness. It should come as no surprise, then, if the moral sense, which presents itself as a window onto a mind-independent domain of morals, instead turns out to be a sham mirror pointed squarely back into our evolutionary past.
Do psychological traits predict philosophical views?
We administered the PhilPapers Survey, created by David Bourget and David Chalmers, which asks about 30 views on central philosophical topics (eg. epistemology, ethics, metaphysics, philosophy of mind, and philosophy of language), to a sample of professional philosophers (n = 314). We extended the PhilPapers survey to measure a number of psychological traits, such as personality, numeracy, well-being, lifestyle, and life experiences. We also included non-technical ‘translations’ of these views for eventual use in other populations.
We found limited to no support for the notion that personality or demographics predict philosophical views. We did, however, find that some psychological traits were predictive of philosophical views, even after strict correction for multiple comparisons. Findings include: higher interest in numeracy predicted physicalism, naturalism, and consequentialism; lower levels of well-being and higher levels of mental illness predicted hard determinism; using substances such as psychedelics and marijuana predicted non-realist and subjectivist views of morality and aesthetics; having had a transformative or self-transcendent experience predicted theism and idealism.
We discuss whether or not these empirical results have philosophical implications, while noting that 68% of our sample of professional philosophers indicated that such findings would indeed have philosophical value.
Table 5: Pre-registered hypothesized relationships between psychological traits and philosophical views. The Anti-Naturalism factor consists of the following items (from Bourget & Chalmers 2014): Freewill: Libertarian, Mind: Nonphysicalism, God: Theism, Meta-Philosophy: Non-Naturalism, Zombies: Metaphysically Possible, and Personal Identity: Further Fact. Statistically-significantly correlated items from the Anti-Naturalism factor are shown indented and in italics, whereas non-significantly correlated items from the Anti-Naturalism factor are not shown. As these hypotheses were planned (and pre-registered), they are not corrected for multiple comparisons. ✱p < 0.05. ✱✱p < 0.01.
People frequently see design in nature, reflecting intuitive teleological thinking—that is, the order in nature that supports life suggests it was designed for that purpose. This research proposes that such inferences are stronger when nature supports human life specifically.
Five studies (n = 1,788) examine evidence for an anthro-teleological bias. People agreed more with design statements framed to aid humans (eg. “Trees produce oxygen so that humans can breathe”) than the same statements framed to aid other targets (eg. “Trees produce oxygen so that leopards can breathe”).
The bias was greatest when advantages for humans were well-known and salient (eg. the ozone layer) and decreased when advantages for other targets were made explicit. The bias was not eliminated by highlighting the benefits for other species, however, and emerged spontaneously for novel phenomena (“Jupiter’s gravity protects Earth from asteroids”).
We conclude that anthropocentric biases enhance existing teleological biases to see stronger design in phenomena where it enables human survival.
Representational measurement theory was proposed initially to solve problems caused by disciplinary aspirations of 19th-century mathematicians, who wanted to construe their subject as independent of its applications in empirical science. Half a century later, S. S. Stevens seized the opportunity provided by representational theory’s reconstruction of measurement as numerical coding to rubber-stamp psychology’s own aspirations to be counted as a quantitative science. Patrick Suppes’ version of representational theory rectified defects in Stevens’ theory, making it explicit that representational theory entails that mathematical structure is already embedded in empirical systems. However, Suppes’ theory neglected the fact that attributes, not objects, are the focus of measurement and when that oversight is corrected, it follows that empirical systems sustaining measurement already instantiate positive real numbers. Thus, in measurement, real numbers are estimated, not assigned from without. Representational theory not only misrepresents measurement; it refutes itself.
[Mock-serious literary exegesis by the author of Unsong of infamously-bad Harry Potter fanfiction “My Immortal”; Alexander sets out to try to explain the plot as a cunningly-concealed allegory of a Rosicrucian initiate’s failure to carry through the great work of alchemy and a revision of Goethe’s Faust Part II, drawing on Carl Jung’s interpretation of alchemy as a metaphor for spiritual transformation.
As such, this explains the heavy color symbolism, the protagonist’s failure to consummate her relationship with Draco Malfoy, the author’s inability to distinguish Harry Potter from Rubeus Hagrid, the fourth-wall-breaking towards the end, and the ending itself, in which the protagonist, a self-insert of the author, escapes death and is reborn as the author herself.]
Aim: The study characterized the subjective phenomena, interpretation, and persisting changes that people attribute to N,N-dimethyltryptamine-occasioned entity encounter experiences.
Methods: 2,561 individuals (mean age 32 years; 77% male) completed an online survey about their single most memorable entity encounter after taking N,N-dimethyltryptamine.
Results: Respondents reported the primary senses involved in the encounter were visual and extrasensory (eg. telepathic). The most common descriptive labels for the entity were being, guide, spirit, alien, and helper. Although 41% of respondents reported fear during the encounter, the most prominent emotions both in the respondent and attributed to the entity were love, kindness, and joy. Most respondents endorsed that the entity had the attributes of being conscious, intelligent, and benevolent, existed in some real but different dimension of reality, and continued to exist after the encounter. Respondents endorsed receiving a message (69%) or a prediction about the future (19%) from the experience. More than half of those who identified as atheist before the experience no longer identified as atheist afterwards. The experiences were rated as among the most meaningful, spiritual, and psychologically insightful lifetime experiences, with persisting positive changes in life satisfaction, purpose, and meaning attributed to the experiences.
Conclusion: N,N-dimethyltryptamine-occasioned entity encounter experiences have many similarities to non-drug entity encounter experiences such as those described in religious, alien abduction, and near-death contexts. Aspects of the experience and its interpretation produced profound and enduring ontological changes in worldview.
Criticizing studies and statistics is hard in part because so many criticisms are possible, rendering them meaningless. What makes a good criticism is the chance of being a ‘difference which makes a difference’ to our ultimate actions.
Scientific and statistical research must be read with a critical eye to understand how credible the claims are. The Reproducibility Crisis and the growth of meta-science have demonstrated that much research is of low quality and often false.
But there are so many possible things any given study could be criticized for, falling short of an unobtainable ideal, that it becomes unclear which possible criticism is important, and they may degenerate into mere rhetoric. How do we separate fatal flaws from unfortunate caveats from specious quibbling?
I offer a pragmatic criterion: what makes a criticism important is how much it could change a result if corrected and how much that would then change our decisions or actions: to what extent it is a “difference which makes a difference”.
This is why issues of research fraud, causal inference, or biases yielding overestimates are universally important: because a ‘causal’ effect turning out to be zero effect or grossly overestimated will change almost all decisions based on such research; while on the other hand, other issues like measurement error or distributional assumptions, which are equally common, are often not important: because they typically yield much smaller changes in conclusions, and hence decisions.
If we regularly ask whether a criticism would make this kind of difference, it will be clearer which ones are important criticisms, and which ones risk being rhetorical distractions and obstructing meaningful evaluation of research.
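[A toy Python illustration of the criterion: the same nominal “flaw” matters enormously or not at all, depending on whether correcting it flips the decision. All numbers are hypothetical.]

```python
# Decision rule: adopt an intervention if its estimated benefit exceeds its cost.
def decide(effect: float, cost: float = 0.05) -> str:
    return "adopt" if effect > cost else "reject"

claimed_effect = 0.30

# Criticism 1: confounding -- the 'causal' effect is really ~0.
print(decide(claimed_effect), "->", decide(0.0))   # adopt -> reject: decision flips

# Criticism 2: measurement error attenuated the estimate by ~10%.
print(decide(claimed_effect), "->", decide(0.27))  # adopt -> adopt: no difference
```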
Serendipity can come in different forms and come about in a variety of ways.
The Merton archives were used as a starting point for gathering literature and examples.
We identify 4 types of serendipity together with 4 mechanisms of serendipity.
Policy and theory implications vary by type and mechanism of serendipity.
Serendipity does not necessarily strengthen basic research rationales.
Serendipity does not necessarily weaken rationales for targeted research.
Serendipity, the notion of researchers making unexpected and beneficial discoveries, has played an important role in debates about the feasibility and desirability of targeting public R&D investments. The purpose of this paper is to show that serendipity can come in different forms and come about in a variety of ways. The archives of Robert K. Merton, who introduced the term to the social sciences, were used as a starting point for gathering literature and examples. I identify 4 types of serendipity (Walpolian, Mertonian, Bushian, Stephanian) together with 4 mechanisms of serendipity (Theory-led, Observer-led, Error-borne, Network-emergent). I also discuss implications of the different types and mechanisms for theory and policy.
Paranormal beliefs (PBs), such as the belief in the soul, or in extrasensory perception, are common in the general population. While there is information regarding what these beliefs correlate with (eg. cognitive biases, personality styles), there is little information regarding the causal direction between these beliefs and their correlates.
To investigate the formation of beliefs, we use an experimental design, in which PBs and belief-associated cognitive biases are assessed before and after a central event: a magic performance (see also Mohr et al 2018). In the current paper, we report a series of studies investigating the “paranormal potential” of magic performances (Study 1, n = 49; Study 2, n = 89; Study 3, n = 123). We investigated (1) which magic performances resulted in paranormal explanations, and (2) whether PBs and a belief-associated cognitive bias (ie. repetition avoidance) became enhanced after the performance. Repetition avoidance was assessed using a random number generation task. After the performance, participants rated to what extent the magic performance could be explained in psychic (paranormal), conjuring, or religious terms.
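[A minimal Python sketch of how repetition avoidance can be scored in a random number generation task, by comparing observed immediate repetitions against the chance expectation; the paper’s exact index is not spelled out here, so treat this scoring rule as an illustrative assumption.]

```python
def repetition_score(seq: list[int], n_alternatives: int = 10) -> dict:
    """Compare observed immediate repetitions (eg. ...7, 7...) with the count
    expected from a truly random sequence of the same length."""
    observed = sum(1 for a, b in zip(seq, seq[1:]) if a == b)
    expected = (len(seq) - 1) / n_alternatives
    return {"observed": observed, "expected": expected,
            "avoidance": expected - observed}  # higher = more repetition avoidance

# A participant who never repeats a digit shows maximal repetition avoidance:
print(repetition_score([1, 4, 7, 2, 9, 5, 3, 8, 0, 6]))
# {'observed': 0, 'expected': 0.9, 'avoidance': 0.9}
```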
We found that conjuring explanations were negatively associated with religious and psychic explanations, whereas religious and psychic explanations were positively associated. Enhanced repetition avoidance correlated with higher PBs ahead of the performance. We also observed a statistically-significant increase in psychic explanations and a drop in conjuring explanations when performances involved powerful psychic routines (eg. the performer contacted the dead).
While the experimentally induced enhancement of psychic explanations is promising, future studies should account for potential variables that might explain the absence of framing and before-after effects (eg. emotion, attention). Such effects are essential for understanding the formation and manipulation of belief.
Nuance is not a virtue of good sociological theory. Although often demanded and superficially attractive, nuance inhibits the abstraction on which good theory depends. I describe three “nuance traps” common in sociology and show why they should be avoided on grounds of principle, aesthetics, and strategy. The argument is made without prejudice to the substantive heterogeneity of the discipline.
In recent years, a number of prominent computer scientists, along with academics in fields such as philosophy and physics, have lent credence to the notion that machines may one day become as large as humans. Many have further argued that machines could even come to exceed human size by a significant margin. However, there are at least seven distinct arguments that preclude this outcome. We show that it is not only implausible that machines will ever exceed human size, but in fact impossible.
Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Surprisingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person’s thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain’s language areas when they understand a sentence, but not when they perform other non-linguistic tasks such as arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Together, these two complementary lines of evidence provide a clear answer: many aspects of thought engage distinct brain regions from, and do not depend on, language.
[Unsong is a finished (2015–2017) online web serial fantasy “kabbalah-punk” novel written by Scott Alexander (SSC). GoodReads summary:
Aaron Smith-Teller works in a kabbalistic sweatshop in Silicon Valley, where he and hundreds of other minimum-wage workers try to brute-force the Holy Names of God. All around him, vast forces have been moving their pieces into place for the final confrontation. An overworked archangel tries to debug the laws of physics. Henry Kissinger transforms the ancient conflict between Heaven and Hell into a US-Soviet proxy war. A Mexican hedge wizard with no actual magic wreaks havoc using the dark art of placebomancy. The Messiah reads a book by Peter Singer and starts wondering exactly what it would mean to do as much good as possible…
Aaron doesn’t care about any of this. He and his not-quite-girlfriend Ana are engaged in something far more important—griping about magical intellectual property law. But when a chance discovery brings them into conflict with mysterious international magic-intellectual-property watchdog UNSONG, they find themselves caught in a web of plots, crusades, and prophecies leading inexorably to the end of the world.
From the fall of the Roman Empire until the late Middle Ages, elephants virtually disappeared from Western Europe. Since there was no real knowledge of how these animals actually looked, illustrators had to rely on oral, pictorial and written transmissions to morphologically reconstruct an elephant, thus reinventing the image of an actually existing creature. This led, in most cases, to illustrations in which the most characteristic features of elephants—such as trunk and tusks—are still visible, but that otherwise completely deviate from the real appearance and physique of these animals. In this process, zoological knowledge about elephants was overwritten by its cultural importance.
Based on a collection of these images I have reconstructed the evolution of the ‘Elephas anthropogenus’, the man-made elephant.
Beginning with Homer and ending with Wittgenstein, I present here in chronological order all the major, explicit testimony concerning philosophical esotericism that I have found to date. It includes all the quotations of this kind used in the book as well as many others that were not used. Still, it is far from exhaustive. Readers with suggestions for additions can send them to philosophybetweenthelines@outlook.com.
The compilation includes statements of several different kinds. First, declarations by an author of his own esotericism; second, other remarks concerning the phenomenon of esotericism in general; third, the author’s claim that some other writer wrote esoterically; and fourth, some other writer’s claim that the author wrote esoterically.
I propose a narrative fabrication thesis of dream reports, according to which dream reports are often not accurate representations of experiences that occur during sleep. I begin with an overview of the anti-experience theses of Norman Malcolm and Daniel Dennett, who reject the received view of dreams, that dreams are experiences we have during sleep which are reported upon waking. Although rejection of the first claim of the received view (that dreams are experiences that occur during sleep) is implausible, I evaluate in more detail the second assumption of the received view: that dream reports are generally accurate.
I then propose a “narrative fabrication” view of dreams as an alternative to the received view. Dream reports are often confabulated or fabricated because of poor memory, bizarre dream content, and cognitive deficits. It is well documented that narratives can be altered between initial rapid eye movement sleep awakenings and subsequent reports. I argue that we have reason to suspect that initial reports are prone to inaccuracy. Experiments demonstrate that subjects rationalize strange elements in narratives, leaving out supernatural or bizarre components when reporting waking memories of stories. Inaccuracies in dream reports are exacerbated by rapid memory loss and bizarre dream content. Waking memory is a process of reconstruction and blending of elements, but unlike waking memory, we cannot reality-test for dream memories. Dream experiences involve imaginative elements, and dream content cannot be verified with external evidence. Some dreams may involve wake-like higher cognitive functions, such as lucid dreams. Such dreams are more likely to elicit accurate reports than cognitively deficient dreams. However, dream reports are generally less accurate than waking reports.
I then propose methods which could verify the narrative fabrication view, and argue that although the theory cannot be tested with current methods, new techniques and technologies may be able to do so in the future.
One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory—the field that studies the resources (such as time, space, and randomness) needed to solve computational problems—leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction, Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.
Human cognitive performance is of crucial importance for the legal process, often making the difference between fair and unfair imprisonment. Lawyers, judges, and jurors need to follow long and complex arguments. They need to understand technical language. Jurors need to remember what happens during a long trial. The demands imposed on jurors in particular are sizeable, and the cognitive challenges are discussed in this chapter. Jurors are often subjected to both tremendous decision complexity and tremendous evidence complexity. Some of these problems could be ameliorated if we can somehow enhance the cognitive capacities, including attention and memory, of various players in trials. There are multiple ways in which cognition can be improved, either by external tools or by an increasing number of biomedical interventions that act directly on the brain. The chapter surveys a range of beneficial and detrimental effects that substances can have on cognition.
This article engages with problems that are usually opaque: What trajectories do scientific debates assume, when does a scientific community consider a proposition to be a fact, and how can we know that?
We develop a strategy for evaluating the state of scientific contestation on issues. The analysis builds from Latour’s black box imagery, which we observe in scientific citation networks. We show that as consensus forms, the importance of internal divisions to the overall network structure declines. We examine substantive cases that are now considered facts, such as the carcinogenicity of smoking and the non-carcinogenicity of coffee. We then apply the same analysis to currently contested cases: the suspected carcinogenicity of cellular phones, and the relationship between vaccines and autism.
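[The intuition behind the measure, sketched in Python with networkx: a contested literature looks like separate camps that mostly cite within themselves, while a black-boxed one looks like a single well-mixed cluster, so a partition-quality score such as modularity falls as consensus forms. Toy graphs only; this is not the authors’ data or their exact measure of “internal divisions”.]

```python
import networkx as nx
from networkx.algorithms import community

def division_importance(g: nx.Graph) -> float:
    """Modularity of the best greedy partition: high = strong internal camps."""
    parts = community.greedy_modularity_communities(g)
    return community.modularity(g, parts)

contested = nx.barbell_graph(8, 1)  # two dense camps, barely citing each other
settled = nx.complete_graph(17)     # everyone cites everyone

print(f"contested: {division_importance(contested):.2f}")  # high modularity
print(f"settled:   {division_importance(settled):.2f}")    # ~0: divisions gone
```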
Extracting meaning from the internal structure of scientific knowledge carves a niche for renewed sociological commentary on science, revealing a typology of trajectories that scientific propositions may experience en route to consensus.
[Keywords: sociology of science, consensus, black boxing, network analysis, citations]
It is, of course, a notable prediction of this theory that the least scientific idea possible would end up called “Scientology”.
Or so I thought! Last night, I discovered there was a movement called “Factology”. Obviously this requires further investigation!
…But surely they don’t just randomly draw crazy conclusions based on a few words that sound the same, do they? Well, here’s a quote from their Wikipedia article, about “examples of movies with encoded content about the reality of aliens among us”:
Yoda…is short for Judah. Freemasons are inspired by one entity and that is a grey, by the name of Yoda. Yoda guides Freemasonry back to Judah, with the ancient Israel masonry. The British “Covenant Of Man” symbolizes the empire striking back. America is the empire fighting to overthrow Europe…The word Yoda is not an English word as you have been led to believe. Its root word yawdaw appears 111 times in the Old Testament, means “to give thanks or praise, throw down, cast, shoot.” The word Yadah meaning, to “to praise, give thanks” stems from the root word Yawdaw and appears only two times in the Old Testament (Daniel 2:23, Daniel 6:10). Not to mention the fact Yoda played in [the film] Return of the Jedi, and the word jedi is the same as yeti, it’s just a matter of a letter, it’s really the same word. Yeti is the name of Sasquatch (Bigfoot), also called Seti which is equivalent to the Extraterrestrials called the Seirians.
…Okay, so Uncle Sam is a gnostic demon, as revealed by Dr. Seuss who is secretly the king of the pagan gods. But can they get even crazier?:
White people were bred to be food, and the ‘rapture’ expected by Christians is really the return of the ‘raptors’ who will dine on the now-ripe delicious white flesh.
Recent experimental philosophy arguments have raised trouble for philosophers’ reliance on armchair intuitions.
One popular line of response has been the expertise defense: philosophers are highly-trained experts, whereas the subjects in the experimental philosophy studies have generally been ordinary undergraduates, and so there’s no reason to think philosophers will make the same mistakes. But this deploys a substantive empirical claim, that philosophers’ training indeed inculcates sufficient protection from such mistakes.
We canvass the psychological literature on expertise, which indicates that people are not generally very good at reckoning who will develop expertise under what circumstances. We consider 3 promising hypotheses concerning what philosophical expertise might consist in:
better conceptual schemata;
mastery of entrenched theories; and
general practical know-how with the entertaining of hypotheticals.
On inspection, none seem to provide us with good reason to endorse this key empirical premise of the expertise defense.
[Scott’s Antarctic expedition in 1911 was plagued by the disease scurvy, despite its having been “conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease.” How it all went wrong would make a case study for a philosophy of science class.
The British Admiralty switched their scurvy cure from lemon juice to lime juice in 1860. The new cure was much less effective, but by that time advances in technology meant that most sea voyages were so short that there was little or no danger of scurvy anyway. So poor Scott’s expedition, as well as applying ‘state-of-the-art’ (ie. wrong) cures, was falling back on a ‘tried-and-true’ remedy that in fact had been largely ineffective already for 50 years… without anyone noticing.]
An unfortunate series of accidents conspired with advances in technology to discredit the cure for scurvy. What had been a simple dietary deficiency became a subtle and unpredictable disease that could strike without warning. Over the course of fifty years, scurvy would return to torment not just Polar explorers, but thousands of infants born into wealthy European and American homes. And it would only be through blind luck that the actual cause of scurvy would be rediscovered, and vitamin C finally isolated, in 1932.
…So when the Admiralty began to replace lemon juice with an ineffective substitute in 1860, it took a long time for anyone to notice. In that year, naval authorities switched procurement from Mediterranean lemons to West Indian limes. The motives for this were mainly colonial—it was better to buy from British plantations than to continue importing lemons from Europe. Confusion in naming didn’t help matters. Both “lemon” and “lime” were in use as a collective term for citrus, and though European lemons and sour limes are quite different fruits, their Latin names (citrus medica, var. limonica and citrus medica, var. acida) suggested that they were as closely related as green and red apples. Moreover, as there was a widespread belief that the antiscorbutic properties of lemons were due to their acidity, it made sense that the more acidic Caribbean limes would be even better at fighting the disease.
In this, the Navy was deceived. Tests on animals would later show that fresh lime juice has a quarter of the scurvy-fighting power of fresh lemon juice. And the lime juice being served to sailors was not fresh, but had spent long periods of time in settling tanks open to the air, and had been pumped through copper tubing. A 1918 animal experiment using representative samples of lime juice from the navy and merchant marine showed that the ‘preventative’ often lacked any antiscorbutic power at all.
By the 1870s, therefore, most British ships were sailing without protection against scurvy. Only speed and improved nutrition on land were preventing sailors from getting sick.
…In the course of writing this essay, I was tempted many times to pick a villain. Maybe the perfectly named Almroth Wright, who threw his considerable medical reputation behind the ptomaine theory and so delayed the proper re-understanding of scurvy for many years. Or the nameless Admiralty flunky who helped his career by championing the switch to West Indian limes. Or even poor Scott himself, sermonizing about the virtues of scientific progress while never conducting a proper experiment, taking dreadful risks, and showing a most unscientific reliance on pure grit to get his men out of any difficulty.
But the villain here is just good old human ignorance, that master of disguise. We tend to think that knowledge, once acquired, is something permanent. Instead, even holding on to it requires constant, careful effort.
Many adult beliefs are based on the testimony provided by other people rather than on firsthand observation. Children also learn from other people’s testimony. For example, they learn that mental processes depend on the brain, that the earth is spherical, and that hidden bodily organs constrain life and death. Such learning might indicate that other people’s testimony simply amplifies children’s access to empirical data. However, children’s understanding of God’s special powers and the afterlife shows that their acceptance of others’ testimony extends beyond the empirical domain. Thus, children appear to conceptualize unobservable scientific and religious entities similarly. Nevertheless, some children distinguish between the 2 domains, arguably because a different pattern of discourse surrounds scientific as compared to religious entities.
There are at least three strategies we might take in approaching controversial issues: (1) we might accept the conclusions of experts on their authority, (2) we might evaluate the relevant evidence and arguments for ourselves, or (3) we might give up on finding the answers. Students of “critical thinking” are regularly advised to follow strategy (2). But strategies (1) and (3) are usually superior to (2), from the standpoint of the goal of gaining true beliefs and avoiding false ones.
Sleep paralysis accompanied by hypnopompic (‘upon awakening’) hallucinations is an often-frightening manifestation of discordance between the cognitive/perceptual and motor aspects of rapid eye movement (REM) sleep. Awakening sleepers become aware of an inability to move, and sometimes experience intrusion of dream mentation into waking consciousness (eg. seeing intruders in the bedroom).
In this article, we summarize 2 studies:
In the first study, we assessed 10 individuals who reported abduction by space aliens and whose claims were linked to apparent episodes of sleep paralysis during which hypnopompic hallucinations were interpreted as alien beings.
In the second study, adults reporting repressed, recovered, or continuous memories of childhood sexual abuse more often reported sleep paralysis than did a control group. Among the 31 reporting sleep paralysis, only one person linked it to abuse memories. This person was among the 6 recovered memory participants who reported sleep paralysis (ie. 17% rate of interpreting it as abuse-related).
People rely on personally plausible cultural narratives to interpret these otherwise baffling sleep paralysis episodes.
In this paper I argue that Wittgenstein is correct in Philosophical Investigations §50 when he says of the Standard Metre stick that we can say neither that it is nor that it is not a metre in length—despite what our intuitions may tell us to the contrary.
Specifically, the paper deals with Kripke’s criticism of Wittgenstein’s claim in Naming and Necessity and with Salmon’s attempt to arbitrate between the 2 views.
I conclude that, not only is Wittgenstein correct, but that both Kripke and Salmon (and possibly the majority of philosophers) simply do not understand the concept of measurement.
[cf. typical mind] The authors reviewed research on a profound folk-psychology misconception common among college students: the belief that the process of vision includes emanations from the eyes, an idea consistent with the extramission theory of perception, which was originally professed by early Greek philosophers and persisted in scholarly circles for centuries. The authors document the strength and breadth of this phenomenon and the abject failure of traditional educational techniques to overcome this belief, and they reveal that students are leaving psychology courses with a flawed understanding of one of the most studied processes in the history of psychology: visual perception. Some suggestions are offered for overcoming this misconception in traditional college classroom settings.
…Piaget ([The child’s conception of the world] 1929/1967), however, was perhaps the first to note an odd type of misunderstanding that children have about vision. He commented on a report of a child who stated that looks can mix when they meet, and, along with other observations, Piaget suggested that children believe in emissions from the eyes during vision. In an apparently unpublished work, Piaget (referenced in Piaget 1971/1974 [Understanding Causality]) claimed to have found strong evidence of extramission beliefs in children.
…We examined the responses of children and adults to questions asking whether there was visual input and/or output during the act of perception (eg. Cottrell & Winer 1994; see Winer & Cottrell 1996a, for a review of several studies). This research revealed widespread evidence of extramission beliefs among children, with a decline in such beliefs over age. We were, however, startled to find that, despite consistent developmental trends toward decreasing extramission beliefs with age, large numbers of adults also affirmed a belief in visual extramissions. Apparently some college students were behaving like prescientific ancient philosophers in affirming an extramission understanding of vision that is entirely at odds with the theories of modern science.
Perhaps even more disturbing to us was the strong likelihood that this misconception existed despite our participants’ having received formal education on the topics of sensation and perception. For example, we typically found extramission beliefs among college students who were tested after they had received instruction on sensation and perception in introductory psychology classes, thus suggesting not only that adults were affirming extramission beliefs but that such beliefs were resistant to education. We were confronted, then, with the likelihood that students were emerging from basic-level psychology courses without an understanding of one of the most important psychological processes, namely, visual perception.
…On such intromission-extramission (i-e) tests, large numbers of adults gave extramission responses, with the percentages varying depending on the particular representations of vision shown on the screen. For example, in one study (Winer et al 1996), when given a simple choice between input versus output, ~13% of the adults selected output only. When available, however, the favored extramission choices were representations that showed (a) simultaneous input and output and (b) input followed by output. On trials that included these favorite choices, the percentage of extramission responses ranged from 41% to 67% (the greater the number of preferred choices offered, the greater the frequency of extramission responses). Data presented later in this article (Gregg et al 2001) likewise show more than 50% of adults giving extramission responses. …In all of these cases, we found convincing evidence for extramission beliefs. For example, when students were asked to draw and number arrows to show how a person sees a balloon, 86% showed some evidence of extramission (ie. outward arrows), whereas when adults were repeatedly asked specifically to draw whether something comes into or goes out of the eyes when a person sees a balloon, 69% placed outward-pointing arrows in their drawings.
…Whatever the test, we have consistently found substantial numbers of college students reporting extramission beliefs.
We have also varied the visual referent in our questions, with disturbing findings. In one non-computer test, we asked students about vision when presenting them with different visual referents, namely a shining light bulb, the same bulb unlit, and a white Styrofoam ball the same size as the unlit bulb (Winer et al 1996). We expected that referring to the lit bulb would diminish extramission responses—indeed, that it would be nearly impossible to maintain extramission beliefs in reference to light shining in one’s face. We also assumed that initial intromission responses, encouraged by reference to the shining light bulb, would generalize. That is, we expected positive transfer from questions about the shining light bulb to subsequent questions about the non-luminous objects.
The results supported the idea that asking i-e questions about a shining light would cause a decrease in extramission responses. But asking about the lit bulb did not even come close to eliminating extramission beliefs: 33% of the adults tested affirmed extramission in reference to viewing the lit bulb. Moreover, there was no sign of positive transfer from questions about the lit bulb to questions about the non-luminous objects. In fact, the opposite occurred. When we switched from the lit bulb to the non-luminous objects, there was an increase in extramission responses, as if turning off the light signaled that there were no more incoming rays.
…we have routinely asked questions about the necessity of extramissions for vision. For example, we have asked whether a person can see if nothing leaves the eye and whether what exits the eye helps people see. In one study, at least 70% of adult participants who reported extramission beliefs on the last question of the test stated on one of the probe questions that they believed visual extramissions were functional in vision. We have also directly tested for the possibility that extramission interpretations were due to participants misinterpreting i-e questions. In his master’s thesis, Rader 1996 [“Effects of considerations of necessity and scientific reasoning upon beliefs about visual perception”] gave college students intensive training on the concept of necessity, before asking them specifically whether it was necessary that something leave the eye during the act of vision. The training had no effect on responses to i-e questions. In fact, many participants who affirmed on a pretest question that something exiting the nose was not necessary for olfaction went on to claim that visual extramissions were necessary for seeing.
…The fact that the learning effects for both college students and 8th graders disappeared [in Gregg et al 2001] was striking. Consider, for example, the performance of the college students, who were presumably the most cognitively advanced. On the first posttest, 100% of the students in the refutational group had 5 or more of 8 items correct, compared with 54% in the simplified-explanation group and 29% in the control group. On the delayed test, 7 of the 17 college students who returned for testing in the refutational-teaching group had fewer than 5 of the 8 items correct.
Recall that no student in this group had fewer than 5 items correct at Time 1. Moreover, of the 7 whose performance declined, 6 had had perfect scores at Time 1. The long-term ineffectiveness of the training for the college students is further revealed by the fact that 53% of the participants in the 2 experimental groups at that grade level had 4 or fewer correct responses.
…A related strategy is to foster logical or cognitive dissonance. In one pilot study, for example, when we were trying to explore the breadth of the extramission misunderstanding, one participant tenaciously defended his extramission beliefs until we asked him whether someone would be able to see the image coming from his eyes, at which point he acknowledged, rather sheepishly, that nothing has to leave the eyes in order for people to see.
This paper constitutes an attempt to derive the epistemological consequences of what is known in cognitive, developmental, and social psychology about the nature of naive theories.
The process of cognitive development and knowledge acquisition is such that uncoordinated knowledge must result. There is no process active in long-term memory to harmonize inconsistent parts. Coordination takes place in working memory (WM), and cognitive psychology has long established its extreme exiguity. Units of explanation and domains of coherence are therefore small. This is, indeed, a limitation of our cognition, but it is tenable pragmatically. Naive theories on any one issue do not form, psychologically or cognitively, a natural kind.
These theses about how our knowledge is acquired, organized, accessed, and used help to bring out how one should think about naive theories.
Russian measurements of the quality factor (Q) of sapphire, made 20 years ago, have only just been repeated in the West. Shortfalls in tacit knowledge have been partly responsible for this delay. The idea of ‘tacit knowledge’, first put forward by the physical chemist Michael Polanyi, has been studied and analysed over the last 2 decades.
A new classification of tacit knowledge (broadly construed) is offered here and applied to the case of sapphire. The importance of personal contact between scientists is brought out and the sources of trust described. It is suggested that the reproduction of scientific findings could be aided by a small addition to the information contained in experimental reports. The analysis is done in the context of fieldwork conducted in the USA and observations of experimental work at Glasgow University.
[Keywords: experiment, international trust, measurement of skill, natural science, repetition of experiments, writing conventions]
…The second method of greasing thread demonstrated by Checkhov, and used interchangeably with the first method, was direct greasing of the fine thread with human body grease. Checkhov would run the fine Chinese thread briefly across the bridge of his nose or behind his ear. The ear method was adopted by the Glasgow group, though it turned out that only some people had the right kind of skin. Some, it transpired, had very effective and reliable grease, others’ grease worked only sporadically, and some experimenters’ skins were too dry to work at all. All this was discovered by trial and error, and made for unusual laboratory notebook entries such as: ‘Suspension 3: Fred-greased Russian thread; Suspension 12: switched from George-grease back to Fred-grease’, and so forth. As with James Joule’s famous measurement of the mechanical equivalent of heat, it seems that the experimenter’s body could be a crucial variable.
…Knowing how difficult a skill is, is another important part of learning to master it. If one believed that bike-riding could be mastered in one minute, a few minutes of falling off would lead one to distrust claims that bikes could be ridden at all, and one would never learn to ride—still more so with, say, playing a musical instrument. One important thing that the Glasgow group learned from Checkhov was what they called ‘patience’ which, in these terms, is a matter of learning that measuring is difficult and remains difficult (like, for example, golf, rather than bike-riding), even after one has first accomplished it.
Reporting a Second Order Measure of Skill: This kind of science could be made easier if the importance of knowing the difficulty of an experimental skill or procedure were recognized and emphasized. The conventional style of writing scientific journal papers (and even books) excludes details of this kind. Yet someone trying to rediscover how to produce a result in the absence of a laboratory visit could be helped by knowing just how hard the experiment or measurement was to carry out in the first place, and just how hard it continues to be. Such information could be roughly quantified—it is a ‘second order measure of skill’. Experimenters could record something along these lines:
It took us some 17 months to accomplish this result in the first instance, during which time we tried around 165 runs with different set-ups, each run taking around a day to complete. Most successful measurements on new samples are now obtained in around 7 runs, but there is a range of ~1 to 13 runs; each run now takes about 2 hours. The distribution of numbers of runs on the last 10 samples we have measured is shown in the following diagram…
Information of this sort could be expressed briefly, without radically changing the conventional style of scientific paper-writing, and yet could be of substantial benefit to those trying to repeat the work. It is just a matter of admitting that most things that seem easy now were very hard to do first time round, and that some remain hard even for the experienced experimenter. We concede, of course, that within the current conventions of scientific writing, setting out these difficulties would look like weakness; science is conventionally described as though it were effortless, and the accepted scientific demeanor reinforces this impression. What we are suggesting is a slight transformation of convention and demeanor—with a view to improving the transmission of scientific knowledge.
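To make the suggested convention concrete, here is a minimal sketch (in Python; all the run counts and names are hypothetical, invented for illustration, since the paper supplies only the prose template above) of how a lab might tabulate its own ‘second order measure of skill’ from a log of runs-to-success per sample:

```python
# Minimal sketch of reporting a "second order measure of skill":
# summarizing how many runs each sample took before yielding a
# successful measurement. All numbers are hypothetical.
from statistics import mean, median

# Hypothetical runs-to-success for the last 10 samples measured.
runs_per_sample = [7, 3, 12, 6, 1, 9, 13, 5, 8, 7]

print(f"samples measured:     {len(runs_per_sample)}")
print(f"runs per sample:      min {min(runs_per_sample)}, "
      f"median {median(runs_per_sample)}, max {max(runs_per_sample)}")
print(f"mean runs to success: {mean(runs_per_sample):.1f}")

# A crude text histogram stands in for the suggested diagram.
for runs in sorted(set(runs_per_sample)):
    count = runs_per_sample.count(runs)
    print(f"{runs:>3} runs: {'#' * count}")
```

A few summary numbers and a small histogram like this, appended to the experimental report, would convey how hard the measurement was and remains, without changing the conventional structure of the paper.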
This chapter discusses the scientific controversies that have shaped neural network research from a sociological point of view.
It looks at the controversy that surrounded Frank Rosenblatt’s perceptron machine in the late 1950s and early 1960s. Rosenblatt was well aware of the main problems of his machine, and he even insisted on them in his books and papers. Emphasis is given to one of the main problems of early neural network research, namely the issue of training multilayer systems.
The chapter analyzes the main results of Minsky and Papert’s Perceptrons project, and shows that Minsky and Papert on the one hand, and neural network researchers on the other, interpreted those results rather differently. It discusses the processes through which this interpretative flexibility was closed, and the effects that the crisis of early neural network research had upon the 3 most important neural network groups of the time, namely Widrow’s group, Rosenblatt’s group, and the group at SRI.
The chapter also looks at the influence that factors like the emergence of symbolic artificial intelligence (AI) and computer technology had on the closure of the neural network controversy. After the closure of the perceptron controversy, symbol-processing remained the dominant approach to AI over the years, until the early 1980s. Some of the most important aspects of that changing context are reviewed and the history of backpropagation is discussed.
Introduction: A Sociological View of Scientific Controversies
The Controversy of the Perceptron
The Problems of Early Neural Networks
Training Multilayer Networks: A “Reverse Salient” of Neural Network Research
Interpretative Flexibility
Closure of the Controversy 1: Widrow’s Group
Closure of the Controversy 2: The SRI Group
Closure of the Controversy 3: Rosenblatt
The 1980s: A Changing Context
History of Back-Propagation
Back-Propagation: Learning in Multilayer Perceptrons
The Neural Network Explosion
The Current Debate: Conclusions
Debate Continues
Conclusions
Appendix 1: List of Those Interviewed
Appendix 2: List of Personal Communications by Letter
The author Mikel Olazaran spent a long time in the early 1990s interviewing what looks to have been almost all the surviving connectionists & Minsky etc.
Olazaran argues that all the connectionists were perfectly aware of the Perceptrons headline conclusion about single-layer perceptrons being hopelessly linear (drafts of which had been circulating for something like 4 years beforehand), and most regarded it as unimportant (pointing out that humans can’t solve the parity of a grid of dots either without painfully counting them out one by one) and as having an obvious solution (multiple layers) that they all, Rosenblatt especially, had put a lot of work into trying to make work.
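The linear-separability point can be checked directly. Here is a short illustrative sketch (mine, not Olazaran’s): the classic perceptron learning rule quickly learns AND, which is linearly separable, but can never learn XOR, the 2-bit case of the parity problem, because no single line separates its classes:

```python
# Illustrative sketch: a single-layer perceptron learns linearly
# separable functions like AND, but cycles forever on XOR (2-bit
# parity), since no hyperplane separates XOR's classes.
def train_perceptron(examples, epochs=100, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            if err:
                errors += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if errors == 0:
            return True   # converged: a separating line was found
    return False          # never converged within the epoch budget

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, x[0] & x[1]) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]
print("single layer learns AND:", train_perceptron(AND))  # True
print("single layer learns XOR:", train_perceptron(XOR))  # False
```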
The problem was, none of the multi-layer things worked, and people had run out of ideas. So most of the connectionist researchers got sucked away by things that were working at the time (eg. the Stanford group was having huge success with adaptive antennas & telephone filters which had accidentally come out of their NN work), and funding dried up (both for exogenous political reasons related to military R&D being cut, and for the sheer lack of results compared to alternative research programs like the symbolic approaches, which were enjoying their initial flush of success in theorem-proving and Samuel’s checkers player etc, and had not yet run headlong into the wall of Moravec’s paradox).
So when, years later, Perceptrons came out with all of its i’s dotted & t’s crossed, it didn’t “kill connectionism”, because connectionism had already died. What Perceptrons really did was serve as a kind of excuse or Schelling point to make the death ‘official’ & cement the dominance of the symbolic approaches. Rosenblatt never gave up, but he had already been left high and dry, with no more funding and no research community.
Olazaran directly asks several of them whether more funding or work would have helped, and it seems everyone agrees that it would’ve been useless: the computers just weren’t there in the ’60s. (One notes that it might have worked in the ’70s if anyone had paid attention to the invention of backpropagation, pointing out that Rumelhart et al, doing the PDP studies on backprop, were using the equivalent of PCs for those studies in the late ’80s; so if you were patient & dedicated, you could hypothetically have done them on minicomputers/mainframes in the ’70s. But not the ’60s.)
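As a companion sketch (again mine, not from the book): the same XOR problem is solved by a one-hidden-layer network trained with plain backpropagation, the multilayer training method whose absence the early connectionists ran up against. The hidden-layer size, learning rate, seed, and epoch count are arbitrary choices that happen to work; convergence depends on the random initialization:

```python
# Illustrative sketch: backpropagation trains a 2-3-1 sigmoid network
# to compute XOR, which no single-layer perceptron can represent.
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H, LR = 3, 1.0  # hidden units and learning rate (arbitrary but workable)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sigmoid(sum(W2[j] * h[j] for j in range(H)) + b2)

for _ in range(10000):  # stochastic gradient descent on squared error
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)               # gradient at the output
        for j in range(H):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # backpropagated to hidden
            W2[j] -= LR * dy * h[j]
            W1[j][0] -= LR * dh * x[0]
            W1[j][1] -= LR * dh * x[1]
            b1[j] -= LR * dh
        b2 -= LR * dy

for x, t in data:
    print(x, "target", t, "output", round(forward(x)[1], 2))
```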
…Here, then, are some of the ways in which writing separates or divides.
Writing separates the known from the knower. It promotes ‘objectivity’. (Knowledge itself is not object-like: it cannot be transferred from one person to another physically even in oral communication, face-to-face, or a fortiori in writing. I can only perform actions—produce words—which enable you to generate the knowledge in yourself.)
Whereas oral cultures tend to merge interpretation of data with the data themselves, writing separates interpretation from data. (Asked to repeat exactly what they have just said, persons from a primary oral culture will often give an interpretation of what they originally said, insisting and clearly believing that the interpretation is exactly what they said in the first place.)
Writing distances the word from sound, reducing oral-aural evanescence to the seeming quiescence of visual space.
Whereas in oral communication the source (speaker) and the recipient (hearer) are necessarily present to one another, writing distances the source of the communication (the writer) from the recipient (the reader), both in time and space.
Writing distances the word from the plenum of existence. (The immediate context of spoken words is never simply other words.)
By distancing the word from the plenum of existence, from a holistic context made up mostly of non-verbal elements, writing enforces verbal precision of a sort unavailable in oral cultures.
Writing separates past from present. (Primary oral cultures tend to use the past to explain the present, dropping from memory what does not serve this purpose in one way or another, thus homogenizing the past with the present, or approximating past to present.)
…By freezing verbalization, writing creates a distanced past which is full of puzzles because it can refer to states of affairs no longer effectively imaginable or can use words no longer immediately meaningful to any living persons.
Writing separates ‘administration’—civil, religious, commercial, and other—from other types of social activities. (‘Administration’ is unknown in oral cultures, where leaders interact non-abstractly with the rest of society in tight-knit, often rhetorically controlled, configurations.)
Writing makes it possible to separate logic (thought-structure of discourse) from rhetoric (socially-effective discourse).
Writing separates academic learning (mathésis and mathéma) from wisdom (sophia), making possible the conveyance of highly organized abstract thought structures independently of their actual use or of their integration into the human lifeworld.
Writing can divide society by giving rise to a special kind of diglossia, splitting verbal communication between a ‘high’ language completely controlled by writing even though also widely spoken (Learned Latin in the European Middle Ages) and a ‘low’ language or ‘low’ languages controlled by speech to the exclusion of writing.
Writing differentiates grapholects, those ‘low’-language dialects which are taken over by writing and erected into national languages, from other dialects, making the grapholect a dialect of a completely different order of magnitude and effectiveness from the dialects that remain oral.
Writing divides or distances more evidently and effectively as its form becomes more abstract, which is to say more removed from the sound world into the space world of sight.
Perhaps the most momentous of all its diaeretic effects in the deep history of thought is the effect of writing when it separates being from time.
…The oral world as such distresses literates because sound is evanescent. Typically, literates want words and thoughts pinned down—though it is impossible to “pin down” an event. The mind trained in an oral culture does not feel the literate’s distress: it can operate with exquisite skill in the world of sounds, events, evanescences. How does it manage? Basically, in its noetic operations it uses formulaic structures and procedures that stick in the mind to complement and counteract the evanescent: proverbs and other fixed sayings—that is, standard, expected qualifiers (the sturdy oak, the brave warrior, wise Nestor, clever Odysseus), numerical sets (the 3 Graces, the 7 deadly sins, the 5 senses, and so on)—anything to make it easy to call back what Homer recognized were “winged words”.
Primary oral culture also keeps its thinking close to the human life world, personalizing things and issues, and storing knowledge in stories. Categories are unstable mnemonically. Stories you can remember. In its typical mindset, the oral sensibility is out to hold things together, to make and retain agglomerates, not to analyse (which means to take things apart)—although, since all thought is to some degree analytic, it does analyse to a degree. Pressed by the need to manage an always fugitive noetic universe, the oral world is basically conservative. Exploratory thinking is not unknown, but it is relatively rare, a luxury orality can little afford, for energies must be husbanded to keep on constant call the evanescent knowledge that the ages have so laboriously accumulated. Everybody, or almost everybody, must repeat and repeat and repeat the truths that have come down from the ancestors. Otherwise these truths will escape, and culture will be back on square one, where it started before the ancestors got the truths from their ancestors.
Fiction is commonly viewed as imaginative discourse, or as discourse concerning an alternate possible world. The problem with such definitions is that they cannot distinguish fiction from counterfactual statements, or from the reports of dreams, wishes and fantasies which occur in the context of natural discourse.
This paper attempts to capture the difference, as well as the similarities, between fiction and other language uses involving statements about non-existing worlds by comparing their respective behavior in the light of an interpretive principle which will be referred to as the “principle of minimal departure”.
This principle states that whenever we interpret a message concerning an alternate world, we re-construe this world as being the closest possible to the reality we know. In the non-factuals of natural discourse the referents of the pronouns I and you are re-construed as retaining the personality of the actual speaker as fully as possible, but in fiction they are immune to the principle of minimal departure.