“Energy compensation and adiposity in humans”, Vincent Careau, Lewis G. Halsey, Herman Pontzer, Philip N. Ainslie, Lene F. Andersen, Liam J. Anderson, Lenore Arab, Issad Baddou, Kweku Bedu-Addo, Ellen E. Blaak, Stephane Blanc, Alberto G. Bonomi, Carlijn V.C. Bouten, Maciej S. Buchowski, Nancy F. Butte, Stefan G.J.A. Camps, Graeme L. Close, Jamie A. Cooper, Sai Krupa Das, Richard Cooper, Lara R. Dugas, Simon D. Eaton, Ulf Ekelund, Sonja Entringer, Terrence Forrester, Barry W. Fudge, Annelies H. Goris, Michael Gurven, Catherine Hambly, Asmaa El Hamdouchi, Marije B. Hoos, Sumei Hu, Noorjehan Joonas, Annemiek M. Joosen, Peter Katzmarzyk, Kitty P. Kempen, Misaka Kimura, William E. Kraus, Robert F. Kushner, Estelle V. Lambert, William R. Leonard, Nader Lessan, Corby K. Martin, Anine C. Medin, Erwin P. Meijer, James C. Morehen, James P. Morton, Marian L. Neuhouser, Theresa A. Nicklas, Robert M. Ojiambo, Kirsi H. Pietiläinen, Yannis P. Pitsiladis, Jacob Plange-Rhule, Guy Plasqui, Ross L. Prentice, Roberto A. Rabinovich, Susan B. Racette, David A. Raichlen, Eric Ravussin, John J. Reilly, Rebecca M. Reynolds, Susan B. Roberts, Albertine J. Schuit, Anders M. Sjödin, Eric Stice, Samuel S. Urlacher, Giulio Valenti, Ludo M. Van Etten, Edgar A. Van Mil, Jonathan C.K. Wells, George Wilson, Brian M. Wood, Jack Yanovski, Tsukasa Yoshida, Xueying Zhang, Alexia J. Murphy-Alford, Cornelia U. Loechl, Amy H. Luke, Jennifer Rood, Hiroyuki Sagayama, Dale A. Schoeller, William W. Wong, Yosuke Yamada, John R. Speakman, IAEADLW database group (2021-08-27; backlinks; psychology):
Degree of energy compensation varied between people of different body composition
Understanding the impacts of activity on energy balance is crucial. Increasing levels of activity may bring diminishing returns in energy expenditure because of compensatory responses in non-activity energy expenditures. This suggestion has profound implications for both the evolution of metabolism and human health. It implies that a long-term increase in activity does not directly translate into an increase in total energy expenditure (TEE) because other components of TEE may decrease in response—energy compensation.
We used the largest dataset compiled on adult TEE and basal energy expenditure (BEE) (n = 1,754) of people living normal lives to find that energy compensation by a typical human averages 28% due to reduced BEE; this suggests that only 72% of the extra calories we burn from additional activity translate into extra calories burned that day. Moreover, the degree of energy compensation varied considerably between people of different body compositions. This association between compensation and adiposity could be due to among-individual differences in compensation: people who compensate more may be more likely to accumulate body fat. Alternatively, the process might occur within individuals: as we get fatter, our body might compensate more strongly for the calories burned during activity, making losing fat progressively more difficult.
Determining the causality of the relationship between energy compensation and adiposity will be key to improving public health strategies regarding obesity.
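The headline figure can be made concrete with a back-of-the-envelope calculation (a sketch: the 28% figure is the paper's population average; the 300 kcal workout is a hypothetical example, not from the paper):

```python
# Energy compensation: only ~72% of extra activity calories show up in total
# energy expenditure, because basal energy expenditure falls in response.
COMPENSATION = 0.28  # population-average compensation reported in the paper

def net_extra_expenditure(activity_kcal: float,
                          compensation: float = COMPENSATION) -> float:
    """Extra daily energy expenditure after compensatory reduction in BEE."""
    return activity_kcal * (1.0 - compensation)

# Hypothetical example: a workout burning 300 kcal adds only ~216 kcal
# to that day's total energy expenditure.
print(round(net_extra_expenditure(300)))  # → 216
```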
[Keywords: activity, basal metabolic rate, daily energy expenditure, energy management models, exercise, Homo sapiens, trade-offs, weight loss, energy compensation]
…To further illustrate the compensation occurring at the within-individual level, we ran a second bivariate mixed model with AEE and BEE as the dependent variables. In this model, the within-individual covariance was statistically-significantly negative (Table S2B). The within-individual correlation (±SE) between AEE and BEE was r = −0.58 ± 0.08 (Figure 4B). Hence, during extended periods when the studied cohort expended more energy on activity, they compensated by reducing energy expended on basal processes (but individuals with higher-than-average AEE do not necessarily have a lower-than-average BEE). The within-individual slope in these people indicates particularly strong energy compensation between AEE and BEE (Figure 4B). That is, in this sample of people, the calories they burn during bouts of activity are almost entirely compensated for by reducing energy expended on other processes such that variation in activity had little impact on TEE.
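The within-individual correlation being described can be illustrated on simulated repeated measures: center each person's AEE and BEE on their own means, then correlate the deviations. This is a sketch with made-up numbers, not the paper's mixed-model machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_visits = 50, 4

# Simulate: each person has a stable mean AEE/BEE, but within a person,
# visits with above-average AEE come with below-average BEE (compensation).
mean_aee = rng.normal(800, 150, n_individuals)[:, None]
mean_bee = rng.normal(1600, 200, n_individuals)[:, None]
dev = rng.normal(0, 100, (n_individuals, n_visits))
aee = mean_aee + dev
bee = mean_bee - 0.6 * dev + rng.normal(0, 40, (n_individuals, n_visits))

# Within-individual correlation: correlate deviations from each person's mean.
aee_dev = aee - aee.mean(axis=1, keepdims=True)
bee_dev = bee - bee.mean(axis=1, keepdims=True)
r_within = np.corrcoef(aee_dev.ravel(), bee_dev.ravel())[0, 1]
print(f"within-individual r = {r_within:.2f}")  # strongly negative
```

The key move is the per-person centering: it strips out the among-individual differences, which is why a negative within-individual correlation can coexist with no (or even a positive) correlation across individuals.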
Background: International guidelines recommend children aged 9 months to 2 years consume whole (3.25%) fat cow’s milk, and children older than age 2 years consume reduced (0.1–2%) fat cow’s milk to prevent obesity. The objective of this study was to evaluate the longitudinal relationship between cow’s milk fat (0.1–3.25%) intake and body mass index z-score (zBMI) in childhood. We hypothesized that higher cow’s milk fat intake was associated with lower zBMI.
Methods: A prospective cohort study of children aged 9 months to 8 years was conducted through the TARGet Kids! primary care research network. The exposure was cow’s milk fat consumption (skim (0.1%), 1%, 2%, whole (3.25%)), measured by parental report. The outcome was zBMI. Height and weight were measured by trained research assistants and zBMI was determined according to WHO growth standards. A linear mixed effects model and logistic generalized estimating equations were used to determine the longitudinal association between cow’s milk fat intake and child zBMI.
Results: Among children aged 9 months to 8 years (n = 7467; 4699 of whom had repeated measures), each 1% increase in cow’s milk fat consumed was associated with a 0.05 lower zBMI score (95% CI −0.07 to −0.03, p < 0.0001) after adjustment for covariates including volume of milk consumed. Compared to children who consumed reduced fat (0.1–2%) milk, there was evidence that children who consumed whole milk had 16% lower odds of overweight (OR = 0.84, 95% CI 0.77 to 0.91, p < 0.0001) and 18% lower odds of obesity (OR = 0.82, 95% CI 0.68 to 1.00, p = 0.047).
Conclusions: Guidelines for reduced fat instead of whole cow’s milk during childhood may not be effective in preventing overweight or obesity.
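The reported coefficient implies only a modest absolute difference across the full milk-fat range; a quick check using just the abstract's numbers (a sketch, treating the linear association at face value):

```python
# From the abstract: each 1% increase in cow's milk fat is associated with
# a 0.05 lower zBMI. Extrapolated over the skim-to-whole range:
beta_per_pct_fat = -0.05   # zBMI change per 1% milk fat
fat_range = 3.25 - 0.1     # whole (3.25%) vs skim (0.1%)

zbmi_diff = beta_per_pct_fat * fat_range
print(f"predicted zBMI difference, whole vs skim: {zbmi_diff:.2f}")
```

So whole-milk drinkers are predicted to sit about 0.16 zBMI units lower than skim-milk drinkers, a small shift at the individual level even if meaningful at the population level.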
…Findings from the present study are consistent with several other studies. A recent systematic review14 and meta-analysis13 identified observational studies which examined the relationship between cow’s milk fat and child adiposity among children aged 9 months to 18 years. An association between higher cow’s milk fat and lower adiposity was found in the majority of studies, and no study identified that reduced fat milk lowered the risk of child overweight or obesity. However, the majority of the studies were considered to have high risk of bias due to cross-sectional design or lack of adjustment for potential confounding factors such as volume of milk, prior measures of adiposity, and parent BMI.13 The current study was designed to overcome these weaknesses through a large prospective cohort study with adjustment for important potentially confounding factors. Our findings are also consistent with an RCT of children aged 4–13 years which showed no evidence of a relationship between dairy fat (including milk, cheese, and yogurt) intake and child adiposity.41
Possible mechanisms underlying the observed relationship include reverse causality, where parents of leaner children provide higher cow’s milk fat and vice versa. Another possibility is that children who consume higher-fat cow’s milk may be more satiated than those who consume reduced-fat cow’s milk, leading them to consume less cow’s milk, or fewer of the other energy-dense foods that drive higher energy intake.42
“On the Opportunities and Risks of Foundation Models”, Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang (2021-08-16; backlinks; ai / scaling, economics):
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL·E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations).
Though foundation models are based on conventional deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
The history of AI is one of increasing emergence and homogenization. With the introduction of machine learning, we moved from a large proliferation of specialized algorithms that specified how to compute answers to a small number of general algorithms that learned how to compute answers (i.e. the algorithm for computing answers emerged from the learning algorithm). With the introduction of deep learning, we moved from a large proliferation of hand-engineered features for learning algorithms to a small number of architectures that could be pointed at a new domain and discover good features for that domain. Recently, the trend has continued: we have moved from a large proliferation of trained models for different tasks to a few large “foundation models” which learn general algorithms useful for solving specific tasks. BERT and GPT-3 are central examples of foundation models in language; many NLP tasks that previously required different models are now solved using finetuned or prompted versions of BERT and/or GPT-3.
Note that, while language is the main example of a domain with foundation models today, we should expect foundation models to be developed in an increasing number of domains over time. The authors call these “foundation” models to emphasize that (1) they form a fundamental building block for applications and (2) they are not themselves ready for deployment; they are simply a foundation on which applications can be built. Foundation models have become possible only recently because they depend on scale: self-supervised learning over large unlabeled datasets is what enables effective transfer to new tasks. It is particularly challenging to understand and predict the capabilities exhibited by foundation models because their multitask nature emerges from large-scale training rather than being designed in from the start, making those capabilities hard to anticipate. This is particularly unsettling because foundation models also lead to substantially increased homogenization, where everyone is using the same few models, and so any new emergent capability (or risk) is quickly distributed to everyone.
The authors argue that academia is uniquely suited to study and understand the risks of foundation models. Foundation models are going to interact with society, both in terms of the data used to create them and the effects on people who use applications built upon them. Thus, analysis of them will need to be interdisciplinary; this is best achieved in academia due to the concentration of people working in the various relevant areas. In addition, market-driven incentives need not align well with societal benefit, whereas the research mission of universities is the production and dissemination of knowledge and creation of global public goods, allowing academia to study directions that would have large societal benefit that might not be prioritized by industry.
All of this is just a summary of parts of the introduction to the report. The full report is over 150 pages and goes into detail on capabilities, applications, technologies (including technical risks), and societal implications. I’m not going to summarize it here, because it is long and a lot of it isn’t that relevant to alignment; I’ll instead note down particular points that I found interesting.
(pg. 26) Some studies have suggested that foundation models in language don’t learn linguistic constructions robustly; even if a model uses a construction correctly once, it may not do so again, especially under distribution shift. In contrast, humans can easily “slot in” new knowledge into existing linguistic constructions.
(pg. 34) This isn’t surprising but is worth repeating: many of the capabilities highlighted in the robotics section are very similar to the ones that we focus on in alignment (task specification, robustness, safety, sample efficiency).
(pg. 42) For tasks involving reasoning (e.g. mathematical proofs, program synthesis, drug discovery, computer-aided design), neural nets can be used to guide a search through a large space of possibilities. Foundation models could be helpful because (1) since they are very good at generating sequences, you can encode arbitrary actions (e.g. in theorem proving, they can use arbitrary instructions in the proof assistant language rather than being restricted to an existing database of theorems), (2) the heuristics for effective search learned in one domain could transfer well to other domains where data is scarce, and (3) they could accept multimodal input: for example, in theorem proving for geometry, a multimodal foundation model could also incorporate information from geometric diagrams.
(Section 3) A substantial portion of the report is spent discussing potential applications of foundation models. This is the most in-depth version of this I have seen; anyone aiming to forecast the impacts of AI on the real world in the next 5–10 years should likely read this section. It’s notable to me how nearly all of the applications have an emphasis on robustness and reliability, particularly in truth-telling and logical reasoning.
(Section 4.3) We’ve seen a few (AN #152) ways (AN #155) in which foundation models can be adapted. This section provides a good overview of the various methods that have been proposed in the literature. Note that adaptation is useful not just for specializing to a particular task like summarization, but also for enforcing constraints, handling distributional shifts, and more.
(pg. 92) Foundation models are commonly evaluated by their performance on downstream tasks. One limitation of this evaluation paradigm is that it makes it hard to distinguish between the benefits provided by better training, data, adaptation techniques, architectures, etc. (The authors propose a bunch of other evaluation methodologies we could use.)
(Section 4.9) There is a review of AI safety and AI alignment as it relates to foundation models, if you’re interested. (I suspect there won’t be much new for readers of this newsletter.)
(Section 4.10) The section on theory emphasizes studying the pretraining-adaptation interface, which seems quite good to me. I especially liked the emphasis on the fact that pretraining and adaptation work on different distributions, and so it will be important to make good modeling assumptions about how these distributions are related.
Dendritic spines, the postsynaptic compartments of excitatory neurotransmission, have different shapes classified from ‘stubby’ to ‘mushroom-like’. Whereas mushroom spines are essential for adult brain function, stubby spines disappear during brain maturation. It is still unclear whether and how they differ in protein composition.
To address this, we combined electron microscopy and quantitative biochemistry with super-resolution microscopy to annotate more than 47,000 spines for more than 100 synaptic targets. Surprisingly, mushroom and stubby spines have similar average protein copy numbers and topologies. However, an analysis of the correlation of each protein to the postsynaptic density mass, used as a marker of synaptic strength, showed substantially more statistically-significant results for the mushroom spines. Secretion and trafficking proteins correlated particularly poorly to the strength of stubby spines.
This suggests that stubby spines are less likely to adequately respond to dynamic changes in synaptic transmission than mushroom spines, which possibly explains their loss during brain maturation.
I think whaling is really cool. I can’t help it. It’s one of those things like guns and war and space colonization which hits the adventurous id. The idea that people used to go out in tiny boats into the middle of oceans and try to kill the biggest animals to ever exist on planet earth with glorified spears to extract organic material for fuel is awesome. It’s like something out of a fantasy novel.
So I embarked on this project to understand everything I could about whaling. I wanted to know why burning whale fat in lamps was the best way to light cities for about 50 years. I wanted to know how profitable whaling was, what the hunters were paid, and how many whaleships were lost at sea. I wanted to know why the classical image of whaling was associated with America and what other countries have whaling legacies. I wanted to know if the whaling industry wiped out the whales and if they can recover.
…Fun Fact 1: Right whale testicles make up 1% of their weight,23 so each testicle weighs around 700 pounds. The average American eats 222 pounds of meat per year (not counting fish),24 so a single right whale testicle should cover a family of 4 for almost a year.
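The fun-fact arithmetic checks out (a sketch using only the figures quoted in the text):

```python
# Right whale testicle fun-fact arithmetic.
testicle_lb = 700          # weight of one testicle, per the text
meat_per_person_lb = 222   # average American annual meat intake (excl. fish)
family_size = 4

years_of_meat = testicle_lb / (meat_per_person_lb * family_size)
print(f"one testicle feeds a family of {family_size} "
      f"for {years_of_meat:.2f} years")
```

700 lb against 888 lb of annual family consumption works out to roughly 0.79 years, i.e. "almost a year" as claimed.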
2021-yang.pdf: “Wireless multilateral devices for optogenetic studies of individual and social behaviors”, Yiyuan Yang, Mingzheng Wu, Abraham Vázquez-Guardado, Amy J. Wegener, Jose G. Grajales-Reyes, Yujun Deng, Taoyi Wang, Raudel Avila, Justin A. Moreno, Samuel Minkowicz, Vasin Dumrongprechachan, Jungyup Lee, Shuangyang Zhang, Alex A. Legaria, Yuhang Ma, Sunita Mehta, Daniel Franklin, Layne Hartman, Wubin Bai, Mengdi Han, Hangbo Zhao, Wei Lu, Yongjoon Yu, Xing Sheng, Anthony Banks, Xinge Yu, Zoe R. Donaldson, Robert W. Gereau IV, Cameron H. Good, Zhaoqian Xie, Yonggang Huang, Yevgenia Kozorovitskiy, John A. Rogers (2021-05-10; backlinks):
Advanced technologies for controlled delivery of light to targeted locations in biological tissues are essential to neuroscience research that applies optogenetics in animal models. Fully implantable, miniaturized devices with wireless control and power-harvesting strategies offer an appealing set of attributes in this context, particularly for studies that are incompatible with conventional fiber-optic approaches or battery-powered head stages. Limited programmable control and narrow options in illumination profiles constrain the use of existing devices.
The results reported here overcome these drawbacks via 2 platforms, both with real-time user programmability over multiple independent light sources, in head-mounted and back-mounted designs. Engineering studies of the optoelectronic and thermal properties of these systems define their capabilities and key design considerations.
Neuroscience applications demonstrate that induction of interbrain neuronal synchrony in the medial prefrontal cortex shapes social interaction within groups of mice, highlighting the power of real-time subject-specific programmability of the wireless optogenetic platforms introduced here.
Chemosensory anxiety signals act independent of odor concentration.
It is well documented how chemosensory anxiety signals affect the perceiver’s physiology, however, much less is known about effects on overt social behavior. The aim of the present study was to investigate the effects of chemosensory anxiety signals on trust and risk behavior in men and women.
Axillary sweat samples were collected from 22 men during the experience of social anxiety, and during a sport control condition. In a series of 5 studies, the chemosensory stimuli were presented via an olfactometer to 214 participants acting as investors in a bargaining task either in interaction with a fictitious human co-player (trust condition) or with a computer program (risk condition).
Chemosensory anxiety signals reduced trust and risk behavior in women; in men, no effects were observed.
Chemosensory anxiety may thus be transmitted contagiously, preferentially to women.
Cryo-EM structure of the bacterial flagellar motor complexed with the hook
Each subunit in the rod interlocks with adjacent subunits
10 peptides, FlgB, and FliE are adaptors that join the MS ring and the rod
The LP ring applies electrostatic forces to support rotation of the rod
The bacterial flagellar motor is a supramolecular protein machine that drives rotation of the flagellum for motility, which is essential for bacterial survival in different environments and a key determinant of pathogenicity. The detailed structure of the flagellar motor remains unknown. Here we present an atomic-resolution cryoelectron microscopy (cryo-EM) structure of the bacterial flagellar motor complexed with the hook, consisting of 175 subunits with a molecular mass of approximately 6.3 MDa. The structure reveals that 10 peptides protruding from the MS ring with the FlgB and FliE subunits mediate torque transmission from the MS ring to the rod and overcome the symmetry mismatch between the rotational and helical structures in the motor. The LP ring contacts the distal rod and applies electrostatic forces to support its rotation and torque transmission to the hook. This work provides detailed molecular insights into the structure, assembly, and torque transmission mechanisms of the flagellar motor.
Culture can be defined as all that is learned from others and is repeatedly transmitted in this way, forming traditions that may be inherited by successive generations. This cultural form of inheritance was once thought specific to humans, but research over the past 70 years has instead revealed it to be widespread in nature, permeating the lives of a diversity of animals, including all major classes of vertebrates. Recent studies suggest that culture’s reach may extend also to invertebrates—notably, insects. In the present century, the reach of animal culture has been found to extend across many different behavioral domains and to rest on a suite of social learning processes facilitated by a variety of selective biases that enhance the efficiency and adaptiveness of learning. Far-reaching implications, for disciplines from evolutionary biology to anthropology and conservation policies, are increasingly being explored.
The human trophic level (HTL) during the Pleistocene and its degree of variability serve, explicitly or tacitly, as the basis of many explanations for human evolution, behavior, and culture. Previous attempts to reconstruct the HTL have relied heavily on an analogy with recent hunter-gatherer groups’ diets. In addition to technological differences, recent findings of substantial ecological differences between the Pleistocene and the Anthropocene cast doubt regarding that analogy’s validity. Surprisingly little systematic evolution-guided evidence served to reconstruct HTL.
Here, we reconstruct the HTL during the Pleistocene by reviewing evidence for the impact of the HTL on the biological, ecological, and behavioral systems derived from various existing studies. We adopt a paleobiological and paleoecological approach, including evidence from human physiology and genetics, archaeology, paleontology, and zoology, and identify 25 sources of evidence in total. The evidence shows that the trophic level of the Homo lineage that most probably led to modern humans evolved from a low base to a high, carnivorous position during the Pleistocene, beginning with Homo habilis and peaking in Homo erectus. A reversal of that trend appears in the Upper Paleolithic, strengthening in the Mesolithic/Epipaleolithic and Neolithic, and culminating with the advent of agriculture.
We conclude that it is possible to reach a credible reconstruction of the HTL without relying on a simple analogy with recent hunter-gatherers’ diets. The memory of an adaptation to a trophic level that is embedded in modern humans’ biology in the form of genetics, metabolism, and morphology is a fruitful line of investigation of past HTLs, whose potential we have only started to explore.
Introduction: Many individuals experience persistent symptoms and a decline in health-related quality of life (HRQoL) after coronavirus disease 2019 (COVID-19) illness. Existing studies have focused on hospitalized individuals 30 to 90 days after illness onset and have reported symptoms up to 110 days after illness. Longer-term sequelae in outpatients have not been well characterized.
Methods: A longitudinal prospective cohort of adults with laboratory-confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection was enrolled at the University of Washington with a concurrent cohort of healthy patients in a control group (eAppendix in the Supplement). Electronic informed consent was obtained, and the study was approved by the University of Washington human participants institutional review board. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline. COVID-19 symptom data were obtained at the time of acute illness or retrospectively recounted at a 30-day enrollment visit. A total of 234 participants with COVID-19 were contacted between August and November 2020 to complete a single follow-up questionnaire between 3 and 9 months after illness onset. We did not perform statistical tests for this descriptive analysis because of the small numbers in each subgroup. Data analysis was conducted in R version 4.0.2 (R Project for Statistical Computing).
Results: A total of 177 of 234 participants (75.6%; mean [range] age, 48.0 [18–94] years; 101 [57.1%] women) with COVID-19 completed the survey. Overall, 11 (6.2%) were asymptomatic, 150 (84.7%) were outpatients with mild illness, and 16 (9.0%) had moderate or severe disease requiring hospitalization (Table). Hypertension was the most common comorbidity (23 [13.0%]). The follow-up survey was completed a median (range) of 169 (31–300) days after illness onset among participants with COVID-19 (Figure, A) and 87 (71–144) days after enrollment among 21 patients in the control group. Among participants with COVID-19, persistent symptoms were reported by 17 of 64 patients (26.6%) aged 18 to 39 years, 25 of 83 patients (30.1%) aged 40 to 64 years, and 13 of 30 patients (43.3%) aged 65 years and older. Overall, 49 of 150 outpatients (32.7%), 5 of 16 hospitalized patients (31.3%), and 1 of 21 healthy participants (4.8%) in the control group reported at least 1 persistent symptom. Of 31 patients with hypertension or diabetes, 11 (35.5%) experienced ongoing symptoms.
The most common persistent symptoms were fatigue (24 of 177 patients [13.6%]) and loss of sense of smell or taste (24 patients [13.6%]) (Figure B). Overall, 23 patients (13.0%) reported other symptoms, including brain fog (4 [2.3%]). A total of 51 outpatients and hospitalized patients (30.7%) reported worse HRQoL compared with baseline vs 4 healthy participants and asymptomatic patients (12.5%); 14 patients (7.9%) reported negative impacts on at least 1 activity of daily living (ADL), the most common being household chores.
Discussion: In this cohort of individuals with COVID-19 who were followed up for as long as 9 months after illness, approximately 30% reported persistent symptoms. A unique aspect of our cohort is the high proportion of outpatients with mild disease. Persistent symptoms were reported by one-third of outpatients in our study, consistent with a previously reported study, in which 36% of outpatients had not returned to baseline health by 14 to 21 days following infection. However, this has not been previously described 9 months after infection.
Consistent with existing literature, fatigue was the most commonly reported symptom. This occurred in 14% of individuals in this study, lower than the 53% to 71% reported in cohorts of hospitalized patients, likely reflecting the lower acuity of illness in our cohort. Furthermore, impairment in HRQoL has previously been reported among hospitalized patients who have recovered from COVID-19; we found 29% of outpatients reported worsened HRQoL.
Notably, 14 participants, including 9 nonhospitalized individuals, reported negative impacts on ADLs after infection. With 57.8 million cases worldwide, even a small incidence of long-term debility could have enormous health and economic consequences.
Study limitations include a small sample size, single study location, potential bias from self-reported symptoms during illness episode, and loss to follow-up of 57 participants. To our knowledge, this study presents the longest follow-up symptom assessment after COVID-19 infection. Our research indicates that the health consequences of COVID-19 extend far beyond acute infection, even among those who experience mild illness. Comprehensive long-term investigation will be necessary to fully understand the impact of this evolving viral pathogen.
As the brain develops, neurons build new connections that are refined by pruning. Gour et al 2020 used electron microscopy to build a high-resolution study of mouse postnatal brain development. The survey reveals the details of how circuits are built to incorporate inhibitory neurons in the somatosensory cortex.
Brain circuits in the neocortex develop from diverse types of neurons that migrate and form synapses. Here we quantify the circuit patterns of synaptogenesis for inhibitory interneurons in the developing mouse somatosensory cortex. We studied synaptic innervation of cell bodies, apical dendrites, and axon initial segments using 3-dimensional electron microscopy focusing on the first 4 weeks postnatally (postnatal days P5 to P28). We found that innervation of apical dendrites occurs early and specifically: Target preference is already almost at adult levels at P5. Axons innervating cell bodies, on the other hand, gradually acquire specificity from P5 to P9, likely via synaptic overabundance followed by antispecific synapse removal. Chandelier axons show first target preference by P14 but reach full target specificity only by P28, which is consistent with a combination of axon outgrowth and off-target synapse removal. This connectomic developmental profile reveals how inhibitory axons in the mouse cortex establish brain circuitry during development.
Introduction: The establishment of neuronal circuits in the cerebral cortex of mammals is an important developmental process extending over embryonic and postnatal periods, from the first occurrence of differentiated neurons to the final formation of precise synaptic innervation patterns, which are further shaped by experience. Of special interest is the establishment of inhibitory circuits, constituted by nerve cells that produce γ-aminobutyric acid as a neurotransmitter (GABAergic interneurons), which are known to form intricate neuronal networks with a distinctive degree of synaptic preference for the types of postsynaptic structures to innervate. While the time course of neuronal migration and integration of interneurons is beginning to be understood and the first molecular cues for selectively enhancing and suppressing synaptic innervation have been identified, a comprehensive mapping of cortical inhibitory innervation during postnatal development is missing.
Rationale: With the development of high-throughput 3-dimensional electron microscopy (3D EM) imaging and analysis of nervous tissue, the goal of systematically mapping neuronal connectivity in ever-increasing volumes of brain tissue has become possible. This methodological approach, connectomics, has so far been primarily aimed at comprehensive circuit mapping in complete smaller animals’ brains or parts of larger brains. An additional advantage of higher-throughput connectomic analysis, however, is the opportunity to repeat similar experiments under many experimental conditions. This advantage is particularly relevant for the study of developmental processes, which naturally require the measurement of multiple time points. In this study, we made use of these technological advances to map neuronal connectivity in 13 3D EM datasets with a focus on the primary somatosensory cortex of mouse during postnatal development.
Results: We acquired and analyzed data from layers 4 and 2–3 of mouse cortex over the period during which synaptic networks are formed within the neocortex. We studied data from mice at 5, 7, 9, 14, 28, and 56 days of age, corresponding to development from infancy to adulthood. We analyzed the formation of interneuronal synaptic preference for subsections of neurons, their cell bodies, their initial part of the axon, and apical dendrites. We found that axons with a preference for apical dendrites were the only ones already showing high target preference at the early time points measured. By contrast, preference for innervation of cell bodies was gradually established, with a peak in developmental change between postnatal days 7 and 9. During this time, preference for cell bodies increased almost 3×, and the density of synapses along these axons dropped by almost 2×. With this, we found that while for apical dendrite–preferring interneurons, mechanisms of ab initio target choice are plausible, cell body innervation could be established by the removal of inadequately placed synapses along the presynaptic axon. For the innervation of the initial section of axons, we found that axo-axonic innervation initially constitutes only a minor fraction of the innervation and develops to provide ~50% of the synaptic input to the axon initial segment. Our data indicate that synaptic preference for axon initial segments develops before the formation of special vertically oriented axonal configurations called cartridges.
Conclusion: The first comprehensive mapping of inhibitory circuit development in mammalian cortex provides quantitative insights into the formation of circuits and the precise time course for the establishment of synaptic target preference. The approach of connectomic screening also may prove useful for future studies of experimental interference with relevant genetic and environmental conditions of circuit formation in the mammalian brain.
Research on dog social cognition has received widespread attention. However, the vast majority of this research has focused on dogs’ relationships and responsiveness towards adult humans. While little research has considered dog-child interactions from a cognitive perspective, how dogs perceive and socially engage with children is critical to fully understand their interspecific social cognition. In several recent studies, dogs have been shown to exhibit behavioral synchrony, often associated with increased affiliation and social responsiveness, with their adult owners.
In the current study, we asked if family dogs would also exhibit behavioral synchrony with child family members. Our findings demonstrated that dogs engaged in all three measured components of behavioral synchrony with their child partner: activity synchrony (p < 0.0001), proximity (p < 0.0001), and orientation (p = 0.0026), at levels greater than would be expected by chance. The finding that family dogs synchronize their behavior with that of child family members may shed light on how dogs perceive familiar children. Aspects of pet dog responsiveness to human actions previously reported in studies with adult humans appear to generalize to cohabitant children in at least some cases. However, some differences between our study outcomes and those reported in the dog-adult human literature were also observed.
Given the prevalence of families with both children and dogs, and the growing popularity of child-focused animal-assisted interventions, knowledge about how dogs respond to the behavior of human children may also help inform and improve safe and successful dog-child interactions.
There have been several major outbreaks of emerging viral diseases, including Hendra, Nipah, Marburg and Ebola virus diseases, severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS)—as well as the current pandemic of coronavirus disease 2019 (COVID-19). Notably, all of these outbreaks have been linked to suspected zoonotic transmission of bat-borne viruses.
Bats, the only mammals capable of powered flight, display several additional features that are unique among mammals, such as a long lifespan relative to body size, a low rate of tumorigenesis and an exceptional ability to host viruses without presenting clinical disease. Here we discuss the mechanisms that underpin the host defence system and immune tolerance of bats, and their ramifications for human health and disease. Recent studies suggest that 64 million years of adaptive evolution have shaped the host defence system of bats to balance defence and tolerance, which has resulted in a unique ability to act as an ideal reservoir host for viruses.
Lessons from the effective host defence of bats would help us to better understand viral evolution and to better predict, prevent and control future viral spillovers. Studying the mechanisms of immune tolerance in bats could lead to new approaches to improving human health. We strongly believe that it is time to focus on bats in research for the benefit of both bats and humankind.
We integrated ubiquity, mass and lifespan of all major cell types to achieve a comprehensive quantitative description of cellular turnover.
We found a total cellular mass turnover of 80 ± 20 grams per day, dominated by blood cells and gut epithelial cells. In terms of cell numbers, close to 90% of the (0.33 ± 0.02) × 10^12 cells per day turnover was blood cells.
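A quick back-of-envelope check of how these two turnover figures relate (a hedged sketch using only the numbers quoted above; the implied per-cell mass is an illustrative derivation, not a figure reported by the paper):

```python
# Figures from the abstract (central estimates only):
total_cells_per_day = 0.33e12   # cells/day (±0.02e12)
blood_fraction = 0.90           # ~90% of number turnover is blood cells
mass_turnover_g = 80.0          # grams/day (±20)

# Number of blood cells turned over daily:
blood_cells_per_day = blood_fraction * total_cells_per_day
print(f"Blood cells per day: {blood_cells_per_day:.2e}")

# Dividing total mass turnover by total number turnover gives the
# implied average mass per turned-over cell, in picograms. This is an
# assumption-laden derivation for illustration: it shows why mass
# turnover is not dominated by the (small, numerous) blood cells alone.
implied_mass_pg = mass_turnover_g / total_cells_per_day * 1e12
print(f"Implied average cell mass: {implied_mass_pg:.0f} pg")
```

The implied average of ~240 pg per cell is far heavier than a typical red blood cell, consistent with the abstract's point that mass turnover is shared with larger cells such as gut epithelium.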
2021-asnicar.pdf: “Microbiome connections with host metabolism and habitual diet from 1,098 deeply phenotyped individuals”, Francesco Asnicar, Sarah E. Berry, Ana M. Valdes, Long H. Nguyen, Gianmarco Piccinno, David A. Drew, Emily Leeming, Rachel Gibson, Caroline Roy, Haya Al Khatib, Lucy Francis, Mohsen Mazidi, Olatz Mompeo, Mireia Valles-Colomer, Adrian Tett, Francesco Beghini, Leonard Dubois, Davide Bazzani, Andrew Maltez Thomas, Chloe Mirzayi, Asya Khleborodova, Sehyun Oh, Rachel Hine, Christopher Bonnett, Joan Capdevila, Serge Danzanvilliers, Francesca Giordano, Ludwig Geistlinger, Levi Waldron, Richard Davies, George Hadjigeorgiou, Jonathan Wolf, Jose M. Ordovas, Christopher Gardner, Paul W. Franks, Andrew T. Chan, Curtis Huttenhower, Tim D. Spector, Nicola Segata (2021-01-11; backlinks):
The gut microbiome is shaped by diet and influences host metabolism; however, these links are complex and can be unique to each individual. We performed deep metagenomic sequencing of 1,203 gut microbiomes from 1,098 individuals enrolled in the Personalised Responses to Dietary Composition Trial (PREDICT 1) study, whose detailed long-term diet information, as well as hundreds of fasting and same-meal postprandial cardiometabolic blood marker measurements were available. We found many statistically-significant associations between microbes and specific nutrients, foods, food groups and general dietary indices, which were driven especially by the presence and diversity of healthy and plant-based foods. Microbial biomarkers of obesity were reproducible across external publicly available cohorts and in agreement with circulating blood metabolites that are indicators of cardiovascular disease risk. While some microbes, such as Prevotella copri and Blastocystis spp., were indicators of favorable postprandial glucose metabolism, overall microbiome composition was predictive for a large panel of cardiometabolic blood markers including fasting and postprandial glycemic, lipemic and inflammatory indices. The panel of intestinal species associated with healthy dietary habits overlapped with those associated with favorable cardiometabolic and postprandial markers, indicating that our large-scale resource can potentially stratify the gut microbiome into generalizable health levels in individuals without clinically manifest disease.
2020-abbott.pdf: “The Mind of a Mouse”, Larry F. Abbott, Davi D. Bock, Edward M. Callaway, Winfried Denk, Catherine Dulac, Adrienne L. Fairhall, Ila Fiete, Kristen M. Harris, Moritz Helmstaedter, Viren Jain, Narayanan Kasthuri, Yann LeCun, Jeff W. Lichtman, Peter B. Littlewood, Liqun Luo, John H. R. Maunsell, R. Clay Reid, Bruce R. Rosen, Gerald M. Rubin, Terrence J. Sejnowski, H. Sebastian Seung, Karel Svoboda, David W. Tank, Doris Tsao, David C. Van Essen (2020-09-17; backlinks):
Large scientific projects in genomics and astronomy are influential not because they answer any single question but because they enable investigation of continuously arising new questions from the same data-rich sources. Advances in automated mapping of the brain’s synaptic connections (connectomics) suggest that the complicated circuits underlying brain function are ripe for analysis. We discuss benefits of mapping a mouse brain at the level of synapses.
An Unbiased Catalog of Cells and Their Synaptic Connections
Connections and Projections in the Same Animal
A Path toward Learning the Structure of Long-Term Memory
A Path toward Describing the Neuropathology of Brain Disorders
A Path toward Designing Non-biological Thinking Systems
Human challenge trials (HCTs) have been proposed as a means to accelerate SARS-CoV-2 vaccine development. We identify and discuss 3 potential use cases of HCTs in the current pandemic: evaluating efficacy, converging on correlates of protection, and improving understanding of pathogenesis and the human immune response. We outline the limitations of HCTs and find that HCTs are likely to be most useful for vaccine candidates currently in preclinical stages of development. We conclude that, while currently limited in their application, there are scenarios in which HCTs would be extremely beneficial. Therefore, the option of conducting HCTs to accelerate SARS-CoV-2 vaccine development should be preserved. As HCTs require many months of preparation, we recommend an immediate effort to (1) establish guidelines for HCTs for COVID-19; (2) take the first steps toward HCTs, including preparing challenge virus and making preliminary logistical arrangements; and (3) commit to periodically re-evaluating the utility of HCTs.
Plague, a highly infective disease caused by Yersinia pestis (Proteobacteria: Enterobacteriales), ravaged Europe for more than 450 years starting in 1347. During the Italian Plague (1629–1631), the disease rampaged across northern Italy down to Tuscany, but the city of Ferrara was relatively spared, despite maintaining economic activity, notably the important salt trade, with highly affected cities such as Milan.
The aim of the study is to evaluate the hygiene rules that were effective in preventing the spread of the plague in Ferrara in 1630, by examining historical documents and reports. According to these documents, a kind of empirical “integrated disease management” was carried out, using remedies including compounds with bactericidal, anti-parasite and repellent activity, and by technical strategies including avoidance of possible plague carriers. The anti-plague remedies and technical strategies used in ancient Ferrara are critically analysed using a multidisciplinary approach (pharmaceutic, medical, epidemiologic and entomological) and compared to current prevention protocols.
Background: Walnut consumption counteracts oxidative stress and inflammation, 2 drivers of cognitive decline. Clinical data concerning effects on cognition are lacking.
Objectives: The Walnuts And Healthy Aging study is a 2-center (Barcelona, Spain; Loma Linda, CA) randomized controlled trial examining the cognitive effects of a 2-y walnut intervention in cognitively healthy elders.
Methods: We randomly allocated 708 free-living elders (63–79 y, 68% women) to a diet enriched with walnuts at ~15% energy (30–60 g/d) or a control diet (abstention from walnuts). We administered a comprehensive neurocognitive test battery at baseline and 2 y. Change in the global cognition composite was the primary outcome. We performed repeated structural and functional brain MRI in 108 Barcelona participants.
Results: A total of 636 participants completed the intervention. Besides differences in nutrient intake, participants from Barcelona smoked more, were less educated, and had lower baseline neuropsychological test scores than those from Loma Linda. Walnuts were well tolerated and compliance was good. Modified intention-to-treat analyses (n = 657) uncovered no between-group differences in the global cognitive composite, with mean changes of −0.072 (95% CI: −0.100, −0.043) in the walnut diet group and −0.086 (95% CI: −0.115, −0.057) in the control diet group (p = 0.491). Post hoc analyses revealed statistically-significant differences in the Barcelona cohort, with unadjusted changes of −0.037 (95% CI: −0.077, 0.002) in the walnut group and −0.097 (95% CI: −0.137, −0.057) in controls (p = 0.040). Results of brain fMRI in a subset of Barcelona participants indicated greater functional network recruitment in a working memory task in controls.
Conclusions: Walnut supplementation for 2 y had no effect on cognition in healthy elders. However, brain fMRI and post hoc analyses by site suggest that walnuts might delay cognitive decline in subgroups at higher risk. These encouraging but inconclusive results warrant further investigation, particularly targeting disadvantaged populations, in whom greatest benefit could be expected.
This trial was registered at clinicaltrials.gov as NCT01634841.
Killer whales (Orcinus orca) are cooperative apex predators that have been documented foraging on a wide array of prey, ranging from small schooling fish to large cetaceans. Foraging strategies of killer whales that hunt marine mammals are complex and vary globally.
A high-risk and specialized form of killer whale foraging behavior is known as intentional stranding. During this foraging behavior, members of a group of killer whales deliberately direct themselves towards pinniped prey, accelerate towards the shore, and become temporarily stranded on their ventral surface in the surf zone.
In Patagonia, along the shores of the Peninsula Valdéz, a small population of killer whales exhibits intentional stranding, using channels between reefs and steeply sloping beaches to partially beach themselves to capture southern sea lions and southern elephant seals.
Intentional stranding has also been documented by killer whales on Possession Island in the Crozet Archipelago in the sub-Antarctic Indian Ocean. Unlike the steep beaches of Peninsula Valdéz, the two prominent beaches on Possession Island where killer whales use intentional stranding have a low grade slope. Southern elephant seals are their primary prey in this region.
Objective: To determine the temporal sequence of objectively defined subtle cognitive difficulties (Obj-SCD) in relation to amyloidosis and neurodegeneration, the current study examined the trajectories of amyloid PET and medial temporal neurodegeneration in participants with Obj-SCD relative to cognitively normal (CN) and mild cognitive impairment (MCI) groups.
Method: A total of 747 Alzheimer’s Disease Neuroimaging Initiative participants (305 CN, 153 Obj-SCD, 289 MCI) underwent neuropsychological testing and serial amyloid PET and structural MRI examinations. Linear mixed effects models examined 4-year rate of change in cortical 18F-florbetapir PET, entorhinal cortex thickness, and hippocampal volume in those classified as Obj-SCD and MCI relative to CN.
Result: Amyloid accumulation was faster in the Obj-SCD group than in the CN group; the MCI and CN groups did not statistically-significantly differ from each other. The Obj-SCD and MCI groups both demonstrated faster entorhinal cortical thinning relative to the CN group; only the MCI group exhibited faster hippocampal atrophy than CN participants.
Conclusion: Relative to CN participants, Obj-SCD was associated with faster amyloid accumulation and selective vulnerability of entorhinal cortical thinning, whereas MCI was associated with faster entorhinal and hippocampal atrophy. Findings suggest that Obj-SCD, operationally defined using sensitive neuropsychological measures, can be identified prior to or during the preclinical stage of amyloid deposition. Further, consistent with the Braak neurofibrillary staging scheme, Obj-SCD status may track with early entorhinal pathologic changes, whereas MCI may track with more widespread medial temporal change. Thus, Obj-SCD may be a sensitive and noninvasive predictor of encroaching amyloidosis and neurodegeneration, prior to frank cognitive impairment associated with MCI.
De novo-designed proteins hold great promise as building blocks for synthetic circuits, and can complement the use of engineered variants of natural proteins. One such designer protein—degronLOCKR, which is based on ‘latching orthogonal cage-key proteins’ (LOCKR) technology—is a switch that degrades a protein of interest in vivo upon induction by a genetically encoded small peptide. Here we leverage the plug-and-play nature of degronLOCKR to implement feedback control of endogenous signalling pathways and synthetic gene circuits. We first generate synthetic negative and positive feedback in the yeast mating pathway by fusing degronLOCKR to endogenous signalling molecules, illustrating the ease with which this strategy can be used to rewire complex endogenous pathways. We next evaluate feedback control mediated by degronLOCKR on a synthetic gene circuit, to quantify the feedback capabilities and operational range of the feedback control circuit. The designed nature of degronLOCKR proteins enables simple and rational modifications to tune feedback behaviour in both the synthetic circuit and the mating pathway. The ability to engineer feedback control into living cells represents an important milestone in achieving the full potential of synthetic biology. More broadly, this work demonstrates the large and untapped potential of de novo design of proteins for generating tools that implement complex synthetic functionalities in cells for biotechnological and therapeutic applications.
2019-langan.pdf: “De novo design of bioactive protein switches”, Robert A. Langan, Scott E. Boyken, Andrew H. Ng, Jennifer A. Samson, Galen Dods, Alexandra M. Westbrook, Taylor H. Nguyen, Marc J. Lajoie, Zibo Chen, Stephanie Berger, Vikram Khipple Mulligan, John E. Dueber, Walter R. P. Novak, Hana El-Samad, David Baker (2019-07-24; backlinks):
Allosteric regulation of protein function is widespread in biology, but is challenging for de novo protein design as it requires the explicit design of multiple states with comparable free energies. Here we explore the possibility of designing switchable protein systems de novo, through the modulation of competing intermolecular and intramolecular interactions. We design a static, five-helix ‘cage’ with a single interface that can interact either intramolecularly with a terminal ‘latch’ helix or intermolecularly with a peptide ‘key’. Encoded on the latch are functional motifs for binding, degradation or nuclear export that function only when the key displaces the latch from the cage. We describe orthogonal cage-key systems that function in vitro, in yeast and in mammalian cells with up to 40-fold activation of function by key. The ability to design switchable protein functions that are controlled by induced conformational change is a milestone for de novo protein design, and opens up new avenues for synthetic biology and cell engineering.
Pavement burns account for substantial burn-related injuries in the Southwestern United States and other hot climates with nearly continuous sunlight and daily maximum temperatures above 100°F. At peak temperatures, pavement can be hot enough to cause second-degree burns in a matter of seconds. The goal of this study was to review pavement burn injury admissions at a desert burn center compared with maximum ambient temperatures to determine which temperatures correlated to an increase in burn admissions. We obtained ambient temperature data from the National Oceanic and Atmospheric Administration. We reviewed our registry for 5 years retrospectively of all pavement burn injury admissions to our burn center. A total of 173 pavement-related burn cases were identified. We demonstrated an exponential increase in the rate of burn admissions as maximum ambient temperatures increased. More than 88% of pavement-related burn injury admissions occurred when the ambient temperature reached 95°F or higher. The risk per day was extrapolated based on the number of pavement burn injury admissions and the number of days at each of the maximum ambient temperatures recorded. The risk of pavement burns in areas of direct sunlight begins around 95°F and increases exponentially as ambient temperatures rise. This information will be used for burn outreach prevention and public health awareness programs. The benefit of this study relates to the entire community since high ambient temperatures put everyone at risk for hot pavement burns.
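The exponential relationship between maximum ambient temperature and admission risk that the study describes can be illustrated with a toy log-linear fit. The daily admission rates below are invented for illustration only; they are not the registry data from the study:

```python
import math

# Fit admissions ≈ a·exp(b·T) by least squares on log(admissions).
# Hypothetical daily admission rates binned by max ambient temperature:
temps = [90, 95, 100, 105, 110, 115]              # °F (illustrative bins)
admissions_per_day = [0.02, 0.05, 0.12, 0.30, 0.75, 1.8]  # invented counts

n = len(temps)
xs, ys = temps, [math.log(y) for y in admissions_per_day]
xbar, ybar = sum(xs) / n, sum(ys) / n

# Ordinary least-squares slope of log(admissions) vs temperature:
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
log_a = ybar - b * xbar

# Under an exponential model, risk doubles every ln(2)/b degrees:
print(f"risk doubles roughly every {math.log(2) / b:.1f} °F")
```

With these invented numbers the fitted risk doubles roughly every 4°F, which is the kind of steep nonlinearity that makes a 95°F threshold a natural cutoff for prevention messaging.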
Wild Lisotrigona cacciae (Nurse) and L. furva Engel were studied in their natural forest habitat at three sites in northern Thailand, May 2013–November 2014. The author, both experimenter and tear source, marked the minute bees while they drank from his eyes viewed in a mirror. All marked workers, 34 L. cacciae and 23 L. furva, came repeatedly to engorge, 34 and 27 times on average, respectively. The maximum number of times the same L. cacciae and L. furva came was 78 and 144 visits in one day, respectively; the maximum over two days was 145 visits by one L. cacciae; the maximum number of visiting days by the same bee was four over seven days by one L. furva which made 65 visits totally. The same forager may collect tears for more than 10 h in a day, on average for 3 h 15 min and 2 h 14 min for L. cacciae and L. furva, respectively. Engorging from the inner eye corner averaged 3.1 and 2.2 min, respectively, but only 1.3 and 0.9 min when settled on the lower eye lid/ciliae. The interval between consecutive visits averaged 3.3 min and 3.8 min, respectively. Lachryphagy occurred during all months of the year, with 91–320 foragers a day during the hot season and 6–280 foragers during the rainy season; tear collecting resumed after a downpour. During the cold season eye visitation was reduced to 3–64 foragers, but none left her nest when the temperature was below 22°C. Flying ranges were greater than in comparable non-lachryphagous meliponines. It is proposed that Lisotrigona colonies have workers that are, besides nectar and pollen foragers, specialized tear collectors. Tears are 200 times richer in proteins than sweat, a secretion well-known to be imbibed by many meliponines. Digestion of proteins dissolved in tears is not hampered by an exine wall as in pollen, and they have bactericidal properties.
These data corroborate the inference that Lisotrigona, which also visit other mammals, birds and reptiles, harvest lachrymation mainly for its content of proteins rather than only for salt and water.
2017-leptak.pdf: “What evidence do we need for biomarker qualification?”, Chris Leptak, Joseph P. Menetski, John A. Wagner, Jiri Aubrecht, Linda Brady, Martha Brumfield, William W. Chin, Steve Hoffmann, Gary Kelloff, Gabriela Lavezzari, Rajesh Ranganathan, John-Michael Sauer, Frank D. Sistare, Tanja Zabka, David Wholley (2017-11-22):
Biomarkers can facilitate all aspects of the drug development process. However, biomarker qualification, the formal acceptance of a biomarker by the U.S. Food and Drug Administration for a defined context of use, needs a clear, predictable process. We describe a multi-stakeholder effort including government, industry, and academia that proposes a framework for defining the amount of evidence needed for biomarker qualification. This framework is intended for broad applications across multiple biomarker categories and uses.
Our objective was to use expectancy-violation methods for determining whether Portia africana, a salticid spider that specializes in eating other spiders, is proficient at representing exact numbers of prey. In our experiments, we relied on this predator’s known capacity to gain access to prey by following pre-planned detours. After Portia first viewed a scene consisting of a particular number of prey items, it could then take a detour during which the scene went out of view. Upon reaching a tower at the end of the detour, Portia could again view a scene, but now the number of prey items might be different. We found that, compared with control trials in which the number was the same as before, Portia’s behaviour was statistically-significantly different in most instances when we made the following changes in number: 1 versus 2, 1 versus 3, 1 versus 4, 2 versus 3, 2 versus 4 or 2 versus 6. These effects were independent of whether the larger number was seen first or second. No statistically-significant effects were evident when the number of prey changed between 3 versus 4 or 3 versus 6. When we changed prey size and arrangement while keeping prey number constant, no statistically-significant effects were detected. Our findings suggest that Portia represents 1 and 2 as discrete number categories, but categorizes 3 or more as a single category that we call ‘many’.
2017-mosialou.pdf: “MC4R-dependent suppression of appetite by bone-derived lipocalin 2”, Ioanna Mosialou, Steven Shikhel, Jian-Min Liu, Antonio Maurizi, Na Luo, Zhenyan He, Yiru Huang, Haihong Zong, Richard A. Friedman, Jonathan Barasch, Patricia Lanzano, Liyong Deng, Rudolph L. Leibel, Mishaela Rubin, Thomas Nicholas, Wendy Chung, Lori M. Zeltser, Kevin W. Williams, Jeffrey E. Pessin, Stavroula Kousteni
A plant growth promoting endophyte, Paenibacillus polymyxa P2b-2R, originally isolated from a lodgepole pine seedling, and its green fluorescent protein (GFP) derivative, P2b-2Rgfp, were evaluated for their ability to survive, fix atmospheric nitrogen (N) and promote plant growth when inoculated into corn (Zea mays L.) in a long-term trial. We also examined the effects of GFP-tagging of P2b-2R on its ability to promote growth of corn seedlings over the long term.
Corn seedlings were inoculated with either strain P2b-2R or P2b-2Rgfp and non-inoculated seedlings were treated as controls. Seedlings were harvested after 3 months and evaluated for plant growth promotion (length and biomass) and N fixation (15N foliar dilution assay). Colonization and survival of P2b-2R and P2b-2Rgfp outside (rhizosphere) and inside (internal tissues) the inoculated seedlings were also determined.
Both strains survived inside and outside corn seedlings forming rhizospheric and endophytic colonies in stem and root tissues. Inoculation by P2b-2R strain promoted corn plant growth via enhancing seedling length and biomass by 52% and 53%, respectively. Similarly, P2b-2Rgfp inoculation enhanced seedling length by 68% and biomass by 67%. Corn seedlings inoculated with strain P2b-2R derived 30% of foliar N from the atmosphere and seedlings inoculated with P2b-2Rgfp derived 32% of foliar N from the atmosphere. But there was no statistically-significant difference between P2b-2R and P2b-2Rgfp treated seedlings in terms of overall seedling length, biomass and amount of N fixed in this long-term trial.
These results, combined with the results from an earlier study, suggest that P. polymyxa P2b-2R and its GFP-tagged derivative are capable of enhancing overall plant growth throughout the life cycle of the corn plant.
Here we use the acellular slime mould P. polycephalum to study decision making.
We use foraging and network construction as experimental paradigms.
Our work reveals the underlying basic mechanisms that organisms use to make decisions.
We think that the slime mould can be developed further to function as a “model brain”.
Because of its peculiar biology and the ease with which it can be cultured, the acellular slime mould Physarum polycephalum has long been a model organism in a range of disciplines. Due to its macroscopic, syncytial nature, it is no surprise that it has been a favourite amongst cell biologists. Its inclusion in the experimental tool kit of behavioural ecologists is much more recent. These recent studies have certainly paid off. They have shown that, for an organism that lacks a brain or central nervous system, P. polycephalum shows rather complex behaviour. For example, it is capable of finding the shortest path through a maze, it can construct networks as efficient as those designed by humans, it can solve computationally difficult puzzles, it makes multi-objective foraging decisions, it balances its nutrient intake and it even behaves irrationally. Are the slime mould’s achievements simply “cute”, worthy of mentioning in passing but nothing to take too seriously? Or do they hint at the fundamental processes underlying all decision making? We will address this question after reviewing the decision-making abilities of the slime mould.
Vitrification simplifies and frequently improves cryopreservation because it eliminates mechanical injury from ice, eliminates the need to find optimal cooling and warming rates, eliminates the importance of differing optimal cooling and warming rates for cells in mixed cell type populations, eliminates the need to find a frequently imperfect compromise between solution effects injury and intracellular ice formation, and enables cooling to be rapid enough to “outrun” chilling injury, but it complicates the osmotic effects of adding and removing cryoprotective agents and introduces a greater risk of cryoprotectant toxicity during the addition and removal of cryoprotectants.
Fortunately, a large number of remedies for the latter problem have been discovered over the past 30+ years, and the former problem can in most cases be eliminated or adequately controlled by careful attention to technique. Vitrification is therefore beginning to realize its potential for enabling the superior and convenient cryopreservation of most types of biological systems (including molecules, cells, tissues, organs, and even some whole organisms), and vitrification is even beginning to be recognized as a successful strategy of nature for surviving harsh environmental conditions.
However, many investigators who employ vitrification or what they incorrectly imagine to be vitrification have only a rudimentary understanding of the basic principles of this relatively new and emerging approach to cryopreservation, and this often limits the practical results that can be achieved. A better understanding may therefore help to improve present results while pointing the way to new strategies that may be yet more successful in the future.
To assist this understanding, this chapter describes the basic principles of vitrification and indicates the broad potential biological relevance of vitrification.
The development of shell-less culture methods for bird embryos with high hatchability would be useful for the efficient generation of transgenic chickens, embryo manipulations, tissue engineering, and basic studies in regenerative medicine. To date, studies of culture methods for bird embryos include whole-embryo culture using narrow-windowed eggshells, surrogate eggshells, and an artificial vessel using a gas-permeable membrane. However, there are no reports achieving high hatchability of >50% using completely artificial vessels. To establish a simple method for culturing chick embryos with high hatchability, we examined various culture conditions, including methods for calcium supplementation and oxygen aeration. When embryos were transferred to the culture vessel after 55–56 h of incubation, more than 90% survived until day 17 when a polymethylpentene film was used as the culture vessel with supplementation of calcium lactate and distilled water. Aerating the surviving embryos with pure oxygen from day 17 yielded a hatchability of 57.1% (8 out of 14). Thus, we successfully achieved high hatchability in chicken embryo culture using an artificial vessel.
Comprehensive high-resolution structural maps are central to functional exploration and understanding in biology. For the nervous system, in which high resolution and large spatial extent are both needed, such maps are scarce as they challenge data acquisition and analysis capabilities.
Here we present for the mouse inner plexiform layer—the main computational neuropil region in the mammalian retina—the dense reconstruction of 950 neurons and their mutual contacts. This was achieved by applying a combination of crowd-sourced manual annotation and machine-learning-based volume segmentation to serial block-face electron microscopy data.
We characterize a new type of retinal bipolar interneuron and show that we can subdivide a known type based on connectivity. Circuit motifs that emerge from our data indicate a functional mechanism for a known cellular response in a ganglion cell that detects localized motion, and predict that another ganglion cell is motion sensitive.
In this case report, the authors describe a 48-year-old male who complained to his primary care physician of abdominal discomfort and yellow/orange skin discoloration. Physical examination was normal except for some mild mid-abdominal discomfort (no observed skin color changes). An abdominal CT scan indicated a colon that was full of stool. Laboratory studies indicated elevated liver enzymes. Upon further questioning, the patient reported ingesting 6–7 pounds of carrots per week to facilitate his dieting effort. The patient was diagnosed with constipation, hypercarotenemia, and possible vitamin A toxicity. Following the cessation of excessive carrot ingestion, his liver enzymes normalized within 1 month.
Animals housed with running wheels and subjected to daily food restriction show paradoxical reductions in food intake and increases in running wheel activity.
This phenomenon, known as activity-based anorexia (ABA), leads to marked reductions in body weight that can ultimately lead to death. Recently, ABA has been proposed as a model of anorexia nervosa (AN). AN affects about 8 per 100,000 females and has the highest mortality rate among all psychiatric illnesses. Given the reductions in quality of life, high mortality rate, and the lack of pharmacological treatments for AN, a better understanding of the mechanisms underlying AN-like behavior is greatly needed.
This chapter provides basic guidelines for conducting ABA experiments using mice. The ABA mouse model provides an important tool for investigating the neurobiological underpinnings of AN-like behavior and identifying novel treatments.
The proper connectivity between neurons is essential for the implementation of the algorithms used in neural computations, such as the detection of directed motion by the retina. The analysis of neuronal connectivity is possible with electron microscopy, but technological limitations have impeded the acquisition of high-resolution data on a large enough scale.
Here we show, using serial block-face electron microscopy and 2-photon calcium imaging, that the dendrites of mouse starburst amacrine cells make highly specific synapses with direction-selective ganglion cells depending on the ganglion cell’s preferred direction. Our findings indicate that a structural (wiring) asymmetry contributes to the computation of direction selectivity. The nature of this asymmetry supports some models of direction selectivity and rules out others. It also puts constraints on the developmental mechanisms behind the formation of synaptic connections.
Our study demonstrates how otherwise intractable neurobiological questions can be addressed by combining functional imaging with the analysis of neuronal connectivity using large-scale electron microscopy.
The introduction of smallpox vaccination after the publication of Edward Jenner’s An Inquiry into the Causes and Effects of Variolae Vaccinae depended on the spread of cowpox, a relatively rare disease. How Europeans and their colonial allies transported and maintained cowpox in new environments is a social and technological story involving a broad range of individuals from physicians and surgeons to philanthropists, ministers, and colonial administrators. Putting cowpox in new places also meant developing new techniques and organizations. This essay focuses on the actual practices of vaccination and their environmental contexts in order to illuminate the dynamic exchanges of materials, images, and ideas that made the spread of vaccination possible.
[Keywords: smallpox vaccination, cowpox, vaccine technologies, visual language/medical illustrations, environmental history of disease]
[2-page popular summary of Sueda et al 2008: a survey of dog owners about plant-eating found that it was common, usually didn’t seem related to illness, occasionally triggered vomiting, younger dogs did it more, and no diet appeared correlated with plant-eating. Similar preliminary results for cats are mentioned. Hart interprets these results as support for his theory that plant-eating is an evolved behavior intended to help control intestinal parasites, mechanically through fiber going through the intestines but also partially through vomiting.]
Grass or plant eating is a widely recognized behaviour amongst domestic dogs. We first estimated the prevalence of plant eating by administering a written survey to owners of healthy dogs visiting the outpatient service of a veterinary medical teaching hospital for routine health maintenance procedures. Of 47 owners systematically surveyed whose dogs had daily exposure to plants, 79% reported that their dog had eaten grass or other plants. Using an internet survey targeting owners of plant-eating dogs, we then acquired information regarding the frequency and type of plants eaten, frequency with which dogs appeared ill before eating plants and frequency with which vomiting was seen afterwards. Of 3340 surveys returned, 1571 met enrollment criteria. Overall, 68% of dogs were reported to eat plants on a daily or weekly basis with the remainder eating plants once a month or less. Grass was the most frequently eaten plant by 79% of dogs. Only 9% were reported to frequently appear ill before eating plants and only 22% were reported to frequently vomit afterwards. While no relationship was found between sex, gonadal status, breed group or diet type with regard to frequency or type of plants eaten, a younger age was statistically-significantly associated with: (1) an increase in frequency of plant eating; (2) an increase in consuming non-grass plants; (3) a decrease in regularly showing signs of illness before eating plants and (4) a decrease in regularly vomiting after consuming plants. The findings support the perspective that plant eating is a normal behaviour of domestic dogs.
This paper uses the 1918 influenza pandemic as a natural experiment for testing the fetal origins hypothesis. The pandemic arrived unexpectedly in the fall of 1918 and had largely subsided by January 1919, generating sharp predictions for long-term effects. Data from the 1960–80 decennial U.S. Census indicate that cohorts in utero during the pandemic displayed reduced educational attainment, increased rates of physical disability, lower income, lower socioeconomic status, and higher transfer payments compared with other birth cohorts. These results indicate that investments in fetal health can increase human capital.
Portia may be about the size of a fat raisin, with eyes no larger than sesame seeds, yet it has a visual acuity that beats a cat or a pigeon. The human eye is better, but only about five times better. So from a safe distance a foot or two away, Portia sits scanning Scytodes, looking to see if it is carrying an egg sac in its fangs… The retinas of its principal eyes have only about a thousand receptors compared to the 200 million or so of the human eyeball. But Portia can swivel these tiny eyes across the scene in systematic fashion, patiently building up an image point by point. Having rejected a few alternative routes, Portia makes up its mind and disappears from sight. A couple of hours later, the silent assassin is back, dropping straight down on Scytodes from a convenient rock overhang on a silk dragline—looking like something out of the movie Mission Impossible. Once again, Portia’s guile wins the day.
…Undoubtedly many of Portia’s cognitive abilities are genetic. Laboratory tests carried out by Robert Jackson, chief of Canterbury’s spider unit, have shown that only Portia from the particular area where Scytodes is common can recognise the difference between an egg sac carrying and non-egg sac carrying specimen. And it is a visual skill they are born with. The same species of Portia trapped a few hundred miles away doesn’t show any evidence of seeing the egg sac. But as Jackson points out, this just deepens the mystery. First there is the fact that such a specific mental behaviour as looking for an egg sac could be wired into a spider’s genome. And then there is the realisation that this is a population-specific, not species-specific, trait! It is a bit of locally acquired genetic knowledge. How does any simple hardwiring story account for that?
… “The White Tail can pluck, but only in a programmed, stereotyped, way. It doesn’t bother with tactics, or experimenting, or looking to see which way the other spider is facing. It just charges in and overpowers its prey with its size. Portia is a really weedy little spider and has to spend ages planning a careful attack. But its eyesight and trial and error approach means it can tackle any sort of web spider it comes across, even ones it has never met before in the history of its species”, says Harland. While Portia’s deception skills are impressive, the real admiration is reserved for its ability to plot a path to its victim. For an instinctive animal, out of sight is supposed to be out of mind. But Portia can take several hours to get into the right spot, even if it means losing sight of its prey for long periods.
…As a maze to be worked out from a single viewing—and with no previous experience of such mazes—this would be a tall order even for a rat or monkey. Yet more often than not, Portia could identify the right path. There was nothing quick about it. Portia would sit on top of the dowel for up to an hour, twisting to and fro as it appeared to track its eyes across the various possible routes. Sometimes it couldn’t decide and would just give up. However, once it had a plan, it would clamber down and pick the correct wire, even if this meant at first heading back behind where it had been perched. And walking right past the other wire. Harland says it seems that Portia can see where it has to get to in order to start its journey and ignore distractions along the way. This impression was strengthened by the fact that on trials where Portia made a wrong choice, it often gave up upon reaching the first high bend of the wire—even though the bait was not yet in sight. It was as if Portia knew where it should be in the apparatus and could tell straight away when it had made a dumb mistake.
Crazy talk, obviously. There just ain’t room in Portia’s tiny head for anything approaching a plan, an expectation, or any other kind of inner life. The human brain has some 100 billion neurons, or brain cells, and even a mouse has around 70 million. Harland says no one has done a precise count on Portia but it is reckoned to have about 600,000 neurons, putting it midway between the quarter million of a housefly and the one million of a honey bee. Yet in the lab over the past few years, Portia has kept on surprising.
…Rather controversially, Li calls this the forming of a search image. Yet even if this mental priming is reduced to some thoroughly robotic explanation, such as an enhanced sensitivity of certain prey-recognising circuits and a matching damping of others, it still says that there is a general shift in the running state of Portia’s nervous system. Portia is responding in a globally cohesive fashion and is not just a loose bundle of automatic routines.
…Harland says Portia’s eyesight is the place to start. Jumping spiders already have excellent vision and Portia’s is ten times as good, making it sharper than most mammals. However, being so small, Portia faces a trade-off: it can only focus its eyes on a tiny spot. It has to build up a picture of the world by scanning almost pixel by pixel across the visual scene. Whatever Portia ends up seeing, the information is accumulated slowly, as if peering through a keyhole, over many minutes. So there might be something a little like visual experience, but nothing like a full and “all at once” experience of a visual field. Harland feels that the serial nature of this scanning vision also makes it easier to imagine how prey recognition and other such decision processes could be controlled by some quite stereotyped genetic programs. When Portia is looking for an egg sac obscuring the face of Scytodes, it wouldn’t need to be representing the scene as a visual whole. Instead it could be checking a template, ticking off critical features in a sequence of fixations. In such a case, the less the eye sees with each fixation, perhaps the better. The human brain has to cope with a flood of information. Much of the work lies in discovering what to ignore about any moment. So the laser-like focus of Portia’s eyes might do much of this filtering by default. Yet while much of Portia’s mental abilities may reduce to the way its carefully designed eyes are coupled to largely reflexive motor patterns, Harland says there is still a disconcerting plasticity in its gene-encoded knowledge of the world. If one population of Portia can recognise an egg-carrying Scytodes but specimens from another region can’t, then this seems something quite new—a level of learning somewhere in-between the brain of an individual and the genome of a species… As Harland says, Portia just doesn’t fit anyone’s theories right at the moment.
Three species of Portia (Portia africana from Kenya, Portia fimbriata from Australia and Portia labiata from the Philippines) were tested with flies Drosophila immigrans and Musca domestica and with web-building spiders Badumna longinquus and Pholcus phalangioides. Badumna longinquus has powerful chelicerae, but not especially long legs, whereas Ph. phalangioides has exceptionally long legs, but only small, weak chelicerae. Typically, Portia sighted flies, walked directly towards them and attacked without adjusting orientation. However, Portia’s attacks on the spiders were aimed primarily at the cephalothorax instead of the legs or abdomen. Portia usually targeted the posterior-dorsal region of B. longinquus’ cephalothorax by attacking this species from above and behind. When the prey was Ph. phalangioides, attack orientation was defined primarily by opportunistic gaps between this species’ long legs (gaps through which Portia could contact the pholcid’s body without contacting one of the pholcid’s legs). Portia’s attack strategy appears to be an adjustment to the different types of risk posed by different types of prey.
Objective: To determine the role of clinicians in the discovery of off-label use of prescription drugs approved by the United States Food and Drug Administration (FDA).
Data Sources: Micromedex Healthcare Series was used to identify new uses of new molecular entities approved by the FDA in 1998, literature from January 1999–December 2003 was accessed through MEDLINE, and relevant patents were identified through the U.S. Patent and Trademark Office.
Data Synthesis and Main Finding: A survey of new therapeutic uses for new molecular entity drugs approved in 1998 was conducted for the subsequent 5 years of commercial availability. During that period, 143 new applications were identified in a computerized search of the literature for the 29 new drugs considered and approved in 1998. Literature and patent searches were conducted to identify the first report of each new application. Authors of the seminal articles were contacted through an electronic survey to determine whether they were in fact the originators of the new applications. If they were, examination of article content and author surveys were used to explore if each new application was discovered through clinical practice that was independent of pharmaceutical company or university research (field discovery) or if the discovery was made by or with the involvement of pharmaceutical company or university researchers (central discovery). 82 (57%) of the 143 drug therapy innovations in our sample were discovered by practicing clinicians through field discovery.
Conclusion: To our knowledge, the major role of clinicians in the discovery of new, off-label drug therapies has not been previously documented or explored. We propose that this finding has important regulatory and health policy implications.
In conclusion, fencing tempo is a vital element of swordsmanship, but clearly for the duelist hitting before being hit is not at all the same thing as hitting without being hit. Exsanguination is the principal mechanism of death caused by stabbing and incising wounds and death by this means is seldom instantaneous. Although stab wounds to the heart are generally imagined to be instantly incapacitating, numerous modern medical case histories indicate that while victims of such wounds may immediately collapse upon being wounded, rapid disability from this type of wound is by no means certain. Many present-day victims of penetrating wounds involving the lungs and the great vessels of the thorax have also demonstrated a remarkable ability to remain physically active minutes to hours after their wounds were inflicted. These cases are consistent with reports of duelists who, subsequent to having been grievously or even mortally wounded through the chest, neck, or abdomen, nevertheless remained actively engaged upon the terrain and fully able to continue long enough to dispatch those who had wounded them.
…Early American motion pictures have frequently misrepresented virtually every aspect of authentic swordplay. This seems to have been especially true of the industry’s depiction of the manner in which swordsmen fell before the blades of their opponents. While anecdotes of duels may have been biased by politics or personal vanity, modern forensic medicine provides ample evidence to support historical accounts of gravely wounded duelists continuing in combats for surprising lengths of time, sometimes killing those who had killed them.
In the first installment of this essay, modern forensic evidence indicated that exsanguination is the principal mechanism of death caused by stabbing and incising wounds, but that death by this means is seldom instantaneous; victims frequently remain capable of continued physical activity, even after being stabbed in the heart. Similarly, victims of sharp force injuries to the lungs are not infrequently able to carry on for protracted periods of time. Wounds which result in the introduction of blood into the upper airway, on the other hand, are likely to incapacitate and kill an adversary quite rapidly.
Duels featuring penetrating wounds to the muscles of the sword arm appear in some cases to have left duelists fully capable of manipulating their weapons. Thrusts to the thigh and leg may have been even less efficacious. Strokes with the cutting edges of swords to the limbs may result in more serious wounds to the musculature than the penetrating variety, but historical accounts of duels demonstrate that immediate incapacitation of an adversary stricken with such wounds was by no means guaranteed. Incising wounds which sever tendons, however, can be expected to immediately incapacitate the muscles from which they arise. Recent medical reports of sharp force injuries to the brain suggest that even a sword-thrust penetrating the skull ought not to have been expected always to disable an opponent instantaneously. While severe pain is usually incapacitating, the stress of combat may mask the pain of gravely serious wounds, enabling the determined duelist to remain on the ground for a considerable length of time.
The immediate consequences to a duelist of wounds inflicted by thrusts or cuts from the rapier, dueling sabre or smallsword were unpredictable. While historical anecdotes of affairs of honor and twentieth century medical reports show that many stabbing victims collapsed immediately upon being wounded, others did not. While a swordsman certainly gained no advantage for having been wounded, it cannot be said that an unscathed adversary, after having delivered a fatal thrust or cut, had no further concern for his safety. Duelists receiving serious and even mortal wounds were sometimes able to continue effectively in the combat long enough to take the lives of those who had taken theirs…For the duelist, however, another form of tempo had to be considered. In the early history of affairs of honor, this “dueling tempo” spanned the period extending from the moment that a wound was inflicted until the instant that the adversary was no longer able to continue effectively. This span of time was unpredictable in length and could be expressed in terms ranging from a fraction of a second to minutes. Considering the number and severity of wounds that were sustained by combatants in the early days of the duel, it would not be surprising to find that many duelists of latter days secretly breathed a sigh of relief when interrupted by seconds rushing in to terminate affairs of honor immediately upon the delivery of a well placed cut or thrust.
Portia is a genus of web-invading araneophagic (spider eating) jumping spiders known from earlier studies to derive aggressive-mimicry signals by using a generate-and-test (trial and error) algorithm. We studied individuals of Portia labiata from two populations (Los Baños and Sagada) in the Philippines that have previously been shown to differ in the level to which they rely on trial-and-error derivation of signals for prey capture (Los Baños relied on trial and error more strongly than Sagada P. labiata).
Here we investigated P. labiata’s use of trial and error in a novel situation (a confinement problem: how to escape from an island surrounded by water) that is unlikely to correspond closely to anything the spider would encounter in nature. During Experiment 1, spiders chose between two potential escape tactics (leap or swim), one of which was set at random to fail (brought spider no closer to edge of tray) and the other of which was set to partially succeed (brought spider closer to edge of tray). By using trial and error, the Los Baños P. labiata solved the confinement problem statistically-significantly more often than the Sagada P. labiata in Experiment 1, both when the correct choices were positively reinforced (ie., when the spider was moved closer to edge of tray) and when incorrect choices were punished (ie., when the spider got no closer to edge of tray). In Experiment 2, the test individual’s first choice was always set to fail, and P. labiata was given repeated opportunities to respond to feedback, yet the Sagada P. labiata continued to place little reliance on trial and error for solving the confinement problem.
That the Los Baños P. labiata relied more strongly on trial-and-error problem solving than the Sagada P. labiata has now been demonstrated across two different tasks.
Caribou drive systems made of stone lines and cairns [inuksuit] are a common feature of the far north but have been little studied by archaeologists. Two communal caribou kill sites from southern Victoria Island, Nunavut, Canada are discussed and illustrated. The Eggington site is a single-line drive where herds of caribou were directed through a saddle between two hills and killed from shooting pits. The POD site is a V-shaped funnel with two prominent lines of cairns and stone walls ending with opposing shooting pits. The sites, of uncertain age, are similar to those described by Jenness for the historic Caribou Inuit. Critical aspects of landscape and caribou behavior/biology that were manipulated to achieve the kills include the nature of the terrain, sense of smell and eyesight, wind, and the reaction of caribou to motion. Caribou drives, though often devoid of artifacts, have the power to reveal the sophisticated systems of knowledge that enabled successful communal kills.
To clarify the oxidative metabolism of methadone (R)- and (S)-enantiomers, the depletion of parent (R)- and (S)-methadone and the formation of racemic 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine were studied using human liver microsomes and recombinant cytochrome P450 enzymes. Based on studies with isoform-selective chemical inhibitors and expressed enzymes, CYP3A4 was the predominant enzyme involved in the metabolism of (R)-methadone. However, it has different stereoselectivity toward (R)- and (S)-methadone. In recombinant CYP3A4, the metabolic clearance of (R)-methadone was about 4-fold higher than that of (S)-methadone. CYP2C8 is also involved in the metabolism of methadone, but its contribution to the metabolism of (R)-methadone was smaller than that of CYP3A4. For the metabolism of (S)-methadone, however, the roles of CYP2C8 and CYP3A4 appeared equal. Although CYP2D6 is involved in the metabolism of (R)- and (S)-methadone, its role was smaller compared with CYP3A4 and CYP2C8. Using clinically relevant concentrations of ketoconazole (1 μM, selective CYP3A4 inhibitor), trimethoprim (100 μM, selective CYP2C8 inhibitor), and paroxetine (5 μM, potent CYP2D6 inhibitor), these inhibitors decreased the hepatic metabolism of (R)-[(S)-]methadone by 69% (47%), 22% (51%), and 41% (77%), respectively. However, inhibition of the metabolism of (R)- and (S)-methadone by paroxetine was due to inhibition not only of CYP2D6, but also CYP3A4 and, to a minor extent, CYP2C8. The present in vitro findings indicated that CYP3A4, CYP2C8, and CYP2D6 are all involved in the stereoselective metabolism of methadone (R)- and (S)-enantiomers. These data suggest that coadministration of inhibitors of CYP3A4 and CYP2C8 may produce clinically-significant drug-drug interactions with methadone.
Bone cells are organized into an interconnected network, which extends from the osteocytes within bone to the osteoblasts and lining cells on the bone surfaces.
There is experimental evidence suggesting that bone tissue exhibits basic properties of short-term and long-term memory. An analogy might be made between the bone cell network and neuronal systems. For instance, recent studies suggest that the neurotransmitter glutamate may play a role in cell-to-cell communication among bone cells. Glutamate is a key neurotransmitter involved in learning and memory in reflex loops and the hippocampus.
The simplest forms of memory include habituation (desensitization) and sensitization. It is argued that bone cells exhibit habituation to repeated mechanical stimuli and sensitization to mechanical loading by parathyroid hormone (PTH). Acquired long-term memory of a mechanical loading environment may influence the responsiveness of bone tissue to external stimuli. For instance, bone tissue from the skull shows markedly different responses to several stimuli, e.g., mechanical loading, disuse, and PTH, compared with long bones.
We speculate that the history of weight bearing imparts long-term cellular memory to the bone cell network that modulates the cellular response to a wide variety of stimuli.
Purpose: The beneficial effects of exercise on bone mass and strength can be attributed to the sensitivity of bone cells to mechanical stimuli. However, bone cells lose mechanosensitivity soon after they are stimulated. We investigated whether the osteogenic response to a simulated high-impact exercise program lasting 4 months could be enhanced by dividing the daily protocol into brief sessions of loading, separated by recovery periods.
Methods: The right forelimbs of adult rats were subjected to 360 load cycles/d, 3 d/wk, for 16 wk. On each loading day, one group received all 360 cycles in a single, uninterrupted bout (360×1); the other group received 4 bouts of 90 cycles/bout (90×4), with each bout separated by 3 h. After sacrifice, bone mineral content (BMC) and areal bone mineral density (aBMD) were measured in the loaded (right) and non-loaded control (left) ulnae using DXA. Volumetric BMD (vBMD) and cross-sectional area (CSA) were measured at the midshaft and the olecranon using pQCT. Maximum and minimum second moments of area (IMAX and IMIN) were measured from the midshaft tomographs.
Results: After 16 wk of loading, BMC, aBMD, vBMD, midshaft CSA, IMAX, and IMIN were statistically-significantly greater in right (loaded) ulnae compared with left (non-loaded) ulnae in the 2 loaded groups. When the daily loading regimen was broken into 4 sessions per day (90×4), BMC, aBMD, midshaft CSA, and IMIN improved statistically-significantly over the loading schedule that applied the daily stimulus in a single, uninterrupted session (360×1).
Conclusion: Human exercise programs aimed at maintaining or improving bone mass might achieve greater success if the daily exercise regime is broken down into smaller sessions separated by recovery periods.
[Keywords: mechanical loading, bone adaptation, recovery, exercise, osteoporosis, BMD]
Portia is a genus of web-invading araneophagic jumping spiders known from earlier studies to derive aggressive-mimicry signals by using a generate-and-test algorithm (trial-and-error tactic). Here P. fimbriata’s use of trial-and-error to solve a confinement problem (how to escape from an island surrounded by water) is investigated. Spiders choose between two potential escape tactics (leap or swim), one of which will fail (bring spider no closer to edge of tray) and the other of which will partially succeed (bring spider closer to edge of tray). The particular choice that will partially succeed is unknown to the spider. Using trial-and-error, P. fimbriata solves the confinement problem both when correct choices are rewarded (ie. when the spider is moved closer to edge of tray) and when incorrect choices are punished (ie. when the spider gets no closer to edge of tray).
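The generate-and-test scheme described in these experiments can be caricatured in a few lines: pick a tactic at random, keep it while it yields progress, switch when it fails. This is a toy sketch under invented parameters (the tactic names and the `works` mapping are assumptions standing in for the experimenters' randomized setup, not data from the papers):

```python
import random

def solve_confinement(works, max_trials=20, rng=random):
    """Generate-and-test: try a tactic; repeat it on success
    (positive feedback), switch on failure (negative feedback).
    `works` maps tactic name -> True if it brings the spider
    closer to the edge of the tray."""
    tactics = list(works)
    choice = rng.choice(tactics)          # generate an initial tactic
    for trial in range(1, max_trials + 1):
        if works[choice]:                 # test: did it bring progress?
            return choice, trial          # keep the tactic that succeeded
        # no progress -- generate a different tactic and retry
        choice = [t for t in tactics if t != choice][0]
    return None, max_trials

# One of the two tactics is set (by the experimenter) to partially succeed:
tactic, trials = solve_confinement({"leap": False, "swim": True})
print(tactic, trials)  # settles on "swim" within 2 trials
```

With only two tactics the loop converges almost immediately; the experimental interest lies in whether the spider uses the feedback at all, which is what distinguished the Los Baños from the Sagada populations in the companion study.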
When physiological adaptation is insufficient, hosts have developed behavioral responses to avoid or limit contact with parasites. One such behavior, leaf-swallowing, occurs widely among the African great apes. This behavior involves the slow and deliberate swallowing without chewing of whole bristly leaves. Folded one at a time between tongue and palate, the leaves pass through the gastro-intestinal (GI) tract visibly unchanged.
Independent studies in two populations of chimpanzees (Pan troglodytes schweinfurthii) showed statistically-significant correlations between the swallowing of whole leaves and the expulsion of the nodule worm Oesophagostomum stephanostomum and a species of tapeworm (Bertiella studeri). We integrate behavioral, parasitological and physiological observations pertaining to leaf-swallowing to elucidate the behavioral mechanism responsible for the expulsion and control of nodule worm infections by the ape host.
Physical irritation produced by bristly leaves swallowed on an empty stomach increases motility and secretion, resulting in diarrhea which rapidly moves leaves through the GI tract. In the proximal hindgut, the site of third-stage larvae (L3) cyst formation and adult worm attachment, motility, secretion, and the scouring effect of rough leaves are enhanced by haustral contractions and peristalsis-antiperistalsis. Frequently, at the peak of reinfection, a proportion of nonencysted L3 is also predictably vulnerable. These factors should result in the disruption of the life cycle of Oesophagostomum spp. Repeated flushing during peak periods of reinfection is probably responsible for long-run reduction of worm burdens at certain times of the year.
Accordingly, leaf-swallowing can be viewed as a deliberate adaptive behavioral strategy with physiological consequences for the host. The expulsion of worms based on the activation of basic physiological responses in the host is a hitherto undescribed form of parasite control.
Background: Rhinotillexomania is a recent term coined to describe compulsive nose picking. There is little world literature on nose-picking behavior in the general population.
Method: We studied nose-picking behavior in a sample of 200 adolescents from 4 urban schools.
Results: Almost the entire sample admitted to nose picking [193⁄200 = 96.5%], with a median frequency of 4 times per day; the frequency was >20 times per day in 7.6% of the sample. Nearly 17% of subjects considered that they had a serious nose-picking problem. Other somatic habits such as nail biting, scratching in a specific spot, or pulling out of hair were also common; 3 or more such behaviors were simultaneously present in 14.2% of the sample, only in males. Occasional nose bleeds complicating nose-picking occurred in 25% of subjects. Several interesting findings in specific categories of nose pickers were identified.
Conclusion: Nose picking is common in adolescents. It is often associated with other habitual behaviors. Nose picking may merit closer epidemiologic and nosological scrutiny.
…A need in this study was to identify and eliminate mischievous responses, such as might be expected from adolescent school children who are invited to complete a questionnaire on an offbeat subject. We used the question “Do you occasionally eat the nasal matter that you have picked?” to identify mischievous responses, with the expectation that students who answered affirmatively to this question would be likely to respond mischievously to other questions as well. 9 subjects (4.5%) admitted to eating their nasal debris; however, these subjects did not differ from the rest of the group on any of the variables studied. This finding suggests that our expectation may have been wrong; that is, the responses of “eaters” may have been valid and not motivationally distorted. We therefore did not exclude these responses from the data set. The interesting conclusion is that, perhaps, a small percentage of nose pickers do, indeed, eat their nasal debris. In this context, it is worth observing that Tarachow [18] reported that persons do eat nasal debris, and find it tasty, too.
…Subjects varied widely in their response to the question that sought their opinion on the percentage of nose pickers in the population; the mean was found to be 46.7%. Subjects’ opinions on the prevalence of nose picking showed no correlation with the frequency with which they themselves indulged in nose picking (r = 0.01, non-statistically-significant)…Interestingly, the frequency of nose-picking behavior (in an individual) did not correlate statistically-significantly with the perception of the commonness of the behavior in the population. This suggests the hypothesis that the frequency of nose picking is intrinsically driven, or at least that it is influenced by factors other than similar behavior in others.
Portia fimbriata is a web-invading araneophagic jumping spider (Salticidae). The use of signal-generating behaviours is characteristic of how P. fimbriata captures its prey, with three basic categories of signal-generating behaviours being prevalent when the prey spider is in an orb web. The predatory behaviour of P. fimbriata has been referred to as “aggressive mimicry”, but no previous studies have provided details concerning the characteristics of P. fimbriata’s signals.
We attempt to determine the model signals for P. fimbriata’s ‘aggressive mimicry’ signals. Using a laser Doppler vibrometer and the orb webs of Zygiella x-notata and Zosis geniculatus, P. fimbriata’s signals are compared with signals from other sources. Each of P. fimbriata’s three categories of behaviour makes a signal that resembles one of three signals from other sources: prey of the web spider (insects) ensnared in the capture zone of the web, prey making faint contact with the periphery of the web, and large-scale disturbance of the web (jarring the spider’s cage).
Experimental evidence from testing P. fimbriata with two sizes of lure made from Zosis (dead, mounted in a lifelike posture in standard-size orb web) clarifies P. fimbriata’s signal-use strategy:
when the resident spider is small, begin by simulating signals from an insect ensnared in the capture zone (attempt to lure in the resident spider);
when the resident spider is large, start by simulating signals from an insect brushing against the periphery of the web (keep the resident spider out in the web, but avoid provoking from it a full-scale predatory attack);
when walking in the resident spider’s web, regardless of the resident spider’s size, step toward the spider while making a signal that simulates a large-scale disturbance of the web (mask footsteps with a self-made vibratory smokescreen).
Recent research on the eyes and vision-guided behaviour of jumping spiders (Salticidae) is reviewed. Special attention is given to Portia Karsch. The species in this African, Asian and Australian genus have especially complex predatory strategies. Portia’s preferred prey are other spiders, which are captured through behavioural sequences based on making aggressive-mimicry web signals, problem solving and planning. Recent research has used Portia to study cognitive attributes more often associated with large predatory mammals such as lions and rarely considered in studies on spiders. In salticids, complex behaviour and high-spatial-acuity vision are tightly interrelated. Salticid eyes are unique and complex. How salticid eyes function is reviewed. Size constraints are discussed.
Portia fimbriata from Queensland, Australia, is an araneophagic jumping spider (Salticidae) that includes in its predatory strategy a tactic (cryptic stalking) enabling it to prey effectively on a wide range of salticids from other genera.
Optical cues used by P. fimbriata to identify the salticid species on which it most commonly preys, Jacksonoides queenslandicus, were investigated experimentally in the laboratory using odorless lures made from dead prey on which various combinations of features were altered. P. fimbriata adopted cryptic stalking only against intact salticid lures and modified lures on which the large anterior-median eyes were visible. Ordinary stalking was usually adopted when the lure did not have the anterior-median eyes visible. There was no evidence that cues from the legs of prey salticids influence the choice of stalking style of P. fimbriata, but cues from the legs do appear to influence strongly whether a prey is stalked at all. Cues from the cephalothorax and abdomen also influenced the stalking tendency, but to a lesser degree than cues from the legs.
An algorithm to describe the perceptual processes of P. fimbriata when visually discriminating between salticid and non-salticid prey is discussed.
Portia fimbriata, an araneophagic jumping spider (Salticidae), makes undirected leaps (erratic leaping with no particular target being evident) in the presence of chemical cues from Jacksonoides queenslandicus, another salticid and a common prey of P. fimbriata. Whether undirected leaping by P. fimbriata functions as hunting by speculation is investigated experimentally.
Our first hypothesis, that undirected leaps provoke movement by J. queenslandicus, was investigated using living P. fimbriata and three types of lures made from dead, dry arthropods (P. fimbriata, J. queenslandicus, and Musca domestica). When a living P. fimbriata made undirected leaps or a spring-driven device made the lures suddenly move up and down, simulating undirected leaping, J. queenslandicus responded by waving its palps and starting to walk. There was no statistical evidence that the species from which the lure was made influenced J. queenslandicus’ response in these tests.
Our second hypothesis, that J. queenslandicus reveals its location to P. fimbriata by moving, was investigated by recording P. fimbriata’s reaction to J. queenslandicus when J. queenslandicus reacted to lures simulating undirected leaping. In these tests, P. fimbriata responded by turning toward J. queenslandicus and waving its palps.
Jumping spiders Portia labiata were tested in the laboratory on three different kinds of detours. In one, both routes led to the lure. In the other variants, one of the routes had a gap, making that route impassable. When tested with only one complete route, Portia chose this route after visually inspecting both routes. An analysis of scanning showed that, at the beginning of the scanning routine, the spiders scanned both the complete and the incomplete route but that, by the end of the scanning routine, they predominantly scanned only the complete route. Two rules seemed to govern their scanning: (1) they would continue turning in one direction when scanning away from the lure along horizontal features of the detour route; and (2) when the end of the horizontal feature being scanned was reached, they would change direction and turn back towards the lure. These rules ‘channeled’ the spiders’ scanning on to the complete route, and they then overwhelmingly chose to head towards the route they had fixated most while scanning.
This chapter illustrates the cognitive abilities of araneophagic jumping spiders. “Portia”, a genus of araneophagic jumping spiders (family Salticidae), appears to have the most versatile and flexible predatory strategy known for an arthropod. A dominant feature of Portia’s predatory strategy is aggressive mimicry, a system in which the predator communicates deceitfully with its prey. Typical salticids do not build webs. Instead, they are hunters that catch their prey in stalk-and-leap sequences guided by vision. Salticids differ from all other spiders by having large anteromedial eyes and acute vision. However, the behavior of Portia is anything but typical for a salticid. Besides hunting its prey cursorily, Portia also builds a prey-catching web. The typical prey of a salticid is insects, but Portia’s preferred prey is other spiders. Portia frequently hunts web-building spiders from other families by invading their webs and deceiving them with aggressive-mimicry signals. While in the other spider’s web, it makes aggressive-mimicry signals by moving legs, palps, abdomen, or some combination of these to make web-borne vibrations. Portia’s typical victim, a web-building spider but not a salticid, typically lacks acute vision and instead perceives the world it lives in by interpreting tension and vibration patterns in its web.
Table of Contents: Introduction · Spiders that eat other spiders · Predator-prey interactions between Portia fimbriata and Euryattus sp. · Detecting Portia’s footsteps · Smokescreen tactics · Flexibly adjusting signals to prey behavior · Making detours and planning ahead · Cognitive levels · Levels of deception · Design options for animal brains
The Sailor’s Hat crater was artificially formed on the south coast of Kaho’olawe Island in 1965 with explosives. The explosion formed a crater about 50 m from the shoreline, which penetrates the watertable to a 5 m depth.
The pool at the bottom of the crater meets the criteria of an anchialine pond because it shows tidal fluctuation, has measurable salinity, and lacks surface connections to the sea. The water chemistry of this pool is similar to the ocean, except that silica is elevated and salinity slightly depressed, suggesting a small groundwater influence. The fauna is dominated by water boatmen, an endemic shrimp and tubeworm, polychaetes, amphipods, an ostracod, gastropod, solitary ectoproct, anemone, flatworm, and sponge. The atyid shrimp, Halocaridina rubra, is a characteristic species of Hawaiian anchialine systems and probably colonized this 32-year-old pool by active migration via the watertable. Colonization by the remaining fauna may have occurred by storm surf (for marine species) or with the wind. Most predators are unable to inhabit anchialine ponds because of difficult access due to physical barriers, or to unsuitable ecological conditions. The anchialine habitat and life history strategy of the atyid shrimp have probably been important influences on the adaptive success of H. rubra in the Hawaiian Islands, and may be important characteristics of hypogeal anchialine species elsewhere.
…The life history and behavior of Halocaridina rubra suggest that it is a fugitive species that cannot tolerate the high level of predation present in most Hawaiian aquatic systems. Thus H. rubra colonizes and is successful in marginal habitats that most predators are unable to colonize, either because of physical barriers or because the ecological conditions are inappropriate. Sailor’s Hat pool probably represents such a marginal habitat and may be the only site on Kaho’olawe where this shrimp species has sufficient food resources and protection from predators to sustain a viable population level. Given the ecological characteristics of this pool, and barring further disturbance from humans or predators, it may be colonized by and serve as a habitat for other rare Hawaiian anchialine species in the future.
In a laboratory study, 12 different experimental set-ups were used to examine the ability of Portia fimbriata, a web-invading araneophagic jumping spider from Queensland, Australia, to choose between two detour paths, only one of which led to a lure (a dead, dried spider). Regardless of set-up, the spider could see the lure when on the starting platform of the apparatus, but not after leaving the starting platform. The spider consistently chose the ‘correct route’ (the route that led to the lure) more often than the ‘wrong route’ (the route that did not lead to the lure). In these tests, the spider was able to make detours that required walking about 180° away from the lure and walking past where the incorrect route began. There was also a pronounced relationship between time of day when tests were carried out and the spider’s tendency to choose a route. Furthermore, those spiders that chose the wrong route abandoned the detour more frequently than those that chose the correct route, despite both groups being unable to see the lure when the decision was made to abandon the detour.
Swallowing of whole leaves by chimpanzees and other African apes has been hypothesized to have an antiparasitic or medicinal function, but detailed studies demonstrating this were lacking. We correlate for the first time quantifiable measures of the health of chimpanzees with observations of leaf-swallowing in Mahale Mountains National Park, Tanzania. We obtained a total of 27 cases involving the use of Aspilia mossambicensis (63%), Lippia plicata (7%), Hibiscus sp. (15%), Trema orientalis (4%), and Aneilema aequinoctiale (11%), 15 cases by direct observation of 12 individuals of the Mahale M group. At the time of use, we noted behavioral symptoms of illness in the 8 closely observed cases, and detected single or multiple parasitic infections (Strongyloides fulleborni, Trichuris trichiura, Oesophagostomum stephanostomum) in 10 of the 12 individuals. There is a statistically-significant relationship between the presence of whole leaves (range: 1–51) and adult worms of O. stephanostomum (range: 2–21) in the dung. HPLC analysis of leaf samples collected after use showed that thiarubrine A, a compound proposed to act as a potent nematocide when leaves of Aspilia spp. are swallowed, was not present in the leaves of A. mossambicensis or the three other species analyzed. Alternative nematocidal or egg-laying-inhibition activity was not evident. Worms of O. stephanostomum were recovered live and motile from chimpanzee dung, trapped within the folded leaves and attached to leaf surfaces by trichomes, though some were moving freely within the fecal matter, suggesting that the physical properties of leaves may contribute to the expulsion of parasites. We review previous hypotheses concerning leaf-swallowing and propose an alternative hypothesis based on physical action.
The stalking behaviour of four species of jumping spiders, Portia fimbriata, P. labiata, P. schultzi and P. africana, was examined to determine whether Portia opportunistically exploits situations in which the prey spider is distracted by environmental disturbances.
Disturbances were created mainly by wind blowing on webs and a magnet shaking webs. All four Portia species moved statistically-significantly further during disturbance than during non-disturbance, a behaviour labeled ‘opportunistic smokescreen behaviour’. Portia can discriminate between spiders and other prey such as live insects, wrapped-up insects in the web, and egg sacs, because Portia used opportunistic smokescreen behaviour only against spiders and not against these other types of prey. If the location of disturbances and the location of prey differ, Portia can accurately discriminate between them. Portia’s smokescreen behaviour apparently is a true predatory tactic because Portia attacked prey more often during disturbances than at other times.
Smokescreen behaviour appears to work in part because the disturbances that Portia uses for smokescreens interfere with the prey’s ability to sense Portia’s stalking movements.
Salticids, the largest family of spiders, have unique eyes, acute vision, and elaborate vision-mediated predatory behavior, which is more pronounced than in any other spider group. Diverse predatory strategies have evolved, including araneophagy, aggressive mimicry, myrmicophagy, and prey-specific prey-catching behavior. Salticids are also distinctive for development of behavioral flexibility, including conditional predatory strategies, the use of trial-and-error to solve predatory problems, and the undertaking of detours to reach prey. Predatory behavior of araneophagic salticids has undergone local adaptation to local prey, and there is evidence of predator-prey coevolution. Trade-offs between mating and predatory strategies appear to be important in ant-mimicking and araneophagic species.
15 boys and 15 girls were asked to record for 2 days the time spent awake, eating meals or snacks, and sleeping. The salivary flow rates elicited by chewing foods were also determined.
The mean flow rate (± SD) of unstimulated saliva was 0.26 ± 0.16 ml/min and that of saliva while chewing six different foods was 3.6 ± 0.8 ml/min. The mean times spent eating, and awake but not eating, were 80.8 ± 27.3 and 820 ± 59 min, respectively, and the volumes of saliva produced during those periods would average about 288 and 208 ml, respectively.
If the flow rate is virtually zero during sleep, the estimated total salivary volume produced per day is calculated to be about 500 ml.
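The ~500 ml/day figure follows from simple arithmetic on the reported group means. A short sanity-check script (all figures taken from the abstract; the abstract’s slightly lower per-period volumes of ~288 and ~208 ml presumably reflect averaging over individual subjects rather than multiplying group means, so a small discrepancy is expected):

```python
# Sanity-check of the daily saliva estimate, using the group means above.
stimulated_flow = 3.6       # ml/min, mean flow while chewing six foods
unstimulated_flow = 0.26    # ml/min, mean unstimulated flow

eating_min = 80.8           # mean min/day spent eating
awake_not_eating_min = 820  # mean min/day awake but not eating

eating_volume = stimulated_flow * eating_min               # ~291 ml (abstract: ~288)
resting_volume = unstimulated_flow * awake_not_eating_min  # ~213 ml (abstract: ~208)
total = eating_volume + resting_volume                     # sleep contributes ~0 ml

print(round(eating_volume), round(resting_volume), round(total))  # → 291 213 504
```

Either way of averaging lands near the abstract’s round figure of ~500 ml of saliva per day.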
Portia is a web-invading araneophagic spider that uses aggressive mimicry to deceive its prey. The present paper is a first step toward clarifying experimentally the cues that govern Portia’s decisions of whether to enter a web, whether to make signals once in a web, and whether to persist at signalling once started.
The following conclusions are supported: cues from seeing a web elicit web entry, but volatile chemical cues from webs of prey spiders are not important; seeing a spider in a web increases Portia’s inclination to enter the web; after web entry, cues from webs of prey spiders are sufficient to elicit signalling behaviour, even in the absence of other cues coming directly from the prey spider; seeing a prey spider or detecting vibrations on the web make Portia more prone to signal, but volatile chemical cues from prey spiders are not important; once Portia is on a web and signalling, seeing a moving spider and detecting vibrations on the web encourage Portia to persist in signalling; on the basis of visual cues alone, Portia can distinguish between quiescent spiders, insects and eggsacs.
The microbial flora on the surfaces of 15 books obtained from a public library and 15 books obtained from a family household was studied. Staphylococcus epidermidis was recovered from 4 of the library books and 3 of the family household books. The number of organisms per page was between one and four. These data illustrate the safety of using library books, as they do not serve as a potential source of transmission of virulent bacteria.
The terms “reversed-route detours” and “forward-route detours” are introduced to distinguish between detours that require moving away from a goal and those that do not. We provide the first evidence under controlled laboratory conditions that salticids can perform reversed-route detours.
Two species were tested: 1. Portia fimbriata, a web-invading salticid from Queensland, Australia, that normally preys on web-building spiders; 2. Trite planiceps, an insectivorous cursorial salticid from New Zealand.
Although both of these species completed reversed-route detours, Trite planiceps was much more dependent on prey movement than Portia fimbriata. Interspecific differences appear to be related to the different predatory styles of these two salticids.
Portia is a jumping spider that invades other spiders’ webs, makes vibratory signals that deceive the resident spider (aggressive mimicry), then attacks and eats the spider. Portia exploits a wide range of prey-spider species.
Evidence is provided from observation and experimentation that Portia uses a trial-and-error method as part of its strategy for deriving appropriate signals for different prey. To use this method, Portia first broadcasts an array of different signals, then narrows to particular signals as a consequence of feedback from the prey spider. Feedback can be web vibration or seeing spiders move, or both.
This appears to be an example of deception involving at least a limited form of learning, an uncommon phenomenon in invertebrates.
The influence of prey movement on the performance of simple detours by salticids was investigated. Seven species were studied. Two subject species, Portia fimbriata and Portia labiata, are specialized web-invading species that eat other spiders. The other five species investigated (Euryattus sp., Euophrys parvula, Marpissa marina, Trite auricoma and Trite planiceps) are more typical cursorial hunters of insects. We provide evidence that:
salticids will initiate detours toward motionless prey;
salticids are more inclined to initiate detours toward moving than toward motionless prey;
salticids tend to complete detours even when prey that had been moving at the start remains stationary during the detour;
prey movement makes the salticid more likely to stalk and attack when prey is only a few centimetres away and in a position from which it can be reached by a straight-line pursuit;
Portia is more inclined than the other salticids to initiate detours toward motionless prey, and then to stalk and attack motionless prey when close.
Mechanisms that might account for Portia being different from the other salticids are discussed.
Portia is a behaviourally complex and aberrant salticid genus. The genus is of unusual importance because it is morphologically primitive. Five species were studied in nature (Australia, Kenya, Malaysia, Sri Lanka) and in the laboratory in an effort to clarify the origins of the salticids and of their unique, complex eyes. All the species of Portia studied were both web builders and cursorial.
Portia was also an araneophagic web invader, and it was a highly effective predator on diverse types of alien webs. Portia was an aggressive mimic, using a complex repertoire of vibratory behaviour to deceive the host spiders on which it fed. The venom of Portia was unusually potent to other spiders; its easily autotomised legs may have helped Portia escape if attacked by its frequently dangerous prey. Portia was also kleptoparasitic and oophagic when occupying alien webs. P. fimbriata from Queensland, where cursorial salticids were superabundant, used a unique manner of stalking and capturing other salticids.
The display repertoires used during intraspecific interactions were complex and varied between species. Both visual (typical of other salticids) and vibratory (typical of other web spiders) displays were used. Portia copulated both on and away from webs and frequently with the female hanging from a dragline. Males cohabited with subadult females on webs, mating after the female matured. Adult and subadult females sometimes used specialised predatory attacks against courting or mating males. Sperm induction in Portia was similar to that in other cursorial spiders.
Portia mimicked detritus in shape and colour, and its slow, mechanical locomotion preserved concealment. Portia occasionally used a special defensive behaviour (wild leaping) if disturbed by a potential predator. Two types of webs were spun by all species (Type 1, small resting platforms; Type 2, large prey-capture webs). Two types of egg sacs were made, both of which were highly aberrant for a salticid. Responses of different species and both sexes of Portia were quantitatively compared for different types of prey.
Many of the trends in behaviour within the genus, including quantitative differences in predatory behaviour, seemed to be related to differences in the effectiveness of the cryptic morphology of Portia in concealing the spider in its natural habitat (‘effective crypsis’).
The results of the study supported, in general, Jackson & Blest’s (1982a) hypothesis of salticid evolution which, in part, proposes that salticid ancestors were web builders with poorly developed vision and that acute vision evolved in conjunction with the ancestral spiders becoming proficient as araneophagic invaders of diverse types of webs.
The anecdotal and historical literature describing intoxication in elephants from fermented fruit or alcoholic beverages is reviewed. Seven African elephants readily self-administered 7% unflavored alcohol solutions; the results included separation from herd groupings and changes in the frequency and/or duration of several behaviors as scored according to a quantitative observational system. Alcohol decreased feeding, drinking, bathing, and exploration for most animals. Inappropriate behaviors such as lethargy and ataxia increased for all elephants. Results are discussed in terms of stress-induced drinking and intoxication.
…The first elephant was brought to America in 1796 and was billed as “the most respectable animal in the world” even though it drank 30 bottles of port a day, drawing the corks with its trunk (Winfrey, 1980, p. 64). And elephant trainers and handlers regularly employ beer and other beverage alcohol as positive reinforcers for their animals (eg. Lewis & Fish, 1978).
This apparent preference for alcohol has produced dramatic consequences. For example, in 1974 a herd of 150 elephants broke into an illegal still and drank copious quantities of “moonshine” liquor. Intoxicated, they rampaged across West Bengal, killing 5 people, injuring 12, demolishing 7 concrete buildings, and trampling 20 village huts (San Francisco Chronicle, 1974). In Africa, elephants have been known to cause widespread destruction of property in their search for and intoxication from beverage alcohol.
…It was found that 7% solutions were the highest concentrations readily and totally self-administered when water was also available ad lib. Interestingly, the 7% concentration is equivalent to the alcohol concentration found in the fermented grain eaten by elephants in Africa. When ethanol was flavored with fruit extracts, the elephants self-administered 10% concentrations, but no higher.
Musth is a condition observed in male Asiatic elephants and is characterized by aggression and temporal gland secretions. A classic and controversial 1962 study attempted to induce a musth syndrome in an elephant via treatment with LSD. Two elephants in the present study survived dosages of LSD (0.003–0.10 mg/kg) and exhibited changes in the frequency and/or duration of several behaviors as scored according to a quantitative observational system. LSD increased aggression and inappropriate behaviors such as ataxia. Results are discussed in terms of musth and drug-induced perceptual-motor dysfunction.
…Treatment with the low dosage of LSD produced dramatic changes in behavior within 10–20 min. The female showed a small increase in rock/sway time and slightly increased ear flapping and exploration. Perhaps the most interesting change was the increased inappropriate behavior marked by leaning with closed eyes and slightly ataxic gait. Vocalizations decreased but changed to short squeaks or chirping, which may indicate pleasure or conflict. The male showed similar, albeit more intense, behaviors, as well as head shaking and several aggressive displays.
The high dosage of LSD produced an initial aggressive display by the female, marked by trumpets and snorts, vocalizations that indicate extreme arousal. This was followed by increasing ataxia, with spread forelegs and hindlegs, and eventually by the animal’s falling onto its side. It remained down for approximately 60 min and exhibited shallow respirations and some tremors, but when nudged by handlers, arose slowly and eventually regained an upright posture. Activity remained quiescent for the remainder of the session. The high dosage also produced an aggressive display in the male elephant, which repeatedly trumpeted and snorted while charging the observer. This was quickly followed by leaning with closed eyes and ataxia. Periodically, this inappropriate behavior was interrupted by aggressive displays or dustbathing. During all LSD sessions, both elephants refused feeding and most drinking. However, during the high-dosage session, the male bathed with the hay but did not eat it.
…Within 24 h following LSD treatments, both elephants returned to normal baseline behaviors, including feeding and drinking. Examination of the temporal glands revealed no evidence of discharging…The female displayed some aggression during the high-dose session, but the accompanying vocalizations suggest that this was more alarm and panic to the sudden onset of perceptual-motor symptoms than it was a threat.
Male Julodimorpha bakewelli White were observed attempting to copulate with beer bottles. Colour and reflection of tubercles on the bottle glass are suggested as causes for attraction and release of sexual behaviour.
The possible existence of indigenous Jovian organisms is investigated by characterizing the relevant physical environment of Jupiter, discussing the chromophores responsible for the observed coloration of the planet, and analyzing some permissible ecological niches of hypothetical organisms. Values of the eddy diffusion coefficient are estimated separately for the convective troposphere and the more stable mesosphere, and equilibrium condensation is studied for compounds containing Na, Cl, or both. The photoproduction of chromophores and nonequilibrium organic molecules is analyzed, and the motion of hypothetical organisms is examined along with the diffusion of metabolites and the consequent growth of organisms. Four kinds of organisms are considered: primary photosynthetic autotrophs (‘sinkers’), larger autotrophs or heterotrophs that actively maintain their pressure level (‘floaters’), organisms that seek out others (‘hunters’), and organisms that live at almost pyrolytic depths (‘scavengers’). It is concluded that ecological niches for sinkers, floaters, and hunters appear to exist in the Jovian atmosphere.
…The eddy diffusion coefficient is estimated as a function of altitude, separately for the Jovian troposphere and mesosphere. The growth-rate and motion of particles are estimated for various substances: the water clouds are probably nucleated by NH4Cl, and sodium compounds are likely to be absent at and above the levels of the water clouds. Complex organic molecules produced by the Lα photolysis of methane may possibly be the absorbers in the lower mesosphere which account for the low reflectivity of Jupiter in the near-ultraviolet. The optical frequency chromophores are localized at or just below the Jovian tropopause. Candidate chromophore molecules must satisfy the condition that they are produced sufficiently rapidly that convective pyrolysis maintains the observed chromophore optical depth. Organic molecules and polymeric sulfur produced through H2S photolysis at λ>2300 Å probably fail this test, even if a slow, deep circulation pattern, driven by latent heat, is present. The condition may be satisfied if complex organic chromophores are produced with high quantum yield by NH3 photolysis at λ<2300 Å. However, Jovian photoautotrophs in the upper troposphere satisfy this condition well, even with fast circulation, if only the biochemical properties of comparable terrestrial organisms are assumed. Unless buoyancy can be achieved, a hypothetical organism drifts downward and is pyrolyzed. An organism in the form of a thin, gas-filled balloon can grow fast enough to replicate if (i) it can survive at the low mesospheric temperatures, or if (ii) photosynthesis occurs in the troposphere. If hypothetical organisms are capable of slow, powered locomotion and coalescence, they can grow large enough to achieve buoyancy. Ecological niches for sinkers, floaters, and hunters appear to exist in the Jovian atmosphere.
Reports an unusual case of hydranencephaly. The child survived for 19 years and showed evidence on 3 occasions of an increase in eyeblink rate with tactile reinforcement. Diagnosis was confirmed by an autopsy which revealed no preserved cortex in either hemisphere. [The subject died after the third test.]
A method is described which permits the growth of chicken embryos in petri dishes from the third to the 20th day of incubation. The procedure is relatively simple and has the advantage of providing ready access to the embryo and its membranes for tissue grafting, for introduction of teratogenic agents, and for microscopic observation of morphogenesis and growth…we found it essential to develop methods that would permit rapid and ready observation of large numbers of eggs under conditions facilitating examination with transmitted light, permitting time-lapse photography, and encouraging routine access to the grafted tissue. The procedures we describe in this report have now been used by us for growing several thousand eggs during the past several months…
It is well-known that Homo sapiens voluntarily learns to self-administer psychoactive drugs without additional reinforcement. The primary use of these self-administrations is social (Blum et al. 1969), while the motivation to repeatedly self-administer is considered a major factor in human drug abuse (cf. Weeks, 1971). Among the many drugs used in this way by man are the hallucinogens. Indeed, it is a traditional, albeit tacit, assumption of psychopharmacological thinking that Homo sapiens is the only species that will self-administer hallucinogens without additional rewards.
…Conclusions: The ethologic search has found that Homo sapiens is not alone in the self-administration of hallucinogens. Either by accident or design, numerous infrahuman species also self-administer these drugs. Table 2 shows some ethologic examples of the self-administration of various types of drugs as described in this paper. The drug types and their naturally occurring substances are listed together with the animals which self-administer them, pattern of self-administration as discussed in this paper, animals which self-administer the same or similar substances in the laboratory, and the human use of these substances. Of the 14 drugs listed in Table 2, four are hallucinogens and four others are known to have hallucinogenic effects in man. Many of the examples cited here need further controlled psychopharmacological study in order to identify the biological, environmental, and pharmacological variables which reinforce and maintain self-administration. Nonetheless, it is clear that the consequences of such administrations dramatically affect the social behavior of these animals. Whether “sick”, “ill”, “intoxicated”, “poisoned”, “hypersensitive”, “genetically guided”, “narcotized”, or “addicted”, hallucinogen-treated animals tend to isolate themselves from social groups.
[Considers learning and memory within an adaptive-evolutionary framework, using an analysis of the role of learning in thiamine specific hunger. The demands of the environment on the rat, the contingencies in the natural environment, the importance of the novelty-familiarity dimension, and the realization of 2 new principles of learning, permit a learning explanation of most specific hungers. The 2 new principles, “belongingness” and “long-delay learning”, specifically meet the peculiar demands of learning in the feeding system. An attempt is made to develop the laws of taste-aversion learning. It is argued that the laws or mechanism of learning are adapted to deal with particular types of problems and can be fully understood only in a naturalistic context. The “laws” of learning in the feeding system need not be the same as those in other systems. Speculations are presented concerning the evolution and development of learning abilities and cognitive function. It is concluded that full understanding of learning and memory involves explanation of their diversity and the extraction of common general principles.]
When large populations of mice were treated with LSD (2–30 mcg/kg), bufotenine (5–30 mg/kg), a Cannabis sativa extract (50–100 mg/kg), or tetrahydrocannabinol (2–10 mg/kg), there was a dramatic change in social behavior. Such treatment produced statistically-significant reductions in aggression and group aggregation, and temporary disruptions of social hierarchies. Hallucinogen-treated mice placed in normal untreated colonies were hypersensitive to auditory and tactile stimulation and aggregated in small groups apart from the rest of the population. Treatment with saline or BOL-148 produced no statistically-significant changes in behavior.
…When strangers were introduced into the drugged colonies, they were relatively ignored by the inhabitants. This was true whether the strangers were introduced in a drugged or undrugged state. If the strangers were undrugged, however, they moved about the colony investigating mice and inducing squealing and flight behavior in the inhabitants. And, if the strangers were dominant mice to begin with, they would often establish dominance over the entire colony, exploiting the food supplies and territories of the inhabitants.
While the surface conditions of Venus make the hypothesis of life there implausible, the clouds of Venus are a different story altogether. As was pointed out some years ago1, water, carbon dioxide and sunlight—the prerequisites for photosynthesis—are plentiful in the vicinity of the clouds. Since then, good additional evidence has been provided that the clouds are composed of ice crystals at their tops2,3, and it seems likely that there are water droplets toward their bottoms4. Independent evidence for water vapour also exists5. The temperature at the cloud tops is about 210° K, and at the cloud bottoms is probably at least 260–280° K (refs. 4 and 6). Atmospheric pressure at this temperature level is about 1 atmosphere (ref. 7). The observed planetary albedo falls steeply in the violet and ultra-violet8, which accounts for the pale lemon yellow colour of Venus. The albedo decline would not be expected for pure ice particles, and must therefore be caused by some contaminant. Dust, ozone, C3O2 and other gases may possibly explain these data but, whatever the explanation, the ultra-violet flux below the clouds is likely to be low. If small amounts of minerals are stirred up to the clouds from the surface, it is by no means difficult to imagine an indigenous biology in the clouds of Venus. What follows is one such speculation.
The interplay between socioecological and biological processes manifests itself at the level of individuals, populations, and species. The biology of individuals is deeply modified when they are in groups; many of the attributes of populations, such as size, distribution, composition, etc., are related to social interactions; and at the level of species, patterns of social relations within groups tend to be structured in ways that influence survival, reproduction, and exchange among populations.
In one experimental approach to these problems, the social ecology of freely growing populations of mice in large enclosures was related to behavioral, physiological, and health changes of individuals, to demographic changes, and to changes of gene frequencies. Another experiment examined the process and effects of artificial selection for the same trait in different social environments.
The population enclosures were octagonal structures subdivided into central and peripheral sections with a total surface area of 13.3 square feet. From a founder group of mice of known genetic (progeny of a four-way cross among inbred mouse strains C57L/J, SWR/J, C3HeB/FeJ, 129/J) and environmental background, three equivalent samples of mice were distributed into replicate population enclosures (Pop A and B) and into standard laboratory cages as randomly mated male-female pairs—the control group (Pop C).
During the first year of study, daily observations of the enclosures were made, and several censuses were performed. Identifiable cohorts, animals born during each census interval, were established to provide an additional way of analyzing changes in the populations.
In Pop C, reproduction remained constant and mortality was negligible. Marked changes occurred in Pop A and B. The sizes (1000 mice in Pop A, 800 in Pop B) and densities (85 and 60 mice per square foot, respectively) are several times greater than those of any previously reported population of small mammals. However, there would have been 100,000 mice in each enclosure at the end of a year had the populations continued to grow as they did at first. Changes of reproductive physiology constituted prominent aspects of self-regulation in the enclosures. Peak demographic input rates occurred during the third month, but were already associated with decreased productivity per adult female. Analysis of maturation and reproduction pointed to inhibition of reproduction in sexually mature females as the most important factor in the decline of productivity. Pregnancy rates fell steadily and inhibition of full-term gestation occurred. Gonads and reproductive cells of males were adult, but a large proportion of males showed little sexual activity.
Neonatal mortality was particularly striking in Pop B, where 30% of females showed advanced pregnancy during the last 5.5 months with no newborns surviving. About 25% of the mice in the enclosures died during the year. Highest weekly death rates occurred during the first half of the year before peak numbers were present. Autopsies of mice of Pop A revealed little in the way of abnormal findings.
Biomass either paralleled or increased more rapidly than numbers in both enclosures, contrasting with some other population studies in which growth was impaired by crowding.
Changes of behavior included: 1. disappearance of circadian activity peaks, 2. decline in frequency of fighting per male but an increase in unusual aggressiveness, 3. aberrations of sexual behavior, 4. deterioration of maternal care, 5. cannibalism, 6. striking decrease in social responsiveness.
Cohorts in the populations were biologically distinguishable sub-units in contrast to control cohorts, which showed no such differentiation. Cohorts in Pop A and B differed with respect to reproduction physiology, mortality, and behavior, and intercohort differences persisted at all levels of population density.
Many of the properties of Pop A and B mice changed when the mice were placed in different social environments, attesting to the specificity of the influence of social factors. For example, mice of Pop A, randomly paired in control cages, showed a marked rise in reproduction, and cohorts that had previously been reproductively inhibited were the most productive in the new social environment. Behavioral tests performed outside the enclosure environment revealed: 1. intercohort differences among Pop A mice contrasted with stereotyped behavior of Pop C mice, and 2. changes in behavior of Pop A mice both immediately after removal from the population and after six weeks in new social conditions. Pop B mice changed their social environment by emigrating into the empty interconnected enclosure of Pop A. Two distinctive sub-populations formed. Greater changes in reproduction, mortality, and behavior occurred in the emigrant subpopulation, which underwent more extensive social reorganization. Immediately following reunion of the two subpopulations, a population crash occurred, possibly related to the sudden changes of social conditions.
Use of genetically defined animals made feasible the study of gene frequency changes. Polymorphism of alleles at the C locus affecting coat color differed between Pop A and B on the one hand and Pop C on the other. Although the magnitude of the upward change of recessive c in Pop A and B was not large, the consistency and similarity of the change in Pop A and B and lack of change in Pop C suggested the action of systematic processes and the probable adaptiveness of the changes. There was little evidence of differential adult reproduction or mortality among the phenotypes but there were suggestions of differential neonatal survival. The relatively slow rate of change of the alleles after the first generation suggested the establishment of a state of balanced polymorphism at the C locus. Hemoglobin allele and genotype frequencies of mice of Pop A alive at the end of the year did not deviate from what might have been predicted on the basis of panmixia.
Selection for the same trait in varied environments tends to involve genetic and physiological differences. The question of adaptability to different social environments was studied; heavy body weight at sexual maturity was chosen as the trait for selection; groups of different sizes—pairs or groups of 20–30 mice—were the environmental variables. Sexes were kept separate between weaning and sexual maturity. A within-litter selection method was used.
Crowding depressed weight at sexual maturity but equal improvement with selection occurred in both social environments. Heritability was also equal in crowded and uncrowded groups. Environmental exchange carried out in the sixth and seventh generation suggested that mice selected in crowded environments performed slightly better in both crowded and uncrowded environments.
The large sizes and unusual degree of crowding attained by the freely growing populations in this study compared with previous studies may be related to the types of animals used, to the number of individuals in the founder nuclei, and to the physical structure of the enclosures. Extreme crowding was compatible with general physical health. The decline of fertility and fecundity, the decreased survival of newborns, and the appearance of behavioral aberrations—rather than disease or an increase in adult mortality—represented the major self-regulatory mechanisms that eventually limited population growth. The growth of individuals was not inhibited. Social withdrawal and the decline of social interaction rather than a rise of interaction characterized the populations. Such findings cast doubt on the generality of the so-called “Stress” theory of social ecology that emphasizes increased interaction and pituitary-adrenal hyperactivity as the principal mechanisms involved in self-regulation of vertebrate populations.
Other formulations of mammalian social ecology, such as those that focus on the importance of early development, of spatial requirements, of neurophysiological reactivity, and of communications, constitute additional explanations of the interplay of social and biological processes in crowded populations.
Although man’s potential reactions are more complex and variable than those of lower vertebrates and give prominence to the role of symbols and culture, his social environment is even more fundamental to his entire existence. This, if anything, increases the importance of the interplay of socioecological and biological processes for man.
An emergency ration of pemmican provides 1000 calories a day; adaptation to resulting starvation persists for at least a week and reduces physiological disturbances during a second starvation period.
…These studies suggest that the acceptance of a survival ration, which of necessity must be an unusual diet, can be enhanced by prior consumption of the ration. The work reemphasizes the fact that metabolic and physiological adaptations to semistarvation can occur. Whether this adaptation is also associated with psychological adjustments which permit the individual to withstand the rigors of a reduced food intake is not apparent from these reports. Additional work in the latter area would certainly be desirable and might offer suggestions as to how an obese individual might best adapt himself to the rigors of a reducing diet.
2 of the 3 different 1000-calorie combinations of pemmican and sugar were fed to each of 12 subjects during a two-phase, winter field study. All of the diets tested consisted primarily of pemmican, with the sugar contribution ranging from 0 to not more than 32% of the calories. The 5-day experimental phases were separated by a 7-day “recovery period.”
In both periods, on all diets, performance was considered adequate for survival situations involving moderate activity, thus confirming a previous report. The isocaloric substitution of pemmican with 40 grams of sugar raised the fasting blood sugar levels, decreased the nitrogen balance, and, in some cases, reduced ketonuria. However, a further increase in the proportion of sugar in the ration to 80 grams had no additional effect.
In the second period, the magnitude of all the above responses was strikingly reduced. In most cases, the degree of reduction did not appear to be related to differences in the composition of the Period I diets. The fasting blood sugars during the second period, however, did bear an inverse and highly statistically-significant relationship to the levels of carbohydrate intake during the first period. Thus, the data suggest that the adaptation to caloric restriction which developed during the first period, as evidenced by sequential changes in blood sugar levels, nitrogen balance and ketone body excretion, persisted throughout the recovery period, permitting the subjects to respond more favorably to the second dietary stress.
Pemmican, a dehydrated, high-fat, high-protein, carbohydrate-free meat preparation, was fed, with and without an isocaloric supplement of sugar, to 10 human subjects undergoing simulated survival in a severely cold environment for 9 days.
No ill effects were noted that could not be attributed to caloric restriction, and the performance of the subjects was considered adequate for survival situations involving moderate activity. An isocaloric supplement of 40 grams of sugar increased the fasting blood-sugar levels, decreased the nitrogen balance, and decreased the excretion of ketones. During the 3 days following initiation of the dietary regimen, fasting blood-sugar levels and daily nitrogen balances fell precipitately, while ketone excretion rose. After this, however, the blood sugar levels rose somewhat and leveled off, the nitrogen balance increased appreciably, and excretion of ketones fell gradually to quite low levels irrespective of the low caloric supplement of sugar.
These results have been interpreted to mean that the subjects were becoming adapted to the combination of pemmican and restricted caloric intake.
3 studies designed to determine some of the psychological and sociological factors affecting the acceptability of pemmican in a simulated survival situation were described.
In the first, it was found that acceptability was affected by prior exposure to unfavorable opinions, unfavorable personal expectations, perception of crew attitudes, hunger and fatigue at the time of initial use, nibbling only small quantities at a time, and food aversions exhibited presently or during childhood.
The second study confirmed most of these and in addition indicated that absence of a prior use of the ration might be a factor.
In the third study, it was found that distinctive patterns of early life experiences differentiate the aversion group from the acceptability group. The acceptability group has had experiences indicative of higher motivation for achievement, more leadership, greater adaptability, a more aggressive adjustment to life in general, and more effective social adjustment.
It is in principle possible to diagnose pregnancy by means of germination tests on cereal grains. The tests conducted always gave unambiguous results. Owing to the long duration of the reaction, however, the method is naturally of no use in practice.
The urine of non-pregnant women exerts a strong inhibition on the germination of wheat and barley, or even prevents it entirely (dialysis).
A weak inhibition of germination also occurs at first when watering with the urine of pregnant women, but after germination it produces a strong development of vegetative growth, which cannot be explained by the supply of nutrient salts, especially since the urine of non-pregnant women, which should be equivalent in that respect, exerts a strongly toxic (scorching) effect.
No sex diagnosis was attempted, but experiments on this are in preparation.
The fact that follicular hormone is able to act on plant growth prompted experiments based on old folk-medicine texts handed down from antiquity, according to which the sex of the expected child can be inferred from the effect of a pregnant woman’s urine on germinable barley and wheat grains.
The rule emerged that faster growth of the barley compared with the wheat indicates a girl, while unaccelerated or retarded growth of the barley indicates a boy. In this way, in tests with the urine of 100 pregnant women, correct diagnoses could be made in 80% of cases; 20% were wrong.
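[A quick calculation, not from the paper itself: assuming each sex is equally likely a priori, the reported 80 correct diagnoses out of 100 can be compared with the ~50% expected from guessing via a one-sided binomial tail probability:]

```python
from math import comb

# Sketch of a significance check (an editorial addition, not in the
# original paper): probability of getting >= 80 of 100 sex diagnoses
# right by chance alone, under Binomial(n=100, p=0.5).
n, k, p = 100, 80, 0.5
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(>=80/100 correct by chance) = {p_value:.2e}")
```

[The tail probability is on the order of 10⁻¹⁰, so if the 80/100 figure were taken at face value it would be far beyond chance; whether the underlying trials were blinded is a separate question the abstract does not address.]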
[Popular science discussion of scaling laws by biologist: why does a short fall not faze a mouse or insect but injures a man and makes a horse go splash, and why are giants impossible? Because the strength of bodily parts increases less than total volume or weight, and they become weaker and more fragile the bigger they are. Other examples include surface tension, blood pumping, oxygen respiration, flying, warm-bloodedness vs volume, eye acuity, brain size—and perhaps human organizations like governments and businesses?]
Let us take the most obvious of possible cases, and consider a giant man sixty feet high—about the height of Giant Pope and Giant Pagan in the illustrated Pilgrim’s Progress of my childhood. These monsters were not only ten times as high as Christian, but ten times as wide and ten times as thick, so that their total weight was a thousand times his, or about eighty to ninety tons. Unfortunately the cross-sections of their bones were only a hundred times those of Christian, so that every square inch of giant bone had to support ten times the weight borne by a square inch of human bone. As the human thigh-bone breaks under about ten times the human weight, Pope and Pagan would have broken their thighs every time they took a step. This was doubtless why they were sitting down in the picture I remember. But it lessens one’s respect for Christian and Jack the Giant Killer.
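[The square-cube arithmetic in the passage can be checked directly: scaling every linear dimension by 10× multiplies weight (volume) by 10³ but bone cross-section (area) by only 10², so each square inch of giant bone carries 10× the load:]

```python
# Square-cube law check for Haldane's sixty-foot giant:
# a 10x linear scale-up gives 1000x the weight but only
# 100x the bone cross-section, hence 10x the stress per
# unit of bone area.
scale = 10                      # giant is 10x as tall, wide, and thick
weight_factor = scale ** 3      # volume (and weight) scales as length cubed
bone_area_factor = scale ** 2   # cross-sectional area scales as length squared
stress_factor = weight_factor / bone_area_factor
print(weight_factor, bone_area_factor, stress_factor)  # 1000 100 10.0
```

[Since the human femur fails at roughly 10× body weight, the 10-fold stress increase puts every step at the breaking point, which is Haldane's joke about the seated giants.]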