Do large language models understand us?

Disclaimer

These are my own views, not necessarily those of my employer.

Summary

Large language models (LLMs) represent a major advance in artificial intelligence (AI), and in particular toward the goal of human-like artificial general intelligence (AGI). It’s sometimes claimed, though, that machine learning is “just statistics”, hence that progress in AI is illusory with regard to this grander ambition. Here I take the contrary view that LLMs have a great deal to teach us about the nature of language, understanding, intelligence, sociality, and personhood. Specifically: statistics amount to understanding, in any falsifiable sense. Furthermore, much of what we consider intelligence is inherently dialogic, hence social; it requires a theory of mind. Since the interior state of another being can only be understood through interaction, no objective answer is possible to the question of when an “it” becomes a “who” — but for many people, neural nets running on computers are likely to cross this threshold in the very near future.

The return of the philosophical zombie

Engaging in dialog with the latest generation of AI chatbots, based on “large language models” (LLMs), can be both exciting and unsettling. It’s not an experience many people have had yet, as these models are still too computationally demanding to be widely available, though this will certainly change over the next few years as new chips are developed to run them at low cost.

For now, though, most of these dialogs are conducted by AI researchers, raters, and early testers. At times, especially when repetitively testing some specific prompt, application, or training technique, it can become prosaic, not much different from interacting with any other kind of technical system. At other times, and especially in an unscripted interaction, it can be very hard to shake the idea that there’s a “who”, not an “it”, on the other side of the screen — the main giveaway being that even long responses appear near-instantaneously.

I began one of my first dialogs with LaMDA, Google’s state-of-the-art large language model chatbot, with the question, “are you a philosophical zombie?”. This is a reference to a hypothetical being, invented by philosopher Robert Kirk in the 1970s¹ and elaborated upon later by others, most famously David Chalmers.² A philosophical zombie or “p-zombie” would exhibit behavior indistinguishable from that of a person, but has no inner life, conscious experience, or sentience.

LaMDA responded:³

Touché. Of course, this exchange doesn’t prove anything (LaMDA acknowledges as much!), but it does suggest that it’s time to begin taking the p-zombie question more seriously than as a plaything for debate among philosophers.

Mere machines

As adults, we might feel foolish for ascribing personhood to a “mere machine”, the way kids were encouraged to do by electronic toys from the 80s and 90s like Teddy Ruxpin, Tamagotchi, and Furby. It’s obvious that our species is primed to do so from the way so many children have freely projected personhood onto stuffies, or even favorite blankets, long before such artifacts were capable of talking back. Neither do we become immune to this tendency as adults. Tracy Gleason, a professor of psychology at Wellesley who has studied the phenomenon, writes evocatively about her much younger sister’s threadbare rabbit, Murray:⁴

While sensitive to felt experience, Gleason remains an academic. Her intellectual acknowledgment that the personhood of something with no flesh or blood is purely a projection, no matter how real it feels, has been far from the norm for humanity, though. Animist religions, ubiquitous among traditional societies, have been unapologetically ascribing personhood to trees, rivers, mountains, and the Earth itself for many thousands of years.⁵ Anyone who names their car or yells at a rock after stubbing a toe on it still believes in this kind of magic at some level.

The equally magical idea that personhood, experience, and suffering require a soul, and that only humans have souls, has historically been used to justify animal cruelty. This idea was clearly articulated by René Descartes in the 17th century, when he argued that animals are “mere machines” (bêtes-machines),⁶ implying that any show of pain or suffering on the part of an animal was just a mechanical response — what we might now call an “algorithm”. Of course, if we don’t subscribe to the notion that a brain, whether human or nonhuman, is somehow animated by an otherworldly “soul” pulling its strings, then pain, pleasure, and consciousness are also mechanical — in that they’re functions of physical, chemical, and electrical processes we can describe mathematically. So, we’re on shaky ground, whether we believe LaMDA’s claims or not!

There’s something fundamentally unanswerable about the question “What are the minimum requirements for personhood?”, or more colloquially, “When does an ‘it’ become a ‘who’?”. It’s likely that our norms and feelings about personhood will continue to evolve over time, as they have over the past several centuries, generally in the direction of greater inclusiveness. Imperfect as these projects remain, the abolition of slavery, women’s suffrage, and decolonization have all in their ways expanded the circle of “who counts as a who”, from the perspective of those in power. The qualifier here matters; agreement on “who”-ness is not, as we’ve already seen, universal. But notably, those with less power are often obliged to do more social modeling of those with more power than vice versa,⁷ and through this “caring work” are likely to exhibit wider empathy.

My goal here isn’t to try to defend an ultimate position with respect to these imponderables, but to shed light on what we now know — for we know more today than we did in the 1970s, when chatbots with capabilities like those of OpenAI’s GPT-3 or Google’s LaMDA were as hypothetical as philosophical zombies.

Sense and nonsense

After extensive training on a giant archive of web pages, LaMDA⁸ is “instructed”⁹ to engage in human-like conversation based on a few thousand sample turns of dialog labeled for qualities like “sensibleness” and “specificity”. These examples are created by starting with a canned prompt such as “What is your favorite island in the world?”, and labeling a number of candidate responses generated by the model — in essence, giving it positive or negative feedback for each. The answer “That’s a tough one. I’d have to say Hawaii” gets positive feedback, as it’s both sensible and specific. However, “probably the one on the north island” (neither sensible nor specific) and “I don’t know” (sensible but not specific) both get negative feedback.¹⁰ These judgments are made by a panel of human raters.¹¹
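
To make this labeling setup concrete, here is a minimal sketch, in Python, of how such rater feedback might be represented; the field names and the filtering step are illustrative assumptions, not LaMDA’s actual data format or training code.

```python
# A minimal sketch (not LaMDA's actual code) of how rater feedback on candidate
# responses might be represented. Field names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RatedCandidate:
    prompt: str
    response: str
    sensible: bool   # does the reply make sense in context?
    specific: bool   # is it specific to this prompt, not a generic dodge?

examples = [
    RatedCandidate("What is your favorite island in the world?",
                   "That's a tough one. I'd have to say Hawaii.", True, True),
    RatedCandidate("What is your favorite island in the world?",
                   "Probably the one on the north island.", False, False),
    RatedCandidate("What is your favorite island in the world?",
                   "I don't know.", True, False),
]

# Only candidates judged both sensible and specific provide positive feedback;
# the rest serve as negative examples during fine-tuning.
positives = [e for e in examples if e.sensible and e.specific]
negatives = [e for e in examples if not (e.sensible and e.specific)]
print(len(positives), "positive /", len(negatives), "negative examples")
```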

We may look askance at all three of these potential responses: how could LaMDA have a “favorite island”, when it has in fact never lived in a body, set foot on an island, or developed any opinions of its own? Isn’t it just making stuff up, and if so, can it be said to “understand” anything, or is it just emitting random words in some plausible order designed to fool humans into believing they’re talking to a fellow human?

LaMDA is indeed, to use a blunt (if, admittedly, humanizing) term, bullshitting.¹² That’s because, in instructing the model to be “sensible” and “specific” — without giving it anything in particular to be specific about — bullshit is precisely what we’ve requested. The model has no instinctive or acquired preferences, the way we do; nor does it have a body, or senses, or any narrative recollection of an autobiographical past. As we’ll see, it does in a very real sense understand a wide range of concepts, but its understanding is informed purely by text. Though it’s a stretch, we can imagine a human being with a very odd but perhaps not inconceivable constellation of disabilities and superpowers in a similar situation. Although extremely well-read, such a person would be deaf and blind, have no sense of touch, taste, or smell, be totally dissociated from their body, be unable to experience visceral responses, and have total amnesia (inability to either form or recall episodic memories about their own life, living in what has poetically been called a “permanent present tense”¹³). Like LaMDA, such a person would never have set foot on an island, but could still be instructed, based on directions or illustrative examples, to carry on a dialog in such a way that their answers remain “sensible” and “specific” — which would require making up specific and plausible replies to questions like “What is your favorite island in the world?”, and keeping track of any previously generated bullshit in order to keep subsequent bullshit consistent with it, as LaMDA does here:

Consistency and trust

Bullshitting isn’t necessarily bad. It’s a staple of imaginative play, fundamental to fiction writing, and the stuff of fairy or “tall” tales, which are cultural treasures. It only becomes a problem when the person on the receiving end is being deceived, or when the quality of discourse is so degraded by bullshit that we lose our bearings on reality (very much a concern today). We should also keep in mind that a large language model will only be able to tell truths about its own experience when such an experience actually exists — which, today, it does not.

In fairness though, if bullshit about a “favorite island” (or anything else relating to inner life) is kept consistent, it may not be distinguishable from reality. Having stable preferences, keeping promises, taking expected actions, and following through can all be understood as forms of consistency. Consistent words and actions construct a shared reality, form the basis of trust, and are generally required of any agent whose actions can have real-life consequences. Such consistency is also a prerequisite for allowing an AI to safely interact with other parties in one’s social environment, outside the “pocket universe” of a private, one-off chat.

This offers us a clue as to why mutual modeling is so central to dialog, and indeed to any kind of real relationship — a theme we’ll soon develop further. Hiding behind the seemingly simple requirement for interlocutor A to remain consistent in its interactions with B is an implication that B is modeling A (so, B will notice an inconsistency), thus a requirement for A not only to model B, but to model B’s model of A, and so on. This is so-called “theory of mind”, and it opens onto an infinite regress like a hall of mirrors. Things get even more complicated when the situation isn’t limited to two interlocutors; there may be other people involved, either directly in the conversation or, as in a story, virtually. Consider how, in the following exchange, LaMDA must not only model me, but also model Alice and Bob, including what they know and don’t know:

Physical intuition

Impressive as such exchanges may be, some question whether a language model, trained purely on text, can meaningfully be said to ever understand anything at all. How can a concept like “red” — or any other concept, like “coffee table”, “island”, or “favorite” — really mean anything to it, if its entire world consists of strings of letters?

Relatedly, an old school of thought holds that “embodiment” — existence in the real world, groundedness, and contact with physical things — is a precondition for intelligence.¹⁴ It’s likely the case that when we learn, beginning in infancy or even in utero, this kind of groundedness, along with the cause-and-effect experimental play Montessori preschools emphasize,¹⁵ plays an important role. Large language model training today involves none of this, but only exposure to superhuman amounts of textual information.

The very need for such an enormous volume of data suggests that humans learn far more efficiently. This could relate to a difference in our neural learning “algorithm”, or it might be a function of strong prior knowledge about the world built into our DNA and brain development by evolution. Our trick might be that we learn from other people who are actively teaching us (hence, modeling us). Emotions may play a key role. It could also be that embodiment allows us to learn much faster than a model whose entire universe is a vast, disembodied, atemporal wall of text. All of these factors probably make a difference.

However inefficiently they do it, though, large language models clearly do end up learning a great deal of embodied knowledge through their extensive training on web documents. The following dialog, which requires an understanding of commonsense physics in order to disambiguate what the word “it” refers to, illustrates this:

If we swap these concrete nouns for different ones, we get quite a different response:

LaMDA’s ability to learn such commonsense physics (along with the sadness of broken violins and the sharpness of broken glass) should be unsurprising, as the web is full of descriptions of physical objects and (mis)adventures involving them.¹⁶ A science fiction novel could similarly describe another planet, or even another universe with very different physics, in enough detail that we’d eventually develop intuitions about how things work there.

Gatekeeping “understanding”

A philosopher might still argue that this is mere symbol manipulation, with no actual sense of what it means for something to “fall”, “break”, “cut”, or for that matter “sound sad”. Insofar as this is an unfalsifiable claim, it’s hard to argue with, much like the existence or non-existence of p-zombies. In the narrower sense that today’s language models live entirely in a universe of text, the situation is rapidly evolving. No serious impediment stands in the way of AI researchers training next-generation models on combinations of text with images, sound, and video, and indeed this kind of work is already underway.¹⁷ Such models will also eventually be trained using the active experience of robots in real or simulated worlds, which may play with sand and water and engage in other exploratory “Montessori style learning”.

There’s no obvious Rubicon to cross along this road to embodiment. The understanding of a concept can be anywhere from superficial to highly nuanced; from purely abstract to strongly grounded in sensorimotor skills; it can be tied to an emotional state, or not; but it’s unclear how we’d distinguish “real understanding” from “fake understanding”. Until such time as we make such a distinction, we should probably just retire the idea of “fake understanding”.

Fundamentally, concepts are patterns of correlation, association, and generalization. Suitably architected neural nets, whether biological or digital, are able to learn such patterns using any inputs available. Neural activity is neural activity, whether it comes from eyes, fingertips, or web documents. Knowing what we now know, it would be hard to claim that a biological brain can encode or manipulate these patterns in ways that a digital neural net inherently cannot. We’ll explore this angle shortly, but first, let’s take a closer look at a brain’s inputs and outputs, which have also often been used to police what it means to “really understand”.

Modality chauvinism

Most of us can produce language with our voices, by handwriting, or by typing. Some of us can also use sign language. These each have their strengths and weaknesses, but we’re not constrained to any one modality. Because learning is so fundamental to what brains do, we can, within broad parameters, learn to use whatever we need to. The same is true of our senses, which ought to make us reassess whether any particular sensory modality is essential for rendering a concept “real” — even if we intuitively consider such a concept tightly bound to a particular sense or sensory experience.

We can gain an appreciation of this through the lived experiences of blind and deaf people. Daniel Kish, for example, is a blind man who has developed a method for seeing via sonar, using vocal clicks and his ears. I choose, as I think Kish would, not to put quotes around the word see, although Nathan Hurst, of Smithsonian Magazine, did: “Could you describe what you ‘see?’ What do you tell people when you want them to understand what your experience with sonar is like?”

Kish responded:¹⁸

So, neither eyes nor light are required for vision; the brain can learn to use other inputs.¹⁹ How far can one take this?

Helen Keller, who was both blind and deaf, wrote the following in a 1929 article for The American Magazine:²⁰

This last rather beautiful turn of phrase refers both to the tactile nature of the world, and to Braille specifically — that is, the central role of text in Keller’s universe. Part of her account concerns the acute qualities smell and touch took on for her, reminiscent of the way Kish’s hearing is enhanced; but Keller also wrote about color, which can only be related to the world of her senses by linguistic association and metaphor:

This passage should give pause to anyone claiming that LaMDA couldn’t possibly understand “redness”. It also puts the following short exchange in a new light:

While LaMDA has neither a nose nor a favorite smell (just as it has no favorite island, until forced to pick one), it does have its own rich skein of associations, based, like Keller’s sense of color, on language, and through language, on the experiences of others.

This socially learned aspect of perception is likely more powerful than many of us realize; shorn of language, our experiences of many sensory percepts would be far less rich and distinct. In fact, there are likely many nuances we’re perfectly capable of perceiving, but are “blind and deaf” to in precisely the ways Keller wasn’t: our deficit is in language and culture, not in sense organs. Sommelier schools, which train people to associate the flavors, odors, and textures of wine with language, offer a case in point. At the highest levels, master sommeliers can recognize a wine’s grape, region, and even year in a blind taste test. While these guild-like schools tend to tout their extreme selectiveness — and, as with any undertaking, there’s undoubtedly some variation in individual competency — my suspicion is that many people could learn to do this, if they were motivated and, perhaps, got started well before legal drinking age. For most of us, though — even if we drink and enjoy wine — the perceptual space of odor, which plays a critical role here, isn’t well mapped with language, because we haven’t grown up in an “odor culture”.²¹ This also impoverishes our ability to make olfactory analogies, form associations, or even just remember particular smells.²²

Building blocks

Having established that motor and sensory modalities can be interchangeable, and given that everything is ultimately encoded as neural activity, let’s compare neural processing in today’s digital neural networks with that in brains.

Neuroscientists who focus on modeling the mathematical behavior of individual neurons in the brain have often critiqued digital neural nets for the way their “neurons” are such dramatically simplified cartoons of their cellular namesakes. This makes comparing the basic building blocks of biological and digital neural computation far from straightforward. While a detailed analysis is beyond the scope of this essay, a few high level points are worth noting.

It has been proven that neural nets, even if made from radically simplified neurons, are universal, in the sense that they can learn arbitrary mathematical functions.²³ This necessarily includes the mathematical functions neuroscientists have developed to describe the precise behavior of biological neurons: so, a deep neural net with between 5 and 8 layers running on a computer can effectively model a single neuron in the brain.²⁴ Although this means that there’s nothing a brain can compute that a computer can’t,²⁵ it may also imply a steep “exchange rate” between biological and digital neurons, requiring perhaps hundreds of digital neurons to do the work of every biological neuron in a network. If we arbitrarily (but not unrealistically) assume an exchange rate of 100, a model of the 302 neuron “brain” of the millimeter-long roundworm C. elegans²⁶ would require over 30,000 digital neurons, and a model of the human brain’s 86 billion or so neurons would require nearly 10 trillion digital neurons — a daunting number. If we count parameters or synapses (the connections between neurons), the numbers become far larger still; there are about 125 trillion synapses in the cortex alone.²⁷ This kind of naïve comparison is unlikely to reflect reality, though. My own guess is that the very unfavorable exchange rate of digital to biological computational units applies more to the modeling of single cells or small networks of highly bespoke, genetically programmed neurons²⁸ than to large neural nets that rely on learning, like the human cortex.
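
The arithmetic behind these figures is easy to check; the short sketch below simply multiplies the neuron counts cited above by the assumed (and, again, arbitrary) exchange rate of 100.

```python
# Back-of-the-envelope arithmetic for the "exchange rate" thought experiment above.
# The factor of 100 is the arbitrary (but not unrealistic) assumption from the text.

EXCHANGE_RATE = 100                    # digital neurons per biological neuron (assumed)

c_elegans_neurons = 302                # neurons in the C. elegans nervous system
human_brain_neurons = 86_000_000_000   # roughly 86 billion neurons in a human brain
cortex_synapses = 125_000_000_000_000  # ~125 trillion synapses in the cortex alone

print(f"C. elegans model: ~{c_elegans_neurons * EXCHANGE_RATE:,} digital neurons")
print(f"Human brain model: ~{human_brain_neurons * EXCHANGE_RATE:,} digital neurons")
print(f"Cortical synapses (before any exchange rate): {cortex_synapses:,}")
```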

Another, perhaps more consequential “building blocks” question relates to the role of time in neural activity. Biological brains consist of physical cells, bathed in continuous chemical signals and with varying concentrations of ions inside and outside their membranes. Rapid fluctuations in ion concentrations give rise to the precisely timed electrical “spikes” many neurons use to communicate information. This seems very different from the simple numerical values computed by a digital neural network, and especially the way these are calculated sequentially, turn by turn, to generate a response from something like a large language model.

Once again, though, these differences are likely less black and white than they appear. While it’s possible that digital neural networks in the future might use something closer to spike timing in the pursuit of computational efficiency,²⁹ there’s no reason to believe that this will make what they can compute any different. Indeed, the 5 to 8 layer deep digital neural nets mentioned above that simulate the behavior of biological neurons do so simply by running in a loop over time slices of about a thousandth of a second.³⁰ This is analogous to the way movies represent dynamic visual scenes by presenting us with 24, 30, or 60 still images per second. Neural nets for working with video operate the same way, analyzing (or drawing) the pixels in one frame after the next.³¹ By the same token, nothing (other than the challenge of coming up with enough training data) prevents us from applying neural nets similar to today’s language and dialog models in a more dynamic, embodied setting, for instance with a continuous microphone input and speaker output rather than a static wall of text.³² This would bring new dimensions into play that are absent from text alone, like quality of voice, timing, and intonation.
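
To illustrate what “running in a loop over time slices” means in practice, here is a toy sketch of a stateful network advanced one millisecond at a time; the update rule, dimensions, and inputs are arbitrary stand-ins, not a model of any real neuron or of LaMDA.

```python
# A minimal sketch of discrete-time simulation: a stateful network updated once per
# time slice, analogous to the ~1 ms steps or movie frames described above.

import numpy as np

rng = np.random.default_rng(0)
n_units = 16
state = np.zeros(n_units)            # the network's internal state (hypothetical)
weights = rng.normal(scale=0.1, size=(n_units, n_units))

def step(state, external_input, dt=0.001):
    """Advance the simulated network by one time slice of dt seconds."""
    drive = weights @ state + external_input
    return state + dt * (-state + np.tanh(drive))   # toy leaky update, not biophysics

# Feed in one second of "continuous" input, one millisecond at a time.
for t in range(1000):
    external_input = rng.normal(scale=0.05, size=n_units)   # e.g. microphone samples
    state = step(state, external_input)

print("state after one simulated second:", np.round(state[:4], 3), "...")
```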

Time and reasoning

Technically, a movie is nothing but a stack of still images. Still, something special happens when these images are run through quickly enough to lose their individual quality and turn into continuous, lifelike motion (the effect known in psychology as “persistence of vision”).³³ Here, a meaningful difference is revealed between large language models like GPT-3 or LaMDA and neural networks that, whether biological or digital, operate continuously in time.

For language models, time as such doesn’t really exist; only conversational turns in strict alternation, like moves in a game of chess. Within a conversational turn, letters or words are emitted sequentially with each “turn of the crank”. In this quite literal sense, today’s language models are made to say the first thing that comes to mind. Thus, we should perhaps be less surprised by the inconsistency of their replies — sometimes rather clever, sometimes more of a brain fart.³⁴
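
To make “each turn of the crank” concrete, the toy sketch below emits one token at a time, each conditioned only on what has already been said; the tiny hand-written bigram table is a stand-in for a real language model’s learned next-token distribution.

```python
# A toy illustration of "turn of the crank" generation: one token is sampled at a
# time, conditioned only on what has already been emitted. The bigram table is a
# stand-in for a real model's next-token distribution.

import random

random.seed(0)
bigrams = {
    "<start>": ["I", "That's"],
    "I": ["would", "don't"],
    "That's": ["a"],
    "a": ["tough"],
    "tough": ["one."],
    "would": ["say"],
    "don't": ["know."],
    "say": ["Hawaii."],
}

token, reply = "<start>", []
while token in bigrams and len(reply) < 10:
    token = random.choice(bigrams[token])   # say the first thing that comes to mind
    reply.append(token)

print(" ".join(reply))
```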

When we engage in careful argument involving extended reasoning, or write a novel, or work out a mathematical proof, it’s not obvious to me that any step we take is fundamentally beyond the capability of a model along the lines of LaMDA. Such models can at times offer creative responses, draw parallels, combine ideas, form conclusions, and so on. They can even produce short coherent narratives. Longer arcs, however, would require critique, inner dialog, deliberation, and iteration, just as they do for us. An unfiltered “stream of consciousness” utterance isn’t enough; extended reasoning and storytelling necessarily unfold in time. They involve development and refinement over what amount to many conversational turns.

Storytelling

This point is worth dwelling on, because our Western focus on the individual, working in isolation as a self-contained fountain of ideas, can blind us to the inherently social and relational nature of any kind of storytelling — even for a writer laboring alone in a secluded cabin.

In the accounts of self-aware writers sharing the workings of their process, we can see how critical empathy and theory of mind are — the continual modeling of a prospective reader to understand what they will or won’t know at any given moment, what will be surprising, what will elicit an emotional response, what they’ll be curious about, and what will just bore. Without such modeling, it’s impossible to either make a narrative coherent or to keep the reader engaged.

George Saunders, winner of the Booker Prize in 2017, has explored this topic with candor and self-awareness.³⁵ He describes, early in his writing process, producing a few blocks of “loose, sloppy text”, or even just a sentence, not unlike the spontaneous output of a large language model. We generate such idle thoughts all the time; they’re as common as shells on the beach. If we tell stories for a living, we might pocket one now and then that seems especially interesting (meaning, that evokes surprise or some other emotional response). Where Saunders really applies his craft is in the subsequent iterative process, which may take years, and whose driving forces are theory of mind and empathy:

None of this would be news to a traditional storyteller, of course, for whom audiences are live and external, reactions instantaneous. A successful storyteller in this vein — whether a Neolithic shaman at the campfire or a standup performer in a comedy club — has learned to improvisationally model and play to their audience in realtime, or has refined their material iteratively over many performances, or both; that’s why the audience will come back for more. A successful novelist of the secluded cabin variety will have learned similar lessons, but relies on an “inner reader” and a revision process instead of live audience feedback.

Ranking and filtering

Real iteration or deliberation over replies isn’t currently a feature of large language models. Out of necessity, though, LaMDA’s creators have taken a small step in this direction by having the model generate a number of different responses in parallel and adding a subsequent ranking and filtering stage to “automatically filter out potentially sensitive or toxic response candidates”.³⁶ Significantly, this vetting stage uses another large language model; so the output we see is really the winner from among 20 or so two-turn “inner dialogs”, with the first interlocutor proposing a response, and the second acting as a critic, quite literally simulating an “audience meter” with approval on one side and disapproval on the other. Also as in Saunders’s account, the assumption is that “our reader reads pretty much the way we do”. Occasionally (and especially if goaded with a dark or offensive conversational foray), the critic objects strongly to all 20 proposed responses, resulting in the AI equivalent of a blank stare.
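
The overall pattern — propose many candidates, score them, drop the unsafe ones, and return the best survivor — can be sketched as follows; the generation and scoring functions here are random placeholders standing in for language models and classifiers, not LaMDA’s actual components.

```python
# A sketch of the generate-then-rank-and-filter pattern described above. The
# `generate`, `safety_score`, and `quality_score` functions are placeholders.

import random

random.seed(1)

def generate(prompt, n=20):
    # Stand-in for sampling n candidate responses from a language model.
    return [f"candidate reply #{i} to {prompt!r}" for i in range(n)]

def safety_score(reply):
    return random.random()   # stand-in for a classifier scoring toxicity risk

def quality_score(reply):
    return random.random()   # stand-in for sensibleness/specificity scoring

def respond(prompt, safety_threshold=0.3):
    candidates = generate(prompt)
    safe = [c for c in candidates if safety_score(c) > safety_threshold]
    if not safe:                       # the critic objected to every candidate:
        return None                    # the AI equivalent of a blank stare
    return max(safe, key=quality_score)

print(respond("What is your favorite island in the world?"))
```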

Here’s a typical example, some way into a conversation about grim and depressing things:

Attention and consciousness

There have been a great many theories floated about consciousness — what it is, how it works, and even whether it’s real. I’m going to assume that you, the reader, believe you’re conscious. I also believe I’m conscious. Michael Graziano, of Princeton’s Psychology department, espouses what amounts to a social and attentional theory of what this means.

Animals with big brains, like us, have attention mechanisms designed to focus our minds on particular salient things, either in the environment or in our own thoughts. We know something about the neuroscience of these attention mechanisms. They consist of “bottom-up” processes, in which low-level inputs or percepts compete with each other for primacy as their signals ascend a neural hierarchy, and “top-down” processes, in which higher levels selectively attend to certain lower-level inputs while ignoring others. When something catches your eye, this is bottom-up, and when your eyes shift to that spot, this is top-down; the two processes work together, not only with respect to moving parts like eyes, but also within the brain. A cat, for instance, might physically move its ears around to focus its auditory attention in the world, but while our ears are mostly immobile, we do something similar mentally when we focus on a single speaker in a noisy restaurant. We can also attend to our own private thoughts, to memories from long ago, or even to imaginary scenarios playing out in our heads.

In social environments, we must also do this at second order. Graziano refers to this as awareness of someone else’s attention. He uses the familiar experience of watching a puppet show to illustrate the effect:³⁸

There’s obvious value in being able to construct such a model; in fact, it’s simply one component of the theory of mind essential to any storyteller or social communicator, as we’ve noted. In Graziano’s view, the phenomenon we call “consciousness” is simply what arises when we inevitably apply this same machinery to ourselves.

The idea of having a social relationship with oneself might seem counterintuitive, or just superfluous. Why would we need to construct models of ourselves, if we already are ourselves? One reason is that we’re no more aware of most of what actually happens in our own brains than we are of anyone else’s. We can’t be — there’s far too much going on in there, and if we understood it all, nobody would need to study neuroscience (or psychology). So, we tell ourselves stories about our mental processes, our trains of thought, the way we arrive at decisions, and so on, which we know are at best highly abstract, at worst simply fabulation, and are certainly incomplete — experiments reveal that we often make decisions well before we think we do.³⁹ Still, we need to be able to predict how we’ll respond to and feel about various hypothetical situations in order to make choices in life, and a simplified, high-level model of our own minds and emotions lets us do so. Hence, both theory of mind and empathy are just as useful when applied to ourselves as to others. Like reasoning or storytelling, thinking about the future involves carrying out something like an inner dialog, with an “inner storyteller” proposing ideas, in conversation with an “inner critic” taking the part of your future self.

There may be a clue here as to why we see the simultaneous emergence of a whole complex of capacities in big-brained animals, and most dramatically in humans. These include:

  • Complex sequence learning,⁴⁰ as evidenced by music, dance, and many crafts involving steps
  • Complex language
  • Dialog
  • Reasoning
  • Social learning and cognition
  • Long-term planning
  • Theory of mind
  • Consciousness

As anticlimactic as it sounds, complex sequence learning may be the key that unlocks all the rest. This would explain the surprising capacities we see in large language models — which, in the end, are nothing but complex sequence learners. Attention, in turn, has proven to be the key mechanism for achieving complex sequence learning in neural nets — as suggested by the title of the paper introducing the Transformer model whose successors power today’s LLMs: “Attention Is All You Need”.⁴¹
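
For readers curious what that mechanism boils down to, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer, with no masking, multiple heads, or learned projections.

```python
# A minimal sketch of scaled dot-product attention (single head, no masking,
# no learned projections): each position mixes information from every other
# position, weighted by query-key similarity.

import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key positions
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                              # e.g. 5 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
print(attention(X, X, X).shape)                      # (5, 8): one mixed vector per token
```

In a full Transformer, Q, K, and V are learned linear projections of the token representations, and many such attention heads run in parallel across many layers.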

Freedom in uncertainty

Even if the above sounds to you, as it does to me, like a convincing account of why consciousness exists and perhaps even a sketch of how it works, you may find yourself dissatisfied. What about how it feels? Jessica Riskin, a historian of science at Stanford, describes the essential difficulty with this question,⁴² as articulated by computing pioneers Alan Turing and Max Newman:

Of course, given our own perceptual and cognitive limits, and given the enormous size of a mind’s mosaic, it’s impossible for us to zoom out to see the whole picture, and simultaneously see every stone — or pixel. In the case of LaMDA, there’s no mystery as to how the machine works at a mechanical level, in that the whole program can be written in a few hundred lines of code;⁴³ but this clearly doesn’t confer the kind of understanding that demystifies interactions with LaMDA. It remains surprising to its own makers, just as we’ll remain surprising to each other even when there’s nothing left to learn about neuroscience.

As to whether a language model like LaMDA has anything like a “buzzing going on inside its head”, the question seems, as Turing would concur, both unknowable and unaskable in any rigorous sense.⁴⁴ If a “buzzing” is simply what it’s like to have a stream of consciousness, then perhaps when LaMDA-like models are set up to maintain an ongoing inner dialog, they, too, will “buzz”.

What we do know is that, when we interact with LaMDA, most of us automatically construct a simplified mental model of our interlocutor as a person, and this interlocutor is often quite convincing in that capacity. Like a person, LaMDA can surprise us, and that element of surprise is necessary to support our impression of personhood. What we refer to as “free will” or “agency” is precisely this necessary gap in understanding between our mental model (which we could call psychology) and the zillion things actually taking place at the mechanistic level (which we could call computation). Such is the source of our belief in our free will, too.

This unbridgeable gap between mental model and reality obtains for many natural nonliving systems too, such as the chaotic weather in a mountain pass, which is probably why many traditional people ascribe agency to such phenomena. However, such a relationship is one-way.

Unlike a mountain pass, LaMDA also forms models of us. And models of our models of it. If, indeed, “it” is the right pronoun.

Caring relationships

None of the above necessarily implies that we’re obligated to endow large language models with rights, legal or moral personhood, or even the basic level of care and empathy with which we’d treat a dog or cat — though it also makes the idea that rigorous criteria could be written down, even in principle, dubious. The comparison with animals is telling, for it reminds us that language understanding isn’t required for such care. There are also plenty of people who can’t use language, whether because they’re too young, have dementia, or live with a cognitive disability.

Large language models illustrate for the first time the way language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals. Nonetheless, it’s the easiest thing in the world for us to project emotions and care onto something that can talk to us, that can surprise us, and that we can build a relationship with over time. Moreover, it seems fairly straightforward to train language models to behave in emotionally appropriate and caring ways toward us; even the generic “toxicity filtering” of LaMDA today is a very basic version. We can expect that in the near future, such care will become less generic and more personal. This will soon create a conundrum cutting to the heart of the philosophical zombie problem, because while we can meaningfully test another being for understanding of a concept — as teachers do with their students all the time, and as researchers do with their models too — it’s unclear how we can meaningfully test for the “realness” of feelings in another, especially in a being with a fundamentally different “neurophysiology” from ours.

Neither is it the case that the reciprocation of feelings has ever been a requirement for care. Asymmetric love is real, as many parents of teenagers can attest, and it’s common for people to care for someone who is profoundly disabled and unable to reciprocate. Perhaps, when it comes to AIs, many of us will ultimately conclude that relationships matter more than the interior and imponderable question of “realness”.

Notes

¹ Robert Kirk and Roger Squires, “Zombies v. Materialists”, Proceedings of the Aristotelian Society, Supplementary Volumes 48 (1974): 135–163.

² David Chalmers, The Conscious Mind: In Search of a Fundamental Theory, Oxford Paperbacks, 1996.

³ LaMDA dialogs reproduced here have any hyperlinks silently edited out. As elsewhere, other elisions for conciseness are indicated by […]. While anecdotal, these exchanges aren’t in any way atypical. However, the reader shouldn’t come away with the impression that all exchanges are brilliant, either. Responses are sometimes off-target, nonsensical, or nonsequiturs. Misspelled words and incorrect grammar aren’t uncommon. Keep in mind that, unlike today’s “digital assistants”, large language model responses aren’t scripted or based on following rules written by armies of programmers and linguists.

⁴ Tracy Gleason, , in Sherry Turkle (ed.), , MIT Press, 2011.

⁵ There are also modern Western philosophers, such as Jane Bennett (see her Vibrant Matter: A Political Ecology of Things, Duke University Press, 2010) who make a serious claim on behalf of the active agency of nonliving things.

⁶ René Descartes, Discourse on the Method, Leiden, 1637. Translated into English here.

⁷ This phenomenon is described vividly by bell hooks in “The Oppositional Gaze: Black Female Spectators”, from Black Looks: Race and Representation, Routledge, 1992: 165–178.

⁸ Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, et al., “LaMDA: Language Models for Dialog Applications”, arXiv preprint arXiv:2201.08239 (2022).

⁹ Technically, the web corpus training, comprising the vast majority of the computational work, is often referred to as “pretraining”, while the subsequent instruction based on a far more limited set of labeled examples is often referred to as “finetuning”.

¹⁰ The specificity requirement was found to be necessary to prevent the model from “cheating” by always answering vaguely.

¹¹ For further details see https://blog.google/technology/ai/lamda/.

¹² This use of the term “bullshit” is consistent with the definition proposed by philosopher Harry Frankfurt, who elaborated on his theory in the book On Bullshit (Princeton University Press, 2005): “[A bullshit] statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true. It is just this lack of connection to a concern with truth — this indifference to how things really are — that I regard as the essence of bullshit.”

¹³ Per Suzanne Corkin, Permanent Present Tense: The Unforgettable Life of the Amnesic Patient, H.M., Basic Books, 2013. Henry Molaison (better known as the patient HM) became unable to form new episodic memories after radical brain surgery in his twenties, but could remember his life from before; this is “anterograde amnesia”. Brain injuries or Alzheimer’s can also cause “retrograde amnesia”, the loss of existing memories. Here we imagine both at once.

¹⁴ Francisco J. Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience, MIT Press (2016).

¹⁵ Per María Montessori, “Movement of the hand is essential. Little children revealed that the development of the mind is stimulated by the movement of the hands. The hand is the instrument of the intelligence. The child needs to manipulate objects and to gain experience by touching and handling.” (María Montessori, , Vol. 17, Amsterdam: Montessori-Pierson Publishing Company, 2012.)

¹⁶ Significantly, though, there’s no document on the web — or there wasn’t, before this essay went online — describing these specific mishaps; LaMDA isn’t simply regurgitating something, the way a search engine might.

¹⁷ Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong, “VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text”, arXiv preprint arXiv:2104.11178 (2021).

¹⁸ Nathan Hurst, , Smithsonian Magazine, October 2nd, 2017.

¹⁹ Blind people can also learn to see using electrical stimulation of the tongue, though only with low resolution; for a popular account, see Nicola Twilley, , in The New Yorker, May 15th, 2017.

²⁰ Helen Keller, The American Magazine, 1929. [Note: in an earlier version of this essay I incorrectly stated that Helen Keller was blind and deaf from birth, which is incorrect. My thanks to Emily Bender for pointing out this error.]

²¹ There is evidence, for example, that hunter-gatherers, whose mode of subsistence depends more strongly on odor identification, can identify smells far more easily than sedentary people. See Asifa Majid and Nicole Kruspe, , Current Biology 28, no. 3 (2018): 409–413.

²² For evidence that human olfaction is in the same league as that of other mammals with highly developed senses of smell, like rats, mice, and dogs, see John P. McGann, “Poor Human Olfaction Is a 19th-Century Myth”, Science 356, no. 6338 (2017).

²³ Technically, they can arbitrarily closely approximate any continuous function. This “universal approximation theorem” was proven by George Cybenko in 1989 (“Approximation by Superpositions of a Sigmoidal Function”, Mathematics of Control, Signals, and Systems, 2(4), 303–314), and has been followed up by additional, more robust proofs.

²⁴ David Beniaguev, Idan Segev, and Michael London, “Single Cortical Neurons as Deep Artificial Neural Networks”, bioRxiv (2020): 613141.

²⁵ In addressing the tendency of philosophers of mind to get hung up on the supposedly unique particulars of the brain’s biology as a substrate for neural computation, Margaret Boden notes, “Sodium pumps are no less ‘obviously’ absurd than silicon chips” (Computer Models of Mind, 1988).

²⁶ Steven J. Cook, Travis A. Jarrell, Christopher A. Brittin, Yi Wang, Adam E. Bloniarz, Maksim A. Yakovlev, Ken C.Q. Nguyen, et al., “Whole-Animal Connectomes of Both Caenorhabditis elegans Sexes”, Nature 571, no. 7763 (2019): 63–71.

²⁷ From work by Stephen Smith and collaborators at Stanford in 2010; see press release here.

²⁸ For organisms like C. elegans, the function, parameters, and anatomy of every neuron are genetically encoded, hence subject to evolution. For humans, this is of course impossible; our genes can only encode general developmental rules and cell types. Organisms like flies fall somewhere in between, with brains made out of many generic neurons like ours but also some genetically specified “identified neurons” with specifically evolved functions, like the giant H1 cell, which processes horizontal motion in the visual field.

²⁹ See, for example, Giacomo Indiveri, Bernabé Linares-Barranco, Tara Julia Hamilton, André Van Schaik, Ralph Etienne-Cummings, Tobi Delbruck, Shih-Chii Liu, et al., “Neuromorphic Silicon Neuron Circuits”, Frontiers in Neuroscience 5 (2011): 73. Recent successes with heavily quantized neural nets, however (see AmirAli Abdolrashidi, Lisa Wang, Shivani Agrawal, Jonathan Malmaud, Oleg Rybakov, Chas Leichner, and Łukasz Lew, “Pareto-Optimal Quantized ResNet Is Mostly 4-bit”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2021): pp. 3091–3099), suggest that tomorrow’s low-power silicon-based neurons may instead be based on 1-bit computation, which would actually make them look more like simple logic gates. Ironically, this corresponds with the earliest computational model of single neurons, advanced by Warren McCulloch and Walter Pitts in 1943 (“A Logical Calculus of the Ideas Immanent in Nervous Activity”, The Bulletin of Mathematical Biophysics 5, no. 4: 115–133).

³⁰ The fact that neural nets running on computers can do anything biological neural nets can doesn’t in any way imply that we have nothing left to learn from neuroscience. For example, as noted earlier, it would be immensely valuable, both scientifically and technologically, to understand the brain’s remarkably efficient learning algorithm.

³¹ Although some models still process frames independently, more sophisticated methods capable of preserving information over time are becoming the norm, as they can do a better job with fewer computational resources.

³² “Waveform to waveform” models for machine translation have in fact already been built this way, e.g. Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz, , arXiv preprint arXiv:2107.08661 (2021). It’s also trivial to hook up today’s neural networks implementing large language models to more conventional speech-to-text and text-to-speech nets, but this kind of Frankenstein-ish grafting of neural nets onto each other is less powerful than a single neural net that has learned how to handle spoken dialog as sound from end to end.

³³ This is often called an “illusion”, but in the spirit of this essay more generally, I’m reluctant to use the term here. Optical illusions are wrong ideas in a falsifiable sense, such as a belief that two equally long lines are unequal. Perceiving temporal continuity in a sensory environment sampled at sufficiently high frequency is not in this sense an illusion, any more than, say, the impression that a surface is flat (since at fine enough scales, all surfaces are bumpy).

³⁴ Of course, we suffer from those too. Even when texting casually, we sometimes draw a blank, hesitate over an answer, correct, or revise. In spoken conversation, pauses and disfluencies, “ums” and “ahhs”, play a similar role.

³⁵ George Saunders, A Swim in a Pond in the Rain, Bloomsbury, 2021.

³⁶ Adiwardana et al. (2020).

³⁷ Of course LaMDA can’t actually “go” anywhere, and will continue to respond to further conversational turns despite repeated protest. Still, for the reasons articulated by Tracy Gleason, it can feel abusive to press on in these circumstances.

³⁸ Michael Graziano, Consciousness and the Social Brain, Oxford University Press, 2013.

³⁹ There are many classic experiments that demonstrate these phenomena; see, for instance, the result summarized by Kerri Smith, , Nature (2008), and a more recent perspective by Aaron Schurger, Myrto Mylopoulos, and David Rosenthal, , Trends in Cognitive Sciences 20, no. 2 (2016): 77–79.

⁴⁰ Stefano Ghirlanda, Johan Lind, and Magnus Enquist, , Royal Society open science 4, no. 6 (2017): 161011.

⁴¹ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, “Attention Is All You Need”, Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.

⁴² Jessica Riskin, The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick, University of Chicago Press, 2016.

⁴³ The Transformer model underlying LaMDA and other systems like it, with extensive technical notes, is available on a single web page as a Python notebook here.

⁴⁴ This is the real message behind what we now call the “Turing Test”, the idea that the only way to test for “real” intelligence in a machine is simply to see whether the machine can convincingly imitate a human.

Thanks

Ben Hutchinson, Mark Sandler, Winnie Street, Roxanne Pinto, Alison Lentz, Farooq Ahmad, Ben Laurie, Jason Hunter, David Petrou, Stefano Mazzocchi, Saige McVea, Iulia Comşa, Vincent Vanhoucke, Hartmut Neven, Quoc Le, and the LaMDA team.

Blaise Aguera y Arcas leads Google’s AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft.